Huawei’s artificial intelligence division, Noah’s Ark Lab, has rejected allegations that its Pangu Pro Moe large language model is derived from Alibaba’s Qwen 2.5-14B. The denial came in response to a paper posted by an anonymous group called HonestAGI on the code-sharing platform GitHub.
The paper, written in English, claimed that Huawei’s Pangu Pro Moe demonstrated an “extraordinary correlation” with Alibaba’s Qwen 2.5-14B, suggesting that Huawei’s model may not have been trained from scratch. It further alleged that the model was the result of “upcycling” — a process that repurposes existing models — and raised concerns over potential copyright infringement, inaccuracies in technical documentation, and misrepresentation of Huawei’s claimed investment in model training.
Noah’s Ark Lab issued a public statement on Saturday, asserting that the Pangu Pro Moe was developed independently and not based on “incremental training of other manufacturers’ models.” The lab emphasized its “key innovations in architecture design and technical features,” and noted that it is the first large-scale model built entirely on Huawei’s proprietary Ascend chipsets.
The statement also mentioned that the team complied with all open-source license requirements for any third-party code used, though it did not specify which open-source resources, if any, were referenced.
Alibaba has yet to respond to media inquiries regarding the matter, and HonestAGI remains anonymous, with no available contact information.
Huawei initially launched its Pangu model in 2021 but has been viewed as trailing other Chinese AI leaders in recent years. In June, the company open-sourced its Pangu Pro Moe models on the Chinese developer platform GitCode, aiming to accelerate adoption by offering developers free access.
While Alibaba’s Qwen 2.5-14B, released in May 2024, is tailored more for consumer applications — including chatbot functions — Huawei’s Pangu models have largely been applied in sectors like government, finance, and manufacturing.
The release of Chinese startup DeepSeek’s low-cost open-source model R1 earlier this year has intensified competition among domestic tech giants, raising the stakes for AI leadership in the region.