Chinese artificial intelligence (AI) companies such as DeepSeek and Alibaba are rapidly gaining influence among global developers by releasing models that rival the performance of flagship systems from leading American big tech firms.
While U.S. firms have doubled down on closed strategies, Chinese companies have leapfrogged America in the global open-source AI market by leveraging open ecosystems.
According to the Economy of Open Intelligence (EOI) report, jointly released by Hugging Face and the Massachusetts Institute of Technology (MIT) in late October, Chinese open-source models captured 17.1% of global model downloads over the past year, outpacing the U.S. (15.8%) for the first time.
Hugging Face data shows that Chinese model downloads had surpassed 540 million as of October, with DeepSeek and Alibaba’s Qwen models dominating the count.

The EOI report likely underestimates China’s true market share, as it excludes most domestic Chinese data.
Chinese developers primarily rely on domestic platforms such as Alibaba’s ModelScope or Gitee, turning to virtual private networks (VPNs) only when they need to bypass government restrictions on access to Hugging Face.
This market shift suggests that developers worldwide, from the U.S. to Europe, Asia, and the Middle East, are increasingly opting for Chinese models.
DeepSeek’s V3.2 model, unveiled on Monday, exemplifies this trend. It scored 96.0% on the American Invitational Mathematics Examination (AIME) 2025 benchmark, surpassing OpenAI’s GPT-5 High (94.6%), and achieved gold medal-level results at the International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI).
Alibaba’s open-source Qwen3-Coder, released in July, approached the benchmark performance of OpenAI’s and Google’s flagship models at the time.
Kimi K2 Thinking, a model recently open-sourced by Chinese AI startup Moonshot AI, reportedly outperformed GPT-5 and Anthropic’s Claude Sonnet 4.5 on certain benchmarks.
Chinese models boast a significant edge in cost efficiency.
DeepSeek claims it trained its V3 model for just 5.576 million USD, roughly one-tenth of what competitors spend. In the appendix of a peer-reviewed Nature paper, DeepSeek disclosed that training its R1 model, which caused a stir earlier this year, cost only 294,000 USD.
The training cost for the competitive Kimi K2 Thinking model is estimated at around 4.6 million USD.
Former Google Chief Executive Officer (CEO) Eric Schmidt warns that while leading American models remain closed, their Chinese counterparts are open-source. He predicts that most cash-strapped countries will adopt free Chinese models as their standard.
Leading U.S. AI companies such as OpenAI, Google, and Anthropic have kept their flagship models closed from the outset, monetizing them through subscriptions and enterprise contracts.
Their reluctance to go open-source, despite the potential for faster development through collective intelligence, likely stems from capital concerns, chiefly investor appeal, and from security concerns about protecting research and development (R&D) investments and proprietary technology.
OpenAI CEO Sam Altman hinted at the possibility of open-sourcing models during the DeepSeek shock, but concerns over future funding likely pushed the company toward a business model focused on maintaining technological superiority.
An industry insider noted that Chinese models now match American ones in performance at a fraction of the cost, sometimes 10 to 100 times cheaper, which makes a compelling case for companies to choose them. The insider added that, by some accounts, more than half of new U.S. AI startups are opting for Chinese open-source models.