The U.S. Clears Nvidia H200 Sales: 10 Chinese Companies Approved, but Delivery Is Still Uncertain

A summary of the U.S. Commerce Department's approval for about 10 Chinese companies to buy Nvidia H200 chips: approved buyers, purchase limits, Lenovo's confirmation, pending delivery, and remaining policy variables on both sides.

The U.S. export license process for Nvidia H200 sales to China has finally made concrete progress.

According to reporting from Reuters, the U.S. Commerce Department has approved about 10 Chinese companies to buy Nvidia H200 AI chips. The approved list includes major internet companies and supply-chain firms such as Alibaba, Tencent, ByteDance, JD.com, Lenovo, and Foxconn. However, as of May 14, 2026, H200 chips had still not been delivered to the Chinese market.

This needs to be read carefully: the U.S. side has granted some licenses, but that does not mean the chips have arrived, nor does it mean Chinese companies can immediately deploy them at scale.

What Was Approved

There are three key points in this approval.

First, the U.S. Commerce Department approved about 10 Chinese companies to purchase H200 chips. According to reports, approved customers may buy directly from Nvidia or through authorized intermediaries and distributors.

Second, each approved customer may buy up to about 75,000 H200 chips. If fully delivered, this volume would significantly improve high-end GPU supply for major cloud providers and large-model companies.

Third, Lenovo has confirmed that it is among the companies granted export licenses and is permitted to sell the H200 in China. Companies like Lenovo and Foxconn are not only buyers; they may also handle server systems, rack integration, and distribution.

The most important caveat is that a license is not the same as delivery. Public reports emphasize that no H200 shipments to China have been completed yet.

Why H200 Matters

H200 belongs to Nvidia’s Hopper-generation accelerator lineup and is positioned above the H20, which was previously designed for the Chinese market. H20 was a reduced-spec product built to fit earlier export restrictions, while H200 offers stronger compute and memory capabilities.

Public information shows that H200 comes with 141GB of HBM3e memory, making it valuable for large-model training, inference, long-context services, and enterprise AI deployments. It is not Nvidia’s latest Blackwell-generation product, but for Chinese cloud providers and AI companies, it is still a high-end compute resource.
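The memory figure can be put in perspective with a back-of-envelope calculation. A minimal sketch follows; the model sizes and precisions are illustrative assumptions, not figures from the article. Holding just the weights of a 70B-parameter model in 16-bit precision takes about 140 GB, which is roughly one H200's HBM capacity (activations, KV cache, and any training state add more on top):

```python
# Back-of-envelope sketch: why 141 GB of HBM3e matters for large models.
# Model sizes and byte widths below are illustrative assumptions, not
# figures from the article.

H200_HBM_GB = 141  # H200 memory capacity cited in public reports

def weights_gb(params_billions: float, bytes_per_param: int) -> float:
    """Approximate memory (GB) needed just to hold model weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 70B-parameter model in 16-bit precision needs ~140 GB for weights
# alone, roughly one H200; 8-bit quantization halves that to ~70 GB.
for params, bits in [(70, 16), (70, 8), (7, 16)]:
    gb = weights_gb(params, bits // 8)
    print(f"{params}B @ {bits}-bit: ~{gb:.0f} GB weights "
          f"(~{gb / H200_HBM_GB:.2f} H200s)")
```

This is only a weights-count estimate, but it shows why a single 141GB card is attractive for serving mid-size models without splitting them across many GPUs.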

That is why H200 has remained sensitive in U.S.-China AI chip controls. The U.S. wants to limit China’s access to the most advanced AI compute while avoiding a complete loss of Nvidia’s China business. China, meanwhile, wants to reduce reliance on U.S. GPUs and direct more compute investment toward domestic chips and local ecosystems.

It Has Not Really Landed Yet

The easiest mistake is to read “approved to buy” as “supply has reopened.”

Based on current public information, there are still several variables:

  1. U.S. approval is only the first step; orders, review, shipment, and compliance workflows still need to continue.
  2. Whether China will allow actual import and deployment still requires clearer policy guidance.
  3. Whether approved companies place orders immediately depends on price, delivery time, domestic alternatives, and long-term policy risk.
  4. Nvidia may need to re-coordinate H200 capacity because its focus had already shifted to Blackwell and later products.

In other words, H200 sales to China now look more like an opened license window than a supply chain that is already moving chips into Chinese data centers at scale.

What It Means for Nvidia

For Nvidia, the China market remains too important to ignore.

After export restrictions tightened, Nvidia’s share in China’s high-end AI accelerator market was clearly affected. Jensen Huang has repeatedly argued that the U.S. should not casually give up the Chinese market, because doing so would hurt Nvidia’s revenue and weaken the influence of the U.S. technology ecosystem among global AI developers.

If H200 can eventually be delivered, Nvidia can partially recover Chinese customer orders and keep CUDA in Chinese large-model and cloud-computing workflows.

But this business will not return to the old frictionless state. Licenses, quotas, revenue-sharing arrangements, third-party verification, re-export restrictions, and customer identity review may all become long-term costs. For Nvidia, H200 is not just a product sale; it is a way to maintain market presence in a narrow policy corridor.

What It Means for Chinese Companies

For Chinese companies, H200 is short-term compute supply, not long-term certainty.

If approved companies can actually receive H200 chips, large-model training, inference services, AI cloud, agent platforms, and enterprise private deployments will all benefit. Teams already deeply tied to the CUDA toolchain face far lower migration costs with H200 than with a completely new hardware ecosystem.

But policy uncertainty will make companies cautious. Being able to buy H200 today does not mean stable procurement next year. Buying one batch does not mean a long-term expansion path exists. Even if major companies buy, they will likely continue pushing domestic GPUs, heterogeneous compute, inference optimization, and model compression to avoid being trapped again by a single supply chain.

So H200 is more of a buffer for Chinese AI companies than a final solution.

Pressure on Domestic Chips Will Not Disappear

U.S. approval of H200 does not reduce pressure on domestic AI chips. In some ways, it may make competition more direct.

If H200 really enters the Chinese market, domestic chip vendors will face a stronger benchmark in both performance and ecosystem. Customers will compare training stability, inference throughput, memory capacity, software toolchains, cluster communication, and operations cost.

Domestic chips still have room, however. As long as high-end GPU imports remain policy-sensitive, companies will not put their entire long-term compute base on Nvidia. Domestic solutions still have opportunities if they can provide controllable cost, stable supply, and usable software in specific scenarios.

A more realistic pattern may be: high-end training and critical inference continue to seek Nvidia resources such as H200, while large-scale inference, government and enterprise projects, and controllable supply-chain scenarios shift more toward domestic or mixed compute.

How to Read This

The most accurate reading is that U.S.-China AI chip friction has loosened temporarily, but has not returned to full openness.

The U.S. granted licenses to rebalance controls and commercial interests. Nvidia wants to use H200 to return to China’s high-end AI chip market. Chinese companies want stronger compute, but they also need to evaluate import uncertainty and domestic substitution strategy.

The key questions are not only whether the U.S. “allows” the sale, but what happens next:

  1. Whether the first H200 batch is actually delivered to Chinese customers.
  2. Whether approved companies disclose purchase scale and deployment scenarios.
  3. Whether China provides clearer guidance on import, procurement, and usage.

Until those questions land, H200 remains an opened window for the Chinese market, not a fully restored supply chain.
