H200

Technology

A high-end Nvidia data-center GPU, one generation behind the company's most advanced parts, that was included in the US export deal with China. It outperforms anything Chinese chipmakers can currently produce, but it is not the best the US has.


First Mentioned

1/9/2026, 4:44:55 AM

Last Updated

1/9/2026, 4:47:36 AM

Research Retrieved

1/9/2026, 4:47:36 AM

Summary

The H200 is a high-performance Graphics Processing Unit (GPU) developed by Nvidia under the leadership of CEO Jensen Huang, specifically designed to accelerate generative AI and high-performance computing (HPC) workloads. Built on the Hopper architecture, the H200 features 141GB of HBM3e memory and 4.8 TB/s of bandwidth, providing a significant performance leap over its predecessor, the H100. In late 2025, the H200 became a central element of the Trump administration's economic and technological strategy. Through the Bureau of Industry and Security (BIS), the US Department of Commerce established a deal allowing Nvidia to export these chips to China in exchange for a 25% fee. This policy, detailed by Secretary of Commerce Howard Lutnick, aims to maintain US leadership in semiconductors and AI while addressing trade deficits. Despite intense demand from Chinese firms like ByteDance, the Chinese government briefly paused orders in early 2026 to evaluate the impact on its domestic semiconductor industry.

Referenced in 1 Document
Research Data
Extracted Attributes
  • Export Fee

    25% (paid to the US Government)

  • Architecture

    NVIDIA Hopper

  • Form Factors

    SXM5, PCIe double-wide

  • Manufacturer

    Nvidia

  • Memory Capacity

    141GB HBM3e

  • Memory Bandwidth

    4.8 terabytes per second (TB/s)

  • Estimated Unit Price

    $27,000 USD

  • Performance (FP8 Tensor Core)

    3,958 TFLOPS

  • Max Thermal Design Power (TDP)

    700W
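
As a quick sanity check on these attributes, the memory-bandwidth figure bounds how fast the full 141GB can be streamed, and the export fee is a fixed fraction of the unit price. A minimal sketch, using only the figures listed above (illustrative arithmetic, assuming decimal units: 1 GB = 1e9 bytes, 1 TB = 1e12 bytes):

```python
# Illustrative arithmetic based on the attribute values above.
MEMORY_BYTES = 141e9          # 141GB HBM3e
BANDWIDTH_BYTES_S = 4.8e12    # 4.8 TB/s

# Lower bound on the time to read the entire HBM once -- a rough
# proxy for one decode step of a model that fills the memory.
full_read_s = MEMORY_BYTES / BANDWIDTH_BYTES_S
print(f"full-memory read: {full_read_s * 1e3:.1f} ms")   # ~29.4 ms

UNIT_PRICE_USD = 27_000       # estimated unit price (above)
FEE_RATE = 0.25               # export fee paid to the US government

fee_per_unit = UNIT_PRICE_USD * FEE_RATE
print(f"fee per unit: ${fee_per_unit:,.0f}")             # $6,750
```

The ~29 ms full-memory read time is why the bandwidth spec, not peak TFLOPS, often dominates LLM inference throughput at this memory capacity.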

Timeline
  • President Donald Trump approves the export of H200 GPUs to China, implementing a 25% fee on each unit exported. (Source: Beijing tells companies to pause H200 purchases - Tom's Hardware)

    2025-12-01

  • Reports emerge that Nvidia is requiring full upfront payment for H200 chips from Chinese internet giants due to high demand and regulatory complexity. (Source: Nvidia requires full upfront payment for H200 chips in ... - Reuters)

    2026-01-07

  • The Chinese government instructs domestic tech companies to temporarily pause H200 orders while regulators determine domestic purchase quotas. (Source: Nvidia requires full upfront payment for H200 chips in ... - Reuters)

    2026-01-08

  • Expected arrival of the first major batch of approximately 82,000 H200 GPUs in China before the Lunar New Year. (Source: Beijing tells companies to pause H200 purchases - Tom's Hardware)

    2026-02-15

  • TSMC is scheduled to begin ramping up additional H200 production to meet the high volume of orders from the Chinese market. (Source: Nvidia requires full upfront payment for H200 chips in ... - Reuters)

    2026-04-01
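
The timeline's 82,000-unit first shipment, combined with the ~$27,000 unit price reported above, implies order-of-magnitude revenue figures for the deal. A hedged back-of-envelope (illustrative only; actual contract terms and pricing are not public):

```python
# Back-of-envelope on the reported figures; real terms are unknown.
BATCH_UNITS = 82_000        # first shipment, expected mid-Feb 2026
UNIT_PRICE_USD = 27_000     # approximate per-unit price (Reuters)
FEE_RATE = 0.25             # fee paid to the US government per unit

batch_value = BATCH_UNITS * UNIT_PRICE_USD
us_fee = batch_value * FEE_RATE

print(f"batch value: ${batch_value / 1e9:.2f}B")   # ~$2.21B
print(f"25% US fee:  ${us_fee / 1e6:.1f}M")        # ~$553.5M

# Reported demand vs. inventory (Reuters): >2M units ordered,
# against roughly 700,000 in stock.
ORDERED, IN_STOCK = 2_000_000, 700_000
print(f"unfilled demand: {ORDERED - IN_STOCK:,} units")  # 1,300,000
```

The gap between orders and inventory is consistent with the timeline entry about TSMC ramping additional production in Q2 2026.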

Web Search Results
  • ThinkSystem NVIDIA H200 141GB GPUs Product Guide

    With the introduction of H200, energy efficiency and TCO reach new levels. This cutting-edge technology offers unparalleled performance, all within the same power profile as H100. AI factories and at-scale supercomputing systems that are not only faster but also more eco-friendly deliver an economic edge that propels the AI and scientific community forward. For at-scale deployments, H200 systems provide 5X more energy savings and 4X better cost of ownership savings over the NVIDIA Ampere architecture generation. Key features of the Hopper architecture: NVIDIA H200 Tensor Core GPU H200 is the world’s most advanced chip ever built. It features major advances to accelerate AI, HPC, memory bandwidth, interconnect, and communication at data center scale. Transformer Engine [...] ## Introduction The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. H200 is the newest addition to NVIDIA’s leading AI and high-performance data center GPU portfolio, bringing massive compute to data centers. The NVIDIA H200 141GB 700W GPU is offered in either the SXM5 form factor or as a PCIe double-wide GPU adapter. Four or eight SXM5 GPU modules are implemented with a fully-connected NVLink topology in supported ThinkSystem servers. SXM5 GPUs are either air-cooled or water-cooled, depending on the server. PCIe double-wide GPUs are air-cooled, and can be implemented using 2-way or 4-way NVLink bridges. [...] NVIDIA HGX™ H200, the world’s leading AI computing platform, features the H200 GPU for the fastest performance. An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications. 
Key AI and HPC workload features: Unlock Insights With High-Performance LLM Inference In the ever-evolving landscape of AI, businesses rely on large language models to address a diverse range of inference needs. An AI inference accelerator must deliver the highest throughput at the lowest TCO when deployed at scale for a massive user base. H200 doubles inference performance compared to H100 when handling LLMs such as Llama2 70B. Optimize Generative AI Fine-Tuning Performance

  • Beijing tells companies to pause H200 purchases - Tom's Hardware

    President Donald Trump approved exports of the H200 early last month, with Washington, D.C., charging Nvidia a 25% fee for every H200 GPU exported to China. Even though the H200 is the last-generation GPU following the release of the Blackwell AI GPU, Chinese companies are still lining up to get their hands on these relatively powerful chips that domestic chipmakers struggle to match. The Chinese government claims that homegrown semiconductors can now match the H20 and RTX Pro 6000D chips, but these are still far behind the latest Blackwell and even the just-allowed full-fledged Hopper AI GPUs. [...] It might seem that this command to hold orders came suddenly, but Beijing had already been in discussion with its biggest tech giants following Trump's reversal of the H200 ban. Despite the directive, several server manufacturers were said to have already placed non-refundable, non-modifiable orders with Nvidia. The AI chip company is also reportedly preparing a shipment of 82,000 GPUs, with the hardware expected to arrive by mid-February 2026. This shows that demand for these chips is high enough that buyers are willing to take the risk while Beijing is still deciding how it will approach the influx of these chips. (News by Jowi Morales, Tom's Hardware)

  • nvidia h200 gpu

    NVIDIA H200 NVL comes with a five-year NVIDIA AI Enterprise subscription to simplify the way you build an enterprise AI-ready platform. H200 accelerates AI development and deployment for production-ready generative AI solutions, including computer vision, speech AI, retrieval augmented generation (RAG), and more. NVIDIA AI Enterprise includes NVIDIA NIM™, a set of easy-to-use microservices designed to speed up enterprise generative AI deployment. Together, deployments have enterprise-grade security, manageability, stability, and support, resulting in performance-optimized AI solutions that deliver faster business value and actionable insights. [...] NVIDIA MGX™ H200 NVL partner and NVIDIA-Certified Systems support up to 8 GPUs. [...] NVIDIA H200 NVL is ideal for lower-power, air-cooled enterprise rack designs that require flexible configurations, delivering acceleration for every AI and HPC workload regardless of size. With up to four GPUs connected by NVIDIA NVLink™ and a 1.5x memory increase, large language model (LLM) inference can be accelerated up to 1.7x, and HPC applications achieve up to 1.3x more performance over the H100 NVL.

  • Nvidia requires full upfront payment for H200 chips in ... - Reuters

    Chinese internet giants including ByteDance and others view the H200 as a significant upgrade over currently available chips. The H200, currently Nvidia's second-most powerful chip, delivers roughly six times the performance of the now-blocked H20 chip that Nvidia had designed specifically for the Chinese market. Nvidia plans to fulfill initial orders from existing stock, with the first batch of H200 chips expected to arrive before the Lunar New Year holiday in mid-February, Reuters reported last month. The company has approached contract chipmaker Taiwan Semiconductor Manufacturing Co (2330.TW) about ramping up H200 production to meet the Chinese demand, with additional manufacturing expected to begin in the second quarter of 2026, Reuters reported last week. [...] China plans to approve some H200 imports as soon as this quarter, Bloomberg reported on Thursday. Chinese officials are preparing to allow purchases for select commercial uses, while barring the military, sensitive government agencies, critical infrastructure, and state-owned enterprises due to security concerns, the report said. Beijing in recent days asked some Chinese tech companies to temporarily pause their H200 chip orders as regulators are still deciding how many domestically produced chips each customer will need to buy alongside each H200 order, the second person said. The Information first reported the pause on Wednesday. [...] Both people spoke on condition of anonymity because the information is not public. The stepped-up policy enforcement has not been reported previously. Nvidia and China's industry ministry had yet to respond to requests for comment at the time of publication. Chinese technology companies have placed orders for more than 2 million H200 chips, priced at around $27,000 each, Reuters reported last month, exceeding Nvidia's inventory of 700,000 of the chips. While Chinese chipmakers like Huawei have developed AI processors including the Ascend 910C, their performance still lags behind Nvidia's H200 for large-scale training of advanced AI models.

  • NVIDIA HGX H200 GPU Servers - Arc Compute

    Arc Compute offers customizable server configurations featuring up to 8x NVIDIA HGX™ H200 GPUs, manufactured by various OEMs, for large-scale AI training, high-throughput inference, and advanced HPC workloads. Example configurations (each with 8x H200 141GB SXM5 and 2x Intel Xeon 48C processors, air cooled): Aivres KR6288-X2/E2, 6U, starting at $277,999 USD; Dell PowerEdge XE9680, 6U, starting at $287,999 USD; Supermicro SYS-821GE-TNHR, 8U, starting at $284,999 USD.

    The NVIDIA H200 was the first GPU to offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s). That's nearly double the capacity of the NVIDIA H100 Tensor Core GPU, with 1.4x more memory bandwidth.

    NVIDIA HGX H200 specifications:

    | Specification | Value |
    | --- | --- |
    | GPU architecture | NVIDIA Hopper |
    | FP64 | 34 TFLOPS |
    | FP64 Tensor Core | 67 TFLOPS |
    | FP32 | 67 TFLOPS |
    | TF32 Tensor Core | 989 TFLOPS |
    | BFLOAT16 Tensor Core | 1,979 TFLOPS |
    | FP16 Tensor Core | 1,979 TFLOPS |
    | FP8 Tensor Core | 3,958 TFLOPS |
    | INT8 Tensor Core | 3,958 TOPS |
    | GPU memory | 141GB |
    | GPU memory bandwidth | 4.8TB/s |
    | Decoders | 7 NVDEC, 7 JPEG |
    | Max thermal design power (TDP) | Up to 700W (configurable) |
    | Multi-Instance GPU | Up to 7 MIGs @ 16.5GB each |
    | Form factor | SXM |
    | Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s |
