Hopper (GPU)

Technology

Nvidia's previous generation of datacenter GPUs, which were instrumental in the initial AI boom. The transition from Hopper to Blackwell is a major event in the tech industry.


Created at

7/22/2025, 5:57:37 AM

Last updated

7/22/2025, 5:59:32 AM

Research retrieved

7/22/2025, 5:59:32 AM

Summary

Hopper is a graphics processing unit (GPU) microarchitecture developed by Nvidia, primarily designed for datacenters and serving as the latest generation in their Nvidia Data Centre GPU line, previously branded as Nvidia Tesla. Named after computer scientist Grace Hopper, its architecture was officially revealed in March 2022, building upon predecessors like Turing and Ampere with significant enhancements including a new streaming multiprocessor, an improved memory subsystem, and a transformer acceleration engine. Hopper GPUs, such as the H100, are optimized for complex AI and high-performance computing (HPC) workloads, including training large models, inference, and generative AI, and are implemented using advanced processes like TSMC N4 with 80 billion transistors. The ongoing transition from Hopper to the next-generation Blackwell GPUs is a key factor influencing Nvidia's financial metrics, such as its accounts receivable, and highlights the increasing demand for specialized AI infrastructure.

Referenced in 1 Document
Research Data
Extracted Attributes
  • Type

    GPU microarchitecture

  • Developer

    Nvidia

  • Named after

    Grace Hopper (computer scientist)

  • Key Features

    New streaming multiprocessor, improved memory subsystem, transformer acceleration engine, automatic inline compression, Multi-Instance GPU (MIG), Transformer Engine (FP8/FP16 precision), NVLink Switch System, HBM3 memory

  • Product Line

    Nvidia Data Centre GPUs (formerly Nvidia Tesla)

  • Primary Purpose

    Datacenters, AI, HPC workloads

  • Performance (H100)

    Up to 4 petaflops of AI performance; up to 30x faster than the NVIDIA A100 on complex AI workloads

  • Transistor Count (H100)

    80 billion transistors

  • Process Technology (H100)

    TSMC N4 process

Timeline
  • Hopper architecture leaked. (Source: Wikipedia)

    2019-11-XX

  • Hopper architecture officially revealed/launched. (Source: Wikipedia)

    2022-03-XX

  • NVIDIA launched the Hopper architecture, including the H100 GPU. (Source: Web Search Results)

    2022-XX-XX

  • Transition from Hopper (GPU) architecture to Blackwell GPUs begins, impacting Nvidia's accounts receivable. (Source: Related Documents)

    2024-XX-XX

Hopper (microarchitecture)

Hopper is a graphics processing unit (GPU) microarchitecture developed by Nvidia. It is designed for datacenters and is used alongside the Lovelace microarchitecture. It is the latest generation of the line of products formerly branded as Nvidia Tesla, now Nvidia Data Centre GPUs. Named for computer scientist and United States Navy rear admiral Grace Hopper, the Hopper architecture was leaked in November 2019 and officially revealed in March 2022. It improves upon its predecessors, the Turing and Ampere microarchitectures, featuring a new streaming multiprocessor, a faster memory subsystem, and a transformer acceleration engine.

Web Search Results
  • Hopper (microarchitecture) - Wikipedia

    Hopper allows CUDA compute kernels to utilize automatic inline compression, including in individual memory allocation, which allows accessing memory at higher bandwidth. This feature does not increase the amount of memory available to the application, because the data (and thus its compressibility) may be changed at any time. The compressor automatically chooses between several compression algorithms. [...] The Nvidia Hopper H100 GPU is implemented using the TSMC N4 process with 80 billion transistors. It consists of up to 144 streaming multiprocessors. Due to the increased memory bandwidth provided by the SXM5 socket, the Nvidia Hopper H100 offers better performance when used in an SXM5 configuration than in the typical PCIe socket.
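The capacity caveat in the excerpt above (compression raises effective bandwidth but cannot promise more usable memory) can be illustrated with an ordinary software compressor. This is only an analogy in Python using zlib, not Hopper's hardware path, which selects among its own algorithms transparently:

```python
import random
import zlib

# A page of highly compressible data moves far fewer bytes than its
# logical size: this is the bandwidth win inline compression targets.
page = b"\x00" * 65536
ratio = len(page) / len(zlib.compress(page))

# Incompressible data gains nothing. Because any page may be rewritten
# with incompressible contents at any time, the full uncompressed size
# must stay reserved, which is why compression cannot add capacity.
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(65536))
noisy_ratio = len(noise) / len(zlib.compress(noise))
```

The same worst-case reasoning explains the Wikipedia wording: compressibility is a property of the current data, so only bandwidth, never capacity, can be guaranteed.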

  • All You Need to Know About NVIDIA Hopper GPUs: NVIDIA H100 vs ...

    NVIDIA Hopper is a groundbreaking GPU architecture designed to accelerate complex AI and high-performance computing (HPC) workloads. It is named after American computer scientist and mathematician Grace Hopper. The Hopper architecture is optimised for tasks requiring large-scale parallel processing and enhanced memory efficiency. The NVIDIA Hopper GPUs cater to a diverse niche of users including researchers, developers and enterprises to achieve faster results in their AI and machine learning [...] NVIDIA Hopper is an advanced GPU architecture designed for high-performance AI and HPC workloads, featuring innovations like the Transformer Engine and NVLink Switch System for optimal performance in training, inference, and generative AI applications. ### What is NVIDIA Hopper GPU used for? [...] When NVIDIA launched the Hopper architecture in 2022, Jensen Huang said "NVIDIA H100 is the engine of the world's AI infrastructure that enterprises use to accelerate their AI-driven businesses." The NVIDIA Hopper is built with powerful innovations to handle complex AI workloads like training large models and running inference at scale with 30x speed over the NVIDIA A100. But how does the NVIDIA Hopper achieve this level of superior performance? Let's explore this in our latest blog.

  • H100 Tensor Core GPU - NVIDIA

    The Hopper Tensor Core GPU will power the NVIDIA Grace Hopper CPU+GPU architecture, purpose-built for terabyte-scale accelerated computing and providing 10X higher performance on large-model AI and HPC. The NVIDIA Grace CPU leverages the flexibility of the Arm® architecture to create a CPU and server architecture designed from the ground up for accelerated computing. The Hopper GPU is paired with the Grace CPU using NVIDIA’s ultra-fast chip-to-chip interconnect, delivering 900GB/s of bandwidth,

  • NVIDIA Hopper GPU Architecture

    The NVIDIA Hopper architecture advances Tensor Core technology with the Transformer Engine, designed to accelerate the training of AI models. Hopper Tensor Cores have the capability to apply mixed FP8 and FP16 precisions to dramatically accelerate AI calculations for transformers. Hopper also triples the floating-point operations per second (FLOPS) for TF32, FP64, FP16, and INT8 precisions over the prior generation. Combined with Transformer Engine and fourth-generation NVIDIA® NVLink®, Hopper [...] With Multi-Instance GPU (MIG), a GPU can be partitioned into several smaller, fully isolated instances with their own memory, cache, and compute cores. The Hopper architecture further enhances MIG by supporting multi-tenant, multi-user configurations in virtualized environments across up to seven GPU instances, securely isolating each instance with confidential computing at the hardware and hypervisor level. Dedicated video decoders for each MIG instance deliver secure, high-throughput [...] Learn about the next massive leap in accelerated computing with the NVIDIA Hopper™ architecture. Hopper securely scales diverse workloads in every data center, from small enterprise to exascale high-performance computing (HPC) and trillion-parameter AI, so brilliant innovators can fulfill their life's work at the fastest pace in human history.
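The mixed FP8/FP16 arithmetic described above uses two 8-bit floating-point formats, E4M3 and E5M2 (published in the joint NVIDIA/Arm/Intel FP8 specification): E4M3 favors precision, E5M2 favors range. A minimal Python sketch, assuming those standard bit layouts, derives each format's largest finite value:

```python
def max_finite(exp_bits, man_bits, bias, ieee_like):
    """Largest finite value of a binary floating-point format.

    ieee_like=True  reserves the whole top exponent for Inf/NaN (E5M2, FP16);
    ieee_like=False reserves only the all-ones-mantissa pattern for NaN (E4M3),
    leaving the rest of the top exponent usable for finite values.
    """
    if ieee_like:
        exp = (2 ** exp_bits - 2) - bias       # last normal exponent
        man = 2 - 2 ** -man_bits               # all-ones mantissa is finite
    else:
        exp = (2 ** exp_bits - 1) - bias       # top exponent still usable...
        man = 2 - 2 ** -(man_bits - 1)         # ...minus one ulp (NaN pattern)
    return man * 2 ** exp

E4M3_MAX = max_finite(4, 3, bias=7, ieee_like=False)   # 448.0
E5M2_MAX = max_finite(5, 2, bias=15, ieee_like=True)   # 57344.0
FP16_MAX = max_finite(5, 10, bias=15, ieee_like=True)  # 65504.0
```

Because E4M3 tops out near 448 while activations and gradients can exceed that, the Transformer Engine pairs these formats with per-tensor scaling, keeping values inside the representable range while retaining FP16/FP32 accumulation.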

  • NVIDIA Blackwell vs NVIDIA Hopper: A Detailed Comparison

    NVIDIA Hopper was launched in 2022 and named after Grace Hopper, a pioneering computer scientist and U.S. Navy rear admiral who was instrumental in the development of computer programming languages. The NVIDIA Hopper architecture excels in tasks like transformer-based AI models, large-scale language models (LLMs), and scientific computing. With 80 billion transistors and HBM3 memory, the H100 GPU offers up to 4 petaflops of AI performance, making it ideal for enterprises and research [...] The NVIDIA Hopper series, including the NVIDIA HGX H100 and NVIDIA HGX H200, are available on the AI Supercloud for deployment. These GPUs are designed for high-performance AI and accelerated computing workloads, offering industry-leading capabilities for tasks like large language model (LLM) inference, HPC, and enterprise AI solutions. [...] The NVIDIA H100 Tensor Core GPU is a powerhouse for AI inference, training, and accelerated computing. Leveraging innovations from the Hopper architecture, the H100 is equipped with 80 billion transistors and HBM3 memory, delivering breakthrough performance and scalability for AI models, HPC tasks, and enterprise data centres. The H100 features a Transformer Engine with FP8 precision, offering up to 4x faster training compared to previous generations for large models like Llama 3.