High-bandwidth memory (HBM)

Technology

A critical component for AI accelerators (GPUs), predicted to be the best-performing asset class in 2025 due to a forecast shortage of compute and inference capacity.


Created

7/26/2025, 5:37:18 AM

Last Updated

7/26/2025, 6:02:39 AM

Research Retrieved

7/26/2025, 6:02:38 AM

Summary

High Bandwidth Memory (HBM) is an advanced computer memory interface for 3D-stacked synchronous dynamic random-access memory (SDRAM), initially developed by Samsung, AMD, and SK Hynix. Standardized by JEDEC, HBM is designed to deliver exceptional data bandwidth and efficiency by vertically stacking multiple memory dies and integrating them closely with processors, utilizing Through-Silicon Vias (TSVs) for interconnection. This architecture significantly reduces latency and power consumption compared to traditional memory types like DDR and GDDR. HBM is widely utilized in high-performance computing applications such as graphics accelerators, network devices, data center AI ASICs, and as on-package cache or RAM in CPUs, FPGAs, and supercomputers. Due to an anticipated compute shortage, HBM is predicted to be a top-performing asset in 2025.
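
The bandwidth advantage follows from the interface geometry: an HBM stack exposes a very wide bus at a modest per-pin rate, whereas GDDR-class memory reaches its bandwidth with a narrow bus at a high clock. A minimal sketch of that trade-off in Python (the GDDR5 figures are typical values assumed for illustration, not taken from this page):

```python
# Peak bandwidth = bus width (bits) * per-pin rate (Gb/s) / 8 bits per byte.
def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    return bus_width_bits * pin_rate_gbps / 8

# One first-generation HBM stack: 1024-bit interface at 1 Gb/s per pin.
print(peak_bandwidth_gb_s(1024, 1.0))  # 128.0 GB/s

# One GDDR5 chip (assumed typical figures): 32-bit interface at 8 Gb/s per pin.
print(peak_bandwidth_gb_s(32, 8.0))    # 32.0 GB/s
```

Keeping the per-pin rate low while widening the bus is also one reason HBM draws less power: slower signaling needs simpler, lower-power I/O circuitry.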

Referenced in 1 Document
Research Data
Extracted Attributes
  • Type

    Computer memory interface for 3D-stacked synchronous dynamic random-access memory (SDRAM)

  • Applications

    High-performance graphics accelerators, network devices, data center AI ASICs, on-package cache/RAM in CPUs, FPGAs, supercomputers (e.g., NEC SX-Aurora TSUBASA, Fujitsu A64FX)

  • Architecture

    Stacks up to eight DRAM dies and an optional base die, connected via Through-Silicon Vias (TSVs) to a memory controller (GPU/CPU) through a substrate (e.g., silicon interposer) or directly

  • Key Advantages

    Higher bandwidth than DDR4 or GDDR5, lower power consumption, smaller form factor, reduced latency, high density

  • Generation Specs (per stack)

                               HBM         HBM2/HBM2E   HBM3
    Max Pin Transfer Rate      1 Gb/s      3.2 Gb/s     Unknown
    Max Capacity               4 GB        24 GB        64 GB
    Max Bandwidth              128 GB/s    410 GB/s     512 GB/s

    (The HBM3 column reflects pre-standard estimates from the Simms source below; the final JEDEC HBM3 standard specifies up to 6.4 Gb/s per pin, roughly 819 GB/s per stack.)
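
The figures above are internally consistent: each HBM generation presents a 1024-bit interface per stack, so peak bandwidth is the pin rate times 1024 bits divided by 8, and stack capacity is the die count times the per-die density. A quick check in Python (the die counts and densities are illustrative assumptions consistent with the table, not extracted values):

```python
# Per-stack bandwidth: pin_rate (Gb/s) * 1024-bit bus / 8 bits per byte.
def bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    return pin_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(1.0))  # HBM1:  128.0 GB/s, matching the table
print(bandwidth_gb_s(3.2))  # HBM2E: 409.6 GB/s, i.e. the table's ~410 GB/s

# Per-stack capacity: stacked dies * per-die density (Gb) / 8 bits per byte.
# Die counts and densities below are assumed typical configurations.
def capacity_gb(num_dies: int, die_density_gbit: int) -> float:
    return num_dies * die_density_gbit / 8

print(capacity_gb(4, 8))    # HBM1:  4-high stack of 8 Gb dies   -> 4.0 GB
print(capacity_gb(12, 16))  # HBM2E: 12-high stack of 16 Gb dies -> 24.0 GB
```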

Timeline
  • SK Hynix produced the first HBM chip. (Source: summary)

    2013

  • HBM was adopted by JEDEC as an industry standard. (Source: summary)

    October 2013

  • AMD's Fiji GPUs were the first devices to incorporate HBM. (Source: summary)

    2015

  • The second generation, HBM2, was accepted by JEDEC. (Source: summary)

    January 2016

  • Samsung and Hynix announced new generation HBM memory technologies with increased density, bandwidth, and lower power consumption. (Source: web_search_results)

    August 2016

  • The HBM3 standard was officially announced by JEDEC. (Source: summary)

    January 27, 2022

  • HBM is anticipated to be a top-performing asset due to a compute shortage. (Source: related_documents)

    2025

High Bandwidth Memory

High Bandwidth Memory (HBM) is a computer memory interface for 3D-stacked synchronous dynamic random-access memory (SDRAM), initially developed by Samsung, AMD, and SK Hynix. It is used with high-performance graphics accelerators, network devices, and high-performance data-center AI ASICs; as on-package cache in CPUs and on-package RAM in upcoming CPUs; in FPGAs; and in some supercomputers (such as the NEC SX-Aurora TSUBASA and Fujitsu A64FX). The first HBM memory chip was produced by SK Hynix in 2013, and the first devices to use HBM were the AMD Fiji GPUs in 2015. HBM was adopted by JEDEC as an industry standard in October 2013. The second generation, HBM2, was accepted by JEDEC in January 2016, and JEDEC officially announced the HBM3 standard on January 27, 2022.

Web Search Results
  • High Bandwidth Memory - Wikipedia

    High Bandwidth Memory (HBM) is a computer memory interface for 3D-stacked synchronous dynamic random-access memory (SDRAM) initially from Samsung, AMD and SK Hynix. It is used in conjunction with high-performance graphics accelerators, network devices, high-performance datacenter AI ASICs, as on-package cache in CPUs and on-package RAM in upcoming CPUs, and FPGAs and in some supercomputers (such as the NEC SX-Aurora TSUBASA and Fujitsu A64FX). The first HBM memory chip was produced by SK Hynix [...] HBM achieves higher bandwidth than DDR4 or GDDR5 while using less power, and in a substantially smaller form factor. This is achieved by stacking up to eight DRAM dies and an optional base die which can include buffer circuitry and test logic. The stack is often connected to the memory controller on a GPU or CPU through a substrate, such as a silicon interposer. Alternatively, the memory die could be stacked directly on the CPU or GPU chip. [...] At Hot Chips in August 2016, both Samsung and Hynix announced a new generation of HBM memory technologies. Both companies announced high performance products expected to have increased density, increased bandwidth, and lower power consumption. Samsung also announced a lower-cost version of HBM under development targeting mass markets. Removing the buffer die and decreasing the number of TSVs lowers cost, though at the expense of a decreased overall bandwidth (200 GB/s).

  • High Bandwidth Memory: Concepts, Architecture, and Applications

    High Bandwidth Memory (HBM) is an advanced memory technology that leverages a 3D-stacked DRAM architecture to deliver exceptional data bandwidth and efficiency. Unlike traditional memory modules that rely on higher clock speeds over narrower buses, HBM stacks multiple memory dies vertically and integrates them closely with processors. This approach enables a significantly wider communication interface while reducing latency and power consumption. Standardized by JEDEC, HBM was initially [...] Memory technology is crucial for modern computing, impacting performance, power efficiency, and cost. High Bandwidth Memory (HBM) is a cutting-edge solution that competes with more traditional memory types such as DDR (used for system memory), GDDR (used in graphics processing), and LPDDR (optimized for mobile devices). This document provides a detailed comparison of these memory types in terms of bandwidth, power consumption, cost, and typical use cases. [...] High-Bandwidth Memory (HBM) is a 3D-stacked DRAM designed for ultra-high bandwidth and efficiency. Used in GPUs, AI, and HPC, it tackles memory bottlenecks by stacking dies vertically near processors. This article explores its evolution, architecture, and impact on modern computing.

  • What is HBM (High Bandwidth Memory)? - Simms International

    High Bandwidth Memory (HBM) is an emerging type of computer memory designed to provide both high bandwidth and low power consumption. It is typically suited to, and used in, high-performance computing applications where data speed is required. [...]

    |                       | HBM      | HBM2/HBM2E (Now) | HBM3 (Next) |
    | --------------------- | -------- | ---------------- | ----------- |
    | Max Pin Transfer Rate | 1 Gbps   | 3.2 Gbps         | Unknown     |
    | Max Capacity          | 4GB      | 24GB             | 64GB        |
    | Max Bandwidth         | 128 GBps | 410 GBps         | 512 GBps    |

    A variety of semiconductor companies are using HBM in their products, none more so than Micron, a key vendor partner of Simms.

  • What is high-bandwidth memory (HBM)? By - TechTarget

    High-bandwidth memory is a type of computer memory that is optimized for fast data transfer and reduced power consumption. It has become increasingly used and deployed alongside high-performance computing (HPC) and artificial intelligence workloads that require optimized high-speed memory. HBM is used in various computing platforms -- including graphics processing units (GPUs), field programmable gate arrays and AI accelerators. [...] The goal with HBM is to place more memory closer to a processor, which reduces signal travel distance and decreases latency. The ability to provide more memory with less latency helps accelerate data transfer rates. [...] HBM isn't an entirely unique type of memory. It is an implementation that deploys dynamic random access memory silicon in a different way than it is conventionally used. With HBM, DRAM silicon dies are stacked vertically with a connection technology known as through-silicon vias to achieve high density and performance. TSV technology is a process where thin electrical wires run through holes drilled in the silicon chips to connect multiple layers to a base logic chip. HBM memory can be directly [...]

  • High Bandwidth Memory (HBM): Customization vs. Standardization

    HBM is a revolutionary technology that stacks memory dies vertically and interconnects them using Through-Silicon Vias (TSVs). This architecture shortens the distance signals must travel between dies, delivering significantly higher bandwidth and lower power consumption than legacy memory technologies. [...] As artificial intelligence (AI) reshapes industries and advances technological frontiers, its success hinges on advanced memory capabilities. Leading this transformation is High Bandwidth Memory (HBM), which offers unparalleled speeds and efficiencies. [...] HBM is well suited for data-intensive applications such as AI, graphics processing, and high-performance computing (HPC). It is also rapidly evolving to meet ever-growing processing and memory requirements. As new use cases emerge, critical questions must be answered: