NVIDIA may adopt Micron’s HBM3E memory to power its next-gen Blackwell GPUs. Slated to arrive in mid-to-late 2024, these Tensor Core accelerators will train the most complex neural networks. Micron plans to start volume shipments of its HBM3E memory in the first half of 2024, making it the first vendor to reveal HBM3E, well ahead of rivals Samsung and SK Hynix.
Micron has announced that NVIDIA is currently testing its HBM3E memory for the data center GPUs that will drive its future AI solutions. For now, NVIDIA’s only announced HBM3E product is the Grace Hopper GH200 superchip, which pairs the Arm-based Grace CPU with a Hopper GPU. That suggests the part being tested is an unreleased SKU, likely the GB100 or “B100” Blackwell Tensor Core GPU.
Micron is primarily known for manufacturing DRAM (computer memory) and NAND-based SSDs. In the HBM market, it is a minor player with a share of just 10%. Its HBM3E, branded HBM3 Gen 2, is meant to be its ticket into the lucrative AI memory segment currently dominated by SK Hynix and Samsung.
Micron’s HBM3E comes in 24GB packages consisting of eight 24Gbit memory dies (an 8-Hi stack) fabbed on its 1β (1-beta) process. The stack attains per-pin data transfer rates of up to 9.2GT/s, allowing for a peak bandwidth of up to 1.2TB/s. Existing HBM3 solutions top out at around 0.82TB/s, giving Micron’s HBM3 Gen 2 memory a roughly 44% advantage. The company plans to further beef up its HBM portfolio with 12-Hi 36GB stacks in the latter half of 2024.
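As a quick sanity check of those bandwidth figures, peak stack bandwidth is simply the per-pin data rate multiplied by the interface width. The 1024-bit bus width below is the standard HBM3 interface, not a figure from the announcement:

```python
# Peak HBM stack bandwidth = per-pin data rate x interface width.
# Pin rates are from the article (9.2 GT/s for Micron HBM3E,
# 6.4 GT/s for standard HBM3); the 1024-bit width is the JEDEC
# HBM3 interface, assumed here for the calculation.

def stack_bandwidth_gbps(pin_rate_gtps: float, bus_width_bits: int = 1024) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return pin_rate_gtps * bus_width_bits / 8  # bits per transfer -> bytes

hbm3e = stack_bandwidth_gbps(9.2)  # Micron HBM3E
hbm3 = stack_bandwidth_gbps(6.4)   # standard HBM3

print(f"HBM3E: {hbm3e:.1f} GB/s")            # 1177.6 GB/s, i.e. ~1.2 TB/s
print(f"HBM3:  {hbm3:.1f} GB/s")             # 819.2 GB/s, i.e. ~0.82 TB/s
print(f"Advantage: {hbm3e / hbm3 - 1:.0%}")  # ~44%
```

The arithmetic lines up with the quoted numbers: 9.2 GT/s across a 1024-bit bus works out to roughly 1.2TB/s, about 44% ahead of HBM3’s 6.4 GT/s.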