NVIDIA plans to launch its next-gen Blackwell graphics architecture next year, promising unprecedented levels of performance to gamers and data centers alike. Unlike the last two generations, both gaming and data center products are expected to leverage the same broader GPU architecture (Blackwell). As always, the Tensor Core GPUs (accelerators) will launch a few quarters earlier, probably at GTC 2024, slated to be held in March next year.
The GeForce RTX 50 series lineup will likely be released close to the year’s end, continuing the 2-year cadence followed by Team Green. A report from DigiTimes claims that NVIDIA will tap TSMC’s 3nm process node, the most advanced process technology on the market, for its B100/GB100 data center accelerators.
After Apple, NVIDIA will be the second fabless IC company to get a share of TSMC’s cutting-edge process capacity. AMD, Qualcomm, and MediaTek are also expected to adopt the N3 node later next year. For NVIDIA, Blackwell will mark a major shift in its product design strategy. For the very first time, it is set to leverage a chiplet (MCM) design, connecting multiple dies on a common interposer. With the H100 already exceeding a die area of 800mm², this seems like the logical way forward.
TSMC’s CoWoS packaging will be employed to connect the disaggregated dies, with several HBM3 stacks attached to the primary package. Expect a massive SKU with unprecedented levels of neural network training potential and a steep price tag to boot.
As far as the GeForce gaming products go, a modular top-end SKU is conceivable, but how well it scales will be of utmost importance. With temporal upscalers present in nearly every aspect of game rendering, all current and previous frame data has to be readily accessible (without a latency penalty) to every processing cluster, or we risk a repeat of the SLI/CrossFire disaster.
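To see why that history data matters, consider a minimal sketch of the temporal accumulation step common to upscalers like DLSS or TAA. This is an illustrative simplification, not NVIDIA’s actual implementation: the buffer names, the motion-vector reprojection, and the blend weight are all assumptions. The key point is that each pixel may read an arbitrary location in the previous frame’s history buffer, so on a multi-die GPU that buffer must be reachable from every cluster at low latency.

```cuda
#include <cuda_runtime.h>

// Hypothetical temporal accumulation kernel: blends the current frame with a
// reprojected sample from the previous frame's history buffer.
__global__ void temporalAccumulate(const float4* currentFrame,   // this frame's shaded pixels
                                   const float4* historyBuffer,  // previous frame's accumulated result
                                   const float2* motionVectors,  // per-pixel motion from the renderer
                                   float4*       output,         // blended result, becomes next frame's history
                                   int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;

    // Reproject: find where this pixel was in the previous frame.
    float2 mv = motionVectors[idx];
    int px = min(max(int(x - mv.x), 0), width  - 1);
    int py = min(max(int(y - mv.y), 0), height - 1);

    // This read can land anywhere in the history buffer, which is why that
    // buffer must be accessible to every processing cluster without a
    // latency penalty on a chiplet-based GPU.
    float4 hist = historyBuffer[py * width + px];
    float4 cur  = currentFrame[idx];

    const float alpha = 0.1f;  // assumed blend weight toward the new sample
    output[idx] = make_float4(cur.x * alpha + hist.x * (1.0f - alpha),
                              cur.y * alpha + hist.y * (1.0f - alpha),
                              cur.z * alpha + hist.z * (1.0f - alpha),
                              1.0f);
}
```

If the dies cannot share this history traffic transparently, the driver would have to partition or duplicate the buffers per die, which is exactly the kind of explicit work-splitting that made SLI and CrossFire so fragile.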