Since the release of the DirectX 12 API, developers have had to manage hardware resources for every game manually. Binding several hundred resources per scene is no easy task, especially when working on a massive open-world game with ray-traced global illumination and dynamic lighting. Instead of carefully binding texture and shader data on a per-scene basis, most developers simply keep all of it resident in memory all the time.
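To make the contrast concrete, here is a minimal Python sketch (hypothetical resource names and sizes, not any engine's actual code) comparing the VRAM footprint of per-scene streaming against keeping every resource resident:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    size_mb: int

# Hypothetical numbers: 150 textures at 64MB each (~9.4GB total), of which
# only 40 (~2.5GB) are actually needed by the current scene.
ALL_RESOURCES = [Resource(f"texture_{i}", 64) for i in range(150)]
SCENE_RESOURCES = ALL_RESOURCES[:40]

def resident_set_mb(per_scene: bool) -> int:
    """VRAM occupied (in MB) under each binding strategy."""
    pool = SCENE_RESOURCES if per_scene else ALL_RESOURCES
    return sum(r.size_mb for r in pool)

print("per-scene streaming:", resident_set_mb(True), "MB")   # 2560 MB
print("everything resident:", resident_set_mb(False), "MB")  # 9600 MB
```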
The result? We’ve got games that take up over 10GB of graphics memory, even at 1080p and sometimes without ray tracing. Meanwhile, AMD and NVIDIA have decided to limit the VRAM on their budget graphics cards to 8GB (for now), most notably on the RTX 4060 Ti and the RX 7600.
Interestingly, GDDR6 memory prices are at an all-time low. You can now buy an 8Gb (gigabit) GDDR6 IC for as little as $3.30, roughly a quarter of its early 2022 price; in February 2022, the same 8Gb IC was going for $13. Demand has weakened since then, while supply has remained largely the same.
Going by these numbers, 8GB of GDDR6 memory costs about $27 on the spot market, and 16GB about $54. Early last year, the same configurations would have cost over $80 and $160, respectively. As you can see, memory pricing should be a non-issue for graphics card vendors, and yet we’re stuck with 12GB and 8GB models of the RTX 4070/4070 Ti and RTX 4060/4060 Ti.
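The arithmetic behind those figures is simply IC count times per-IC price; a small Python sketch using the spot prices quoted above (the article's figures, not live data):

```python
# Cost of a GDDR6 configuration built from 8Gb (1GB) ICs.
# Per-IC prices are the ones quoted above; the article rounds the totals to $27/$54.
SPOT_PRICE_PER_IC = 3.30   # USD per 8Gb GDDR6 IC, current spot price
PEAK_PRICE_PER_IC = 13.00  # USD per 8Gb GDDR6 IC, February 2022

def config_cost(capacity_gb: int, price_per_ic: float, ic_density_gb: int = 1) -> float:
    """Total IC cost for a given VRAM capacity."""
    return (capacity_gb // ic_density_gb) * price_per_ic

for cap in (8, 16):
    print(f"{cap}GB: ~${config_cost(cap, SPOT_PRICE_PER_IC):.2f} now, "
          f"~${config_cost(cap, PEAK_PRICE_PER_IC):.2f} in early 2022")
```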
Of course, NVIDIA and AMD don’t buy memory off the spot market. Contracts with memory manufacturers are usually signed well ahead of production, at the prices prevailing at the time. Furthermore, most next-gen GPUs use the pricier 16Gb ICs instead of 8Gb ones to accommodate their slimmer 128 or 192-bit buses: each GDDR6 IC sits on a 32-bit channel, so a 128-bit bus only has room for four chips, and reaching 8GB means using 2GB (16Gb) ICs.
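As a sanity check, the relationship between bus width, IC density, and capacity can be computed directly; the sketch below uses the standard one-IC-per-32-bit-channel GDDR6 layout (clamshell mode, where two ICs share a channel, is included for completeness):

```python
def vram_gb(bus_width_bits: int, ic_density_gbit: int, clamshell: bool = False) -> int:
    """VRAM capacity implied by bus width and per-IC density."""
    ics = bus_width_bits // 32          # one GDDR6 IC per 32-bit channel
    if clamshell:
        ics *= 2                        # two ICs share a channel in x16 mode
    return ics * ic_density_gbit // 8   # convert Gbit to GB

print(vram_gb(128, 16))                 # 8  -> RTX 4060/4060 Ti, RX 7600
print(vram_gb(192, 16))                 # 12 -> RTX 4070/4070 Ti
print(vram_gb(128, 16, clamshell=True)) # 16 -> 16GB variants on a 128-bit bus
```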
Upgrading the memory buffer after release is tricky, as AIBs have to revise the PCB design and the BOM to accommodate the change. It would also mean signing new memory contracts at the last moment, which would likely be less lucrative than the existing ones.
Source: Reddit.