
Will Intel’s 500W Xe Graphics Card Really Beat NVIDIA’s RTX GPUs?

Yesterday, Digital Trends shared a couple of slides indicating that Intel might use an MCM design in its Xe graphics cards, similar to how AMD’s Ryzen CPUs are packaged. The leak alleges that Intel will use 128 EU chiplets, dubbed “tiles”, to build massive GPUs with as many as 512 EUs. Keep in mind that Intel’s EUs aren’t directly comparable to NVIDIA’s CUDA cores or AMD’s stream processors; each EU is a cluster of ALUs (eight lanes wide in Gen11).

Each Intel Gen11 EU contains a pair of SIMD ALUs that support both floating-point and integer computation. Each of these units can execute up to four 32-bit floating-point (or integer) operations per cycle, or up to eight 16-bit floating-point operations. This is loosely analogous to an AMD Navi SIMD, albeit much narrower: each RDNA SIMD can execute up to thirty-two FP32 ops per cycle.
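
To put those SIMD widths in perspective, here’s a minimal back-of-the-envelope sketch in Python. The lane counts come from the figures above; the FMA factor (two FLOPs per lane per cycle) and the 1.1 GHz clock are illustrative assumptions on my part, not numbers from the leak.

```python
# Back-of-the-envelope FP32 throughput per execution unit.
# Lane counts are from the article; the clock and FMA factor are assumed.

FMA_FLOPS_PER_LANE = 2  # a fused multiply-add counts as two FLOPs

def peak_gflops(units: int, lanes_per_unit: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 GFLOPS for a block of SIMD units."""
    return units * lanes_per_unit * FMA_FLOPS_PER_LANE * clock_ghz

# One Gen11 EU: 2 SIMD ALUs x 4 FP32 lanes each = 8 lanes.
gen11_eu = peak_gflops(units=1, lanes_per_unit=8, clock_ghz=1.1)

# One RDNA SIMD: 32 FP32 lanes.
rdna_simd = peak_gflops(units=1, lanes_per_unit=32, clock_ghz=1.1)

print(f"Gen11 EU:  {gen11_eu:.1f} GFLOPS at an assumed 1.1 GHz")
print(f"RDNA SIMD: {rdna_simd:.1f} GFLOPS at an assumed 1.1 GHz")
```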

From recent leaks, we know that the Gen12 graphics featured in Tiger Lake and DG1 will pack 96 EUs. The performance of this GPU is estimated to rival lower-end discrete GPUs such as the GTX 1050, or perhaps even the 1050 Ti if we’re being optimistic. The document shared by DT lists three GPUs, with 128, 256 and 512 EUs.
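
To get a rough sense of where those configurations might land, here’s a quick sketch extrapolating shader counts from the leaked EU figures. Whether you count one or both of an EU’s SIMD-4 ALUs as “cores” is a judgment call (the estimate below uses four per EU), so the sketch shows both tallies; these are napkin numbers, not official specs.

```python
# Rough shader-count extrapolation from the leaked EU configurations.
# Nothing here is an official spec; the per-EU factors are interpretations.

LEAKED_EU_COUNTS = [128, 256, 512]

for eus in LEAKED_EU_COUNTS:
    conservative = eus * 4  # counting one SIMD-4 ALU per EU
    optimistic = eus * 8    # counting both SIMD-4 ALUs per EU
    print(f"{eus:>3} EUs -> {conservative:>4} to {optimistic:>4} shader-equivalent cores")
```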

Since the 128 EU part includes 512 ALUs or cores (128 x 4), the 512 EU variant should pack 2048 shaders. That should put it directly in competition with the NVIDIA GeForce RTX 2060, at least in theory. I still expect the Turing part to come out on top, but remember that Intel doesn’t need to beat the competition; it just needs to prove that it can build a comparable discrete GPU. However, there are several problems here:

  • Firstly, scaling. It’s not clear whether Intel’s tile system will work for gaming. Unlike CPUs, GPUs contain thousands of cores, and getting four 512-core clusters to work in tandem won’t be easy: synchronizing them will be a major challenge, much like the problems that plague SLI and CrossFire. Unlike traditional multi-GPU setups, however, Intel’s EMIB fabric will connect the chiplets directly. Even then, coordinating them and avoiding latency spikes will be a tough job.
  • Second, and the more obvious question mark, is the TDP. If Intel’s 512 EU GPU performs in the vicinity of the NVIDIA GeForce RTX 2060 while consuming 500W, that’s going to be a major problem: the 2060 draws just 150-160W. It’ll be hard to persuade gamers and enterprise clients alike to buy an Xe graphics card that draws 500W while performing like a sub-200W GPU. This has popped up in earlier rumors as well, and going by this document, the efficiency of the Xe GPUs is indeed quite poor (the sketch after this list puts a rough number on the gap).
  • Thirdly, the thermals. As the document indicates, you’ll need an AIO cooler to keep the 500W GPU from exploding. Those are pricey, and from what we know, Intel hasn’t partnered with any AIB companies to manufacture its cards. A reference, blower-style heatsink just isn’t going to cut it; you’ll need a full-fledged liquid cooling solution.
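
As promised above, here’s a rough performance-per-watt sketch assuming the 512 EU part really does land at RTX 2060-class performance. Normalizing both cards to the same performance level is my simplification; the 160W figure is the 2060’s official power rating, and 500W is the leaked number.

```python
# Rough perf-per-watt comparison, assuming the 512 EU Xe part matches
# the RTX 2060 in raw performance (both normalized to 1.0).

def perf_per_watt(relative_perf: float, watts: float) -> float:
    """Normalized performance units delivered per watt."""
    return relative_perf / watts

rtx_2060 = perf_per_watt(relative_perf=1.0, watts=160)  # official power rating
xe_512eu = perf_per_watt(relative_perf=1.0, watts=500)  # leaked board power

print(f"RTX 2060:  {rtx_2060:.4f} perf/W")
print(f"Xe 512 EU: {xe_512eu:.4f} perf/W")
print(f"Efficiency gap: {rtx_2060 / xe_512eu:.1f}x in NVIDIA's favor")
```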

As per DT, this document is a year old, so there’s a good chance these designs have changed significantly or even been scrapped. Either way, things don’t look good for Intel’s Graphics Odyssey. At the very least, Intel needs a 250W Xe GPU that performs in the same ballpark as the GeForce RTX 2060, not a 500W monstrosity that can’t be cooled by conventional means.

Source: DT

Areej Syed
