AMD finally announced its Epyc Milan-X processors at its Accelerated Data Center event earlier today. Built on the same foundation as the existing Milan lineup, these SKUs simply add another layer of L3 cache atop the existing chiplets, tripling the overall cache from 256MB to a rather incredible 768MB (0.75GB). Other than the cache, little else changes: we’re still looking at up to 64 cores and 128 threads, with a maximum TDP of 280W.
To connect the 3D stacked “V-Cache”, AMD is leveraging direct copper-to-copper bonding to extract maximum performance and density while also reducing the power consumption of the added cache. The Milan-X family will feature a total of four SKUs, ranging from 16 cores (32 threads) to 64 cores (128 threads). Every SKU gets the full 768MB of cache, with each of the eight CCDs receiving an additional 64MB of SRAM stacked on top of its existing 32MB, for 96MB per chiplet.
Since we’re essentially looking at a “simple” cache upgrade, the performance gains with Milan-X will vary widely by workload. Applications sensitive to the cache subsystem stand to benefit the most, while purely compute-bound workloads should perform more or less the same as standard Milan.
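To see why cache size alone can swing results this much, consider a minimal C sketch (not AMD’s benchmark; the buffer sizes, stride, and pass count are arbitrary illustrations) that times a memory walk as the working set grows. Once the footprint exceeds the available L3, throughput typically drops sharply, which is exactly the regime where the extra 512MB of V-Cache pays off:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Walk a buffer of `bytes` bytes one cache line at a time and time it.
   While the working set fits inside L3, reads are served from cache;
   once it spills over, they are served from DRAM instead. */
static double walk(size_t bytes, int passes) {
    size_t n = bytes / sizeof(long);
    long *buf = malloc(n * sizeof(long));
    volatile long sink = 0;
    if (!buf) return 0.0;
    for (size_t i = 0; i < n; i++) buf[i] = (long)i;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i < n; i += 8)   /* stride of one 64-byte cache line */
            sink += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    free(buf);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    /* Sweep working sets from 64MB to 1GB; the knee in the timings
       roughly tracks the size of the last-level cache. */
    for (size_t mb = 64; mb <= 1024; mb *= 2)
        printf("%4zu MB: %.3f s\n", mb, walk(mb * 1024 * 1024, 4));
    return 0;
}
```

A compute-bound kernel that streams little data would show almost no knee at all, which is why such workloads see little benefit from Milan-X.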
Microsoft’s Azure platform saw a performance boost of up to 1.78x with Milan-X (compared to standard Milan) in HPC workloads. VM scaling is also quite impressive, with up to a 127x performance uplift and scaling efficiency as high as 200%.
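Scaling efficiency above 100% means superlinear scaling: adding VMs also adds aggregate cache, so the per-VM working set can suddenly fit where it previously did not. A quick sketch of how these figures of merit are computed (the runtimes and VM count below are hypothetical placeholders, not Azure’s published data):

```c
#include <stdio.h>

/* Strong-scaling metrics: speedup = T1 / TN, efficiency = speedup / N.
   Values above 100% are possible when the combined cache of N nodes
   holds a working set that a single node's cache cannot. */
int main(void) {
    double t1 = 1000.0;   /* hypothetical runtime on 1 VM, seconds   */
    double tn = 6.3;      /* hypothetical runtime on n VMs, seconds  */
    int    n  = 128;      /* hypothetical VM count                   */

    double speedup    = t1 / tn;
    double efficiency = speedup / n * 100.0;
    printf("speedup: %.1fx, scaling efficiency: %.0f%%\n", speedup, efficiency);
    return 0;
}
```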
Compared to 32-core 2P Xeon Ice Lake-SP processors, we’re looking at a performance advantage of up to 40% in technical computing workloads such as fluid dynamics and structural analysis.
Milan-X is compatible with the existing SP3 socket and is distinct from Trento, a custom Milan variant meant to be paired with the Instinct MI200 accelerators. It will launch in the first quarter of 2022.