The graphics cards of the future are heading in a very clear direction, and it will arrive sooner or later whether we like it or not. The path closely resembles the one processors have taken in recent years: an MCM design based on chiplets, which in this case almost act as SoCs because of their function. But are these MCM chips within the same GPU similar to SLI or CrossFire, or do they differ from those technologies?
Both AMD and NVIDIA have signaled this way forward, but they are more than likely late in the race for MCM GPUs. Intel seems to be the last player to arrive, yet the first to get ahead in this regard with its Xe graphics cards. Will this technique of tiles (or chiplets) resemble the now defunct SLI and CrossFire?
SLI, CrossFire, Dual GPU and MCM, different concepts that do not lead to the same performance
It must be clear that these four concepts are not the same: they are not implemented in the same way and, above all, their performance differs greatly. For example, SLI and CrossFire have always needed the PCIe bus (or a dedicated interconnection bridge) to exchange data, textures, synchronization signals and so on.
This creates synchronization problems due to differing rendering times between GPUs, which is why in many cases we had the so-called Dual GPUs: two chips on a single PCB, interconnected through it, each with its own dedicated VRAM and resources. This was the most advanced evolution of SLI and CrossFire, but it was complicated in terms of power consumption, and as for cooling, a real challenge for engineers.
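Those synchronization problems can be illustrated with a small sketch. SLI and CrossFire typically used alternate-frame rendering (AFR), handing consecutive frames to alternating GPUs; the simulation below (with assumed, illustrative render times, not measurements from any real card) shows how unequal per-GPU render times produce uneven frame-to-frame presentation gaps, the micro-stutter these setups were known for:

```python
# Sketch of alternate-frame rendering (AFR): even frames go to GPU 0,
# odd frames to GPU 1. Frames must be presented in order, so when the
# two GPUs render at different speeds, presentation gaps become uneven
# (micro-stutter) even though average throughput looks healthy.

def afr_present_times(render_ms_gpu0, render_ms_gpu1, frames):
    """Return presentation timestamps (ms) for frames alternating between GPUs."""
    times = []
    next_free = [0.0, 0.0]   # when each GPU can start its next frame
    last_present = 0.0
    for frame in range(frames):
        gpu = frame % 2
        start = next_free[gpu]
        done = start + (render_ms_gpu0 if gpu == 0 else render_ms_gpu1)
        next_free[gpu] = done
        present = max(done, last_present)  # in-order presentation
        times.append(present)
        last_present = present
    return times

times = afr_present_times(render_ms_gpu0=10.0, render_ms_gpu1=16.0, frames=6)
gaps = [round(b - a, 1) for a, b in zip(times, times[1:])]
print(gaps)  # uneven gaps between presented frames
```

Running this yields gaps such as `[6.0, 4.0, 12.0, 0.0, 16.0]`: the average frame rate looks fine, but the pacing is erratic, which is exactly what drivers had to fight with frame-pacing logic.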
Now MCM arrives, and with it come other challenges to overcome, none of them minor. We are talking about placing different chips on the same substrate: not like a Dual GPU, where each chip was soldered directly to the PCB, but a common interposer shared by the chips.
Size matters, and so do power consumption and data buses
One of the challenges MCM-based GPUs face is precisely the overall package size that results from joining the SoCs on the interposer. Intel has shown the way, with a single package that includes a curious IHS to dissipate heat by direct contact.
The watts to dissipate are going to be very high in every high-end GPU. Keep in mind that the purpose of the interposer is precisely to reduce the power consumed by the interconnect between the chips, as well as to route the necessary signal wiring between them.
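The interconnect-power argument can be sketched with back-of-the-envelope numbers. The pJ-per-bit figures below are illustrative assumptions in the ballpark of commonly cited estimates, not measurements for any specific product:

```python
# Rough energy cost of moving data over different link types.
# Figures are assumed for illustration, not vendor data.
ENERGY_PJ_PER_BIT = {
    "PCB trace (Dual GPU era)": 10.0,
    "silicon interposer (MCM)": 1.0,
    "on-die wiring (monolithic)": 0.1,
}

bandwidth_gb_s = 500  # assumed chip-to-chip traffic in GB/s

for link, pj_per_bit in ENERGY_PJ_PER_BIT.items():
    bits_per_s = bandwidth_gb_s * 1e9 * 8
    watts = bits_per_s * pj_per_bit * 1e-12
    print(f"{link}: {watts:.1f} W just to move the data")
```

Under these assumptions, pushing 500 GB/s over PCB traces costs about 40 W, while the same traffic over a silicon interposer costs around 4 W, which is why the interposer is the enabler for MCM GPUs rather than a Dual-GPU-style PCB layout.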
The second problem is latency: games are very sensitive to it, and performance can collapse. Game engines are going to have to be updated, and therefore the APIs will be of great importance for hardware companies.
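Why a few hundred nanoseconds matter becomes clear with some arithmetic. The sketch below uses assumed, illustrative numbers (render time, number of cross-chiplet synchronizations, added hop latency) to show how many small dependent round trips across the chiplet boundary can blow a 60 FPS frame budget:

```python
# Illustrative frame-budget arithmetic; all inputs are assumptions.
frame_budget_ms = 16.7       # ~60 FPS target
base_frame_ms = 14.0         # assumed render time on a monolithic GPU
cross_chiplet_syncs = 10000  # assumed dependent round trips per frame
extra_hop_ns = 500           # assumed added latency per hop vs on-die

extra_ms = cross_chiplet_syncs * extra_hop_ns * 1e-6
total_ms = base_frame_ms + extra_ms
print(f"extra latency: {extra_ms:.1f} ms -> frame time {total_ms:.1f} ms")
print("within budget" if total_ms <= frame_budget_ms else "misses 60 FPS")
```

With these numbers the extra latency alone adds 5 ms and the frame misses 60 FPS, which is why engines and APIs need to batch work per chiplet and hide the boundary rather than issuing many fine-grained synchronizations.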
The interconnection of the modules / chiplets / tiles / SoCs must be as transparent as possible to engines, APIs and operating systems, something essential for proper performance. In addition, scalability with this type of GPU will be much higher, since we could even see several types of cores or chips in the near future, creating a new paradigm very similar to the arrival of big.LITTLE.
So no, the four concepts are not even remotely alike. MCM architectures are the future, and whoever implements them successfully first can take the lead, given the advantage this entails in every respect.