Blue, red and green are the three primary colors of light and the only three colors that have corresponding photoreceptors in our retinas. So it’s almost poetic that Intel, AMD and Nvidia are the three primary component makers in the world of consumer PCs and PC gaming.
What’s even more fascinating is the realization that while these three companies are quite different, they share a singular vision for the future of this industry. In this editorial we’re going to explore what these three companies have been doing to bring that vision to life. To do that, we have to understand the philosophies each of them imparts to the products it makes, and how those decisions came to be.
To Understand Where We’re Going We Must Look At Where We’ve Been
Let’s start from the very beginning, with the first consumer chip to integrate a CPU (central processing unit) with a GPU (graphics processing unit). In 2010 Intel released the first processor to feature integrated graphics, but just like the company’s first dual- and quad-core efforts, the resulting product was a multi-chip module, or MCM for short, in which multiple discrete chips are connected together and sold as one. Intel called this design Clarkdale, and it was not a fully integrated monolithic solution: the CPU and graphics portions were manufactured separately as two discrete chips that were later assembled onto a single package.
A truly integrated solution did not come from Intel until Sandy Bridge was released a year later in January 2011.
In that very same month AMD released its first ever APU, a processor with an integrated graphics solution, code-named Brazos. Unlike Intel’s first effort with Clarkdale, both of these chips were truly integrated solutions, in which the graphics engine and the CPU shared a single die of silicon.
AMD had begun working toward this in 2006, when the company made the decision to buy ATi with the vision of fusing GPU cores and CPU cores into a single piece of silicon called the Accelerated Processing Unit, or APU for short. AMD called this effort “Fusion,” and it was the company’s primary motivation for acquiring ATi.
So in a sense Brazos and Sandy Bridge marked the beginning of the “Fusion” era for the industry.
Fast forward to today and you’ll find that almost all processors have some sort of integrated graphics solution. Intel’s entire range of consumer processors has integrated graphics, except for the niche that socket LGA 2011 addresses. All of the processors AMD has launched in the past three years have been APUs, and thus feature integrated graphics. All mobile processors, spanning notebooks to handheld devices, also feature integrated graphics. As has become strikingly clear, this is the direction the entire industry is heading, but to what purpose?
Evolution Of Intel’s And AMD’s Integrated Graphics
Today, integrated graphics processors take up a considerable percentage of a chip’s real estate.
The GPU in Kaveri, AMD’s most recent desktop APU, takes up 47% of the chip’s real estate, almost half the die.
GPU real estate has been growing consistently for the past several years. For instance, AMD doubled the GPU’s share of the die from Llano, the company’s first-generation mainstream APU, to Kaveri.
We see a very similar trend with Intel, which has increased the GPU portion of the die with each generation. Going from Sandy Bridge, Intel’s first processor with integrated graphics built on a single monolithic die, to mainstream GT2 Haswell parts, the GPU real estate nearly doubled. And with Intel’s “Iris” GT3 graphics parts it quadrupled compared to Sandy Bridge.
This trend continued with Broadwell and Skylake, both released in 2015. Mainstream quad-core variants of both generations now dedicate half of the chip’s real estate to the integrated graphics engine, while mainstream dual-core variants dedicate nearly two thirds of it.
Integrated graphics processors are no longer responsible just for graphics; they can accelerate a plethora of workloads using OpenCL and other compute languages. Intel’s Quick Sync is a great example of how an iGPU can be used for something other than graphics processing, and AMD has also worked on enabling integrated GPU acceleration for a variety of workloads such as photo and video editing, stock analytics and media conversion.
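To make that a bit more concrete, here is a minimal, hypothetical OpenCL C kernel of the kind such frameworks dispatch to an iGPU: every work-item scales one pixel of an image in parallel. The kernel name, the gain parameter and the omission of all host-side setup (platform, context, command queue, buffers) are illustrative assumptions for this sketch, not anything tied to a specific vendor’s implementation.

/* Hypothetical sketch: brighten an image on the iGPU.
 * Each work-item handles exactly one pixel, so the whole image
 * is processed in parallel across the GPU's execution units. */
__kernel void brighten(__global uchar *pixels,
                       const float gain,
                       const uint count)
{
    size_t i = get_global_id(0);            /* one work-item per pixel */
    if (i < count) {
        float v = (float)pixels[i] * gain;   /* scale the pixel value   */
        pixels[i] = (uchar)clamp(v, 0.0f, 255.0f);
    }
}

The point is simply that the same silicon that draws the desktop can churn through this kind of data-parallel work whenever the CPU hands it off.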
We’ve witnessed a significant increase in the research and development resources being poured into this kind of GPU acceleration over the past few years. Both Intel and AMD have continued to invest in the field and in the graphics engines they build into their chips. However, this focus on integrated visualization technologies and CPU+GPU acceleration is relatively new; the fact that the earliest consumer products to feature truly integrated graphics engines arrived only in 2011, four years ago, highlights this.
For the past several decades engineers relied primarily on Moore’s Law to get better performance out of their designs. Each year engineers were handed more transistors to play with, transistors that were faster and consumed less power. But in recent years Moore’s Law has slowed down considerably.
Power consumption continued to decline with each new process node, but frequency stopped scaling the way it used to, and it became exponentially more difficult to squeeze greater performance out of a single serial processing unit (the CPU core). So engineers resorted to adding more CPU cores instead. However, the more cores they added, the harder it became to extract performance from these chips: it is very challenging to write code that distributes a workload across an ever-increasing number of cores, and the more cores you pile up, the more challenging it gets. That’s all besides the fact that some workloads simply can’t run in parallel.
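For a sense of why that is hard in practice, here is a minimal sketch in C using POSIX threads that manually splits a trivial array sum across four CPU threads. The thread count, array size and even partitioning are illustrative assumptions; the takeaway is that even for a problem this simple, the programmer has to partition the data, launch and join the threads, and merge the partial results by hand, and real workloads rarely divide this neatly.

/* Illustrative sketch: summing an array across several CPU threads. */
#include <pthread.h>
#include <stdio.h>

#define N        1000000   /* assumed problem size      */
#define NTHREADS 4         /* assumed number of threads */

static double data[N];
static double partial[NTHREADS];

static void *sum_chunk(void *arg) {
    long t = (long)arg;                       /* which thread am I?      */
    long begin = t * (N / NTHREADS);
    long end   = (t == NTHREADS - 1) ? N : begin + (N / NTHREADS);
    double s = 0.0;
    for (long i = begin; i < end; i++)        /* sum this thread's slice */
        s += data[i];
    partial[t] = s;                           /* write to a private slot */
    return NULL;
}

int main(void) {
    pthread_t threads[NTHREADS];

    for (long i = 0; i < N; i++)              /* fill with dummy values  */
        data[i] = 1.0;

    for (long t = 0; t < NTHREADS; t++)       /* fan the work out        */
        pthread_create(&threads[t], NULL, sum_chunk, (void *)t);

    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {     /* wait, then merge        */
        pthread_join(threads[t], NULL);
        total += partial[t];
    }
    printf("sum = %f\n", total);
    return 0;
}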
What the computing world ended up with were multi-core CPUs that could not accelerate every type of code the way their single-core predecessors had for decades. All of a sudden, designers’ jobs got a lot harder: simply keep adding cores and you end up with wasted transistors, inflating the cost of manufacturing the processor without any tangible gain. The industry hit a brick wall trying to improve CPU performance without blowing power budgets or resorting to extremely complex and difficult design methods. If the industry was to continue its journey toward faster processors, the processor design game had to change. What the world needed was a technology that did not rely primarily on frequency or complexity to scale.