Monday, 28 December 2015

2016: A Pivotal Year For AMD, Nvidia, PC Gaming And VR

2015 is quickly coming to a close, and while most of us are still enjoying our yearly holiday slumber, AMD, Nvidia, Intel and many other tech giants have been busy preparing for one of the world's largest showcases of the latest and greatest in tech. CES, the International Consumer Electronics Show in Las Vegas, is how the industry likes to kick off each year. Learning about what's coming is also how we love to greet the new year here at WCCFTech, and what a year 2016 is shaping up to be!
Next year is set to be one of the most pivotal years for PC gaming and technology since the beginning of the decade, with all three major PC hardware makers planning to introduce truly game-changing lineups of next generation hardware, and all the big names in the virtual reality sphere gearing up to finally launch to the masses what they've been working on for years. If you're a hardware enthusiast, you'll find more than enough compelling reasons to get over the 2015 holidays coming to an end.

2016 The Year Of Virtual Reality

2016 will see the release of the first wave of big name VR HMDs – Head Mounted Displays – in almost two decades. In the first quarter of 2016, Oculus will release the consumer version of its Rift VR headset to the world. Shortly afterwards, in April, HTC will launch its own Vive headset, developed in collaboration with Valve, owner of the world's largest PC games distribution platform. Sony also plans to introduce its own home-grown VR headset later in the year for its living room game console, the PlayStation 4.
What makes this the tech industry's first serious wave of VR HMDs in almost two decades is that the last major attempt came back in 1995 with Nintendo's Virtual Boy. All the reasons VR failed to take off in the 90s can be summed up in that one device. It was the first serious attempt at breaking into the VR market, but because we're talking about the 90s here, we're talking about displays with nowhere near enough pixel density or fast enough refresh rates to immerse users in the experience. And let us not forget that the processors of the time were far too underpowered to even contemplate conjuring up a visual experience that resembled anything even remotely close to reality.
However, all of the setbacks that erased VR from the market as quickly as it was introduced 20 years ago are very much behind us. We now have unbelievably rich and vivid display technology, largely thanks to the fiercely competitive smartphone market that emerged only a handful of years ago and is led by the industry's largest and most influential tech corporations. Additionally, industry leaders in visual computing like Nvidia and AMD are approaching the point where they finally have GPUs – graphics processing units – with enough computing horsepower to drive a visual experience that's almost indistinguishable from reality, and that point just so happens to arrive in 2016. So it seems all the stars are finally aligning to make this 20-year-old dream a reality next year.

2016 The Year Of Next Generation Gaming Experiences

Next year is not all about VR either; it's also when traditional PC gaming takes a huge step forward. The PC industry moves quickly, and new generations of PC hardware launch at a much brisker pace than has traditionally been the case with game consoles. Next year, however, hardware companies are finally reaching several crucial milestones in PC hardware and display technology, which have always been the number one drivers of complexity and visual fidelity in PC games.
Let’s first talk about the important milestones in display technology coming next year.
High Dynamic Range Displays And DisplayPort 1.3 To Bring Lifelike Imagery & Higher Refresh Rates
HDR is all about delivering richer, more vibrant and more lifelike colors to the screen. There are very real limitations to the maximum range of colors and the level of luminance – a measure of brightness – that the vast majority of today's displays can deliver, save for a few professional-grade monitors at the very top end of the spectrum. In other words, the vast majority of monitors out there today struggle to mimic real-world imagery because the limited range of colors and the maximum luminance they can deliver simply fail to match those of the world around us.
Think of it this way: no matter how good a camera you have, if you take a photo of the scene outside your window, the vast majority of displays will fall short of showing that image as vividly as you'd see it by simply looking out the window. In a sense, then, HDR displays aim to open a window into the world. The goal is to reach a point where the difference between looking out the window yourself and looking at a picture of the same scene on your screen is so minuscule that our eyes can no longer readily perceive it.
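To put some rough numbers on that gap, here's a minimal sketch, not any vendor's pipeline, that simply clips a few real-world brightness levels to what a typical SDR panel and an HDR-class panel can emit; the 100-nit and 1,000-nit peaks and the scene values are ballpark assumptions chosen purely for illustration.

```cpp
// Rough illustration: real-world luminance values far exceed what a typical
// SDR monitor can emit, so everything above the panel's peak gets clipped.
// All numbers below are ballpark figures chosen for illustration only.
#include <algorithm>
#include <cstdio>

int main() {
    const double sdr_peak_nits = 100.0;    // common luminance target for SDR content
    const double hdr_peak_nits = 1000.0;   // a figure often quoted for early HDR panels

    // A handful of approximate real-world luminance levels, in nits.
    const double scene[] = {0.05, 5.0, 250.0, 2000.0, 10000.0};

    for (double nits : scene) {
        double on_sdr = std::min(nits, sdr_peak_nits);
        double on_hdr = std::min(nits, hdr_peak_nits);
        std::printf("scene %8.2f nits -> SDR shows %7.2f, HDR shows %7.2f\n",
                    nits, on_sdr, on_hdr);
    }
    return 0;
}
```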
While HDR-capable displays have so far been reserved for professionals and the very highest end of the monitor spectrum, they're finally making their way to the masses starting next year. This shift will influence all visual content consumption, be it looking at photos, watching movies or playing video games. But let's not forget that the push for higher resolutions and high dynamic range monitors requires a new generation of interfaces with sufficient bandwidth to support it all.
DisplayPort 1.3 is coming to address that need. It isn't a new standard by any means, having been announced in 2014, but display makers haven't felt the need to incorporate it until now, which is why 2016 is going to be the year monitor makers finally start adding the interface to their product lines. DP 1.3 pushes the available bandwidth to 32.4 Gbps, enough to drive higher resolution displays at even higher refresh rates: 5K at 60Hz over a single cable and up to 120Hz on 4K displays.
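As a quick sanity check on those figures, here's a back-of-envelope sketch of the uncompressed pixel-data rates such modes require versus DP 1.3's effective bandwidth; it assumes roughly 25.92 Gbps is usable after 8b/10b encoding and ignores blanking overhead, so real requirements sit a little higher.

```cpp
// Back-of-envelope check of the DisplayPort 1.3 numbers quoted above.
// DP 1.3's raw link rate is 32.4 Gbps; after 8b/10b encoding roughly
// 25.92 Gbps is left for pixel data. Blanking intervals are ignored here.
#include <cstdio>

// Uncompressed pixel-data rate in Gbps for a given mode.
double gbps(int width, int height, int hz, int bits_per_pixel) {
    return static_cast<double>(width) * height * hz * bits_per_pixel / 1e9;
}

int main() {
    const double dp13_effective_gbps = 25.92;  // after 8b/10b overhead

    struct Mode { const char* name; int w, h, hz, bpp; } modes[] = {
        {"4K @ 120 Hz, 24-bit", 3840, 2160, 120, 24},
        {"5K @  60 Hz, 24-bit", 5120, 2880,  60, 24},
    };

    for (const auto& m : modes) {
        double need = gbps(m.w, m.h, m.hz, m.bpp);
        std::printf("%s needs ~%.1f Gbps -> %s within DP 1.3's %.2f Gbps\n",
                    m.name, need,
                    need <= dp13_effective_gbps ? "fits" : "does not fit",
                    dp13_effective_gbps);
    }
    return 0;
}
```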
That covers one end of the hardware spectrum, the part that deals with what happens on the screen, but we haven't yet talked about the upcoming revolution behind the screen: the side responsible for doing the computation necessary to generate all of those pixels. And that's where graphics processing technology, spearheaded by Nvidia and AMD, comes in.

2016 The Year Of Next Generation Graphics Processors – Nvidia’s Pascal & AMD’s Arctic Islands

Next year will be the first time the market sees the launch of truly next generation GPUs since the introduction of AMD's GCN and Nvidia's Kepler GPUs back in 2012. This has been by far the longest stretch spent on a single process node that we've witnessed in the GPU space. Process nodes dictate the progression of what has become known as Moore's law, the observation that integrated circuits of a given size should double in complexity (transistor count) roughly every two years. Sadly, we haven't seen that happen for four years now, but that's finally changing next year.
In 2016, both AMD and Nvidia are going to introduce their next generation Arctic Islands and Pascal GPU families. Both families of graphics chips will be built on 14nm/16nm process nodes rather than the aging 28nm manufacturing process. In addition to the smaller feature sizes, the 14nm and 16nm technologies from AMD's and Nvidia's manufacturing partners feature FinFET transistors, a groundbreaking manufacturing innovation first introduced by Intel in 2012 that boosts chip speeds while reducing power consumption.
Advances in graphics architecture aside, the jump from 28nm to 14/16nm alone gives engineers roughly double the number of transistors to work with at any given chip size, which means a de facto doubling of graphics processor performance next year from both companies. It is at this point that PC gaming at 4K and 60 frames per second becomes not only feasible with a single GPU but genuinely accessible to PC gamers.
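For a sense of scale, here's a trivial back-of-envelope sketch of what that doubling means for the transistor budget at a fixed die size; the 600 mm², 8-billion-transistor starting point is a hypothetical 28nm flagship-class GPU, not any specific product.

```cpp
// Back-of-envelope illustration of the "double the transistors at the same
// die size" point above. The starting figures are hypothetical, chosen only
// to represent a large 28nm flagship-class GPU.
#include <cstdio>

int main() {
    const double die_area_mm2     = 600.0;   // hypothetical large GPU die
    const double transistors_28nm = 8.0e9;   // hypothetical 28nm transistor count
    const double density_gain     = 2.0;     // the article's rough 28nm -> 14/16nm figure

    double transistors_new = transistors_28nm * density_gain;
    std::printf("28nm    die, %.0f mm^2: ~%.1f billion transistors\n",
                die_area_mm2, transistors_28nm / 1e9);
    std::printf("14/16nm die, %.0f mm^2: ~%.1f billion transistors (assuming ~%.0fx density)\n",
                die_area_mm2, transistors_new / 1e9, density_gain);
    return 0;
}
```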
In addition to next generation FinFET nodes, next year’s GPUs will feature a next generation graphics memory technology called HBM, short for High Bandwidth Memory. This revolutionary memory architecture employs the concept of stacking chips on top of each other to maximize power efficiency, minimize the chips’ footprint and boost performance.
The Pascal and Arctic Islands graphics architectures will power everything from next generation mobile devices to gaming desktops, laptops and professional applications. There's also strong evidence to suggest that Nintendo's next generation gaming device, the "Nintendo NX", coming next year will be powered by an AMD semi-custom SoC featuring the Arctic Islands GCN architecture. And AMD's push to extend support for its FreeSync technology, which lets panels display frames at a variable refresh rate to eliminate tearing and lag in video games, to TVs via HDMI has no doubt been directly influenced by Nintendo's upcoming device.
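For those curious how variable refresh actually helps, here's a toy sketch, not AMD's FreeSync implementation, comparing when frames would appear on a fixed 60Hz panel versus one that refreshes whenever a frame is ready; the frame completion times are invented for illustration.

```cpp
// Toy illustration of why variable refresh smooths frame delivery. On a fixed
// 60 Hz panel, a frame that misses a ~16.7 ms refresh boundary waits for the
// next one; with variable refresh the panel refreshes when the frame is ready
// (within its supported range). Frame times below are made up.
#include <cmath>
#include <cstdio>

int main() {
    const double refresh_interval_ms = 1000.0 / 60.0;            // fixed 60 Hz
    const double frame_ready_ms[] = {15.0, 34.0, 52.0, 70.0};    // hypothetical render completions

    for (double ready : frame_ready_ms) {
        // Fixed refresh: shown at the next refresh boundary after the frame is done.
        double fixed_display = std::ceil(ready / refresh_interval_ms) * refresh_interval_ms;
        // Variable refresh: shown as soon as the frame is done.
        double vrr_display = ready;
        std::printf("frame ready at %5.1f ms -> fixed 60 Hz shows it at %5.1f ms, VRR at %5.1f ms\n",
                    ready, fixed_display, vrr_display);
    }
    return 0;
}
```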

2016 The Year Of DirectX 12 Games And High Performance Multicore Intel & AMD CPUs

We caught a few glimpses of Microsoft's DirectX 12 low level graphics API in action this year, but 2016 is going to be the year we finally see DirectX 12 enabled games come out. Low level graphics APIs like DirectX 12, Vulkan and AMD's Mantle before them set out to address many of the issues game developers faced with older APIs like DX11 and OpenGL, and one major issue they have successfully addressed relates to the CPU.

While graphics processing units do the vast majority of the computation behind the visuals of any modern game, there's plenty of work that GPUs are inherently inefficient at. GPUs are vast parallel engines that excel at churning through huge amounts of computational work simultaneously. If you need to process colors for millions of pixels tens of times every second, a parallel engine is simply perfect for the job, which is why GPUs are the most widely known and used parallel processors to date.
CPUs, by contrast, are made up of only a handful of powerful cores rather than the thousands of small cores found in GPUs. They differ from GPUs in several key areas: generally they devote far more resources to instruction decode and branch prediction because they tend to deal with more complex, branchy code. GPUs, on the other hand, are designed with a heavy emphasis on execution resources, because they deal with relatively simpler code and massively parallel data. The weight therefore falls on the execution engines rather than on a front end that has to wrestle with the complexity of serial code.
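To make that split concrete, here's a small illustrative sketch that uses ordinary CPU threads purely as a stand-in: the per-pixel brightness pass divides cleanly across workers, the kind of work a GPU's thousands of lanes devour, while the branchy, order-dependent loop after it cannot be handed out to extra cores at all.

```cpp
// Rough illustration of parallel vs. serial work, using CPU threads only as
// a stand-in for the massively parallel lanes of a GPU.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int width = 1920, height = 1080;
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height, 100);

    // Data-parallel work: every pixel is independent, so slice the image
    // across however many hardware threads are available.
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t) {
        pool.emplace_back([&, t] {
            for (size_t i = t; i < pixels.size(); i += workers) {
                int brighter = pixels[i] * 3 / 2;               // simple brightness boost
                pixels[i] = static_cast<unsigned char>(brighter > 255 ? 255 : brighter);
            }
        });
    }
    for (auto& th : pool) th.join();

    // Serial, branchy work: each step depends on the previous one,
    // so there is nothing to hand out to extra cores.
    int state = 0;
    for (int frame = 0; frame < 1000; ++frame)
        state = (state % 3 == 0) ? state + 7 : state - 1;

    std::printf("first pixel after brightness pass: %d, final game state: %d\n",
                pixels[0], state);
    return 0;
}
```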
This split is why, in applications like games, which mix parallel and serial code, having CPUs and GPUs work together in harmony is essential. Much of DirectX 12's improvement comes from refining that relationship, especially when CPUs with high core counts are in use.
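As a rough sketch of the pattern DirectX 12 enables, the example below has several threads record their own command lists in parallel before a single submission to the queue; the CommandList and Queue types are simplified stand-ins, not the real Direct3D 12 interfaces.

```cpp
// Sketch of the threading pattern low level APIs enable: each CPU core records
// its own command list in parallel, and the lists are then submitted to the
// GPU queue in one go. These types are stand-ins, not real D3D12 interfaces.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct CommandList {                 // stand-in for a real API command list
    std::vector<std::string> commands;
    void record(const std::string& cmd) { commands.push_back(cmd); }
};

struct Queue {                       // stand-in for the GPU's command queue
    void execute(const std::vector<CommandList>& lists) {
        size_t total = 0;
        for (const auto& l : lists) total += l.commands.size();
        std::printf("submitted %zu command lists, %zu commands total\n",
                    lists.size(), total);
    }
};

int main() {
    const unsigned workers = 4;                  // e.g. one recording thread per core
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> threads;

    // Each thread independently records draw commands for its slice of the scene.
    for (unsigned t = 0; t < workers; ++t) {
        threads.emplace_back([&, t] {
            for (int draw = 0; draw < 100; ++draw)
                lists[t].record("draw object " + std::to_string(t * 100 + draw));
        });
    }
    for (auto& th : threads) th.join();

    // Submission itself stays on one thread, but the expensive recording
    // work above was spread across all available cores.
    Queue().execute(lists);
    return 0;
}
```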
Multi-core CPUs with higher core counts are therefore going to play a much more prominent role in games, which brings us to two other important hardware launches coming next year.

AMD’s Zen And Intel’s Broadwell-E

Intel is launching its first ever 10-core desktop CPU next year as the flagship of the Broadwell-E family. What's even more interesting is that it took Intel four years and three generations of products to go from six cores to eight cores at the high end after it introduced its very first six-core desktop "Gulftown" CPUs in 2010. This time, however, Intel is jumping straight from 8 cores with Haswell-E to 10 cores with Broadwell-E after just one generation. One could argue that this haste has been prompted by the next generation of low level APIs, or alternatively that it's a knee-jerk reaction to AMD, Intel's only competitor in this space, finally re-entering the high-end CPU segment next year.
Zen is undoubtedly one of AMD's most crucial products next year, marking the company's re-entry into the high-end desktop CPU segment with a brand new, clean-slate microarchitecture after roughly half a decade of absence. Zen's design began in 2012 under the prolific CPU architect Jim Keller, the same man responsible for designing the most successful products in AMD's history, the Athlon XP and Athlon 64 processors.
(AMD slide: 40% IPC increase targeted for Zen, with Zen+ to follow)
Zen is the most important CPU architecture for the company in decades and represents one of the most significant architectural performance leaps in years. The company is planning to introduce Zen to the high-end desktop segment next year with high core count SKUs, support for DDR4 and a brand new set of motherboards based on the upcoming AM4 socket.
With major virtual reality launches next year, in addition to Pascal, Arctic Islands, Broadwell-E, Zen and DirectX 12 games, we can't think of a more exciting year for technology and gamers than 2016. What are you most excited about next year? Please share your thoughts in the comments section below.
