Friday 27 March 2015

10TB SSDs

SSDs and other flash memory devices will soon get cheaper and larger thanks to big announcements from Toshiba and Intel. Both companies revealed new "3D NAND" memory chips that stack cells in layers to pack in more data, unlike the planar, single-layer chips in use today. Toshiba says it has created the world's first 48-layer NAND, yielding a 16GB chip with improved speed and reliability. The Japanese company invented flash memory in the first place and makes the smallest NAND cells in the world at 15nm. Toshiba is now giving manufacturers engineering samples, but products using the new chips won't arrive for another year or so.
At the same time, Intel and partner Micron revealed that they're now manufacturing their own 32-layer NAND chips, which should also arrive in SSDs in around a year. They're sampling even larger-capacity NAND than Toshiba, with 32GB chips available now and a 48GB version coming soon. Micron said the chips could be used to build gum-stick-sized M.2 PCIe SSDs of up to 3.5TB and 2.5-inch SSDs with 10TB of capacity -- on par with the latest hard drives. All of this means that Toshiba, Intel/Micron and companies using their chips will soon bring some extra competition to Samsung, which has been shipping 3D NAND for much longer. The result will be nothing but good for consumers: higher-capacity, cheaper SSDs that will make spinning hard disks sleep with one eye open.
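For a rough sense of what those die capacities imply, here's a quick back-of-the-envelope sketch in Python. The 16-die-per-package stacking figure is our own assumption for illustration, not something Intel or Micron have confirmed:

```python
# Back-of-the-envelope check of the capacity claims, using decimal units
# as drive makers do. The 16-die-per-package stacking is an assumption.
DIE_GB = 48              # 48 GB (384 Gbit) 3D NAND die
DIES_PER_PACKAGE = 16    # assumed stack height

for label, target_gb in [("3.5TB M.2", 3500), ("10TB 2.5-inch", 10000)]:
    dies = target_gb / DIE_GB
    packages = dies / DIES_PER_PACKAGE
    print(f"{label}: ~{dies:.0f} dies, ~{packages:.0f} sixteen-die packages")
# ~73 dies (~5 packages) for 3.5TB, ~208 dies (~13 packages) for 10TB
```

Roughly a dozen stacked packages for a 10TB 2.5-inch drive is entirely plausible on a standard SSD PCB, which is why the claim holds up.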

Nexus 5 And Nexus 6 Receive Latest Lollipop Release Android 5.1 Factory Images

There is news for Nexus 5 and Nexus 6 users concerning the latest release of their operating system. We reported on these update builds last week, and they have popped up again: both devices, the Nexus 5 and Nexus 6, are receiving the latest version of Android Lollipop, 5.1. The good news is that factory images have arrived, so users can download and flash them manually.
The factory images that just arrived for the Nexus 5 and Nexus 6 carry the build number LMY47I, which replaces LMY47E, itself a replacement for LMY47D -- quite a streak of successive builds. Owners of a Nexus device can use a factory image to restore or reset the device to the stock Android OS. Anyone who has modified or customized their Nexus will probably not receive the OTA update, so flashing the factory image manually may be necessary.

Restore Your Nexus Device To Stock Android With Factory Image – Download Here

If you're completely clueless about this update, let's take a quick look at what it's about. What we have here is Android 5.1.0_r3, which contains a handful of bug fixes to improve performance. Some users might have expected this update to fix bigger issues that have been around for a while now, such as the well-known memory leak, but it doesn't tackle anything of the sort; instead, it addresses problems with SIM card detection and handling. There may be other tweaks and changes as well, but nothing particularly exciting.
The main reason to turn to a factory image is to recover from situations where an attempt to customize your Nexus device went completely wrong: flashing the image returns the device to stock Android. In fact, you can also use the factory image to jump to the latest version before the OTA even hits your Nexus. You can download the factory images for the Nexus 5 and Nexus 6 from the links provided.
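For readers comfortable with the command line, here's a minimal sketch of the usual factory-image flashing sequence, driven from Python. The filenames below are placeholders for whatever Google's archive actually contains for your device, fastboot must be installed and on your PATH, and your bootloader must be unlocked; it mirrors the flash-all script Google bundles with its images, but treat it as illustrative rather than a guaranteed procedure:

```python
import subprocess

# Placeholder filenames -- substitute the actual files from the factory
# image archive for your device (hammerhead = Nexus 5, shamu = Nexus 6).
BOOTLOADER_IMG = "bootloader-hammerhead-xxxx.img"   # hypothetical name
RADIO_IMG = "radio-hammerhead-xxxx.img"             # hypothetical name
UPDATE_ZIP = "image-hammerhead-lmy47i.zip"          # hypothetical name

def run(*args):
    """Run a fastboot command and fail loudly if it errors."""
    subprocess.run(["fastboot", *args], check=True)

# Standard factory-image sequence, as in Google's flash-all script:
run("flash", "bootloader", BOOTLOADER_IMG)
run("reboot-bootloader")
run("flash", "radio", RADIO_IMG)
run("reboot-bootloader")
# -w wipes userdata; dropping it keeps your data, but isn't always safe
# across major version jumps.
run("-w", "update", UPDATE_ZIP)
```

Note that the final `-w update` step wipes the device, which is exactly why a factory image is the tool of choice when a customization attempt has gone wrong.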
That is all for now, folks; we will keep you updated on further developments. Tell us about your experience with Android 5.1 and share your thoughts in the comments below.

Intel Knights Landing Further Detailed – 16 GB High-Bandwidth On-Package Memory, 384 GB DDR4 System Memory Support and 8 Billion Transistors

Intel has further detailed its Knights Landing Xeon Phi co-processor, which is built for HPC (High Performance Computing) workloads. The Intel Knights Landing Xeon Phi is designed to take the fight to HPC accelerators from NVIDIA (Tesla) and IBM (POWER8), promising insanely high floating point performance compared to previous generation accelerators.

Intel Knights Landing Xeon Phi Further Detailed – Massive Die With Massive Potential In HPC

The Knights Landing Xeon Phi family will be available in three variants. Unlike the first generation Knights Corner, Knights Landing will come as a co-processor, as a standalone processor, and as a second standalone variant with integrated fabric. This trio will span various SKUs with different core counts and TDPs, but all three variants are expected to deliver double precision floating point performance of over 3 TFlops. With the basics covered, let's get on with the new details as reported by The Platform.
Starting with the specifications, the Knights Landing Xeon Phi will feature up to 8 billion transistors crammed into a massive die. As a MIC (Many Integrated Cores) design, Intel's latest Xeon Phi packs over 60 cores derived from the Silvermont core architecture and built on a 14nm process node. We have seen Silvermont perform on the 22nm node, but Intel has redesigned the core so thoroughly that it is now referred to as the Knights core. The processor remains compatible with Linux and Windows applications while adding much higher AVX floating point throughput.
The chip is divided into several tiles, each a partition dedicated to two cores; every core gets 32 KB + 32 KB of L1 cache (instruction/data) and a pair of custom 512-bit AVX vector units that adopt the same instruction set featured on Xeon chips. That puts the total number of AVX units at 120 on the top-end Xeon Phi accelerator. Unlike the regular Silvermont core, the new Knights cores have been reworked to deliver x86 performance on par with a proper core. Each tile also carries a shared 1 MB L2 cache, adding up to 30 MB of L2 across the die. The chip further has two independent DDR4 memory controllers providing 6-channel memory support (3 channels per controller), which allows up to 384 GB of RAM across the platform, plus a separate memory controller for the on-package memory, detailed in a bit.
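The per-tile arithmetic checks out against the article's round figure of 60 cores; here's a quick sketch (using 60 as an assumption, since the final SKUs may ship with more):

```python
# Sanity check of the tile arithmetic, using the article's round figure
# of 60 cores (the top SKU is rumored to ship with more).
cores = 60
cores_per_tile = 2
avx512_units_per_core = 2
l2_per_tile_mb = 1

tiles = cores // cores_per_tile                  # 30 tiles
total_l2_mb = tiles * l2_per_tile_mb             # 30 MB of L2
total_avx_units = cores * avx512_units_per_core  # 120 vector units

print(tiles, total_l2_mb, total_avx_units)       # 30 30 120
```

Both totals -- 30 MB of L2 and 120 AVX units -- match the figures quoted above.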

16 GB of High-Bandwidth On-Package Memory

The on-die additions Intel has stirred up with Knights Landing are quite interesting. Alongside the integrated Omni-Path fabric for fast interconnect and an I/O controller providing up to 36 PCI-E 3.0 lanes, Intel has managed to put eight high-bandwidth memory banks on the package, which accounts for its massive size. The goal is to deliver fast memory access close to the die itself rather than relying on system memory. This high-performance memory should not be confused with either HBM (High Bandwidth Memory) or HMC (Hybrid Memory Cube). It was in fact created by Intel in collaboration with memory maker Micron Technology and is known as MCDRAM, a variant of the Hybrid Memory Cube design. The top Xeon Phi SKU will feature up to 16 GB of this fast memory, delivering up to 400 GB/s of bandwidth on top of the roughly 90 GB/s supplied by the DDR4 system RAM alone.


Avinash Sodani, chief architect of the Knights Landing chip at Intel, tells The Platform that the DDR4 far memory has about 90 GB/sec of bandwidth, which is on par with a Xeon server chip. Which means it is not enough to keep the 60-plus hungry cores on the Knights Landing die well fed. The eight memory chunks that make up the near memory, what Intel is calling high bandwidth memory or HBM, deliver more than 400 GB/sec of aggregate memory bandwidth, a factor of 4.4 more bandwidth than is coming out of the DDR4 channels. These are approximate memory bandwidth numbers because Intel does not want to reveal the precise numbers and thus help people figure out the clock speeds ahead of the Knights Landing launch. That near memory on the Knights Landing chip delivers about five times the performance on the STREAM memory bandwidth benchmark test than its DDR4 far memory if you turn the near memory off. via ThePlatform
With over 5x the STREAM bandwidth of DDR4 and proper NUMA memory support, the Xeon Phi will offer flexible memory modes, including cache and flat, in a design that is 5 times more energy efficient and 3 times denser than GDDR5 memory. Intel also explained why the new Xeon Phi is restricted to a single socket rather than multiple sockets like other Xeon processors.
“Actually, we did debate that quite a bit, making a two-socket,” says Sodani, “One of the big reasons for not doing it was that given the amount of memory bandwidth we support on the die – we have 400 GB/sec plus of bandwidth – even if you make the thing extremely NUMA aware and even if only five percent of the time you have to snoop on the other side, that itself would be 25 GB/sec worth of snoop and that would swamp any QPI channel.”
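Sodani's argument is easy to verify with a few lines; the QPI figure below (~19.2 GB/s per direction for a 9.6 GT/s link) is our own ballpark assumption, not something from the interview:

```python
# Sodani's argument in numbers: snooping even 5% of 400-500 GB/s of
# near-memory traffic across a socket link would saturate QPI
# (~19.2 GB/s per direction at 9.6 GT/s -- our assumption).
qpi_gbps = 19.2
for near_mem_bw in (400, 500):
    snoop = 0.05 * near_mem_bw
    print(f"{near_mem_bw} GB/s near memory -> {snoop:.0f} GB/s snoop "
          f"traffic vs ~{qpi_gbps} GB/s QPI capacity")
# 400 GB/s -> 20 GB/s, 500 GB/s -> 25 GB/s: either way QPI is swamped.
```

Even at the low end of the "400 GB/sec plus" figure, a mere 5% snoop rate exceeds what a QPI channel can carry, which is why a two-socket Knights Landing was ruled out.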
Intel’s Knights Landing will be available in 2H 2015, with the possibility of 72-core variants as previous rumors indicate. The design could be a major win for Intel in the HPC sector, where it will compete with IBM and NVIDIA, both of which have powerful accelerators available now and coming in the future (NVIDIA has skipped Tesla parts for the Maxwell generation). The next logical step in the evolution of the Xeon Phi line is Knights Hill, expected in 2017 and built on a 10nm process with second generation Intel Omni-Path architecture.

AMD R9 290X As Fast As Titan X in DX12 Enabled 3DMark – 33% Faster Than GTX 980

DX12 support has just been added to 3DMark, and the results are remarkable, with DX12 delivering up to 20 times the draw call throughput of DX11. AMD Radeon graphics cards are showing the most significant gains compared to their Nvidia GeForce GTX counterparts.
In fact, testing shows that with AMD's most recent driver update the R9 290X is not only on par with the $999 12GB GeForce GTX Titan X but maintains a minute edge over the Nvidia flagship. Compared against Nvidia's $549 GTX 980, the R9 290X delivered a 33% performance lead.

AMD R9 290X As Fast As Titan X in DX12 Enabled 3DMark – 33% Faster Than GTX 980

Results from both PCWorld and PCPer are in.
PCWorld’s results show the difference in API draw call handling between the Nvidia GeForce GTX Titan X and the AMD Radeon R9 290X. With DX11 the Titan X manages a maximum of 740 thousand draw calls per second, and a whopping 13.419 million calls with DX12. In comparison, the 290X manages a maximum of 935 thousand draw calls with DX11 -- more than the Titan X to begin with -- and an equally unbelievable 13.474 million calls with DX12. So the 290X slightly edges out the Titan X in DX12 draw call handling. Interestingly enough, AMD's latest drivers have improved DX12 performance to the point where it's actually ahead of Mantle by about 8%.
PCPer’s testing is more traditional, reporting frames per second instead of draw calls per second. The GTX 980 delivers 15.67 FPS in DX12, a massive improvement over the 2.75 FPS it manages in DX11. AMD's R9 290X, meanwhile, delivers 19.12 frames per second in DX12, a significant jump over the GTX 980. Perhaps more interesting is that the next iteration of 3DMark, which will debut with Windows 10, also supports Mantle -- and the 290X delivers 20.88 FPS in the same benchmark running Mantle. That is a 33% performance lead over the GTX 980 running DX12 and a 9% lead over the same 290X running DX12.
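Those percentages follow directly from the published figures; a quick sketch to reproduce them:

```python
# Reproducing the percentages from PCWorld's and PCPer's numbers.
titan_x_dx11, titan_x_dx12 = 740_000, 13_419_000
r290x_dx11, r290x_dx12 = 935_000, 13_474_000
print(f"Titan X DX11 -> DX12: {titan_x_dx12 / titan_x_dx11:.1f}x")  # ~18.1x
print(f"R9 290X DX11 -> DX12: {r290x_dx12 / r290x_dx11:.1f}x")      # ~14.4x

gtx980_dx12_fps, r290x_dx12_fps, r290x_mantle_fps = 15.67, 19.12, 20.88
print(f"290X Mantle vs GTX 980 DX12: {r290x_mantle_fps / gtx980_dx12_fps - 1:.0%}")  # ~33%
print(f"290X Mantle vs 290X DX12:    {r290x_mantle_fps / r290x_dx12_fps - 1:.0%}")   # ~9%
```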
Last but not least, let's take a quick look at performance scaling relative to the number of CPU cores. DX12 shows remarkable scaling as cores are added but hits a wall at 6 cores. Mantle, on the other hand, continues to scale beyond 6 cores, showing extended scaling over DX12 with eight cores. I should point out an interesting phenomenon that occurs when hyperthreading is enabled with eight cores for a total of 16 threads: it seems that neither the GTX 980 from Nvidia nor the R9 290X from AMD makes use of Intel's hyperthreading technology. In fact, the R9 290X shows slight performance degradation with HT on versus off.

I should remind everyone that these are synthetic benchmarks and should not be taken as the be-all, end-all performance metric. Please also keep in mind that the race for faster, more efficient DX12 drivers is still in its infancy and very much follows a leapfrog pattern at this point. Last time we reported on the state of DX12 drivers and performance, Nvidia was ahead; today we see the roles reversed, with AMD taking the lead. However, I'm more interested in testing actual DX12 games when they come out, which is rumored to be around the end of the year. In the meantime, stay tuned, as we may have our own DX12 3DMark results to present soon!
UPDATE:
I thought I might as well delve into a peculiar phenomenon that PCPer.com ran into in their testing: once the DX12 benchmark was run on the lower tier GPUs after they were overclocked, the results were actually far closer to their higher end siblings than would have made sense. Futuremark did not provide an explanation, nor did PCPer attempt one.

What would explain this phenomenon, however, is that the number of draw calls a GPU can handle isn't necessarily tied to the number of shaders available on the GPU. Remember, the DX12 3DMark test in question only involves polygons and textures, no shaders. What determines how many draw calls the chip can handle comes down to both software and hardware.

Note how the 960 and 980 show identical numbers in DX11, while the 290X and 285 also show identical numbers in DX11 -- that explains the software part. Once we remove the API software bottleneck, we expose another bottleneck, or in this case probably multiple other bottlenecks. Here it's most likely down to the memory bandwidth available to the GPU, in addition to the internal GPU hardware and the mechanics by which CPU draw calls are handled.
What could explain the relatively small delta between the R9 285 and the 290X is that the 285 has eight asynchronous compute engines, exactly the same as the 290X. These are engines built directly into the GPU core itself, responsible for handing out tasks to the various compute units / shaders inside the chip. The additional memory bandwidth the R9 290X has available compared to the 285 could be why it still edges out the 285. In the case of the GTX 960, the result doesn't make much sense either: the GPU was overclocked by approximately 30%, yet that resulted in it handling 57% more draw calls.
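To put that oddity in numbers (both percentages are from PCPer's data):

```python
# The GTX 960 oddity in numbers: a ~30% overclock yielded ~57% more
# draw calls, so core clock alone can't be what gates DX12 throughput.
clock_gain, drawcall_gain = 0.30, 0.57
print(f"draw-call gain is {drawcall_gain / clock_gain:.1f}x the clock gain")  # ~1.9x
```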
All in all, as mentioned previously, synthetic benchmarks are rarely an ideal reflection of real world performance. The numbers above should only be taken as a rough estimate of how much lower level APIs such as DX12 and Mantle improve on the established traditional approach that DX11 represents today.

Nvidia GeForce GTX 980 Ti Coming This Summer Featuring 3072 CUDA Cores and 6GB of GDDR5

Nvidia is preparing its GeForce GTX 980 Ti graphics card, featuring the “big Maxwell” GM200 GPU, for introduction this summer. Nvidia has already launched its flagship GTX Titan X graphics card, featuring 12GB of VRAM and a fully unlocked GM200 GPU with 3072 Maxwell CUDA cores.
It has thus been speculated for quite some time that Nvidia would follow that launch with a GTX 980 Ti at a lower price point, still based on the same GM200 GPU, albeit with a few CUDA cores disabled. According to a report from our good friends in Sweden, however, that's not going to happen -- what is going to happen is actually even better.

Nvidia GeForce GTX 980 Ti Coming This Summer Featuring 3072 CUDA Cores and 6GB of GDDR5

According to Sweclockers.com, the card in question will actually feature the full 3072 CUDA cores rather than a cut-down GM200 GPU. In addition, the card will be clocked 10% higher than the Titan X, so it will in fact be faster. Finally, the memory will be halved, from 12GB down to a more reasonable 6GB.

The question of price immediately comes up: how will Nvidia position this product, given that it will offer superior performance to the Titan X based on the alleged specifications, albeit with less memory? Because the Maxwell architecture has very poor double precision performance by design, the Titan X cannot be differentiated from the GTX 980 Ti on double precision grounds, as the GTX Titan Black was from the 780 Ti. The main difference will therefore be the size of the frame buffer. How much of a price difference will the 6GB of additional VRAM justify? We'll have to wait and see.
From a functionality standpoint, 6GB of VRAM will be enough for the majority of current game titles at 4K. There are, however, a few circumstances where memory usage will exceed 6GB and reach up to 7GB, according to Nvidia's own testing as well as independent testing. That could be a downside for potential GTX 980 Ti SLI users at 4K, who may very well run into this problem, giving the Titan X an advantage here -- or even allowing AMD to exploit this weakness with its fabled 8GB R9 390X, which is rumored to go head to head with Nvidia's Titan X in terms of performance.
Apart from the difference in available VRAM, the Titan X is manufactured directly by Nvidia, and Nvidia's add-in-board partners are not allowed to sell modified versions of the card with a different PCB design or cooler out of the box. The case will be very different with the 980 Ti, as Nvidia will allow its AIBs to differentiate themselves with custom PCBs and cooling solutions -- a plus for consumers.
                 | GTX Titan X  | GTX 980 Ti  | GTX 980    | GTX 970    | GTX 960
GPU Architecture | Maxwell      | Maxwell     | Maxwell    | Maxwell    | Maxwell
GPU Name         | GM200        | GM200       | GM204      | GM204      | GM206
Die Size         | 601mm2       | 601mm2      | 398mm2     | 398mm2     | 228mm2
Process          | 28nm         | 28nm        | 28nm       | 28nm       | 28nm
CUDA Cores       | 3072         | 3072        | 2048       | 1664       | 1024
Texture Units    | 192          | 192         | 128        | 104        | 64
Raster Devices   | 96           | 96          | 64         | 64         | 32
Clock Speed      | 1002 MHz     | 1126? MHz   | 1126 MHz   | 1051 MHz   | 1126 MHz
Boost Clock      | 1089 MHz     | 1216? MHz   | 1216 MHz   | 1178 MHz   | 1178 MHz
VRAM             | 12 GB GDDR5  | 6 GB GDDR5  | 4 GB GDDR5 | 4 GB GDDR5 | 2 GB GDDR5
Memory Bus       | 384-bit      | 384-bit     | 256-bit    | 256-bit    | 128-bit
Memory Clock     | 7.0 GHz      | 7.0 GHz     | 7.0 GHz    | 7.0 GHz    | 7.0 GHz
Memory Bandwidth | 336.0 GB/s   | 336.0 GB/s  | 224.0 GB/s | 224.0 GB/s | 112.0 GB/s
TDP              | 250W         | 250W        | 165W       | 145W       | 120W
Power Connectors | 8+6 Pin      | 8+6 Pin     | Two 6-Pin  | Two 6-Pin  | One 6-Pin
Price            | $999 US      | $699?       | $549 US    | $329 US    | $199 US
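As a side note, the memory bandwidth column in the table above follows directly from bus width and effective memory clock; a quick sketch to verify:

```python
# GB/s = (bus width in bits / 8 bits per byte) * effective clock in GT/s
def gddr5_bandwidth(bus_bits, eff_clock_gtps=7.0):
    return bus_bits / 8 * eff_clock_gtps

for name, bus in [("Titan X / 980 Ti", 384), ("GTX 980 / 970", 256), ("GTX 960", 128)]:
    print(f"{name}: {gddr5_bandwidth(bus):.1f} GB/s")
# 336.0, 224.0 and 112.0 GB/s -- matching the table above.
```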

The Witcher 3: Wild Hunt – Special of the Day

Just like every day, we bring you a brand new screenshot for The Witcher 3: Wild Hunt. The image has been revealed by CD Projekt RED.



Below is some information on a few of the creatures you will come across in The Witcher 3: Wild Hunt.


LESHEN

Humans have long been fascinated by the wildwood—living in its vicinity was the source of tales about creatures ferocious and benign, friendly and hostile. As they started to settle deeper and deeper into the forests, respect for the unknown diminished. Lumber was gathered, stone abodes were built. As the pestilence that was humanity grew bigger, so did the forest's and its inhabitants' wrath.

At the heart of the forest lies a secret. In a place born of darkness and primeval nature resides a mighty and terrifying guardian. Immune to human steel, it is believed the Leshen is nature's way of protecting the forest and the animals that live within it from the threat humans started to pose upon their ravaging expansion deeper into the lands.

Along with the animals it commands, the Leshen became a force to be reckoned with. Sometimes worshipped, this creature can heal other woodland animals and summon nekkers or crows to protect the forest. Its attacks are slow, but deadly—be cautious not to get shackled by its underground roots.

GRIFFIN

The griffin looks like a combination of a ferocious cat and a giant bird. It usually inhabits primeval highlands and builds its nests on unreachable mountain summits. The griffin preys on large mammals and, being a highly territorial creature, fiercely defends its hunting grounds. When the first colonists appeared and trade routes expanded, griffins were known to attack settlers and merchants in defense of their territory. Griffins are tough opponents and their strength should not be underestimated. Obstinate and aggressive, they make deadly use of their ability to fly during combat, swooping down on their enemies, knocking them to the ground and ripping them to shreds with their claws and beak.

FOGLET

Foglets can appear wherever thick fog arises: swamplands, mountain passes or the shores of rivers and lakes. If no fog is forthcoming, they can create or summon it themselves. By manipulating fog they can separate travelers from each other, hide trails and deafen noise. Like ghastly glowworms, their bodies emit a pale light they use to lure those lost in the fog towards the ravines, swamps or caves in which they make their lairs. When fighting foglets, a witcher must remain calm and keep his wits about him no matter what. Since foglets can take on immaterial form, a slight shimmer of air or a rustle in the grass might be the only clues a witcher has to their location. Casting the AARD Sign at these beings will cause them to become tangible, giving purchase to blades and other weapons.

WEREWOLF

Neither animal, nor man, the Werewolf takes the worst from both species: the bloodlust and primal nature of a wolf, and the ruthlessness and cruelty of a human. One becomes a Werewolf as a result of a curse thrown by a witch — the change itself is uncontrollable and unwilling. A man who transforms back to his human form can’t usually remember the atrocious acts committed as a werewolf.
Werewolves are creatures of the night and they are especially active during the full moon. They usually go hunting alone, as there rarely is an opponent that can match their strength, agility and fast health regeneration. If a Werewolf actually encounters an enemy that has equivalent strength and can fight a fair fight, the creature can call for wolven reinforcements that will come to its aid. A good way of dealing with werewolves is a sword covered with oil to combat cursed creatures, or a Silver Bomb that will temporarily block the creature’s regenerative power.

The Witcher 3: Wild Hunt will release on May 19, 2015 for the PlayStation 4, Xbox One and PC. We will continue to bring you any new information on the game as soon as it becomes available. Be sure to check out our previous coverage, here and here, for more information on The Witcher 3.

3DMark API Overhead Feature Test – First DirectX 12 Application Test

Futuremark has released the first API Overhead test, with which PC users can measure performance differences between DirectX 12, DirectX 11, AMD's Mantle API and the Khronos Group's upcoming Vulkan API.
3DMark’s API Overhead Feature Test is available now in the latest version of 3DMark Advanced Edition and 3DMark Professional Edition.
Futuremark describes the test as follows:
“The 3DMark API Overhead feature test measures API performance by making a steadily increasing number of draw calls. The result of the test is the maximum number of draw calls per second achieved by each API before the frame rate drops below 30 fps.”

Games make thousands of draw calls per frame, but each one creates performance-limiting overhead for the CPU. APIs with less overhead can handle more draw calls and produce richer visuals. The 3DMark API Overhead feature test is the world’s first independent test for comparing the performance of DirectX 12, Mantle, and DirectX 11. See how many draw calls your PC can handle with each API before the frame rate drops below 30 fps.
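Futuremark has not published the test's internals, but the description above suggests a simple measurement loop along these lines. This is purely our illustrative sketch, with render_frame standing in for the actual benchmark renderer:

```python
# Illustrative sketch of the test's logic, not Futuremark's actual code.
# render_frame(n) is a hypothetical callback that draws a frame with n
# draw calls and returns the measured frame rate.
def api_overhead_score(render_frame, start=1_000, step=1_000):
    draw_calls, best = start, 0
    while True:
        fps = render_frame(draw_calls)
        if fps < 30:                # the test ends once fps drops below 30
            return best             # max draw calls per second achieved
        best = draw_calls * fps     # calls per frame * frames per second
        draw_calls += step
```

A lower-overhead API keeps the frame rate above 30 fps for longer as the draw-call count climbs, which is exactly why DX12 and Mantle post such dramatically higher scores than DX11.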
The application requires the most recent version of the Windows 10 Technical Preview (build 10041 or later), 4GB of system memory, and DirectX feature level 11_0 compatible hardware with at least 1GB of graphics memory.

Nvidia CUDA 7 REL Update Disables Double Precision on the Geforce GTX Titan Z

Double precision is one of the major selling points of the dual GK110 based GeForce GTX Titan Z. However, reports suggest that the new CUDA 7 REL update cripples double precision to a surprising extent (thank you for the tip, Dave Oh), at least on the GTX Titan Z. Where the card usually reads as 1/3 DP rate, after the update it falls to somewhere around 1/19. For those of you who rely on FP64, this could seriously break workflows, and we would urge such readers to stick with the current version until we have a lock on this possible bug.
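To put those ratios in throughput terms, here's a quick sketch; the roughly 8.1 TFLOPS single-precision figure is Nvidia's approximate Titan Z spec, used here only as a ballpark to scale the reported ratios:

```python
# Rough throughput impact of the reported ratio change. The ~8.1 TFLOPS
# single-precision figure is Nvidia's approximate Titan Z spec, used here
# only to scale the 1/3 and 1/19 ratios -- treat it as a ballpark.
sp_tflops = 8.1
dp_before = sp_tflops / 3    # ~2.7 TFLOPS at the advertised 1/3 rate
dp_after  = sp_tflops / 19   # ~0.43 TFLOPS at the reported 1/19 rate
print(f"{dp_before:.2f} -> {dp_after:.2f} TFLOPS, a {dp_before / dp_after:.1f}x drop")
```

That is roughly a 6x loss of FP64 throughput -- enough to erase the card's entire advantage over far cheaper gaming parts for compute workloads.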

Nvidia Titan Z users get double precision computing crippled with CUDA 7 REL

The problem is as yet unverified and we are awaiting an official response from Nvidia on the matter. That said, there is a high likelihood that this is a one-off driver bug or something specific to particular configurations. In any case, crippled FP64 on GK110 based TITAN cards would be pretty serious: the entire reason "semi-pros" bought TITANs in the first place was the double precision benefit. Without that, the TITAN quickly becomes a glorified gaming card with no 'pro' advantage.
If this problem is not a bug, then it would mark a very interesting paradigm shift: it would mean that Nvidia is subtly but steadily steering its FP64 customer base toward the much more expensive Tesla lineup. As far as versions go, the CUDA 7 RC build appears to still have DP enabled. If you have installed CUDA 7 REL and had your DP crippled, simply re-installing the drivers won't fix the problem. A driver sweeping program might work, but your best bet is a fresh install.
Another point to note is that while we have a confirmed report from a TITAN Z user, the bug or deliberate cripple could apply to other models as well, not just the TITAN Z. We are going to refrain from jumping to conclusions on what exactly is happening until we receive word from Nvidia HQ, so stay tuned for a follow-up when that happens (or doesn't). For users in the video or scientific industries, this is definitely one issue to look out for.