HBM3: Cheaper, Up To 64GB On-Package, And Terabytes-Per-Second Bandwidth

@ 2016/08/29
Here is some talk about the next iteration of High Bandwidth Memory, which Samsung and SK Hynix are working on. HBM is essentially a stacked RAM configuration, which brings both space savings and bandwidth improvements. The third generation is expected not only to increase the density of the individual memory dies but also to allow more of them to be stacked in a single package.

HBM1, as used in AMD's Fury graphics cards, was limited to 1GB per stack (4GB in total across four stacks). HBM2, as used in Nvidia's workstation-only P100 graphics card, offers denser stacks of up to 8GB each (16GB on the P100), but is prohibitively expensive for consumer cards. HBM3 will double the density of the individual memory dies from 8Gb to 16Gb (~2GB) and will allow more than eight dies to be stacked in a single package, making graphics cards with up to 64GB of memory possible. HBM3 will also run at a lower core voltage than HBM2's 1.2V and deliver twice the peak bandwidth: HBM2 offers 256GB/s per stack, while HBM3 doubles that to 512GB/s. Across several stacks, the total memory bandwidth could well reach terabytes per second, as the rough calculation below illustrates.
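
To put those figures together, here is a back-of-the-envelope sketch of the capacity and bandwidth arithmetic. The 16Gb die size and 512GB/s per-stack bandwidth come from the article; the eight-dies-per-stack and four-stacks-per-card counts are assumptions chosen to reproduce the 64GB headline number, not published specifications.

    # Back-of-the-envelope HBM3 capacity and bandwidth estimate.
    # Figures from the article: 16Gb dies, 512GB/s peak bandwidth per HBM3 stack.
    # Assumptions (not from the article): 8 dies per stack, 4 stacks per card.
    GB_PER_DIE = 16 / 8             # a 16Gb die holds 2GB
    DIES_PER_STACK = 8              # HBM3 allows more than eight; eight reproduces 64GB
    STACKS_PER_CARD = 4             # assumed; four stacks is a common HBM layout
    BANDWIDTH_PER_STACK_GBS = 512   # HBM3 peak bandwidth per stack

    capacity_gb = GB_PER_DIE * DIES_PER_STACK * STACKS_PER_CARD
    bandwidth_tbs = BANDWIDTH_PER_STACK_GBS * STACKS_PER_CARD / 1000

    print(f"Capacity:  {capacity_gb:.0f} GB")      # 64 GB
    print(f"Bandwidth: {bandwidth_tbs:.1f} TB/s")  # ~2.0 TB/s

Both numbers scale linearly with taller stacks or more of them, which is where the terabytes-per-second headline comes from.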
