Thursday, March 9th 2023

NVIDIA GeForce RTX 50-series and AMD RDNA4 Radeon RX 8000 to Debut GDDR7 Memory

With Samsung Electronics announcing that the next-generation GDDR7 memory standard is in development, and Cadence, a vital IP provider of DRAM PHY, EDA software, and validation tools, announcing its latest validation solution, the decks are cleared for the new memory standard to debut with the next generation of GPUs. GDDR7 would succeed GDDR6, which debuted in 2018 and has been around for nearly five years. GDDR6 launched at speeds of 14 Gbps, and its derivatives are now in production at speeds as high as 24 Gbps; it provided a generational doubling in speed over the preceding GDDR5.

The new GDDR7 promises the same, with starting speeds said to be as high as 36 Gbps, going beyond the 50 Gbps mark over its lifecycle. A MyDrivers report says that NVIDIA's next-generation GeForce RTX 50-series, probably slated for a late-2024 debut, as well as AMD's competing RDNA4 graphics architecture, could introduce GDDR7 at its starting speed of 36 Gbps. A GPU with a 256-bit-wide GDDR7 interface would enjoy 1.15 TB/s of bandwidth, and one with a 384-bit interface would have a cool 1.7 TB/s to play with. We still don't know the codename of NVIDIA's next graphics architecture; it could be any of the ones NVIDIA hasn't yet used from the image below.
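Those bandwidth figures follow directly from bus width and per-pin data rate; a quick sketch of the arithmetic (the bus widths and the 36 Gbps starting speed are the article's figures):

```python
def gddr_bandwidth_gbs(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Peak bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * pin_speed_gbps

# GDDR7 at its 36 Gbps starting speed:
print(gddr_bandwidth_gbs(256, 36))  # 1152.0 GB/s, i.e. ~1.15 TB/s
print(gddr_bandwidth_gbs(384, 36))  # 1728.0 GB/s, i.e. ~1.7 TB/s
```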
Source: MyDrivers

27 Comments on NVIDIA GeForce RTX 50-series and AMD RDNA4 Radeon RX 8000 to Debut GDDR7 Memory

#1
TheDeeGee
Nice, it can mature for two generations before I upgrade my 4070 Ti.
Posted on Reply
#2
ratirt
It begs the question: what would the price be for those graphics cards? Gives me chills to even think about it.
Posted on Reply
#3
ZoneDymo
ratirtIt begs a question. What would the price be for those graphics cards? Give me chills to even think about it though.
an extra 200 dollars or so, which people keep being willing to pay. It's hard to even be mad at the manufacturers or developers anymore... I mean, if the consumer is fine with these prices, then why not... I am just not one of those consumers.

Also, while I know we get this kind of information all the time, just shoved forward one gen (inb4 a future post about GDDR8 memory being used in the gen after this one, or the gen after that; shocking news, I know), it is in a way still weird to read about this when the current gen isn't even fully out yet: 7900x, 7900, 7800, 7600, RTX 4070, 4060, 4050... still MIA.
Posted on Reply
#4
stimpy88
nGreedia should love this. 256-bit+ buses will be a thing relegated to Titan cards, and the rest of us will be lucky to get more than 128-bit!
Posted on Reply
#5
Ownedtbh
ZoneDymoan extra 200 dollars or so, which people keep being willing to pay, its hard to even be mad that de manufactuers or developers anymore....I mean if the consumer is fine with these prices then why not....I am just not one of those consumers.

Also while I know we get this kind of information all the time, just shove it over one gen, inb4 future message of GDDR8 memory being used in the gen after this one or the gen after that. shocking news I know, ANYWHO it is in a way still weird to read about this when the current gen isnt even fully out yet: 7900x, 7900, 7800, 7600, rtx4070, 4060, 4050....still MIA.
Could be the case that the manufacturers hold back their GPUs so customers cough up more cash for the better, more expensive ones rather than waiting until the lower-end models come out.
Posted on Reply
#6
ratirt
ZoneDymoan extra 200 dollars or so, which people keep being willing to pay, its hard to even be mad that de manufactuers or developers anymore....I mean if the consumer is fine with these prices then why not....I am just not one of those consumers.

Also while I know we get this kind of information all the time, just shove it over one gen, inb4 future message of GDDR8 memory being used in the gen after this one or the gen after that. shocking news I know, ANYWHO it is in a way still weird to read about this when the current gen isnt even fully out yet: 7900x, 7900, 7800, 7600, rtx4070, 4060, 4050....still MIA.
I think it is GDDR7, not 8, at this point. An extra $200? Whether that's fine or not isn't the point here, at least for me. You pay extra for a product that is advertised as the best, with all the features. Then you realize you can't run a game with the very features that convinced you to purchase it, without using some artificial FPS generator, since it can't cope. Not to mention, today that artificial feature is mainstream and tomorrow, with new graphics, it is not, since it has been replaced. You would need to purchase another graphics card, and so on. I'm just worried where this is all going. Today AMD has it open for every card; tomorrow it may not. From a business perspective I get it; from a consumer perspective it is not that good. It stops being $200 when you need to purchase a brand-new thing every year to make use of the new features. It is like these features are tied to hardware, which is expensive. You can go with the best again, but that would require a bag filled with money, so the $200 extra becomes moot as an argument for getting something extra, since it will no longer be the case.
Posted on Reply
#7
Prima.Vera
So GDDR7 is faster than the HBM memory?
Who would have thought....
Posted on Reply
#8
watzupken
Prima.VeraSo GDDR7 is faster than the HBM memory?
Who would have thought....
It is hard to tell if it is really faster across the board. Achieving this speed generally means increasing latencies drastically. And if GDDR6 and 6X run so hot, I wonder how hot GDDR7 will get.
Posted on Reply
#9
Vya Domus
Prima.VeraSo GDDR7 is faster than the HBM memory?
No it isn't. People still don't understand why HBM is so fast, i.e. the width of the bus interface.

HBM3 does up to 1 TB/s per 1024-bit stack, so four stacks of HBM3, for example, would equate to 4 TB/s of bandwidth; there is absolutely no way to match that with GDDR7 modules in a real product.
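Taking the commenter's ~1 TB/s per 1024-bit stack at face value (that implies roughly 8 Gbps per pin, a figure back-derived here for illustration; actual HBM3 pin rates vary by product), the gap comes purely from bus width:

```python
def bandwidth_gbs(bus_width_bits: int, pin_speed_gbps: float) -> float:
    # Peak bandwidth in GB/s = (pins * per-pin rate in Gb/s) / 8 bits per byte
    return bus_width_bits * pin_speed_gbps / 8

hbm3_stack  = bandwidth_gbs(1024, 8.0)   # ~1024 GB/s per stack
four_stacks = 4 * hbm3_stack             # ~4096 GB/s, i.e. ~4 TB/s
gddr7_384   = bandwidth_gbs(384, 36.0)   # 1728 GB/s on a 384-bit card
```

GDDR7's per-pin speed is far higher, but HBM's 1024 pins per stack dominate the product, which is the commenter's point.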
Posted on Reply
#10
Dimitriman
Hopefully by then AMD will have delivered FSR 3.0
Posted on Reply
#11
tabascosauz
watzupkenIt is hard to tell if it is really faster across the board. To achieve this speed, it generally means increasing latencies drastically. And if GDDR6 and 6X runs so hot, I wonder how hot GDDR7 will get.
21 Gbps GDDR6X is just fine; not sure where you've read that they run hot. Most cards in the entire 40-series stack hang out in the low 60s for memory temps.

The doubling of capacity per package, general cooler improvements, and not having packages on the back of the PCB all help.

AMD's new 20Gbps GDDR6 for RDNA3 does run quite warm, but it's hard to say for sure since
  • Memory temp is not officially reported by AMD
  • Reviews mostly do not report on memory temp
  • MBA coolers are unimpressive and also pretty bad at memory cooling
  • 10-12 packages is still a lot for smaller not-4090 coolers
All in all, there's not really any reason to assume that a faster product runs hotter just because. Remember the speculation about 21 Gbps G6X just because Micron's 19 Gbps product was hard to cool?
Posted on Reply
#12
Vya Domus
tabascosauz
  • Memory temp is not officially reported by AMD, only by HWInfo so take their word for it
  • Reviews mostly do not report on memory temp
  • MBA coolers are unimpressive and also pretty bad at memory cooling, considering it's still 10-12 packages under there
AMD reports hotspot temperatures for memory, meaning they are not comparable to what Nvidia cards report, but there is no reason to believe the two temperatures differ in any significant manner. On the reference cards all the components share the same vapor chamber; under heavy load the cooling is likely better compared to a classic heatpipe design.
Posted on Reply
#13
john_
64-bit is coming to the mainstream.
Posted on Reply
#14
tabascosauz
Vya DomusAMD reports hotspot temperatures for memory meaning they are not comparable to what Nvidia cards report but there is no reason to believe both of those temperatures respectively differ in any significant manner. On the reference cards all the components share the same vapor chamber, under heavy load the cooling is likely better compared to a classic heatpipe design.
Both 40 series and Navi31 show up as Memory Junction Temp in HWInfo. GPU hotspot is reported differently (and more accurately on RDNA), but AMD doesn't inject secret sauce into its GDDR6 packages. Haven't seen any sources proving otherwise, you're welcome to share some.

Granted, it's still 3rd party software only for Radeon (HWInfo) so the jury is still out, but I find it highly unlikely that the move from 16Gb 16Gbps G6 > 16Gb 20Gbps G6 completely broke memory temp reporting.
Posted on Reply
#15
Vya Domus
tabascosauzBoth 40 series and Navi31 show up as Memory Junction Temp in HWInfo. GPU hotspot is reported differently (and more accurately on RDNA), but AMD doesn't inject secret sauce into its GDDR6 packages. Haven't seen any sources proving otherwise, you're welcome to share some.

Granted, it's still 3rd party software only for Radeon (HWInfo) so the jury is still out, but I find it highly unlikely that the move from 16Gb 16Gbps G6 > 16Gb 20Gbps G6 completely broke memory temp reporting.
That's not what I'm saying. Since it's a junction temperature, we have no idea what it really means or how it's calculated; that's why I'm saying you can't compare them.
Posted on Reply
#16
ZoneDymo
Prima.VeraSo GDDR7 is faster than the HBM memory?
Who would have thought....
Well, it's not like HBM is just some established fact that, once discovered, stays what it is. It also has iterations/generations that are faster and more energy efficient.

I mean, purely talking physics, distance between objects = delay, so GDDR by definition loses out because it's further away.
Posted on Reply
#17
Tomorrow
john_64bit is coming to mainstream.
A 64-bit 5070 with a 4x PCI-E 5.0 bus for 1200 bucks. Yay.
CrAsHnBuRnXpWhat are you, 12?
So Nvidia isn't greedy? That's news to me.
12-year-olds hang out in the wccftech comment section, where they use terms like AMDone etc.
Posted on Reply
#18
bonehead123
It could be GDDR231.89 as far as I am concerned, but without reducing prices back down to normal, pre-pandemic/pre-scalping levels, it won't matta :D
Posted on Reply
#19
80251
@bonehead123
Will prices ever come back to 'normal' for mid-range and top-tier GPUs? I think the days of $700 to $750 MSRP top-tier video cards are gone forever...
Posted on Reply
#20
CrAsHnBuRnXp
TomorrowSo Nvidia isnt greedy? That's news to me.
12 year olds hang out in wccftech comment section where they use terms like AMDone etc.
Didn't say they weren't. But only young kids trying to look cool on the internet and salty adults talk like that. The rest of us acknowledge the greed but call companies by their actual names, like Microsoft and not M$. It just shows you're still living in the past and can't get over the fact that a company is actually being a company; if you were in the exact same shoes as one of the big tech companies, with next to no competition, you'd be price gouging too. It's the way of the corporate world and you're just salty about it.
Posted on Reply
#21
mrnagant
Prima.VeraSo GDDR7 is faster than the HBM memory?
Who would have thought....
Nope. GDDR7 is only getting to what HBM2/e was already capable of, and doing, four years ago, and HBM3 is already in production.

2019: the Radeon VII had HBM2 at 1,024 GB/s.
2020: the MI100 had HBM2 at 1,228.8 GB/s.
2022: the MI210 had HBM2e at 1,638.4 GB/s.
Nvidia has an H100 with HBM3 that will do 3,000 GB/s.

When you consider that the Vega 56/64 back in 2017 had 8 GB of HBM2 starting at $400, and the VII had 16 GB for $700, I don't think HBM is crazy expensive. For RDNA3 there is the packaging issue, though. Would be cool if AMD could 3D-stack the MCDs onto the GCD so they have room for HBM3 :D
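For what it's worth, the bandwidth figures in that list fall out of the same width-times-pin-rate arithmetic; a sketch assuming four 1024-bit stacks per card (the per-pin rates are my assumption, back-derived from the listed totals):

```python
def hbm_bandwidth_gbs(stacks: int, pin_speed_gbps: float, stack_width_bits: int = 1024) -> float:
    # Total bandwidth in GB/s = stacks * (stack width in bits / 8) * per-pin rate
    return stacks * stack_width_bits / 8 * pin_speed_gbps

print(hbm_bandwidth_gbs(4, 2.0))  # Radeon VII (HBM2): 1024.0 GB/s
print(hbm_bandwidth_gbs(4, 2.4))  # MI100 (HBM2): 1228.8 GB/s
print(hbm_bandwidth_gbs(4, 3.2))  # MI210 (HBM2e): 1638.4 GB/s
```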
Posted on Reply
#22
95Viper
OK let's stay on topic...
Stop the bickering.
Posted on Reply
#23
bblzd
Nvidia is going to love including only 8GB of this on their next gen 128 bit wide bus GPUs.
Posted on Reply
#24
Minus Infinity
mrnagantNope. GDDR7 is getting to what HBM2/e is already capable of and was doing 4 years ago, and HBM3 is already in production.

2019 the Radeon VII has HBM2 1,024GB/s.
2020 the MI100 has HBM2 1,228.8GB/s
2022 the MI210 has HBM2e 1,638.4GB/s
Nvidia has an H100 with HBM3 that will do 3,000GB/s.

When you consider that the Vega 56/64 back in 2017 had 8GB of HBM2 starting at $400 and the VII having 16GB for $700. I don't think that HBM is crazy expensive. For RDNA3 there is the packaging issue. Would be coold if AMD could 3D stack the MCDs onto the GCD so they have room for HBM3 :D
Well, performance is one thing and cost is another, and HBM3 prices have soared, so who really cares? It will never come to consumer desktop GPUs.
Posted on Reply
#25
Godrilla
ratirtIt begs a question. What would the price be for those graphics cards? Give me chills to even think about it though.
If we extrapolate from the 3090's $1,499 to the 4090's $1,599, I would say $1,699 is likely. Nvidia will attempt to space itself far, far away from Intel in terms of price and performance and fill in the gap accordingly, instead of competing with the other two (its current strategy). Sure, Nvidia will try to justify a price hike by pointing investors to AI demand for CUDA, e.g. the 30k GPUs dedicated to ChatGPT servers. For even that price point to be successful, the performance uplift has to match or beat the delta gained from 3090 to 4090, IMO. It took one quarter for 4090 demand to settle, and for the past month it has been readily available for $1,599: MSI Gaming GeForce RTX 4090 24GB GDDR6X 384-Bit HDMI/DP NVLink Tri-Frozr 3 Ada Lovelace Architecture Graphics Card (RTX 4090 Gaming Trio 24G) a.co/d/09mF9V2
Or as low as $1,519 at Microcenter with a 5% discount, but the latter might trigger those who paid over $2,000 for theirs.
Posted on Reply