The Slowing Growth of VRAM in Games

Auer

[H]ard|Gawd
Joined
Nov 2, 2018
Messages
1,972
I can expect VRAM to be appropriate for a 4 year span on a $699 card. It was for the $699 GTX 1080 8GB (I'm using it now 4.33 years after launch and 8GB doesn't fail at this performance level), and GTX 1080 Ti 11GB (certainly will be fine in 2021). 16GB will be fine for the $699 Radeon VII in 2023 without a doubt. I'm not sure how well the $699 3GB GTX 780 Ti did in 2017, but Kepler should not be our target for aging well.

You realize that all those cards you mentioned will be severely outdated in 2023 regardless of VRAM?

I doubt very much we'll see 4-year spans like Pascal's ever again. At least not for people who like to play new, demanding titles.
 

Krisium

Weaksauce
Joined
Feb 16, 2016
Messages
95
You realize that all those cards you mentioned will be severely outdated in 2023 regardless of VRAM?

2023? I said "I can expect VRAM to be appropriate for a 4 year span on a $699 card" and listed what the 4th year is for each. And yes, the Radeon VII will be sucking it in 2023, but VRAM will have nothing to do with it. Which means the VRAM was not inadequate for the card.
 

Nightfire

2[H]4U
Joined
Sep 7, 2017
Messages
3,280
I can expect VRAM to be appropriate for a 4 year span on a $699 card. It was for the $699 GTX 1080 8GB (I'm using it now 4.33 years after launch and 8GB doesn't fail at this performance level), and GTX 1080 Ti 11GB (certainly will be fine in 2021). 16GB will be fine for the $699 Radeon VII in 2023 without a doubt. I'm not sure how well the $699 3GB GTX 780 Ti did in 2017, but Kepler should not be our target for aging well.

GDDR6 should not be compared to GDDR5, at least until it matures some. After all, we have had GDDR5 for 10 years!

Right now, it seems that 2 GB chips are rather costly for GDDR6, so you're limited to 1 GB per lane (32-bit channel) or else the price skyrockets. Furthermore, GDDR6 lanes are much harder to produce, which is why we are now seeing 128-bit (4-lane) mainstream cards instead of the 192-bit and 256-bit of the past. 256-bit is high end now.
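For reference, a rough sketch of how bus width pins down the capacity options, assuming one GDDR6 chip per 32-bit channel and today's 1 GB / 2 GB chip densities (clamshell layouts that double the chips per channel are ignored here):

```python
# Capacity options per bus width, assuming one GDDR6 chip per 32-bit channel
# and 1 GB or 2 GB chip densities (clamshell/dual-rank layouts ignored).
for bus_bits in (128, 192, 256, 320, 384):
    channels = bus_bits // 32
    small, large = channels * 1, channels * 2   # GB with 1 GB vs 2 GB chips
    print(f"{bus_bits}-bit ({channels} channels): {small} GB or {large} GB")
```

Which is why a 320-bit card lands on 10 GB unless it pays for 2 GB chips (20 GB), and a 256-bit card is 8 GB or 16 GB.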

With RTX 3080 performance, it's better to have 320-bit bandwidth with 10 GB than 256-bit bandwidth with 16 GB.

Bottom line: GDDR6 has been too little, too late.
 

Krisium

Weaksauce
Joined
Feb 16, 2016
Messages
95
GDDR6 should not be compared to GDDR5, at least until it matures some. After all, we have had GDDR5 for 10 years!

Right now, it seems that 2 GB chips are rather costly for GDDR6, so you're limited to 1 GB per lane (32-bit channel) or else the price skyrockets. Furthermore, GDDR6 lanes are much harder to produce, which is why we are now seeing 128-bit (4-lane) mainstream cards instead of the 192-bit and 256-bit of the past. 256-bit is high end now.

With RTX 3080 performance, it's better to have 320-bit bandwidth with 10 GB than 256-bit bandwidth with 16 GB.

Bottom line: GDDR6 has been too little, too late.

What are your thoughts on 10GB 320-bit 19 Gbps G6X vs 12GB 384-bit 16 or 15.5 Gbps G6? It's the same GA102 chip either way, and approximately the same bandwidth. I'm fairly certain G6X is more expensive per GB, so the 12GB G6 option at least can't be significantly more expensive. Unless there's extra work on the PCB that costs significantly more; TBH I'm not sure.
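Quick sanity check on the "approximately the same bandwidth" part; peak bandwidth is just bus width / 8 times the per-pin data rate:

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gbps
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(f"320-bit @ 19.0 Gbps G6X: {bandwidth_gbs(320, 19.0):.0f} GB/s")  # 760
print(f"384-bit @ 16.0 Gbps G6:  {bandwidth_gbs(384, 16.0):.0f} GB/s")  # 768
print(f"384-bit @ 15.5 Gbps G6:  {bandwidth_gbs(384, 15.5):.0f} GB/s")  # 744
```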
 

VirtualMirage

Limp Gawd
Joined
Nov 29, 2011
Messages
470
Because I have a 4K monitor and also game in VR at 120Hz (Valve Index), I was concerned that settling for the 3080 10GB would shorten its useful life too quickly and that I should instead step up to the 3090 24GB, especially if ray tracing gets more traction. While 24GB is certainly overkill, even at 4K, there is nothing in the middle being offered at time of release. But paying roughly double the price of a 3080 for a roughly 20% jump in performance and a 2.4x increase in VRAM is a really tall order. I have the means to do it, as well as room in the case of my new build (but barely). I am usually the person that buys a video card once every 4-5 years (about the frequency at which I build a new machine); I'm still on the GTX 1080 that I got in May of 2016. But with the RAM layout of these cards, I'm wondering if I may be better off just going with the RTX 3080 for now and doing a mid-cycle refresh instead.
 

Advil

2[H]4U
Joined
Jul 16, 2004
Messages
2,078
I think the goal is for PC games to start leveraging the high SSD and bus throughput like consoles are going to. Stream the data in at fantastically high speed as needed rather than having to cache enormous amounts on the video card at once. If this tech becomes as standard on the PC as it is going to be on next gen consoles, we just won't need VRAM to increase at the rate it has been. Necessary quantity may actually reduce somewhat if texture streaming becomes fast and efficient enough. It will become more a matter of having just two things:
1) enough frame buffer in general for the resolution you use; at the higher end that would be 4K and 8K (rough numbers in the sketch below).
2) enough texture storage that whatever data-streaming tech is in use can always keep up. No more than that is necessary. This can increase slowly over time.
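For a rough sense of item 1, a back-of-the-envelope sketch. The byte-per-pixel figures are just assumptions (FP16 HDR color targets plus a 32-bit depth buffer); real engines with fat G-buffers, history buffers, and RT structures use considerably more:

```python
# Back-of-the-envelope raw framebuffer sizes. Assumptions: 8 bytes/pixel for an
# FP16 RGBA HDR color target, triple buffered, plus 4 bytes/pixel of depth.
def framebuffer_mb(width, height, color_bpp=8, color_buffers=3, depth_bpp=4):
    bytes_total = width * height * (color_bpp * color_buffers + depth_bpp)
    return bytes_total / 2**20

for name, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160), "8K": (7680, 4320)}.items():
    print(f"{name}: ~{framebuffer_mb(w, h):.0f} MB of raw framebuffer")
```

Even at 8K that comes out well under a gigabyte; the bulk of the VRAM budget is textures and other assets, which is exactly the part streaming is supposed to keep fed.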

Perhaps, with the 10GB 3080, Nvidia knows darn well what is happening with texture streaming: console and PC game devs are going to adopt this tech immediately, and we will never really reach a point where 4K gaming needs 20+ GB of VRAM because all cards will just get fed data as needed going forward.

Who knows. Just a guess.
 

Nightfire

2[H]4U
Joined
Sep 7, 2017
Messages
3,280
What are your thoughts on 10GB 320-bit 19 Gbps G6X vs 12GB 384-bit 16 or 15.5 Gbps G6? It's the same GA102 chip either way, and approximately the same bandwidth. I'm fairly certain G6X is more expensive per GB, so the 12GB G6 option at least can't be significantly more expensive. Unless there's extra work on the PCB that costs significantly more; TBH I'm not sure.

You would assume the latter would be cheaper, but the evidence suggests it's the GDDR6 lanes on the PCB that are the real cost.

Case in point: they used 16 Gbps GDDR6, very expensive at the time, for the 2080 Super instead of adding a few lanes of cheaper stuff. Bandwidth was absolutely the bottleneck for that card, as the extra cores over the 2080 did nothing for performance. (There was even a HOF 2070 Super with 16 Gbps memory that came within spitting distance.)
 

Krisium

Weaksauce
Joined
Feb 16, 2016
Messages
95
You would assume the latter would be cheaper, but the evidence suggests it's the GDDR6 lanes on the PCB that are the real cost.

Case in point: they used 16 Gbps GDDR6, very expensive at the time, for the 2080 Super instead of adding a few lanes of cheaper stuff.

TU104 is a 256-bit GPU. Can you explain what "adding a few lanes" would entail? I don't think it would be possible without R&D on a new chip or using a ridiculously more expensive TU102. They simply used the fastest G6 memory available in mid-2019 on their second-largest chip; I don't see how adding more lanes would be a tenable option at all.
 

Nightfire

2[H]4U
Joined
Sep 7, 2017
Messages
3,280
TU104 is a 256-bit GPU. Can you explain what "adding a few lanes" would entail?
Making it 320- or 352-bit with 14 Gbps GDDR6.

It will be interesting to see what they do with the 3060. I am still thinking it will be 192-bit with 6 GB and 12 GB options of GDDR6X.
 

Krisium

Weaksauce
Joined
Feb 16, 2016
Messages
95
Making it 320- or 352-bit with 14 Gbps GDDR6.

That's not a thing? Not without designing a whole new chip (e.g., a theoretical TU103). TU104 is 256-bit. That's it. You either cut it down to something smaller or stick with the full 256-bit.
 

Nightfire

2[H]4U
Joined
Sep 7, 2017
Messages
3,280
That's not a thing? Not without designing a whole new chip (e.g., a theoretical TU103). TU104 is 256-bit. That's it. You either cut it down to something smaller or stick with the full 256-bit.

You're absolutely correct. The only other option would have been to neuter the hell out of a TU102, which would have been even more costly.

As for the 3060, I would assume there is a GA106 coming instead of a trimmed GA104, but maybe not.
 

Okatis

Limp Gawd
Joined
Jan 16, 2014
Messages
194
How has VRAM been measured in the games tested, btw? (Perhaps I missed it.) I've appreciated the compilation of figures, but the one critique I've come across of late (repeated ad nauseam) is that allocated isn't the same as used. However, someone just pointed out that since 2017 Windows Task Manager has reported actual usage as distinct from allocation, even though every poster I've seen making that critique claims nothing shows true usage.

Either the linked poster is incorrect about Windows reporting or no poster has bothered checking their own statements. Having accurate figures would certainly be useful so I'm curious.
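For what it's worth, on NVIDIA cards you can cross-check the board-level number outside of Task Manager via NVML. A minimal sketch, assuming the nvidia-ml-py package (imported as pynvml) and an NVIDIA driver are installed; note this reports the whole card's dedicated memory in use, not a per-process "actually touched" figure, so it still overstates what a single game needs:

```python
# Query board-level VRAM usage via NVML (assumes nvidia-ml-py / pynvml is installed).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)    # .total / .used / .free, in bytes
print(f"VRAM in use: {mem.used / 2**30:.2f} GiB of {mem.total / 2**30:.2f} GiB")
pynvml.nvmlShutdown()
```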
 

bigbluefe

[H]ard|Gawd
Joined
Aug 23, 2014
Messages
1,055
You realize that all those cards you mentioned will be severely outdated in 2023 regardless of VRAM?

I doubt very much we'll see 4-year spans like Pascal's ever again. At least not for people who like to play new, demanding titles.

Of course we will. The PS5 and Xbox Series X will be around for at least 7 years. So that's basically 7 years where hardware requirements won't advance at all. Every game is just going to be a console port. The 2080 Ti is more powerful than the GPU in either the PS5 or Xbox Series X.

Any GPU you buy today should absolutely last an entire console generation. In fact, I'd expect hardware to be even more resilient now because we're not going to have to deal with another major resolution bump. We already got over the 1080p -> 4k hill. We're at least another console generation away from anyone even considering 8k.

I won't be surprised if the PS6 can't even do 8k. It might just be a 4k/high framerate machine.
 

Nightfire

2[H]4U
Joined
Sep 7, 2017
Messages
3,280
So Cyberpunk really seems to push the vram amount as well, especially with RT.

Regarding usage reported at 2K/1440p/4K:
Techspot - 6.5/7/8 GB
TPU - 5.6/6/7.2 GB // 7.5/7.9/9.9 GB (!) with RT
Guru3D - 7/7.5/8.6 GB // 7.3/7.8/9.3 GB with RT and DLSS

Big numbers, even without RT, but none of the cards seemed to be VRAM-starved without RT. The 4 GB 1650 Super did well at 1080p, and the 5600 XT and 1660 Ti were fine at higher resolutions.

However, those VRAM requirements for RT seem legit. The 2060 gets slaughtered with any RT, and the 3070 seems to lose to the 2080 Ti, even with DLSS.
https://www.techspot.com/article/2165-cyberpunk-dlss-ray-tracing-performance/

There were some here who were getting crashes even with the 10 GB 3080, but most likely those were at unplayable settings anyhow, i.e. ultra everything with psycho RT and no DLSS.

The game potentially has very high VRAM usage, but at playable settings it is still fine for 99% of scenarios, unlike games such as Doom Eternal.

For the most part, all cards seem to have enough VRAM for their performance level in pure rasterization, while the 2060 is likely the only failure with any kind of RT at playable settings.
 

alxlwson

You Know Where I Live
Joined
Aug 25, 2013
Messages
8,673
So Cyberpunk really seems to push the vram amount as well, especially with RT.

Regarding usage reported at 2K/1440p/4K:
Techspot - 6.5/7/8 GB
TPU - 5.6/6/7.2 GB // 7.5/7.9/9.9 GB (!) with RT
Guru3D - 7/7.5/8.6 GB // 7.3/7.8/9.3 GB with RT and DLSS

Big numbers, even without RT, but none of the cards seemed to be VRAM-starved without RT. The 4 GB 1650 Super did well at 1080p, and the 5600 XT and 1660 Ti were fine at higher resolutions.

However, those VRAM requirements for RT seem legit. The 2060 gets slaughtered with any RT, and the 3070 seems to lose to the 2080 Ti, even with DLSS.
https://www.techspot.com/article/2165-cyberpunk-dlss-ray-tracing-performance/

There were some here who were getting crashes even with the 10 GB 3080, but most likely those were at unplayable settings anyhow, i.e. ultra everything with psycho RT and no DLSS.

The game potentially has very high VRAM usage, but at playable settings it is still fine for 99% of scenarios, unlike games such as Doom Eternal.

For the most part, all cards seem to have enough VRAM for their performance level in pure rasterization, while the 2060 is likely the only failure with any kind of RT at playable settings.

You can see my settings and VRAM usage here for 3840x1600. It holds a steady 45 fps in the city.
 

LukeTbk

2[H]4U
Joined
Sep 10, 2020
Messages
2,948
I feel like sometimes they start with a conclusion in their heads and then interpret the data through that lens. For example, they say:

It’s important to note that VRAM is a big factor in Cyberpunk with ray tracing enabled. It’s the primary reason why the RTX 2060 is so slow, and it also causes limitations for the RTX 3070 with 8GB of VRAM being right on the edge at ultra settings. The RTX 2080 Ti, despite delivering similar performance without ray tracing enabled, is the faster GPU with RT enabled thanks to its 11GB VRAM buffer.

At 1440p, ultra details, DLSS Quality, no RT
2080 Ti: 90.1 avg. fps / 70.9 1% low fps
3070: 82.6 avg. fps / 65.4 1% low fps


Medium RT
2080 Ti: 60.6 / 50.0
3070: 56.2 / 44.6


Ultra RT
2080 Ti: 47.8 / 40.0
3070: 46.7 / 38.8


Relative avg FPS (3070 vs 2080 Ti)
No RT: 91.6%
Medium RT: 92.7%
Ultra RT: 97.6% (near margin-of-error difference)
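Those ratios recomputed from the averages listed above (they come out within rounding of the quoted percentages):

```python
# 3070 average fps as a fraction of the 2080 Ti's, using the figures above
pairs = {"No RT": (82.6, 90.1), "Medium RT": (56.2, 60.6), "Ultra RT": (46.7, 47.8)}
for label, (fps_3070, fps_2080ti) in pairs.items():
    print(f"{label}: {fps_3070 / fps_2080ti:.1%}")   # the gap shrinks as the RT load goes up
```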


The 1440p numbers, at least (I didn't see the 2080 Ti at 4K, but I suspect it's unplayable like the 3070 given how low a 3080 gets), seem to tell the exact opposite: the more RT you add, the closer the 3070 gets to the 2080 Ti. By that metric at least, nothing shows 8 GB limiting the 3070 at 1440p ultra detail, ultra RT, with DLSS on (or else 11 GB is limiting as well and having more headroom isn't a factor...?).

Is it bias? Or is it actual real-life experience: maybe you can't feel any difference between 90 and 82 fps but you do see one at 60 vs 56 depending on your screen (at some point you'll drop under the VRR low limit, etc.), or the 60 fps barrier plays a mind trick that makes 60.6 vs 56.2 avg fps look like a much bigger gap than 90 vs 82.
 

Nightfire

2[H]4U
Joined
Sep 7, 2017
Messages
3,280
I feel like sometimes they start with a conclusion in their heads and then interpret the data through that lens. For example, they say:

It’s important to note that VRAM is a big factor in Cyberpunk with ray tracing enabled. It’s the primary reason why the RTX 2060 is so slow, and it also causes limitations for the RTX 3070 with 8GB of VRAM being right on the edge at ultra settings. The RTX 2080 Ti, despite delivering similar performance without ray tracing enabled, is the faster GPU with RT enabled thanks to its 11GB VRAM buffer.

At 1440p, ultra details, DLSS Quality, no RT
2080 Ti: 90.1 avg. fps / 70.9 1% low fps
3070: 82.6 avg. fps / 65.4 1% low fps


Medium RT
2080 Ti: 60.6 / 50.0
3070: 56.2 / 44.6


Ultra RT
2080 Ti: 47.8 / 40.0
3070: 46.7 / 38.8


Relative avg FPS (3070 vs 2080 Ti)
No RT: 91.6%
Medium RT: 92.7%
Ultra RT: 97.6% (near margin-of-error difference)


The 1440p numbers, at least (I didn't see the 2080 Ti at 4K, but I suspect it's unplayable like the 3070 given how low a 3080 gets), seem to tell the exact opposite: the more RT you add, the closer the 3070 gets to the 2080 Ti. By that metric at least, nothing shows 8 GB limiting the 3070 at 1440p ultra detail, ultra RT, with DLSS on (or else 11 GB is limiting as well and having more headroom isn't a factor...?).

Is it bias? Or is it real-life experience: maybe you can't feel any difference between 90 and 82 fps but you do see one at 60 vs 56 depending on your screen (at some point you'll drop under the VRR low limit, etc.), or the 60 fps barrier plays a mind trick that makes 60.6 vs 56.2 avg fps look like a much bigger gap than 90 vs 82.

I noticed that inconsistent trend as well, but then there was the 4K reflections-only graph. The 3070 really took a pounding at 4K without DLSS using just RT reflections. Even though the 2080 Ti likely would only have managed an unplayable 20 fps instead of the 3070's 6 fps, that is a bit too close for comfort as a scenario where 8 GB runs out.

RT is simply a VRAM hog in this game.

 

LukeTbk

2[H]4U
Joined
Sep 10, 2020
Messages
2,948
I noticed that inconsistent trend as well, but then there was the 4K reflections-only graph. The 3070 really took a pounding at 4K without DLSS using just RT reflections. Even though the 2080 Ti likely would only have managed an unplayable 20 fps instead of the 3070's 6 fps, that is a bit too close for comfort as a scenario where 8 GB runs out.
Yes, there is a point (at 4K, no DLSS, RT ultra, a 3080 goes under 20 fps; not sure about reflections only, but with medium RT it is still under 25) that is well past playable framerates for a 3070 even if it had twice the VRAM (at least from what we saw). And if your example is a good indication, once VRAM becomes a significant issue, the drop becomes obvious, it seems.

I am not sure I would use it as a reference (maybe it was optimized for those Ampere cards and the VRAM-limited consoles), but if it is an indication, Cyberpunk seems to show that if you need more than 8 GB of VRAM for something, it is probably too heavy for a 3070 regardless of any VRAM issue (same for over 10 GB and a 3080, and so on). And that type of game, along with a flight sim, feels like the best candidate for a VRAM worst case.
 
Joined
Nov 24, 2021
Messages
396
For modern games at playable settings on low-end GPUs, even 3 GB is enough, it seems. The GTX 1060 3 GB still matches and often beats the 4 GB 1650. Seems silly that people freaked out about the Fury being 4 GB.

While 96-bit with 6 GB would have been ideal for the 'new' GTX 1630, even 96-bit with 3 GB of GDDR6 would have been better, given the tested info.

 

Advil

2[H]4U
Joined
Jul 16, 2004
Messages
2,078
Necro thread lives!

But it is still relevant. Maybe even more so. The last couple of years have seen video card prices explode. The "slow march" of RAM increases that has existed since the dawn of 3D video cards, and even in PC DRAM, has stalled heavily over the last 5-10 years. When the RTX 3080 was first announced and I started the process of switching from my 8GB 2080 to the 3080, I honestly thought it was odd that VRAM had barely increased.

Now? It's entirely clear. This isn't the old days, where high-end game devs would automatically assume that by the time their game launched several years later, ALL the video cards that might play it would have at least 2GB more VRAM on board. If we stick to the mainstream-priced cards that will eventually make up the bulk of market share, 8GB is still going to be king for years yet. How long ago was the 11GB 1080 Ti released now? And it will still be several more years before even the higher-end cards gain enough market share for 12GB to be "average" among even the top 20% of video cards. Who is going to aim their development at that?

It's going to take a major price drop in the costs of producing fast VRAM to make 12-16GB cards mainstream.

And that seems to be the exact opposite of what is happening. The industry is pushing ever faster and more expensive new VRAM designs but keeping the total amount relatively stable. GDDR5, GDDR6, HBM, etc.

We're seeing the same thing with DDR5 in PCs. It's a fairly expensive process so far, adoption has not been fast, and the switch to DDR5 isn't bringing much of a push for MORE RAM either.

I think we are near peak RAM from a design standpoint now. Look for bus improvements and high speed data streaming from here.
 

LukeTbk

2[H]4U
Joined
Sep 10, 2020
Messages
2,948
One other difference from the past: why would you need much more VRAM on PC than on consoles? One easy way to use it, always 100% available for big titles, was playing at much higher resolution. If a game ran at 900p on an Xbox, you needed much more at 1440p.

Consoles now run at around 1440p to 4K with some cheats, and they don't have much VRAM (16 GB of total system RAM for everything), making it less obvious that it will always be easy to use the extra VRAM 100% of the time.

RAM + hard drive bandwidth + better compression: just like missing RAM got less obvious/painful on an OS that has an SSD for its virtual memory, missing VRAM can get less painful/obvious with the ability to stream large amounts of new data to the card, and with better caching in general.

In short, very new and very hard-to-run titles often do it with under 5 GB of VRAM reserved/allocated on the card even at 4K, and we usually have no idea of the actual use.

And even in 2022, from what I gather, there is no title that runs significantly faster on the 2080 Ti than on the 8 GB 3070 at playable fps settings. The doom-talk is all but confirmed to have been completely overblown, and even the idea that it would eventually become an issue is close to that. That said, maybe it would have been otherwise in a COVID-less world.

If we start to see the 2080 Ti significantly above the 8 GB 3070 at playable fps, maybe the 10 GB of the 3080 will become a question, but it needs to happen quite soon to matter.
 
Joined
Nov 24, 2021
Messages
396
Necro thread lives!

But it is still relevant. Maybe even more so. The last couple of years have seen video card prices explode. The "slow march" of RAM increases that has existed since the dawn of 3D video cards, and even in PC DRAM, has stalled heavily over the last 5-10 years. When the RTX 3080 was first announced and I started the process of switching from my 8GB 2080 to the 3080, I honestly thought it was odd that VRAM had barely increased.

Now? It's entirely clear. This isn't the old days, where high-end game devs would automatically assume that by the time their game launched several years later, ALL the video cards that might play it would have at least 2GB more VRAM on board. If we stick to the mainstream-priced cards that will eventually make up the bulk of market share, 8GB is still going to be king for years yet. How long ago was the 11GB 1080 Ti released now? And it will still be several more years before even the higher-end cards gain enough market share for 12GB to be "average" among even the top 20% of video cards. Who is going to aim their development at that?

It's going to take a major price drop in the costs of producing fast VRAM to make 12-16GB cards mainstream.

And that seems to be the exact opposite of what is happening. The industry is pushing ever faster and more expensive new VRAM designs but keeping the total amount relatively stable. GDDR5, GDDR6, HBM, etc.

We're seeing the same thing with DDR5 in PCs. It's a fairly expensive process so far, adoption has not been fast, and the switch to DDR5 isn't bringing much of a push for MORE RAM either.

I think we are near peak RAM from a design standpoint now. Look for bus improvements and high speed data streaming from here.

100% this. There is only so much VRAM you would need for a single frame. Obviously, you need to cache assets for more than one frame, but as bandwidth has increased and become more efficient, it's very clear that 4 GB is still good on the bottom end and 8 GB does fine on the top end for 99% of scenarios, even with RTX. Software really does a great job of seamlessly utilizing system memory when VRAM looks to be insufficient.
 

TheSlySyl

2[H]4U
Joined
May 30, 2018
Messages
2,272
At some point they're gonna start pushing 8K, and 8K with the now-standard full HDR (or higher) color range is probably gonna saturate current VRAM.

But I think we still have at least 5 years before that happens.
 

LukeTbk

2[H]4U
Joined
Sep 10, 2020
Messages
2,948
More like 10 years. 1080p still controls a 66% share and 1440p adoption is at 10%. 4K a whopping 2%...
It will not be that long before a giant proportion of the TV/console (PlayStation/Xbox) combos in the US market are 4K ones (and it does not matter if most games don't run at 4K all the time, or ever; perception is reality here).

It would not be surprising for the PS6/next Xbox to do some marketing around 8K support. Seven years between generations would also not be surprising, which would put that push around 2027, or in 5 years or so; it could be sooner if TV makers have a hard time selling similarly sized but just much better 4K TVs to existing 4K TV owners.

And while it's true that 3440x1440 or 4K is just 3.75% of gamers according to the Steam survey, that is not far from the market share of 3080-or-higher video cards:

3080 (1.44%), 3080 Ti (0.6%), 3090 (0.46%), 6900 XT (0.15%): about 2.65%, which on the PC side would be the only 8K gaming market for the first 3-4 years. It can seem really low, but with a Steam user base of over 120 million, every percentage point is 1.2 million people, and they're the type of people who pay huge dollars/margins on stuff.

If video cards made for the server/industrial/AI side happen to be strong enough to do 8K, the ones that go unsold or aren't good enough for that market could be passed down to gaming.

There is also Apple's VR headset with custom Apple silicon coming, with a rumored 7680x4320 resolution and the technology to render only the relevant portion of the display at 8K. With good eye tracking and a large enough screen (or a very large VR FOV), you can have 8K on only the ~3% of our field of view that sees high resolution, with some margin of error around it, and the rest at only "4K" down to 2K farther out (with less denoising on those parts as well if you do RT, and so on), which would make the task much easier and feasible on the GPU side.
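Back-of-the-envelope pixel budget for that foveated idea. The region splits and fallback densities below are purely illustrative assumptions, not specs from any real headset:

```python
# Illustrative foveated-rendering pixel budget: a small foveal region shaded at
# full 8K density, a mid region at ~4K density, the far periphery at ~1080p density.
full_8k = 7680 * 4320                    # ~33.2 M pixels
foveal = 0.03 * full_8k                  # ~3% of the view at full 8K density
mid = 0.30 * full_8k / 4                 # next 30% at 1/4 density (~"4K")
far = 0.67 * full_8k / 16                # remaining 67% at 1/16 density (~"2K")

shaded = foveal + mid + far
print(f"Brute-force 8K: {full_8k / 1e6:.1f} M pixels")
print(f"Foveated:       {shaded / 1e6:.1f} M pixels (~{shaded / full_8k:.0%} of brute force)")
```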
 

TheSlySyl

2[H]4U
Joined
May 30, 2018
Messages
2,272
More like 10 years. 1080p still controls a 66% share and 1440p adoption is at 10%. 4K a whopping 2%...
Oh, I'm not thinking MASS adoption, I'm talking about adoption by the top 1% or so of people who are willing to spend $$$$ on both displays and video cards.
 

GoldenTiger

Fully [H]
Joined
Dec 2, 2004
Messages
26,006
Oh, I'm not thinking MASS adoption, I'm talking about adoption by the top 1% or so of people who are willing to spend $$$$ on both displays and video cards.
I'd go for an 8K60 G-Sync display at 32" in a heartbeat. I've been on 4K since 2014!
 