Modded NVIDIA GeForce RTX 3070 With 16 GB of VRAM Shows Impressive Performance Uplift

erek

Love this kind of stuff

“As said, the recent mod is a bit more complicated than the earlier one done on some earlier graphics cards, as some resistors needed to be grounded in order to support higher-capacity memory ICs, and the modded graphics card had to be set to high-performance mode in the NVIDIA Control Panel, in order to fix flickering.

AMD marketing has recently called out NVIDIA and pulled the VRAM card, but with NVIDIA launching the GeForce RTX 4070 with 12 GB of VRAM, it appears this won't change anytime soon. These mods show that there is definitely the need for more VRAM, at least in some games.”



Source: https://www.techpowerup.com/307724/...b-of-vram-shows-impressive-performance-uplift
 
The fact it works leaves them room for the SUPER refresh in Q4 2023, Q1 2024 where they give slightly higher clocks and increases in ram.
 
Man, I've been really REALLY wanting to do this to my 3070Ti for basically as long as I've had the card. Gotta find a deceased 3090Ti to harvest some 16Gbit GDDR6X chips from... Oh yeah, and someone who is excellent at reballing and willing to help.
 
You just read my mind.

Salvage some memory from a dead 3090 Ti?

And if that works, how about a 48GB 3090?
Maybe digikey or mouser sells chips?

As for the 3090...nope. Having put a waterblock on both a 3080 and 3090, they use the same board. There are free slots on a 3080 board, but the 3090 is utilizing all of the spots...12gb on the front and 12gb on the back.
 
I wonder if the bios just sees the additional vram.

I also wonder about taking a 3080 and increasing it to say....24gb.
According to GPU-Z, every RTX 2000 / 3000 card I've checked already has support in the VBIOS for 16Gbit GDDR modules, so doubling VRAM should be possible for most (except for the 3090 Ti and other cards that already use 16Gbit chips). I've seen people doubling a 2070, 2080, 2080 Ti, a different 3070... the biggest hurdle seems to be the actual swap.
Adding memory channels is a whole other thing- not possible if those segments of the IMC are disabled/defective, even if the 3080 is on a PCB that supports 12 channels. 20GB 3080 would be quite nice though!
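For anyone who wants to sanity-check the capacity math, here's a rough Python sketch (the chip counts are just the commonly cited board configs, not something I've verified on every SKU): total VRAM is memory channels × chips per channel × chip density, so swapping 8Gbit (1GB) chips for 16Gbit (2GB) doubles capacity without touching the bus, while adding channels would need IMC segments a cut-down die may not even have.

Code:
# Rough sketch: VRAM = 32-bit channels x chips per channel x chip density.
# Chip counts are the commonly cited board configs (assumption, not verified per SKU).
boards = {
    "RTX 3070": (8, 1),    # 256-bit bus, chips on the front only
    "RTX 3080": (10, 1),   # 320-bit bus, chips on the front only
    "RTX 3090": (12, 2),   # 384-bit bus, clamshell: chips on front and back
}

def capacity_gb(channels, chips_per_channel, density_gb):
    return channels * chips_per_channel * density_gb

for name, (ch, per_ch) in boards.items():
    stock = capacity_gb(ch, per_ch, 1)    # 8Gbit (1GB) chips
    modded = capacity_gb(ch, per_ch, 2)   # 16Gbit (2GB) chips
    print(f"{name}: {stock}GB stock -> {modded}GB with 16Gbit chips")
# Prints 8->16, 10->20 and 24->48, which lines up with the mods being discussed.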

Maybe digikey or mouser sells chips?

As for the 3090...nope. having put a waterblock on both a 3080 and 3090, they use the same board. There are free slots on a 3080 board, but the 3090 is utilizing all of the spots...12gb on the front and 12gb on the back.
A 48GB 3090 should be possible with this method; it has 8Gbit chips (hence the GDDR sandwich to get 24GB), so swapping in 16Gbit chips would give 48GB (like the RTX A6000).
 
As for the 3090...nope. having put a waterblock on both a 3080 and 3090, they use the same board.

For most boards, this is not the case. You must have a rare example? Just looking at the back of a 3080 pcb is simple enough to prove this.

There are free slots on a 3080 board, but the 3090 is utilizing all of the spots...12gb on the front and 12gb on the back.

We're talking double capacity chips i.e. 2GB modules. The 3070 in the OP was originally an 8GB card brought up to 16GB via replacing all the memory ICs with double capacity versions. All pads on it were populated when it was in its original 8GB configuration. It's not about free slots.

All 3090 pcbs have 12 pads on the front and back respectively iirc. So in theory, 24x 2GB = 48GB.

Most if not all 3080 pcbs have 12 pads on the front. So in theory, 12x 2GB = 24GB.

This all assumes they are wired up correctly.
 
EVGA used to do this stuff for us before nVidia stopped allowing partners to make hardware modifications to their product stack.

They more or less could have shown that all of these cards could have had a massive uplift in performance by having the vRAM they should have had in the first place. And I bet it would’ve been an SKU that wouldn’t have cost customers more than $70-$100 additional.

Pretty sure all the 12GB and under cards would see a noticeable performance benefit in both 2.5k and UHD. And the 8GB and under cards unquestionably are starved at this point. The 3070 points this out perfectly.
 
EVGA used to do this stuff for us before nVidia stopped allowing partners to make hardware modifications to their product stack.

They more or less could have shown that all of these cards could have had a massive uplift in performance by having the vRAM they should have had in the first place. And I bet it would’ve been an SKU that wouldn’t have cost customers more than $70-$100 additional.
I miss the days of multiple memory capacity options. IIRC up until G80 / GT200-ish times board partners didn't even have to go rogue to do it, NVidia and ATI etc just straight up offered two memory options as a way of differentiating the product stack.

Seems like that would be relevant now... People who are gaming at lower res/higher fps don't need a ton of VRAM but people playing at higher res/lower fps do. Obvious product segmentation opportunity...
 
No. Flip your card and buy a new one.
At this expense and risk, are sane folks really giving a fuck about this?
Hardcore modders/OC'ers etc... ya go bruh! 100% for the win. HW sport is cool.
 
Hardware Unboxed just dropped a new video comparing the RTX 3070 to an A4000 which is essentially a 3070 with 16gb of RAM.



Pretty incontrovertible now that Nvidia's stinginess on RAM absolutely has major implications for gaming longevity; anyone who bought a 3070 should be rightly pissed that their GPU only lasted one generation in terms of being able to run games at maximum settings.
 
For most boards, this is not the case. You must have a rare example? Just looking at the back of a 3080 pcb is simple enough to prove this.



We're talking double capacity chips i.e. 2GB modules. The 3070 in the OP was originally an 8GB card brought up to 16GB via replacing all the memory ICs with double capacity versions. All pads on it were populated when it was in its original 8GB configuration. It's not about free slots.

All 3090 pcbs have 12 pads on the front and back respectively iirc. So in theory, 24x 2GB = 48GB.

Most if not all 3080 pcbs have 12 pads on the front. So in theory, 12x 2GB = 24GB.

This all assumes they are wired up correctly.
So I watched the video and didn't realize they were replacing the modules. I thought they were just adding modules. Makes more sense now.
 
The term for what NVIDIA is doing here is called forced obsolescence, and they are finally getting to Intel-levels of product segmentation and artificial life cycles for usage.
This is what happens when there is little to no competition.


… And as of 2021 efforts being poured into: “Intel Software Defined Silicon or SDSi for short. It's clear that it's for Xeon CPUs and the GitHub page mentions that SDSi "allows the configuration of additional CPU features through a license activation process."”
 

… And as of 2021 efforts being poured into: “Intel Software Defined Silicon or SDSi for short. It's clear that it's for Xeon CPUs and the GitHub page mentions that SDSi "allows the configuration of additional CPU features through a license activation process."”
As opposed to now, where I have to decide if I want my HCI stack to use the H series Xeons or the M series, but I can't get the M series because of demand, so I go for the P series instead and augment it with a different storage solution. Or perhaps I throw caution to the wind, load up on my 240V, and go with the Q or S series, or does accounting say you don't need to be paying more for that letter at the end when you can get the normal one instead? Still, it turns out none of it matters, because the one I really wanted has a 6-month wait time because everybody else wanted it too.
If you are somebody who buys Xeons then the SDSi makes your life easier.
I already pay extra for a letter on the end over the base model to accelerate some specific task the machine will be doing over its life span, buying a base model then choosing to pay extra to activate those features ends up costing me the same but now I don't have long ass wait times while I wait on supply for a specific chip. Or god forbid the server get repurposed and now it would be much nicer if it had the P series and not the M but now I need to track down a 2-year-old Xeon that may or may not be hardware locked to the vendor anyways.
SDSi for business and enterprise is a good thing, if Intel ever makes it a consumer thing I will personally lead the mob with the pitchforks.
 
As opposed to now where I have to decide if I want my HCI stack to use the H series Xeons or the M series, but I can't get the M series because of demand so I go for the P series instead and augment it with a different storage solution. Or perhaps I blow caution to the wind load up on my 240v and go with the Q or S series or does accounting say you don't need to be paying more for that letter at the end when you can get the normal one instead? Still, it turns out none of it matters because the one I really wanted has a 6 month wait time because everybody else wanted it too.
If you are somebody who buys Xeons then the SDSi makes your life easier.
I already pay extra for a letter on the end over the base model to accelerate some specific task the machine will be doing over its life span, buying a base model then choosing to pay extra to activate those features ends up costing me the same but now I don't have long ass wait times while I wait on supply for a specific chip. Or god forbid the server get repurposed and now it would be much nicer if it had the P series and not the M but now I need to track down a 2-year-old Xeon that may or may not be hardware locked to the vendor anyways.
SDSi for business and enterprise is a good thing, if Intel ever makes it a consumer thing I will personally lead the mob with the pitchforks.
The only issue with this is if it is dropped down to consumer-grade CPUs and SoCs.
For enterprise this would be highly efficient and a much-needed improvement, but for everyone else it could turn into a vendor lock-in nightmare and a potential security issue; knowing how Intel has skimped on security until recent years, this is most likely a 'when' rather than an 'if'.
 
They must want to get all the lower end cards out the door before the 4090TI is released. I haven't heard a thing about that card since it was leaked 3-4 months ago.
Just interested in benchmarks for the card; I would never buy one, I'd be afraid of melting my system.
 
EVGA used to do this stuff for us before nVidia stopped allowing partners to make hardware modifications to their product stack.

They more or less could have shown that all of these cards could have had a massive uplift in performance by having the vRAM they should have had in the first place. And I bet it would’ve been an SKU that wouldn’t have cost customers more than $70-$100 additional.

Pretty sure all the 12GB and under cards would see a noticeable performance benefit in both 2.5k and UHD. And the 8GB and under cards unquestionably are starved at this point. The 3070 points this out perfectly.
Yep. I had an EVGA GTX 570 with 2.5GB VRAM while the rest came with 1.25GB.
 
The only issue with this is if it is dropped down to consumer-grade CPUs and SoCs.
For enterprise this would be highly efficient and a much needed improvement, but for everyone else it could turn into a vendor lock-in nightmare and potential security issue; knowing how Intel has skimped on security until recent years, this is most likely a 'when' than an 'if'.
The idea of a programmable FPGA in a consumer device is scary. Can’t stop people from clicking now and that only writes to boot sectors, so let’s give the bad people the ability to write to the silicon now too… big fat nope… that’s a nightmare right there.
 
Yep. I had an EVGA GTX 570 with 2.5GB VRAM while the rest came with 1.25GB.
I understand why Nvidia cracked down on that stuff, it’s frustrating but I get it.
Nvidia is going to need to split their silicon at some point; AMD's RDNA and CDNA split allows them to protect their workstation market without artificially gimping their consumer one.
Nvidia mostly relies on drivers, VRAM, and the VBIOS, as the silicon is for the most part identical, just differently binned.
 
Ok, so I been thinkin...

Where did they get 2gb vram modules? The 3070 uses gddr6. Seems to me that the 3090ti is the only card using 2gb modules, but it is gddr6x....

So that makes me think they bought fresh modules.

Question for those out there with a 30 series...

Would you pay $150 to have someone upgrade the ram to 16gb/20gb?
 
Ok, so I been thinkin...

Where did they get 2gb vram modules? The 3070 uses gddr6. Seems to me that the 3090ti is the only card using 2gb modules, but it is gddr6x....

So that makes me think they bought fresh modules.

Question for those out there with a 30 series...
2GB GDDR6 modules are around. I'm pretty sure all the Ampere A-series Pro cards used them (except for the 6GB A2000), and they can presumably be bought in bulk from suppliers bc independent GPU repair people have them. Most RDNA2 cards use 2GB GDDR6 too, but idk if all those bins would work with NV BIOS bc some (like 17.5GTs and 18GTs) were never used by NV at all.

Would you pay $150 to have someone upgrade the ram to 16gb/20gb?
If the specific someone came with references and had experience replacing a lot of GDDR6 modules (i.e. an experienced GPU repair person)

My 3070Ti FE with 16GB instead of 8 would last absolutely forever for WS use and light gaming. I've already been planning on getting it done sometime :)
 
2GB GDDR6 modules are around. I'm pretty sure all the Ampere A-series Pro cards used them (except for the 6GB A2000), and they can presumably be bought in bulk from suppliers bc independent GPU repair people have them. Most RDNA2 cards use 2GB GDDR6 too, but idk if all those bins would work with NV BIOS bc some (like 17.5GTs and 18GTs) were never used by NV at all.


If the specific someone came with references and had experience replacing a lot of GDDR6 modules (i.e. an experienced GPU repair person)

My 3070Ti FE with 16GB instead of 8 would last absolutely forever for WS use and light gaming. I've already been planning on getting it done sometime :)
I have a 3080 10gb in my son's machine and have been eyeing Hogwarts. But that dang 10gb just isn't enough.

I've been bumming around on the techpowerup reviews with the pictures of the memory chips. It looks like the 2gb variant of the gddr6x chip is maybe in the $30 each range....
 
The fact it works leaves them room for the SUPER refresh in Q4 2023, Q1 2024 where they give slightly higher clocks and increases in ram.

It's cute that you believe they would add more RAM. :)
The SUPER refreshes are not about the consumer; they are about refreshed, lower-cost silicon. A few extra percentage points of performance for the customer, a few points more margin for Nvidia. No way is Nvidia increasing their costs by buying more RAM. Or if they do, they will just pass the cost on to the consumer.
 
I have a 3080 10gb in my son's machine and have been eyeing Hogwarts. But that dang 10gb just isn't enough.

I've been bumming around on the techpowerup reviews with the pictures of the memory chips. It looks like the 2gb variant of the gddr6x chip is maybe in the $30 each range....
Unless you are running 4K max settings with RT, 10GB should be enough.
Around 13GB of VRAM is what was being hit at the top-end settings, but if that is what you are going for...
 
If there was a reliable way to mod additional RAM onto graphics cards and still have them perform normally, I'd probably want to do it with my RTX 3080. I'm pretty competent with soldering.

If someone can figure it out, it could warrant a new market for GPU RAM-mod kits.
 
The only issue with this is if it is dropped down to consumer-grade CPUs and SoCs.
For enterprise this would be highly efficient and a much needed improvement, but for everyone else it could turn into a vendor lock-in nightmare and potential security issue; knowing how Intel has skimped on security until recent years, this is most likely a 'when' than an 'if'.
I see it as a win potentially. I had the same initial feeling as you did but people WILL hack it. It'll be back to flashing something or whatever to unlock features, like we could back in the day.

Can't unlock a cpu that's been cut, but if the feature is just off then life finds a way.
 
It looks like the 2gb variant of the gddr6x chip is maybe in the $30 each range....
Digikey has[1] 2GB GDDR6 chips for $20...in quantity 2000.

E2A: Mouser's got 1.5GHz stuff at $28 in single-unit quantity.

[1] well, they have a dozen or so listed; but they're all out of stock.
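If you want a back-of-the-envelope number for what a mod like this might run, here's a quick sketch using the rough figures thrown around in this thread ($20-30 per 2GB module from the distributor listings above, plus the ~$150 labor number floated earlier; all assumptions, not actual quotes):

Code:
# Back-of-envelope mod cost using the rough per-chip and labor numbers from
# this thread (assumptions, not actual repair-shop quotes).
def mod_cost(chip_count, chip_low=20.0, chip_high=30.0, labor=150.0):
    return chip_count * chip_low + labor, chip_count * chip_high + labor

for name, chips in [("3070 -> 16GB", 8), ("3080 10GB -> 20GB", 10)]:
    low, high = mod_cost(chips)
    print(f"{name}: roughly ${low:.0f}-{high:.0f} in parts and labor")
# ~$310-390 for a 3070, ~$350-450 for a 3080 -- before pricing in the risk of
# killing the card during the reball.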
 
Well, the problem is no one wants to buy anyone's 8GB nVidia card.
Have to take a big loss on it after a single generation. Thank nVidia!
So true, which is why the 4060/4060ti being 8GB of memory "could" be a massive failure if Nvidia really does try to release them at $400 and $500 LOL
 
Well, the problem is no one wants to buy anyone's 8GB nVidia card.
Have to take a big loss on it after a single generation. Thank nVidia!
The problem is the perceived value by owners of said cards IMO. A used 3070 selling for half of its new price may sell, allowing people to get into the game with their "budget" card. However, trying to sell a used 3070 at $400+ when a next-gen card that is probably just as fast, brand new, for the same price or less is "right around the corner" is definitely a hard sell.

Plus I do think the whole mining card thing may have scared away potential used buyers as well.
 