The 4090 Ti specs have been leaked.
https://www.pcgamesn.com/nvidia/rtx-4090-ti-release-date-price-specs-benchmarks
It's still amazing to me that AMD and Nvidia can hit such big jumps in performance between generations. I mean, look at Intel CPUs: they offer new power features and this and that, but do you ever see this type of jump?
For example, compare the 980 to the 4080, and then compare the i7 6700K with whatever i7 they have now. It's not even comparable.
It's because they can't.
It's all DLSS smoke and mirrors. If you look at process size as a predictor of perf/watt, and at the wattage specs of the GPU, the theoretical max increase from going from Samsung's 8N to TSMC's 4N process while moving from 384W to 320W is about 38.5%. And since power hasn't scaled linearly with gate size since the 32nm era, the real maximum increase is going to be much smaller than that. Probably 20-25%.
The 2x claims are bald-faced lies based on AI DLSS nonsense and misleading choices of benchmarks and baselines.
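A rough sketch of that back-of-the-envelope math (the ~1.66x perf/watt factor for the Samsung 8N to TSMC 4N jump is my assumption, back-solved so it lands near the 38.5% figure above; the wattages are the ones quoted in the post):

# Back-of-the-envelope estimate, assuming performance scales as
# (process perf-per-watt gain) x (board power ratio).
node_perf_per_watt_gain = 1.66   # assumed Samsung 8N -> TSMC 4N factor, not an official number
old_power_w = 384                # wattages from the post above
new_power_w = 320

theoretical_gain = node_perf_per_watt_gain * (new_power_w / old_power_w) - 1
print(f"theoretical max uplift: {theoretical_gain:.1%}")   # ~38%, in the ballpark of the 38.5% above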
This is just speculation based on the max specs of AD102. Nothing new, we've seen them before on videocardz.
Yup, I just posted the pic to keep the traffic here; that site had a bunch of garbage all over it.
Yep. If you don't like the jump, you're under no obligation to buy it. Wait for another gen then.
Ah, gotcha.
I don't know about you, but 20-25% increase in a single gen is reasonable to me.
It is to me as well. We have had gens when we've gotten less than that.
That number is simply there to point out that their public 2x claims are complete bullshit.
We don't really need to be rescued from marketing. We can read a graph and its footnotes. Some of us also like DLSS and don't find boosts to it to be "bull".
Because graphics is basically infinitely parallel. You can just do more at the same time, which means you can increase performance by just putting down more copies of the units that do the work. While you can do that with CPUs, it doesn't scale for everything: not all problems can be broken down to run in parallel to an arbitrary degree.
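A quick illustration (not from the thread, just the textbook Amdahl's law) of why "add more parallel units" keeps paying off for graphics but stalls for workloads with any serial portion:

# Amdahl's law: speedup from n parallel units when a fraction p of the work
# can run in parallel. Graphics sits near p ~ 1; many CPU workloads don't.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.99, 0.90, 0.50):   # parallel fraction of the workload
    print(p, [round(amdahl_speedup(p, n), 1) for n in (2, 8, 64, 1024)])
# Even at p = 0.90, 1024 units top out below a 10x speedup.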
The new power spec might be problematic for people looking to upgrade current systems.
Imagine being bothered that Nvidia or AMD isn't literally 2x doubling raw raster performance every 2 years, and may have only managed somewhere between 1.5x and 2x.
Strip away all the brand-specific value adds like DLSS and RTX and it's still going to be a monster leap.
Btw, I wouldn't be surprised if the actual 4090 Ti ends up being the 600W design they scaled the 4090 back from.
Well, that would be a 50%-100% increase over the 3090 Ti, and that isn't happening either.
Depends.
Not sure which one we are talking about exactly, but with the mention of a 3090 Ti and the 4090 Ti specs, I am not sure why you are saying that.
The 4090 has 2.7 times the transistors, about 2.3 times the pixel rate, 2.05 times the teraflops, and, in native rendering with no DLSS:
https://wccftech.com/nvidia-geforce...k-2077-dlss-3-cuts-gpu-wattage-by-25-percent/
about 1.66x the average FPS of a 3090 Ti Suprim X boosted to a similar 455W power draw in Cyberpunk (1440p max settings, Ultra RT + Psycho).
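Putting those ratios side by side (the figures are the ones quoted above; the "fraction of the paper uplift realized" framing is mine, just for illustration):

# Spec ratios vs. the measured FPS ratio quoted above (4090 vs 3090 Ti).
ratios = {
    "transistors": 2.70,
    "pixel rate": 2.30,
    "TFLOPS": 2.05,
    "measured FPS (CP2077, native, Psycho RT)": 1.66,
}
for name, value in ratios.items():
    print(f"{name}: {value:.2f}x")

# Fraction of the raw TFLOPS uplift that shows up in this one benchmark:
print(f"realized vs TFLOPS: {1.66 / 2.05:.0%}")   # ~81%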
That's with DLSS enabled. That's not apples to apples.
If DLSS isn't disabled, the benchmark isn't valid. The same should probably go for RT as well.
They're both valid and often-used technologies nowadays.
That is in one game and using RT. Let's wait and see what the average improvement is when measured by independent reviewers using non-RT titles. If those extra transistors were added to improve performance in RT-enabled games, the improvement in raster-only titles might be smaller.
After a check, there are still many titles under 100 FPS at 4K ultra details without RT (ultra details can often be ridiculously hard to run), let alone titles yet to be released in the next 2 years.
Even if it is just 25-30% higher than a 3090 Ti instead of 60%, that could be good enough, especially with a small tweak of settings that gives back a lot of FPS for little visual change.
I feel it is close to fair, if not fully fair, that if you want more than a 6950 XT or the upcoming near-top RDNA 3 card, it will probably be to run upcoming path-traced levels of RT in games, and it is correct to fully include that aspect in how much of a performance jump we see.
Raster performance improvement in the top card isn't interesting anymore when the 3090 can already run very nearly everything at 4K resolution at 120+ FPS without ray tracing. If I'm spending $1,600 on a video card, then the ray tracing performance is what I'm buying it for. If you're only interested in raster performance, then a 3070/3070 Ti is very likely more than enough card for your use case, especially when they're going for half of what the 4080 12GB will go for at this point.
That's very explicitly without DLSS:
Cyberpunk 2077 Ultra Quality + Psycho RT (Native 1440p):
- MSI RTX 3090 Ti SUPRIM X (Stock Native 1440p) - 37 FPS / 455W Power / ~75C
- NVIDIA RTX 4090 FE (Stock Native 1440p) - 60 FPS / 461W Power / ~55C
- RTX 4090 vs RTX 3090 Ti = +62% Faster

Cyberpunk 2077 Ultra Quality + Psycho RT (DLSS 1440p):
- MSI RTX 3090 Ti SUPRIM X (DLSS 2 1440p) - 61 FPS / 409W Power / 74C
- NVIDIA RTX 4090 FE (DLSS 3 1440p) - 170 FPS / 348W Power / ~50C
- RTX 4090 vs RTX 3090 Ti = +178% Faster
As for the DLSS strangeness, we'll see what to make of it; that is more like 2.78x than 1.66x.
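Quick sanity check on those figures (FPS numbers copied from the list above; note the 4090's DLSS number includes DLSS 3 frame generation, so it isn't directly comparable to the DLSS 2 result):

# Recomputing the uplift from the FPS figures quoted above.
native_3090ti, native_4090 = 37, 60    # FPS, native 1440p, Ultra + Psycho RT
dlss_3090ti, dlss_4090 = 61, 170       # FPS, DLSS 2 (3090 Ti) vs DLSS 3 (4090)

print(f"native: {native_4090 / native_3090ti:.2f}x")   # ~1.62x, the +62% above
print(f"DLSS:   {dlss_4090 / dlss_3090ti:.2f}x")       # ~2.79x, the +178% above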
Native resolution or GTFO.
I miss the days when this was the goal. Now it's all compromises.
I remember now, though. This is the new extreme RT mode they created just as a tech demo for the new GPUs that intentionally sabotages performance of previous gens so it can highlight the amazing performance increases.
Play it at normal RT modes or RT off, and there is much less improvement.
Maybe, but RT Overdrive is not mentioned anywhere, while Psycho RT is (and I think it requires a yet-to-be-released dev build to access that special new RT feature).
And the DLSS 3 + RT Overdrive marketing talks about 4x the performance, which is significantly more than the under-3.0x seen here (but maybe it only reaches that 4x-5x boost at 4K).
Is this something you actually know (source?) or speculation?
Apples vs. oranges. Do you really want a motherboard/CPU combo that demands 400W+ of power pumped somehow to "the CPU", which can now occupy 2 to 3 times the area that it did before? The GPU became the "land and power grab". Not sure the world is ready for two, especially when we're talking about the more critical component. Also, there would be tons of issues. Not necessarily a problem for those already running dual-socket workstations, but maybe a surprise for the rest.
Nothing will ever beat native resolution, and it should be the basic expectation, not using trickery that sabotages IQ.
It is absolutely possible (it already is in some aspects) for a lower resolution + intelligent reconstruction to beat native without any extra work done on it, or at least I really do not understand why that could not be the case, considering the learning model is built from images of much higher quality than regular native 4K game output.
I've seen DLSS in person. It is easily objectively worse than native resolution. AI is not magic. It fills in the gaps with guesses, and those guesses can be and often are wrong. You may get a smoother image, but they are wrong in other ways.
Are you talking about DLSS 3.0 and frame insertion? 2.0 does not do any of that.
I guess someone like you would probably consider Digital Foundry to just be a shill for clearly pointing out that using DLSS can result in an overall better image in some cases. Personally, I just take it on a game-by-game basis and don't make ignorant generalizations.
Yes, they often are, but that does not mean it will still be the case in 2050 (or at least, how does one know that, overall, the better-than-native guesses vs. the worse ones will not average out positive?).
Part of me wonders if some of the poo-pooing isn't simply some guys having a very hard time with the fact that the new gen was named after a lady (Ada Lovelace, widely credited as the first computer programmer).