AMD’s Polaris will be a mainstream GPU, not high-end

It will either be 980 Ti performance, or it will be ~$300, NOT BOTH, for Christ's sake! Stop the rumour-mongering.

If AMD has an awesome chip that can compete with a $650 card, you can bet the farm that they are going to price it accordingly.
 
It will either be 980 Ti performance, or it will be ~$300, NOT BOTH, for Christ's sake! Stop the rumour-mongering.

If AMD has an awesome chip that can compete with a $650 card, you can bet the farm that they are going to price it accordingly.

You have to think ahead of the curve. The 980Ti is going EOL soon.
 
If it is indeed that powerful and that cheap, maybe Nvidia won't release a lame x60/x70 card. 980 Ti performance at around $350 seems good to me. I'm not interested in a mere 20% performance jump from my GTX 970, so I hope this rumor is true.
 
You have to think ahead of the curve. The 980Ti is going EOL soon.

The 980 Ti will go EOL, replaced by the baseline 1080, if history serves as an example. The 980 replaced the 780 Ti and was only marginally (single-digit percentage) faster; it wasn't until the Titan X/980 Ti that 780 Ti users had a 'mere' 35-40% upgrade option available.

Essentially, I would highly, HIGHLY doubt Nvidia would replace a $650 card with one at half the price: that's just a stupid business decision. The 770 and 970 have both been ~20-30% jumps from the cards they replaced. The 770 was LITERALLY a 680, and the 970 was comparable to a 780 (vanilla). So a fictional 1070 will be as fast as a 980 (vanilla), which does NOT bode well for a 1080 being any more than ~10% faster than a 980 Ti. And you can BET the 1080 will be priced at $500+: cheaper than a 980 Ti, but that AIN'T no $300 card.

Be realistic, folks: you do this every damn generation.
 
But AMD is saying that they want to jump-start VR. I agree with you 100% about how things have gone in the past. But without the mainstream being able to purchase VR titles, VR is a dead technology. AMD has put a great many of their eggs in the VR basket; it's go or blow time for them. 7 million VR-capable PCs isn't enough for developers to start making VR games. The hardware in the mainstream computer base has to be VR-capable.

I see a Polaris card that replaces the 380 and is as fast as an R9 390. A 380 isn't really VR-ready, right? Note how Nvidia got rid of the 750 Ti and the GTX 950 is its current replacement. I don't think they were in the same league at first. To me it's already started, subtly, from both teams.

Here, compare the 750 Ti vs the 950:
GeForce GTX 950 vs 750 Ti

Now let's say AMD replaces their 750 Ti equivalent with a 380, or comes up with a Polaris card that matches the 380 but slots into the 750 Ti price range?

It doesn't seem far-fetched to me. It seems like 100% upgrade time to me.
 
The 980 Ti will go EOL, replaced by the baseline 1080, if history serves as an example. The 980 replaced the 780 Ti and was only marginally (single-digit percentage) faster; it wasn't until the Titan X/980 Ti that 780 Ti users had a 'mere' 35-40% upgrade option available.

Essentially, I would highly, HIGHLY doubt Nvidia would replace a $650 card with one at half the price: that's just a stupid business decision. The 770 and 970 have both been ~20-30% jumps from the cards they replaced. The 770 was LITERALLY a 680, and the 970 was comparable to a 780 (vanilla). So a fictional 1070 will be as fast as a 980 (vanilla), which does NOT bode well for a 1080 being any more than ~10% faster than a 980 Ti. And you can BET the 1080 will be priced at $500+: cheaper than a 980 Ti, but that AIN'T no $300 card.

Be realistic, folks: you do this every damn generation.

Wait a second. Who said NV was going to replace 980Ti performance for half the price?
 
Wait a second. Who said NV was going to replace 980Ti performance for half the price?

Just about everyone on this forum.

It's ridiculous.

Edit: The OP of this thread did in a roundabout way, and you did indirectly by saying Nvidia would EOL the 980 Ti, which would somehow justify AMD releasing a 980 Ti performance card for half the price.
 
Just about everyone on this forum.

It's ridiculous.

Edit: The OP of this thread did in a roundabout way, and you did indirectly by saying Nvidia would EOL the 980 Ti, which would somehow justify AMD releasing a 980 Ti performance card for half the price.

I could believe AMD might try to gain market share by doing 980 Ti performance for half the price, but I don't see NV doing that. I think NV will give us greater-than-980 but less-than-980 Ti performance for half the price. So a 1070 for $330 sounds about right and fits in those performance parameters.
 
Actually, there is another aspect we haven't talked about: if AMD wants to get VR to the mainstream, you really do have to think about how much VR headsets cost too. But it will take time for VR games to come out, so maybe they will cut prices down a bit by then...
 
I could believe AMD might try to gain market share by doing 980 Ti performance for half the price, but I don't see NV doing that. I think NV will give us greater-than-980 but less-than-980 Ti performance for half the price. So a 1070 for $330 sounds about right and fits in those performance parameters.

Now YOU are doing that. Stop it. How much was the 970 at launch? How did it compare to a 780 Ti? How much was a 770 at launch, and how did it compare to a 680? How much was a 670 at launch, and how did it compare to a 580? These are the questions you need to ask. You will get a reasonably priced 980, NOT a reasonably priced 980 Ti.
 
Now YOU are doing that. Stop it. How much was the 970 at launch? How did it compare to a 780 Ti? How much was a 770 at launch, and how did it compare to a 680? How much was a 670 at launch, and how did it compare to a 580? These are the questions you need to ask. You will get a reasonably priced 980, NOT a reasonably priced 980 Ti.

I think we're just missing each other a bit. I agree we aren't getting a reasonably priced 980 Ti from NV. We'll just have to agree to disagree on the 1070 pricing and performance. The good thing about it is we'll find out in a couple of months and I'll be enjoying my new 1080.
 
Now YOU are doing that. Stop it. How much was the 970 at launch? How did it compare to a 780 Ti? How much was a 770 at launch, and how did it compare to a 680? How much was a 670 at launch, and how did it compare to a 580? These are the questions you need to ask. You will get a reasonably priced 980, NOT a reasonably priced 980 Ti.

Well, let's think like the guys who do marketing for AMD. They will quickly point out that the 390X is as fast as a 980 Ti in some games if you disable X, Y, and Z. This might explain some of the rumor mill. Lots of optimism.

A 390X, BEFORE OC potential is taken into consideration, matches a GTX 980 going by testing on various forums. So why would the R9 390X replacement only match a GTX 980 when they've just switched to 14nm tech? It doesn't make sense to only match it. And AMD wants the mainstream to score well on the VR test; that means a score of 10 at least. Aren't the 390X like $379 or something? T-Storm and this are among the few websites that will load for me currently. Why can't the 14nm Polaris equivalent be $349?

I think you're underestimating Polaris as much as people are overhyping Polaris. :)

Price-checked the 390X and they are going for $379 on Newegg, depending on brand.
390x - Newegg.com
 
Now YOU are doing that. Stop it. How much was the 970 at launch? How did it compare to a 780 Ti? How much was a 770 at launch, and how did it compare to a 680? How much was a 670 at launch, and how did it compare to a 580? These are the questions you need to ask. You will get a reasonably priced 980, NOT a reasonably priced 980 Ti.

MSI GeForce GTX 970 Gaming 4 GB Review

Practically speaking, at that small a difference (relative average of 97 vs 100 for reference models at 1080p), you'd consider the GTX 970 and GTX 780 Ti the same performance class. The GTX 970 had more VRAM and a newer architecture (which has been borne out in some newer games), so I think it's fair to consider it at least a reasonable replacement for the 780 Ti in terms of what you are getting, just much cheaper ($330 vs $699). It was released essentially in a vacuum, and AMD just responded with (reluctant) price cuts; AMD did not respond with a new line until 10 months later.

GTX 770 vs GTX 680 is a rather easy case, as it was just the same chip with higher clocks and higher-speed memory (7GHz vs 6GHz) at a lower price ($400 vs $500). Even at the same performance, that would have been a 25% perf/$ increase just from the price adjustment. This was also released in a vacuum, and AMD just responded with price cuts, although that window was shorter than Maxwell's, as Nvidia then had to follow with price cuts ($400 to $330) after AMD released Hawaii and its 2xx series 5 months later.

GTX 670 was noticeably faster than GTX 580, to the point you'd consider it a higher performance class altogether. It launched at $400 vs $500, though, so the price gap was smaller.

NVIDIA GeForce GTX 670 2 GB Review

At the same price, no one would have chosen the GTX 580. The 670 also had the same advantages in VRAM and newer architecture. Unlike the previous cases, Nvidia's 6xx series was released with AMD's newest line having beaten it to market, although the 6xx series was still priced in a way that forced price cuts from AMD.
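If it helps put numbers on that, here's a quick back-of-the-envelope perf-per-dollar check (just a rough C++ sketch using the launch prices and relative-performance figures quoted above; the performance numbers are reviewers' 1080p relative averages, so treat the ratios as approximate):

```cpp
#include <cstdio>

int main() {
    struct Card { const char* name; double relPerf; double price; };
    // Outgoing card vs. the cheaper newcomer, using the figures quoted above.
    Card pairs[][2] = {
        {{"GTX 780 Ti", 100.0, 699.0}, {"GTX 970",  97.0, 330.0}},
        {{"GTX 680",    100.0, 500.0}, {"GTX 770", 100.0, 400.0}},
    };
    for (auto& p : pairs) {
        double oldPerfPerDollar = p[0].relPerf / p[0].price;
        double newPerfPerDollar = p[1].relPerf / p[1].price;
        std::printf("%s -> %s: %.2fx the perf/$\n",
                    p[0].name, p[1].name, newPerfPerDollar / oldPerfPerDollar);
    }
    return 0; // prints roughly 2.05x for the 970 case and 1.25x for the 770 case
}
```

Which matches the 25% figure for the 770 and has the 970 roughly doubling perf/$ against the 780 Ti at launch.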
 
The 980 Ti will go EOL, replaced by the baseline 1080, if history serves as an example. The 980 replaced the 780 Ti and was only marginally (single-digit percentage) faster; it wasn't until the Titan X/980 Ti that 780 Ti users had a 'mere' 35-40% upgrade option available.

Essentially, I would highly, HIGHLY doubt Nvidia would replace a $650 card with one at half the price: that's just a stupid business decision. The 770 and 970 have both been ~20-30% jumps from the cards they replaced. The 770 was LITERALLY a 680, and the 970 was comparable to a 780 (vanilla). So a fictional 1070 will be as fast as a 980 (vanilla), which does NOT bode well for a 1080 being any more than ~10% faster than a 980 Ti. And you can BET the 1080 will be priced at $500+: cheaper than a 980 Ti, but that AIN'T no $300 card.

Be realistic, folks: you do this every damn generation.

Well, it would get a lot more people out to upgrade. Most people don't upgrade every generation "just because". The 7** series was essentially a refresh of the same lineup at lower prices, which isn't typical. All that being said, I think your performance estimates are close.

But this thread is about AMD, not Nvidia.
 
The 980 Ti will go EOL, replaced by the baseline 1080, if history serves as an example. The 980 replaced the 780 Ti and was only marginally (single-digit percentage) faster; it wasn't until the Titan X/980 Ti that 780 Ti users had a 'mere' 35-40% upgrade option available.

Essentially, I would highly, HIGHLY doubt Nvidia would replace a $650 card with one at half the price: that's just a stupid business decision. The 770 and 970 have both been ~20-30% jumps from the cards they replaced. The 770 was LITERALLY a 680, and the 970 was comparable to a 780 (vanilla). So a fictional 1070 will be as fast as a 980 (vanilla), which does NOT bode well for a 1080 being any more than ~10% faster than a 980 Ti. And you can BET the 1080 will be priced at $500+: cheaper than a 980 Ti, but that AIN'T no $300 card.

Be realistic, folks: you do this every damn generation.


Yet nV has the same ~2.5x perf/watt advantage that AMD has (either a 45% performance increase at the same power or a 70% drop in power, as TSMC put it, so it's a bit more than 2.5x perf/watt), so don't think a 1070 can't reach 980 Ti performance; it's easy to do that and stay in the same power envelope without any modification to the architecture. Don't forget that nV has this advantage too: it's not the architecture that gives it, it's the FinFET node itself. Whether they do it is another matter, but if memory serves, the x70 series of cards has always gotten close to the top of the previous generation. So going by those numbers the x70 will be a killer card at $350; more likely it will be placed at $399.
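Just to make the arithmetic behind that node claim explicit (a rough sketch; the 45%/70% figures are simply the ones quoted above, not something I've verified against TSMC's own material):

```cpp
#include <cstdio>

int main() {
    // Two ways to read the quoted foundry claim for the new node:
    //  a) +45% performance at the same power
    //  b) -70% power at the same performance
    double speedGain = 0.45;
    double powerDrop = 0.70;
    std::printf("perf/W from the speed claim: %.2fx\n", 1.0 + speedGain);         // 1.45x
    std::printf("perf/W from the power claim: %.2fx\n", 1.0 / (1.0 - powerDrop)); // ~3.33x
    // The oft-quoted "~2.5x perf/W" sits between those two readings, since a real
    // design spends the node budget partly on clocks and partly on power.
    return 0;
}
```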
 
Yet nV has the same ~2.5x perf/watt advantage that AMD has (either a 45% performance increase at the same power or a 70% drop in power, as TSMC put it, so it's a bit more than 2.5x perf/watt), so don't think a 1070 can't reach 980 Ti performance; it's easy to do that and stay in the same power envelope without any modification to the architecture. Don't forget that nV has this advantage too: it's not the architecture that gives it, it's the FinFET node itself. Whether they do it is another matter, but if memory serves, the x70 series of cards has always gotten close to the top of the previous generation. So going by those numbers the x70 will be a killer card at $350; more likely it will be placed at $399.

The 470, 670, and 770 were clearly faster than the previous highest-end single-GPU card.

The 970 was also close (to the 780 Ti, at least; the Titan Black would make comparisons a bit trickier), but in my opinion, if you consider the other factors aside from the raw fps averages at the time, it should be considered better as well. That was at time of release, though; if you look at the situation in hindsight, opinion would weigh even more toward the 970.

GTX 570 vs GTX 480 is a bit tricky and the worst relative x70 showing. But the 5xx series was a really short turnaround: the GTX 570 released less than 7 months after the GTX 480, in the same calendar year.
 
People are losing sight of what AMD wants to do with this VR approach.

The Pro Duo really shows this. It's not just VR. Sure, one GPU per eye is a great excuse.

But really, it's about pushing mGPU support in conjunction with DX12. Notice how busy AMD has been with DX12 titles lately? Compare that to DX11. It's the Trojan horse through which, with support, AMD can start to really compete with Nvidia again across all product levels, as a smaller company, by utilizing a more scalable architecture. We all know that when CFX and SLI scale well, CFX practically always pulls ahead, increasingly so with more load/more cards.

So DX12/Vulkan is partially about making mGPU a bit easier for devs, with less support needed from manufacturers (but still required). AMD is not banking on people just running one card for the high end, looking forward. Hence the support and hardware tailored to be attractive to devs. A gaming card capable of running FirePro drivers is very interesting to some people who have multiple uses for high-end workstations... like devs.
 
Well, let's think like the guys who do marketing for AMD. They will quickly point out that the 390X is as fast as a 980 Ti in some games if you disable X, Y, and Z. This might explain some of the rumor mill. Lots of optimism.

A 390X, BEFORE OC potential is taken into consideration, matches a GTX 980 going by testing on various forums. So why would the R9 390X replacement only match a GTX 980 when they've just switched to 14nm tech? It doesn't make sense to only match it. And AMD wants the mainstream to score well on the VR test; that means a score of 10 at least. Aren't the 390X like $379 or something? T-Storm and this are among the few websites that will load for me currently. Why can't the 14nm Polaris equivalent be $349?

I think you're underestimating Polaris as much as people are overhyping Polaris. :)

Price-checked the 390X and they are going for $379 on Newegg, depending on brand.
390x - Newegg.com

I think this is the scenario we're likely going to see: performance slightly better than the 390X (and GTX 980), although not quite equalling the 980 Ti, for around $350 with much better power efficiency. In fact, I think this is a relatively conservative prediction given all the improvements that come with 14nm and the supposed FinFET improvements, especially considering the performance jumps we've seen in the past with a process-size reduction, e.g. GTX 570 vs GTX 670. And remember, AMD is effectively replacing a two-year-old chip (Hawaii), which also leads me to believe there will be at the very least performance parity, likely higher.
 
People are losing sight of what AMD wants to do with this VR approach.

The Pro Duo really shows this. It's not just VR. Sure, one GPU per eye is a great excuse.

But really, it's about pushing mGPU support in conjunction with DX12. Notice how busy AMD has been with DX12 titles lately? Compare that to DX11. It's the Trojan horse through which, with support, AMD can start to really compete with Nvidia again across all product levels, as a smaller company, by utilizing a more scalable architecture. We all know that when CFX and SLI scale well, CFX practically always pulls ahead, increasingly so with more load/more cards.

So DX12/Vulkan is partially about making mGPU a bit easier for devs, with less support needed from manufacturers (but still required). AMD is not banking on people just running one card for the high end, looking forward. Hence the support and hardware tailored to be attractive to devs. A gaming card capable of running FirePro drivers is very interesting to some people who have multiple uses for high-end workstations... like devs.

Hold on there: DX12 actually makes multi-GPU HARDER for the developers to implement. Before, it was a driver tick-box and then optimising engine code; now the developers have to code multi-GPU from the ground up themselves.

This is what people fail to realise: DX12 actually makes game design more difficult. DX11 took care of a LOT of the low-level stuff, but it did so using its own methods, and they were not up for debate. DX12 does NOTHING for the developers besides allowing access to the hardware at a low level.
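To give a feel for what "from the ground up" means in practice, here is a minimal sketch of the explicit multi-adapter starting point in D3D12 (my own illustration, not anyone's shipping engine code): the application itself enumerates every GPU and creates a device per adapter, and everything after that, scheduling, copies, and synchronisation, is on the developer.

```cpp
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

// Sketch: create one D3D12 device per hardware adapter (explicit multi-adapter).
std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // each GPU now needs its own queues, heaps, sync...
    }
    return devices;
}
```

Under DX11 the driver hid all of that behind AFR; under DX12 the engine has to decide what each device renders and how the results get combined.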
 
Hold on there: DX12 actually makes multi-GPU HARDER for the developers to implement. Before, it was a driver tick-box and then optimising engine code; now the developers have to code multi-GPU from the ground up themselves.

This is what people fail to realise: DX12 actually makes game design more difficult. DX11 took care of a LOT of the low-level stuff, but it did so using its own methods, and they were not up for debate. DX12 does NOTHING for the developers besides allowing access to the hardware at a low level.

This is anything but true. Game design under DX11 required drivers to function; that is a two-factor implementation before you could "enjoy" multi-GPU usage.
And the question is: what did DX11 actually do? Nothing for high-end gaming engines; it crippled performance due to DX11's limitation of only one CPU core talking to the GPU.

Under DX12, developers can now make use of any GPU resource as if it were just one GPU and farm out tasks accordingly. The extra work allows better control and, without a doubt, better results than would ever be possible under DX11. The option for developers to control what to implement for each game themselves allows far better results than the plain AFR we have seen until now...
 
This is anything but true. Game design under DX11 required drivers to function; that is a two-factor implementation before you could "enjoy" multi-GPU usage.
And the question is: what did DX11 actually do? Nothing for high-end gaming engines; it crippled performance due to DX11's limitation of only one CPU core talking to the GPU.

Under DX12, developers can now make use of any GPU resource as if it were just one GPU and farm out tasks accordingly. The extra work allows better control and, without a doubt, better results than would ever be possible under DX11. The option for developers to control what to implement for each game themselves allows far better results than the plain AFR we have seen until now...

Yes, developers have more control over multi-GPU rendering with DX12. But it requires "effort" and "time" and "money".

So what's the incentive?
 
Harder because it actually works properly and therefore takes a little more effort to make work? These are early days; it will get easier and better understood as time goes on, as with any new system. Hence AMD is stepping in and trying to shoulder some of that load for now, to draw awareness too.

Once the rest of the AAA developers get used to actually programming again, instead of the cut-and-paste shit-titles and ports of late, it'll be good for all of us. Fallout 4 is the poster child of this rubbish; what a bunch of hacked-up shit. Huge budget, no mGPU support, a practically ten-plus-year-old engine? Craptastic graphics? WTF?
Forcing mGPU acceptance into the mainstream is the only way to do it, and it's better for everyone in the long run.
 
People are losing sight of what AMD wants to do with this VR approach.

The Pro Duo really shows this. It's not just VR. Sure, one GPU per eye is a great excuse.

But really, it's about pushing mGPU support in conjunction with DX12. Notice how busy AMD has been with DX12 titles lately? Compare that to DX11. It's the Trojan horse through which, with support, AMD can start to really compete with Nvidia again across all product levels, as a smaller company, by utilizing a more scalable architecture. We all know that when CFX and SLI scale well, CFX practically always pulls ahead, increasingly so with more load/more cards.

So DX12/Vulkan is partially about making mGPU a bit easier for devs, with less support needed from manufacturers (but still required). AMD is not banking on people just running one card for the high end, looking forward. Hence the support and hardware tailored to be attractive to devs. A gaming card capable of running FirePro drivers is very interesting to some people who have multiple uses for high-end workstations... like devs.


It's a product that 1% of 1% will buy; developers don't care about such a small number... If it was priced lower, yeah, I could see it picking up ground. But at $1500 it's a gamble.
 
This is anything but true. Game design under DX11 required drivers to function; that is a two-factor implementation before you could "enjoy" multi-GPU usage.
And the question is: what did DX11 actually do? Nothing for high-end gaming engines; it crippled performance due to DX11's limitation of only one CPU core talking to the GPU.

Under DX12, developers can now make use of any GPU resource as if it were just one GPU and farm out tasks accordingly. The extra work allows better control and, without a doubt, better results than would ever be possible under DX11. The option for developers to control what to implement for each game themselves allows far better results than the plain AFR we have seen until now...


It's not that easy... lol

If you want explicit control, it's harder to create the engine now, as nV and AMD really have no way to help you other than helping you modify your own code, which is the same thing as you modifying the code yourself (this is the same with SLI and Crossfire).

If you want implicit control, you have to make sure your renderer is suited for it; in other words, write more code or change code, and this can be anywhere from easy to extremely time-consuming depending on the type of engine you have.

So since developers don't give a crap about SLI and Crossfire, why would they give a crap about mGPU? One reason: even though it takes more time to implement, once the engine is converted to implicit control they don't need to worry anymore. But that might take them quite a bit of time and effort. Don't expect this to happen overnight, or in a year or two; more like 5+ years.
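For the "implicit vs. explicit" distinction above, roughly speaking (again just an illustrative sketch, and the terminology here is mine, not necessarily exactly what the post means): D3D12's linked-node mode exposes several GPUs behind a single device and you pick nodes with a mask, while the explicit multi-adapter path gives each physical GPU its own device and leaves all the cross-GPU work to the engine.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: in linked-node mode one ID3D12Device spans several GPUs ("nodes");
// per-node objects are selected with a node mask (one bit per node).
void MakeQueueOnSecondNodeIfPresent(ID3D12Device* device)
{
    UINT nodeCount = device->GetNodeCount(); // >1 only for linked (SLI/CFX-style) setups

    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
    desc.NodeMask = (nodeCount > 1) ? 0x2 : 0x1; // second node if it exists, else node 0

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    // In explicit multi-adapter, by contrast, each GPU gets its own ID3D12Device
    // (see the enumeration sketch earlier in the thread) and the engine schedules
    // work and copies data between them itself.
}
```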
 
Yeah, once you've implemented it for your engine, if you have something that is used across more than one game, the work is done once and reused many times, as long as the engine gets used by different games.
 
Just remember, people: the 780 Ti was a $650 product. Then Nvidia released the 970 with the same performance for $329.

Do not ever say it can't happen. It did happen once, and I was wrong: I did not expect a 970 to be as fast as a 780 Ti.

Now I am not saying it will be as fast as a 980 Ti; all I am saying is never say never. It can happen (not saying it will).
 
Hold on there: DX12 actually makes multi-GPU HARDER for the developers to implement. Before, it was a driver tick-box and then optimising engine code; now the developers have to code multi-GPU from the ground up themselves.

This is what people fail to realise: DX12 actually makes game design more difficult. DX11 took care of a LOT of the low-level stuff, but it did so using its own methods, and they were not up for debate. DX12 does NOTHING for the developers besides allowing access to the hardware at a low level.

It makes the developers responsible for the game, then. No more having to rely on AMD and Nvidia to get Crossfire and SLI working right; it's all on developers now.

We will finally find out which developers actually care about PC gaming. With DX12 they now have to take the time to make it run right.
 
AMD is responsible for all the console upgrades, and the data we currently have indicates that Polaris 10 will probably offer roughly 390 performance. That's enough to crush 1080p, but not higher resolutions, and certainly nothing approaching 4K. How can the PS4 Neo and NeXbox handle 4K, with AMD making the SoC and graphics? Multiple GPUs per console. And how would game developers address those multiple GPUs? Vulkan and DirectX 12 explicit multi-adapter.

So the theory (and obviously just speculation) is that next-gen consoles all have multiple AMD GPUs, which will convince the main engine developers to support Vulkan or DX12 explicit multi-adapter. And once Unity, Unreal, Crytek, etc. support explicit multiple GPUs on consoles, those same APIs also run on PCs. And then we all live happily ever after.

It would be nice, wouldn't it?
 
AMD is responsible for all the console upgrades, and the data we currently have indicates that Polaris 10 will probably offer roughly 390 performance. That's enough to crush 1080p, but not higher resolutions, and certainly nothing approaching 4K. How can the PS4 Neo and NeXbox handle 4K, with AMD making the SoC and graphics? Multiple GPUs per console. And how would game developers address those multiple GPUs? Vulkan and DirectX 12 explicit multi-adapter.

So the theory (and obviously just speculation) is that next-gen consoles all have multiple AMD GPUs, which will convince the main engine developers to support Vulkan or DX12 explicit multi-adapter. And once Unity, Unreal, Crytek, etc. support explicit multiple GPUs on consoles, those same APIs also run on PCs. And then we all live happily ever after.

It would be nice, wouldn't it?

You bring up a good point. You would think programming games for PC would be easy for developers now, since all consoles will be on GCN. I would "hope" games would come out smoother on PC, but I have lost faith after some of the lazy porting developers have done.

I think PC gamers will still be second-class citizens to developers. It's just that with DX12 we can call them out more easily for shitty ports, since the consoles will run GCN.
 
The Xbone and PS4 run GCN now, and many PC ports still suck. We know damn well that Unreal Engine runs great on PC, but that didn't stop Batman: Arkham Knight from being a mangled abortion of a game. That won't change: some ports will still suck... but the underlying engines should scale better with multiple GPUs.
 
Somewhat right and somewhat wrong ;)

The sentiment is clear and we all share it: the console version is good, and the PC version is a mess, or takes 21,000 patches, and after a period of time in which we all grow long beards and have many children the game is finally acceptable :).

The sooner more developers shift to DX12/Vulkan, the more likely games are to become less problematic. Don't forget that developers need a good amount of time to get adjusted to this new approach before they become smarter at it.
 
AMD is responsible for all the console upgrades, and the data we currently have indicates that Polaris 10 will probably offer roughly 390 performance. That's enough to crush 1080p, but not higher resolutions, and certainly nothing approaching 4K. How can the PS4 Neo and NeXbox handle 4K, with AMD making the SoC and graphics? Multiple GPUs per console. And how would game developers address those multiple GPUs? Vulkan and DirectX 12 explicit multi-adapter.

So the theory (and obviously just speculation) is that next-gen consoles all have multiple AMD GPUs, which will convince the main engine developers to support Vulkan or DX12 explicit multi-adapter. And once Unity, Unreal, Crytek, etc. support explicit multiple GPUs on consoles, those same APIs also run on PCs. And then we all live happily ever after.

It would be nice, wouldn't it?

Wait, what?

The PS4 Neo or whatever won't have multi-GPU. It'll handle 4K because it won't be running the same settings we're used to on PC: lower IQ, higher resolution. They'll use all the tricks at their disposal to make it a playable 30fps at 4K; it's never going to look like PC ultra settings at 4K :p

I think the whole multi-GPU console thing is just conjecture, like in that video by AdoredTV. There's no reason they can't just keep upgrading the consoles with the latest ~230mm^2 mainstream GPU every two years.

This console upgrade thing is great for console gamers, btw; in theory it should ensure FULL backward compatibility, with each console lasting TWO console generations, not just one.
 
This console upgrade thing is great for console gamers, btw; in theory it should ensure FULL backward compatibility, with each console lasting TWO console generations, not just one.

The very first Nintendo NX rumor was that it would be so compatible with games made for the Xbox One and PS4 that the port time would be very, very short. All of the consoles are doing the backwards-compatibility thing now. And when games are really old, they are making an HD PC port, then back-porting it to the new consoles.
 
Of course it's conjecture. I said it was a theory and speculative, in the post you quoted.
 
The very first Nintendo NX rumor was that it would be so compatible with games made for the Xbox One and PS4 that the port time would be very, very short. All of the consoles are doing the backwards-compatibility thing now. And when games are really old, they are making an HD PC port, then back-porting it to the new consoles.

I'm not sure you get what I mean. Let's call PS4-era games "GEN1", PS5-era games "GEN2", etc.

PS4 - Tahiti - GEN1 1080p30

PS4 Neo - Polaris 10 - GEN1 1080p60/2160p30 | GEN2 1080p30

PS5 - Navi X - GEN1 2160p30 + GEN2 1080p60/2160p30

PS5 Neo - AMD whatever - GEN1 2160p30 + GEN2 2160p30 + GEN3 1080p30

So every two upgrades the console becomes obsolete and can't play current-generation games.

So the upgrade that comes after the PS4 Neo will bring games that run exclusively on the PS4 Neo onwards.
 
Well, I'm not exactly sure what the context is with the Nintendo NX, but it's not coming out till later next year, so...
 
This is anything but true. Game design under DX11 required drivers to function; that is a two-factor implementation before you could "enjoy" multi-GPU usage.
And the question is: what did DX11 actually do? Nothing for high-end gaming engines; it crippled performance due to DX11's limitation of only one CPU core talking to the GPU.

Under DX12, developers can now make use of any GPU resource as if it were just one GPU and farm out tasks accordingly. The extra work allows better control and, without a doubt, better results than would ever be possible under DX11. The option for developers to control what to implement for each game themselves allows far better results than the plain AFR we have seen until now...


Not exactly; poor code creates a need for driver-based optimizations. The stricter rules and guidelines in DX12 and other low-level APIs don't allow this.
 