I don't even mean gathering specific data for every game. That undoubtedly takes time and money, although I think some developers with bigger budgets should seriously consider it. On console, I have no idea why Microsoft isn't using their Azure cloud to train AI upscaling specifically for every single first-party game. It could give them a big quality advantage over Sony.
I just mean get some data from at least a few actual games and apply it to their otherwise general algorithm used for all games. E.g., they said that DLSS 3.5 has been tested on a substantially larger dataset than ever before, and the first title shipping with those results has obvious visual problems. It would seem that only getting data from a couple of idealized test environments is hindering the quality of new features.
And a direct example of that: they went to the trouble of creating a tech demo of a bar which looks a whole lot like Cyberpunk... but isn't Cyberpunk. And I'm sure that demo's visuals are tuned to be ideal for DLSS and Ray Reconstruction, as opposed to simply partnering with CDPR, featuring a bar scene from Cyberpunk, and getting DLSS + RR properly tuned for that. A real game.
Did they say how many games they use, or what's in the dataset? I really can't recall.