I also don't think it's SVOGI.
It's a hexagon, and no, that's not it. It's clear from multiple casings that a hexagonal shape is being reflected.
Yes. Our current implementation is both API and hardware agnostic. It runs in 1080p with 30 fps on a Vega 56. Reducing the resolution of reflections allows much better performance without too much quality loss. For example in half-resolution mode it runs 1440p / 40+ fps.
However, RTX will allow the effects to run at a higher resolution. At the moment on GTX 1080, we usually compute reflections and refractions at half-screen resolution. RTX will probably allow full-screen 4k resolution. It will also help us to have more dynamic elements in the scene, whereas currently, we have some limitations.
To ensure our ray-traced reflections performed as well as possible we didn’t use the original render geometry for reflections, but rather a less detailed version that is easier to process. This optimization is similar to traditional LODs (level of detail objects) that replace the original render geometry as you move away from it, and the object gets smaller on the screen.
In fact, all the objects in the Neon Noir Demo use low-poly versions of themselves for reflections. As a few people have commented, it is noticeable on the bullets, but it’s a lot harder to spot in most cases. That said, we can easily fix the reflection on the bullets by using more detailed LODs or just not using LODs at all.
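To make the "low-poly proxies for reflections" idea concrete, here's a rough, hypothetical C++ sketch. The RenderObject layout, SelectReflectionMesh name, and reflectionLodBias parameter are invented for illustration (nothing here is CRYENGINE's actual API); the point is just that reflection rays can be traced against a coarser mesh than the one being rasterized.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical data layout: lods[0] is the full-detail mesh used for normal
// rendering, and higher indices hold progressively coarser versions.
struct Mesh { /* vertex and index buffers would live here */ };

struct RenderObject {
    std::vector<Mesh> lods;
};

// Bias the LOD index for reflection rays so the geometry they intersect is
// cheaper than the rasterized version -- the same idea as distance-based LODs.
// reflectionLodBias is an invented tuning knob (e.g. 1 or 2; 0 = full detail,
// which is how you'd fix the visibly low-poly bullets mentioned above).
const Mesh& SelectReflectionMesh(const RenderObject& obj, int reflectionLodBias)
{
    const int coarsest = static_cast<int>(obj.lods.size()) - 1;
    const int lod = std::min(reflectionLodBias, coarsest);
    return obj.lods[lod];
}
```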
The video description says it was done in real time, and the youtube video itself is 4K @ 30 FPS.
Told everyone it was horseshit - no way it was running at native 4K, which is what all the hype was trumpeting. It's just 1080p at 30 fps.
And guess what - there are RTX full-resolution demos that run at above 20fps on the GTX 1080, so a little optimization is all it took here.
Looks more like a cheap demo to me, optimized for the single card. Show me the real performance improvement vs RTX (re-create one of their demos on your engine) if you want to convince me you have a Golden Goose here.
That’s a strangely negative response. It’s great they made a demo that is hardware agnostic. Of course it’ll run better with hardware designed for RT.
But like they said, lower the resolution and it'll run even better. Given that current reflections in games are rendered at much lower resolutions, I see no problem with this.
At the end of the day I just want hardware agnostic RT that the vast majority of machines can run. We all know what happens to proprietary tech....
Google DXR...you just failed...bigtime.
Here are some details from the devs about this:
https://www.cryengine.com/news/how-we-made-neon-noir-ray-traced-reflections-in-cryengine-and-more
Tidbits:
I know what DXR is. Where is it actually implemented, GPU agnostic, and running on Vegas?
DXR is vendor agnostic...the fact that AMD's driver is lacking support (AMD's side, not Microsoft's) doesn't alter that.
But I get it...NV is evil, DXR is evil...until AMD enables driver/hardware support...and RTX kills kittens...right?
I don't think you're understanding this right. DXR is hardware agnostic, just like the rest of the DirectX suite is. It accepts commands to do ray tracing in DirectX format, and hands them off to the graphics card driver, which must then decide how to do whatever the task is. On an nVidia card, the driver has the option of recruiting the octree and matrix accelerators ("RTX cores") if the GPU is so equipped, but it can apparently also do the same operation using only the CUDA cores, at the obvious expense of it running very slowly.

DXR might technically be vendor agnostic, but the current implementation is anything but that. Game developers would have to program completely differently for anything that isn't nVidia.
I never said nVidia is evil, although, since you brought it up, it's no secret they push proprietary tech, which is not good for adoption, competition, or, in the end, gamers. More importantly, it increases the risk of failure.
I didn't even mention AMD vs nVidia. You're the one with the hardon for nVidia. I have a 2080 Ti and couldn't give a damn about RT in its current form.
I don't think you're understanding this right. DXR is hardware agnostic, just like the rest of the DirectX suite is. It accepts commands to do ray tracing in DirectX format, and hands them off to the graphics card driver, which must then decide how to do whatever the task is. On an nVidia card, the driver has the option of recruiting the octree and matrix accelerators ("RTX cores") if the GPU is so equipped, but it can apparently also do the same operation using only the CUDA cores, at the obvious expense of it running very slowly.
I don't see any reason any other vendor, such as AMD or Intel, couldn't use the same or similar math to what nVidia uses to do that operation on their own floating point compute units.
http://cwyman.org/code/dxrTutors/dxr_tutors.md.html
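For what it's worth, the "DXR is just DirectX" point shows up in how an application even discovers ray tracing support: it asks the D3D12 runtime for a feature tier rather than calling anything Nvidia- or AMD-specific. A minimal sketch (assumes Windows, a Windows SDK with the DXR headers, and linking against d3d12.lib):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter; feature level 12.0 is enough to query DXR support.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // DXR support is reported through a plain DirectX feature query, not a vendor extension.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5)))) {
        std::printf("This runtime does not know about the OPTIONS5 query.\n");
        return 1;
    }

    if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        std::printf("Driver exposes DXR (tier %d); how it gets accelerated is the vendor's business.\n",
                    static_cast<int>(opts5.RaytracingTier));
    else
        std::printf("No DXR support exposed by this driver.\n");
    return 0;
}
```

Whether the driver then runs those rays on dedicated RT hardware, on compute shaders, or not at all is entirely its own affair, which is the whole argument here.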
Nah, I understand just fine. I followed this very closely and read/watched everything I could find.
Currently, the way it is, game developers need to program specifically for each vendor, so all their work using nVidia's toolkits and graphics cards is not transferable.
Ideally Microsoft would have made a universal toolkit, similar to nVidia's, that would have been hardware agnostic and made it easy for developers.
So we have them developing games the traditional way, plus they have to add on RT functionality, and it's (currently) vendor-specific programming.
RTX = NVIDIA GPUs with DXR support.
DXR = Vendor-agnostic DX12 raytracing API.
The intro said "API agnostic," which makes me wonder whether this demo is using DXR at all (I'm going to guess no).
Side note:
People should stop comparing this implementation to those you see in games...too many shortcuts, visual artifacts, lesser fidelity, etc. compared to games already out.
I bet that is why the heading is worded as it is...to muddy the waters...NVIDIA is evil and all that, right?
People want a non proprietary way of running ray tracing that actually performs.
RTX as it stands right now is a painful joke unless you're running a 2080 Ti or better and settle for 1440p resolution or lower.
Also, no one gives a damn about visual fidelity if the game runs like a slideshow.
DXR isn't proprietary.
I'm fascinated that some people are having such a hard time understanding this very simple concept. Why, I wonder?
DXR - raytracing via DirectX 12. Hardware agnostic, GPUs must support DX12.
RTX - Nvidia's way of accelerating DXR. Proprietary.
Right now, the only way to accelerate DXR via hardware on a GPU is RTX. When AMD releases DXR hardware-capable cards, we'll have another proprietary path to accelerate raytracing DX12 code, that is, DXR.
Why are some people so confused? It's literally how GPUs have always worked: AMD and NV both supported, say, DX11, but the way each architecture accelerated DX11 code has always followed proprietary paths. What is so hard about this?
The confusion comes from the fact that games that support ray tracing advertise(d) themselves as supporting RTX, and not DXR. Therefore it seems like they are supporting a proprietary feature that can only be used on the RTX line of cards. There were no "NV's implementation of DX11 games", there was only DX11 games.
Could RTX be considered similar to GameWorks? Proprietary but open?
Sorry, I didn't mean open so much as I meant it will run on AMD. I conveyed that poorly.
DXR isn't proprietary.
It works quite well and significantly enhances visuals on multiple AAA games. That's far from a painful joke, especially since it's the only game in town.
Well, if 'runs like a slideshow' were actually true, then yeah. But since it isn't, there's real value in RTX.
You asked why there is confusion. I explained it. Then you come back with "people should do their research." Yeah, that's a circular argument. If everyone had a PhD about proprietary and open implementations of ray tracing there would be no confusion in the first place.

So... nobody should trust marketing and they should do their own research on how things work. Isn't that the same as it has always been? It's going to be a mess if, once AMD releases their DXR-capable hardware, we refer to that in games as "with RTX and XYZ support!" instead of just "supports DXR."
Where did I say DXR is proprietary? I pointed out RTX on purpose, which is. Your standards of "performs well" and "significantly enhances visuals" are far different from mine; dropping below 60 fps for a few reflections seems a bit much to me. And that's before we even figure in the fact that you have to spend over 1,000 dollars just to get that. Also, DLSS does not enhance visuals; from what I have seen it blurs and degrades the quality.
We have no indication that games that leverage RTX functionality on Nvidia GPUs will be able to run those code paths on AMD GPUs. The DXR stuff should work.
DXR can be run via hardware acceleration or software (shaders). Regardless of how you do it, it's still DX12 code and still hardware agnostic. I don't know what you mean by "real": even if you did it on a CPU it'd still be real raytracing; it doesn't become "fake" because you do it in software vs. hardware.
Developers will implement RTX code to accelerate DXR, just like they've always implemented Nvidia/AMD code paths to run DX12, 11, 10, 9, and I can't recall anything prior to that. When AMD releases their accelerator, that code will also be implemented by developers. RTX and AMD's equivalent are not the raytracing itself; DXR is the code, and AMD/Nvidia just accelerate it in their own ways that make more sense for their own hardware.
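And just to underline that last point, tracing rays in software is still "real" ray tracing; it's the same math, just without fixed-function help. A toy, self-contained C++ example (nothing below touches DXR, RTX, or any GPU at all) that traces a single ray against a sphere on the CPU:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Classic ray/sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
// Returns the nearest positive hit distance, or -1 if the ray misses.
static double intersectSphere(Vec3 origin, Vec3 dir, Vec3 center, double radius)
{
    const Vec3   oc   = sub(origin, center);
    const double a    = dot(dir, dir);
    const double b    = 2.0 * dot(oc, dir);
    const double c    = dot(oc, oc) - radius * radius;
    const double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;
    const double t = (-b - std::sqrt(disc)) / (2.0 * a);
    return (t > 0.0) ? t : -1.0;
}

int main()
{
    // Shoot one ray straight down -Z at a unit sphere centered 5 units away.
    const double t = intersectSphere({0, 0, 0}, {0, 0, -1}, {0, 0, -5}, 1.0);
    if (t > 0.0)
        std::printf("Ray hit the sphere at distance %.2f, traced entirely on the CPU.\n", t);
    else
        std::printf("Ray missed.\n");
    return 0;
}
```

RT cores, compute shaders, or a plain CPU loop only change how fast that intersection test runs, not whether it counts as ray tracing.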