I wonder how useful RTX will be in practice - 10 Grays/sec is really fast; it means that an 8K scene with 10 bounces and 128 samples per pixel renders in seconds, not tens of minutes. BUT, and it's a huge BUT, the scene will all have to fit in GPU memory for it to be fast - accelerating BVH lookups and intersection calculations only gets you so far if you have to go back to the CPU for lighting, material, and texture calculations. You can't split up the data either - secondary rays can go anywhere, and there would be a huge loss of performance if you had to retrieve models from main memory every other secondary ray.
Being able to render Cornell boxes and teapots at high speed is cute, and certainly has positive implications for visualization and product design workflows, but the recent island dataset from Disney shows that production rendering is done on really, really big datasets, not the <8GB stuff that hobbyists and small firms deal with. (The island requires over 100GB of memory to render, and it will be a long, long time before GPUs with 128GB of VRAM exist.)
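For what it's worth, the arithmetic behind the "renders in seconds" claim above checks out. A quick back-of-the-envelope sketch in Python (assumptions mine, not NVIDIA's: exactly one ray per bounce per sample, and no shadow rays or BVH build cost):

```python
# Rough ray budget for the 8K example above.
width, height = 7680, 4320      # 8K UHD resolution
samples_per_pixel = 128
bounces = 10
rays_per_second = 10e9          # the quoted 10 Grays/sec

total_rays = width * height * samples_per_pixel * bounces
seconds = total_rays / rays_per_second
print(f"{total_rays / 1e9:.1f} Grays -> {seconds:.1f} s per frame")
# 42.5 Grays -> 4.2 s per frame
```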
One of the slides had 48+48, which I'm pretty sure referred to memory, so the Quadro will have up to 48GB, and the "+" is NVLink for combining another card's memory. So pretty close.
Not sure if they can combine more than 2 cards.
The NASDAQ press release has some more info:
https://www.nasdaq.com/press-releas...tx-worlds-first-raytracing-gpu-20180813-00977
I assume the RTX5000 with half the RAM is what is going to be the 2080 announced next week.
If the rumors are true and a Titan also gets revealed then it's probably going to be a crippled RTX6000.
I don't think we are disagreeing, since the RTX5000 is listed as 3072 CUDA cores in that press release.

Not sure. I don't think it will have all 4608 cores; that will be reserved for the Ti. Likely a little over 3000 cores for the 2080, and the full core count (or a little less) for the Ti.
Jensen is usually kind of bleh at doing these presentations, but he seems really off his game today. This is a lot worse than he usually is.
Founders' Edition Leather Jackets, only.
He doesn't need to compete with CPUs. Whatever the server is running (Intel, AMD, whatever), it'll have an NVIDIA GPU plugged in running Tensor and RTX cores. No need for competition, this is NVIDIA, remember?

Bet he wishes he had a CPU to talk about.
MY EYES, THE REFLECTION, GAHHH
Doesn't Nvidia normally wait a few months after releasing Quadros before releasing GeForce cards? Might be different this time.
Maybe we get the leftover Voltas now that they have Turing for the professionals.
So it's $10K, $6.3K, and $2.3K for the RTX8000, 6000, and 5000 respectively. Volta cost $8K.
I read that 10 Grays/sec more as a fill rate that won't be achievable in most conditions. Even if the dataset fits in memory, the memory access is likely incoherent, which limits performance. As you said, any bounces will curtail performance significantly. Still a nice feature, but the rest of the hardware doesn't seem built to really push it. It's just another relatively dumb tensor core that can brute-force highly linear workflows in parallel.
It would be curious to see if AMD's HBCC tech would have any effect on this by reducing the effective memory footprint. NVLink supports system memory access, but the hardware paging is a bit different and only worked with compatible (POWER) architectures last I checked. Vega on a Threadripper/Epyc with all the attached system memory channels might be able to pull it off with the paging, setting aside ray performance for the moment, which may be insignificant compared to the random access cost. Nvidia might be able to use the same processor, but has x86 issues with the memory controller. NVSwitch and multiple GPUs are the only way to scale memory capacity, possibly without paging, as it should be more along the lines of direct access with limited bandwidth.
They can combine all the cards they want, until the switch runs out of aggregate bandwidth. The problem being that as the number of cards increases, they become increasingly limited by effective memory bandwidth as it approaches the NVLink bandwidth of 100GB/s or thereabouts, far from the 600+GB/s the cards have individually. Limit the distance rays can travel and paging would be more realistic, with likely acceptable performance.
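To put rough numbers on that, here's a toy model in Python (my own sketch, with assumed figures: ~600GB/s local VRAM, ~100GB/s NVLink to peer cards, and the scene striped evenly across the cards) of effective per-GPU bandwidth under random access:

```python
# Toy model (assumptions mine, not measured): effective random-access
# bandwidth per GPU when a scene is striped evenly across N cards.
LOCAL_BW = 600.0   # GB/s to a card's own VRAM (assumed)
LINK_BW = 100.0    # GB/s over NVLink to a peer card (assumed)

def effective_bw(n_cards: int) -> float:
    local_fraction = 1.0 / n_cards        # chance a random access hits local VRAM
    remote_fraction = 1.0 - local_fraction
    # Harmonic mix: average time to move a GB across local and remote accesses.
    time_per_gb = local_fraction / LOCAL_BW + remote_fraction / LINK_BW
    return 1.0 / time_per_gb

for n in (1, 2, 4, 8):
    print(f"{n} card(s): ~{effective_bw(n):.0f} GB/s per GPU")
# 1: ~600, 2: ~171, 4: ~126, 8: ~112 GB/s per GPU
```

Even at two cards the effective bandwidth collapses most of the way to the link speed, which is exactly the scaling wall described above.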
Fuck it. It's a monopoly. Consoles be damned. ATI used to be able to follow right away, or even lead, back in the day. This is just so damn sad. Watching them since the AMD takeover slowly fall down the rabbit hole to fucksville has been horrible for those of us from back in the heyday of the GPU/CPU wars.
$10,000 GPU
Where have you been?

BTW, checking some other sites last night I did see that NV is going to be holding a special GeForce event right before Gamescom.
So RTX = Ray Tracing Xtreme?
Should have gotten an 8800GT.

My last good card was the Radeon X1900 XTX.
Should have gotten an 8800GT.
That's what I get for forcing myself to periodically not look at things on the 'net. I should know better, Kyle, Cagey, Brent, megalith, and the whole team cover just about everything.