nVidia RTX innovation or marketing bullshit?
category: code [glöplog]
Give a creative man a pencil and he creates a masterpiece.
Give a dumb man lots of tools and he still can't create shit.
RTX could be a pencil for a creative man.
Useless crap. We already have this https://www.youtube.com/watch?v=x19sIltR0qU
looks very promising. if they can get hardware acceleration going full on with good software optimizations it looks very cool, to be honest. i love raytracing and how lifelike it looks compared to regular rasterization, and voxels are very cool too. the delta force 1 & 2 games still look unique and amazing with their voxel engine.
According to this page, the RT core is used for computing ray-triangle intersections and BVH traversal.
My guess is that the SM cores are also used for all the rest you need when doing raytracing.
There is also a new tensor core which can do some computations on 4x4 matrices, more info here. AFAIK it can be used for AI/physics/deep learning.
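For reference, the per-ray test the RT cores are said to accelerate is essentially the classic Möller–Trumbore ray/triangle intersection plus BVH traversal; here is a minimal CPU sketch of the intersection part (struct and function names are mine, not anything from the actual RTX API):

```cpp
// Minimal Möller–Trumbore ray/triangle intersection in plain C++.
// Roughly the per-ray test an RT core handles in fixed function;
// all names here are invented for the sketch.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and writes the hit distance t if the ray (orig, dir)
// hits the triangle (v0, v1, v2).
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle plane
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > eps;                           // hit in front of the ray origin
}

int main()
{
    float t;
    bool hit = rayTriangle({0,0,-1}, {0,0,1}, {-1,-1,0}, {1,-1,0}, {0,1,0}, t);
    std::printf("hit=%d t=%f\n", hit, t);     // expect hit=1 t=1.0
}
```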
they're stepping up their game too fast. to my knowledge nobody even uses the tensor cores (for ML, that is) to their full extent - and those have existed for quite some time now. the best i've read was a speedup of 2x, compared to the theoretical speedup of ~9x. maybe that's why the audience was talked into loving raytracing. driver developers, game developers - they are not ready yet to properly use RTX.
Ray tracing is conceptually simple and very often useful, even if just for hard shadows and perfect mirrors. What makes the RTX thing risky is requiring new hardware; most users won't have it for many years. So you can either target a small subset of the potential users, only use RTX for small eye candy and not the main rendering, or write a fallback for everything. None of these seem ideal. Having it as an easy replacement functionality for envmaps etc in UE4 might help with this a lot tho.
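To illustrate why those two cases are cheap: a hard shadow and a perfect mirror each need only one extra ray per shaded point. A rough sketch, with traceAnyHit/traceClosestHit standing in for whatever tracing backend is available (every name and type here is made up for the example):

```cpp
// Sketch of the two "cheap but useful" cases: a hard shadow test and a
// perfect mirror bounce, each needing a single extra ray. traceAnyHit(),
// traceClosestHit() and shade() are placeholders for whatever backend is
// used (DXR, a compute-shader BVH, ...); all names here are made up.
#include <cmath>

struct Vec3 { float x, y, z; };
struct Hit  { bool valid; Vec3 position, normal; };

static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Assumed to be provided by the tracing backend (hypothetical signatures).
bool traceAnyHit(Vec3 origin, Vec3 dir, float maxT);
Hit  traceClosestHit(Vec3 origin, Vec3 dir);
Vec3 shade(const Hit& hit);

// Hard shadow: one any-hit ray towards the light, no shading needed.
float shadowTerm(Vec3 surfacePos, Vec3 lightPos)
{
    Vec3 toLight = sub(lightPos, surfacePos);
    float dist   = std::sqrt(dot(toLight, toLight));
    Vec3 dir     = scale(toLight, 1.0f / dist);
    return traceAnyHit(surfacePos, dir, dist) ? 0.0f : 1.0f;   // 0 = in shadow
}

// Perfect mirror: one closest-hit ray along the reflected view direction.
Vec3 mirrorColor(const Hit& hit, Vec3 viewDir)
{
    Vec3 r = sub(viewDir, scale(hit.normal, 2.0f * dot(viewDir, hit.normal)));
    Hit bounce = traceClosestHit(hit.position, r);
    return bounce.valid ? shade(bounce) : Vec3{0.0f, 0.0f, 0.0f};
}
```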
For ML you target people with actual money so buying a few GPUs might not be a problem -- I haven't followed closely enough to know if the bubble has completely burst by now.
AFAIK the Tensor Cores were first introduced with the Titan V card and can also be found in the latest Nvidia Quadro pro series. So this is indeed the first time Tensor Cores are available on a consumer card. I don't know if they are also used for the RT stuff or if that is done on additional cores, but that would make for 3 different core types on one card and I doubt that.
Another interesting feature is DLSS, which is a form of anti-aliasing based on an algorithm trained with deep learning; it's supposed to improve both performance and image quality while costing far less than today's (T)AA techniques.
At least that's what I read on teh intarwebs.
i think rtx is the first big breakthrough since dx9, looking forward to when everyone has it.
never heard of compute shaders?
can they best a raytraced scene?
they can raytrace a scene. and do pretty much anything else.
accelerated raymarching ... would it help to have hardware support for that ?
I mean the main loop part that goes step by step. For the rest (evaluating the distance functions), I think the SMs (which AFAIK process pixel shaders) will do the job, as they do now.
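For context, that "main loop part" is the usual sphere-tracing loop; a minimal CPU sketch (the scene SDF and all names here are invented for the example):

```cpp
// Minimal sphere tracing ("ray marching") loop on the CPU, showing which
// part is the fixed stepping logic and which part is the generic,
// scene-specific distance evaluation. Everything here is made up for the sketch.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a)         { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// The generic part: a signed distance function for the scene.
// Here just a unit sphere at the origin and a floor plane at y = -1.
float sceneSDF(Vec3 p)
{
    float sphere = length(p) - 1.0f;
    float floorPlane = p.y + 1.0f;
    return std::min(sphere, floorPlane);
}

// The "main loop part that goes step by step": advance along the ray by the
// distance the SDF guarantees to be empty, until we hit or give up.
bool march(Vec3 origin, Vec3 dir, float& tOut)
{
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        float d = sceneSDF(add(origin, scale(dir, t)));
        if (d < 1e-4f) { tOut = t; return true; }   // close enough: hit
        t += d;
        if (t > 100.0f) break;                      // ray escaped the scene
    }
    return false;
}

int main()
{
    float t;
    bool hit = march({0, 0, -5}, {0, 0, 1}, t);
    std::printf("hit=%d t=%f\n", hit, t);           // expect a hit at t=4
}
```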
isn't hardware-accelerated raytracing many times faster than doing it in software with compute shaders? i don't get why everyone is so upset about rtx. it looks like a completely positive scenario for everyone, unless someone just hates raytracing. if it can be improved over time like most other things in nature, it will become faster and faster down the line and possibly overtake everything else.
I think people have been saying that about raytracing for decades. :)
i001, yes, the tracing part can be many times faster but as I pointed out on the previous page, earlier methods (i.e. this) show that with the current GPUs the slowest part is not the tracing logic itself but just reading the nodes/triangles from memory. I haven't seen any independent numbers from RTX and I'll be very surprised if they surpass software solutions by even a 2x difference. There are other factors (being able to trace and compute simultaneously and saving memory bandwidth from not doing wavefront approaches since there's no extra register pressure for doing traces and other compute in the same kernel) to it so we'll see how it turns out in the end.
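To make the memory-bandwidth point concrete: a software traversal spends most of its time loading nodes and triangles before any intersection math happens. A generic stack-based sketch (node layout and helper names are illustrative, not RTX internals):

```cpp
// Sketch of a stack-based BVH traversal to show where the memory traffic
// comes from: every iteration loads a node (and, in leaves, triangles)
// from memory before doing any math. Generic example layout, not RTX internals.
#include <cfloat>
#include <vector>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 lo, hi; };

struct BvhNode {
    Aabb bounds;
    int  left;      // index of first child, or -1 for a leaf
    int  right;     // index of second child
    int  firstTri;  // leaf only: first triangle index
    int  triCount;  // leaf only: number of triangles
};

// Assumed to exist elsewhere: slab test and triangle test (hypothetical signatures).
bool rayAabb(Vec3 orig, Vec3 invDir, const Aabb& box, float tMax);
bool rayTri(Vec3 orig, Vec3 dir, int triIndex, float& t);

float traverse(const std::vector<BvhNode>& nodes, Vec3 orig, Vec3 dir, Vec3 invDir)
{
    float best = FLT_MAX;
    int stack[64];
    int top = 0;
    stack[top++] = 0;                                // start at the root
    while (top > 0) {
        const BvhNode& node = nodes[stack[--top]];   // <-- memory read every step
        if (!rayAabb(orig, invDir, node.bounds, best))
            continue;
        if (node.left < 0) {                         // leaf: read and test triangles
            for (int i = 0; i < node.triCount; ++i) {
                float t;
                if (rayTri(orig, dir, node.firstTri + i, t) && t < best)
                    best = t;
            }
        } else {                                     // inner node: push both children
            stack[top++] = node.left;
            stack[top++] = node.right;
        }
    }
    return best;                                     // FLT_MAX means no hit
}
```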
My main dislikes about this are that it's a vendor-specific thing and that their advertised numbers seem funky. Otherwise yeah, ray tracing is the way to go.
Tigrou, the distance evaluation part is by far the most expensive in ray marching and it has to be generic code to be flexible enough, so what exactly would you want to accelerate?
Tigrou: nope
msqrt: the API is not vendor specific: DXR as part of directx could be implemented by any hardware vendor, same with the upcoming vulkan support. It's not one of those gameworks things, it is actually part of directx/vulkan. Of course, at the moment the only hardware is single-vendor. The paper you link was written by three people all working at nvidia (who have tons more raytracing research published), so I think it's safe to assume that someone at nvidia has been thinking hard about those hardware implications you mention. But yeah, I agree the published perf data is not exactly comprehensive. First people will have the hardware in their hands end of september IIRC.
fizzer: I remember a very nice siggraph talk a few years back titled "raytracing is the future and ever will be" :)
Yes, the HW being NV-only is what I worry about. I'd guess AMD and Intel are already working on implementations, at least software ones. Let's just hope the performance isn't too bad (AMD hasn't released much research on ray tracing, might be a lot of catching up to do) so this would actually be widely usable.
Dunno about research papers but AMD has actively been working on their OpenRays / RadeonRays technology for years.
Yes, but the only perf numbers I've seen look pretty rough and there seems to be no info on how the newest cards would perform, so I'm expecting it's not too good either.
Seems people don't quite get how Nvidia is handling this. Rough description as I understand it (may be wrong in places):
1. Rasterise scene same as it's done now
2. Trace from the rasterised surfaces for lighting / shadows / reflections. For matte surfaces this is done at 1 sample per ray (i.e. terrible quality), possibly at lower resolution too. For shiny surfaces 1 ray is enough anyway.
3. Hand over to the tensor cores, where a machine learning algorithm does noise reduction. Think something like temporal AA where it's getting data from several frames plus general noise reduction, but using ML to get better results.
So when it's used, it'll be using the standard compute / rasterising cores, the RT cores and the tensor cores together. And because it's basically tracing at terrible quality levels, perf on that side doesn't need to be that high - the ML stuff will clean it up and upscale.
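Put as pseudocode, the per-frame flow described above would look something like this (every type and function here is a placeholder invented for the sketch, not any real engine or NVIDIA API):

```cpp
// Rough per-frame flow of the hybrid approach described above, restated as
// code. All names are placeholders for whatever the engine actually does.
struct GBuffer {};      // rasterised positions / normals / materials
struct NoisyBuffer {};  // ~1 sample-per-pixel traced lighting, possibly at lower resolution
struct Image {};        // final (or history) frame

GBuffer     rasterizeScene();                          // 1. normal raster pass on the SMs
NoisyBuffer traceLighting(const GBuffer&, int spp);    // 2. very few rays per pixel on the RT cores
Image       denoiseAndUpscale(const NoisyBuffer&,      // 3. ML (tensor cores) or SVGF-style filter
                              const Image& history);   //    cleans up and upscales

Image renderFrame(const Image& previousFrame)
{
    GBuffer g = rasterizeScene();
    NoisyBuffer noisy = traceLighting(g, /*spp=*/1);   // "terrible quality" on purpose
    return denoiseAndUpscale(noisy, previousFrame);
}
```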
@psonice, sounds like we can get this then, just looking much better? Fingers crossed =)
... that’s most likely exactly what they’re doing
psonice : i think (3) is not being done in real-time applications, it's all spatial/temporal denoising (bilateral filters and taa).
psonice/smash: there was a very nice keynote at HPG by Colin Barré-Brisebois of EA/SEED on what they do. Looking at his blog, it seems that talk isn't online (yet?), but from glancing over his GDC talks they seem to cover most of it. After nvidia's announcement, those talks now make more sense ;) See his blog: https://colinbarrebrisebois.com/
Also, it's not like nvidia is doing 1,2, and 3: The gamedevs/engine devs do that. And for 3, agree with smash, it seems to be mostly SVGF or derivatives thereof, so no inferencing - although some of the newer demos seem to use nvidia's DLSS. I think the jury is still out on classic vs. ML reconstruction/AA/denoising filters.
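For what it's worth, the core of those spatio-temporal filters is just blending the current noisy frame into a running history. A toy sketch without reprojection or edge-stopping weights, so far simpler than real SVGF or TAA:

```cpp
// Toy temporal accumulation of a noisy 1-spp image: blend the current frame
// into a running history with an exponential moving average. Real filters
// add motion-vector reprojection and edge-aware spatial passes on top.
#include <cstdio>
#include <vector>

struct Color { float r, g, b; };

// history is updated in place: new = lerp(history, current, alpha).
void accumulate(std::vector<Color>& history, const std::vector<Color>& current, float alpha)
{
    for (size_t i = 0; i < history.size(); ++i) {
        history[i].r += alpha * (current[i].r - history[i].r);
        history[i].g += alpha * (current[i].g - history[i].g);
        history[i].b += alpha * (current[i].b - history[i].b);
    }
}

int main()
{
    // One "pixel": a noisy sample sequence slowly converging to its mean.
    std::vector<Color> history = {{0.0f, 0.0f, 0.0f}};
    const float noisyFrames[] = {1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f};
    for (float s : noisyFrames) {
        std::vector<Color> frame = {{s, s, s}};
        accumulate(history, frame, 0.2f);
        std::printf("accumulated: %.3f\n", history[0].r);
    }
}
```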
honestly i'm mostly excited to see what type of unintended applications can be accelerated with the RT hardware (same for tensor cores..). In any case, can't complain about more/novel hardware :)
Hopefully they can find some way to have higher (real) raytrace throughput on future hardware without relying on die shrinks, tho...