[Ray Marching] Cheap/Fake Global Illumination
category: code [glöplog]
Just wanted to post a hack I made. I couldn't find anything while searching the boards on this, so I hope it's something new; hopefully someone can expand it further.
Images of a very basic implementation:
The concept is really simple and builds off the well known AO hack:
Code:
float ambientOcclusion(vec3 p, vec3 n) {
    float ao = 0.0, dist;
    for (float i = 1.0; i <= samples; i++) {
        dist = stepDistance * i;
        ao += (dist - map(p + n * dist, returnColor)) / i;
    }
    return ao;
}
modified to become the following:
Code:
vec3 globalIllumination(vec3 p, vec3 n) {
    vec3 g = vec3(0.0);
    float dist;
    for (float i = 1.0; i <= samples; i++) {
        dist = stepDistance * i;
        float d = dist - map(p + n * dist, returnColor);
        g += returnColor * d / i;
    }
    return g;
}
returnColor is set in your scene/map function (as an out parameter). The rest of the vars should be self-explanatory.
Then you just add the calculated GI term (clamp/scale/modify) to your lighting and voila! Fake GI that looks nice sometimes, and it's no more expensive than an AO pass. (Note: this is only single bounce, no reflection pass.)
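For clarity, wiring the term into a shading pass might look something like this (giStrength is a made-up tuning constant, not from the post):
Code:
vec3 gi = globalIllumination(p, n);
// clamp so negative samples (point already inside nearby geometry) don't darken
vec3 color = albedo * directLight + giStrength * clamp(gi, 0.0, 1.0);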
Enjoy!
-tz
yep. Auld and I played with this idea three or four years ago. We did it by multiplying by 1 - gatherColor, though. Also, there's the extension of SSAO to screen space color bleeding (http://www.mpi-inf.mpg.de/~ritschel/Papers/SSDO.pdf), which is another idea: use whatever cheap AO technique you have to gather color info and do color bleeding. That being said, I don't know if any 4k intro has used it yet (I haven't myself in any demo/intro).
you really should be more concerned about some interesting scenery :)
Anyone have some hours of time to try this proposal on slisesix? :) Would love to see the results!
I'd suggest changing the strategy for setting returnColor in the map function. Currently you return whatever the color of the nearest object is. If you did a proportional lerp it would help reduce the artifacts - especially the one seen as a sharp transition between the red wall and the blue brick.
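A rough sketch of that lerp idea: track the two closest objects and blend their colors by relative distance, so the weight approaches 50/50 as the objects get equally close. sphere/box and the color constants are placeholder names, not actual scene code:
Code:
float map(vec3 p, out vec3 returnColor) {
    float d1 = sphere(p), d2 = box(p);      // assumed scene primitives
    vec3  c1 = sphereColor, c2 = boxColor;
    if (d2 < d1) {                          // ensure d1/c1 is the nearest
        float td = d1; d1 = d2; d2 = td;
        vec3  tc = c1; c1 = c2; c2 = tc;
    }
    // weight -> 0.5 when d1 == d2, -> 0 when the nearest object dominates
    returnColor = mix(c1, c2, 0.5 * d1 / max(d2, 1e-4));
    return d1;
}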
neat idea, but I think it looks like ass :)
yes, doesn't look that good, at least on the test scene.
Not only auld and IQ played with that idea. ;)
It just looks not very convincing. Maybe with some more samples over a hemisphere this could be way better. Maybe even some kind of mixing this with some stochastic sampling and some screen space post blur/low pass filtering could do a way better job. Maybe what KK recommends could also help it a lot (also take the second closest object color into account and lerp or something).
Try it all and post screenshots of it - we are interested ;)
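One possible reading of the hemisphere suggestion, as a sketch: jitter the gather direction around the normal and average a few gathers. hash() is a hypothetical per-sample random function, and the sample count and jitter scale are arbitrary:
Code:
vec3 giHemisphere(vec3 p, vec3 n) {
    vec3 g = vec3(0.0);
    for (int s = 0; s < 4; s++) {
        // random offset in [-0.5, 0.5]^3, bent toward the normal
        vec3 j = vec3(hash(float(s)), hash(float(s) + 1.3), hash(float(s) + 2.7)) - 0.5;
        vec3 dir = normalize(n + j);
        g += globalIllumination(p, dir);
    }
    return g / 4.0;
}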
No GI
w/ mild lerp
The issue with any interpolation is that you end up sampling from objects that may not be visible from the current position in the raymarch, e.g. the left wall sampling the sphere when the box is in the way.
Threshold with a max influence radius using smoothstep.
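If I read that right, inside the gather loop each sample's contribution would fade to zero past some radius; maxRadius here is a made-up constant:
Code:
dist = stepDistance * i;
float d = dist - map(p + n * dist, returnColor);
float w = 1.0 - smoothstep(0.0, maxRadius, dist);  // influence fades to 0 at maxRadius
g += returnColor * d * w / i;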
I think whatever you do with this method, there are going to be artefacts. You'll need multiple rays to get a 'decent' effect that correctly accounts for multiple objects, occlusion etc.
If that's not practical, you can probably get away with even the first method, careful scene design, and turning the effect down a bit. You want a strong effect when you're testing like this, but something more subtle tends to look better - and hides the defects :)
Btw, for this particular test scene a simple raytrace would be much faster and would probably let you use multiple rays in realtime for a couple of light bounces :)
LGTM
Yeah, that looks much nicer now :)
LGTBW.
("Looks Good To Be WebGL" :)
Btw: unless my memory is just messing with me, this demo is filled to the brim with fake SSAO-based light emission stuff, as well as some raytracing in places.
looks good (the last one especially). If it's as cheap as an SSAO pass, thumbs up!
Also, is it compatible with a polygon renderer, or only with raymarching?
... it's not SS, as it uses color information gathered during the raymarch at points offset away from the surface along the normal.
This works well if you can get away with using simple colors. With procedural textures you have to gather each object's contribution to the color rather than the actual color at each step, then compute the GI color after the march. But that means potentially computing many procedural textures at every pixel (so an average color should rather be used instead if the textures are expensive).
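As a sketch of that average-color idea, the GI gather could use a cheaper map variant that returns a precomputed per-object average albedo instead of evaluating the full texture. wallDist/brickDist and the avg* colors are placeholders:
Code:
float mapGI(vec3 p, out vec3 returnColor) {
    float dWall = wallDist(p), dBrick = brickDist(p);  // assumed scene SDFs
    if (dWall < dBrick) {
        returnColor = avgWallColor;   // cheap average albedo, not the texture
        return dWall;
    }
    returnColor = avgBrickColor;
    return dBrick;
}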