beautypi, procedurally generated nature - technical details?
category: code [glöplog]
I suppose by now you guys have all seen iq's stellar work on http://www.beautypi.com. I'm posting this here in case iq shows up to answer some questions, or someone else knows something, or even knows something more...
I assume the work is technically similar to the steps outlined in http://http.developer.nvidia.com/GPUGems3/gpugems3_ch01.html. The video doesn't get very close to objects, so some questions are open.
1. If this is marching cubes, how do you deal with hard edges (curvature below the Nyquist frequency)? My cheap MC algorithm gives me terribly jagged edges. -- If it's not MC, what is it?
2. How is the grass done? Is this the usual billboard fakery or something else?
3. Is it using raycasting? I assume it's classic meshes, generated with geometry shaders. If yes, did you march the entire scene in one pass? How is the texture encoded?
I'd be happy if someone could help me out here.
paniq: intrigued to know the answers too. I came across that GPU Gems 3 article the other day for the first time, looks really great.
Isn't this "just" regular sphere tracing ?
Quote:
Isn't this "just" regular sphere tracing ?
Pretty sure - not.
Quote:
My cheap MC algorithm gives me terribly jagged edges
No shit.
Well, I'm not sure what it does - but I suspect it's a combination of several techniques.
You could have a base geometry of triangles, procedurally place little cubes onto that geometry using tessellation and geometry shaders, render those cubes with some OIT technique into a linked-list g-buffer, and take all cube "front sides" as raymarching start points in order to sphere trace inside those little cubes.
IIRC iq mentioned he was playing around with tessellation - so it could also be triangles everywhere. But the game "guess the technique from a youtube video" sucks ;)
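For what it's worth, here is a minimal CPU-side sketch of the last step of that idea, the sphere trace inside one little cube, starting from its rasterized front side. sceneSDF and traceCell are hypothetical names and the unit sphere stands in for the real scene; the actual loop would live in a fragment shader fed by the OIT pass, so treat this as an illustration only.
Code:
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Made-up scene: a unit sphere at the origin stands in for the real SDF.
static float sceneSDF(Vec3 p) { return std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z) - 1.0f; }

// March from 'entry' (a cube front side) along 'dir', never further than
// 'cellSize'; if the ray leaves the cell without a hit, the pixel is discarded.
static bool traceCell(Vec3 entry, Vec3 dir, float cellSize, Vec3* hit)
{
    float t = 0.0f;
    for (int i = 0; i < 64 && t < cellSize; ++i) {
        Vec3 p  = add(entry, mul(dir, t));
        float d = sceneSDF(p);
        if (d < 1e-3f) { *hit = p; return true; }  // close enough: surface hit
        t += d;                                    // sphere tracing step
    }
    return false;                                  // left the cell without hitting
}

int main()
{
    Vec3 hit;
    if (traceCell({ -2.0f, 0.0f, 0.0f }, { 1.0f, 0.0f, 0.0f }, 4.0f, &hit))
        std::printf("hit at %.3f %.3f %.3f\n", hit.x, hit.y, hit.z);
    return 0;
}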
considering it's iq, i would say "raymarching distance fields on two triangles" or something similar
You can mix spheretracing with rasterization. You define your scene as a distance field:
- Convert the distance field isosurface to a polygon soup (via something like dual contouring, forgot the exact name)
- Render the polygon soup, zbuffer & normal
- Tada, you got the first hit & normal info much faster than raymarching for each pixel!
And you of course use the distance field for cheap global illumination, soft-shadow, spheretracing for small localized details.
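As a concrete (and heavily simplified) example of reusing the distance field from a rasterized hit: the classic distance-field soft shadow, started from a surface point the polygon soup already gave you. sceneSDF and softShadow are made-up placeholder names, not anything claimed about beautypi.
Code:
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Placeholder distance field (a unit sphere), standing in for the real scene.
static float sceneSDF(Vec3 p) { return std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z) - 1.0f; }

// March from the rasterized surface point toward the light; the narrowest
// "cone" the ray squeezes through controls the penumbra. k: larger = sharper.
static float softShadow(Vec3 surfacePos, Vec3 lightDir, float maxDist, float k)
{
    float res = 1.0f;
    float t   = 0.02f;                      // small offset to leave the surface
    for (int i = 0; i < 48 && t < maxDist; ++i) {
        float d = sceneSDF(add(surfacePos, mul(lightDir, t)));
        if (d < 1e-4f) return 0.0f;         // blocked: fully in shadow
        res = std::min(res, k * d / t);     // track the closest approach so far
        t += d;
    }
    return res;                             // 0 = shadowed, 1 = fully lit
}

int main()
{
    // Shadow at a point just below the sphere, light pointing away from it.
    float s = softShadow({ 0.0f, -1.01f, 0.0f }, { 0.0f, -1.0f, 0.0f }, 10.0f, 8.0f);
    std::printf("shadow factor: %.3f\n", s);
    return 0;
}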
How much faster would that be? Aren't the first steps the fastest ones?
Dual contouring sounds intriguing. Got a source?
xernobyl: They would have been if they were direction dependent. Unfortunately, a lot of time is spent almost (but not quite) hitting a surface.
Indeed... If you have a lot of pixels in the picture which are object edges (say, a landscape with lots of rocks and trees), the first hit will be expensive; those need lots of iterations.
But primary rays are also the easiest to optimize, using tile-based techniques etc.
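To make the tile idea concrete, here is one possible warm start for primary rays: march a single coarse cone per tile that covers the whole tile's footprint, and let every per-pixel ray in that tile start where the coarse march had to stop. coarseTileStart, sceneSDF and all numbers are made up for illustration; nobody in this thread is claimed to do exactly this.
Code:
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

static float sceneSDF(Vec3 p) { return std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z) - 1.0f; }

// One coarse march per tile. tanHalfTile is the tangent of the half-angle of
// the cone covering the tile. The advance stays conservative: any surface is
// at least d away from the central sample at t, and an off-axis ray point at
// parameter s is within roughly (s - t) + s * tanHalfTile of that sample.
static float coarseTileStart(Vec3 origin, Vec3 dir, float tanHalfTile, float tMax)
{
    float t = 0.0f;
    for (int i = 0; i < 32 && t < tMax; ++i) {
        float d     = sceneSDF(add(origin, mul(dir, t)));
        float tNext = (t + d) / (1.0f + tanHalfTile);  // conservative step
        if (tNext <= t + 1e-4f) break;                 // cone about to touch a surface
        t = tNext;
    }
    return t;  // per-pixel rays of this tile can safely start at this distance
}

int main()
{
    // Example: camera 5 units away from a unit sphere, small tile cone.
    float t0 = coarseTileStart({ 0.0f, 0.0f, -5.0f }, { 0.0f, 0.0f, 1.0f }, 0.01f, 100.0f);
    (void)t0;  // per-pixel sphere tracing would begin at t0 instead of 0
    return 0;
}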
Well, in my case I'm mostly interested in partitioned, high-fidelity realtime contouring of SDFs using geometry shaders - deformable and animated structures for a game being one of the applications. I have a marching cubes based implementation that handles animated 64^3 volumes in one frame, but I wonder if I could do better.
hi!
no no! i'm sick of distance fields, lol. there is no raymarching here. there are no rays. there are no distance fields involved either, no contours, no volumetric anything.
it's all very simple actually, just plain straight gl polygons. classic rasterization. regular procedural meshes rasterized as in the good old days, nothing more. this is pretty much fr-08 technology on current hardware. don't underestimate the power of good old polygons. unless you are making a 4k, which is not the case. i love polys!
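In that spirit, a "regular procedural mesh" can be as unglamorous as the CPU-side heightfield grid below, which then just goes through the normal GL vertex/index buffer path. The height function and sizes are made-up placeholders, not anything from beautypi.
Code:
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };

// Placeholder terrain function; real content would use layered noise etc.
static float height(float x, float z)
{
    return std::sin(x * 0.7f) * std::cos(z * 0.5f) * 2.0f;
}

// Build an (n+1)x(n+1) grid of vertices and two triangles per cell.
static void buildGrid(int n, float size,
                      std::vector<Vertex>& verts, std::vector<uint32_t>& indices)
{
    for (int j = 0; j <= n; ++j)
        for (int i = 0; i <= n; ++i) {
            float x = (i / float(n) - 0.5f) * size;
            float z = (j / float(n) - 0.5f) * size;
            verts.push_back({ x, height(x, z), z });
        }
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            uint32_t a = j * (n + 1) + i, b = a + 1;
            uint32_t c = a + (n + 1),    d = c + 1;
            indices.insert(indices.end(), { a, c, b,  b, c, d });
        }
}

int main()
{
    std::vector<Vertex> verts;
    std::vector<uint32_t> indices;
    buildGrid(64, 100.0f, verts, indices);
    std::printf("%zu vertices, %zu triangles\n", verts.size(), indices.size() / 3);
    return 0;
}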
another Q, directed straight at you, IQ:
the video already looks a bit low on framerate, while i wonder how you got it that fast at all (considering it's the techniques you invented for that pixar movie)... the traditional poly approach makes a lot of sense in this context, now that i think about it!
...still: what are the specs of the PC you captured that video on? i am talking about the uppermost video in the first post's link!
i made the video on my laptop, with a gf 560M, captured with fraps. it will get faster, this is an experiment for now :D
ps - i never mentioned pixar anywhere, as far as i know...
But you did do something similar for them though, didn't you?
But regardless, I'd love to see this running in the flesh.
Paniq: excuse the self-promotion but check the slides on my blog, directtovideo.wordpress.com. There is a load of material on GPU marching cubes. Basically geometry shaders are a bad idea and you should check out stream compaction techniques. You should be able to get more like 128 or 256 volumes in less than one frame.
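Not the actual implementation, obviously, but the compaction idea in a nutshell looks roughly like this on the CPU. On the GPU the per-cell counting and the prefix sum would each be their own pass (a scan or a histopyramid), and triangleCount here is just a made-up stand-in for the marching cubes case-table lookup.
Code:
#include <cstdint>
#include <cstdio>
#include <vector>

// Made-up stand-in for the marching cubes case-table lookup: pretend a cell
// emits geometry whenever its sample is above the iso level 0.
static int triangleCount(uint32_t cell, const std::vector<float>& volume)
{
    return (volume[cell] > 0.0f) ? 2 : 0;
}

struct CompactedCell { uint32_t cell; uint32_t firstTriangle; };

// Pass 1: count triangles per cell. Pass 2: exclusive prefix sum over the
// counts. Pass 3: scatter only the non-empty cells into a tight list, so the
// expansion pass runs over exactly the cells that produce geometry.
static std::vector<CompactedCell> compactCells(const std::vector<float>& volume)
{
    const uint32_t numCells = (uint32_t)volume.size();

    std::vector<int> counts(numCells);
    for (uint32_t c = 0; c < numCells; ++c)
        counts[c] = triangleCount(c, volume);

    std::vector<int> offsets(numCells);
    int total = 0;
    for (uint32_t c = 0; c < numCells; ++c) { offsets[c] = total; total += counts[c]; }

    std::vector<CompactedCell> active;
    for (uint32_t c = 0; c < numCells; ++c)
        if (counts[c] > 0)
            active.push_back({ c, (uint32_t)offsets[c] });
    return active;  // expansion then writes each cell's triangles at firstTriangle
}

int main()
{
    std::vector<float> volume = { -1.0f, 0.5f, -0.2f, 0.7f, 0.1f, -0.9f };
    std::vector<CompactedCell> active = compactCells(volume);
    std::printf("%zu active cells\n", active.size());
    return 0;
}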
Link to Smash's blog for the lazy. Fantastic resource btw Smash :)
thanks a lot, I'll have a look. I checked some more papers recently (which were totally over my head tbh), and they also mentioned stream compaction.
@iq's "unless you are making a 4k": When you notice that fancy vertex shaders compress just as nicely as fancy pixel shaders, you have many, many more options. Like, say, making a Chaos Theory remake in 4k. ;)
Polygons are completely underestimated in 4k's now after all this spheretracing craze.
KK! ;)
IQ: tbh I thought it was polys coz of the lighting :) it's harder to do or fake GI so the look is more harsh.