Displacement Mapping
category: general [glöplog]
Why is drawing a low-res model with displacement mapping faster than drawing the full-resolution model?
Uhm. Less polygon data, less setup. Pixel shaders / fixed functions are fast and the textures are resident. There is a break-even point, obviously.
So reading the huge amount of polygon data is the bottleneck?
Reading, storing, transforming, sorting, drawing and generally handling the huge amount of polygons adds up to a bottleneck.
With DM in particular, if you do the tessellation in realtime, you can make it viewport-dependent and just stop adding polygons once they're smaller than one pixel. Compare that to a static mesh, where you need to decide on a tessellation early and stick with it.
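Just to make the "stop once they're smaller than a pixel" part concrete, here's a rough CPU-side sketch of picking a subdivision depth from projected edge size (all names and constants are made up, purely illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Pick a subdivision depth so that subdivided edges end up roughly one pixel
// long on screen. Purely illustrative; a real engine would do this per patch
// edge and clamp against neighbours to avoid cracks.
int tessLevelForEdge(float edgeWorldLength,   // patch edge length in world units
                     float distanceToCamera,  // distance from the camera to the patch
                     float focalLengthPixels) // projection scale: world size -> pixels
{
    // Approximate projected edge length in pixels (ignores orientation).
    float projectedPixels = edgeWorldLength * focalLengthPixels / distanceToCamera;
    // Each subdivision halves the edge, so log2 gives the number of splits needed.
    float levels = std::log2(std::max(projectedPixels, 1.0f));
    return std::clamp((int)std::ceil(levels), 0, 10); // cap so it can't explode
}

int main()
{
    printf("near patch: %d levels\n", tessLevelForEdge(2.0f, 5.0f, 800.0f));
    printf("far patch:  %d levels\n", tessLevelForEdge(2.0f, 200.0f, 800.0f));
}
```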
It's not that dumb a question after all!
Yes, my first thought is: transforming the vertices.
But on the other side you've got to raytrace the displacement map in the shader. It does not work exactly the same way, though.
Then you've got the bandwidth of the vertex data. But you also have the bandwidth of the displacement map.
Bottom line is, the displacement map is much more compact than vertices; like a terrain heightmap, the spatial reference frame is adapted to the local context. It's a kind of data compression, after all.
Then, with vertices you've got to store UV coords, maybe normals, etc. These are computed on the fly with displacement mapping. Then you've got the polygon data, which are pointers into the vertex data, plus maybe per-polygon attributes, I don't know, a texture handle or so.
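To put rough numbers on it (my own, just illustrative): a 256x256 heightmap at 8 bits per texel is 64 KB, while the same grid stored as explicit vertices with position + normal + UV at 32 bytes each is around 2 MB, before you even add the index data. That's the kind of compression I mean.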
(Just a thought: with vertices you've got the problem of LOD when polygons approach pixel size; with displacement mapping, on the other hand, you've got the problem of mipmaps... I saw a paper once that dealt with the problem of mipmaps for bump maps, maybe not trivial, with anisotropy etc.)
By the way, I'm not an expert, so I have been wondering whether the technique of displacement mapping has been implemented with little "cubes" (six quads, maybe three could do) instead of a single quad? Does someone understand what I mean? :) You draw the quads of the cube and in the shader you raytrace the heightfield; that way you've got a displacement mapping that really works when your view is parallel to the heightfield, you really see the mountains in all cases.
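To be clear about what I mean by "raytrace the heightfield" in the shader: the usual trick is a linear search along the view ray in texture space followed by a binary refinement. A minimal CPU-side sketch (the heightfield and step counts are made up, purely to illustrate):

```cpp
#include <cmath>
#include <cstdio>

// Toy heightfield over [0,1]^2 returning heights in roughly [0.25, 0.75];
// stands in for the displacement texture.
static float height(float u, float v)
{
    return 0.5f + 0.25f * std::sin(6.28318f * u) * std::cos(6.28318f * v);
}

// March a ray through the volume above the quad: linear search until the ray
// dips below the surface, then binary refinement. The ray is given in texture
// space (u, v, h), entering at height 1 (top of the volume); step counts are
// arbitrary. Returns the hit parameter t along the ray.
static float raymarchHeightfield(float u0, float v0, float du, float dv, float dh)
{
    const int linearSteps = 32;
    const int binarySteps = 8;
    float t = 0.0f, step = 1.0f / linearSteps;

    // Linear search: advance while the ray is still above the heightfield.
    while (t < 1.0f && (1.0f + t * dh) > height(u0 + t * du, v0 + t * dv))
        t += step;

    // Binary refinement around the crossing point.
    float lo = t - step, hi = t;
    for (int i = 0; i < binarySteps; ++i) {
        float mid = 0.5f * (lo + hi);
        if ((1.0f + mid * dh) > height(u0 + mid * du, v0 + mid * dv))
            lo = mid;
        else
            hi = mid;
    }
    return 0.5f * (lo + hi);
}

int main()
{
    // Ray entering the top face at (0.1, 0.2), heading down and across.
    float t = raymarchHeightfield(0.1f, 0.2f, 0.4f, 0.3f, -1.0f);
    printf("hit at t = %.3f\n", t);
}
```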
any idea?
ps: fuuuuuuuck! I just discovered you can drag and enlarge the box that lets you type your post in the pouet bbs. Neat.
ryg: Aaaaaaaaaaaaaaaah that's very clever!
I kind of see this as a hybrid of rasterization and raytracing.
But maybe you'll still need LOD on the low-poly mesh if things are far away.
Maybe we could generalize this, think recursively...
Every model is just a cube with displacement mapping on its faces; this gives you more polygons. Then on some of these polygons (the ones forming the "mountains" of the six heightfields) you add the possibility of adding a new level of displacement mapping... Mmmh... I'm starting to wonder if this can work :)
The idea is to sort of consider each big object in the world and to "raytrace" its bounding box or bounding sphere, provided we have designed a clever, adapted structure that describes it...
Maybe recursively nested bounding boxes in a tree...
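Something like this, maybe (pure speculation on my part, every name here is invented):

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

// Speculative layout for the "recursive boxes" idea: each node is a bounding
// box carrying one heightfield per face, with finer boxes nested inside where
// more detail is wanted. Traversal would rasterize the faces of a node,
// raymarch its heightfields in the pixel shader, and only descend into
// children that are visible and big enough on screen.
struct DisplacedBoxNode {
    float center[3];                        // box center in parent space
    float halfExtent[3];                    // box half-size
    std::array<uint32_t, 6> faceHeightmap;  // one heightfield handle per face
    std::vector<DisplacedBoxNode> children; // nested detail boxes
};

int main()
{
    DisplacedBoxNode root{ {0, 0, 0}, {10, 10, 10}, {0, 1, 2, 3, 4, 5}, {} };
    root.children.push_back({ {2, 5, 0}, {1, 1, 1}, {6, 7, 8, 9, 10, 11}, {} });
    printf("root has %zu child box(es)\n", root.children.size());
}
```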
Got to think more.
hey hey, now that we've got insane amounts of shader MIPS, could we replace that big displacement map with some NURBS coefficients and trace it on the fly? Or DCT coefficients for the displacement map maybe? For texture data too...
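For the DCT idea, evaluating the heightfield on the fly would just be summing cosine basis functions per sample; a tiny illustrative sketch of that (the coefficient block size and layout are made up):

```cpp
#include <cmath>
#include <cstdio>

// Evaluate a heightfield stored as an N x N block of DCT-II coefficients at
// normalized coordinates (u, v) in [0,1). Brute force and purely illustrative;
// a real decoder would use precomputed tables and keep only a few coefficients.
const int N = 8;

float evalDctHeight(float coeff[N][N], float u, float v)
{
    float h = 0.0f;
    for (int p = 0; p < N; ++p)
        for (int q = 0; q < N; ++q) {
            float cp = (p == 0) ? std::sqrt(1.0f / N) : std::sqrt(2.0f / N);
            float cq = (q == 0) ? std::sqrt(1.0f / N) : std::sqrt(2.0f / N);
            h += cp * cq * coeff[p][q]
               * std::cos(3.14159265f * (u * N + 0.5f) * p / N)
               * std::cos(3.14159265f * (v * N + 0.5f) * q / N);
        }
    return h;
}

int main()
{
    float coeff[N][N] = {};  // mostly zeros: a very smooth, very compact heightfield
    coeff[0][0] = 4.0f;      // DC term (average height)
    coeff[1][0] = 1.0f;      // one gentle slope component
    printf("h(0.25, 0.75) = %.3f\n", evalDctHeight(coeff, 0.25f, 0.75f));
}
```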
That drag-and-enlarge-the-textarea thingy is a safari/webkit feature :)
HelloWorld: Displacement Mapping doesn't necessarily mean using ray tracing:
- you can upload the vertices of a low-poly mesh as vertex shader constants and then just send barycentric coordinates in your vertex buffer to render the hi-res model (ok, this is effectively more a vertex compression scheme than DM and is restricted by the constant count, but well; rough sketch after this list)
- you can use progressive meshes (incl. LOD)
- or you write a REYES implementation (which is more or less the optimal architecture for DM). This would be SW rendering or CUDA/Cell of course.
-> for the first two points see Tom Forsyth's stuff
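To make the first point a bit more concrete, here's a CPU-side mock of what the vertex shader would effectively compute, with the low-poly mesh living in constant registers and only (triangle index, barycentrics, displacement) coming from the vertex buffer. All names are invented and it's only a sketch:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Barycentric interpolation of three control values.
static Vec3 lerp3(Vec3 a, Vec3 b, Vec3 c, float u, float v, float w)
{
    return { a.x * u + b.x * v + c.x * w,
             a.y * u + b.y * v + c.y * w,
             a.z * u + b.z * v + c.z * w };
}

// One expanded hi-res vertex. 'controlPos'/'controlNrm' stand in for the
// low-poly vertices uploaded as shader constants; the per-vertex input is just
// a triangle index, barycentric coordinates and a displacement value.
static Vec3 expandVertex(const Vec3* controlPos, const Vec3* controlNrm,
                         const int tri[3], float u, float v, float displacement)
{
    float w = 1.0f - u - v;
    Vec3 p = lerp3(controlPos[tri[0]], controlPos[tri[1]], controlPos[tri[2]], u, v, w);
    Vec3 n = lerp3(controlNrm[tri[0]], controlNrm[tri[1]], controlNrm[tri[2]], u, v, w);
    return { p.x + n.x * displacement,
             p.y + n.y * displacement,
             p.z + n.z * displacement };
}

int main()
{
    Vec3 pos[3] = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} };
    Vec3 nrm[3] = { {0, 0, 1}, {0, 0, 1}, {0, 0, 1} };
    int  tri[3] = { 0, 1, 2 };
    Vec3 out = expandVertex(pos, nrm, tri, 0.25f, 0.25f, 0.1f);
    printf("expanded vertex: %.2f %.2f %.2f\n", out.x, out.y, out.z);
}
```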
Mmh... Maybe I need to read more. I was referring to displacement mapping as a texture accessed in the pixel shader just like a bump map, not generating real vertices in the pipeline; I guess I meant parallax mapping. When I talk of ray tracing, it's just that in that case you're kind of tracing a ray in your pixel shader to intersect with the heightfield.
Hardware tessellation is the future/present!
Displacement Mapping With Tessellator?
Mmmmhh... seems neat for that time.
and still widely in use..
does anyone know a scientific paper I can cite from?
Not really, also I'm not sure if you're referring to creating new vertices on the fly or doing pixel shader parallax/occlusion/relief etc...
Wow, I understand the subtleties better now
with correct Z occlusion
That's pretty fucking ace imho.
from http://www.inf.ufrgs.br/~oliveira/pubs_files/Policarpo_Oliveira_Comba_RTRM_I3D_2005.pdf
and1, using subd+displacement mapping to compactly store highres meshes:
http://research.microsoft.com/en-us/um/people/hoppe/dss.pdf
you can pretty much work through the references from there.
I have a plan for a deferred-shading-like technique: color and 3D normal stuff written onto 2 render targets, then a fast approximate screen-space displacement mapper + fake HDR stuff + fake focal BS in a second pass, on low-level 2.0 hardware.
It shouldn't look really bad, but is this a bad idea? Because it might work for the cause, even bullshitting 1280+ hardware, smh?
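Roughly this layout for the two targets, if it helps picture it (the channel assignments are just my guess, nothing fixed):

```cpp
#include <cstdint>
#include <cstdio>

// Guess at a two-render-target layout for the idea above; the channel
// assignments are illustrative only. Both targets fit plain 32-bit RGBA8.
struct ColorTarget {                        // RT0
    uint8_t albedoR, albedoG, albedoB;
    uint8_t emissive;                       // spare channel, e.g. for the fake-HDR boost
};

struct NormalDepthTarget {                  // RT1
    uint8_t normalX, normalY, normalZ;      // view-space normal biased into [0, 255]
    uint8_t linearDepth;                    // coarse depth for the fake focal blur
};

int main()
{
    // The second pass would read both targets, apply the screen-space
    // displacement offset, then the fake HDR and the depth-based blur.
    printf("RT0: %zu bytes, RT1: %zu bytes per pixel\n",
           sizeof(ColorTarget), sizeof(NormalDepthTarget));
}
```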