Polygon edge detection algorithm
category: general [glöplog]
I only know the one where you go through all the faces facing the camera and insert their edges into a list, or remove them if they're already in the list... it may not be exactly this but it's easy to get there.
Is there any faster algorithm that's not too hard to implement?
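A minimal C++ sketch of that brute-force "edge parity" idea, just to make it concrete: every edge of a front-facing triangle gets toggled in a set, and whatever is left over borders a back-facing (or no) triangle. All names here are made up, and it assumes triangle meshes with CCW winding.
Code:
#include <array>
#include <set>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// for every front-facing triangle, toggle its three edges in a set; the edges left over
// are shared with a back-facing (or no) triangle, so they are the silhouette edges.
std::vector<std::pair<int,int>> SilhouetteEdgesBruteForce(const std::vector<Vec3>& pos,
                                                          const std::vector<std::array<int,3>>& tris,
                                                          Vec3 eye)
{
    std::set<std::pair<int,int>> open;
    for (const std::array<int,3>& t : tris)
    {
        Vec3 n = cross(sub(pos[t[1]], pos[t[0]]), sub(pos[t[2]], pos[t[0]]));
        if (dot(n, sub(eye, pos[t[0]])) <= 0.0f)   // back-facing, skip
            continue;
        for (int i = 0; i < 3; ++i)
        {
            int a = t[i], b = t[(i + 1) % 3];
            std::pair<int,int> e = (a < b) ? std::make_pair(a, b) : std::make_pair(b, a);
            if (!open.erase(e)) open.insert(e);    // second front face cancels the edge
        }
    }
    return std::vector<std::pair<int,int>>(open.begin(), open.end());
}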
"polygon edge detection" should be "mesh edge detection" or maybe "mesh sillouette" or something like that.
Do you want the mesh silhouette as seen from an arbitrary point?
If so you can use the same edge-finding algorithms that are used by shadow volumes.
Where the "arbitrary point" point is the camera ;)
for shadow volumes we used to do this:
pregenerate a winged edge array (stores the edge and the two polygons connected to it - i'm assuming manifold meshes, i.e. only up to 2 polygons can be connected to an edge).
check each edge to see if one of the polys faces away from the camera and one faces towards (dot the poly normal with the camera->first vertex vector in your case... or light->first vertex for shadows). those are the ones you need.
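A minimal sketch of that winged-edge test, reusing the Vec3 helpers from the sketch above; the struct layout and names are my own assumptions, not from any particular engine.
Code:
// reusing Vec3, sub() and dot() from the sketch above.
struct WingedEdge
{
    int v0, v1;    // edge endpoints (indices into the vertex array)
    int face[2];   // the up-to-two faces sharing the edge; face[1] == -1 for border edges
};

// faceNormal/facePoint are precomputed per face; 'eye' is the camera (or light) position.
// nothing needs to be normalized here, only the sign of the dot product matters.
std::vector<int> FindSilhouetteEdges(const std::vector<WingedEdge>& edges,
                                     const std::vector<Vec3>& faceNormal,
                                     const std::vector<Vec3>& facePoint,
                                     Vec3 eye)
{
    std::vector<int> result;
    for (int i = 0; i < (int)edges.size(); ++i)
    {
        const WingedEdge& e = edges[i];
        bool front0 = dot(faceNormal[e.face[0]], sub(eye, facePoint[e.face[0]])) > 0.0f;
        bool front1 = (e.face[1] >= 0) &&
                      dot(faceNormal[e.face[1]], sub(eye, facePoint[e.face[1]])) > 0.0f;
        if (front0 != front1)     // one front-facing, one back-facing (or an open border edge)
            result.push_back(i);
    }
    return result;
}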
additional property that may or may not improve speed: silhouette edges for a 2-manifold mesh form closed loops. so once you've found a silhouette edge, you can just test the edges incident to your current edge's "end" vertex.
+: traverses the silhouette edges "in order", which is useful to have sometimes.
-: tends to be slower (per-edge) than the straightforward method because of the increased amount of pointer chasing.
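A rough sketch of that loop-following idea, building on the WingedEdge struct above. It assumes the generic case where exactly two silhouette edges meet at each vertex; 'incident' and 'IsSilhouette' are made-up names for a precomputed vertex-to-edge adjacency list and the per-edge front/back test.
Code:
#include <functional>

// incident[v] lists the edge indices touching vertex v (precomputed once).
// IsSilhouette(edgeIndex) is the per-edge front/back test from the previous sketch.
std::vector<int> WalkSilhouetteLoop(int startEdge,
                                    const std::vector<WingedEdge>& edges,
                                    const std::vector<std::vector<int>>& incident,
                                    const std::function<bool(int)>& IsSilhouette)
{
    std::vector<int> loop;
    std::vector<bool> visited(edges.size(), false);
    int e = startEdge;
    int v = edges[e].v1;                 // the "end" vertex we continue the walk from
    while (!visited[e])
    {
        visited[e] = true;
        loop.push_back(e);
        for (int cand : incident[v])     // only look at edges around the current vertex
        {
            if (!visited[cand] && IsSilhouette(cand))
            {
                v = (edges[cand].v0 == v) ? edges[cand].v1 : edges[cand].v0;
                e = cand;                // step to the neighbouring silhouette edge
                break;
            }
        }
    }
    return loop;                         // terminates once the loop closes on startEdge
}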
I don't believe there's a faster one.
just remember that you don't need to normalize your normals or camera->vertex vectors here, it's only the signs you care about. Ah, for directional lights (sun) it's faster of course - you only need to compare the signs of the (camera space) z components of the face normals.
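A tiny sketch of that directional-case sign compare, using the WingedEdge struct from above. The sign convention is an assumption: viewer/light looking down -z, so front-facing means a positive z component of the transformed normal.
Code:
// face normals already transformed into light/camera space; only their z sign is used.
bool IsSilhouetteEdgeDirectional(const WingedEdge& e, const std::vector<Vec3>& camSpaceNormal)
{
    bool front0 = camSpaceNormal[e.face[0]].z > 0.0f;                        // facing the viewer
    bool front1 = (e.face[1] >= 0) && (camSpaceNormal[e.face[1]].z > 0.0f);
    return front0 != front1;   // opposite signs (or a border edge on a front-facing face)
}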
oswald, well, there's clustered backface culling, but i doubt it would help much with typical meshes.
It looks as if the algorithm you are talking about is O(n), so probably there are no faster algorithms
On the other hand, if it is for rendering silhouette lines (as in some NPR renderings), maybe you can make it render-size dependent instead of mesh dependent: render the scene with flat colors for every mesh without antialiasing and then check for adjacent pixels of different color...
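A quick sketch of that screen-space variant, assuming you can read back a per-mesh ID (flat color) buffer; a pixel is marked as an edge when its right or bottom neighbour has a different ID. Names are made up.
Code:
#include <cstdint>
#include <vector>

// idBuffer holds one flat color / object ID per pixel (w*h, row major, no antialiasing).
// returns a mask with 255 wherever the right or bottom neighbour has a different ID.
std::vector<std::uint8_t> DetectIdEdges(const std::vector<std::uint32_t>& idBuffer, int w, int h)
{
    std::vector<std::uint8_t> edge(w * h, 0);
    for (int y = 0; y < h - 1; ++y)
        for (int x = 0; x < w - 1; ++x)
        {
            std::uint32_t id = idBuffer[y * w + x];
            if (id != idBuffer[y * w + x + 1] || id != idBuffer[(y + 1) * w + x])
                edge[y * w + x] = 255;
        }
    return edge;
}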
no asymptotically faster algorithm, you mean. there's a world of difference :)
besides, an optimal algorithm would be O(k) (where k=number of silhouette edges), not O(n) (n=number of edges in mesh). i don't think such an algorithm exists, but you might be able to get something like O(k+log n) if:
- you use something like hierarchical clustered backface culling to quickly reject groups of all-back/all-front faces.
- you traverse along silhouette loops.
- you find a quick way to find "seed edges" to start traversing from. this is probably the clincher - i don't think you can narrow the candidate edge list down to C*(k+log(n)) edges (C independent of the mesh) with just clustered backface culling, at least not for general 2-manifold meshes.
then again, as i said earlier, this problem is mostly academic, because i very much doubt the overhead for CBC is going to be worth it in practice.
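For what it's worth, a rough sketch of the per-cluster rejection test for clustered backface culling, restricted to the directional/orthographic case to keep it simple (the cluster layout and names are assumptions): a cluster whose normal cone is entirely front- or back-facing cannot contain a silhouette edge, so it can be skipped.
Code:
#include <algorithm>
#include <cmath>

// Vec3 and dot() as in the first sketch. each cluster stores a normal cone:
// the normalized average face normal and the cosine of the cone half-angle.
struct FaceCluster
{
    Vec3 coneAxis;
    float coneCosHalf;
    std::vector<int> edges;   // candidate edges owned by this cluster
};

// viewDir is the normalized view (or light) direction. if every normal in the cone has
// the same dot-product sign against viewDir, all faces in the cluster face the same way
// (all front or all back), so no front/back transition, and thus no silhouette edge,
// can occur inside it.
bool ClusterCanHoldSilhouette(const FaceCluster& c, Vec3 viewDir)
{
    float d = dot(c.coneAxis, viewDir);
    float s = std::sqrt(std::max(0.0f, 1.0f - c.coneCosHalf * c.coneCosHalf)); // sin(half-angle)
    return std::fabs(d) <= s;   // cone straddles the view plane -> may contain silhouette edges
}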
but ryg, since you are rotating the mesh, you would need to regenerate the hierarchy... and that should take O(n), I think...
Wait wait, sorry, I've just understood what you said ryg... yes, maybe there is an algorithm like that :)
ermm... I'm thinking about it... and it looks as if, anyway, in the worst case it is O(n)...
A question: have you used something like a threshold in the z-buffer between adjacent pixels for drawing silhouettes in NPR rendering?
texel: I really want the edge lines, not the pixel data :)
I'm still not sure what I want to do with it.
Quote:
We found that for small models, under 10,000 polygons for our hardware, brute force silhouette extraction is easy to implement and runs nearly as fast as much more complex methods.
nice review broderick :) ty
JustFakeItAndNooneWillNotice(tm)
xernobyl, just some silly idea: draw the vertices alone to some 2d point data structure and do the barricading sleeping tigers algorithm. no clue about speed, can't be arsed to figure it out.
Quote:
for shadow volumes we used to do this:
pregenerate a winged edge array (stores the edge and the two polygons connected to it - i'm assuming manifold meshes, i.e. only up to 2 polygons can be connected to an edge).
check each edge to see if one of the polys faces away from the camera and one faces towards (dot the poly normal with the camera->first vertex vector in your case... or light->first vertex for shadows). those are the ones you need.
note: hehe, I have the very same algorithm in karate, this is for sure the thing to do for software rendering. here is an example of rendering with black polygons for edges:
Note the edge polygons also need a "normal vector" from the shape polygon, for the stroke width. You could re-link all the edges to find the whole closed contour polygon, but re-using the 3D vertex's normal gives quite good results (and needs no extra "asymptotically" shit ;)
*free ad*: karate makes it possible to re-use any kind of texture rendering object for the edges. (you just patch a texture object with some other texture object used as the edge renderer) :)
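A small sketch of that "black edge polygon" idea as I read it: each silhouette edge becomes a quad extruded along the vertex normals, which is what gives the stroke its width. Names are made up, nothing karate-specific; Vec3 as in the first sketch.
Code:
// v0/v1 are the silhouette edge endpoints, n0/n1 their vertex normals.
struct Quad { Vec3 p[4]; };

Quad MakeEdgeStrokeQuad(Vec3 v0, Vec3 v1, Vec3 n0, Vec3 n1, float strokeWidth)
{
    Quad q;
    q.p[0] = v0;
    q.p[1] = v1;
    q.p[2] = { v1.x + n1.x * strokeWidth, v1.y + n1.y * strokeWidth, v1.z + n1.z * strokeWidth };
    q.p[3] = { v0.x + n0.x * strokeWidth, v0.y + n0.y * strokeWidth, v0.z + n0.z * strokeWidth };
    return q;   // render this quad in the stroke color (e.g. black) behind/over the mesh
}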
Quote:
for shadow volumes we used to do this:
pregenerate a winged edge array (stores the edge and the two polygons connected to it - i'm assuming manifold meshes, i.e. only up to 2 polygons can be connected to an edge).
check each edge to see if one of the polys faces away from the camera and one faces towards (dot the poly normal with the camera->first vertex vector in your case... or light->first vertex for shadows). those are the ones you need.
trivium: build "strips" of winged edges (first polygon on next edge is second triangle on previous edge) and get twice the speed.