Re: mesh encoding for 4k intros
category: general [glöplog]
Interesting read.
You might also want to check out:
http://www.gvu.gatech.edu/~jarek/papers/CornerTableSMI.pdf
(<2 bits for triangle connectivity, <1 bit for vertex locations => <3 bits per triangle total)
http://www.cs.technion.ac.il/~gotsman/AmendedPubl/SpectralCompression/SpectralCompression.pdf
(makes a good argument for quantizing frequency spectrum instead of directly quantizing vertex locations)
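For anyone who'd rather see the data structure than read the paper: the corner table boils down to two flat integer arrays, roughly like this (my own sketch, the names are mine and not the paper's):

typedef struct
{
    int *V;   /* V[c] = vertex index of corner c           (3*T entries) */
    int *O;   /* O[c] = corner opposite c, across its edge (3*T entries) */
} CornerTable;

/* corners 3t, 3t+1 and 3t+2 belong to triangle t */
static int tri (int c) { return c / 3; }
static int next(int c) { return 3 * (c / 3) + (c + 1) % 3; }
static int prev(int c) { return 3 * (c / 3) + (c + 2) % 3; }

/* triangle adjacent to tri(c) across the edge opposite corner c */
static int neighbour(const CornerTable *ct, int c) { return tri(ct->O[c]); }

The Edgebreaker coding (the CLERS symbol stream) sits on top of that, but the table above is all the connectivity you need at runtime.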
Hmm, some of those decompression routines look too large for 4k...
auld: Loonies already used a simplified version of Edgebreaker in Benitoite
I'm no PC coder, but as far as I know meshes should be generated, not encoded :)
true... but I wonder if iq's method beats it due to its simplicity and reliance on the compressor?
mrtheplague, as far as I understand, in the Edgebreaker paper they only speak about topology compression, and then 2 bits/tri is not impressive, I think I'm around 5 bits/tri with my cheap methods. The real problem is coding the vertices (the prediction errors), but unfortunately they don't speak about that (don't give figures) in the paper.
However I know the state of the art is a few bits per triangle, both geometry and topology included. But that only works for meshes where prediction works very well, i.e. highly tessellated (read: smooth) meshes... Which raises the question of whether it's worth encoding a low-res mesh at a high bitrate and subdividing, instead of directly storing a high-res mesh and skipping the subdivision code... My feeling is the second will be best for 4k, but I'm not sure of course.
Should I put some of the meshes I used in the public domain so people can test and try to beat my compression rates? That would be cool, and very useful for all of us!
ps-Oswald, only if you want to show extruded cubes and planes the rest of your life... ;)
sorry, I meant "first will be best for 4k" (low_poly_mesh + subdiv)
i think you should
Yeah iq, I think you're right that these papers focus on large meshes and don't consider code size. The Edgebreaker paper does talk about vertex compression, but they use a parallelogram predictor which, as you say, doesn't make much sense for meshes that aren't locally flat. Compressing control meshes for subdivision surfaces is tougher since they're already in such a compact format... so basically I agree with everything you're saying! :) Just thought I'd add those links for people who hadn't seen them in case they inspired some new ideas.
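(For the curious: the parallelogram rule just completes the parallelogram over the shared edge, so a sketch of it is a one-liner per coordinate; here a, b, c are the corners of the already-decoded triangle and b-c is the edge shared with the new vertex:

typedef struct { float x, y, z; } vec3;

/* predict the new vertex as b + c - a; only the quantized
   difference to the real position needs to be stored */
static vec3 predict_parallelogram(vec3 a, vec3 b, vec3 c)
{
    vec3 p;
    p.x = b.x + c.x - a.x;
    p.y = b.y + c.y - a.y;
    p.z = b.z + c.z - a.z;
    return p;
}

which obviously only pays off when neighbouring triangles are roughly coplanar and of similar size, hence the problem with coarse control meshes.)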
I think you should definitely release your data for people to play with. It's easy to sit around on a message board and speculate about what works best, but more useful to actually go and try it out!
ok, I'll try to do it asap...
1. I guess a stupid obj or xml with the list of vertices and quad indices should be fine for everyone? Or even simpler, just two small C arrays (no parsing needed, see the tiny sketch below)?
2. Should we do it (posting results or whatever) here in a pouet thread?
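(By "two small C arrays" I mean nothing fancier than this, made-up data, a single quad just to show the layout:

static const float verts[] = {   /* x, y, z per vertex        */
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     1.0f,  1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f,
};

static const int quads[] = {     /* 4 vertex indices per quad */
    0, 1, 2, 3,
};

so everyone could just #include the data and start playing.)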
That would be fun!
Oh, and please do post some object I can load in Max. Like .obj. But C arrays would be also handy.
iq: wavefront .obj works everywhere. :)
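(and a quad-only .obj is trivial to parse anyway; made-up numbers, and note the indices in the "f" lines are 1-based:

v -1.0 -1.0 0.0
v  1.0 -1.0 0.0
v  1.0  1.0 0.0
v -1.0  1.0 0.0
f 1 2 3 4

one "v" line per vertex, one "f" line per quad, and that's it)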
what about starting a little competition...
we take a base object (maybe the pouet pig obj?) and try to get the best possible quality/size ratio!
Yeah, I vote for OBJ too.
If you want to play around with the pig model, I'm hosting it here: http://p.oisono.us/origins
If we're going to have a competition, we'll have to define an error metric (i.e., a way of determining how far the reconstructed model is from the original model). Maybe someone can code up a little "judge" app that sucks in two OBJs and spits out the relative error. [Maybe that person can be me.]
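A real judge should probably measure a sampled surface-to-surface distance (Metro-style), but as a first strawman, and assuming the decompressor outputs the same vertices in the same order as the original, something as dumb as RMS vertex error over the bounding-box diagonal would already let us rank entries:

#include <math.h>

/* strawman metric: RMS vertex displacement, normalised by the
   bounding-box diagonal of the original mesh.  Assumes both meshes
   have the same vertex count and ordering. */
float relative_rms_error(const float *orig, const float *recon, int nverts)
{
    float mn[3] = {  1e30f,  1e30f,  1e30f };
    float mx[3] = { -1e30f, -1e30f, -1e30f };
    double sum = 0.0;
    int i, k;

    for (i = 0; i < nverts; i++)
        for (k = 0; k < 3; k++)
        {
            float o = orig[3 * i + k];
            float d = o - recon[3 * i + k];
            sum += d * d;
            if (o < mn[k]) mn[k] = o;
            if (o > mx[k]) mx[k] = o;
        }

    {
        float dx = mx[0] - mn[0], dy = mx[1] - mn[1], dz = mx[2] - mn[2];
        float diag = (float)sqrt(dx * dx + dy * dy + dz * dz);
        return (float)sqrt(sum / nverts) / diag;
    }
}

(names made up, the real tool can do something smarter)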
or how about public voting?
mrtheplague, could you upload the unsubdivided mesh? That one is far too high-poly for the purpose of the competition...
ah, you can develop the error calculator, that would be nice.
Ok, preliminary version of the contest page: http://www.rgba.org/iq/trastero/4kmesh
Sorry for my poor English. Please comment so we can set the contest up asap.
(ah, I will prepare the basic sample app as soon as I have one hour free)
The links-section is a bit... wrong? :)
"We introduce FreeLence, a novel and simple single-rate compression coder for triangle manifold meshes. Our method uses free valences and exploits geometric information for connectivity encoding. Furthermore, we introduce a novel linear prediction scheme for geometry compression of 3D meshes. Together, these approaches yield a significant entropy reduction for mesh encoding with an average of 30% over leading single-rate region-growing coders, both for connectivity and geometry."
http://page.mi.fu-berlin.de/polthier/articles/freelence/freelenceEG2005.pdf
i don't see the point. why would you go to great lengths to encode a mesh in 4k if you could just as well code the mesh? except for a few specific 4k ideas of course, doing it by hand or with some homebrewed tool (and not importing some 3ds file) sounds much more reasonable to me...
>mrtheplague, could you upload the unsubdivided mesh?
yeah, it had a bunch of non-quads in it which were a bitch to get rid of, but I've now added a quad-only control mesh:
http://p.oisono.us/origins/pig_purequad.obj
(there's also a link from the original page)