Separable Subsurface Scattering (SSSS) for skin shading realtime demo
category: gfx [glöplog]
Check this guy's work on realtime subsurface scattering for realistic skin shading; it looks pretty good/realistic to me (albeit a little too reflective). And it's even realtime!
http://iryoku.com/separable-sss-released
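(For the curious: the core trick, as I understand it, is approximating the skin diffusion profile with a separable kernel, so the expensive 2D screen-space blur collapses into two cheap 1D passes, horizontal then vertical. A rough NumPy sketch of that idea, using a plain Gaussian as a stand-in for the real measured diffusion profile:)

```python
import numpy as np

def diffusion_kernel(radius=3, sigma=1.5):
    # Gaussian stand-in for the measured skin diffusion profile,
    # normalized so the blur conserves energy.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def separable_sss_blur(irradiance, kernel):
    # Pass 1: 1D convolution along each row (horizontal).
    h = np.apply_along_axis(np.convolve, 1, irradiance, kernel, mode="same")
    # Pass 2: 1D convolution along each column (vertical).
    return np.apply_along_axis(np.convolve, 0, h, kernel, mode="same")
```

(The real technique additionally modulates the kernel width by depth and only blurs the diffuse irradiance, keeping specular sharp; this sketch skips all of that.)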
i've seen this do the rounds the last week or so. initially i was impressed, but then
i saw http://www.iryoku.com/teasing-our-real-time-skin-rendering-advances (comparison with/without shots) and http://www.ir-ltd.net/infinite-3d-head-scan-released .. and i realised he had managed to get a small sss effect at the cost of losing a lot of the texture detail.
if you ever wondered about the importance of presentation and graphics in realtime demos this is it - cos that's a fucking nice head model doing most of the work there. :)
That's pretty much my sentiments as well. As far as the author goes, I'm far more impressed by his work on SMAA (not that it's hard to impress a non-programmer like myself :)
gloom: about his work on mlaa/smaa im going to refrain from commenting because i know too much back story. :)
He seems to be a bit of a poser.
smash: fair enough. ;)
i checked out the realtime version and on one hand the rendering is mighty impressive, but on the other hand:
haha, great catch :)
:D
OMG! it's actually morse code for 'HAIL SATAN'!
Finally realistic skin rendering. Waiting for the pr0n industry to use this.
3...2...1...
Realistic skin rendering has been around for a few years - nothing new there. Realistic human _movement_ on the other hand? Still a few years off, at least..
What gloom said. I'm not sure about the workflow, but I've seen previous techniques with similar-looking results running faster on my machine. I remember one published in a GPU Gems book, or ShaderX, or a similar book a couple of years ago.
Quote:
He seems to be a bit of a poser.
Why do you say that?
they sure are posers. I've seen what smaa looks like. pff. and that video really does not show any good example of what you could do to make SSS stand out as a rendering effect. a human head, or rather its ears, nose and eyelids, is just the wrong model, or basically the wrong object, to showcase SSS. and he diluted the effect by using the usual high-res skin texture detail bump mapping.
btw: can't remember any demo attempting to actually show a _realistic_ human or head/face... and I have been "watching demos" since the MSX age in the 80's.
too much effort, for just a single scene, maybe... of course, all of us are highly trained on how a realistic face should look (and animate), so I guess it is one of the hardest efforts in CG.
could someone here point out any attempt worth revisiting?
now I remember one post from iq about the lack of realism in demos... indeed.
Quote:
btw: can't remember any demo attempting to actually show a _realistic_ human or head/face...
You say that as if it was ever the goal of any demo to depict a realistic face or head. Why?
Quote:
can't remember any demo attempting to actually show a _realistic_ human or head/face...
Well there's http://www.pouet.net/prod.php?which=54604, but there's still a couple of problems with that argument:
1. As far as realtime goes, perhaps last year's L.A. Noire was the first time I could say a human head was "realistic". Note that their tech isn't actually a cheap method you can hack together with two Kinects and a digital camera.
2. Having a realistic human head implies you'll need a level of consistent realism. That's an insane amount of _content-related_ effort even for a paid job, let alone 3-4 people messing around for a party of 100 people. Again, you could argue that the scene is driven by impressing each other, but you either spend months on that one head like this guy (and the company who provided him the data) or have an actual demo.
@gloom: I would never presume to imply what the demoscene mission statement or objectives are, just asking if any production ever attempted it. ;) can't remember any, so I guess there isn't any.
@gargaj on 2): yep, that was my point of view. Trying to "do it right" is too much effort for any demogroup of the standard size, and the "wow factor" benefit could just not be enough ROI for the investment in time. ;)
Thanks for the pointer on that Nuance production. They actually had two nice features: somewhat realistic head models, and lipsync! (although not simultaneously). Shame about the color scheme (for my taste).
Will take a look at L.A. Noire on the PS3... along those lines, I also liked the facial expressiveness (especially the eyes) in Uncharted 2-3.
heheheheh, it's funny that zcareplex mk5 got mentioned for realistic head rendering. i will demystify this myself before someone else does:
it's so simple. we did a laser scan of our heads (one of cosmic's friends owns a scanner) ... each face was scanned from 3 perspectives and the scan took several minutes to complete. that's also why the eyes are closed. then we took some photos, triangulated the point cloud, cleaned the mesh, and mapped the textures accordingly. everything else is done with 2 simple blue/yellowish point light sources. that's it.
It's all about fake.
however, i think rendering skin in a still image is somewhat easier than doing it in motion/realtime; things normally start to look dull, as those micro movements and subtle effects can't be achieved easily, and you always run into some kind of uncanny valley at some point.
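(and the "2 simple blue/yellowish point lights" part really is just plain Lambert shading; a minimal sketch, with made-up light positions and colors, no SSS anywhere:)

```python
import numpy as np

def lambert_two_lights(normals, positions, lights):
    # Accumulate a clamped N.L Lambert term from each colored point light.
    color = np.zeros((len(normals), 3))
    for light_pos, light_color in lights:
        to_light = light_pos - positions
        to_light = to_light / np.linalg.norm(to_light, axis=1, keepdims=True)
        ndotl = np.clip(np.sum(normals * to_light, axis=1), 0.0, 1.0)
        color += ndotl[:, None] * light_color
    return np.clip(color, 0.0, 1.0)

# hypothetical blue-ish and yellow-ish lights, roughly in the spirit of that setup
lights = [(np.array([ 2.0, 1.0, 3.0]), np.array([0.4, 0.5, 1.0])),
          (np.array([-2.0, 1.0, 3.0]), np.array([1.0, 0.9, 0.4]))]
```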
speaking of the initial post: i think this guy is a braggart who shows off too much. :)