Bokeh DOF
category: code [glöplog]
:(
yehar: Well yes, I guess technically GPUs are Turing complete :-) My point was that the FIR solution is probably a lot simpler in a GPU context. I wouldn't know how to do an IIR filter using just a plain ol' regular fragment shader (but I am admittedly a n00b).
Revival: I'm a complete newbie to GPUs too, but I'm reading up on it now. What you say seems to be correct. The result of reading from the texture that is being written to is specified as undefined, but it could work: http://www.nvidia.com/docs/IO/8227/GDC2003_SummedAreaTables.pdf. To be on the safe side, one could use an efficient parallel algorithm for the prefix sum: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch39.html. An analogous algorithm can be used to implement IIR filters that have been factored into single poles by partial fraction expansion.
also see http://developer.amd.com/media/gpu_assets/SATsketch-siggraph05.pdf from ATI a few years back.
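For anyone wondering what the lookup side of the summed-area-table idea looks like once the SAT exists, here's a rough fragment-shader sketch of a constant-time box average. All the names (satTex, texel, the fixed radius) are made up for the sketch, and it assumes the SAT lives in a float texture with enough precision:
Code:
uniform sampler2D satTex;   // summed-area table of the scene (float texture)
uniform vec2 texel;         // 1.0 / resolution
varying vec2 uv;

// average over a (2r x 2r) pixel box around p, in constant time
vec3 boxAverage(vec2 p, float r)
{
    vec2 lo = p - r * texel;
    vec2 hi = p + r * texel;
    // SAT identity: box sum = S(hi) - S(lo.x,hi.y) - S(hi.x,lo.y) + S(lo)
    vec3 s = texture2D(satTex, hi).rgb
           - texture2D(satTex, vec2(lo.x, hi.y)).rgb
           - texture2D(satTex, vec2(hi.x, lo.y)).rgb
           + texture2D(satTex, lo).rgb;
    return s / max(4.0 * r * r, 1.0);
}

void main()
{
    float r = 8.0;  // per-pixel blur radius would come from your CoC map instead
    gl_FragColor = vec4(boxAverage(uv, r), 1.0);
}
The blur cost stays the same no matter how big r gets, which is the whole appeal; the catch is building the SAT (and its precision) in the first place, which is what the prefix-sum links above are about.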
or, yea, compute
yehar - reading from the texture you're writing to is allowed on Nvidia, as long as you only read from the same pixel you are writing to (there's an OpenGL extension for it)... which is probably an absolutely useless piece of information for this use case :p
I find an interesting twist: if one is using a gaussian-type kernel that blurs according to the luminosity of another texture, *and* one is using the depth buffer from the scene as that mask channel (as opposed to some kind of gaussian circle gradient or a linear gradient), then running that depth buffer (or perhaps a scene with lighting) through a Dilate-type fragment shader before it feeds the blur map can create an interesting alternative to the heavy hexagonal blur look (rough dilate sketch below). The brighter parts of the luminosity channel treated with the Dilate will do that classic bloom-out, but it's not a played-out look. Similarly, you can just take your luminosity channel, run it through a hexagon pixel shader routine, pass that to the blur kernel that blurs from luminosity, and voila.
If it's a really extreme effect, you can use a pretty low resolution going in and speed things up quite a bit.
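In case the Dilate step above isn't obvious, this is roughly all I mean by it; the names (lumaTex, texel) are placeholders and the 3x3 neighbourhood is just the simplest case, you'd normally run a few passes or use a bigger kernel:
Code:
uniform sampler2D lumaTex;  // the map that will drive the variable blur
uniform vec2 texel;         // 1.0 / resolution
varying vec2 uv;

void main()
{
    // 3x3 dilate: take the maximum over the neighbourhood so bright spots
    // grow outwards before they are fed to the blur
    float m = 0.0;
    for (int y = -1; y <= 1; ++y)
        for (int x = -1; x <= 1; ++x)
            m = max(m, texture2D(lumaTex, uv + vec2(float(x), float(y)) * texel).r);
    gl_FragColor = vec4(vec3(m), 1.0);
}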
This one is also interesting, a memory-efficient algorithm for summed area tables and recursive filters: http://w3.impa.br/~diego/projects/NehEtAl11/
While summed area tables should work well for simple circle-filters, aren't they more or less useless for depth-of-field effects as they provide no means to avoid leaking?
With a little bit of formula rewriting I'm sure anyone can convert the summed area thing to scatter the blobs instead of gathering.
O(n log n)-method: first when rendering each pixel, write both color and the bokeh blob size, then in postprocess recursively split the blob into 2^n half and residue.
O(n): left as an exercise, similar to the tree-approach.
(for extra points, figure how to make it non-square using this method, might not be as trivial as with gathering)
Have you noticed that SQUARE BOKEH doesn't look that bad, can be used stylistically in a demo, and it's really fast?
Square bokeh looks pretty ugly to me. Could be good as a demo effect though.
ohh I like that one! :)
Square bokeh? Use massive pixels instead!
Come on? Glowing cubes with square bokeh? You must be a gamer not to love it.
Just out of curiosity though, has anyone made a demo about it yet?
I'm not sold yet tbh, but I'm sure if Fairlight or ASD used it they'd automatically perfect it. :D
Ok, I'm sold, that was pretty awesome!
Just another quick question: Would Bokeh be the same type of de-focussing that is used to make normal pics look like scale models?
wasn't Excess ahead of them, or was that another kind of dof?
8bb: that's just a question of simulating the right lens aperture / distance scale.
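Concretely it boils down to the thin-lens circle of confusion; a rough sketch, where all the names and units are just assumptions (distances in metres, say):
Code:
// blur-circle radius for a pixel at 'depth', thin-lens style
float cocRadius(float depth, float focalLen, float fstop, float focusDist)
{
    float aperture = focalLen / fstop;   // aperture diameter
    return aperture * focalLen * abs(depth - focusDist)
         / (depth * (focusDist - focalLen));
}
The fake-miniature look is what you get when the aperture is made absurdly large (or the distance scale shrunk) compared to what a real camera could manage at that distance, so everything outside a thin slice around focusDist blurs out.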
Quote:
Come on? Glowing cubes with square bokeh? You must be a gamer not to love it.
I hate to have to say this, but square bokeh was already done like 12 years back: http://www.pouet.net/prod.php?which=344
That's actually one of my favourite demos too. You may now call me a gamer :(
Box blur can be seen as square bokeh given a big enough distance from the focal plane...
8bit: we used bokeh effects in numb res already - but also tried to keep it subtle and not completely overdo it, so i guess most people missed it :)
(we also had dof with no leakage since frameranger btw)
blur your left foot out, your right foot out and now you gotta shout: do the hokey bokeh!
Hyde: We were indeed. It's generally the same DOF algorithm (i.e. inspired by the DICE presentation at SIGGRAPH 2011), but it looks like mine has somewhat less leaking. This might be partially because of the use-cases rather than the implementation itself, but I somehow doubt it. I spent a LOT of time experimenting with leak-elimination. I ended up removing the in-front-of-the-focal-plane DOF, because the thin features of the neuron-cluster model suffered quite a lot. It looked good on "fat" models, but not on thin ones. I hate that I had to remove it, but there was simply no time to fix it either.
Smash: I'd love to pick your brain at some point about leak-prevention, especially in relation to scatter-as-gather. Cortex Sous-Vide was my first demo with DOF (well, apart from a few fixed-function hacks years ago), and I'm sure there's a lot of know-how that I've missed. I've simply not been interested in doing gaussian DOF, because it looks like ass.
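For reference while we're at it, this is the rough shape of one leak-suppression heuristic for a gather-style pass; I'm not claiming it's what either demo actually does, and the packing (rgb + CoC radius in colorCocTex, separate depthTex) and the weights are just assumptions for the sketch. It's also brute force, a real implementation would sample a sparse pattern instead of the full square:
Code:
uniform sampler2D colorCocTex;  // rgb = colour, a = CoC radius in pixels
uniform sampler2D depthTex;     // linear scene depth
uniform vec2 texel;             // 1.0 / resolution
varying vec2 uv;

void main()
{
    vec4  centre      = texture2D(colorCocTex, uv);
    float centreDepth = texture2D(depthTex, uv).r;

    vec3  sum  = centre.rgb;
    float wsum = 1.0;

    const int R = 8;                        // gather radius in pixels
    for (int y = -R; y <= R; ++y)
    for (int x = -R; x <= R; ++x)
    {
        if (x == 0 && y == 0) continue;
        vec2  offs = vec2(float(x), float(y));
        vec2  suv  = uv + offs * texel;
        vec4  s    = texture2D(colorCocTex, suv);
        float sd   = texture2D(depthTex, suv).r;

        // scatter-as-gather: the neighbour only contributes if its own
        // blur circle is big enough to reach this pixel
        float reach = clamp(s.a - length(offs) + 1.0, 0.0, 1.0);

        // leak suppression: a sample lying behind a (nearly) in-focus
        // centre pixel must not bleed over it
        float behind = step(centreDepth + 0.01, sd);
        float sharp  = clamp(1.0 - centre.a, 0.0, 1.0);
        float w = reach * (1.0 - behind * sharp);

        sum  += s.rgb * w;
        wsum += w;
    }
    gl_FragColor = vec4(sum / wsum, 1.0);
}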
Wasn't Bjoer from Tpolm another Bokeh?
Hey Smash, at least you don't overdo it like all these other ribbon and glow demos. *ducks* ;)
So basically post processing then Xerby? Would be pretty cool if modern cameras came equipped with realtime post processing capabilities.