About Camera Focus Algorithm
category: code [glöplog]
Reading a science textbook, I found some very basic optics content
and came up with a strange (and a bit silly) algorithm: make a focus effect by sampling random points over a range.
Take the ray's length and subtract the focal length, then divide this value by what I call the 'focus ratio' (I don't actually know its real name).
It looks like this in code:
Code:
float focusval = (poslength - focusleng) / focusratio;  // signed distance from the focal length, scaled by the 'focus ratio'
rayFocus = vec2(RNG(fragCoord.xy + rayFocus.yx), RNG(fragCoord.yx + rayFocus.xy)) * focusval;  // random 2D offset, larger the further from focus
Then I get samples with the modified ray direction:
Code:
viewDir = raydir(fov, iResolution.xy, fragCoord.xy);     // base view ray for this pixel
viewDir = vec3(viewDir.xy + rayFocus.xy, viewDir.z);     // jitter the ray direction by the focus offset
Add all the samples and divide the color by the number of samples.
With 4 samples it looks like this.
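For illustration, a minimal sketch of that averaging loop (assuming a hypothetical render() function that raymarches the scene for a given direction, and the RNG hash used above; viewDir here is the un-jittered ray from raydir()):
Code:
vec3 color = vec3(0.0);
const int SAMPLES = 4;
for (int i = 0; i < SAMPLES; i++)
{
    // fresh random offset per sample, scaled by the distance from the focal plane
    vec2 rayFocus = vec2(RNG(fragCoord.xy + vec2(float(i))), RNG(fragCoord.yx + vec2(float(i)))) * focusval;
    vec3 sampleDir = vec3(viewDir.xy + rayFocus, viewDir.z);
    color += render(sampleDir);   // render() is a stand-in for the actual raymarch
}
color /= float(SAMPLES);          // average the samples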
It seems to work, but the results are a bit too noisy with this kind of sampling (again, I don't know its real name), and the frame rate drops below 40 at 720p on my GTX 960.
I wonder, is there a better algorithm than mine?
Thanks~
:D
What is your poslength? It seems odd to have, since the way a camera lens refracts light doesn't depend on how far away the visible things are or how long you decide to make your view ray.
The usual construction is to find the point in focus and then perturb only the origin point of the ray. This simulates light arriving at different points on the camera lens, which is the origin of this effect: the amount of blur (the "circle of confusion") is a linear function of the absolute distance from the focal plane. To visualize it, the rays that map to a single point in the output image essentially form a double cone with the tips at the focal point.
So first we compute the focal point via the z-distance to the focal plane (if we compute the standard distance we get a focal sphere instead of a plane):
Code:
vec3 focalPoint = dir * (focalLength / abs(dir.z));   // scale the view ray so its z-depth equals focalLength
This is the point that is in focus in this direction; all points on the lens of the camera refract illumination from this point to the same position in the final image so the image of this point appears sharp. Note that this point is only a virtual point; it might be behind an object and often no object is located at this point, but it's the point where an object would be in focus if it existed and was visible.
Then we choose a random point on the lens of the camera:
Code:
float angle = rand()*2.*3.14159265;                                        // random angle on the lens disk
vec3 origin = apertureSize*sqrt(rand())*vec3(cos(angle), sin(angle), 0.);  // sqrt gives a uniform distribution over the disk
This samples a disk which is a more common aperture shape in actual cameras than a square, but essentially any choice is good if it gives you the visuals you want.
Now we have the ray origin (the point on the lens) and the point we wish to cast the ray towards (the focal point) so the final ray direction is
Code:
dir = normalize(focalPoint - origin);   // ray from the lens sample through the focal point
Note that I'm working in camera coordinates here, so the camera is located at the origin and points along the z axis. If we want to place the camera somewhere else in another orientation, we'll have to transform the origin and direction somehow. This is often done with a general view matrix, though for intros you might want something simpler and more hard-coded. But note that you can't simply add an offset to the origin anymore; you also have to rotate the origin point to match your view direction so that the lens points towards it.
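Not from the post above, just a minimal sketch of what that transform can look like, assuming a camera placed at camPos and looking towards camTarget with +Y as world up:
Code:
// build an orthonormal camera basis; the camera looks along 'forward'
vec3 forward = normalize(camTarget - camPos);
vec3 right   = normalize(cross(forward, vec3(0.0, 1.0, 0.0)));
vec3 up      = cross(right, forward);
mat3 camToWorld = mat3(right, up, forward);        // columns are the basis vectors

// rotate the lens sample and the ray, then place the lens at the camera position
vec3 worldOrigin = camPos + camToWorld * origin;
vec3 worldDir    = normalize(camToWorld * dir);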
Even with raymarching, wouldn't it be more performant to just "fake" depth of field in post?
msqrt:
That's a very beautiful method..
Thanks for the solution :D
gargaj:
Well, I don't know how to make a blur effect..
Minus256: no prob, glad you understood the explanation :--D
Gargaj: oh yes, it definitely would. You could also probably try to combine the approaches: shoot the rays randomly and then try to filter away most of the noise. You might get rid of some of the annoying artifacts in pure post-process solutions
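A rough sketch of that filtering step, assuming the noisy image from the first pass is bound as iChannel0 in a second pass (a plain 3x3 box filter is the simplest possible choice; something depth- or edge-aware would keep more detail):
Code:
// second pass: average a small neighbourhood to knock down the sampling noise
vec3 filtered = vec3(0.0);
for (int y = -1; y <= 1; y++)
    for (int x = -1; x <= 1; x++)
        filtered += texture(iChannel0, (fragCoord + vec2(float(x), float(y))) / iResolution.xy).rgb;
filtered /= 9.0;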
Is there any way to modify the rendered scene frame?
I'm new to GLSL and want to learn more about it!
Just make sure you render into a texture, and then use that texture in a second pass.
I'm guessing you're using Shadertoy, so just use the "+" button above the shader to add a new pass that renders to Buffer A, and then set iChannel0 in your output pass to use Buffer A as an input texture.
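A minimal sketch of that layout using Shadertoy's conventions (renderScene() below is just a stand-in for whatever your existing raymarcher returns):
Code:
// Buffer A: your existing raymarcher, unchanged
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    fragColor = vec4(renderScene(fragCoord), 1.0);   // hypothetical scene function
}

// Image pass, with iChannel0 set to Buffer A: post-process the rendered frame here
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;
    fragColor = texture(iChannel0, uv);              // plain copy; add blur/grading here
}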
gargaj: Thanks! :DDDDD
Do you mind if I ask how to make a framebuffer in iq's 4k framework?
That I genuinely don't know because I haven't used OpenGL actively for a decade now, but I'm sure someone else here will gladly point out what the correct-method-du-jour is.
If just a postprocess pass is enough: Leviathan 2.0
If you don't actually need Visual Studio and C/C++ coding, you can use Compofiller Studio
http://www.kameli.net/compofillerstudio/
Enable two-pass rendering for post-processing. The same shader is used for both rendering passes, just with a different time value. Use negative time for the post-processing pass.
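Purely as a hypothetical sketch of that setup (the uniform and sampler names below are made up for illustration; check Compofiller Studio's templates for the real ones):
Code:
uniform float time;        // assumed name; negative during the post-processing pass
uniform sampler2D tex0;    // assumed name; output of the first pass
uniform vec2 resolution;   // assumed name

void main()
{
    vec2 uv = gl_FragCoord.xy / resolution;
    if (time < 0.0)
    {
        // post-processing pass: read back the rendered frame and filter/grade it
        gl_FragColor = texture2D(tex0, uv);
    }
    else
    {
        // normal scene-rendering pass
        gl_FragColor = vec4(renderScene(uv, time), 1.0);   // stand-in for your scene
    }
}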