Raymarching Beginners' Thread
category: code [glöplog]
IQ:
Quote:
increasing the iteration count in some raymarched recursive sierpinski-distributed spheres:
i had lots of fun trying to put that into this 4k:
TRSI - Large Hedron Collider
but in the end i found it's too much popping up / changing the style.
also i think it's funny how that farbrausch-kb-version is on top of the month list while just showing one of these fantastic fractals (in an amount of fourthousandandninetysix bytes, which is a lot if you wanna show these things ;) )
Hello again after quite a long time ;)
After playing with iq's shadertoy and tons of knowledge in those two topics here, I decided to take it to another level and try actually writing some shader stuff in c++. However my GLSL knowledge is limited and I've failed at transferring even the simplest raymarching shader to iq's 4kb intro framework (one with basic raymarching example).
If anyone can provide me with c++ source code for such a simple raymarcher, or any clue how to get this stuff working together, it would be great.
Thanks a lot in advance guys :)
eh, error, it should say:
... (one with basic RAYTRACING example).
sorry for that
iq's g4k_Software example does what you ask for...
The new version of Fragmentarium is awesome! It's sort of a raymarching development environment. It's designed for fractals, but it works fine with anything. Comes with a ton of raymarching code and is great for experimenting.
http://syntopia.github.com/Fragmentarium/
Some stuff I did with it.
http://www.flickr.com/photos/72494557@N03/
I love the tile rendering feature, it removes a lot of the artifacts/jaggies. The movement is a bit awkward and buggy though.
Hi, i used some of my previous techniques from these examples
http://www.pouet.net/topic.php?which=7920&page=15
In this GLSL Sandbox stuff.
Cage
Fence
Torus
TorusJourney
Playing with symmetries in raymarching is really cool! :)
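(In case anyone wonders what that means in practice: the usual trick is to fold the domain before evaluating the distance function, roughly like this. Just a sketch, the sphere position is arbitrary:)
Code:
float map(vec3 p) {
    p.xz = abs(p.xz);                             // mirror across the x=0 and z=0 planes
    return length(p - vec3(1.0, 0.5, 1.0)) - 0.4; // one sphere becomes four symmetric copies
}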
hardy: sorry, just wanted to make a quick 4k for Sunrise, didn't anticipate people would like it :/
(perhaps I involuntarily put in some illegal telepathic messaging about my main inspiration for the soundtrack, who knows ... :)
Hi,
I am an extreme newbie to raytracing but have been playing around with frag shaders.
How would i go about adding raytracing to a simple example like the one below?
<code>#ifdef GL_ES
precision highp float;
#endif

uniform float time;
uniform vec2 resolution;
uniform sampler2D tex;

void main(void) {
    // pixel position remapped to [-1, 1], centered on the screen
    vec2 cPos = -1.0 + 2.0 * gl_FragCoord.xy / resolution.xy;
    float cLength = length(cPos);
    // radial ripple: offset the texture lookup along the direction from the center
    vec2 uv = gl_FragCoord.xy / resolution.xy + (cPos / cLength) * cos(cLength * 12.0 - time * 4.0) * 0.03;
    vec3 col = texture2D(tex, uv).xyz;
    gl_FragColor = vec4(col, 1.0);
}</code>
hi. start at page 1 ;)
Ha, spank you
Hey Matteblack, what are you using for raytracing? Are you coding your own engine, are you using Fragmentarium or WebGL Sandbox? Because what you send to the shaders is very important in setting up the camera.
Ideally you should do the minimum possible in the fragment shader, and do as much camera stuff as you can in the vertex shader or on the CPU.
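A minimal sketch of that split (uniform names here are just placeholders, whatever your framework sends): compute the camera basis once per frame on the CPU, and only assemble the per-pixel ray in the fragment shader.
Code:
uniform vec3 camPos, camRight, camUp, camForward; // camera basis, computed per frame on the CPU
uniform vec2 resolution;

void main(void) {
    vec2 uv = (2.0 * gl_FragCoord.xy - resolution) / resolution.y;          // aspect-corrected coords
    vec3 rd = normalize(uv.x * camRight + uv.y * camUp + 1.5 * camForward); // 1.5 acts as focal length
    vec3 ro = camPos;
    // ...raymarch from ro along rd here...
    gl_FragColor = vec4(0.5 + 0.5 * rd, 1.0); // placeholder output: visualize the ray direction
}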
Matteblack: Also read like 5 posts above and here
Plus this is rayMARCHING not rayTRACING, which is a bit of a different thing...
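And for completeness, the marching part itself is basically just this loop (a sketch; map() stands for whatever distance function describes your scene):
Code:
float map(vec3 p) {
    return length(p) - 1.0;         // scene = a unit sphere at the origin
}

float march(vec3 ro, vec3 rd) {
    float t = 0.0;
    for (int i = 0; i < 64; i++) {
        float d = map(ro + t * rd); // distance to the nearest surface from the current point
        if (d < 0.001) return t;    // close enough: hit
        t += d;                     // it is safe to step exactly this far
        if (t > 100.0) break;       // marched past everything
    }
    return -1.0;                    // miss
}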
I recently started to play around with distance fields and mandel box fractals (I know that this is quite late but as a designer, I'm slow with picking up coding stuff).
Ignoring the awesome stuff already posted and solely focusing on the "Beginner" in the thread's title, I dare to ask some questions...
When combining IQ's formula for rounded boxes with domain scaling, my normals get fucked up and thus the shading for the scaled object no longer matches the rest of the scene. Any ideas how to solve that?
Code:
float BOX_RADIUS = 0.03;

float dBox(float3 p, float3 b) {
    return length(max(abs(p) - b + float3(BOX_RADIUS, BOX_RADIUS, BOX_RADIUS), 0.0)) - BOX_RADIUS;
}

float dScaledBox(float3 p) {
    p *= float3(1, 1, 0.1); // this distorts the normal
    float d = dBox(p - float3(-2, 0, 0), float3(1, 0.2, 0.2));
    d = min(d, dBox(p - float3(2, 0, 0), float3(1, 0.2, 0.2)));
    return d;
}
And has anybody using HLSL found a proper way to do domain repetition?
Quote:
When combining IQ's formula for rounded boxes with domain scaling, my normals get fucked up
Your problem is described here.
Quote:
The server at www.aigfx.com is taking too long to respond.
...
Quote:
...my normals get fucked up
--verbose please - screenshot?
Pixtur, it's usually a bad idea to scale the position you're marching with. It messes up normals, iteration glow, hit detection etc. Instead, scale your objects by the inverse.
Here is what your code gives me (ignore the coloring):
Notice that the rounding on the boxes is also distorted on one axis, and the glow effect is also stronger in the direction you scaled. When I change the (GLSL-adapted) code to this:
Code:
float dScaledBox(vec3 p) {
    vec3 Scale = vec3(1, 1, 0.1);
    float d = dBox(p - vec3(-2, 0, 0) / Scale, vec3(1, 0.2, 0.2) / Scale);
    d = min(d, dBox(p - vec3(2, 0, 0) / Scale, vec3(1, 0.2, 0.2) / Scale));
    return d;
}
I get this result, which is probably closer to what you want:
BTW, check your mail sometimes :)
Scaling space in one direction doesn't really make marching any better ;)
What kind of domain repetition do you want to achieve?
I tried the inverse scaling approach, but it made no difference. I also don't really understand why the scaling affects the distance/normal calculation.
Here is a screenshot. The depth of the cube in the lower center is scaled up by 10.
Code:
float dBoxes(float3 p) {
    float d;
    d = dBox(p - float3(-1, 0, 0), float3(0.5, 0.5, 0.5));
    // the scaled cube: p is squashed in z, which is what breaks the distances/normals
    d = min(d, dBox(p * float3(1, 1, 0.1) - float3(0, 0, 0), float3(0.5, 0.5, 0.05)));
    d = min(d, dBox(p - float3(1, 0, 0), float3(0.5, 0.5, 0.5)));
    d = min(d, dBox(p - float3(-1, 1, 0), float3(0.5, 0.5, 0.5)));
    d = min(d, dBox(p - float3(0, 1, 0), float3(0.5, 0.5, 0.5)));
    d = min(d, dBox(p - float3(1, 1, 0), float3(0.5, 0.5, 0.5)));
    return d;
}

float3 getNormal(float3 p, float offset)
{
    float dt = .0001;
    // one-sided (forward) differences against getDistance(), the scene's distance function
    float3 n = float3(getDistance(p + float3(dt, 0, 0)),
                      getDistance(p + float3(0, dt, 0)),
                      getDistance(p + float3(0, 0, dt))) - getDistance(p);
    return normalize(n);
}
Seven: Indeed it looks better, but with that I would not be able to scale complex compound objects, or do I miss something?
What about central differences? Just use tested code from the toolbox thread.
The AO looks completely screwed up - too big a step width + wrong direction (might be a flipped normal), I guess.
Maybe some more complete sample code so we can reproduce the behavior here would be cool.
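For reference, the usual central-differences version looks something like this (GLSL sketch; getDistance() is the same scene function as in the code above):
Code:
vec3 getNormal(vec3 p) {
    vec2 e = vec2(0.0001, 0.0);
    return normalize(vec3(
        getDistance(p + e.xyy) - getDistance(p - e.xyy),   // central difference in x
        getDistance(p + e.yxy) - getDistance(p - e.yxy),   // in y
        getDistance(p + e.yyx) - getDistance(p - e.yyx))); // in z
}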
Pixtur: Doesn't HLSL have a modulo equivalent? In GLSL, simply do a mod() operation on your marching point in one or more dimensions, and your object gets repeated in that dimension. To ensure that your object stays in the middle of the "repetition box", you have to subtract half the size of the box:
Here's what this does to your code:
Code:
float dScaledBox(vec3 p) {
    vec3 Scale = vec3(1, 1, 0.1);
    p.yz = mod(p.yz, 4.0) - vec2(2.0); // instance on the yz-plane every 4 units
    float d = dBox(p - vec3(-2, 0, 0) / Scale, vec3(1, 0.2, 0.2) / Scale);
    d = min(d, dBox(p - vec3(2, 0, 0) / Scale, vec3(1, 0.2, 0.2) / Scale));
    return d;
}
fmod in hlsl iirc.
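It does, but careful: fmod() truncates instead of flooring, so it only matches GLSL's mod() for positive coordinates; p - c*floor(p/c) is the safe equivalent for negative ones. For reference, the generic repetition operator in GLSL (same idea as the mod() line above, just a sketch):
Code:
// c is the size of one repetition cell; the object should be centered in it
vec3 opRep(vec3 p, vec3 c) {
    return mod(p, c) - 0.5 * c; // same as p - c*floor(p/c) - 0.5*c
}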
Pixtur: The reason that scaling affects distance is simple: it's an operation that does not preserve length. You're trying to find the distance from your marching point to the intersection point (if there is one). This is a vector with a certain length. Moving this vector around doesn't change its length, nor does rotating it. So you can translate or rotate objects without having to adapt their distance functions.
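(A quick sketch of what that means in code, with primitive() standing in for any distance function:)
Code:
float opTranslate(vec3 p, vec3 t) {
    return primitive(p - t);         // moving the domain doesn't change any lengths
}

float opRotateY(vec3 p, float a) {
    float c = cos(a), s = sin(a);
    p.xz = mat2(c, -s, s, c) * p.xz; // rotation preserves lengths too, so no correction needed
    return primitive(p);
}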
But scaling does change the vector length. If you're doing uniform scaling in 3 dimensions, you can correct this by multiplying the found distance with the inverse of your scaling factor. That's the "*s" in Iq's scaling function, which corrects the "p/s":
Code:
float opScale( vec3 p, float s )
{
return primitive(p/s)*s;
}
But if the scaling factors are not all the same, there is no generic way to correct this. For simple primitives you could try to take into account which direction you're marching in, but as soon as the primitive's distance function contains a rotation, twist, reflection or such, that won't work anymore. So yes, if you want to scale complicated compound objects, you'll have to pass the scaling factors along to each distance function and scale the object itself.
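(One conservative fallback, if an exact distance isn't required: divide p by the scale but multiply the result by the smallest scale factor. The value is then only a lower bound on the true distance, so marching takes more steps, but it never overshoots. A sketch:)
Code:
float opScaleNonUniform(vec3 p, vec3 s) {
    return primitive(p / s) * min(s.x, min(s.y, s.z)); // under-estimates the distance
}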
The reason the normals are wrong is that you're no longer taking the same epsilon (dt in your code) in each dimension. If you have scaled p in the Z direction, you'll have to correct the dt in the Z position of your normal function too, and that will only work if you have scaled your entire scene the same way, or if you use a different normal function for each object, which gets too complicated quickly.
So your normals are wrong, in the case of the cube they probably point very close towards the other cubes, so the ambient occlusion goes bonkers. Again, you can solve this by using the inverse of the scaling factor in the distance function, instead of scaling p. Replace the lower middle cube by
Code:
d = min(d, dBox(p - float3(0, 0, 0) / float3(1, 1, 0.1), float3(0.5, 0.5, 0.05) / float3(1, 1, 0.1)));
and everything looks correct. (BTW, in the previous screenshot I used your code without the scaling factor. With scaling factor, rectangles connect and form beams. Hope this didn't confuse you.)