Memorable radial blur you've seen (not just demos)
category: general [glöplog]
Frigo's radial blur(s) in [url=http://www.pouet.net/prod.php?which=19]Plume[/url] was pretty neat, and rare, at the time.
ok, let's try this ... once more, with feelings.
I didn't see any radial blur in Plume, but something I'd call circular blur, or I don't know... radial is the one that also emits rays in perspective.
Live evil by mandula.
Doom: the sw-renderer way doesn't scale the image more than once; it uses bilinear filtering, feedback and carefully placed sampling points to give the illusion of many layers.
http://weyland.monovoid.dk:8080/~lg/radialblurb.gba
(needs to be redone with a non-shitty ASM implementation, non-shitty datastuff and properly tuned variables) - it's realtime though! :P
Kusma, could you throw me the keys for the pimpmobile? I would like to cruise town ;P
Here's an idea for Pouet2.0: Add support for some math representation language!
;)
doom: I think transforming to polar and back again is way more expensive than multiple resizes...
Let I(z) be the image scaled by z, and n the number of passes...
That's what ryg said about hardware radial blur... I think.
That z^0 would look better if it were a 1.
And the result using n=4 (or 5, I don't remember) and z=1.04 was:
Quote:
Add support for some math representation language
It's called MathML ;) and there could probably be a BBcode to load an external MathML resource. At worst there are some MathML-to-whatever converters.
xernobyl, did you just seriously antialias a latex formula to the pouet background colours?
not meant badly or anything, but do you ever come outside?
doom: polar has very uneven sampling resolution. this probably won't show with large zoomfactors, but it will if they aren't that huge, and it looks bad. besides, in sw you can do the iir/feedback method, and in hw it's not really a win: you can either do the polar->cartesian map yourself per pixel (not really cheap) or put it into a texture, but then you need an extra (dependent) texture fetch to get the actual color. you can do this in an extra pass (easier) or fold it into the first pass of your blur (*might* be faster). then for a decent blur you typically need 1 or 2 extra passes, and convert it back (again, either fold it into the last blur pass, making it more expensive, or do it in an extra pass).

that doesn't really save on # of passes, it doesn't make the passes any faster (you only do a directional blur, but with bilinear filtering hw that doesn't really gain you performance), and it has worse precision, so why bother?
xernobyl: your result image looks about right (except n could be higher; the n=4 or 5 i suggested is good when you combine 4 zoomed copies per pass, if you only do 2 you need twice as much passes), but the formula's waaay off :)
basically, you have two parameters to tweak: c (should be <=1) and z (should be >=1). say I is your original image, and Itemp is another temp image of the same size. what you do for the HW variant when using 2 copies per pass is
Code:
for(i=0;i<nPasses;i++)
{
    float alpha = c / (1.0f + c); // blend factor for this pass
    Itemp = lerp(alpha, zoom(I, z), I); // = alpha*zoom(I,z) + (1-alpha)*I
    c *= c; // squaring c and z doubles the number of implicit copies
    z *= z;
    swap(I, Itemp);
}
and the version I recommend with 4 copies per pass looks like
Code:
for(i=0;i<nPasses;i++)
{
    float cc = c*c;
    float zz = z*z;
    float norm = (1 - c) / (1 - cc*cc); // normalization factor
    Itemp = (1.0f * norm) * I
          + (   c * norm) * zoom(I, z)
          + (  cc * norm) * zoom(I, zz)
          + (c*cc * norm) * zoom(I, zz*z);
    c = cc*cc;
    z = zz*zz;
    swap(I, Itemp);
}
that's probably more readable than writing it out in formulas; the main goal in my original post was really showing how the two methods are related, and that's easier when writing it out in formulas.
oh, and it doesn't have to be all zooms either. you can turn it into a "spiradial blur" by throwing in a rotation component (like in discotheque by dxm). in fact, any affine transformation of your UV coordinates will work - instead of squaring (or taking the 4th power or whatever you end up using) the zoom factor, just square the whole matrix.
ryg: I know sampling will be uneven, but scaling isn't perfect either. You inevitably get unintended blurring along the x and y axes, not just along r, and that error scales with the strength of the blur AND the number of iterations you want to do. So it won't go away if you throw more processing time at it.
Blurring in polar coordinates at least gives you a predictable and constant error. If you do the cartesian->polar and polar->cartesian mappings with bilinear interpolation and have a decent resolution for the intermediate picture, I don't see how the end result wouldn't look pretty good (especially near the center of the screen where it matters).
And as a bonus, once you're in polar mode you can do all kinds of cool stuff that's really hard otherwise.
But I guess it does depend on how it relates to HW acceleration in practice, about which I haven't a clue.
doom, you *don't* get any unintended blurring. you only get *intended* blurring due to the zooming. but if your zoom factor is 1 (i.e. no zoom) and you didn't screw up elsewhere, after 4 iterations you'll still have the original image. if you increase the zoom factor from there, you'll get slightly blurred *zoomed* copies (which is inevitable for a zoom in), but since this is a blur effect that's not exactly a problem. especially since you'll still have full resolution near the center of the blur, where the effect of the zoom doesn't show much.
second, even in polar coordinates, you don't want to mix pixels (theta,r), (theta,r+1) and (theta,r+2), you want to mix (theta,r), (theta,r*z) and (theta,r*z^2), so you need a pretty weird blur kernel; a normal box/triangle/gaussian or similar will look like crap:
for reference, the same with a proper radial blur:
(both quickly handbuilt in wz3, so no, there's no particularly accurate match between the parameters, but i didn't really care).
the "decent resolution" for the intermediate picture is another problem. it typically needs to be bigger than your input image (because the "extent" along the r axis depends on the diagonal of your image size, not width/height directly), so you're actually wasting extra fillrate blurring an image bigger than your original one. with a weird kernel that may or may not have an efficient factorization (it probably won't). and the quality will probably *still* suck around the edges of the screen, which *does* matter with radial blur, because the whole point of the exercise is getting those streaky rays that extend to the edges of the screen.
thx zest for this precious contribution
why not just sample/blur along the vector between your current texel and the center of the screen, just as we do in the good ol' AS of D...
ryg, yes, you *do* get unintended blurring ;). You could always say that since it's a blur routine anyway, who'll notice. And there IS intended blurring, corresponding to the increase in width of the radial lines, but it's just that it's not perfect and you lose a little detail with every iterative step, causing unintended blur. But I imagine it's completely irrelevant if the scaling isn't too destructive (although in SW rendering, that's a tradeoff thing).
I hadn't thought about the blur kernel, actually. :) I just figured gaussian or 3-4-pass box blur would look good, but yeah, you don't JUST want blur along the radius, you want the perspective effect. I guess one way to accomplish it is to do exactly what you'd otherwise do in 2D, i.e. the iterative scaling thing, only in 1D.
Whether or not the mapping back and forth will look bad depends completely on what resolutions you can realistically work with, and what sort of interpolation is practically possible. I don't know about that. One thought though, is that there's no real reason why the theta resolution has to be constant.
But, again, I have no idea which works out to be faster in real life.
navis, that is the exact same thing as mixing several zoomed copies.
hm yes, but if you just additive blend the several zoomed copies then you get a clamp (0..1) for every pair of images you add. If you do the sum in a pixel shader then you can get over 1 which will then be properly normalized at the end !
(all these zoomed copies anyway should be pre-'blurred', doing a radial blur/hypnoglow on the high res image will look cheap and slow)
Quote:
(all these zoomed copies anyway should be pre-'blurred', doing a radial blur/hypnoglow on the high res image will look cheap and slow)
I disagree. Lowres/preblur looks cheap.
battle: that looks like polar->cartesian mapping problems to me.