Random adding motion blur to demo captures thread
category: offtopic [glöplog]
Some stills from my motion blur experiment on demo captures
First method result:
Crap.
New method results:
Debris clips:
Full frame-to-frame motion blur - standard simple stuff, "incorrect" to some people. mp4, 24 fps
Shutter motion blur - motion blur at 48 fps, then drop every other frame. Supposed to be like movie-camera output or some such. mp4, 24 fps
My method: capture demos at 120 fps, then use a motion estimator library to take that up to 960 fps [or more..], then simply merge all the frames down to whatever the target fps is. No blur filter required or used. But this does use a hell of a lot of CPU time. The results are pretty, though, and rather accurate. I wanted to release more motion-blurred clips, but never got around to it. That, and I think I lost my code in a file system crash.
Why? Pure curiosity. I'm not about to start releasing my demo caps with motion blur; I'll leave that up to Trixter. :)
[/thread]
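[A minimal sketch of the pipeline described above, in Python with OpenCV. This is not micksam7's lost code: the motion estimator library isn't named, so cv2.calcOpticalFlowFarneback stands in for it, the input file name is hypothetical, and the warp is a crude approximation (a real interpolator also handles occlusions). Only the 120 -> 960 -> 24 fps figures and the plain frame averaging follow the post.]

import cv2
import numpy as np

SRC_FPS, HI_FPS, OUT_FPS = 120, 960, 24
SUBSTEPS = HI_FPS // SRC_FPS  # 8 synthesized steps per source frame pair
GROUP = HI_FPS // OUT_FPS     # 40 hi-rate frames merged per output frame

cap = cv2.VideoCapture("capture_120fps.avi")  # hypothetical capture file
ok, prev = cap.read()
acc = np.zeros_like(prev, np.float32)
count, blurred = 0, []
while ok:
    ok, nxt = cap.read()
    if not ok:
        break
    g0 = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(nxt, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ys, xs = np.mgrid[0:g0.shape[0], 0:g0.shape[1]].astype(np.float32)
    for step in range(SUBSTEPS):
        t = step / SUBSTEPS
        # Backward-warp approximation of the frame at fraction t.
        frame = cv2.remap(prev, xs - flow[..., 0] * t,
                          ys - flow[..., 1] * t, cv2.INTER_LINEAR)
        acc += frame
        count += 1
        if count == GROUP:  # plain box-filter merge: no blur kernel at all
            blurred.append((acc / GROUP).astype(np.uint8))
            acc.fill(0)
            count = 0
    prev = nxt
# For the "shutter" variant: merge with GROUP = HI_FPS // 48 instead,
# then drop every other merged frame.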
Looks nice. Would be great to see some full captures of demos with this :)
Why haven't I heard that version of the "Debris"-soundtrack before?
micksam7: Awesome. I guess using the motion vectors directly to blur should be a lot faster, and it should yield (roughly) the same results. But this is definitely cool. One of the bigger benefits is that you can lower the frame rate and still get acceptable results, compared to non-motion-blurred output (as you already did by going down to 24 fps, which usually looks crap for demos). Perhaps it also allows compressing a bit harder, due to fewer high frequencies?
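[A rough sketch of what kusma suggests here, under the same assumptions as the sketch above: OpenCV's Farnebäck flow stands in for a motion estimator, and the tap count and "shutter" fraction are made up for illustration. One flow field and a handful of warps per frame replaces synthesizing 40 intermediate frames.]

import cv2
import numpy as np

TAPS = 8  # samples taken along each pixel's motion vector (illustrative)

def vector_blur(prev, nxt, shutter=0.5):
    # Estimate per-pixel motion from prev to nxt...
    g0 = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(nxt, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ys, xs = np.mgrid[0:g0.shape[0], 0:g0.shape[1]].astype(np.float32)
    acc = np.zeros_like(prev, np.float32)
    # ...then average samples taken along each vector (a line integral).
    for i in range(TAPS):
        t = shutter * i / (TAPS - 1)
        acc += cv2.remap(prev, xs + flow[..., 0] * t,
                         ys + flow[..., 1] * t, cv2.INTER_LINEAR)
    return (acc / TAPS).astype(np.uint8)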
It looks very cinematic... I like it
awesome idea.
kusma: Using motion vectors from video files for any kind of effect or analysis is usually a bad idea. The reason is that in many cases, they do *not* specify the actual direction in which things move ("true motion"). They only point to some arbitrary position in the image that happens to look like the (macro)block in question. In ~90% of all cases, they will be close to true motion, but the remaining 10% of the vectors won't play nice and lead to really annoying artifacts. So nothing saves you from a full-scale motion analysis, and even then you can't generate 100% correct motion vector fields.
Aside from that, using motion estimation to generate nice motion blur looks tempting. However, I still doubt that it would really work all that well for artificial image content (read: demos). In "real", natural images, there's always some kind of texture on objects the motion estimator can lock to. A lot of demos and games, however, have large areas of (nearly) uniform color where proper tracking of motion, especially at object edges, is next to impossible.
On the other hand, I have seen demos on commercial "100 Hz" TV sets (which also use motion estimation to interpolate between images, which is not all that different from adding artificial motion blur) with surprisingly good quality (considering how badly they fail on some natural image content), so it might be worth a try.
Looks great on debris. I'd much prefer to watch the motion blurred version than the normal one, even at 24fps instead of 30 (30 would still be preferred to 24 of course :))
On the other hand, I can't see it working so well for those demos with 1-frame screen flashes and the like. For some effects, you'd effectively lose detail rather than gaining motion blur, too. So overall, maybe it's a bad idea to apply it to every demo.
Also, some effects are frame-rate dependent (yeah, maybe they shouldn't be, but it happens) so rendering at 120fps and reducing to 24 is going to really mess up the effect speed.
Considering how much cooler it looks, I'd be very happy to see this applied to the more 'cinematic' demos.
KeyJ: I wasn't talking about using MPEG motion vectors. AFAICT, he hasn't compressed the video yet, so most likely there are none. The motion estimation library is probably using different algorithms than MPEG compressors.
I didn't expect this thread to pick up much :(
gloom - That's the soundtrack that I've always heard, unless I accidentally added the wrong audio file [wouldn't be the first time]
KeyJ - So far, using motion estimation seems to be working out very well. Note I am using a high rate source in the first place.
kusma - Yes, it does allow me to lower the bitrate drastically
All - I recovered my code. I'll get a full sample or two done today I suppose..
kusma: OK, I see.
micksam7: Yes, the higher the input frame rate, the shorter the motion vectors, and the smaller the possible motion vector errors. So I'd be surprised to see any noticeable artifacts when going from 120 to 960 anyway :) I was rather talking about using more modest frame rates in the 48-60 Hz (or even 24-30 Hz) range to limit the computational complexity.
KeyJ, doesn't that entirely depend on which motion estimation technique is used? granted, it's never 100% perfect, but it is possible to use motion estimation algorithms that were actually intended for predicting true motion, instead of those used in compression algorithms.
i've no clue exactly what is out there, but i did churn out an implementation of the "3D recursive search" algorithm (G. de Haan) at some point, which is designed precisely to (efficiently) produce true motion vectors (it's used in TV upconverters). it worked very well for me (and *significantly* better than some open-source motion estimators from compression packages)
more on-topic, imho the "true motion" upconverter from MVTools (for AviSynth) produces rather decent results. It's open source and rather well structured, so getting just the motion vectors out shouldn't be too complicated.
micksam7: Perhaps you should consider trying some Gaussian weighting of the frames instead of a simple average (unless you're already doing that, of course)?
Captures coming after a few hours of cpu time..
kusma: nah, real camera motion blur actually is very close to a box filter (the incoming photons don't have any more influence in the middle of the exposure than they have at the beginning or end), and using different filters gives results that just look weird.
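[For concreteness, the two temporal weightings under discussion; N = 40 matches the 960-to-24 merge above, and `frames` is a hypothetical list of float frame arrays.]

import numpy as np

N = 40                         # hi-rate frames merged per output frame
box = np.full(N, 1.0 / N)      # equal weights: what a physical shutter does
t = np.linspace(-2.0, 2.0, N)
gauss = np.exp(-0.5 * t ** 2)
gauss /= gauss.sum()           # center-weighted: kusma's suggestion
# merged = sum(w * f for w, f in zip(box, frames))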
the result is quite hard to notice. if nobody had told me about it i would never have seen a difference just by watching the video. however, it's a nice effect; with this, these demos look more like cg.
i'm wondering: how did raytracers do it for 3d movies (e.g. in "toy story", while some balls are bouncing, they are blurred)?
did the raytracers use object velocity vectors (they have the full 3d scene and its properties) and then blur objects locally?
or did they use techniques similar to the one here (i.e. motion analysis by checking what happens between two frames), i.e. after rendering?
also: i'd be curious to know how these fake motion blurs would look on old msdos demos/intros from the 90's. maybe their resolution/color space just isn't big enough to produce something good.
I dunno, I just watched this, but I think it looks fantastic. :)
ryg: Aha, OK.
Tigrou: Good motion blur is hard to see. I think the main benefit is that the frame-rate can be lowered without crippling the motion-"feel", thus reducing the file-size drastically.
tigrou: afaik motion blur in Pixar's RenderMan is done by REYES - that is, the micropolygons thing - and the effect is probably based on the micropolygons, so it works like a rasterization effect; so it's probably not raytracing
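[For what it's worth, the general principle behind renderer motion blur, in distributed ray tracing and REYES stochastic sampling alike, is to pose the scene at a jittered time per sample and average. A toy, self-contained 1D illustration of that idea, not anything resembling Pixar's actual code: a hypothetical 1-pixel-wide object sweeps across a row during the shutter interval.]

import random

WIDTH, SAMPLES = 8, 256

def object_pos(t):              # hypothetical animation: the object
    return t * (WIDTH - 1)      # sweeps the row during the shutter

row = []
for x in range(WIDTH):
    # Each sample evaluates the scene at its own random shutter time.
    hits = sum(abs(object_pos(random.random()) - x) < 0.5
               for _ in range(SAMPLES))
    row.append(hits / SAMPLES)  # coverage ~= motion-blurred intensity
print(row)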
Post processing motion blur has exactly the same problems has DOF... that's my retarded comment for the day.
*has,
that was my standard grammar error for the day.
*as
this was the correction.
You'll probably never get good results !! .. Motion blur is a tricky thing ..
I've thought about a deferred-shading-like technique .. encoding normal and motion vector fields delivered from the rendering engine on an IPP-shader basis .. with x264 GMC motion frames for semi camera-wise vector fields .. and a lil bonus from local-motion objects in a 'disturbance' function ..
It could also do Post-Displacement-Mapping .. on the other hand ..
It's just an idea for IPP .. working with the capturix .. throwing my thoughts out there ..
The debris-motionblur-2424 already looks really great! It's like Debris by DreamWorks or something.
The only strange thing is: in debris-motionblur-2448, the shaking of the camera that is supposed to happen in the first shot is a lot more noticeable than in 2424.