Realtime true (real) Motion blur


Daver    122
I was wondering: what is the difference between the fake blend-over-the-last-frame approach and true, real motion blur? Thank you.

Tessellator    1394
I guess a "true" motion blur would be done with an accumulation buffer, rendering many discrete versions of the scene between the previous and current frames in order to simulate the nature of a camera shutter. Blending in the previous frame doesn't really work, since that isn't the data you're interested in; you need all camera-relative movement from the previous frame to the current one.

However, for a real-time app, anything which approximates this behaviour could be considered true motion blur. Some next-gen games are starting to feature motion blur as a post-process, where a fullscreen buffer filled with velocity data is rendered out each frame and then used to blur the final scene. Have a look at www.projectoffset.com for a good example of realtime motion blur.
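
To give a rough idea of the post-process pass I mean, here's a sketch of the blur shader as GLSL source in a C string (all names are mine, not from any particular engine): the scene is rendered to sceneTex, the per-pixel screen-space velocity to velocityTex, and the final pass smears the scene along the velocity.

static const char *motionBlurFS =
    "uniform sampler2D sceneTex;     // the rendered frame             \n"
    "uniform sampler2D velocityTex;  // screen-space velocity per pixel\n"
    "const int NUM_TAPS = 8;                                           \n"
    "void main()                                                       \n"
    "{                                                                 \n"
    "    vec2 uv  = gl_TexCoord[0].xy;                                 \n"
    "    vec2 vel = texture2D(velocityTex, uv).xy;                     \n"
    "    vec4 sum = vec4(0.0);                                         \n"
    "    for (int i = 0; i < NUM_TAPS; ++i)                            \n"
    "    {                                                             \n"
    "        // step backwards along the velocity vector               \n"
    "        float t = float(i) / float(NUM_TAPS - 1);                 \n"
    "        sum += texture2D(sceneTex, uv - vel * t);                 \n"
    "    }                                                             \n"
    "    gl_FragColor = sum / float(NUM_TAPS);                         \n"
    "}                                                                 \n";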

T

Brian Lawson    147
Quote:
Original post by Tessellator
Have a look at www.projectoffset.com for a good example of realtime motion blur.

T


And for details on how to do that, or something pretty similar to that...see your latest installment of the DirectX SDK. :)

Tessellator    1394
One thing to watch out for with the DX sample (which caught me out) is that they do their divide by w in the vertex shader (when converting to screen space). This works OK-ish most of the time, but when the camera gets close to a surface you'll see lots of distortion, because the values can't be linearly interpolated (once you divide by w, you're no longer in a linear space). It is better to do the divide by w in the pixel shader (at least in my experience).
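
For what it's worth, here's roughly what I mean, sketched as GLSL in C strings (my own naming, not the SDK's): pass the clip-space positions through undivided and only do the divide by w per pixel.

static const char *velocityVS =
    "uniform mat4 prevModelViewProj;   // last frame's transform        \n"
    "varying vec4 clipPosCurr;                                          \n"
    "varying vec4 clipPosPrev;                                          \n"
    "void main()                                                        \n"
    "{                                                                  \n"
    "    // pass the raw clip-space positions down; don't divide here   \n"
    "    clipPosCurr = gl_ModelViewProjectionMatrix * gl_Vertex;        \n"
    "    clipPosPrev = prevModelViewProj * gl_Vertex;                   \n"
    "    gl_Position = clipPosCurr;                                     \n"
    "}                                                                  \n";

static const char *velocityFS =
    "varying vec4 clipPosCurr;                                          \n"
    "varying vec4 clipPosPrev;                                          \n"
    "void main()                                                        \n"
    "{                                                                  \n"
    "    // divide by w *after* interpolation, per pixel                \n"
    "    vec2 curr = clipPosCurr.xy / clipPosCurr.w;                    \n"
    "    vec2 prev = clipPosPrev.xy / clipPosPrev.w;                    \n"
    "    gl_FragColor = vec4(curr - prev, 0.0, 1.0);                    \n"
    "}                                                                  \n";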

T

Dirge    300
There is a detailed explanation of how Valve implemented "real" but not real-time blur for Day of Defeat here.

True blur is basically an optical illusion, a defect of vision and of the way cameras capture frames. It's crucial for movies, however (due to its "temporal anti-aliasing"), and one of the primary reasons why they look great in a dark theater at 24 fps.

Brian Lawson    147
Quote:
Original post by Tessellator
It is better to do the divide by w in the pixel shader (at least in my experience).

T


I too find this to be the case a LOT more often than not for values that need to be divided by w (depending on the application and what it is you're trying to do).

It sucks, because when you're forced to do it in the pixel shader, you're chewing up x instructions out of your 64. :| (That is, of course, if you're using 2.0 hardware.)

[Edited by - Brian Lawson on March 17, 2006 10:02:45 AM]

N64Marin    100
Theoretically correct motion blur can be achieved using the accumulation buffer: render many frames into the accumulation buffer, then bring the result onto the screen. Without an accumulation buffer, you can also render to a texture many times and then draw a quad onto the screen with that texture. This approach may suffer from the limited precision of the texture if you don't use a floating-point texture, or from a loss of quality due to texture filtering.
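
Without the accumulation buffer it could look roughly like this (just a sketch; subFrameFBO/subFrameTexture are a texture-backed FBO you created earlier, and RenderSceneAtTime, DrawFullScreenQuad, frameTime and exposure stand in for your own code):

// render N sub-frames to a texture via an FBO, then composite each one
// over the back buffer with weight 1/N
const int N = 8;
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);                  // additive blend
for (int i = 0; i < N; ++i)
{
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, subFrameFBO);
    RenderSceneAtTime(frameTime + i * exposure / N);    // placeholder
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

    glBindTexture(GL_TEXTURE_2D, subFrameTexture);
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f / N);          // 1/N of each sub-frame
    DrawFullScreenQuad();                            // placeholder
}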

Daver    122
Hmm... I found that the accumulation buffer would be far too slow for use in games (or am I wrong, and today's hardware can handle it?).
Maybe I could just add some blur to the "frame feedback motion blur" texture and get results pretty similar to real motion blur?

Thanks.

oggialli    217
If you can afford to require floating-point render targets for your motion blur effect, go with the "real" one: do the temporal vertex morphing in the vertex shader and render the temporal anti-aliasing samples to high-precision floating-point textures with GL_EXT_framebuffer_object and GL_ATI_texture_float (for ATI cards) or GL_NV_float_buffer (for NVIDIA cards). It should work at a reasonable speed, unlike the accumulation buffer method.
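
For reference, the render-target setup I mean looks roughly like this (a sketch of the ATI/FBO path; variable names are mine, and width/height are your viewport size):

GLuint fbo, colorTex;

// 16-bit floating-point colour texture (GL_ATI_texture_float)
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT16_ATI, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// attach it to a framebuffer object (GL_EXT_framebuffer_object)
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, colorTex, 0);

// render each temporal sample into this target (doing the vertex morphing
// in the vertex shader), accumulate with additive blending at 1/numSamples,
// then draw the result to the back buffer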

I'd strongly advise against going with frame feedback. I have tried it for a couple of my demos and, well, just like everybody else says, it looks horrible. So don't; opt for the real essence of motion blur instead.

acid2    451
A better (but much more expensive) way to use the accumulation buffer to produce motion blur is to accumulate about 5 renders before even presenting to the screen. What you do is, for each accumulation, interpolate the position/rotation etc. from its current settings to the desired settings for when it's presented to the screen.

Then blend these 5 accumulations together and you should have something more realistic. I haven't actually implemented this myself; I've only tried framebuffer feedback, and that looks crap.
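
Something along these lines (untested sketch; RenderSceneAt and the prev/curr state variables are placeholders):

// interpolate the camera/object state across the frame and accumulate
// each sub-step with equal weight
const int kSubSteps = 5;
glClear(GL_ACCUM_BUFFER_BIT);
for (int i = 0; i < kSubSteps; ++i)
{
    float t = (float)(i + 1) / (float)kSubSteps;      // 0..1 across the frame
    float x   = prevX   + (currX   - prevX)   * t;    // lerp position
    float yaw = prevYaw + (currYaw - prevYaw) * t;    // lerp rotation
    RenderSceneAt(x, yaw);                            // placeholder
    glAccum(GL_ACCUM, 1.0f / (float)kSubSteps);
}
glAccum(GL_RETURN, 1.0f);
// then present / swap buffers as usual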

You'd be best off just doing velocity-based motion blur using a pixel shader. As someone else pointed out, refer to the friendly DXSDK for information on implementing this :)

davepermen    1047
The accumulation buffer is in hardware on DX9-level hardware; it's implemented behind the scenes with the help of an int16 (64 bits per pixel) render target, AFAIK.

Guest Anonymous Poster
Framebuffer feedback sucks; it's exactly the same effect as buying a crappy LCD monitor with poor response time. DO NOT USE framebuffer feedback unless you want the effect of a crappy LCD, which is very different from motion blur.

timw    598
Just throwing out an idea: use some Catmull-Rom curves for blending and integrating the pixel intensity. This would effectively increase the order of the interpolation. You'd have a start point and an end point, and two derivatives calculated from intermediate frames and adjusted to make sure the next frame would be continuous. I'm sure it's been thought of; it's the natural extension of the velocity-based first-order approach. It would require at least two more frames in between the rendered ones, so it would effectively add 2/3rds to the rendering cost.
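
To make the idea concrete, the per-channel evaluation would be something like this (standard Catmull-Rom segment form; p1 and p2 are the rendered endpoints and the tangent samples p0 and p3 would come from the intermediate frames):

// Catmull-Rom interpolation of a pixel intensity between p1 and p2, t in [0, 1]
static float catmull_rom(float p0, float p1, float p2, float p3, float t)
{
    float t2 = t * t;
    float t3 = t2 * t;
    return 0.5f * ( 2.0f * p1
                  + (p2 - p0) * t
                  + (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t2
                  + (3.0f * p1 - p0 - 3.0f * p2 + p3) * t3 );
}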

I have to export my OpenGL animation to a QuickTime movie, adding motion blur.
I used the accumulation buffer, blending from 4 to 64 frames into a final frame, but the result is not so professional:
I get either "too much" or "too little" motion blur. Not realistic.
I'll paste the code here. Would someone tell me what's wrong?

int gMotionBlurFrames = 4;   // or 16 or 32 or 64...
int totResampledFrames = totFrames * gMotionBlurFrames;

// here I stretch the animation gMotionBlurFrames times longer
[self ResampleAnimation:gMotionBlurFrames];

for (gFrame = 0; gFrame < totResampledFrames; gFrame++) {

    [glView display];    // render one sub-frame of the resampled animation

    int mbFrame = gFrame % gMotionBlurFrames;

    // add 1/gMotionBlurFrames of this sub-frame to the accumulation buffer
    glAccum(GL_ACCUM, 1.0f / (float)gMotionBlurFrames);

    if (mbFrame == (gMotionBlurFrames - 1)) {
        // after gMotionBlurFrames sub-frames, read back the average,
        // write it out as one movie frame and reset the accumulation buffer
        glAccum(GL_RETURN, 1.0f);
        [self CopyGLBufferToOutputBuffer];
        [self AddOutputImageToQTMedia];

        glClear(GL_ACCUM_BUFFER_BIT);
    }
}

