# Full Screen Motion Blur With Shaders


## Recommended Posts

So this afternoon I decided, after looking at a few games, that adding motion blur would be super easy, and I tried it... I was wrong. Maybe it is easy, but I don't get it. What I did was lerp the current frame with an accumulation buffer using an s value of 0.75, then copy the new scene into the accumulation buffer and use that for the next frame, and so on. It kind of works, but not very well.

The first problem is that when the program runs at lower framerates, large gaps appear between previous frames in the motion blur, making it look pretty awful. The second issue is that certain objects accumulate on the buffer and never disappear. This is the one that confuses me the most, because it seems like the lerp should diminish older frames until they're too faint to see. (You can see the numbers and the outline of the buttons accumulated in the wrong spots. The frame will sit like this forever and the numbers never disappear from the wrong spots.) I really hope I'm not doing this completely wrong. If there's one thing I don't want to do, it's render each moving object 1000 times to get a motion blur effect.
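For reference, the feedback blend described above boils down to a single lerp per pixel each frame. A minimal HLSL sketch (the sampler names and registers here are assumptions, not from any actual code in this thread):

    // Blend the freshly rendered frame with the accumulation buffer.
    // Note: with an 8-bit render target the lerp result is quantized,
    // so old pixels can stall a few shades away from the background
    // and never fully fade -- one plausible cause of persistent ghosts.
    sampler2D CurrentFrame : register(s0);   // this frame's scene
    sampler2D AccumBuffer  : register(s1);   // last frame's blended result

    float4 PS_Accumulate(float2 uv : TEXCOORD0) : COLOR0
    {
        float4 current = tex2D(CurrentFrame, uv);
        float4 history = tex2D(AccumBuffer,  uv);
        // s = 0.75 keeps 75% of the history each frame
        return lerp(current, history, 0.75f);
    }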

##### Share on other sites
The way you're doing it now is the 'old' way of doing simple motion blur, and as you found, it doesn't look very good, especially at low framerates :P

Assuming you are using vertex/pixel shaders, the new way is to render all your moving objects' per-pixel velocities to an offscreen buffer, then in a post-processing pass, use that velocity buffer to blur your framebuffer.

I don't know enough about it to give you a detailed explanation, but the DirectX SDK has a nice motion blur sample, and NVIDIA's SDK has a much better one.

Again, that technique only works if you are using vertex/pixel shaders. I believe it can be made to work even with PS/VS 1.4.

The cool thing about the technique I just mentioned is that although you can get some artifacts, it still looks very nice, even at low framerates.

##### Share on other sites
Yeah, as I was looking through these forums, I did see that. However, I am a bit confused. How exactly do you calculate 'pixel' velocity? That seems like it would be a bit difficult.

##### Share on other sites
In the DX sample the velocity of a vertex is calculated by taking the difference of the current screen space position and the previous screen space position. Here is the snippet of code:

    // Transform from object space to homogeneous projection space
    vPosProjSpaceCurrent = mul(vPos, mWorldViewProjection);
    vPosProjSpaceLast = mul(vPos, mWorldViewProjectionLast);

    // Convert to non-homogeneous points [-1,1] by dividing by w
    vPosProjSpaceCurrent /= vPosProjSpaceCurrent.w;
    vPosProjSpaceLast /= vPosProjSpaceLast.w;

    // Vertex's velocity (in non-homogeneous projection space) is the position this frame minus
    // its position last frame.  This information is stored in a texture coord.  The pixel shader
    // will read the texture coordinate with a sampler and use it to output each pixel's velocity.
    float2 velocity = vPosProjSpaceCurrent - vPosProjSpaceLast;

    // The velocity is now between (-2,2) so divide by 2 to get it to (-1,1)
    velocity /= 2.0f;

I had only one problem with the DX sample, though: for some models this technique doesn't work so well because of perspective-correct interpolation. I ended up moving the velocity calculations to the pixel shader and it worked well.
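That fix can be sketched as follows: instead of dividing by w in the vertex shader, pass both undivided clip-space positions down and do the divide per pixel, so the positions interpolate perspective-correctly. This is only a sketch under that assumption; the struct and function names (and the matrix names beyond those in the SDK snippet above) are made up for illustration:

    struct VS_OUT
    {
        float4 posH    : POSITION;   // clip-space position for rasterization
        float4 posCur  : TEXCOORD0;  // current clip-space position (undivided)
        float4 posLast : TEXCOORD1;  // previous frame's clip-space position
    };

    VS_OUT VS_Velocity(float4 vPos : POSITION)
    {
        VS_OUT o;
        o.posH    = mul(vPos, mWorldViewProjection);
        o.posCur  = o.posH;                           // pass undivided position
        o.posLast = mul(vPos, mWorldViewProjectionLast);
        return o;
    }

    float4 PS_Velocity(VS_OUT i) : COLOR0
    {
        // Divide by w per pixel so interpolation stays perspective-correct
        float2 cur  = i.posCur.xy  / i.posCur.w;
        float2 last = i.posLast.xy / i.posLast.w;
        float2 velocity = (cur - last) / 2.0f;        // map (-2,2) to (-1,1)
        return float4(velocity, 0.0f, 1.0f);
    }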

##### Share on other sites
Calculating per-pixel velocity is really easy. First you need to make a black, screen-sized buffer. Then all you do is render your geometry given its current WorldViewProjection and its previous WorldViewProjection matrices. All the work is done in the vertex shader; velocity is simply

v = vertexPosition * currentWVP - vertexPosition * previousWVP

You can output this velocity as the color from the vertex shader, and don't even need a pixel shader.

Then you render your scene normally (with lighting, textures, etc.).
After rendering your scene normally, you render a plain old screen-sized quad, and in its pixel shader you read the velocity from the velocity map you made previously and blur the backbuffer based on that velocity.

edit:
looks like deathkrush has exactly what you need, and he's right, most of the artifacts can be fixed if you do the velocity calc in the pixel shader (though it's a bit more expensive)

##### Share on other sites
So you render the scene twice? Once regularly, and once with the velocity map, then blur the regular one with the velocity map.

When you render the 'velocity map', would you use a regular R8G8B8A8 backbuffer? Wouldn't rendering the scene twice and storing the old WorldViewProjection matrices be a large cut on performance?

Finally, how do you blur using that velocity? Do you blur with a radius based on the magnitude of the velocity, or do you actually use the direction?

##### Share on other sites
In fact, it's possible to render your geometry only once if you use multiple render targets (MRT), but in my own experience, as long as you don't have to issue too many draw calls, it is faster to render twice rather than with MRT. The best thing to do is try both and see which is faster, though.

If you do end up rendering everything twice, you can do another little trick during velocity map generation where you stretch the actual geometry in the vertex shader to eliminate even more of the artifacts (NVIDIA's sample shows this well).

When you render the velocity map, you can use a floating-point texture, or you can set a "max velocity" and then divide your velocity by it to put it in the [-1,1] range, allowing you to store it in an R8G8B8 target (no need for alpha here).

When you actually do the blur, you want to take N samples in the direction of the velocity and in the negative direction of the velocity. Don't sample in a disk or anything; sample in a line along velocity and -velocity.
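The line-sampling blur described above might be sketched like this in HLSL (the sampler names, the sample count, and the 0.5 blur scale are all assumptions for illustration, not from any SDK sample):

    sampler2D SceneTex    : register(s0);  // the rendered scene
    sampler2D VelocityTex : register(s1);  // per-pixel velocity map

    static const int NUM_SAMPLES = 8;

    float4 PS_MotionBlur(float2 uv : TEXCOORD0) : COLOR0
    {
        // Velocity was stored in [-1,1]; scale it down to a sensible blur length
        float2 velocity = tex2D(VelocityTex, uv).xy * 0.5f;

        float4 sum = 0.0f;
        // Step along the line through this pixel, from -velocity to +velocity
        for (int i = 0; i < NUM_SAMPLES; ++i)
        {
            float t = i / (float)(NUM_SAMPLES - 1) * 2.0f - 1.0f;  // -1 .. 1
            sum += tex2D(SceneTex, uv + velocity * t);
        }
        return sum / NUM_SAMPLES;
    }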

##### Share on other sites
Well, the reason your current implementation looks terrible is that it's mathematically wrong [smile]. Motion blur is designed to address an aliasing problem, in this case temporal aliasing of a signal. And the only way to properly anti-alias a signal is to oversample and then apply a low-pass filter. In your case you're not creating any extra samples; you're just applying a low-pass filter. This is analogous to applying a blur filter instead of multi-sampling or super-sampling when attempting to remove "jaggies". The proper approach would be, for every frame, to render a few "sub-frames" that represent time periods before and after the initial time. These sub-frames would then be combined to produce the anti-aliased image. However, this is horribly expensive, and requires many samples to look good at lower framerates (for the same reason an image at 640 x 480 looks bad on a 1280 x 1024 monitor, even if 4x FSAA is used).

Of course, you might notice that the 2.5D post-processing motion blur isn't proper anti-aliasing either. You're still stuck with the same problem where you need more information than what's already been rendered to your back-buffer, and because of that you *will* get artifacts with this technique. This picture demonstrates it quite well.

Anyway, velocity generation usually isn't a problem unless you're fill-rate bound; it's a pretty cheap pass. A trick you can do to avoid velocity generation entirely is to, in your post-processing pass, determine a pixel's position using the depth buffer and then use the last frame's ViewProjection matrix to determine its velocity. However, the drawback is that you only get blurring from camera movement, and not from objects moving on their own.
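That depth-based trick can be sketched roughly as follows: reconstruct the pixel's world-space position from depth, reproject it with last frame's camera, and diff the two screen positions. All names here (the sampler, the matrix uniforms, the function) are assumptions for illustration:

    sampler2D DepthTex : register(s0);  // scene depth in [0,1]

    float4x4 mInvViewProj;   // inverse of this frame's ViewProjection
    float4x4 mViewProjLast;  // last frame's ViewProjection

    float4 PS_CameraVelocity(float2 uv : TEXCOORD0) : COLOR0
    {
        float depth = tex2D(DepthTex, uv).r;

        // Rebuild this pixel's clip-space position, then its world position
        float2 ndc   = uv * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f);
        float4 clip  = float4(ndc, depth, 1.0f);
        float4 world = mul(clip, mInvViewProj);
        world /= world.w;

        // Reproject with last frame's camera and diff the screen positions
        float4 clipLast = mul(world, mViewProjLast);
        float2 ndcLast  = clipLast.xy / clipLast.w;

        float2 velocity = (ndc - ndcLast) / 2.0f;   // camera-only velocity
        return float4(velocity, 0.0f, 1.0f);
    }

Note that this only captures camera motion, exactly as described above: a moving object under a still camera reprojects onto itself and gets zero velocity.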
