Screen Space Motion Blur artifacts

jcabeleira

Hello guys, I'm implementing screen space motion blur in my deferred renderer using the trick of obtaining the per-pixel velocity vector: I transform the pixel's world space position by both the current and the previous view/projection matrices and take the difference between the two projected positions.
The results of this technique are good except when I rotate the camera too fast or move backwards, which produces blurring artifacts/discontinuities (I'll upload a screenshot as soon as I can). I believe this happens because in those cases the pixel was behind the camera in the previous frame, so its clip-space w value is negative and the perspective division no longer projects it correctly onto the near plane. However, I see everyone using this technique without any problems, so what am I missing here?

Thanks in advance

It sounds like you might be dividing by w and subtracting the two positions in the vertex shader. (And it sounds like you've read the motion blur chapter in GPU Gems 2.) You need to output both positions from the vertex program without dividing by w (you only need x, y, and w). Then, divide each position by its w coordinate and subtract them in the fragment shader. This will correctly handle triangles that extend behind the camera.
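As a minimal GLSL sketch of that approach (illustrative only; the uniform and varying names are assumptions, not taken from either book):

//--- vertex shader ---
uniform mat4 currentMVP;  //current model/view/projection matrix
uniform mat4 previousMVP; //previous frame's model/view/projection matrix
varying vec4 currentPos;  //clip space position, NOT divided by w
varying vec4 previousPos; //clip space position, NOT divided by w

void main()
{
    currentPos = currentMVP * gl_Vertex;
    previousPos = previousMVP * gl_Vertex;
    gl_Position = currentPos;
}

//--- fragment shader ---
varying vec4 currentPos;
varying vec4 previousPos;

void main()
{
    //Divide by w per fragment, after interpolation, so triangles
    //that extend behind the camera are still handled correctly.
    vec2 current = currentPos.xy / currentPos.w;
    vec2 previous = previousPos.xy / previousPos.w;

    vec2 motion = current - previous; //velocity in NDC
    gl_FragColor = vec4(motion * 0.5 + 0.5, 0.0, 1.0);
}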

I recommend checking out Chapter 17 in Game Engine Gems 1 if you're interested in a more precise motion blur method based on a velocity buffer.

Thanks for your response, Eric.
The problem can't be caused by the division by W because I'm doing all the calculations in the fragment shader. I'm performing camera motion blur using the depth buffer to reconstruct the world space position of each pixel. This is the code that I use to generate the motion vectors:


void main()
{
    //Obtain the world space position of the pixel.
    vec3 position = reconstructPositionFromDepth(gl_TexCoord[0].st);

    //Current screen space position in the [0, 1] range.
    vec2 currentProjection = gl_TexCoord[0].st;

    //Previous screen space position, reprojected with last frame's matrix.
    vec4 previousProjection = previousMatrix * vec4(position, 1.0);
    previousProjection.xy /= previousProjection.w;
    previousProjection.xy = previousProjection.xy * 0.5 + 0.5; //Convert from the [-1, 1] to the [0, 1] range

    vec2 motion = currentProjection.xy - previousProjection.xy;

    gl_FragColor = vec4(motion, 0.0, 0.0);
}
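For context, a depth-based reconstruction like the reconstructPositionFromDepth used above typically looks something like this (a sketch with assumed uniform names, not necessarily the exact implementation used here):

uniform sampler2D depthTexture;           //assumed: the scene depth buffer
uniform mat4 inverseViewProjectionMatrix; //assumed: inverse of the current view/projection

vec3 reconstructPositionFromDepth(vec2 texCoord)
{
    float depth = texture2D(depthTexture, texCoord).r;

    //Rebuild the NDC position in the [-1, 1] cube and unproject it.
    vec4 ndc = vec4(vec3(texCoord, depth) * 2.0 - 1.0, 1.0);
    vec4 world = inverseViewProjectionMatrix * ndc;
    return world.xyz / world.w;
}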


And here is an image depicting the artifact when rotating the camera from right to left very fast:
[Image: motion_blur_artifact.jpg]

I'm sure the world space reconstruction is fine since I've tested it extensively.
What confuses me is: what is the projected position of a point that is behind the camera? For instance, imagine that the camera rotated 180°, meaning that a pixel that is now right in front of the camera was exactly behind it in the previous frame. If I'm not mistaken, both pixel positions are projected to the same screen space location, so their difference would yield a zero length motion vector, when in fact we'd like to obtain something similar to the complete 180° arc of the motion. Am I right?
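To put numbers on that thought experiment, assume a standard perspective projection where clip-space w equals the view-space distance along the forward axis. A point d units in front of the camera on the view axis sits at view space (0, 0, -d) and projects with w = +d, so xy/w = (0, 0). After a 180° rotation the same world point sits at view space (0, 0, +d) and projects with w = -d, so xy/w is again (0, 0). Both land on the screen center, yielding a zero motion vector instead of the 180° arc; worse, any off-center point behind the camera flips across the origin after the divide, producing a bogus vector.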

Both the current position and the previous position would need to be divided by their (different) w coordinates in the fragment shader. I don't see that. What is stored in gl_TexCoord[0].st? The screen coordinates? (If so, it would be better to use gl_FragCoord.xy.) Your subtraction is operating on things that are in different coordinate spaces, which will never produce correct results. I don't see the current projection matrix being applied. You also don't want to remap to [0,1] before subtracting. Your code really ought to look something like this:

void main()
{
    //Obtain the world space position of the pixel.
    vec3 position = reconstructPositionFromDepth(gl_FragCoord.xy);

    //Project with both matrices; keep the full vec4s until the divide.
    vec4 currentProjection = currentMatrix * vec4(position, 1.0);
    vec4 previousProjection = previousMatrix * vec4(position, 1.0);
    currentProjection.xy /= currentProjection.w;
    previousProjection.xy /= previousProjection.w;

    vec2 motion = currentProjection.xy - previousProjection.xy;
    gl_FragColor = vec4(motion * 0.5 + 0.5, 0.0, 0.0);
}


> What wrapping mode is set for the sampled texture?

Clamp to edge.

> What is stored in gl_TexCoord[0].st? The screen coordinates?

Yes, the screen coordinates in the [0, 1] range. These are used to sample the color buffer texture, which is a simple 2D texture.

> Both the current position and the previous position would need to be divided by their (different) w coordinates in the fragment shader. I don't see that.

I already know the current screen space position of the pixel: it's simply gl_TexCoord[0].st (scaled and biased to the [-1, 1] range), so I don't need to project its world space position to screen space. Doing so would unproject from screen space to world space when reconstructing the position from depth and then project back to screen space when multiplying by the current view/projection matrix, which is redundant.
That was actually my first approach, and the code was very similar to the one you suggested, yet it yielded the same artifacts.

> Your subtraction is operating on things that are in different coordinate spaces, which will never produce correct results. I don't see the current projection matrix being applied. You also don't want to remap to [0,1] before subtracting.

The coordinate space is the same: the current screen space position of the pixel is given in the [0, 1] range, and the previous screen space position is given in the [-1, 1] range but converted to the [0, 1] range by the instruction "previousProjection.xy = previousProjection.xy * 0.5 + 0.5;". Nevertheless, I modified the code to keep everything in the [-1, 1] range, but the artifact remains:


void main()
{
    //Obtain the world space position of the pixel.
    vec3 position = reconstructPositionFromDepth(gl_TexCoord[0].st);

    //Current screen space position in the [-1, 1] range.
    vec2 currentProjection = gl_TexCoord[0].st * 2.0 - 1.0;

    //Previous screen space position in the [-1, 1] range.
    vec4 previousProjection = previousMatrix * vec4(position, 1.0);
    previousProjection.xy /= previousProjection.w;

    vec2 motion = (currentProjection.xy - previousProjection.xy) / 2.0;

    gl_FragColor = vec4(motion, 0.0, 0.0);
}

I've been thinking about this motion blur technique and I'm convinced that the artifact I get is actually expected. Look at the diagram below: on the left you see a case where the technique performs well, and on the right you can see that the technique fails when the point was behind the near plane in the previous frame:

[Image: motion_blur_projection.jpg]

If I understand projections correctly, we can't rely on the projection of points that were behind the camera in the previous frame, since they yield an incorrect motion vector. What do you guys think about this?
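One way to test this hypothesis (a sketch only, not a verified fix) is to flag reprojected points whose clip-space w is not positive, i.e. points that were behind the camera in the previous frame, and suppress their motion vectors instead of trusting the flipped projection:

vec4 previousProjection = previousMatrix * vec4(position, 1.0);

vec2 motion = vec2(0.0);
if (previousProjection.w > 0.0) //point was in front of the camera last frame
{
    previousProjection.xy /= previousProjection.w;
    motion = (currentProjection.xy - previousProjection.xy) / 2.0;
}
//Points with w <= 0 were behind the camera; after the divide their
//positions flip across the origin, so any motion vector derived from
//them is bogus and is zeroed out here instead.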
