MirekCerny

DX11 DX10 motion blur


Hello,
I decided to implement object-based motion blur the 'DX10' way (with a geometry shader, as opposed to a velocity texture).
I read the docs for the DX10 SDK sample, and they were clear enough: first you amplify the geometry "on both sides" using a geometry shader, then you blur the texture for said geometry using anisotropic filtering.
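For reference, this is roughly what I understood the amplification step to be. It's my own sketch, not the actual sample code, and it assumes the vertex shader outputs both current- and previous-frame clip-space positions (interpolating positions in clip space is only an approximation, but it shows the idea):

```hlsl
struct GSIn
{
    float4 posCurr : POSITION0;  // clip-space position, current frame
    float4 posPrev : POSITION1;  // clip-space position, previous frame
    float2 uv      : TEXCOORD0;
};

struct GSOut
{
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0;
};

// Emit copies of the triangle at positions interpolated between its
// previous-frame and current-frame clip-space positions, so the
// rasterized footprint covers the whole motion path.
[maxvertexcount(12)]
void MotionAmplifyGS(triangle GSIn input[3], inout TriangleStream<GSOut> stream)
{
    const uint kSteps = 4;  // copies emitted along the motion path
    for (uint s = 0; s < kSteps; ++s)
    {
        float t = s / (float)(kSteps - 1);
        for (uint v = 0; v < 3; ++v)
        {
            GSOut o;
            o.pos = lerp(input[v].posPrev, input[v].posCurr, t);
            o.uv  = input[v].uv;
            stream.Append(o);
        }
        stream.RestartStrip();
    }
}
```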

However, when I took a look at the sample, it looked fishy. I checked the source and found out they were not amplifying the geometry AND blurring the texture; they were EITHER amplifying the geometry OR blurring the texture, which doesn't make much sense?

Does anybody know what is going on here? What is the right way to do this?

Or rather, what is the best way to implement real-time object-based motion blur on DX11 hardware?

Yeah I remember noticing the same thing when I was looking through that sample. I guess they decided they didn't need it? It probably wouldn't look very good anyway, since you'd be pre-filtering unshaded samples.

Unfortunately getting a good motion blur is still really hard and/or expensive even on DX11 hardware, so there's no real "best way" that I could point you to. I've done a lot of experimentation and prototyping in my spare time, and haven't yet come up with a satisfactory solution. Using geometry shader amplification is expensive, and requiring alpha-to-coverage + MSAA doesn't help either. It's especially expensive the way they did it in that sample, where they add fins to every front-facing triangle. This results in a lot of amplification, a lot of overdraw, as well as artifacts inside the mesh silhouettes. I would suspect that if you tried to do it while rendering to a G-Buffer with MRT, the results would be pretty disastrous.

Some other things that I've tried:

  1. Only generating fins along the mesh silhouettes, combined with a post-process blur (see the first sketch after this list). To do this right you need triangle adjacency information, which means you either need to store it in a buffer or use the special index buffer layout required for GS adjacency. It's still not cheap since you still have the GS active, but at least you can avoid the crazy overdraw. The shading on the fins can look pretty weird for certain shapes, so you need to be careful about what normals you use on the fins. You also won't get proper transparency inside the object silhouette.
  2. Using a compute shader prepass to generate fins with an AppendStructuredBuffer (see the second sketch below). This avoids GS usage, but it's still not very cheap, since you need to transform all of the vertices correctly in the compute shader and also use an append buffer, which requires global atomics. There are also some annoying restrictions that prevent you from having a vertex buffer that's also used as an SRV or UAV. Consequently you need special shaders for rendering the fins.
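Here's a rough sketch of the silhouette test for option 1. This is my own code, not something from a shipped implementation: it assumes the VS passes through current- and previous-frame world-space positions, and it glosses over winding conventions (it treats an edge as a silhouette when the center triangle faces its own motion and the neighbor across that edge doesn't):

```hlsl
cbuffer PerFrame : register(b0)
{
    float4x4 ViewProj;  // current-frame view-projection
};

struct GSIn
{
    float3 posCurrWS : POSITION0;  // world space, current frame
    float3 posPrevWS : POSITION1;  // world space, previous frame
};

struct GSOut
{
    float4 pos : SV_Position;
};

// Does a triangle face "forward" along its own average motion?
bool FacesMotion(float3 a, float3 b, float3 c,
                 float3 aPrev, float3 bPrev, float3 cPrev)
{
    float3 n   = cross(b - a, c - a);                            // face normal
    float3 vel = ((a + b + c) - (aPrev + bPrev + cPrev)) / 3.0;  // face velocity
    return dot(n, vel) > 0.0;
}

// Triangle-with-adjacency input: even indices (0,2,4) are the center
// triangle, odd indices (1,3,5) are the far vertices of the three
// neighboring triangles.
[maxvertexcount(12)]
void SilhouetteFinGS(triangleadj GSIn v[6], inout TriangleStream<GSOut> stream)
{
    if (!FacesMotion(v[0].posCurrWS, v[2].posCurrWS, v[4].posCurrWS,
                     v[0].posPrevWS, v[2].posPrevWS, v[4].posPrevWS))
        return;

    [unroll]
    for (uint e = 0; e < 3; ++e)
    {
        uint i0 = e * 2;            // first vertex of this edge
        uint i1 = (e * 2 + 2) % 6;  // second vertex of this edge
        uint ia = e * 2 + 1;        // far vertex of the neighboring triangle

        bool neighborFaces = FacesMotion(
            v[i0].posCurrWS, v[ia].posCurrWS, v[i1].posCurrWS,
            v[i0].posPrevWS, v[ia].posPrevWS, v[i1].posPrevWS);

        if (!neighborFaces)  // silhouette edge: extrude a fin back along the motion
        {
            GSOut o;
            o.pos = mul(float4(v[i0].posCurrWS, 1), ViewProj); stream.Append(o);
            o.pos = mul(float4(v[i1].posCurrWS, 1), ViewProj); stream.Append(o);
            o.pos = mul(float4(v[i0].posPrevWS, 1), ViewProj); stream.Append(o);
            o.pos = mul(float4(v[i1].posPrevWS, 1), ViewProj); stream.Append(o);
            stream.RestartStrip();
        }
    }
}
```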

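And a sketch of the compute prepass for option 2, with a hypothetical buffer layout (the facing test from above is omitted for brevity; a real version would also need the two triangles sharing each edge):

```hlsl
cbuffer Params : register(b0)
{
    uint EdgeCount;  // number of entries in the Edges buffer
};

// One record appended per moving edge. The fins get expanded later by
// dedicated fin-rendering shaders (e.g. via DrawInstancedIndirect),
// since the append buffer can't double as a regular vertex buffer.
struct FinEdge
{
    float3 posCurr0, posCurr1;  // edge endpoints, current frame
    float3 posPrev0, posPrev1;  // edge endpoints, previous frame
};

StructuredBuffer<float3> VertsCurr : register(t0);  // skinned positions, current frame
StructuredBuffer<float3> VertsPrev : register(t1);  // skinned positions, previous frame
StructuredBuffer<uint2>  Edges     : register(t2);  // endpoint indices per edge
AppendStructuredBuffer<FinEdge> Fins : register(u0);

[numthreads(64, 1, 1)]
void GenerateFinsCS(uint3 id : SV_DispatchThreadID)
{
    if (id.x >= EdgeCount)
        return;

    uint2 edge = Edges[id.x];
    float3 c0 = VertsCurr[edge.x], c1 = VertsCurr[edge.y];
    float3 p0 = VertsPrev[edge.x], p1 = VertsPrev[edge.y];

    // Cheap rejection: skip edges that barely moved.
    float3 vel = ((c0 + c1) - (p0 + p1)) * 0.5;
    if (dot(vel, vel) < 1e-6)
        return;

    FinEdge fin = { c0, c1, p0, p1 };
    Fins.Append(fin);
}
```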

Currently I'm still sticking with an implementation based on Morgan McGuire's recent I3D paper. It's a purely screen-space approach which fundamentally limits the overall quality, but it does a good job of avoiding the common artifacts that plague most games (especially for camera-based motion blur). I'm going to keep trying to figure out a good way to add in some sort of geometry-based techniques, because I think that you really need that if you want character motion to look good.
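If it helps, the screen-space pipeline there boils down to a couple of cheap velocity passes followed by a gather. Here's a rough sketch of the first pass (my own code, not the paper's listing, and assuming the paper in question is "A Reconstruction Filter for Plausible Motion Blur"; one thread per tile purely for clarity, not speed):

```hlsl
// TileMax: for each KxK pixel tile, find the velocity with the largest
// magnitude. A second pass (NeighborMax) then takes the max over each
// tile's 3x3 neighborhood, and the final gather pass samples along that
// dominant velocity.
Texture2D<float2>   VelocityBuffer : register(t0);  // per-pixel screen-space velocity
RWTexture2D<float2> TileMaxOut     : register(u0);  // one texel per tile

static const uint K = 20;  // tile size in pixels (= max blur radius)

[numthreads(1, 1, 1)]
void TileMaxCS(uint3 tile : SV_DispatchThreadID)
{
    float2 vMax = 0;
    float  lenMaxSq = 0;
    for (uint y = 0; y < K; ++y)
    {
        for (uint x = 0; x < K; ++x)
        {
            float2 v = VelocityBuffer[tile.xy * K + uint2(x, y)];
            float  l = dot(v, v);
            if (l > lenMaxSq) { vMax = v; lenMaxSq = l; }
        }
    }
    TileMaxOut[tile.xy] = vMax;
}
```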

Even if they did decide they didn't need the anisotropic filtering, it still makes no sense to use it to render the scene _when the motion blur is off_ ;-)

Thanks for the paper, that's exactly the kind of information I was looking for. Yeah, it is a bit disappointing - I was hoping that the GS method would allow real object-based motion blur - perhaps using a technique similar to shadow volume generation on the GPU - but now it seems I'm still stuck with the screen-space one. Oh well ;-)


> Currently I'm still sticking with an implementation based on Morgan McGuire's recent I3D paper. It's a purely screen-space approach which fundamentally limits the overall quality, but it does a good job of avoiding the common artifacts that plague most games (especially for camera-based motion blur). I'm going to keep trying to figure out a good way to add in some sort of geometry-based techniques, because I think that you really need that if you want character motion to look good.

I'm curious (as one of the co-authors): are you seeing any particularly objectionable artifacts on character motion with the technique? We've been pretty happy with the results, particularly considering the overall cost and the fact that it even works on current-generation consoles.

Instead of using fins, why not just scale the object in the direction it's travelling based on its velocity, and Gaussian-blur it in that direction? This is actually truer to real life too. You might also be able to pass in the extents of the object in screen space and alpha-blend out at the edges.

So far we have been pretty happy with it. It does a really nice job of handling depth rejection. The only time it really breaks down is when you have large, divergent velocities within the same tile, but that's a fundamental limitation of screen-space approaches.

Scaling meshes only works for simple motion along a major axis. What you really want to do is stretch per-vertex, which you can kinda do by moving each vertex based on the dot product of its normal and the velocity direction (see the sketch below). However this only works well on rounder, organic meshes. It also causes your mesh to literally tear itself apart, unless you have a way to guarantee that vertices with the same position are always offset by the same amount. One way to do this is to compute a separate normal generated purely from positions.
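Something like this, as a minimal sketch (my own code; 'geoNormal' stands for the position-derived normal I mentioned, and the velocity is assumed to be supplied per vertex in object space):

```hlsl
cbuffer PerObject : register(b0)
{
    float4x4 WorldViewProjCurr;
    float4x4 WorldViewProjPrev;
};

struct VSIn
{
    float3 pos       : POSITION;
    float3 geoNormal : NORMAL1;   // normal recomputed purely from positions
    float3 velocity  : TEXCOORD1; // object-space motion since last frame
};

// Vertices facing along the motion stay at the current position;
// vertices facing away get pulled back to last frame's position,
// stretching the mesh along its motion path.
float4 StretchVS(VSIn input) : SV_Position
{
    float4 posCurr = mul(float4(input.pos, 1), WorldViewProjCurr);
    float4 posPrev = mul(float4(input.pos, 1), WorldViewProjPrev);

    bool leading = dot(input.geoNormal, input.velocity) > 0;
    return leading ? posCurr : posPrev;
}
```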
