Verify my motion blur idea


After seeing the motion blur done by Project Offset, I've been searching for effective ways to do "real" motion blur. It annoys me when articles and demos showcasing this effect cheat by using frame feedback, which just looks horrible. So I've been searching around, and I've found an interesting method that I think might work; I'd like your input on it. Basically, if the game runs at something like 120 fps, the extra frames can be divided and rendered to an accumulation buffer. So in the case of 120 fps, the game would render 4 frames to 4 separate render targets, and then average all of them into the final image, which is then rendered to the screen. This probably isn't the best way to do it, so I'd like some insight on how sound my idea is. Thanks a lot!
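The averaging step can be sketched on the CPU like this; it's a minimal illustration (the function name and flat-byte-array buffer layout are my own, not from any particular engine), assuming 8-bit color channels:

```cpp
#include <cstdint>
#include <vector>

// Minimal CPU sketch of the averaging step: fold N sub-frame buffers
// (8-bit channels stored as flat arrays) into one final image, simulating
// a camera whose shutter was open across all N sub-frames.
std::vector<std::uint8_t> average_subframes(
    const std::vector<std::vector<std::uint8_t>>& subframes)
{
    const std::size_t n = subframes.size();
    const std::size_t len = subframes[0].size();
    std::vector<std::uint8_t> out(len);
    for (std::size_t i = 0; i < len; ++i) {
        unsigned sum = 0;
        for (const auto& frame : subframes)
            sum += frame[i];  // accumulate this channel across sub-frames
        out[i] = static_cast<std::uint8_t>(sum / n);  // integer average
    }
    return out;
}
```

In practice you would do this on the GPU (e.g. additive blending into a higher-precision target followed by a divide), but the arithmetic is the same.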

I've always been curious about motion blur. In my mind, simply fading one scene into the other SHOULD create true motion blur (isn't that what's going on inside a camera, i.e. multiple 'pictures' of the same scene, shifted slightly?), but you're right, it looks like crap. All the other techniques I've seen involve some complex shader that must be applied to every object you want to blur, which just seems like such a hassle... I wish there were a more elegant way.

I can't comment on your theory, though. What exactly do you mean by "rendered to an accumulation buffer"? How is that different from "frame feedback"?

My primary reference for this idea is this article. At the end, the author posts a link to a demo that looks a lot like true motion blur.

It looks like the main difference is that instead of just blending the previous frame with the current, there are actually multiple frames rendered, averaged together, and then displayed (simulating a real camera).

Quote:
It looks like the main difference is that instead of just blending the previous frame with the current, there are actually multiple frames rendered, averaged together, and then displayed (simulating a real camera).


Yeppers, that's basically how it works. Real motion blur comes from objects being in motion while the camera shutter is open... so, as you suggest, averaging together multiple renders and displaying the resulting frame is a pretty accurate way to do it.


Fascinating! So this has nothing to do with shaders or individual objects? It's applied uniformly to everything? Now that sounds elegant; checking out the article now...

I thought you might enjoy reading this. It's old now, but it has a good description of several techniques that can be implemented with multiple samples per pixel.

Ack, from what I've read, you need to render MANY frames in order to get decent results, so you're going to take a huge hit to fps... kinda discouraging. Does that mean games like Project Offset are just running at really high fps (like 120+), and that's how they get away with motion blur?

@alvaro

That was an interesting article too... I thought 3dfx was dead, though, heh. Anyway, I didn't quite understand how their "T-buffer" would eliminate the need to render multiple frames in order to get motion blur. Did I miss something?

Guest Anonymous Poster
We are looking to do something similar to this.

Someone asked whether the FPS would have to be insanely high for this effect to look good, and I'd like to point out that it depends on the FPS the game is locked at. At 30 FPS you'd only need to run the game at about 90 FPS to get 3 render targets to alpha-blend together for each displayed frame, which from our tests looks very good. The better the system running the game, the more targets get sampled for each frame, and the more accurate the blur.

Guest Anonymous Poster
http://www.100fps.com/how_many_frames_can_humans_see.htm

Here's an interesting link. A game need not run any faster than a film (24 FPS) to communicate enough information about the environment if the motion blur is accurate. Our minds are able to use the blur information (assuming it's accurate and not a parlor trick) to fill in the gaps between frames. That's why even high-action films aren't dizzying to us.

What about vsync and tearing? If you're running sub-60 fps, won't you get nasty tearing? I know my game looks like crap at anything below 60 fps, but GORGEOUS at 60 fps with vsync. Won't we be giving that up if we use motion blur?

Why don't you just enable vsync if you're worried about tearing? In DirectX, just use D3DPRESENT_INTERVAL_ONE; OpenGL probably has something just as simple.

Also, couldn't you just have the motion blur quality depend on the machine? For instance, on machines running your game under 60 fps, disable motion blur. Then, for the beefy machines that can handle 120+, use the otherwise wasted frames.

Guest Anonymous Poster
Well, the reason you can lock the framerate under 60 FPS (like the 30 I mentioned) is the accuracy of the motion blur. The only reason 40, 50, or 60 FPS still seems off in current games is the lack of that same motion blur. If you can sample 3-4 or more targets for every frame (at 30 Hz), there is no need to bring the framerate any higher. Beefier machines can simply render more samples, and their motion blur will be that much more accurate.

Perhaps this sounds stupid, but do the 'extra frames' have to be rendered at full resolution? It would be interesting to see how it looks with the extra frames rendered at 1/2 the main resolution. Since that should speed things up a lot (I suppose), you could get by with less power without 'wasting' 80% of it just on the motion blur, reducing the overhead to, let's say, 50%...
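The back-of-the-envelope arithmetic behind that: an extra frame at half resolution in each dimension touches roughly a quarter of the pixels. As a sketch (the fill-rate-only cost model and function name are my own simplification, ignoring per-frame fixed costs):

```cpp
// Rough relative fill-rate cost of one full-resolution render plus `extra`
// renders scaled by `scale` per dimension, compared against rendering all
// 1 + extra frames at full resolution. Toy cost model, fill rate only.
double relative_cost(int extra, double scale)
{
    const double reduced = 1.0 + extra * scale * scale;  // scaled extras
    const double full    = 1.0 + extra;                  // all full-res
    return reduced / full;
}
```

For example, relative_cost(3, 0.5) gives 0.4375, i.e. with three half-resolution extras you pay well under half the fill rate of four full-resolution frames, roughly the ballpark suggested above.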

Correct me if I'm wrong... but isn't the best way to render motion blur something like this:

1. When rendering pixels, store the velocity of each pixel (relative to the camera) in a 'velocity' buffer.

2. Do a post-process on the final image, whereby you blur pixels in the direction and magnitude of each pixel's velocity.

3. Remove banding on the motion blur by doing a random texture lookup into a grayscale noise texture and using that value (0-1) to offset where you take the samples from. That effectively removes the banding.

4. With access to the Z-buffer as well, you can make sure pixels in the distance do not blur over the top of pixels near the front. So a fast-moving car behind a lamppost will not blur over the lamppost in the foreground.
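Step 2 above can be sketched on the CPU, reduced to a single scanline for clarity. A real implementation would be a pixel shader sampling a 2D color texture; all names here are illustrative:

```cpp
#include <cstddef>
#include <vector>

// CPU sketch of the velocity-buffer blur: for each pixel, average `taps`
// samples taken back along that pixel's screen-space velocity (in pixels
// per frame). 1-D scanline version; edges are clamped.
std::vector<double> velocity_blur(const std::vector<double>& color,
                                  const std::vector<int>& velocity,
                                  int taps)
{
    const int w = static_cast<int>(color.size());
    std::vector<double> out(w);
    for (int x = 0; x < w; ++x) {
        double sum = 0.0;
        for (int t = 0; t < taps; ++t) {
            // Sample positions spread back along the velocity vector.
            int sx = x - velocity[x] * t / taps;
            if (sx < 0) sx = 0;           // clamp to the scanline
            if (sx >= w) sx = w - 1;
            sum += color[sx];
        }
        out[x] = sum / taps;  // average of all taps
    }
    return out;
}
```

Step 3 in the list would jitter the `t / taps` offsets per pixel with a noise value; step 4 would reject samples whose depth is behind the center pixel's.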

Guest Anonymous Poster
This is a more common method, but it isn't true motion blur. No image-based effect can be.

I agree, but this is the closest we can get that doesn't require multiple frame renders (costly), and the blur is relatively accurate to the average user's eye.

There is a motion blur example using the accumulation buffer in the OpenGL Red Book. It's available free online so have a look at that. [smile]

I think the idea described in that article (this one, to be exact) is completely useless in real time. Why? This sentence reveals it all:
Quote:
Written in that article on freespace.virgin.net
For example, to create a 4 second animation with 100 motion-blurred frames, you might begin by rendering 400 frames

Trying to accomplish this in modern games, where rendering already goes multipass, is completely impossible. That would mean having e.g. 6 passes (one per light) * 4 (just to stick with the above-mentioned example) just to have motion blur :) In my honest opinion, that is insane.
There's a much better technique out there (maybe not as accurate, but accurate enough to be unnoticeable to the human eye, and a lot faster as well). There has been an interesting discussion on this topic here at GD.net; check this out.

Quote:
Original post by Hybrid
I agree, but this is the closest we can get which doesnt require multiple frame renders (costly), and the blur is relatively accurate to the average users eye.


I once saw some pictures done with this method (ray-traced offline), and they looked really good! If it can give 'decent enough' results in a high-quality raytracer, then I suppose in a real-time app it will be more than enough...

@Mephysto

Damn. I got all excited when I saw

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/directx9_c/PixelMotionBlur_Sample.asp

Then I read..

Quote:
This vertex shader transforms vertex position into screen space, performs a simple N • L lighting equation, copies texture coordinates, and computes the velocity of the vertex by transforming the vertex by both the current and previous [world * view * projection] matrix and taking the difference.


What the hell? So EVERY single object in your scene needs this (somewhat simple, but still) vertex shader? If we have to apply a vertex shader to every object in the scene, is it really that much better than just averaging multiple frames?

Quote:
Original post by MePHyst0
I think the idea described in that article (this one, to be exact) is completely useless in real time. Why? This sentence reveals it all:
Quote:
Written in that article on freespace.virgin.net
For example, to create a 4 second animation with 100 motion-blurred frames, you might begin by rendering 400 frames

Trying to accomplish this in modern games, where rendering already goes multipass, is completely impossible. That would mean having e.g. 6 passes (one per light) * 4 (just to stick with the above-mentioned example) just to have motion blur :) In my honest opinion, that is insane.
There's a much better technique out there (maybe not as accurate, but accurate enough to be unnoticeable to the human eye, and a lot faster as well). There has been an interesting discussion on this topic here at GD.net; check this out.


Yes, but you don't need your motion-blurred game to run at 100 fps. It could run at 30 fps just fine, which would only require rendering a maximum of 120 frames per second.

I think a good way would be to re-render only the moving objects, not the whole frame, to a chain of render targets. This way, only the moving objects get the blur, not the whole image, so you avoid having things blur when you move the camera. Then you just do the frame averaging with a weight towards the most recent, etc. Just make sure to render the frames with the alpha cleared to 0.
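The "weight towards the most recent" averaging might look like this on a single channel; the linear k-weighting is my own choice for illustration (an exponential falloff would work too):

```cpp
#include <cstddef>
#include <vector>

// Weighted average of a chain of frames, biased toward the most recent:
// frame k (1-based, oldest first) gets weight k, normalized by the sum
// of all weights. Illustrative single-channel sketch.
double weighted_average(const std::vector<double>& frames)
{
    double sum = 0.0, wsum = 0.0;
    for (std::size_t k = 1; k <= frames.size(); ++k) {
        const double w = static_cast<double>(k);  // newer frames weigh more
        sum  += w * frames[k - 1];
        wsum += w;
    }
    return sum / wsum;
}
```

With two frames this weights the newest 2:1 over the oldest, so the blur trail fades out behind the object rather than smearing uniformly.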

I disagree. I think if you're going to use motion blur it NEEDS to be applied to everything uniformly. Run the pixel motion blur demo in the DirectX SDK directory; it's really amazing how much more realistic motion blur makes even STATIC objects look. The motion blur you get from simply moving the camera (even when other objects are stationary) really makes the scene come alive. It's just such a shame that the 'pixel shader' motion blur requires every single object to be processed by a vertex shader...

Quote:
Original post by Anonymous Poster
http://www.100fps.com/how_many_frames_can_humans_see.htm

Here's an interesting link. A game need not run any faster than a film (24 FPS) to communicate enough information about the environment if the motion blur is accurate. Our minds are able to use the blur information (assuming it's accurate and not a parlor trick) to fill in the gaps between frames. That's why even high-action films aren't dizzying to us.


Um... not to be nit-picky... but there is a LOT of poppycock at that link.

Our eyes don't operate in "frames per second"; they are constantly operating, sending signals to our brains. If you concentrate enough you can detect the slight flickering of your house lights running on a 50 or 60 Hz AC electrical supply, or even sense the difference between strobe lights running at 110 and 120 Hz. Our brains are very adept at interpreting motion from what our eyes see.

For example (and despite what you may have heard), Disney cel animation is very often shot in "twos", meaning 12 drawings per second, yet it still seems fluid to our eyes.

Saving Private Ryan had action scenes shot with very fast camera shutter speeds, minimizing motion blur, yet we can still sense fluidity in even fast motions when projected at 24 FPS.

That we can easily detect a difference between a non-motion-blurred game running at 120 FPS and one with motion blur running at 30 FPS is proof enough that motion blur isn't a necessity for fluid motion.

It's ONLY more "realistic" to have motion blur in that we have trained our brains to perceive it when we watch images projected on flat 2D screens. If you ever get a chance, catch one of the IMAX films projected at 48 FPS; they have much more fluid, "realistic" motion than ordinary 24 FPS projection.
