motion blur

Started by
7 comments, last by DonnieDarko 18 years, 8 months ago
I'm speechless: Get project offset sneak peek 2 here!! I must say I'm stunned, amazed & don't know what to say... except - any idea how they make that "unified blur"? I can't come up with an idea that would work in all situations, especially during the "pause" with the particles. Although the objects aren't moving (because of the pause, but other than that they're moving pretty fast :P ), blur is still applied to them, plus the camera can freely move around & this won't cause visual artifacts.

Now, I think that suggests that the frame buffer somehow gets the information on how fast each object is moving, which could be done easily by rendering to a separate render target, encoding each object's speed into the color & then doing some post-process work. But this method would fail, because as you see in This image the blur works on different parts of each object individually, so you would need some way to encode the speed of each individual pixel... so... any ideas on how to make something like that work? [Edited by - Bouga on August 9, 2005 2:12:02 AM]
"A screen buffer is worth a thousand char's" - me
Hmm... the boundary between the blurred and non-blurred parts seems rather sharp...

Richard "Superpig" Fine - saving pigs from untimely fates - Microsoft DirectX MVP 2006/2007/2008/2009
"Shaders are not meant to do everything. Of course you can try to use it for everything, but it's like playing football using cabbage." - MickeyMouse

Yes, that's the thing that caught my attention in the first place. If you just do motion blur for the whole screen, even the things that move very slowly will be a little blurry, which is undesirable in most places. At least I think it doesn't look good... but here it is somehow handled very nicely - the blurry parts are blurred and the non-moving parts stay crisp. Plus, when you freeze a frame, you don't see the artifact that usually appears in motion blur - being able to see the exact position of every object in the few previous frames. Here, the object is just stretched as if there were many, many little steps. (Maybe that's because they have some killer hardware & actually get very high fps... :? )
"A screen buffer is worth a thousand char's" - me
AFAIK, they are rendering the velocity of the object (calculated per vertex and interpolated across the polygon) to a separate render target, then capturing the current frame and blurring it proportionally to the stored velocities. NVIDIA has a presentation on this somewhere on their developer site. I didn't see any curved motion blurs in the video - which can't occur with this method, due to the linear nature of velocity - so I suspect this is the case.
I can see the fnords.
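A rough sketch of the blur pass described above (hypothetical, shown in 1D on a plain list rather than in a shader): each output pixel averages several samples stepped backwards along its own stored velocity, which is what smears a bright moving pixel into a "tail" while zero-velocity pixels stay crisp. The tap count and pan velocity are made-up values for illustration.

```python
# Hypothetical sketch: blur each pixel along its stored velocity.
# 1D for clarity; a shader would do the same with texture taps.

def velocity_blur_1d(image, velocity, taps=8):
    """image: list of brightness values; velocity[x]: pixels moved this
    frame. Each output pixel averages `taps` samples stepped from its
    own position back along its velocity."""
    n = len(image)
    out = []
    for x in range(n):
        total = 0.0
        for i in range(taps):
            src = x - velocity[x] * i / (taps - 1)      # 0 .. -velocity
            s = min(n - 1, max(0, int(round(src))))     # clamp to image
            total += image[s]
        out.append(total / taps)
    return out

# One bright pixel, with the whole view panning 4 px this frame:
img = [0.0] * 10
img[6] = 1.0
vel = [4.0] * 10
blurred = velocity_blur_1d(img, vel)
# pixels 6-9 are now nonzero: the point has been smeared into a tail,
# while pixels with nothing bright along their path stay at 0
```

With few taps the individual samples are visible as ghost copies, which is the artifact mentioned earlier in the thread; more taps (or jittered sample positions) smooth the tail into a continuous streak.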
Quote:Original post by DudeMiester
afaik, they are rendering the velocity of the object (calculated per vertex and interpolated across the polygon) to a seperate render target, then capturing the current frame and blurring it proportional to the stored velocities.


I saw the paper you're talking about; however, IIRC even with a high number of samples (16 I think was used there?) you could still make out the individual samples, which kinda ruined the effect.
Remember it's possible to blur in the time dimension; this would smooth it out at the cost of variance in the solution. That could be what they're doing - run a 3-dimensional Gaussian through it. I don't see it being that crazy. Also, the particles could be parameterized via some sort of spline, and the spline drawn with increasing intensity based on some distance factor. These particles are obviously not sampled temporally - they're moving too fast - but you can fake that easily, though.

Tim
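The "blur in the time dimension" idea above can be sketched as a simple accumulation buffer (a hypothetical CPU version, not anything confirmed from the video): each new frame is blended with a running history, so a pixel that changes quickly leaves a fading trail while static pixels converge to a crisp value. The `persistence` factor is an assumed tuning parameter.

```python
# Hypothetical sketch of temporal blur: per-pixel exponential moving
# average over successive frames (an accumulation buffer).

def accumulate(history, frame, persistence=0.6):
    """Blend the new frame into the history buffer, per pixel."""
    return [persistence * h + (1.0 - persistence) * f
            for h, f in zip(history, frame)]

# A pixel that flashes on for a single frame decays gradually instead
# of vanishing, which reads as blur over time; the second pixel never
# changes and stays exactly crisp:
buf = [0.0, 0.0]
for frame in ([1.0, 0.0], [0.0, 0.0], [0.0, 0.0]):
    buf = accumulate(buf, frame)
```

The cost Tim mentions shows up directly: higher persistence gives longer, smoother trails but more lag (variance traded for smoothness).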
Quote:
they are rendering the velocity of the object (calculated per vertex and interpolated across the polygon) to a seperate render target, then capturing the current frame and blurring it proportional to the stored velocities


Yes, I think that has got to be the case too, since on the new GPUs you could easily do an "if" to see whether the speed of the object is greater than some threshold, and only blur if it is, leaving the still parts unblurred & crisp, as they should be.
"A screen buffer is worth a thousand char's" - me
I don't understand how blurring based on the velocity of the points will give a moving object a blur "tail". Could you explain it a bit?
I think DudeMeister is talking about the technique described here.
