How to properly cancel camera motion blur?

4 comments, last by Jamsers 11 months, 2 weeks ago

How would you properly cancel out motion blur caused by camera movement?

For example, when you move your camera 10 units right:

an object that wasn't moving will get blurred 10 units left,
an object that was moving 10 units left will get blurred 20 units left,
an object that was moving 10 units right (same as the camera) will have no blur applied.

Now the standard approach seems to be to apply a "counter force" to the motion vectors, equivalent to the camera's movement.

So now when you move your camera 10 units right, you apply a counter force of "10 units right" to the motion vectors, and now:

an object that wasn't moving will have no blur applied,
an object that was moving 10 units left will get blurred 10 units left,
an object that was moving 10 units right (same as the camera) will get blurred 10 units right.
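The two scenarios above can be sketched numerically. This is a toy model (names and conventions are mine; real engines store these per pixel in a screen-space velocity buffer), with positive values meaning "right":

```python
def raw_motion_vector(object_delta, camera_delta):
    """Apparent screen motion: the scene shifts opposite the camera."""
    return object_delta - camera_delta

def countered_motion_vector(object_delta, camera_delta):
    """The standard fix: add the camera's movement back in."""
    return raw_motion_vector(object_delta, camera_delta) + camera_delta

camera = 10  # camera moves 10 units right

# Without the counter force:
assert raw_motion_vector(0, camera) == -10    # static object: blurred 10 left
assert raw_motion_vector(-10, camera) == -20  # moving left: blurred 20 left
assert raw_motion_vector(10, camera) == 0     # lockstep with camera: no blur

# With the counter force, the velocity buffer collapses to pure object
# motion - which is exactly why the lockstep object now gets blurred:
assert countered_motion_vector(0, camera) == 0
assert countered_motion_vector(-10, camera) == -10
assert countered_motion_vector(10, camera) == 10  # the problem case
```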

The problem is in the third scenario: an object moving in lockstep with the camera gets blurred in the direction of the camera movement! In practice this means that in a first person shooter, for example, the gun model gets blurred when you turn your camera. In a platformer, the player character gets blurred every time you move.

Examples in motion

I've made some simple videos in Unity to illustrate the problem, but it's interesting to note that this is a problem common to all game engines. In Borderlands 3 (UE4) for example, you can disable camera motion blur while keeping object motion blur on - and when you do so, and get in a vehicle and drive, the vehicle gets blurred in the direction you're driving in.

What's the truly proper way to cancel out camera motion blur, I wonder?



BTW I'm not actually asking for help, this is more of a call for discussion and ideas.

There's already an industry-standard workaround for this issue - just mask out the relevant objects from motion blur (or render them in a separate render layer, which is useful for an FPS, for example). In an FPS you mask out the gun model; in a platformer, the player character; etc. The problem is that this leaves those objects with no motion blur at all, and it's a hack - other objects that happen to move in the same direction as the camera will still suffer from the issue.
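A minimal sketch of that masking workaround, using a hypothetical 1-D "image" and a toy blur function so both the behavior and its flaw are visible (all names here are illustrative, not any engine's API):

```python
def apply_motion_blur(colors, velocities, mask, blur):
    """blur(color, velocity) is the engine's blur; mask[i] True = excluded."""
    return [c if m else blur(c, v)
            for c, v, m in zip(colors, velocities, mask)]

# Toy "blur" that just tags the pixel so we can see what happened.
tag_blur = lambda c, v: f"{c}+blur({v})"

colors     = ["gun", "wall", "enemy"]
velocities = [10, 10, -5]          # gun and wall both move with the camera
mask       = [True, False, False]  # only the gun is masked out

out = apply_motion_blur(colors, velocities, mask, tag_blur)
# The gun gets no blur at all, but the wall, moving the exact same way,
# still gets smeared - the hack only covers the objects you enumerate:
assert out == ["gun", "wall+blur(10)", "enemy+blur(-5)"]
```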

The reason why I'm interested in this is because motion blur is kinda essential to making any computer generated image look plausible, including games. However, just straight up "cinematic" motion blur is annoying, causes headaches, and affects gameplay. IMHO, games should simulate a sort of genre dependent "saccadic masking" where camera motions that are very common don't cause motion blur.


Don't hack the motion vectors at all. Even if you work out an appropriate “inverse” operation, it will still inevitably produce artifacts in other algorithms.

Just kill the blur pass's Dispatch. If it's not its own pass… add a permutation to the shader it's part of that lets you turn off just the blur portion.

Jamsers said:
because motion blur is kinda essential to making any computer generated image look plausible, including games

Many people, myself included, really dislike motion blur in games (especially FPS) because it decreases the clarity of the image, and will disable it if at all possible. The argument is that your eyes are already applying blur to the images you see on the screen by temporally integrating the photons on the retina, so it is redundant (and even harmful) to apply a secondary artificial layer of blur.

If you could update the screen at a very fast rate, there would be no appreciable difference between the images you see and reality, and therefore no need to apply blurring to make motion seem continuous. VR games which require low-latency rendering don't use motion blur. I imagine motion blur in VR would quickly cause motion sickness.
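A back-of-the-envelope toy model of that argument (entirely my own, with illustrative numbers): the eye integrates light over a short window, and at a high enough refresh rate the motion is sampled so densely within that window that each per-frame jump shrinks toward nothing, leaving no gap for artificial blur to paper over:

```python
def perceived_smear(speed_px_per_s, refresh_hz, eye_window_s=1/60):
    """How many discrete frames fall inside one eye-integration window,
    and how far (px) the object jumps between consecutive frames."""
    frames = max(1, round(refresh_hz * eye_window_s))
    step = speed_px_per_s / refresh_hz  # px the object jumps per frame
    return frames, step

# At 60 Hz the eye integrates ~1 frame per window: a 600 px/s object
# jumps 10 px per frame, so engines fake the missing smear with blur.
assert perceived_smear(600, 60) == (1, 10.0)

# At 1000 Hz the same motion is sampled ~17 times per window and each
# jump is only 0.6 px - the "blur" happens naturally on the retina.
frames, step = perceived_smear(600, 1000)
assert frames == 17 and abs(step - 0.6) < 1e-9
```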

Jamsers said:
However, just straight up "cinematic" motion blur is annoying, causes headaches, and affects gameplay

Yeah, it is annoying in games, due to incorrect implementation. In real life, saccadic masking happens, so only movement relative to your head triggers motion blur (you can literally see this by waving your hand in front of your eyes right now). Motion blur caused by your head movement or eyeball movement gets “cancelled out” by the brain: we're actually blind for nearly 40 minutes a day due to this phenomenon.

Figuring out this issue will be key in properly implementing saccadic masking in games (practically it just means remove the motion blur caused by camera movement while keeping all other motion blur intact). With it, you'll be tracking an enemy in an FPS, for example, with no blur at all other than the movement from his limbs maybe. Right now we have motion blur more akin to a film camera, so when you track a moving enemy in an FPS your entire screen just turns into a vaseline smear.
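One way to make "remove the camera-caused motion while keeping the rest" concrete - a toy 1-D pinhole formulation of my own, not engine code: re-project the same world position with the previous camera to isolate the camera-induced screen velocity, then blur only with the residual. Note that it honestly reproduces the lockstep dilemma from the top of the thread, which is why this is still an open question:

```python
def project(world_x, world_z, cam_x, f=1.0):
    """Screen x of a point, for a camera at cam_x looking down +z."""
    return f * (world_x - cam_x) / world_z

def object_only_velocity(prev_pos, cur_pos, prev_cam, cur_cam):
    x0, z0 = prev_pos
    x1, z1 = cur_pos
    v_total  = project(x1, z1, cur_cam) - project(x0, z0, prev_cam)
    # Camera-induced part: freeze the point, move only the camera.
    v_camera = project(x1, z1, cur_cam) - project(x1, z1, prev_cam)
    return v_total - v_camera  # residual = the object's own motion

# Camera pans right by one unit; a static object picks up no blur:
assert object_only_velocity((0, 2), (0, 2), prev_cam=0, cur_cam=1) == 0.0
# But an object moving in lockstep with the camera still picks up blur,
# because subtracting the camera term is exactly the "counter force"
# from the original post - the lockstep problem survives:
assert object_only_velocity((0, 2), (1, 2), prev_cam=0, cur_cam=1) == 0.5
```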

Aressera said:
VR games which require low-latency rendering don't use motion blur.

There are many things you can't do in VR right now due to the nature of VR displays currently being one massive compromise. In an ideal (nonexistent) VR display, you should be able to focus on an object with your own eyes: currently you can't, because it's just a flat screen in front of your eyeballs. At the very least, the screen should track your eyeballs to see what you're focusing on: currently, that's probably not possible within the decade. So the proper compromise for now is to remove all semblance of focus and just render everything as flat as possible.

Until we get displays that somehow allow your eyes to physically focus on an object “within” the display (I have no idea how that'll even be possible), 1000 Hz+ refresh rates, and 700 ppi resolutions (that's 14000x7875 for a roughly 23" 16:9 monitor!), you're going to need depth of field, motion blur, and antialiasing, respectively. Pixar figured this out in the early 90s while developing Toy Story - the lack of these “artifacts” implies an infinitesimally small sensor capturing information within an infinitesimally small amount of time (which, to be fair, is actually what cameras in CGI and games are). Actual cameras (including our eyes) have finite size and capture information over a finite amount of time, hence the need to “reintroduce” these artifacts in our renders.
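Checking the arithmetic in that parenthetical (my calculation): at 700 ppi, a 14000x7875 panel works out to 20" x 11.25", a 16:9 screen with a diagonal just under 23 inches:

```python
from math import hypot

ppi = 700
w_px, h_px = 14000, 7875
w_in, h_in = w_px / ppi, h_px / ppi   # physical panel size in inches
assert (w_in, h_in) == (20.0, 11.25)
assert abs(w_in / h_in - 16 / 9) < 1e-9  # standard 16:9 aspect ratio

diagonal = hypot(w_in, h_in)  # ~22.95 inches
assert 22.9 < diagonal < 23.0
```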


This topic is closed to new replies.
