Something that bothers me...

Started by TheChubu
7 comments, last by Stroppy Katamari 11 years, 4 months ago
I have been thinking lately about the kind of camera effects that are pushed into games to get more "realism" and "immersion" (watching Crysis 3 and Battlefield 3 videos).

On one side we get the "realistic" stuff: head bobbing, eye adaptation, depth of field (that one depends on the artistic direction), etc.

On the other side we get the "camera" stuff: lens flares, bloom, depth of field (again, depends on the artistic direction), dirt on the screen, etc.

Don't these effects negate each other?

In the first case, these effects "communicate" to us, the players, that we are in the environment, looking at it through our eyes. That's why they adapt, that's why the screen moves and blurs when we run desperately: because we're the person inside that world. It's the most immersive approach: to think of yourself not as looking at a monitor, but as looking at the environment yourself, with your own eyes.

But in the second case, it communicates otherwise: it's a camera (LENS flares), you're not in there, you're looking at it through a camera. It looks cool if you're, say, a space marine with a helmet and a dirty visor: you're inside the helmet, looking at the world through it, seeing the dirt on it and the funny things it does with light. But in other games, like BF3 or the UE4 demo, it doesn't make any sense to have "camera" effects if you're not supposed to be looking at the world through a camera.

It's like on one side you get these immersive effects that subtly tell you you're inside the world, and then these other effects that tell you you're not there, that you're looking at the world through a camera.

Does it make any sense that both kinds of things are pushed into games as "realistic" effects?

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Well, getting that dirt and blood on some glass surface in front of your eyes is better than getting it in your eyes and having to rub them for 5 minutes, after which you get permanent damage from the sand and rocks and a reddish atmosphere for the rest of the game :P

They might not make sense, but although I don't really play those games, I'd imagine that kind of effect makes you feel like you are in the game, and not just some floating camera unaffected by what happens around you.

o3o

TheChubu, I kind of agree with this statement. If we're supposed to be getting immersed in a game's world, we shouldn't be thinking about the camera, and any head bob or sway should be minimal at best, enough to seem natural but not to remind us we are in first person (this should be self-evident ;)).

Well, getting that dirt and blood on some glass surface in front of your eyes is better than getting it in your eyes and having to rub them for 5 minutes, after which you get permanent damage from the sand and rocks and a reddish atmosphere for the rest of the game :P

They might not make sense, but although I don't really play those games, I'd imagine that kind of effect makes you feel like you are in the game, and not just some floating camera unaffected by what happens around you.
The point is exactly the contradiction you just mentioned: camera effects to make you think you're not a floating camera? Madness!

The best thing, imo, would be more investment in eye/human-vision-related effects and fewer (or no) effects based on how camera lenses react to light. That is, if you want to convey that you're not looking at the world through a camera, of course (otherwise it would imply that you're going for a cinematic experience).

I'm not talking about that kind of hyper-realism, like having the character try to get the sand out of his eyes. I'm talking about simply not reminding the player that he is looking at the world through a monitor. Just remove the hooks that imply that: lens flares imply it, bokeh depth of field implies it, etc.


TheChubu, I kind of agree with this statement. If we're supposed to be getting immersed in a game's world, we shouldn't be thinking about the camera, and any head bob or sway should be minimal at best, enough to seem natural but not to remind us we are in first person (this should be self-evident ;)).
I actually find the head bobbing pretty good for immersion.

But yeah, as you said, this is the case when we're supposed to be immersed in the virtual world. More cinematic experiences (what most AAA games go for) are bound to have camera effects based on actual camera behavior.

I just don't get why you'd mix the two things or, worse, pass them both off as realistic effects. Realistic with respect to what, exactly? Why have eye adaptation alongside lens flares? Are we the character, or are we looking at him through a camera? It's contradictory.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator


I just don't get why you'd mix the two things or, worse, pass them both off as realistic effects. Realistic with respect to what, exactly? Why have eye adaptation alongside lens flares? Are we the character, or are we looking at him through a camera? It's contradictory.
Well, it's realistic as in "Hollywood realism". Lots of games go for a kind of "film" look rather than a human perspective. "Realistic" might also mean "photorealistic" or "hyperrealistic", which are artistic genres.

BF3, for example, is obviously trying to look like it's being filmed through a camera. It's a deliberate style choice, some kind of "war footage" schtick.

As for eye adaptation -- this isn't contradictory for a film-style presentation, because cameras also have adaptive (or manual) exposure controls. In fact, most tone-mapping systems refer to this parameter as "exposure", because they were initially developed for photography. Automatic adaptation might just be the "AI cameraman" turning the exposure knob on your virtual camera.
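For what it's worth, that "AI cameraman" usually boils down to something like the rough C++ sketch below (the function name, key value and adaptation rate are all made up for illustration): the exposure drifts exponentially toward a target derived from the average scene luminance, which reads the same whether you label it "eye adaptation" or "auto exposure".

#include <algorithm>
#include <cmath>

// Hypothetical per-frame auto-exposure / eye-adaptation step.
// 'avgLuminance' would come from downsampling the HDR frame to 1x1.
float AdaptExposure(float currentExposure, float avgLuminance, float dt)
{
    const float keyValue  = 0.18f;  // middle-grey target ("key" of the scene)
    const float adaptRate = 1.5f;   // higher = faster adaptation

    float targetExposure = keyValue / std::max(avgLuminance, 0.0001f);

    // Drift exponentially toward the target, so bright->dark transitions
    // take a moment, like a cameraman (or an eye) adjusting.
    return currentExposure + (targetExposure - currentExposure)
                           * (1.0f - std::exp(-adaptRate * dt));
}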

As for bloom, it's extremely important in human-style graphics as well as camera-style graphics. Try looking at a bright light source at night (e.g. a street-lamp), then hold out your thumb to cover it up - you'll notice a large 'glow' around the light disappears when you do so.
n.b. streetlamps on foggy nights also have a 2nd 'glow' that doesn't disappear when you cover them with your hand, which is atmospheric scattering.
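As a side note, a typical bloom pass is roughly the sketch below (the threshold handling and names are mine, just for illustration): keep only the pixels above some brightness, blur them, and add the blur back on top. Whether that reads as glare in your eye or glare on a lens is mostly a question of how aggressively you tune it.

#include <algorithm>

struct Color { float r, g, b; };

// Hypothetical bright-pass run before the blur step of a bloom effect:
// anything below 'threshold' contributes nothing to the glow.
Color BrightPass(const Color& c, float threshold)
{
    // Rough perceptual luminance (more on these weights later in the thread).
    float luma  = 0.3f * c.r + 0.59f * c.g + 0.11f * c.b;
    float scale = std::max(luma - threshold, 0.0f) / std::max(luma, 0.0001f);
    return { c.r * scale, c.g * scale, c.b * scale };
}
// The result would then be blurred (e.g. a separable Gaussian) and
// added back on top of the frame to produce the glow.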

As for head-bob, it should be very minimal in a human-style rendering. When running, your head does move excessively, but your brain uses your vestibular organ to "stabilize" its vision, so that you don't notice just how much your eyes are actually moving.
If you use a head-camera to film yourself running, and then watch the footage later (without the simultaneous vestibular hints), it's very hard to watch and very disorienting. A camcorder-style presentation should make greater use of head-bob... However, a Hollywood-style presentation should minimize it, because that's what (non-camcorder) films do.
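To make the trade-off concrete, a head-bob term often ends up as simple as the sketch below (the numbers and names are invented for illustration) - the "human" vs "camcorder" difference is basically just the amplitude you feed it.

#include <cmath>

// Hypothetical vertical head-bob offset (in metres) as a function of time.
// 'amplitude' might be tiny (~0.005) for a human-style presentation and an
// order of magnitude larger for a shaky camcorder-style one.
float HeadBobOffset(float timeSeconds, float strideFrequencyHz, float amplitude)
{
    const float twoPi = 6.28318530f;
    // Two bobs per stride cycle - one per footfall.
    return amplitude * std::sin(timeSeconds * strideFrequencyHz * 2.0f * twoPi);
}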


That said... yes, it's a weird situation.
I remember being really impressed with one crappy FPS in the '90s, because they only showed their lens flare effect when you were using your scope (and thus looking at the scene through a lens) :D

As for more human-style presentation, one thing I want to see is internal 4D HDR rendering instead of RGB rendering. The eye has 4 kinds of light receptors, which are roughly tuned to red, green, blue and teal, so the eye actually sees in 4D colour. However, the optic nerve cuts this information down to just 3 dimensions, before it's processed by your brain, which is why we're ok with just rendering with RGB.
However, the process by which the 4D signal is cut down to 3D for processing differs depending on how much light is around. If there's about 1% of a candle's worth of light, then RGB are thrown out, and only the Teal sensor data is sent to the brain. If there's more than 3 candles' worth of light, then Teal is thrown out, and only RGB are sent to the brain. However, in the range between those two extremes (around the 1-candle level), a weird "low-light" vision mode kicks in where the RGB and Teal data are combined in a special way, which always gives low-light scenes a very different appearance when you're actually there compared to when you see a photograph of them.
If we actually rendered in 4D and then used a tone-mapper that simulated this 4D->3D process that our eyes perform, then low-light renderings would appear much more realistic than what we currently achieve.
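A very hand-wavy sketch of what that tone-mapper could look like (the rod tint, thresholds and names here are invented for illustration, not measured data): render a fourth "rod" channel alongside RGB, then blend between a photopic and a scotopic response based on how bright the scene is.

#include <algorithm>

struct HDRSample { float r, g, b, rod; };  // hypothetical 4-channel render target
struct Color     { float r, g, b; };

// Hypothetical 4D -> 3D tone-map step following the idea above.
// Thresholds are in arbitrary "candle" units, purely for illustration.
Color ToneMap4D(const HDRSample& s, float sceneLuminance)
{
    const float scotopicMax = 0.01f;  // below this: rods only
    const float photopicMin = 3.0f;   // above this: cones only

    // 0 = fully scotopic, 1 = fully photopic, in between = mesopic blend.
    float t = std::min(1.0f, std::max(0.0f,
        (sceneLuminance - scotopicMax) / (photopicMin - scotopicMax)));

    // The rod signal is colourless; shown here as a blue-grey tint (invented).
    Color scotopic = { s.rod * 0.6f, s.rod * 0.8f, s.rod * 1.0f };
    Color photopic = { s.r, s.g, s.b };

    return { photopic.r * t + scotopic.r * (1.0f - t),
             photopic.g * t + scotopic.g * (1.0f - t),
             photopic.b * t + scotopic.b * (1.0f - t) };
}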
As for eye adaptation -- this isn't contradictory for a film-style presentation, because cameras also have adaptive (or manual) exposure controls.
Yep, the thing is that, from what I've seen, the selling point of that is that it's something human eyes do. From the marketing side, it isn't presented as "film-like" but as "human-like".

As for bloom, it's extremely important in human-style graphics as well as camera-style graphics.
You're right on that. I should have specified overused bloom instead of just bloom. Great info about atmospheric scattering, though! I didn't know about it before (or rather, I did know about it but hadn't realized it :P).

As for head-bob, it should be very minimal in a human-style rendering. When running, your head does move excessively, but your brain uses your vestibular organ to "stabilize" its vision, so that you don't notice just how much your eyes are actually moving.
It's probably a very tricky thing to get right, I guess. If it moves too little, it looks like a flying camera; if it moves too much, it looks weird; if it moves badly, it looks robotic; etc.


That said... yes, it's a weird situation.
I remember being really impressed with one crappy FPS in the '90s, because they only showed their lens flare effect when you were using your scope (and thus looking at the scene through a lens) :D
It really adds to the atmosphere of the game when used in the proper situations.

As for more human-style presentation, one thing I want to see is internal 4D HDR rendering instead of RGB rendering. The eye has 4 kinds of light receptors, which are roughly tuned to red, green, blue and teal, so the eye actually sees in 4D colour. However, the optic nerve cuts this information down to just 3 dimensions, before it's processed by your brain, which is why we're ok with just rendering with RGB.
However, the process by which the 4D signal is cut down to 3D for processing differs depending on how much light is around. If there's about 1% of a candle's worth of light, then RGB are thrown out, and only the Teal sensor data is sent to the brain. If there's more than 3 candles' worth of light, then Teal is thrown out, and only RGB are sent to the brain. However, in the range between those two extremes (around the 1-candle level), a weird "low-light" vision mode kicks in where the RGB and Teal data are combined in a special way, which always gives low-light scenes a very different appearance when you're actually there compared to when you see a photograph of them.
If we actually rendered in 4D and then used a tone-mapper that simulated this 4D->3D process that our eyes perform, then low-light renderings would appear much more realistic than what we currently achieve.
Hm, I've read about the "blueish" low-light vision, but I read that it wasn't because we have 4 kinds of light receptors, but because the blue one works better in low-light environments.

There are humans with 4 kinds of light receptors, but that's a genetic anomaly that occurs more frequently in women; they have one light receptor type duplicated.

I'll try to find the article on Wikipedia...

Found it, this one: http://en.wikipedia....ki/Color_vision Trichromatic vision is the term, and that's what we humans have. There's a link there to tetrachromatic vision too.

I guess then you don't need 4 colors per pixel, just make the blue one have a bigger range.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

We have 3 kinds of "colour receptors" (cones), excluding mutations, and 1 kind of "night vision receptor" (rods).
In daylight, the rods aren't functional, so we have normal colour (photopic) vision, and the optic nerve carries 3D colour to the brain. In this mode you're actually most sensitive to green colours -- when shown equally reflective red/green/blue surfaces, the green ones will appear to be the brightest.
e.g. when calculating the brightness of an RGB colour, weighted formulas like 0.3*R + 0.59*G + 0.11*B are often used to reproduce the sensitivity of our cones to each wavelength. Many games use such formulas in their tonemappers, but these are only valid for daytime lighting conditions. They're completely wrong for low light.

In near-total darkness, the cones aren't functional, so we have grainy, colourless (scotopic) vision. The optic nerve basically carries 1D colour to the brain.

In low light, the rods begin to contribute information at the same time as the cones, which is called mesopic vision. There are 4 kinds of sensors picking up light in this mode (3 cone types + 1 rod type), but the optic nerve still only carries 3D colour information, so we still have trichromatic vision. However, because the wavelength that the rods absorb best is a "blueish" colour, this causes what's called the "Purkinje shift": your eyes become more sensitive to blue wavelengths, making them seem brighter.
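In tonemapping terms, the Purkinje shift just means your luminance weights should drift toward blue as the light level drops. A minimal sketch (only the daylight weights below are the standard values quoted above; the low-light ones are a guess for illustration):

// Standard photopic (daylight) luminance weights, as quoted above.
// Only valid when the cones are doing the work.
float PhotopicLuma(float r, float g, float b)
{
    return 0.30f * r + 0.59f * g + 0.11f * b;
}

// Hypothetical low-light (rod-weighted) response, skewed toward blue so
// blueish surfaces read as brighter - a crude Purkinje shift. These
// weights are invented for illustration, not measured values.
float ScotopicLuma(float r, float g, float b)
{
    return 0.05f * r + 0.35f * g + 0.60f * b;
}
// A mesopic tonemapper would blend between the two based on light level.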
On one side we get the "realistic" stuff: head bobbing, eye adaptation, depth of field (that one depends on the artistic direction), etc.

On the other side we get the "camera" stuff: lens flares, bloom, depth of field (again, depends on the artistic direction), dirt on the screen, etc.

It's a camera (LENS flares), you're not in there, you're looking at it through a camera. It looks cool if you're, say, a space marine with a helmet and a dirty visor: you're inside the helmet, looking at the world through it, seeing the dirt on it and the funny things it does with light.

In Crysis, the protagonist is not seeing the world around him through his own eyes, so I don't personally see "camera" stuff detracting from that game.

In the first case, these effects "communicate" to us, the players, that we are in the environment, looking at it through our eyes. That's why they adapt, that's why the screen moves and blurs when we run desperately: because we're the person inside that world. It's the most immersive approach: to think of yourself not as looking at a monitor, but as looking at the environment yourself, with your own eyes.
Simulated motion blur isn't the same as our eyes actually focusing on something. It may make canned footage of the game look somewhat more realistic, but makes things feel less realistic when actually playing. Artificial, exaggerated head bob is another thing that works really badly; though our heads may move as we move, our built-in vision processing lets us ignore it, whereas we can't ignore the artificial bobbing.

Until we can move to head-mounted displays with low-lag 6DOF head tracking, I think the best things we can do for visual realism are pushing 120FPS and higher resolutions.

