Yep, the thing is that, from what I've seen, the selling point is that it's something human eyes do. On the marketing side, it isn't presented as "film-like" but as "human-like".
As for eye adaptation -- this isn't contradictory for a film-style presentation, because cameras also have adaptive (or manual) exposure controls.
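Just to make the "adaptive exposure" idea concrete, here's a toy sketch of how a game might drive it each frame (names and constants are mine, not from any particular engine): exposure drifts exponentially toward whatever value would map the scene's average luminance to mid-grey, which is basically what both a camera's auto-exposure and the eye's light adaptation do.

```python
import math

def adapt_exposure(current_ev, scene_luminance, dt,
                   target_luminance=0.18, speed=1.5):
    """Smoothly drive the exposure value (in stops) toward the value that
    maps the scene's average luminance to a mid-grey target.
    `speed` controls how fast adaptation happens (higher = faster)."""
    # The EV that would map scene_luminance onto target_luminance.
    target_ev = math.log2(scene_luminance / target_luminance)
    # Exponential approach: framerate-independent smoothing.
    blend = 1.0 - math.exp(-speed * dt)
    return current_ev + (target_ev - current_ev) * blend
```

Walking into a bright area then makes the image blow out for a moment before the EV catches up, which is exactly the effect being discussed.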
You're right on that. I should have specified overused bloom instead of just bloom. Great data about atmospheric scattering, though! I didn't know about it before (or rather, I did know about it but hadn't made the connection).
As for bloom, it's extremely important in human-style graphics as well as camera-style graphics.
It's probably a very tricky thing to get right, I guess. If it moves too little, it looks like a flying camera; if it moves too much, it looks weird; if it moves badly, it looks robotic; and so on.
As for head-bob, it should be very minimal in a human-style rendering. When running, your head actually moves quite a lot, but your brain uses your vestibular organ to "stabilize" its vision, so you don't notice just how much your eyes are actually moving.
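One cheap way to fake that stabilization (just a hypothetical sketch, all the parameter values are made up): let the camera *position* bob with the stride, but keep the view locked onto a fixed focus point each frame, so the look direction counter-rotates against the bob the way the vestibulo-ocular reflex does.

```python
import math

def bobbed_camera(t, eye_height=1.7, bob_amp=0.05, bob_freq=3.0,
                  target=(0.0, 1.7, -10.0)):
    """Camera position bobs vertically with the run cycle, but the
    forward vector is re-aimed at a fixed target every frame, so the
    gaze stays stable while the head translates (a crude VOR)."""
    # Vertical bob from the stride cycle.
    y = eye_height + bob_amp * math.sin(2.0 * math.pi * bob_freq * t)
    position = (0.0, y, 0.0)
    # Re-aim at the target: the translation survives, but the view
    # direction counter-rotates to compensate for it.
    forward = tuple(tc - pc for tc, pc in zip(target, position))
    length = math.sqrt(sum(c * c for c in forward))
    forward = tuple(c / length for c in forward)
    return position, forward
```

The bob is still there in the parallax of near geometry, but distant objects barely move in screen space, which reads a lot less "bouncy" than bobbing the whole view matrix.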
It really adds to the atmosphere of the game when used in the proper situations.
That said... yes, it's a weird situation.
I remember being really impressed with one crappy FPS in the 90's, because they only showed their lens flare effect when you were using your scope (and thus looking at the scene through a lens)
.
Hm, I've read about the "blueish" low-light vision, but what I read said it wasn't because we have 4 kinds of light receptors, but because the blue one works better in low-light environments.
As for more human-style presentation, one thing I want to see is internal 4D HDR rendering instead of RGB rendering. The eye has 4 kinds of light receptors, which are roughly tuned to red, green, blue and teal, so the eye actually sees in 4D colour. However, the optic nerve cuts this information down to just 3 dimensions, before it's processed by your brain, which is why we're ok with just rendering with RGB.
However, the process by which the 4D signal is cut down to 3D for processing differs depending on how much light is around. If there's 1% of a candle's worth of light, then RGB are thrown out, and only the Teal sensor data is sent to the brain. If there's more than 3 candles' worth of light, then Teal is thrown out, and only RGB are sent to the brain. However, in the range between those 2 extremes (around the 1-candle level), a weird "low-light" vision mode kicks in where the RGB and Teal data are combined in a special way, which is why low-light scenes always look so different when you're actually there compared to when you see a photograph of them.
If we actually rendered in 4D and then used a tone-mapper that simulated this 4D->3D process that our eyes perform, then low-light renderings would appear much more realistic than what we currently achieve.
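A minimal sketch of what that tone-mapper stage could look like, per pixel (pure toy code: the luminance thresholds, the log-space ramp, and the blue-grey rod tint are all made-up illustration values, not measured data). It takes the cone (RGB) response plus a fourth rod/"Teal" response and collapses them to 3 channels depending on scene luminance:

```python
import math

def mesopic_blend(rgb, scotopic, luminance,
                  photopic_start=3.0, scotopic_end=0.01):
    """Collapse a 4-channel (RGB + rod) response to 3 channels.
    Above photopic_start: cones only (RGB passes through).
    Below scotopic_end: rod signal only, tinted blue-grey.
    In between (mesopic range): a log-luminance blend of the two."""
    if luminance >= photopic_start:
        w = 0.0  # pure photopic: use RGB as-is
    elif luminance <= scotopic_end:
        w = 1.0  # pure scotopic: rod signal only
    else:
        # Linear ramp in log-luminance across the mesopic range.
        t = (math.log(luminance) - math.log(scotopic_end)) / \
            (math.log(photopic_start) - math.log(scotopic_end))
        w = 1.0 - t
    # The rod signal carries no colour of its own; tint it toward
    # blue-grey to mimic the Purkinje shift (tint values are arbitrary).
    rod_colour = (0.25 * scotopic, 0.35 * scotopic, 0.55 * scotopic)
    return tuple((1 - w) * c + w * r for c, r in zip(rgb, rod_colour))
```

In a real renderer you'd evaluate the rod channel from a fourth spectral response curve during shading, then run this blend as the first stage of tone-mapping, before the usual filmic curve.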
There are humans with 4 light receptors, but that's a genetic anomaly, and it occurs more frequently in women. They have one light receptor duplicated.
I'll try to find the article in Wikipedia...
Found it, this one: http://en.wikipedia....ki/Color_vision "Trichromatic vision" is the term, and that's what we humans have. There's a link there for tetrachromatic vision too.
I guess then you don't need 4 colours per pixel; you could just give the blue channel a bigger range.