
Bacterius

Posted 24 May 2013 - 07:57 PM

I remember that paper as well; it's certainly all very nice in theory, but the results it produces aren't really "true to life" compared to what we actually see. Like I said, I highly suspect our eyes react very similarly to a CCD camera, and in fact our optic nerves getting overloaded has already been suggested as the mechanism behind a different phenomenon: the well-known "blue shift" we perceive when the light level transitions between rod and cone vision. Too bad we can't intercept and interpret all the signals going from the eye to the brain to get an empirical sample of what's going on; but by the time that's possible it would probably be easier to just get a screen with a huge contrast ratio of 50,000:1 or so and not have to do HDR in software at all.


Sure, full-blown diffraction effects are hard to see in everyday life, but have you never looked at a bright light source surrounded by darkness? Say a street light at night? Our eyes perceive light in much the same way as any camera: they collect photons on the retina and transmit intensity and color information across the optic nerve to the brain. There are some extra biological effects, for instance cones and rods shutting down when they are overloaded (like the discoloured or black halo you get when you look at a bright light for too long), and thermal noise that many people can perceive, especially in low light (not much different from sensor noise). The eye's design is a bit more complex than the average camera's, but overall it can be modelled pretty accurately by existing methods. We just need to apply those methods properly and not go over the top for cinematic effect.

As for the "halo" you see around lights, it is caused partly by scattering and partly by diffraction at the eye's lens (the "rainbow" effect comes from different wavelengths being diffracted by different amounts, and the overall result is those multicolor "streaks" of light). What happens is that light waves incident on the lens from the light source are diffracted at the aperture and "bleed" onto nearby locations on the sensor. Here's a mockup of the base diffraction pattern for a human eye that I made with my lens diffraction tool:

[Image: pupil.png — simulated base diffraction pattern for a human pupil]
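
In case anyone wants to play with this, here's a minimal sketch of how such a base pattern can be computed (this is just the idea, not the code of my tool): under the Fraunhofer far-field approximation, the diffraction pattern is the squared magnitude of the Fourier transform of the aperture. The plain circular pupil below is an illustrative assumption and only gives an Airy-like pattern; the streaks in the image above come from finer structure in the aperture.

```python
import numpy as np

# Far-field (Fraunhofer) diffraction: pattern ~ |FFT(aperture)|^2.
# The circular pupil mask and its size are illustrative assumptions.
N = 512                                   # resolution of the aperture grid
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
aperture = (np.hypot(x, y) < N * 0.15).astype(float)   # simple circular pupil

field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
psf = np.abs(field) ** 2
psf /= psf.sum()                          # normalise so total energy is preserved
```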


Of course it would need to be colorized, scaled and convolved with the image, and I don't know about you, but it's pretty damn close to what I see when I look at a small white light (though obviously it depends on lots of factors). I should try convolving it with an HDR image and see what the result looks like.
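
If I get around to it, that colorize/scale/convolve step would look roughly like this. This is only a sketch reusing the psf array from the snippet above; the per-channel wavelengths and the simple wavelength-scaling rule are my own assumptions, not calibrated values.

```python
import numpy as np
from scipy.ndimage import zoom

def scale_kernel(kernel, factor):
    # The diffraction pattern spreads out more at longer wavelengths,
    # so rescale it and crop/pad back to the original size, centred.
    scaled = zoom(kernel, factor, order=1)
    n, m = kernel.shape[0], scaled.shape[0]
    out = np.zeros_like(kernel)
    if m >= n:
        o = (m - n) // 2
        out = scaled[o:o + n, o:o + n]
    else:
        o = (n - m) // 2
        out[o:o + m, o:o + m] = scaled
    return out / out.sum()

def fft_convolve(channel, kernel):
    # Convolve one image channel with the kernel via the FFT (wrap-around edges).
    # Assumes the kernel is no larger than the image.
    kh, kw = kernel.shape
    padded = np.zeros_like(channel)
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # centre at (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(channel) * np.fft.fft2(padded)))

# Rough wavelengths (nm) for the R, G, B channels -- assumed values.
wavelengths = [610.0, 540.0, 465.0]
reference = 540.0

def apply_glare(hdr, psf):
    # hdr: float image of shape (H, W, 3) in linear radiance.
    out = np.empty_like(hdr)
    for c, wl in enumerate(wavelengths):
        k = scale_kernel(psf, wl / reference)   # longer wavelengths spread more
        out[..., c] = fft_convolve(hdr[..., c], k)
    return out
```

The result would of course still need to be tonemapped before it can be shown on a regular display.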

That said, I agree about just developing a high dynamic range display; that would be so much easier than messing around with tonemapping algorithms. But by the same reasoning we may as well just develop a holodeck and let nature do the graphics for us :)
