I'm trying to understand what happens to a lens flare effect during fast motion. Unfortunately I don't have an opportunity to test this myself, so I thought someone in here probably knows. Will the lens flare be affected by the motion blur? I'm leaning towards yes, since motion blur happens to all the light the camera sensor receives, right? So even the light that isn't coming in directly but bouncing around inside the lens should be blurred.
Lens Flare in motion (camera question)
Yes, and no. It will in the sense that the light sources visible to the camera will undergo motion blur, and hence the diffraction pattern will be smeared along the displacement axis. However, the diffraction pattern itself will generally not change in shape or form (besides the smearing) unless a light source gets occluded during the displacement.
So in the process of recreating this process realistically I should do lens flare after motion blur ? How about DoF ?
Don't know, it depends on how your graphics pipeline is set up. In theory motion blur pretty much consists of averaging the render over some time duration, but in practice there are lots of hacks to make stuff run faster. I doubt most people would notice the lens flare getting blurred anyway; the displacement is generally not fast enough (remember that objects far away, where the diffraction effect is strongest, do not actually move quickly on screen, whereas closer/larger objects do not have such distinctive lens flare features).
As for DoF, that's a tricky one, because it depends on the current focal length, which can change during fast motion. But if the DoF is constant, it doesn't really matter either way, because objects that are moving (and hence susceptible to a change in appearance with depth) are going to be blurred anyway, far more than what depth of field would normally do.
So, no, do stuff before motion blur if you can, but if it doesn't look good or looks unnatural you can always do it after.
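To see why the order matters at all, here's a toy 1-D sketch (all names and the 3-pixel "scene" are hypothetical, just for illustration): the blur is a plain average of sub-frame renders, and the "flare" is a deliberately nonlinear effect driven by the brightest pixel. Applied per sub-frame (before blur), the flare reacts to the sharp moving peak; applied once to the averaged image, it only sees the smeared-out result.

```python
def flare(frame, gain=0.2):
    # toy "lens flare": add a fraction of the brightest pixel everywhere
    peak = max(frame)
    return [p + gain * peak for p in frame]

def render_at(t):
    # toy 1-D scene: a bright spot sliding across three pixels over t in [0, 1)
    frame = [0.0, 0.0, 0.0]
    frame[min(2, int(t * 3))] = 1.0
    return frame

n = 6
times = [(i + 0.5) / n for i in range(n)]  # sample times within the shutter

# flare before blur: flare each sub-frame, then average
before = [sum(px) / n for px in zip(*(flare(render_at(t)) for t in times))]

# flare after blur: average first, then flare once
after = flare([sum(px) / n for px in zip(*(render_at(t) for t in times))])
```

With these numbers, `before` comes out brighter than `after`, because each sub-frame still contains the full-intensity peak, while the averaged image only has the smeared 1/3-intensity streak to feed the flare.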
Unless this isn't for real time but you're going for true realism..
In the non-real-time world one would normally use oversampling (casting multiple rays per pixel) to achieve several effects:
* anti-aliasing (pick multiple points in the pixel as ray targets)
* depth of field (model the eye as a little disk and pick multiple points in the disk as ray origins)
* motion blur (pick multiple times within the range covered by this frame)
* soft shadows (pick multiple points in the non-point light source)
* soft reflections (pick multiple angles in which light bounces)
A great trick is that you can achieve all of these effects for approximately the cost of one of them! Say you launch 1000 rays per pixel. For each one of them you pick a random point in the pixel, a random point in the disk that models the eye, a random time, etc. and use the average as your pixel color.
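The combined-sampling trick above can be sketched in a few lines. This is a minimal illustration, not a real renderer: `trace` is a stand-in whose result varies with time so motion blur has something to average, and the lens radius and shutter values are arbitrary. The key point is that each ray draws one random subpixel offset (anti-aliasing), one random lens point (depth of field), and one random time (motion blur) simultaneously, so all three effects cost a single set of rays.

```python
import random

def sample_disk(radius):
    # rejection-sample a uniform point inside the lens disk
    while True:
        x = random.uniform(-radius, radius)
        y = random.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            return x, y

def trace(tx, ty, lens_x, lens_y, t):
    # stand-in for a real ray tracer: brightness depends on the
    # sample time so that averaging actually blurs something
    return 0.5 + 0.5 * t

def render_pixel(px, py, samples=1000, lens_radius=0.01, shutter=1.0):
    total = 0.0
    for _ in range(samples):
        # anti-aliasing: jitter the ray target inside the pixel
        tx = px + random.random()
        ty = py + random.random()
        # depth of field: jitter the ray origin on the lens disk
        lx, ly = sample_disk(lens_radius)
        # motion blur: pick a time within the shutter interval
        t = random.random() * shutter
        total += trace(tx, ty, lx, ly, t)
    return total / samples
```

Soft shadows and soft reflections fit the same loop: each of those just adds one more random draw per ray (a point on the area light, a perturbed bounce direction), not another full set of rays.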