Mipmaps for depth of field - controlling the blur?

Started by
2 comments, last by OrangyTang 18 years, 4 months ago
I'm trying to implement something similar to the fake depth of field described here. However, I've hit a few snags trying to get it controllable.

1. So far I'm just using the base texture and the first mipmap, and using trilinear filtering to blend between the blurred and sharp versions. This seems to work, but it limits the amount of blur I can add (otherwise the transition looks wrong). Alternatively I can use the whole mipmap chain, but then the quality starts to suffer on the really blurry levels. Has anyone tried this, and which method did they use?

2. I'm drawing my blurry sprites in a 3D projection and currently trying to blur the ones further from the camera. Unfortunately, since those sprites are smaller on screen, the automatic mipmapping interferes with my biasing, causing the blur to kick in much earlier than I want. Is there a way to prevent this? The only way I can think of is to manually adjust my bias to counteract it, but that sounds error-prone and difficult to control.

Cheers.
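For reference, a rough sketch of how point 1 might be kept controllable in fixed-function GL (this is not code from the linked article): clamp the usable mip range to levels 0 and 1 so the bias can never reach the really coarse levels, then drive the blend with a per-texture-unit LOD bias. It assumes OpenGL 1.2 for the LOD clamps and OpenGL 1.4 (or EXT_texture_lod_bias) for the bias; set_dof_blur and its blur parameter are made up for the example.

/* Minimal sketch, assuming GL 1.4 / EXT_texture_lod_bias is available.
 * On Windows the 1.4 enums usually come from <GL/glext.h>. */
#include <GL/gl.h>
#include <GL/glext.h>

/* 'blur' is a hypothetical per-sprite value in [0,1]:
 * 0 = sharp (mip level 0), 1 = fully blurred (mip level 1). */
static void set_dof_blur(GLuint texture, float blur)
{
    glBindTexture(GL_TEXTURE_2D, texture);

    /* Trilinear filtering so the two levels are blended smoothly. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Only ever blend between level 0 and level 1, so the bias cannot
     * push the sample down into the low-quality coarse levels. */
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD, 0.0f);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, 1.0f);

    /* Per-texture-unit LOD bias: 0 leaves the normal LOD, +1 forces level 1. */
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, blur);
}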
Er, anyone?
Are you using shaders or the fixed pipeline?
If you use a shader you'd be able to blend between the base texture and the blurred version yourself, with full control. It's very easy to calculate the distance in a vertex shader.
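As a rough illustration of that suggestion (not code from the thread), the blend could be done with a pair of GLSL 1.10 shaders: the vertex shader turns eye-space distance into a blend factor, and the fragment shader mixes a sharp and a pre-blurred copy of the texture. The uniforms focusDist and focusRange and the two sampler names are invented for the example, and shader compilation/linking is omitted.

/* GLSL sources shown as C string literals; bind the sharp texture to unit 0
 * and the blurred copy to unit 1, and set the uniforms accordingly. */
static const char *dof_vert =
    "varying float blend;                                              \n"
    "uniform float focusDist;  /* distance that is in perfect focus */ \n"
    "uniform float focusRange; /* distance over which blur ramps in */ \n"
    "void main() {                                                     \n"
    "    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;                 \n"
    "    blend = clamp(abs(-eyePos.z - focusDist) / focusRange, 0.0, 1.0);\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;                           \n"
    "    gl_Position = ftransform();                                   \n"
    "}                                                                 \n";

static const char *dof_frag =
    "varying float blend;                                              \n"
    "uniform sampler2D sharpTex;   /* unit 0: sharp version    */      \n"
    "uniform sampler2D blurredTex; /* unit 1: pre-blurred copy */      \n"
    "void main() {                                                     \n"
    "    vec4 sharp   = texture2D(sharpTex,   gl_TexCoord[0].st);      \n"
    "    vec4 blurred = texture2D(blurredTex, gl_TexCoord[0].st);      \n"
    "    gl_FragColor = mix(sharp, blurred, blend);                    \n"
    "}                                                                 \n";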
This is with fixed-function OpenGL. Although you could use some kind of fragment shader, functionally it's not the same - you'd be sampling the texture twice and calculating the blend manually. This method uses trilinear filtering to sample the texture once and get the blend 'for free'.
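Sticking with the fixed-function path, the "manually adjust my bias to counteract the automatic mipmapping" idea from point 2 might look roughly like this. It is only a sketch of that idea: the hardware picks an LOD of roughly log2(texels covered per pixel), so one estimate of that is subtracted from the blur level the sprite should have. The function name, the parameters, and the sprite-width-based estimate are all assumptions, not anything from the article or the thread.

#include <math.h>

/* wanted_blur: target mip level in [0,1] computed from distance to camera.
 * tex_size:    texture width in texels.
 * sprite_px:   approximate on-screen width of the sprite in pixels. */
static float dof_bias(float wanted_blur, float tex_size, float sprite_px)
{
    /* Rough guess at the LOD the hardware will already have selected
     * from the sprite's screen-space footprint. */
    float auto_lod = log2f(tex_size / sprite_px);
    if (auto_lod < 0.0f)
        auto_lod = 0.0f;

    /* Bias so that (auto LOD + bias) lands at the level we actually want;
     * this goes negative for small, distant sprites, pulling them back
     * toward the sharp level. */
    return wanted_blur - auto_lod;
}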

