Math behind anisotropic filtering?

3 comments, last by 21st Century Moose 8 years, 1 month ago
Bilinear and trilinear filtering are standardized in the API specs. The trilinear coefficient is based on the log of the horizontal and vertical derivatives inside a pixel quad.
I couldn't find what technique anisotropic filtering uses, or why it improves texture quality at steep viewing angles. The only thing I found is that the texture is sampled up to 16 times around the sample location.
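For reference, the trilinear LOD the specs define fits in a few lines. This is a sketch following the GL spec's formula (the function name `mip_lod` is mine, not from any API):

```python
import math

def mip_lod(dudx, dvdx, dudy, dvdy):
    """Trilinear LOD per the GL spec: log2 of the larger of the two
    screen-space derivative vector lengths, in texel units."""
    rho = max(math.hypot(dudx, dvdx), math.hypot(dudy, dvdy))
    return math.log2(rho)

# Surface face-on, one texel per pixel -> LOD 0 (base level).
# Minified 4x in both axes -> LOD 2 (blend around the third mip).
```

The fractional part of the LOD is what becomes the blend weight between the two nearest mip levels.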

Is there an article/explanation, and is it standardized somehow or vendor dependent? (In GL it's not core AFAIK, even though all vendors support it.)

The best way to think of it is to picture what a pixel looks like when projected into texture space. If the surface is aligned with the camera's near plane, the pixel maps to a square in texture space (it could be freely rotated, and so look like a diamond). For point sampling, the texel closest to the centre of this square is used. For bilinear, it's a weighted average of the 4 nearest texels. For mip mapping, a mip level is selected such that the square covers about 1 texel of area. Think about this square projected onto each mip level: with each step down the chain, a texel effectively doubles in size along each axis.

With anisotropic filtering, the pixel's footprint is elongated and forms a thin rectangle. There is no single mip level which fully represents all of the rectangle. Instead, the rectangle can be subdivided into smaller rectangles that are more square-like. Each of these subdivisions has its own sample centre and can be bilinear filtered individually, then averaged, giving a result which better represents all the texels the rectangle touches.
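To make the subdivision idea concrete, here's a rough sketch of how a footprint could be split into roughly square pieces. The name `aniso_plan` and the exact scheme are my own illustration, not what any particular GPU implements:

```python
import math

def aniso_plan(dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Split the elongated footprint into N roughly square pieces:
    N is the ratio of the footprint's long side to its short side
    (clamped to the max anisotropy), and each piece is sampled at
    the LOD matching its own, smaller size."""
    px = math.hypot(dudx, dvdx)   # footprint extent along screen x
    py = math.hypot(dudy, dvdy)   # footprint extent along screen y
    major = max(px, py)
    minor = max(min(px, py), 1e-8)  # guard against degenerate footprints
    n = min(max_aniso, max(1, math.ceil(major / minor)))
    lod = math.log2(major / n)    # each sub-footprint is ~square
    return n, lod
```

With an 8:1 footprint this yields 8 sub-samples at the base LOD, instead of one blurry sample from a much smaller mip.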

Hope this helps. I know pictures would explain it better; I'm sure there are some good visual explanations out there.

Is there an article/explanation, and is it standardized somehow or vendor dependent? (In GL it's not core AFAIK, even though all vendors support it.)


It's an extension in GL because it's patented. :(

In all the GPUs I've had the privilege of working with at a low level, there's always been an assortment of bits controlling different quality/speed trade-offs in the aniso filter. I think it's a feature that's best served by a vague, high-level spec with unspecified implementation details, to give GPU vendors room to optimize :)

The gist is that the vertical and horizontal derivatives of the texture coordinate are very different when viewing a surface at a shallow angle, so instead of treating the texture filter as a circle, you should treat it as an ellipse. Aniso filters implement this by taking multiple (generally 2-16) trilinear samples within that elliptic footprint and combining them.
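As an illustration of that elliptical footprint, here's a hedged sketch that places equally weighted sample points along the footprint's major axis in texture space (a common textbook approximation; real hardware varies, and the name `aniso_sample_points` is hypothetical):

```python
import math

def aniso_sample_points(u, v, dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Place N equally weighted sample points along the major axis
    of the pixel's elliptical footprint, centred on (u, v). Each
    point would then be trilinear-filtered and the results averaged."""
    px, py = (dudx, dvdx), (dudy, dvdy)
    lx, ly = math.hypot(*px), math.hypot(*py)
    axis = px if lx >= ly else py          # major-axis direction
    ratio = max(lx, ly) / max(min(lx, ly), 1e-8)
    n = min(max_aniso, max(1, round(ratio)))
    return [(u + axis[0] * ((i + 0.5) / n - 0.5),
             v + axis[1] * ((i + 0.5) / n - 0.5))
            for i in range(n)]
```

A 4:1 footprint produces 4 sample points spread symmetrically around the original coordinate, which is the "2-16 trilinear samples combined" behaviour described above.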

Is there an article/explanation, and is it standardized somehow or vendor dependent? (In GL it's not core AFAIK, even though all vendors support it.)


It's an extension in GL because it's patented. :(

Wow! Really? :(
Couldn't they just add the enum value to the spec, but then just describe it as "implementation defined" :lol:

Wow! Really? :(
Couldn't they just add the enum value to the spec, but then just describe it as "implementation defined" :lol:


They could have very easily specified that 1 was a valid value for max anisotropy, which would have let any implementation use it, patent or no patent.

What's worse is that there's a prior example in GL: occlusion queries, where they specified that GL_QUERY_COUNTER_BITS was allowed to be 0 in order to allow Intel hardware to claim OpenGL 1.5 support when it didn't have occlusion query support.

https://www.opengl.org/archives/about/arb/meeting_notes/notes/meeting_note_2003-06-10.html

How to try and get this into the core? Seems too small to do as an optional subset. Called a straw poll contingent on someone coming up with a "caps-bit like" interface that would let Intel claim to support it. Rob then suggested that we could change the spec to allow supporting counters with zero bits and calling out in the spec that query functionality should not be used in this case.


I totally fail to see any reason why a similar approach couldn't have been done with anisotropic filtering.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

