vlj

Math behind anisotropic filtering?



Bilinear and trilinear filtering are standardized in the API specs. The trilinear coefficient (the mip LOD) is based on the log of the horizontal and vertical texture-coordinate derivatives inside a pixel quad.
I couldn't find what technique is used for anisotropic filtering, or why it improves texture accuracy at steep angles. The only thing I found is that the texture is sampled up to 16 times around the sampled location.
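For reference, the usual LOD rule is something like this (a minimal sketch with illustrative names; derivatives are in texels per pixel):

```cpp
#include <algorithm>
#include <cmath>

// Sketch of standard (isotropic) mip LOD selection from the texture-coordinate
// derivatives of a pixel quad. du_dx/dv_dx: texel change across one pixel
// horizontally; du_dy/dv_dy: texel change across one pixel vertically.
float mipLod(float du_dx, float dv_dx, float du_dy, float dv_dy)
{
    float lenX = std::hypot(du_dx, dv_dx);   // footprint along screen x
    float lenY = std::hypot(du_dy, dv_dy);   // footprint along screen y
    float rho  = std::max(lenX, lenY);       // take the larger of the two
    return std::log2(std::max(rho, 1e-6f));  // fractional part drives the trilinear blend
}
```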

Is there an article/explanation, and is it standardized somehow or vendor-dependent? (In GL it's not core AFAIK, even though all vendors support it.)


Is there an article/explanation, and is it standardized somehow or vendor-dependent? (In GL it's not core AFAIK, even though all vendors support it.)


It's an extension in GL because it's patented. :(


In all the GPUs I've had the privilege of working with at a low level, there's always been an assortment of bits associated with many different tweaks for quality/speed trade-offs in the aniso filter. I think it's a feature that's best served by a vague, high-level spec with unspecified implementation details, to give GPU vendors room to optimize :)

The gist is that the horizontal and vertical derivatives of the texture coordinate are very different when viewing a surface at a shallow angle, so instead of treating the texture filter footprint as a circle, you should treat it as an ellipse. Aniso filters implement this by taking multiple (generally 2-16) trilinear samples within that elliptical footprint and combining them.
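Something along these lines, as a rough sketch of the idea (made-up names, not how any particular GPU actually does it):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec2 { float u, v; };

// One planned tap: where to take a trilinear sample, and at which LOD.
struct Tap { Vec2 uv; float lod; };

// Rough sketch: derive an elliptical footprint from the texture-coordinate
// derivatives (here in texels per pixel), then plan several trilinear taps
// spread along the major axis. Averaging the taps gives the aniso result.
std::vector<Tap> planAnisoTaps(Vec2 uv, Vec2 ddx, Vec2 ddy, float maxAniso)
{
    float lenX = std::hypot(ddx.u, ddx.v);
    float lenY = std::hypot(ddy.u, ddy.v);

    float major = std::max(lenX, lenY);
    float minor = std::max(std::min(lenX, lenY), 1e-6f);
    Vec2  axis  = (lenX >= lenY) ? ddx : ddy;   // major axis of the ellipse

    // Tap count grows with the anisotropy ratio, clamped to the limit (e.g. 16).
    float ratio = std::min(major / minor, maxAniso);
    int   n     = std::max(1, static_cast<int>(std::ceil(ratio)));

    // The LOD is chosen so each tap covers roughly the minor-axis width and stays
    // sharp, instead of blurring the whole major axis into one sample.
    float lod = std::log2(std::max(major / ratio, 1e-6f));

    std::vector<Tap> taps;
    for (int i = 0; i < n; ++i)
    {
        // Spread the taps from -0.5 to +0.5 along the major axis.
        float t = (n == 1) ? 0.0f : float(i) / float(n - 1) - 0.5f;
        taps.push_back({ { uv.u + axis.u * t, uv.v + axis.v * t }, lod });
    }
    return taps;
}

int main()
{
    // Shallow viewing angle: 1 texel/pixel across, 8 texels/pixel along the slope.
    auto taps = planAnisoTaps({ 256.0f, 256.0f }, { 1.0f, 0.0f }, { 0.0f, 8.0f }, 16.0f);
    std::printf("%zu taps at LOD %.2f\n", taps.size(), taps.front().lod);
}
```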

Is there an article/explanation, and is it standardized somehow or vendor-dependent? (In GL it's not core AFAIK, even though all vendors support it.)


It's an extension in GL because it's patented. :(

Wow! Really?  :(
Couldn't they just add the enum value to the spec, but then describe it as "implementation defined"?  :lol:

Edited by Hodgman


Wow! Really?  :(
Couldn't they just add the enum value to the spec, but then describe it as "implementation defined"?  :lol:


They could have very easily specified that 1 was a valid value for max anisotropy, which would have let any implementation use it, patent or no patent.
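For context, here's roughly how the EXT extension gets used today (the tokens are from GL_EXT_texture_filter_anisotropic; the helper function is just illustrative, and platform GL headers are assumed). Had a reported maximum of 1.0 been legal, this same code would simply be a no-op on hardware without support, with no extension check needed:

```cpp
#include <GL/gl.h>

// Tokens from GL_EXT_texture_filter_anisotropic, in case the header lacks them.
#ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
#define GL_TEXTURE_MAX_ANISOTROPY_EXT     0x84FE
#define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF
#endif

// Illustrative helper: request some amount of anisotropy, clamped to whatever
// the driver reports. If a reported maximum of 1.0 were allowed, this would
// silently degrade to plain trilinear filtering on hardware without support.
void setAnisotropy(GLuint tex, GLfloat requested)
{
    GLfloat maxAniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT,
                    requested < maxAniso ? requested : maxAniso);
}
```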
 
What's worse is that there's a prior example in GL: occlusion queries, where they specified that GL_QUERY_COUNTER_BITS was allowed to be 0 in order to allow Intel hardware to claim OpenGL 1.5 support when it didn't have occlusion query support.
 
https://www.opengl.org/archives/about/arb/meeting_notes/notes/meeting_note_2003-06-10.html
 

How to try and get this into the core? Seems too small to do as an optional subset. Called a straw poll contingent on someone coming up with a "caps-bit like" interface that would let Intel claim to support it. Rob then suggested that we could change the spec to allow supporting counters with zero bits and calling out in the spec that query functionality should not be used in this case.


I totally fail to see any reason why a similar approach couldn't have been done with anisotropic filtering.

