sir_sgt_jeffrey

ddx/ddy functions, software rasterization, and texture filtering



Hello, I've been writing a shader-based software renderer (as a learning experience, and for fun and interest). It's a C++ template-based system, so shaders are written in C++; it is not a parallel implementation (at the moment). I've implemented basic 2D texture mapping and sampler objects that support various u/v addressing modes (wrap, clamp, mirror, etc.) and sample the texture via nearest-neighbour sampling. Now I'm trying to implement min and mag texture filter modes, and eventually support texture mip-mapping.

I need a method of determining when to select the min or mag filter, and eventually which mipmap level to use. From looking at books and the internet, I figured the best way to do this is to compute the partial derivatives of the texture coordinates with respect to the screen-space x/y axes. I've been looking at the intrinsic functions ddx/ddy and how the GPU works, and I think I roughly understand it.
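For reference, the selection I'm aiming for could be sketched like this (a hypothetical sketch, not my actual renderer; the function name and parameters are made up, and it assumes the UV derivatives are already available from something like ddx/ddy):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch: pick a mip level (and min vs. mag filter) from the
// screen-space partial derivatives of the texture coordinates.
// (dudx, dvdx) and (dudy, dvdy) are the UV derivatives along x and y.
float computeLod(float dudx, float dvdx, float dudy, float dvdy,
                 float texWidth, float texHeight)
{
    // Scale the UV derivatives into texel space and measure the
    // footprint of one pixel step along each screen axis.
    float dx = std::sqrt(dudx * dudx * texWidth * texWidth +
                         dvdx * dvdx * texHeight * texHeight);
    float dy = std::sqrt(dudy * dudy * texWidth * texWidth +
                         dvdy * dvdy * texHeight * texHeight);
    float rho = std::max(dx, dy);            // worst-case texels per pixel
    return std::log2(std::max(rho, 1e-8f));  // guard against log2(0)
}
```

With this convention, lod <= 0 means the texture is being magnified (use the mag filter), and lod > 0 means minification (use the min filter, sampling mip level floor(lod), or blending adjacent levels for trilinear).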

The GPU rasterizes in 2x2 pixel blocks so that it can compute approximate partial derivatives by forward (or hybrid forward/backward) differencing. I understand this, but what I don't understand is how these functions can work with arbitrary input values of various data types (float1/2/3/4). It seems to me that in order to compute the forward difference of some texcoord value given to ddx/ddy, the function must know the interpolated texture coordinates of the neighbouring pixels before it can compute the differences. Is this correct? What I'm trying to say is, is this how these functions would logically work:

ddx( UV_coord ) = (the interpolated UV_coord at pixel x+/-1) - UV_coord
ddy( UV_coord ) = (the interpolated UV_coord at pixel y+/-1) - UV_coord

My reasoning is that rasterizing a triangle interpolates the vertex attributes across the triangle's surface for each pixel. So even without functions like ddx/ddy, taking the forward difference across two pixels in x and two pixels in y means differencing the interpolated texture coordinates of adjacent pixels, which makes sense to me as the rate of change of the texture coordinates in screen space. Is my understanding completely wrong?
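In code, the scheme I'm imagining over a 2x2 quad would look something like this (a hypothetical sketch matching the two formulas above, not my actual renderer):

```cpp
// A 2x2 quad of interpolated texture coordinates, laid out as:
//   uv[0] = (x,   y)    uv[1] = (x+1, y)
//   uv[2] = (x,   y+1)  uv[3] = (x+1, y+1)
struct Vec2 { float u, v; };

// Coarse forward differences: every pixel in the quad shares one ddx
// (horizontal neighbour difference) and one ddy (vertical neighbour
// difference).
Vec2 ddxCoarse(const Vec2 uv[4]) { return { uv[1].u - uv[0].u, uv[1].v - uv[0].v }; }
Vec2 ddyCoarse(const Vec2 uv[4]) { return { uv[2].u - uv[0].u, uv[2].v - uv[0].v }; }
```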

You have the right idea. The derivative instructions have to be able to sample the texture coordinates of neighbouring pixels. This is easy when you process 2x2 blocks of pixels in parallel (though you will have to rasterize some extra "helper" pixels around the edges of your triangles). In a serial implementation it's a bit trickier: I guess you have to pause execution of the shader when you hit a texture instruction and switch to the next pixel, then come back once you have texture coordinates for all four pixels in the block. It becomes even trickier once you allow for control flow (what if the other pixels took a different branch and never execute the texture instruction?).
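A quad-based loop for the parallel case might look something like this (a minimal sketch only; coverageTest and interpolateUV are stand-in helpers I've made up, with trivially simple implementations so the example runs):

```cpp
struct Vec2f { float u, v; };

// Stand-ins for the rasterizer's own routines (assumptions, not real APIs):
// coverage over a simple half-plane "triangle", and linear UV interpolation.
static bool  coverageTest(int x, int y)  { return x + y < 8; }
static Vec2f interpolateUV(int x, int y) { return { x * 0.1f, y * 0.1f }; }

// Walk the bounding box in 2x2 quads: interpolate all four pixels first,
// including helper pixels that fall outside the triangle, take the forward
// differences, then shade only the covered pixels. Returns the number of
// pixels shaded.
static int rasterizeQuads(int minX, int minY, int maxX, int maxY)
{
    int shaded = 0;
    for (int y = minY; y <= maxY; y += 2) {
        for (int x = minX; x <= maxX; x += 2) {
            Vec2f uv[4];
            bool inside[4];
            bool any = false;
            for (int i = 0; i < 4; ++i) {
                int px = x + (i & 1), py = y + (i >> 1);
                inside[i] = coverageTest(px, py);
                uv[i] = interpolateUV(px, py);   // helper pixels too
                any = any || inside[i];
            }
            if (!any) continue;                  // whole quad outside

            float dudx = uv[1].u - uv[0].u;      // ddx over the quad
            float dudy = uv[2].u - uv[0].u;      // ddy over the quad
            (void)dudx; (void)dudy;              // would drive mip selection

            for (int i = 0; i < 4; ++i)
                if (inside[i]) ++shaded;         // "shade" covered pixels only
        }
    }
    return shaded;
}
```

The helper pixels are the key: their attributes are interpolated (and their shader instances may run) purely so the covered pixels in the quad have neighbours to difference against, and their colour outputs are discarded.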
