I am writing a software renderer. I don't have SIMD or multithreading yet. When rasterizing a triangle I loop over all of the pixels in its bounding box (in screen coordinates), and for each pixel that passes the edge function I interpolate the vertex attributes and shade it. I tried implementing mipmapping, but found that to compute the texture-coordinate differentials I need the interpolated values for the right and bottom neighboring pixels, whose attributes have not been interpolated at that point.
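For context, this is the standard way the mip level is derived once the texel-space derivatives are available (a minimal sketch; the function name and the convention that derivatives are already in texel units are my own assumptions, not from your renderer):

```cpp
#include <algorithm>
#include <cmath>

// GL-style LOD selection. The derivatives must be in *texel* units,
// i.e. dudx = textureWidth * d(u_normalized)/d(screen x), etc.
float mipLevel(float dudx, float dvdx, float dudy, float dvdy) {
    // Squared length of the larger of the two screen-axis footprints.
    float rho2 = std::max(dudx * dudx + dvdx * dvdx,
                          dudy * dudy + dvdy * dvdy);
    // 0.5 * log2(rho^2) == log2(rho); clamp below by 0 (full-res level),
    // and guard against log2(0) for a constant attribute.
    return std::max(0.0f, 0.5f * std::log2(std::max(rho2, 1e-20f)));
}
```

A footprint of one texel per pixel gives level 0, two texels per pixel gives level 1, and so on; in a full renderer you would also clamp to the top mip level.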
I thought of a couple of solutions:
1) Do another loop before the main one that just calculates all of the interpolated texture coordinates, so they are available in the main loop. (This is obviously slow.)
2) Choose the right mip level of the texture by calculating the maximum differential from the 3 vertices of the rasterized triangle. Would this work?
Intuitively it seems to me that it would: consider two vertices with u1 = 0, u2 = 1 and screen coordinates x1 = 100, x2 = 600. Then it makes sense to pick a larger texture. On the other hand, if u1 = 0, u2 = 1 and x1 = 100, x2 = 101, then picking the smallest texture sounds reasonable.
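One relevant fact about option 2: for purely affine (screen-space) interpolation the texture-coordinate differentials are constant over the triangle, so rather than estimating a maximum from edge pairs you can solve for the exact gradients once per triangle from its three vertices. A sketch under that assumption (struct and function names are mine; note that with perspective-correct interpolation the derivatives vary across the triangle, so a single per-triangle level becomes an approximation):

```cpp
#include <algorithm>
#include <cmath>

struct Vert {
    float x, y; // screen-space position
    float u, v; // texture coordinates in texel units
};

// For affine interpolation an attribute is a plane over (x, y), so its
// gradient is constant per triangle: solve the 2x2 system formed by the
// two edges (a->b, a->c) via Cramer's rule.
float triangleMipLevel(const Vert& a, const Vert& b, const Vert& c) {
    float det = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    if (std::fabs(det) < 1e-12f) return 0.0f; // degenerate triangle
    float inv = 1.0f / det;
    float dudx = ((b.u - a.u) * (c.y - a.y) - (c.u - a.u) * (b.y - a.y)) * inv;
    float dvdx = ((b.v - a.v) * (c.y - a.y) - (c.v - a.v) * (b.y - a.y)) * inv;
    float dudy = ((b.x - a.x) * (c.u - a.u) - (c.x - a.x) * (b.u - a.u)) * inv;
    float dvdy = ((b.x - a.x) * (c.v - a.v) - (c.x - a.x) * (b.v - a.v)) * inv;
    // Same LOD formula as per-pixel selection, just evaluated once.
    float rho2 = std::max(dudx * dudx + dvdx * dvdx,
                          dudy * dudy + dvdy * dvdy);
    return std::max(0.0f, 0.5f * std::log2(std::max(rho2, 1e-20f)));
}
```

This keeps your single-pixel loop intact and costs a handful of operations per triangle; the trade-off is that the whole triangle samples one mip level, which can look wrong for large triangles seen at a steep angle.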
Would these solutions work, and/or is there a better one?