Practical question about implementing custom mipmap downsampling

3 comments, last by Adam_42 14 years, 4 months ago
Hi there,

For an algorithm I need to implement custom downsampling of a mipmap texture on the GPU, using DirectX 9.0c and the effect file framework. I get a source texture and have to downsample it successively (i.e. level by level) using a pixel shader that does some more or less fancy filtering. Hence, for each iteration, the current level (i) serves as render target, and the previous level (i-1) as source texture.

And exactly here is my problem: I can set the surface of level (i) as render target using device->SetRenderTarget(). However, how do I bind level (i-1) as source texture to the effect?! I can only use ID3DXBaseEffect::SetTexture(handle, tex), but there is no such method for binding individual surfaces of a texture :(.

After the downsampling, I need the mipmap texture for further processing: I access the texture by selecting an appropriate level of detail and let the graphics card do the interpolation between the mipmap levels.

Currently, I only see two alternatives, both of which incur a lot of overhead for copying surface data:

(1) Instead of creating a single mipmap texture, create each level as a separate texture and do the downsampling. Afterwards, however, I need to create another texture with mipmaps and copy each surface to it using StretchRect() or something similar.

(2) After each downsampling step, copy the result of level (i) to a separate temporary texture, which then serves as input/source for the next iteration. However, this isn't any better, since I still have to copy every level (except the last, maybe).

I do believe that downsampling a mipmap by hand is not that exotic a problem, so I really hope that someone has an elegant solution to it.

Cheers,
Data
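In plain D3D9 terms, alternative (2) might look roughly like the following sketch. The device/effect objects, the full-screen-quad draw call, and the `g_srcTex` parameter name are hypothetical placeholders, and error handling is omitted:

```cpp
// Sketch of option (2): ping-pong through a same-sized temporary texture,
// because ID3DXBaseEffect::SetTexture() can only bind whole textures, not
// individual surface levels. Assumes mipTex was created with
// D3DUSAGE_RENDERTARGET in D3DPOOL_DEFAULT; g_device, g_effect and
// DrawFullscreenQuad() are hypothetical and set up elsewhere.
void DownsampleMipChain(IDirect3DTexture9* mipTex, UINT levels)
{
    for (UINT i = 1; i < levels; ++i)
    {
        D3DSURFACE_DESC desc;
        mipTex->GetLevelDesc(i - 1, &desc);

        // Temporary single-level texture the size of level (i-1).
        IDirect3DTexture9* temp = NULL;
        g_device->CreateTexture(desc.Width, desc.Height, 1,
                                D3DUSAGE_RENDERTARGET, desc.Format,
                                D3DPOOL_DEFAULT, &temp, NULL);

        IDirect3DSurface9 *src = NULL, *tempSurf = NULL, *dst = NULL;
        mipTex->GetSurfaceLevel(i - 1, &src);
        temp->GetSurfaceLevel(0, &tempSurf);
        mipTex->GetSurfaceLevel(i, &dst);

        // Copy level (i-1) so it can be bound as a source texture.
        g_device->StretchRect(src, NULL, tempSurf, NULL, D3DTEXF_NONE);

        // Render the custom downsample into level (i).
        g_device->SetRenderTarget(0, dst);
        g_effect->SetTexture("g_srcTex", temp); // hypothetical parameter name
        // ... Begin/BeginPass, DrawFullscreenQuad(), EndPass/End ...

        src->Release(); tempSurf->Release(); dst->Release();
        temp->Release();
    }
}
```

As the post says, this costs one full surface copy per level; it mainly illustrates where the extra StretchRect() ends up in the loop.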
Oops, the question seems to be more uncommon and exotic than I thought in the first place.

So, is there nobody out there who might have an idea of how to handle this efficiently? That would be really great, as I'm somewhat stuck here. Plus, my rendering is already quite complex and I can't afford much overhead...

Thanks again and Merry Christmas,
Data
The only case in which I'd like to do that is mipmapping normal maps, and even in that case I don't consider it a performance path yet. Are you sure you need this in the first place? Also consider that .dds files can provide explicit mipmaps, which is probably a better (and surely more data-driven and fine-tunable) way of doing things if you need total control.

Previously "Krohm"

Hi,

Well, I'm implementing a depth-of-field effect as a post-processing filter applied to the rendered scene. I found a promising paper ("Real-time Depth-of-Field Rendering Using Anisotropically Filtered Mipmap Interpolation"; http://www.mpi-inf.mpg.de/~slee/pub/papers/lee09-tvcg-dof-preprint.pdf). As the title suggests, the authors filter the mipmaps *anisotropically* in order to reduce certain artifacts. Hence, they (and I) cannot rely on automatic mipmap generation, but have to implement the downsampling by hand in shaders.

Thus, I would say "yes, I need it" :-)...
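For what it's worth, the "select an appropriate level of detail" step mentioned above usually amounts to a log2 mapping from blur size to mip LOD, since each mip level halves the resolution. A minimal sketch, assuming a circle-of-confusion diameter measured in pixels (the helper name and clamping policy are my own, not from the paper):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical helper: map a circle-of-confusion diameter (in pixels) to a
// mip LOD, clamped to the available levels. The log2 mapping follows from
// each mip level halving the resolution; it is an illustrative assumption,
// not a formula taken from the paper.
float CocToMipLod(float cocDiameterPixels, int mipLevels)
{
    if (cocDiameterPixels <= 1.0f)
        return 0.0f; // blur smaller than a pixel: use the full-res level
    float lod = std::log2(cocDiameterPixels);
    return std::min(lod, static_cast<float>(mipLevels - 1));
}
```

The fractional part of the LOD is what the trilinear interpolation between adjacent mip levels then blends over.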
Could you get away with using GenerateMipSubLevels() to create the mips and then doing the extra filtering when you read the texture, or, even better, using the hardware anisotropic filtering to do it for you?

If not, I think the only option is to create a bunch of render targets (one for each mip level), and once you've filled them all in, use StretchRect() to copy them all to the mip surfaces of a single texture.
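That suggestion might be sketched as two passes like the following (hypothetical names, error handling and the effect Begin/End plumbing omitted):

```cpp
// Sketch of the render-target-per-level approach: render each level into
// its own single-level render-target texture, then blit every result into
// the mip surfaces of one mipmapped texture via StretchRect().
// g_device, g_effect and DrawFullscreenQuad() are hypothetical.
void BuildMipChain(IDirect3DTexture9* rtLevels[], UINT levels,
                   IDirect3DTexture9* mipTex)
{
    // Pass 1: custom downsampling, level (i-1) -> level (i).
    for (UINT i = 1; i < levels; ++i)
    {
        IDirect3DSurface9* dst = NULL;
        rtLevels[i]->GetSurfaceLevel(0, &dst);
        g_device->SetRenderTarget(0, dst);
        g_effect->SetTexture("g_srcTex", rtLevels[i - 1]); // hypothetical
        // ... Begin/BeginPass, DrawFullscreenQuad(), EndPass/End ...
        dst->Release();
    }

    // Pass 2: copy every level into the single mipmapped texture.
    for (UINT i = 0; i < levels; ++i)
    {
        IDirect3DSurface9 *src = NULL, *dst = NULL;
        rtLevels[i]->GetSurfaceLevel(0, &src);
        mipTex->GetSurfaceLevel(i, &dst);
        g_device->StretchRect(src, NULL, dst, NULL, D3DTEXF_NONE);
        src->Release();
        dst->Release();
    }
}
```

Compared with the temporary-texture ping-pong in the original post, this has the same number of copies but keeps each render pass reading from a plain single-level texture, which sidesteps the "can't bind one surface of a mipmapped texture" problem entirely.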

This topic is closed to new replies.
