
How to down/up sample a render target


I haven't really found anything basic on how to do this.
I want to speed up my SSAO calculation by using a half-res depth target.
As I understand it, I have to downsample this target, but I don't really know how to do that.
I've seen some code snippets about averaging pixels, but I don't fully understand them.

Can someone explain how this works, or point me to a tutorial or some easy-to-understand code?

I realize it's not as easy as just outputting the sampled value of the original to a render target half the size... or is it? Edited by lipsryme

You can read up on image scaling if you want some background info. The basic idea is that you sample one or more texels from the source render target, apply some sort of filter to those texels, and output the result. You're probably already familiar with the "point" and "linear" filtering modes that are built into the hardware, which you can use just by taking a single texture sample with the appropriate sampler settings. But you can also implement more complex filters manually in your shader, if you wish.
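To make the filtering idea concrete: a "linear"-style 2x downsample is just the average of each 2x2 block of source texels. This is not shader code from the thread, just a minimal CPU-side sketch in NumPy (the array and its values are made up for illustration); on the GPU you would get the same result from a single bilinear sample taken at the center of each 2x2 block.

```python
import numpy as np

# Hypothetical 4x4 single-channel "render target".
src = np.arange(16, dtype=np.float32).reshape(4, 4)

# Box-filter downsample: split the image into 2x2 blocks
# and average the four texels in each block.
half = src.reshape(2, 2, 2, 2).mean(axis=(1, 3))

print(half)  # 2x2 result, each value the mean of a 2x2 block
```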

For downscaling depth, things are a bit more tricky since some of the conventional wisdom used for scaling color images won't necessarily apply. Most people just end up using point filtering to downscale depth, which preserves edges but increases aliasing. To implement that, it really is as easy as you think it is: just use a pixel shader to sample the full-size render target, and output the value to your half-size render target.
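The point-filtered depth downsample described above amounts to keeping one texel from each 2x2 block and discarding the rest; no values are blended, so no "impossible" in-between depths are invented at edges. Again a hedged CPU-side sketch in NumPy with made-up depth values, standing in for a pixel shader that takes one point-filtered sample per half-res pixel:

```python
import numpy as np

# Hypothetical 4x4 depth buffer (values chosen to be exactly
# representable in float32).
depth = np.array([[0.5,   0.25,  0.75,  1.0],
                  [1.0,   1.0,   1.0,   1.0],
                  [0.125, 0.375, 0.625, 0.875],
                  [1.0,   1.0,   1.0,   1.0]], dtype=np.float32)

# Point-filter downsample: take the top-left texel of each 2x2
# block, i.e. every other texel in each dimension.
half = depth[::2, ::2]

print(half)  # 2x2 result containing original, unblended depth values
```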


For downscaling depth, things are a bit more tricky since some of the conventional wisdom used for scaling color images won't necessarily apply. Most people just end up using point filtering to downscale depth, which preserves edges but increases aliasing. To implement that, it really is as easy as you think it is: just use a pixel shader to sample the full-size render target, and output the value to your half-size render target.


My image seems distorted now, though. Do I need to change the viewport while rendering at half res?

edit: Ah yes, that was it :) Edited by lipsryme
