RobMaddison

Downsampling and hardware filtering during Post Processing


My post-processing framework is working nicely; I can add and remove filters at will to make some interesting effects.

My typical bloom effect consists of these filters:

1. Render the normal scene to render target 1 (RT1)
2. Bright pass from RT1 to RT2 (RT1 and RT2 are the same size)
3. Blur (vertical then horizontal) from RT2 to RT3 (RT3 is a smaller size)
4. Composite RT1 (original scene) and RT3 (blurred bright pass) to RT2
5. Copy RT2 to the back buffer

Firstly, whilst researching post processing, I noticed quite a few people mentioning downscaling when doing things like Gaussian blurs. The way I do this is just to set the destination of my blur filters to a smaller render target. Is this correct, or should there be an actual Downscale filter set up so I can slot it in wherever I like?

Secondly, to use hardware (linear) filtering for the above bloom effect, I would need to set the input texture's min/mag filter on the Composite pass to linear. I wanted the Composite filter to be generic in that it just adds two textures together using point filtering; would it be an idea to have two Composite filters, e.g. CompositePoint and CompositeLinear?

Thirdly (and lastly), I have the copy filter in there at the end, which essentially just copies the source texture to the back buffer. This is obviously another draw call when I could theoretically do it at the end of the penultimate filter (in this case, the composite filter) by simply passing in the back buffer as the destination of that particular filter. I kept the copy because I thought it might make it easier to plug filters in, but it could be an area to shave 0.5ms off the frame time when it comes to performance profiling.

Am I way off the mark here?

Thanks in advance for any advice...

1) Your RT2 should be a smaller size. There's no need to bright-pass back to a full-size buffer if all you're going to do is downsize it. Plus, reading from the full-size texture in the blur pass costs a little extra due to texture cache misses.

2) Not sure what you're asking here. You can easily change the filtering for any texture throughout the process. You're free to use linear while you're blurring, point when you're copying. If your pixel/texel math is correct, linear can give you the same results as point for a copy.

3) The 'composite' step in your chain indeed could simply write to the backbuffer.

Tightening up the post effects can save quite a bit of time on slower hardware, and even on fast cards the 0.5ms you mentioned is a pretty good guess.

Thanks for the response. With regard to point 2, my composite shader (and in fact most of my filter shaders) is written with a source texture which has its min/mag filter set explicitly to Linear - can I change that dynamically (programmatically)? I don't really want to have to set up 2 samplers, one with linear and one with point and choose which sampler to use.

Thanks


The way I do this is to just set the destination of my blur filters to a smaller render target. Is this correct or should there be an actual Downscale filter set up so I can slot it in wherever I like?
You should downscale to the smaller resolution first, instead of using a high-resolution input to the first low-res blur pass. The reason is that the texture sample results for both methods will be different -- with downscaling first, you will receive (averaged) information from all pixels, with the latter method you lose some information.

Thirdly (and lastly), I have the copy filter in there at the end which essentially just copies the source texture to the back buffer - this is obviously another draw call when I could theoretically do this at the end of the penultimate filter (in this case, the composite filter) by simply passing in the back buffer as the destination of that particular filter.
Yeah, if you can get rid of that last copy step it's obviously going to be a good thing, but wasting half a millisecond per frame isn't the worst thing in the world if it makes your code cleaner.

With regard to point 2, my composite shader (and in fact most of my filter shaders) is written with a source texture which has its min/mag filter set explicitly to Linear - can I change that dynamically (programmatically)?
Yes you can change sampler states programmatically. What shader language/framework are you using? I'm guessing you're using the effect framework, as with raw HLSL you have to set sampler states programmatically (they can't be set via HLSL without FX syntax).
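For reference, with raw D3D9 the programmatic route is just IDirect3DDevice9::SetSamplerState before the relevant draw call. A fragment (assumes a `device` pointer and that the texture is bound to sampler stage 0):

```cpp
// Linear filtering for the composite pass (sampler stage 0):
device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
// ...draw the composite quad...

// Back to point for passes that need exact texel fetches:
device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
```

Since these are device states rather than per-texture properties, the same shader can be reused with either filter mode; no second sampler or duplicated Composite filter is needed.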


Yes you can change sampler states programmatically. What shader language/framework are you using? I'm guessing you're using the effect framework, as with raw HLSL you have to set sampler states programmatically (they can't be set via HLSL without FX syntax).


Hi Hodgman, I'm using DirectX9 with C++. Would you set sampler states in particular passes?
