Happy SDE

DX11 Downsampling texture to half resolution



I decided to compute SSAO at half resolution.

 

There are 2 input textures: Depth (DXGI_FORMAT_R24G8_TYPELESS) and Normal (DXGI_FORMAT_R10G10B10A2_UNORM)

Both textures may be either non-MSAA or MSAA.

 

I realized that I do a lot of sampling from these textures.

It seems to me that it would be better to downsample each texture to a half-resolution, non-MSAA texture before the SSAO calculation and blurring.

After that, I would just Load() data from them.

 

I only need one new LOD that is 2x smaller than the original texture.

 

The question is: what is the best way to do it in DX11?

I can imagine several solutions:

  1. Write a full-screen-quad pixel shader that averages each 2x2 block of values (see the sketch after this list). In the MSAA case it would average all subsamples.
  2. Write a compute shader that does the same thing.
  3. Some better solution I am not aware of. :)
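
For option 1, this is roughly what I have in mind (just a sketch; the texture names and register slots are placeholders, and only the depth texture is shown here):

// Pixel shader run at half resolution: each output pixel averages the matching
// 2x2 block of the full-res texture via Load().
Texture2D<float> FullResDepth : register(t0);
// Texture2DMS<float, 4> FullResDepthMS : register(t0); // MSAA variant

float main(float4 pos : SV_Position) : SV_Target
{
    // pos.xy is the half-res pixel; the matching full-res 2x2 block starts at 2x that.
    int2 src = int2(pos.xy) * 2;

    float sum = 0.0f;
    [unroll]
    for (int y = 0; y < 2; ++y)
    {
        [unroll]
        for (int x = 0; x < 2; ++x)
        {
            sum += FullResDepth.Load(int3(src + int2(x, y), 0));

            // MSAA variant: loop over the subsamples instead, e.g.
            // for (int s = 0; s < 4; ++s)
            //     sum += FullResDepthMS.Load(src + int2(x, y), s);
        }
    }
    return sum * 0.25f; // divide by 4 * subsample count in the MSAA case
}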

What would you recommend?

Yes, options 1 and 2 are the standard approaches. A single texture fetch with bilinear filtering will calculate the average for you :)
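
For the non-MSAA case that can be as simple as the sketch below (resource names and the HalfResTexelSize constant are placeholders): the centre of each half-res pixel sits exactly on the corner shared by its four full-res texels, so one bilinear fetch returns their average.

Texture2D    FullResTex  : register(t0);
SamplerState LinearClamp : register(s0);

cbuffer DownsampleCB : register(b0)
{
    float2 HalfResTexelSize; // 1.0 / half-res render-target dimensions
};

float4 main(float4 pos : SV_Position) : SV_Target
{
    // One bilinear fetch at the half-res pixel centre averages the 2x2 full-res block.
    float2 uv = pos.xy * HalfResTexelSize;
    return FullResTex.Sample(LinearClamp, uv);
}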

However, you can't average depth and normals... Well, you can, but the results won't be sensible and won't look good: at the edges of objects, the averaging will take two discontinuous surfaces (e.g. a character and the background) and "invent" a new surface that's half-way between both (something floating half-way between the character and the background).

In this case, you want to simply throw away 75% of your data when downsampling, and then use a bilateral depth/normal-aware upsampling filter when going back to full resolution.
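
In other words, the downsample pass just copies one texel out of every 2x2 block, something like this sketch (names are placeholders; for MSAA input you would likewise Load only a single subsample):

Texture2D<float>  FullResDepth  : register(t0);
Texture2D<float4> FullResNormal : register(t1);

struct PSOut
{
    float  Depth  : SV_Target0;
    float4 Normal : SV_Target1;
};

PSOut main(float4 pos : SV_Position)
{
    // Keep only the top-left full-res texel of each 2x2 block (mip 0).
    int3 src = int3(int2(pos.xy) * 2, 0);

    PSOut o;
    o.Depth  = FullResDepth.Load(src);
    o.Normal = FullResNormal.Load(src);
    return o;
}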


However, you can't average depth and normals... Well, you can, but the results won't be sensible and won't look good: at the edges of objects, the averaging will take two discontinuous surfaces (e.g. a character and the background) and "invent" a new surface that's half-way between both (something floating half-way between the character and the background).

In this case, you want to simply throw away 75% of your data when downsampling,

Thank you, Hodgman!

 

I just finished a downsampling implementation in a pixel shader and found the artifacts from depth averaging.

Interestingly, throwing away 75% of the data also gives a performance gain: this pass went from 85 to 72 microseconds (GPU time).

 

But I don't understand this statement:

...and then use a bilateral depth/normal-aware upsampling filter when going back to full resolution.

In my previous implementation without downsampling, I used a 2-pass bilateral depth/normal-aware blur (taken from Luna's book), and after that just used the result in the lighting pass.

 

But you are suggesting bilateral upsampling?

Is it a different algorithm?


If you've already done bilateral blurring, then it should be pretty easy :D

To do depth-aware upsampling when going from half-res to full-res, for each full-res pixel:

  1. Point-sample the nearest 4 half-res pixels and generate standard bilinear weights for them.
  2. Perform a depth/normal threshold test of some kind to determine whether each of those samples is 'valid'. If a sample is not valid, set its weight to zero.
  3. Renormalize the weights so they sum to 1.0 (e.g. weights.xyzw /= dot(weights.xyzw, (float4)1)). But take care to handle the case where all weights are zero: in that case there's no valid low-res data that corresponds to your high-res pixel, and the snippet above will divide by zero! So take the closest depth match, or average all 4 samples, or just fall back to the initial bilinear weights, etc.
  4. Combine the 4 samples using their new weights.
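
A rough HLSL sketch of those steps (all resource names, the cbuffer layout, and the DepthThreshold value are placeholders, not from this thread; only the depth test is shown, and a normal test could be added the same way):

Texture2D<float> HalfResAO    : register(t0); // half-res SSAO result
Texture2D<float> HalfResDepth : register(t1); // depth of the half-res buffer
Texture2D<float> FullResDepth : register(t2); // full-res depth

cbuffer UpsampleCB : register(b0)
{
    float2 HalfResTexelSize; // 1.0 / half-res dimensions
    float  DepthThreshold;   // surface-discontinuity threshold (scene dependent)
};

float main(float4 pos : SV_Position) : SV_Target
{
    // This full-res pixel expressed in [0,1] texture coordinates.
    float2 uv = pos.xy * HalfResTexelSize * 0.5f;

    // Locate the 2x2 half-res neighbourhood and the standard bilinear weights.
    float2 texelPos = uv / HalfResTexelSize - 0.5f;
    int2   base     = int2(floor(texelPos));
    float2 f        = frac(texelPos);

    float4 bilinearW = float4((1 - f.x) * (1 - f.y),   // offset (0,0)
                              f.x       * (1 - f.y),   // offset (1,0)
                              (1 - f.x) * f.y,         // offset (0,1)
                              f.x       * f.y);        // offset (1,1)

    float fullDepth = FullResDepth.Load(int3(pos.xy, 0));

    float4 weights = 0;
    float4 ao      = 0;
    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        int2 offset = int2(i & 1, i >> 1);
        int3 coord  = int3(base + offset, 0); // out-of-range Loads return 0; clamp for stricter edge handling

        float halfDepth = HalfResDepth.Load(coord);
        ao[i]           = HalfResAO.Load(coord);

        // Keep the bilinear weight only if this half-res sample lies on
        // (roughly) the same surface as the full-res pixel.
        weights[i] = abs(halfDepth - fullDepth) < DepthThreshold ? bilinearW[i] : 0.0f;
    }

    float wSum = dot(weights, (float4)1);
    if (wSum < 1e-4f)
    {
        // No valid neighbour: fall back to the plain bilinear weights
        // (taking the closest-depth sample is another reasonable fallback).
        weights = bilinearW;
        wSum    = 1.0f;
    }

    return dot(ao, weights) / wSum;
}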
