Quat

DX11 Copying MSAA depth buffer


Quat    568
I am using DX11.

I am using a light prepass renderer, where I first render the view-space normals and depth to a render target. This pass also lays down the scene depth in the depth buffer, so there is no overdraw in later passes.

Now for MSAA, I was going to use non-MSAA render targets and depth buffers for generating the g-buffer and screen-space light map, and just use MSAA for the final pass. However, with this I need two depth buffers--one multisampled for the back buffer, and one non-multisampled for the g-buffer. So my question is whether there is a way to avoid drawing the scene depth twice.

I don't think I can use an MSAA g-buffer, because resolving it would average depths at the edges.

Is it possible to have two render targets where one is multisampled and the other is not? I am thinking of having something like (g-buffer and non-multisampled depth buffer) and (null render target with multisampled depth buffer).

MJP    19788
Not using MSAA for the G-Buffer pass will give you really bad artifacts. During your second geometry pass, subsamples on triangle edges will end up with lighting information that was computed for a completely different triangle. Or, if no triangle was rendered to a pixel at all during the G-Buffer pass and a subsample samples lighting from that texel, it will come out black because there is no lighting there at all.

If you really want to avoid artifacts, you have to do it like this:

1. Render G-Buffer with MSAA
2. Render lighting with MSAA, making sure you individually light subsamples for triangle edges
3. Render your second geometry pass, making sure to sample the right subsample from the lighting buffer for edge triangles

It sucks pretty badly from an implementation and performance point of view, and it's one of the big reasons that I really don't like light prepass.
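
As a rough sketch of step 3 (the names, register slot, and 4x sample count below are my own assumptions, not something from the article): when the second geometry pass runs per-sample on edge pixels, it can fetch the lighting that was computed for the exact subsample it is shading, something like:

[code]
// Sketch only: second geometry pass reading the MSAA lighting buffer.
// Assumes 4x MSAA and that this shader runs per-sample on edge pixels.
Texture2DMS<float4, 4> LightBuffer : register(t0);

float4 GeometryPassPS(float4 pos : SV_Position,
                      uint sampleIdx : SV_SampleIndex) : SV_Target
{
    // Fetch the lighting computed for this exact subsample, so edge
    // subsamples don't pick up lighting from a different triangle.
    float4 lighting = LightBuffer.Load(int2(pos.xy), sampleIdx);
    float4 albedo   = float4(1, 1, 1, 1); // material terms would go here
    return albedo * lighting;
}
[/code]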

Anyway, to answer your question: all render targets and depth-stencil buffers need to have the same MSAA mode when you're rendering. You can't have one of them with MSAA and one of them without. You can render a depth buffer with MSAA and manually resolve it with a pixel shader if you want.
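
A manual depth resolve could look something like this (a minimal sketch, assuming a 4x MSAA depth buffer bound as a shader resource and that you want the nearest subsample; the names are made up):

[code]
// Sketch: manually resolve a 4x MSAA depth buffer by taking the nearest
// (minimum) subsample depth and writing it out through SV_Depth.
Texture2DMS<float, 4> DepthMS : register(t0);

float ResolveDepthPS(float4 pos : SV_Position) : SV_Depth
{
    int2 coord = int2(pos.xy);
    float depth = DepthMS.Load(coord, 0);
    [unroll]
    for (int i = 1; i < 4; ++i)
        depth = min(depth, DepthMS.Load(coord, i));
    return depth;
}
[/code]

Whether you keep the min, the max, or one particular subsample depends on what the resolved depth will be used for.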

Quat    568
OK, I read the ShaderX7 article on Deferred Shading and MSAA and I see what you mean.

I would like to confirm some things:

1. If I create an MSAA back buffer and depth buffer, I must enable D3D11_RASTERIZER_DESC::MultisampleEnable so that the system will automatically resolve the back buffer before presenting. I need to test this again, but I seem to recall getting antialiasing even if I didn't enable this state, as long as I created an MSAA back buffer and depth buffer.

2. MSAA textures are never resolved automatically (you must call ResolveSubresource). To sample an MSAA texture at the sample level, you must use Load(texCoord, sampleIndex).

3. For a pixel shader to run at sample frequency, it must take SV_SampleIndex as an input.

4. Use SV_Coverage to detect edges, so that the pixel shader only runs per-sample around the edges.


Also, what do you like instead of light prepass? Just a full deferred renderer?

MJP    19788
1. The setting in the rasterizer state doesn't have anything to do with resolving the render target. It actually controls how MSAA is handled during rasterization. The documentation that explains it is [url="http://msdn.microsoft.com/en-us/library/bb205126%28v=vs.85%29.aspx"]here[/url] and [url="http://msdn.microsoft.com/en-us/library/cc627092%28v=vs.85%29.aspx"]here[/url], although it can be a bit confusing if you're not familiar with how MSAA works. Basically what happens when you disable that is that you have multiple subsamples in your render target, but the rendering will occur as if it were a non-multisampled render target.

2. Yes, these are both correct. You must also declare your texture as a Texture2DMS to be able to load subsamples (you will get a warning from the debug layer if you don't, or if you try to use a Texture2DMS with a non-multisampled texture).
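
For instance (the names here are mine), the declaration and per-subsample load look like this:

[code]
// Sketch: the resource must be declared as Texture2DMS, with the matching
// sample count, in order to load individual subsamples.
Texture2DMS<float4, 4> LightBufferMS : register(t0);

float4 LoadSubsample(int2 pixel, int sampleIndex)
{
    // Load() takes integer pixel coordinates plus the subsample index;
    // multisampled resources can't be sampled with filtering.
    return LightBufferMS.Load(pixel, sampleIndex);
}
[/code]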

3. Yes, you must declare it [i]and[/i] use it in your shader program. Alternatively, you can also use the "sample" modifier on one of your pixel shader inputs to get the shader to run per-sample. This modifier causes the input to be interpolated at each MSAA sample point, rather than at the pixel center.
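
A quick sketch of the two options (the struct and semantics here are just examples):

[code]
// Option A: declare and use SV_SampleIndex, which makes the shader run once
// per covered sample. Option B: put the 'sample' modifier on an interpolant,
// which also forces per-sample execution and evaluates the input at the
// sample position instead of the pixel center.
struct PSInput
{
    float4        pos : SV_Position;
    sample float2 uv  : TEXCOORD0;   // interpolated at each sample point
};

float4 PerSamplePS(PSInput input, uint sampleIdx : SV_SampleIndex) : SV_Target
{
    // sampleIdx identifies which subsample this invocation is shading.
    return float4(input.uv, (float)sampleIdx / 4.0f, 1.0f);
}
[/code]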

4. SV_Coverage is a great way to detect edge pixels. You can also use it as an output, to control which subsamples get written by the pixel shader.
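
For example, a sketch of edge detection with SV_Coverage (the 0xF mask assumes 4x MSAA):

[code]
// Sketch: with 4x MSAA a fully covered pixel has all four coverage bits set
// (0xF); anything less means the triangle only partially covers the pixel,
// i.e. it's an edge pixel.
float4 EdgeDetectPS(float4 pos : SV_Position,
                    uint coverage : SV_Coverage) : SV_Target
{
    bool isEdge = (coverage != 0xF);
    // Write a flag that a later pass (or stencil marking) can use to decide
    // which pixels need per-sample lighting.
    return isEdge ? float4(1, 0, 0, 1) : float4(0, 0, 0, 1);
}
[/code]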

I prefer a full deferred renderer. IMO light prepass is only really worth it on platforms where multiple render targets aren't available, or are too expensive.



