MSAA issues.

Hi,

I'm trying to set up MSAA to remove jittering at the edges of my primitives/geometry.

I have set the following:


// On the rasterizer state description:
rsd.IsMultisampleEnabled = true;
...
// And in the description of each texture:
SampleDescription = new SampleDescription(8, 0),


on my G-Buffer textures and on my depth buffer. But I get the following errors:

D3D11 ERROR: ID3D11Device::CreateDepthStencilView: The base resource was created as a multisample resource. You must specify D3D11_DSV_DIMENSION_TEXTURE2DMS instead of D3D11_DSV_DIMENSION_TEXTURE2D. [ STATE_CREATION ERROR #143: CREATEDEPTHSTENCILVIEW_INVALIDDESC]

D3D11 ERROR: ID3D11Device::CreateDepthStencilView: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #148: CREATEDEPTHSTENCILVIEW_INVALIDARG_RETURN]

This doesn't make sense to me, because I don't want to use Texture2DMS. I'm not using the sampling results or making my own sampler; I'm just trying to use the built-in MSAA. So, for the sake of resolving the error, I change my DSV:


DepthStencilViewDescription dsViewDesc = new DepthStencilViewDescription
{
    ArraySize = 0,
    Format = depthFormat,
    // The view dimension now matches the multisampled resource
    Dimension = DepthStencilViewDimension.Texture2DMultisampled,
    // MipSlice, ArraySize and FirstArraySlice are ignored for a Texture2DMS view
    MipSlice = 0,
    Flags = DepthStencilViewFlags.None,
    FirstArraySlice = 0,
};


Then I get this error:

D3D11 ERROR: ID3D11DeviceContext::Draw: The Shader Resource View dimension declared in the shader code (TEXTURE2D) does not match the view type bound to slot 1 of the Pixel Shader unit (TEXTURE2DMS). This mismatch is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #354: DEVICE_DRAW_VIEW_DIMENSION_MISMATCH]

That doesn't make sense to me, because the depth stencil view isn't in shader code?

Am I doing something wrong here?

Do I have to use Texture2DMS to use MSAA?

Should I just use super-sampling?

Thanks for any help.
From what you described, it sounds like you're using deferred rendering. With deferred rendering you need to do all kinds of special-case handling for MSAA; it's not just something you can "switch on" and have it work. Not only does it require changing your shaders just to avoid runtime errors, but you also need to do things like edge detection to make it more efficient than plain supersampling.

For your first error: you need to set the DSV dimension to Texture2DMS if the texture you created is multisampled. It doesn't matter whether or not you want to sample it later as a multisampled texture; the DSV still needs to have the proper dimension.
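
For example, creating the multisampled depth texture and its view might look something like this (a rough sketch in the same style as your snippet; device, width, height and depthFormat are assumed to exist already, and if you also want to bind the depth buffer as a shader resource you'd additionally need a typeless format and the ShaderResource bind flag):


// The depth texture must use the same SampleDescription as the render targets
Texture2DDescription depthDesc = new Texture2DDescription
{
    Width = width,
    Height = height,
    MipLevels = 1,
    ArraySize = 1,
    Format = depthFormat,
    SampleDescription = new SampleDescription(8, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.DepthStencil,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None,
};
Texture2D depthTexture = new Texture2D(device, depthDesc);

// Because the resource is multisampled, the view dimension must match
DepthStencilViewDescription dsvDesc = new DepthStencilViewDescription
{
    Format = depthFormat,
    Dimension = DepthStencilViewDimension.Texture2DMultisampled,
    Flags = DepthStencilViewFlags.None,
};
DepthStencilView depthView = new DepthStencilView(device, depthTexture, dsvDesc);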

For your second error: it sounds like you're sampling the depth buffer through a shader resource view in one of your shaders, using a Texture2D in your shader code. Any unresolved multisampled texture has to be accessed as a Texture2DMS in the shader; you can't use Texture2D. Color MSAA textures can be resolved to a non-MSAA texture, which can then be sampled as a Texture2D, but you can't do that with a depth buffer (and you don't want to resolve your MSAA textures anyway for deferred rendering).
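
In the shader itself the change looks something like this (sketch; slot t1 just mirrors your error message, and sampleIndex is whichever subsample you want to read):


// Non-MSAA: an ordinary Texture2D, read with Sample or Load as usual
// Texture2D<float> DepthMap : register(t1);

// MSAA: the same resource must be declared as Texture2DMS and read with
// Load at an explicit pixel coordinate and sample index; there is no filtering
Texture2DMS<float> DepthMap : register(t1);
// float depth = DepthMap.Load(int2(position.xy), sampleIndex);
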
I don't yet do any MSAA resolve; could that be the cause of this last error?

Yes, I'm using deferred rendering (lighting). MJP, I'm reading your Quick Overview of MSAA and looking over some other posts about the subject. I think I have seriously avoided this topic in the past, but I'll need to work it out, because the edge jittering is terrible. I'm going to start with a 2x2 super-sampling approach; that seems easiest, and my graphics aren't too demanding at the moment, so it seems like a good reference technique. I'm looking for a DirectX 11 resolve function, but all I've found so far are custom shaders that resolve the Texture2DMS by accumulating the sub-samples. (I'm trying to use the language, sorry if I'm not spot on with the terms.)
The built-in resolve function is ResolveSubresource. However, you can't resolve a depth buffer, and it wouldn't be useful anyway even if you could.
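
For a color target it's a single call on the device context. A rough sketch (msaaColor and resolvedColor are placeholder names, the destination must be a non-MSAA texture of the same size with a compatible format, and double-check the source/destination parameter order in your managed wrapper, since it can differ from the native destination-first API):


// Resolve subresource 0 of the MSAA color texture into subresource 0 of an
// ordinary non-MSAA texture, which can then be sampled as a plain Texture2D
context.ResolveSubresource(msaaColor, 0, resolvedColor, 0, Format.R8G8B8A8_UNorm);
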

The simplest way to implement MSAA with deferred rendering is to have a separate MSAA version of your lighting shader(s), and in that version use Texture2DMS. Then in the shader just loop over all of the subsamples for a given pixel, calculate the lighting for each, and average the results. This will be much more expensive than it could be, but it will work.
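
In HLSL that would look something along these lines (an untested sketch; NUM_SAMPLES is whatever sample count you picked, and ComputeLighting is just a placeholder for your existing lighting code):


#define NUM_SAMPLES 8

Texture2DMS<float4> ColorMap  : register(t0);
Texture2DMS<float4> NormalMap : register(t1);
Texture2DMS<float>  DepthMap  : register(t2);

// Placeholder: replace with your actual lighting math
float3 ComputeLighting(float4 albedo, float4 normal, float depth, int2 pixel)
{
    return albedo.rgb;
}

float4 PSLightingMSAA(float4 screenPos : SV_Position) : SV_Target
{
    int2 pixel = int2(screenPos.xy);
    float3 total = 0.0f;

    // Shade every subsample of this pixel, then average the results
    [unroll]
    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        float4 albedo = ColorMap.Load(pixel, i);
        float4 normal = NormalMap.Load(pixel, i);
        float  depth  = DepthMap.Load(pixel, i);
        total += ComputeLighting(albedo, normal, depth, pixel);
    }

    return float4(total / NUM_SAMPLES, 1.0f);
}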

Hmm, I have decided not to worry about the 2x2 supersampling. MSAA seems to be more efficient at accomplishing the same result. I understand how it works, and I understand how to resolve (a color map). But when I start thinking about depth maps and normal maps, and why you have suggested using MSAA in the lighting shader, I start getting confused.

For instance, in the simple case where my wall edges are against the floor, or where the wall top-edge is against the top of the wall section (which is visible, ala Gauntlet / Desktop Dungeons), the normal, position and depth maps can easily be resolved, because there is continuity from one triangle to the next. But when I add a wall section with a passage / doorway, the top of the doorway is no longer next to the ground, so there is a discontinuity from the wall triangle to the ground triangle next to it. Likewise for any geometry that is adjacent to but separate from the ground in screen space. At the moment, that's a real puzzle for me, and I suspect this is getting into fairly advanced territory. My best guess at an approach is the following:

Add an extra step to my wall rendering: render the walls' color output into a Texture2DMS, resolve it to a Texture2D, and pass that back into my regular pipeline for the lights and final composition to use, leaving the wall shaders' normal and position output as they are. So there will be a slight misalignment between my color map and my normal / depth maps. Any wall edge that connects with the ground will be handled OK, but for the edges of elevated objects the color will be smooth while the lighting shaders use the ground normals to light the edge. It doesn't sound perfect, but if it gets rid of the jittering, that might be OK.

Here's a video of the problem: [embedded video]

I suspect that there may be a couple of ways to actually implement MSAA, so I might end up using it one way that's quite different from how it's often used. I'm going to have to just implement something, see the results, and work from there. I'll talk about it more after I've implemented at least one version.

Thanks.

Super-sampling at 2x2 improved things, particularly around the ground-wall interface, where the lighting is similar on both sides. But edges with moderate/strong lighting adjacent to zero lighting didn't improve much, which breaks my idea of ignoring the normals. I also reworked the textures to remove highlighting at the edges, which I think helps a little as well, but I don't think that's a good road to start down, because it's really about the contrast between adjacent textures, not the textures themselves, and I would have to create many more textures to handle all the wall shapes if I were to darken around the edges, since I'm currently using a single texture for all walls. That's a lot of work compared to just employing MSAA. Performance is fine using 2x2 SSAA.

Super-sampling at 4x4 further reduced the edge aliasing to the point of being almost unnoticeable, but it produced severe texture aliasing, I assume because the linear sampler only reads a 2x2 texel footprint, not 4x4. I'll have to check this; it would explain why I've seen code for manual texture filtering. Also, at 4x4 the 5 buffers (color, normal, position, depth, final light map) are 7680x4068 each, about 625 MB in total. I don't think that's very good, and the performance is bad, with a noticeably low frame rate.
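
If I understand the manual filtering idea correctly, the downsample ends up being a shader that averages the whole 4x4 block per screen pixel instead of relying on the 2x2 footprint of bilinear filtering. Something like this sketch (assuming a 4x supersampled source bound at t0; the function name is mine):


Texture2D SuperSampled : register(t0);

float4 PSDownsample4x4(float4 screenPos : SV_Position) : SV_Target
{
    // Top-left texel of this pixel's 4x4 block in the supersampled buffer
    int2 basePos = int2(screenPos.xy) * 4;

    float4 sum = 0.0f;
    [unroll]
    for (int y = 0; y < 4; ++y)
    {
        [unroll]
        for (int x = 0; x < 4; ++x)
        {
            sum += SuperSampled.Load(int3(basePos + int2(x, y), 0));
        }
    }
    return sum / 16.0f;
}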

Next, onto MSAA.

