Texture becomes pixelated when upsampling to full-screen quad and MSAA enabled

So I am trying to upsample a texture to a full-screen quad. When I turn on MSAA, the texture becomes pixelated, while everything looks perfect with anti-aliasing disabled.

This sample image sums up my problem pretty well I think:

I am using DirectX 11 with pixel/vertex shader 4.0, if it matters. I have tried searching for a solution, but the only relevant things I could find were about the half-pixel offset needed on DirectX 9, which shouldn't be a problem on DirectX 10+.

So my question is, does anyone recognise this problem? Any form of help is appreciated.

Thanks in advance.

It's kind of hard to diagnose if we have no idea what you're actually doing. Got some code?

EDIT: I would wager that filtering in general is incompatible with MSAA surfaces, though. I can't really think of a good way to incorporate the extra subsample data in that operation, and I suspect it's just blanket not supported for the same reason. If you aren't calling ResolveSubresource() right now, try that first (again, code!)
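For reference, a resolve call looks roughly like this (pMsaaTex and pResolvedTex are hypothetical names; the destination must be a non-multisampled texture with the same size and a compatible format):

```cpp
// Sketch only -- pMsaaTex is an MSAA render texture, pResolvedTex a
// matching texture created with SampleDesc.Count = 1. The GPU averages
// the subsamples of each pixel into the destination.
devcon->ResolveSubresource(
    pResolvedTex,                     // destination resource (non-MSAA)
    0,                                // destination subresource index
    pMsaaTex,                         // source resource (MSAA)
    0,                                // source subresource index
    DXGI_FORMAT_R16G16B16A16_FLOAT);  // format shared by both textures
```

After this, pResolvedTex can be bound as a shader resource and drawn with a full-screen quad like any ordinary texture.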

Thanks for the reply.

I'm not really sure about the correct method to apply MSAA when dealing with textures etc.
So basically, what I'd like to do is this:

1. Render the main scene to a full-resolution DXGI_FORMAT_R16G16B16A16_FLOAT-texture (with msaa enabled).
2. Render the volumetric light (that you see in my first post) to a small-resolution texture. MSAA is not really needed here.
3. Render the main scene to the backbuffer (maybe using a HDR/bloom effect later on), by rendering a full-screen quad with the texture from step 1.
4. Render the small-resolution texture from step 2 to the backbuffer with a full-screen quad

I've tried to extract what I think is the most relevant code here.

Setting up the swap chain:

// Get data on the GPU's support of multisampling and keep the highest usable sample count
UINT qual_levels = 0;
for(UINT i = 1; i <= D3D11_MAX_MULTISAMPLE_SAMPLE_COUNT; i++) {
    hr = dev->CheckMultisampleQualityLevels(DXGI_FORMAT_R8G8B8A8_UNORM, i, &qual_levels);
    if(SUCCEEDED(hr) && qual_levels > 0) {
        settings.msaa_count = i;
        settings.msaa_quality = qual_levels - 1;
    }
}
// create a struct to hold information about the swap chain
ZeroMemory(&scd, sizeof(DXGI_SWAP_CHAIN_DESC));
scd.BufferCount = 1; // one back buffer
scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // use 32-bit color
scd.BufferDesc.Width = sys.screen_width; // set the back buffer width
scd.BufferDesc.Height = sys.screen_height; // set the back buffer height
scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; // how swap chain is to be used
scd.OutputWindow = hWnd; // the window to be used
scd.SampleDesc.Count = settings.msaa_count; // how many multisamples
scd.SampleDesc.Quality = settings.msaa_quality;
scd.Windowed = TRUE; // windowed/full-screen mode
scd.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH; // allow full-screen switching

Setting up a rasterizer:

// Apply custom rasterizer so we can be sure to get anti-aliasing
ZeroMemory(&rd, sizeof(D3D11_RASTERIZER_DESC));
rd.MultisampleEnable = ( settings.msaa_count > 1 );
rd.AntialiasedLineEnable = ( settings.msaa_count > 1 );
rd.FillMode = D3D11_FILL_SOLID;
rd.CullMode = D3D11_CULL_BACK;
rd.DepthClipEnable = true;

hr = dev->CreateRasterizerState(&rd, &pRS);
if( SUCCEEDED(hr)) devcon->RSSetState(pRS);

Depth stencil texture:

// create the depth stencil texture
ZeroMemory(&texd, sizeof(texd));
texd.Width = sys.screen_width;
texd.Height = sys.screen_height;
texd.ArraySize = 1;
texd.MipLevels = 1;
texd.SampleDesc.Count = settings.msaa_count;
texd.SampleDesc.Quality = settings.msaa_quality;
texd.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
texd.BindFlags = D3D11_BIND_DEPTH_STENCIL;
ID3D11Texture2D *pDepthBuffer;
hr = dev->CreateTexture2D(&texd, NULL, &pDepthBuffer);
if(FAILED(hr)) return false;
// create the depth stencil buffer
ZeroMemory(&dsvd, sizeof(dsvd));

dsvd.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvd.ViewDimension = settings.msaa_count == 1 ? D3D11_DSV_DIMENSION_TEXTURE2D : D3D11_DSV_DIMENSION_TEXTURE2DMS;
dsvd.Texture2D.MipSlice = 0;
dev->CreateDepthStencilView(pDepthBuffer, &dsvd, &zbuffer);
// set the render target as the back buffer
devcon->OMSetRenderTargets(1, &backbuffer, zbuffer);

Here is the code used to setup all my render textures:
I've tried turning off multi-sampling here, but then my game crashes on startup. So I guess there is something I should be doing differently to turn off MSAA on individual textures.

D3D11_TEXTURE2D_DESC textureDesc;
HRESULT result;
D3D11_RENDER_TARGET_VIEW_DESC renderTargetViewDesc;
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;

// Initialize the render target texture description.
ZeroMemory(&textureDesc, sizeof(textureDesc));
// Setup the render target texture description.
textureDesc.Width = textureWidth;
textureDesc.Height = textureHeight;
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 1;
textureDesc.Format = format;
textureDesc.SampleDesc.Count = sys.device->settings.msaa_count;
textureDesc.SampleDesc.Quality = sys.device->settings.msaa_quality;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE; // needed so the texture can be rendered to and sampled
textureDesc.CPUAccessFlags = 0;
textureDesc.MiscFlags = 0;
// Create the render target texture.
result = sys.device->dev->CreateTexture2D(&textureDesc, NULL, &pRenderTargetTexture);
if(FAILED(result)) return false;
// Setup the description of the render target view.
ZeroMemory(&renderTargetViewDesc, sizeof(renderTargetViewDesc));
renderTargetViewDesc.Format = textureDesc.Format;
renderTargetViewDesc.ViewDimension = sys.device->settings.msaa_count == 1 ? D3D11_RTV_DIMENSION_TEXTURE2D : D3D11_RTV_DIMENSION_TEXTURE2DMS;
renderTargetViewDesc.Texture2D.MipSlice = 0;

// Create the render target view.
result = sys.device->dev->CreateRenderTargetView(pRenderTargetTexture, &renderTargetViewDesc, &pRenderTargetView);
if(FAILED(result)) return false;
// Setup the description of the shader resource view.
ZeroMemory(&shaderResourceViewDesc, sizeof(shaderResourceViewDesc));
shaderResourceViewDesc.Format = textureDesc.Format;
shaderResourceViewDesc.ViewDimension = sys.device->settings.msaa_count == 1 ? D3D11_SRV_DIMENSION_TEXTURE2D : D3D11_SRV_DIMENSION_TEXTURE2DMS;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;

// Create the shader resource view.
result = sys.device->dev->CreateShaderResourceView(pRenderTargetTexture, &shaderResourceViewDesc, &pShaderResourceView);
if(FAILED(result)) return false;
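One likely cause of the startup crash when disabling multisampling: SampleDesc.Count and Quality have to change together, and the view dimensions have to change with them. A count of 1 paired with a leftover nonzero quality value, or a view dimension still set to a TEXTURE2DMS enum, is an invalid combination. A minimal sketch of the non-MSAA variant, reusing the same descriptors:

```cpp
// Sketch: the same render texture without MSAA. Count = 1 requires
// Quality = 0, and every view of the texture must then use the
// non-multisampled dimension enums.
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;

renderTargetViewDesc.ViewDimension   = D3D11_RTV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
```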

Here is an extract of my main graphics-draw loop.
rt_xxxx are my render textures.

// Render main 3d scene to texture
setShader( SHADER_TYPE_MAIN );
shader_main->updateCbLightSource( matView, matProj );
ID3D11ShaderResourceView *srv = rt_light_shadowmap->GetShaderResourceView();
sys.device->devcon->PSSetShaderResources(1, 1, &srv);

// Render volumetric light to small-resolution render texture
sprite_light->Render(0, 0, 0 ); // The model of the light frustum
shader_vol_lighting->render( 18, rt_light_shadowmap->GetShaderResourceView(), rt_depth_map->GetShaderResourceView()); // The shadow-map and depth-map are also just render textures, previously generated

// Render full-screen quads to backbuffer
updateTextureCB( sys.screen_width , sys.screen_height );
full_window->Render(0, 0, rt_scene->GetShaderResourceView() );
shader_texture->render( sizeof(WORD)*6);
// Vol. light
full_window->Render(0, 0, rt_small_2->GetShaderResourceView() );
shader_texture->render( sizeof(WORD)*6);
// Present
swapchain->Present(0, 0);

If I understand it correctly, I should define my rt_scene render texture with MSAA and just draw it like I do currently.
On the other hand, my volumetric light render texture (here called rt_small_2) should be defined without MSAA. Then, I can resize it and hopefully not get the pixelation.

Is it generally bad to resize textures with MSAA?

I figured out my problems I think.

My problems originated from the fact that I used MSAA on all my render textures, even though I didn't need it on most of them. Now, I have defined every render texture without MSAA, except for the one I render my main scene to.

The second problem I had was with the Z-buffer. For anyone else having problems using a mixed set of render textures with and without MSAA: make sure the Z-buffer you are using matches your current render target in MSAA settings. I have now defined two Z-buffers, one for rendering with MSAA and one for everything else. Also make sure MSAA is disabled on your backbuffer if you are rendering non-MSAA textures to it.
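The two-Z-buffer setup described above can be sketched by reusing the depth stencil code from earlier in the thread (pDepthBufferMS is an illustrative name; everything else mirrors the existing descriptors):

```cpp
// Sketch: two depth buffers, one matching the MSAA scene target and one
// for all non-MSAA passes. The SampleDesc of the bound depth buffer must
// match the SampleDesc of whatever render target is bound with it.
texd.SampleDesc.Count   = settings.msaa_count;   // MSAA depth buffer
texd.SampleDesc.Quality = settings.msaa_quality;
dev->CreateTexture2D(&texd, NULL, &pDepthBufferMS);

texd.SampleDesc.Count   = 1;                     // non-MSAA depth buffer
texd.SampleDesc.Quality = 0;
dev->CreateTexture2D(&texd, NULL, &pDepthBuffer);
```

The corresponding depth stencil views also need matching ViewDimension values (TEXTURE2DMS for the first, TEXTURE2D for the second).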

As InvalidPointer pointed out, I also had to copy my main scene render texture to another texture without MSAA using ResolveSubresource(). Then I can draw the new texture using a full-screen quad to my backbuffer.

I have one question though. Is there a way to render my MSAA texture directly to my backbuffer? Or do I have to convert it to a non-MSAA texture and then draw it as a full-screen quad, as I do currently?

Edit: Okay, I can answer my own question again :) You can draw the MSAA texture directly to the backbuffer just fine. Just make sure you don't use a linear filter or anything similar if you are resizing the texture.

In no particular order, stuff that sticks out--
1) Your backbuffer should not actually be MSAA-enabled. This is what's going to be displayed directly on the monitor, and it has no direct way to use the extra sample data. You'll usually be resolving an 'offscreen' MSAA surface to this directly via ResolveSubresource(), buuut there happens to be another way...

2) With the advent of D3D10+ you can actually do a custom resolve by way of a shader, which is extra useful because you mention a desire to use HDR rendering. (BTW-- use R11G11B10F as your target format, it halves write bandwidth for little to no image quality loss unless you have insane lighting contrast.) I mention this because while a flat average will work just fine for standard LDR images, doing this pre-tonemapping can have a disastrous effect on the final image quality, for much the same reason why naive normal map filtering creates alias city-- you're doing the blending at the wrong time. For an example of what this looks like, grab any early UE3 game (Gears of War or Unreal Tournament 3) and enable AA, then watch in horror as it does basically nothing aside from reduce performance. The short version is that the tonemap operator can produce radically different brightness values for adjacent pixels, and you've already thrown out the extra subsamples in the earlier resolve pass.

The 'proper' way to do things is to grab all your individual MSAA samples using the new MSAA texture feature (Texture2DMS in HLSL; you'll need to create special shaders for each supported sample count, unfortunately, but #defines make this fairly easy), tonemap them all individually, *then* average with some multiply-adds. Emil Persson, a really clever ex-ATi demo guy, has a sample app that does exactly this, available from here. It includes source.

The final, ten-thousand-foot view:
1) Render your scene to an MSAA, HDR surface somehow.
2) Bind your swap chain buffer as a render target, and bind the scene texture to a shader resource slot. You can also merge your volumetric light pass result in with the main stuff here if you'd like. Additionally, if you want to do some extra postprocessing, you can just create another non-MSAA surface and render to that instead. The important bit is that it's not multisampled, not that it's the backbuffer. D3D10 is very cool about this, actually, it was much more of a pain in the ass in 9.
3) Draw a fullscreen triangle that reads all the MSAA samples, tonemaps them, then averages. The aforementioned volumetric light merge can be done a few ways. The cheapest, though not necessarily most correct method would be to tonemap it, tonemap the main scene, then blend those two values somehow. The more correct, but slightly more expensive approach would be to add the volumetric light value into each individual main scene MSAA sample, then tonemap/average the results. ALUs are pretty cheap nowadays so if you're a correctness nutter you can probably get away with either one.
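A sketch of step 3 in HLSL (the sample count is fixed at 4 via a #define, as suggested above; Tonemap() is a placeholder standing in for whatever operator you actually use):

```hlsl
#define SAMPLE_COUNT 4

// The MSAA scene texture; the template sample count must match the
// resource, hence one shader variant per supported count.
Texture2DMS<float4, SAMPLE_COUNT> SceneTex : register(t0);

float3 Tonemap(float3 hdr)
{
    return hdr / (hdr + 1.0); // placeholder Reinhard-style operator
}

float4 PSResolve(float4 pos : SV_Position) : SV_Target
{
    int2 coord = int2(pos.xy);   // pixel coordinate of this fragment
    float3 sum = 0;
    [unroll]
    for (int i = 0; i < SAMPLE_COUNT; i++)
        sum += Tonemap(SceneTex.Load(coord, i).rgb); // tonemap each subsample
    return float4(sum / SAMPLE_COUNT, 1.0);          // then average
}
```

Note the ordering: each subsample is tonemapped before the average, which is exactly the difference from a plain ResolveSubresource() followed by tonemapping.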

EDIT: Aaand you ninja'd me, though I suppose this answers your new question too :)
