# DX11 Getting bitmap from multisampled backbuffer

## Recommended Posts

jamesxli    303

Hi Everyone,

In my desktop application (using SlimDX & DX11) I need to take a snapshot of a rendered window. I use the following code to save a bitmap of the current back buffer:

    var context = device.ImmediateContext;
    var srcTex = renderTarget.Resource as Texture2D;

    // Describe a staging copy of the back buffer.
    var desc = srcTex.Description;
    desc.BindFlags = BindFlags.None;
    desc.Usage = ResourceUsage.Staging;
    desc.SampleDescription = new SampleDescription(1, 0);
    Texture2D staging = new Texture2D(device, desc);

    context.CopyResource(srcTex, staging);
    Texture2D.ToFile(context, staging, ImageFileFormat.Png, "c:/Temp/Screenshot.png");


This code works fine if my renderTarget is not multisampled. If the render target is multisampled (as needed for anti-aliasing), I get a blank image. Does anybody know a way to get a bitmap snapshot from a multisampled render target?

unbird    8336

You need to use [ResolveSubresource](<http://msdn.microsoft.com/en-us/library/windows/desktop/ff476474(v=vs.85).aspx>) first.

jamesxli    303

Thank you very much for the quick response. It solves the problem completely. Just for people who might encounter the same problem, here is my code to get a bitmap from the current back buffer:

    public Bitmap GetBitmap() {
        var ctx = device.ImmediateContext;
        var srcTex = renderTarget.Resource as Texture2D;

        // Describe a non-multisampled texture to resolve the MSAA target into.
        var desc = srcTex.Description;
        desc.BindFlags = BindFlags.None;
        desc.SampleDescription = new SampleDescription(1, 0);
        desc.Usage = ResourceUsage.Default;
        desc.CpuAccessFlags = CpuAccessFlags.None;
        Texture2D resolved = new Texture2D(device, desc);

        // Describe a CPU-readable staging copy of the resolved texture.
        desc.Usage = ResourceUsage.Staging;
        desc.CpuAccessFlags = CpuAccessFlags.Read;
        Texture2D staging = new Texture2D(device, desc);

        // Collapse the MSAA samples, then copy into the mappable texture.
        ctx.ResolveSubresource(srcTex, 0, resolved, 0, desc.Format);
        ctx.CopyResource(resolved, staging);

        using (Surface surface = staging.AsSurface()) {
            DataRectangle rect = surface.Map(SlimDX.DXGI.MapFlags.Read);
            using (var view = new Bitmap(desc.Width, desc.Height,
                    rect.Pitch, PixelFormat.Format32bppArgb, rect.Data.DataPointer)) {
                // 'view' wraps the mapped GPU memory directly, so clone it
                // into a self-contained Bitmap before unmapping.
                Bitmap bm = new Bitmap(view);
                surface.Unmap();
                resolved.Dispose();
                staging.Dispose();
                return bm;
            }
        }
    }

Edited by jamesxli


### Similar Content

• By isu diss
I'm trying to code the Rayleigh part of Nishita's model (Display Method of the Sky Color Taking into Account Multiple Scattering). I get a black screen, no colors. Can anyone find the issue for me?
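
A quick sanity check for black-sky bugs is to test the Rayleigh building blocks in isolation. Below is a minimal HLSL sketch of the phase function and a single in-scattering sample; the function names are hypothetical, and the constants are commonly quoted sea-level values, not anything taken from the original code.

    // Rayleigh pieces of Nishita's sky model -- a hedged sketch, not the
    // poster's shader. BetaR holds commonly quoted sea-level scattering
    // coefficients for roughly 680/550/440 nm wavelengths, in 1/m.
    static const float  PI    = 3.14159265f;
    static const float3 BetaR = float3(5.8e-6f, 13.5e-6f, 33.1e-6f);
    static const float  ScaleHeightR = 8000.0f; // metres

    // Rayleigh phase function: 3/(16*pi) * (1 + cos^2(theta)).
    float PhaseRayleigh(float cosTheta)
    {
        return 3.0f / (16.0f * PI) * (1.0f + cosTheta * cosTheta);
    }

    // In-scattering contribution of one sample point along the view ray.
    // 'transmittance' is exp(-BetaR * (opticalDepthToSun + opticalDepthToEye)).
    float3 RayleighInscatter(float height, float cosTheta,
                             float3 transmittance, float segmentLength)
    {
        float density = exp(-height / ScaleHeightR);
        return BetaR * density * PhaseRayleigh(cosTheta) * transmittance * segmentLength;
    }

If these terms return sensible non-zero values for a daytime sun angle, the problem is more likely in the optical-depth integration or in how the result reaches the render target.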

• By Endurion
I have a gaming framework with a renderer interface, with implementations for DX8, DX9 and, latest, DX11. Both DX8 and DX9 use the fixed-function pipeline, while DX11 obviously uses shaders. I've got most of the parts working fine, as in I can switch renderers and notice almost no difference. The most advanced feature is two directional lights with a single texture.
My last problem is lighting; although there's documentation on the D3D lighting model, I still can't get the behaviour right. My mistake shows most prominently on the dark side opposite the lights. I'm pretty sure the ambient calculation is off, but that one is supposed to be the simplest and should be hard to get wrong.
Interestingly, I've been searching high and low and have yet to find a resource that shows how to build an HLSL shader where diffuse, ambient and specular are used together with material properties. I've got various shaders for all the variations I'm supporting. I stepped through the shader with the graphics debugger, and the calculation seems to do what I want; I'm just not sure the formula is correct.
This one should suffice though: it's doing two directional lights, texture modulated with vertex color, and a normal. Maybe someone can spot one (or more) mistakes. And yes, this is in the vertex shader and I'm aware lighting will be as "bad" as in fixed function; that's my goal currently.
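
For comparison, here is a minimal sketch of the fixed-function-style lighting sum in an HLSL vertex shader. This is not the shader from the post; all cbuffer and struct names are made up. The two details that most often break the dark side are clamping N·L at zero and adding ambient unconditionally, never scaled by N·L:

    // Fixed-function-style lighting for two directional lights, done in the
    // vertex shader. All names are hypothetical.
    cbuffer PerObject : register(b0)
    {
        float4x4 WorldViewProj;
        float3x3 WorldIT;        // inverse-transpose world matrix, for normals
        float4   GlobalAmbient;
        float3   LightDir[2];    // normalized direction the light travels
        float4   LightDiffuse[2];
        float4   MatAmbient;
        float4   MatDiffuse;
    };

    struct VSIn  { float3 pos : POSITION; float3 nrm : NORMAL; float4 col : COLOR0; float2 uv : TEXCOORD0; };
    struct VSOut { float4 pos : SV_Position; float4 col : COLOR0; float2 uv : TEXCOORD0; };

    VSOut VS(VSIn v)
    {
        VSOut o;
        o.pos = mul(float4(v.pos, 1.0f), WorldViewProj);
        float3 n = normalize(mul(v.nrm, WorldIT));

        // Ambient is added once and never multiplied by N.L -- this is what
        // keeps the side facing away from the lights from going pitch black.
        float3 c = MatAmbient.rgb * GlobalAmbient.rgb;

        for (int i = 0; i < 2; ++i)
        {
            // Clamp the dot product so back-facing geometry gets zero
            // diffuse instead of negative light.
            float ndotl = max(dot(n, -LightDir[i]), 0.0f);
            c += MatDiffuse.rgb * LightDiffuse[i].rgb * ndotl;
        }

        // Modulate with the vertex color; the texture multiply happens in
        // the pixel shader.
        o.col = float4(saturate(c), MatDiffuse.a) * v.col;
        o.uv  = v.uv;
        return o;
    }

Specular follows the same per-light pattern, typically pow(max(dot(N, H), 0), MatPower) with the Blinn half-vector, added inside the loop.
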
• By Mercesa
Hey folks. I'm having this problem in which, if my camera is close to a surface, the SSAO pass suddenly spikes to taking around 16 milliseconds.
When still looking towards the same surface, but from farther away, the framerate resolves itself and becomes regular again.
This happens with ANY surface of my model; I am a bit clueless as to what could cause this. Any ideas?
In the attached image: the y axis is time in ms, the x axis is the current frame. The dips in SSAO milliseconds are when I moved away from the surface; the peaks happen when I am very close to the surface.

Edit: I've done some more in-depth profiling with Nvidia Nsight. These are the facts from my results:
The count of command buffers goes from 4 (far away from the surface) to ~20 (close to the surface).
The command buffer duration in % goes from around ~30% to ~99%.
Sometimes the CPU duration takes up to 0.016 to 0.03 milliseconds per frame, while it usually takes around 0.002 milliseconds.
I am using a vertex shader which generates my full-screen quad, and afterwards I do my SSAO calculations in my pixel shader. Could this be a GPU driver bug? I'm a bit lost myself. It seems there could be a CPU/GPU resource stall, but why would the number of command buffers vary depending on distance from a surface?

Edit 2: Any resolution above 720p starts to have this issue, and I am fairly certain my SSAO is not so performance-heavy that it would fall apart at slightly higher resolutions.
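
For reference, the usual vertex-buffer-less fullscreen pass looks like the sketch below; this is a generic version of the technique described, not the actual shader from the post. If the spike persists even with the pixel shader cut down to a constant output, the quad generation can be ruled out and the cost is in the SSAO sampling or in a resource stall:

    // Generic fullscreen-triangle vertex shader driven purely by
    // SV_VertexID (no vertex buffer bound); drawn with Draw(3, 0).
    struct VSOut
    {
        float4 pos : SV_Position;
        float2 uv  : TEXCOORD0;
    };

    VSOut FullscreenVS(uint id : SV_VertexID)
    {
        VSOut o;
        // Vertex ids 0,1,2 map to uv (0,0), (2,0), (0,2): one oversized
        // triangle that covers the whole screen after clipping.
        o.uv  = float2((id << 1) & 2, id & 2);
        o.pos = float4(o.uv.x * 2.0f - 1.0f, 1.0f - o.uv.y * 2.0f, 0.0f, 1.0f);
        return o;
    }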

• In DirectX 11 we have a 24-bit integer depth + 8-bit stencil format for depth-stencil resources (DXGI_FORMAT_D24_UNORM_S8_UINT). However, in AMD GPU documentation for consoles I have seen it mentioned that internally this format is implemented as a 64-bit resource, with 32 bits for depth (but truncated to 24 bits) and 32 bits for stencil (truncated to 8 bits). AMD recommends instead using a 32-bit floating-point depth buffer with 8-bit stencil, which is this format: DXGI_FORMAT_D32_FLOAT_S8X24_UINT.
Does anyone know why this is? What is the usual way of doing this: just follow the recommendation and use a 64-bit depth-stencil? Are there performance considerations, or is it just recommended so as not to waste memory? And what about Nvidia and Intel: is using a 24-bit depth buffer relevant on their hardware?
Cheers!

• By gsc
Hi! I am trying to implement simple SSAO postprocess. The main source of my knowledge on this topic is that awesome tutorial.
But unfortunately something doesn't work... And after a few long hours I need some help. Here is my hlsl shader:
    float3 randVec = _noise * 2.0f - 1.0f;  // noise: vec: {[0;1], [0;1], 0}
    float3 tangent = normalize(randVec - normalVS * dot(randVec, normalVS));
    float3 bitangent = cross(tangent, normalVS);
    float3x3 TBN = float3x3(tangent, bitangent, normalVS);

    float occlusion = 0.0;
    for (int i = 0; i < kernelSize; ++i)
    {
        float3 samplePos = samples[i].xyz;  // samples: {[-1;1], [-1;1], [0;1]}
        samplePos = mul(samplePos, TBN);
        samplePos = positionVS.xyz + samplePos * ssaoRadius;

        float4 offset = float4(samplePos, 1.0f);
        offset = mul(offset, projectionMatrix);
        offset.xy /= offset.w;
        offset.y = -offset.y;
        offset.xy = offset.xy * 0.5f + 0.5f;

        float sampleDepth = tex_4.Sample(textureSampler, offset.xy).a;
        sampleDepth = vsPosFromDepth(sampleDepth, offset.xy).z;

        const float threshold = 0.025f;
        float rangeCheck = abs(positionVS.z - sampleDepth) < ssaoRadius ? 1.0 : 0.0;
        occlusion += (sampleDepth <= samplePos.z + threshold ? 1.0 : 0.0) * rangeCheck;
    }
    occlusion = saturate(1 - (occlusion / kernelSize));

And the current result: http://imgur.com/UX2X1fc
I will really appreciate any advice!
