How to use D2D with D3D11?

Quote:Original post by JB2009
Is there any way to put these requests/questions to MS's DirectX team?


As a member of the DX team, I can tell you that the issues are already known and well understood.
Quote:DieterVW: As a member of the DX team, I can tell you that the issues are already known and well understood.


DieterVW,

Many thanks - I did not know that you were a member of the DX team (though I guessed you might be).

Quote:DieterVW: You will likely get quite a bit of use from this method before something else comes along.


Is there any more information available at this stage? At this point I don't know whether we are talking weeks, months or years. (For project planning purposes, it would have been helpful if the Aug 2009 SDK included a list of what was not yet completed or working. It seems silent on the subject).

JB.
Quote:DieterVW: Can you check the error message in the exception that is thrown when using the debug device? From the sounds of it, something in the D3D11 pipeline is set incorrectly (possibly NULL, or maybe a random value). The debug device will throw an exception when it hits an egregious error, often an unexpected NULL. The exception should contain information about the error. Alternatively, you could use ID3D11Debug or ID3D11InfoQueue to retrieve the error messages. Without the debug device, D3D will just ignore the draw call, since it detected an error when validating the pipeline.


DieterVW,

The problem I am having is that after using GDI (specifically, after calling GetDC and ReleaseDC on the IDXGISurface1 associated with the D3D11 back buffer) during a particular frame, no further rendering is possible: exceptions are raised by the rendering call. Previously I stated that the D2D1 compositing failed after a GDI call, but further investigation revealed that all rendering fails after the call to GetDC.

If ID3D11Debug.ValidateContext is called immediately prior to rendering, it throws the same exception (see below) that Direct3DDeviceContext.DrawIndexed raises.

The exception reports "Access violation at address 642017B5 in module 'D3D11SDKLayers.dll'. Read of address 0000004C." Nothing is written to the debugger Output window when the exception occurs. With the D3D11 call that throws the exception in a try...finally, D3D11InfoQueue.GetNumStoredMessages reports no messages.
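For reference, the check I'm doing is roughly the following (sketched in C++; "device" stands for my debug-enabled ID3D11Device, and error handling is omitted):

    #include <windows.h>
    #include <d3d11.h>
    #include <d3d11sdklayers.h>
    #include <cstdio>
    #include <vector>

    // windows.h defines GetMessage as a macro, which clashes with
    // ID3D11InfoQueue::GetMessage.
    #undef GetMessage

    void DumpInfoQueue(ID3D11Device* device)
    {
        ID3D11InfoQueue* queue = NULL;
        if (FAILED(device->QueryInterface(__uuidof(ID3D11InfoQueue),
                                          reinterpret_cast<void**>(&queue))))
            return; // device was not created with D3D11_CREATE_DEVICE_DEBUG

        const UINT64 count = queue->GetNumStoredMessages();
        for (UINT64 i = 0; i < count; ++i)
        {
            // First call reports the required size; second call fills the message.
            SIZE_T length = 0;
            queue->GetMessage(i, NULL, &length);
            std::vector<char> storage(length);
            D3D11_MESSAGE* msg = reinterpret_cast<D3D11_MESSAGE*>(&storage[0]);
            queue->GetMessage(i, msg, &length);
            printf("D3D11 message %u: %.*s\n",
                   static_cast<unsigned>(msg->ID),
                   static_cast<int>(msg->DescriptionByteLength),
                   msg->pDescription);
        }
        queue->ClearStoredMessages();
        queue->Release();
    }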

The DirectX SDK is Aug 2009.

I am currently investigating why the problem does not occur in the next frame if the GetDC call is the last D3D operation in the frame rendering (as opposed to being followed by other rendering). I've tried clearing ("ClearState") and resetting the device context state after the call to GetDC but that didn't help.

There is always a chance that the problem is due to an error on my part. However it is odd (and troubling) that D3D is not diagnosing/reporting any errors.

Help would be appreciated. Strictly speaking this is no longer a D2D1 interop issue, as D2D1 is not involved in this problem, but I've raised it here because it follows on from your diagnosis advice.

JB.

Edit: Calling "Direct3DDeviceContext.OMSetRenderTargets(1,&RenderTargetView,RenderTargetDepthStencilView)" after the GetDC/ReleaseDC and before any subsequent rendering solves the problem (i.e. no exception and rendering works). GetDC is causing the render target to be set to NULL (confirmed with Direct3DDeviceContext.OMGetRenderTargets).

GetDC is giving rise to debug info message: STATE_SETTING INFO #49: OMSETRENDERTARGETS_UNBINDDELETINGOBJECT "OMSetRenderTargets: Forcing OM Render Target slot 0 to NULL, since the resource is marked D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX, but has not been acquired via IDXGIKeyedMutex::AcquireSync".

I have removed all D2D1 code, and I am not explicitly using the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag. The only flag set for the back buffer is DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE.

There is something very unpleasant going on in D3D, because if, after calling GetDC/ReleaseDC, I call Direct3DDeviceContext.OMGetRenderTargets and use the returned values (i.e. a NULL render target and a valid depth buffer) in a call to Direct3DDeviceContext.OMSetRenderTargets (or simply set the render target to NULL), the exception does not occur (though of course rendering does not occur either).

JB.

[Edited by - JB2009 on September 30, 2009 11:58:00 AM]
The behavior you describe seems to be by design. The DXGI documentation for GetDC/ReleaseDC says:
----------------------------------------------------------------------------
After you use the GetDC method to retrieve a DC, you can render to the DXGI surface using GDI. The GetDC method readies the surface for GDI rendering and allows interoperation between DXGI and GDI technologies.

Keep the following in mind when using this method:

*You must create the surface using the D3D10_RESOURCE_MISC_GDI_COMPATIBLE flag for a surface or use the DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE flag for swap chains, otherwise this method will fail.
*You must release the device and call the IDXGISurface1::ReleaseDC method before issuing any new Direct3D commands.
*This method will fail if an outstanding DC has already been created by this method.
*The format for the surface or swapchain must be DXGI_FORMAT_B8G8R8A8_UNORM_SRGB or DXGI_FORMAT_B8G8R8A8_UNORM.
*On GetDC, the render target in the output merger of the Direct3D pipeline is unbound from the surface. OMSetRenderTargets must be called on the device prior to Direct3D rendering after GDI rendering.
*Prior to resizing buffers you must release all outstanding DCs.
----------------------------------------------------------------------------
So, when you are using the surface with GDI, the call will unbind the D3D render target. This is not much different from binding the render target for reading in D3D -- the API would automatically unbind it from the render target slot.

It's up to you to track when you've released the DC for the render target's surface; once you have, you can rebind the render target along with the depth stencil. GDI requires exclusive access to the surface.
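Roughly, the round trip looks like this (a sketch only; swapChain, context, rtv, and dsv stand in for your own objects, and error handling is omitted):

    // Assumes the swap chain was created with DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE
    // and a B8G8R8A8 format.
    ID3D11Texture2D* backBuffer = NULL;
    swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D),
                         reinterpret_cast<void**>(&backBuffer));

    IDXGISurface1* surface = NULL;
    backBuffer->QueryInterface(__uuidof(IDXGISurface1),
                               reinterpret_cast<void**>(&surface));

    HDC hdc = NULL;
    surface->GetDC(FALSE, &hdc);           // unbinds OM render target slot 0
    TextOutW(hdc, 10, 10, L"GDI text", 8); // any GDI drawing
    surface->ReleaseDC(NULL);              // hand the surface back to D3D

    // GetDC forced the render target to NULL, so rebind before drawing again.
    context->OMSetRenderTargets(1, &rtv, dsv);

    surface->Release();
    backBuffer->Release();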
DieterVW,

Many thanks for that. I confess that I missed the OMSetRenderTargets comment in the GetDC documentation. I'm now clearing the render target before using GDI, and setting it again afterwards.

It does seem that when GetDC unbinds the render target, it doesn't do so properly - hence the exception during rendering. A properly unbound render target would not result in exceptions being thrown during rendering.

I'm unhappy with COM interface methods throwing exceptions in situations like this - particularly access violations, which, except in obvious situations such as invalid method parameters, are quite opaque. It would be better if the API gave some hint as to the problem (e.g. a comment about the state of the render target). I fear you might tell me that I should have been more observant of debug info message #49! Nevertheless, I would appreciate it if you would forward the issue to the relevant person on the DirectX team. It is quite possible to write graphics libraries that are sufficiently well behaved (particularly in debug mode) that you don't have to study every word of the documentation.

Thanks again.

JB.
Keep in mind that this exception is raised by the SDKLayers because you are using the debug device. Without the debug device there won't be any such exceptions, just silent failures where the pipeline doesn't execute the draw.
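For reference, the debug device is simply the one created with the debug flag (a sketch; the variable names are placeholders):

    #include <d3d11.h>

    ID3D11Device*        device  = NULL;
    ID3D11DeviceContext* context = NULL;
    D3D_FEATURE_LEVEL    featureLevel;

    UINT flags = 0;
    #if defined(_DEBUG)
    flags |= D3D11_CREATE_DEVICE_DEBUG; // routes errors through the SDKLayers
    #endif

    D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, flags,
                      NULL, 0, // default feature level list
                      D3D11_SDK_VERSION,
                      &device, &featureLevel, &context);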

If nothing else, the exception-on-error behavior prevents the developer from overlooking the issue, and in fact makes it very clear where the problem lies.

It would seem, however, that the SDKLayers is not working correctly with the GetDC() method since having a NULL render target is fine.

Quote: At this point I don't know whether we are talking weeks, months or years. (For project planning purposes, it would have been helpful if the Aug 2009 SDK included a list of what was not yet completed or working. It seems silent on the subject).

JB.


The August SDK is the completed release for the runtime, meaning it is no longer beta. The interop of D2D with D3D11 cannot be changed by a new SDK release, since the runtime ships in the OS. This of course doesn't rule out D3DX changes in the future.

[Edited by - DieterVW on October 2, 2009 12:47:06 PM]
Quote:Original post by DieterVW
The August SDK is the completed release. The interop compatibility of D2D with D3D11 cannot be changed with a new SDK release.


Is there a chance that there will be a new D3D11-backbuffer-compatible D2D beta/release sometime in the near future?
Quote:DieterVW: The August SDK is the completed release. The interop compatibility of D2D with D3D11 cannot be changed with a new SDK release.


DieterVW,

I am bewildered by this statement. Please expand on the reasoning behind it. Surely Direct2D (though not necessarily D2D1 - maybe a later version) should be fully compatible with D3D11, as it was with D3D10.1? With D3D10.1 you can draw D2D1 content directly to the back buffer, but as we have discussed at length here, you cannot do this with D3D11. Was this functionality removed deliberately (even if it is slow or incompatible with MSAA etc., it should still be available as an option), or did MS not get the coding finished in time, so that now it is too late to fix?

JB.

[Edited by - JB2009 on October 1, 2009 9:29:00 PM]
I've done some preliminary speed tests to see whether the D3D11+D2D1 workaround using a shared texture carries a significant speed penalty. The result: rendering D2D1 to a shared texture with D3D11 is no slower than rendering D2D1 directly onto the back buffer of a D3D10.1 device, but D2D1 appears to be very slow in both situations.
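For reference, the shared-texture setup being timed is roughly the following (sketched in C++; d3d11Device, d3d10Device and d2dFactory stand for my own devices and factory, and error handling is omitted):

    // Requires d3d11.h, d3d10_1.h, d2d1.h.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = 501;
    desc.Height           = 501;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags        = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;

    ID3D11Texture2D* tex11 = NULL;
    d3d11Device->CreateTexture2D(&desc, NULL, &tex11);

    // Open the same texture on the D3D10.1 device.
    IDXGIResource* dxgiRes = NULL;
    tex11->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiRes);
    HANDLE shared = NULL;
    dxgiRes->GetSharedHandle(&shared);

    ID3D10Texture2D* tex10 = NULL;
    d3d10Device->OpenSharedResource(shared, __uuidof(ID3D10Texture2D), (void**)&tex10);

    // Wrap the 10.1 side in a D2D render target.
    IDXGISurface* dxgiSurface = NULL;
    tex10->QueryInterface(__uuidof(IDXGISurface), (void**)&dxgiSurface);
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_HARDWARE,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
    ID2D1RenderTarget* d2dRT = NULL;
    d2dFactory->CreateDxgiSurfaceRenderTarget(dxgiSurface, &props, &d2dRT);

    // Keyed mutexes serialise access from the two devices.
    IDXGIKeyedMutex* mutex10 = NULL;
    IDXGIKeyedMutex* mutex11 = NULL;
    tex10->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex10);
    tex11->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex11);

    // Per frame: draw the text with D2D, then composite the texture in D3D11.
    mutex10->AcquireSync(0, INFINITE);
    d2dRT->BeginDraw();
    d2dRT->Clear(D2D1::ColorF(0.0f, 0.0f, 0.0f, 0.0f)); // clear to transparent
    // ... DrawText / DrawTextLayout calls here ...
    d2dRT->EndDraw();
    mutex10->ReleaseSync(1);

    mutex11->AcquireSync(1, INFINITE);
    // ... draw a textured quad over the D3D11 back buffer using tex11 ...
    mutex11->ReleaseSync(0);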

Preliminary text drawing speed test results:

-> 25 sentences of small text (25 characters each)
-> 501x501 pixel window
-> Using ID2D1RenderTarget.DrawText
-> Not using debug device(s).

D3D9: Using D3DXFONT: 0.3 ms.

D3D10.1: Using D2D1 (in hardware mode) to render directly to the back buffer: 5.0 ms.

D3D11: Using D2D1 (in hardware mode) to render to a shared texture using a D3D10.1 device, then compositing the shared texture to D3D11 (in 10.1 feature mode, to a 10.1 graphics card): 5.0 ms.

Notes:

1) Times increase for a larger window and larger fonts.
2) Using ID2D1RenderTarget.DrawTextLayout is faster (approx 1.0 ms as opposed to 5.0 ms), but this is only helpful if the text is not dynamic. The 1.0 ms may be the minimum time lost whenever D2D1 is used.

-------------------------------

In these results, the D3D11 "via shared texture" approach is not slower than the D3D10.1 approach. With a larger window (e.g. 1680x1050) the D3D11 approach is slightly slower - probably due to (a) the clearing of the shared surface to transparent, and (b) the compositing.

However, although D2D1 offers a great deal of flexibility when compared to GDI or D3DXFONT, these speeds seem poor.

The 1 to 3 ms "minimum delay" when using D2D1 was, earlier in this discussion, thought to be due to the mutexes, but this may not be so. It seems that any use of D2D1 takes a few milliseconds; the same delay occurs with a D3D10.1 device and no (explicit) mutexes. What is happening to use up this amount of time?

With D3D11+D2D1, I've tried not waiting for the text drawing to complete (i.e. compositing in the next frame rather than the current frame), but this does not eliminate the delay.

For one of our (main) applications, I'm currently estimating up to 10.0 ms spent drawing text (based on a mockup with the same screen size, same font sizes and similar amounts of text) if we use D2D1, as opposed to around 1 ms with D3D9. This would reduce the framerate from 25 fps to 20 fps, which takes us from (just) acceptable to not acceptable for our application. Caching (e.g. creating TextLayouts only when new text is identified) is not a complete solution in our situation, as most of the text is dynamic.

In addition, small D2D1 text is more blurred than that from GDI or D3DXFont, and I've not found any options to fix it.

JB.

[Edited by - JB2009 on October 2, 2009 4:08:16 AM]
More information about API interop is available in blog posts on the DirectX Blog. In particular, there are relevant parts in these two places:
first
second

D2D was built on the D3D10 API: it obtains a 10 device by querying it from the resource used as a render target, and this is the primary design choice. Interop is available for other devices, such as D3D11, as we've discussed at length. As you've found, there is little performance difference between using D2D with either set of 3D APIs. So the question becomes: what technical problem are we solving for the future? As mentioned in the DX blog, it's great to get feedback on what problems exist going forward.

The SDK contains the headers, but the runtime is shipped in the OS.

If you can place the D2D portion of your application on another thread, would that help hide any latency that would otherwise be added to your main render loop?

Through profiling, can you see where the time is spent? Is GPU time or CPU time accounting for the 5 ms?

Creating TextLayouts as early as possible is the best approach. They have to be computed anyway, and not caching them is a waste. Even if a layout is discarded often because of dynamic content, odds are it was still used for quite a few frames. So find ways to create and cache TextLayouts whenever possible.
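A minimal cache might look like this (a sketch only; dwriteFactory and textFormat are placeholders for your own objects, and lifetime management is elided):

    #include <dwrite.h>
    #include <map>
    #include <string>

    std::map<std::wstring, IDWriteTextLayout*> layoutCache;

    IDWriteTextLayout* GetLayout(IDWriteFactory* dwriteFactory,
                                 IDWriteTextFormat* textFormat,
                                 const std::wstring& text,
                                 float maxWidth, float maxHeight)
    {
        std::map<std::wstring, IDWriteTextLayout*>::iterator it = layoutCache.find(text);
        if (it != layoutCache.end())
            return it->second; // reuse the already-computed layout

        IDWriteTextLayout* layout = NULL;
        dwriteFactory->CreateTextLayout(text.c_str(),
                                        static_cast<UINT32>(text.size()),
                                        textFormat, maxWidth, maxHeight,
                                        &layout);
        layoutCache[text] = layout; // the cache owns one reference
        return layout;
    }

    // Per frame: renderTarget->DrawTextLayout(origin, GetLayout(...), brush);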

This topic is closed to new replies.
