daVinci

How to use D2D with D3D11?


Quote:
Original post by JB2009
Remaining D2D1/D3D11 interop issues:

1) My biggest concern is the time that D2D1 is adding to the frame time. I'm seeing 1.5 to 2.7 ms additional time (depending on window size) over the equivalent D3D9 approaches. I haven't yet established whether this is "per window" for multiple window applications. The additional time is incurred as soon as the ID2D1 BeginDraw/EndDraw are added (i.e. without actually drawing anything). The amount of drawing only has a very small effect on the amount of time.

For my application this is time I cannot afford to lose. Caching the 2D content is not an option for me because at least some of the text changes every frame (and even a single item of text incurs the full time overhead).

The only way to avoid this is to run the D2D content asynchronously - which would result in the composite happening a few frames late. This shouldn't actually be noticeable to a user.

Quote:
An important question for me is: Will this problem continue to exist when (if?) D2D1 becomes compatible with D3D11? (And also, was this a problem with D2D1 and D3D10.1?).

You shouldn't have this issue with D3D10 since all rendering will be against the same device (meaning you don't need mutex sharing).

Quote:
2) If I add both GDI content (i.e. using IDXGISurface1.GetDC) AND D2D1 (using the DieterVW method), I either get an exception when drawing the D2D1 Quad to the D3D11 back buffer, or the GDI content appears but not the D2D1. Can they work together? Is it to do with the key values used with the mutexes? I'm updating a D3D9 library to D3D11, and cannot prevent GDI and text (via D2D1) being used together.

You can control the order of rendering to a shared surface using the mutex. If the order is appearing incorrect, then the algorithm you're using probably isn't working. Example: The initial draw locks the mutex with zero, and then releases it with 1. The next device you want to win the lock needs to be requesting the lock with a 1. Proceed in this manner, incrementing the release/lock for each transition in drawing.

Everything should be fine, provided the resource was flagged at creation time as GDI compatible. Also, you can't get a swap chain to create a back buffer that uses a mutex; you'd need a new render target created by D3D11. (Perhaps you already did this, though I can't tell.)

The resource types that can be shared are a really thin slice, and get thinner every time you add a functionality flag. I don't know enough about GDI to advise you on the best path to take to make this all work.

Quote:
DieterVW wrote: Everything should be fine, provided the resource was flagged at creation time as GDI compatible. Also, you can't get a swap chain to create a back buffer that uses a mutex; you'd need a new render target created by D3D11. (Perhaps you already did this, though I can't tell.)

I'm writing to the D3D11 backbuffer using GDI. The backbuffer is created with the DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE flag. I do not use a mutex explicitly, as the resource is not shared, though from clues in "Output", I suspect a mutex is used internally.

I also have a shared resource (Texture2D) that is written to by D2D1 (with a mutex), and read from by D3D11 during compositing (also using a mutex).

(I could instead write to the shared resource using GDI, but this would break compatibility with existing code).


Pseudo code:

DrawToD3D11BackBufferUsingGDI(); // This gets drawn correctly.

KeyedMutexForSharedResource_10_1.AcquireSync(0,INFINITE);
try
RenderTargetForDirect2D.BeginDraw();
RenderTargetForDirect2D.Clear(...);
RenderTargetForDirect2D.DrawText(...);
RenderTargetForDirect2D.EndDraw();
finally
KeyedMutexForSharedResource_10_1.ReleaseSync(0);
end;

KeyedMutexForSharedResource_11.AcquireSync(0,INFINITE);
try
DrawSharedResourceToD3D11BackBuffer(); // Either this does nothing (without D3D debugging), or an exception is thrown (with D3D debugging enabled).
finally
KeyedMutexForSharedResource_11.ReleaseSync(0);
end;

Note that all mutex keys are zero. With only two users of the shared resource, I don't understand the merit in other options.

The presence of the GDI drawing to the D3D11 backbuffer is somehow preventing the compositing from working.

With D3D debugging enabled, the Direct3DDeviceContext.DrawIndexed throws an exception, with no message in "Output". Without debugging, it does nothing.

JB.

I tried that same thing, and it works fine for me, drawing with D2D to a render target texture and with GDI directly to the D3D11 back-buffer, blending the D2D texture on top after.

I noticed there is also the possibility of using a DC render target with D2D, calling BindDC() to bind it to the GDI-compatible DC obtained for the D3D11 back buffer. When updating a large area of the screen with D2D this way, however, performance drops significantly, and the best performance is with software D2D rendering. (I guess the performance is then about the same as with GDI drawing.)
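In outline, the BindDC approach described above looks roughly like this. This is a sketch only (Windows-only, error handling omitted, not compiled here); `factory`, `swapChain`, `dcRT`, `width`, and `height` are assumed to exist, and the swap chain is assumed to have been created with DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE.

```cpp
// Sketch: create a D2D DC render target and bind it to the HDC of a
// GDI-compatible D3D11 back buffer.
D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,   // or _SOFTWARE, reported fastest above
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM,
                      D2D1_ALPHA_MODE_PREMULTIPLIED));
ID2D1DCRenderTarget* dcRT = NULL;
factory->CreateDCRenderTarget(&props, &dcRT);

IDXGISurface1* surface = NULL;
swapChain->GetBuffer(0, __uuidof(IDXGISurface1), (void**)&surface);

HDC hdc = NULL;
surface->GetDC(FALSE, &hdc);           // FALSE = preserve existing contents
RECT rc = { 0, 0, width, height };
dcRT->BindDC(hdc, &rc);

dcRT->BeginDraw();
// ... DrawText etc. ...
dcRT->EndDraw();

surface->ReleaseDC(NULL);              // release the DC before Present
```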

Remember that in D2D and D3D the commands are queued, and run asynchronously from the thread that is calling the API. In your case, with two devices, they are probably fighting over who gets the lock first, since both are asking for lock zero. You may or may not see this error, but certainly there are machines out there that will hit this bug and flicker wildly due to out-of-order rendering.

Release the D2D lock with 1, and acquire the D3D lock with 1; that way you've forced an order.
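Applied to the pseudocode earlier in the thread, that suggestion changes only the key values (a sketch, following JB's Delphi-style pseudocode and names):

```
KeyedMutexForSharedResource_10_1.AcquireSync(0,INFINITE); // frame starts at key 0
try
  RenderTargetForDirect2D.BeginDraw();
  RenderTargetForDirect2D.Clear(...);
  RenderTargetForDirect2D.DrawText(...);
  RenderTargetForDirect2D.EndDraw();
finally
  KeyedMutexForSharedResource_10_1.ReleaseSync(1); // release with 1 ...
end;

KeyedMutexForSharedResource_11.AcquireSync(1,INFINITE); // ... so D3D11 must wait for D2D
try
  DrawSharedResourceToD3D11BackBuffer();
finally
  KeyedMutexForSharedResource_11.ReleaseSync(0); // back to 0 for the next frame
end;
```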

I'm trying to combine Direct3D 11 and Direct2D too, but I'm having trouble with OpenSharedResource. This method always returns E_INVALIDARG. My code looks like this:

D3D11_TEXTURE2D_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D11_TEXTURE2D_DESC));

texDesc.Width = swapChainDesc.BufferDesc.Width;
texDesc.Height = swapChainDesc.BufferDesc.Height;
texDesc.Format = swapChainDesc.BufferDesc.Format;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.SampleDesc.Count = swapChainDesc.SampleDesc.Count;
texDesc.SampleDesc.Quality = swapChainDesc.SampleDesc.Quality;
texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
texDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX; // shareable, with keyed mutex

_pD3D11Device->CreateTexture2D(&texDesc, NULL, &_pD3D11Texture);
_pD3D11Texture->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&_pMutex);

IDXGIResource* pResource;
_pD3D11Texture->QueryInterface(__uuidof(IDXGIResource), (void**)&pResource);

HANDLE hResource;
HRESULT hResult = pResource->GetSharedHandle(&hResource);

// This is the call that fails with E_INVALIDARG:
ID3D10Resource* pTexture;
hResult = _pD3D10Device->OpenSharedResource(hResource, __uuidof(ID3D10Resource), (void**)&pTexture);

I created the Direct3D 11 device with no creation flags. The Direct3D 10.1 device was created with the flag D3D10_CREATE_DEVICE_BGRA_SUPPORT. Any ideas?

I don't think you can share a surface with multi-sampling. Apart from that I use the same code as you, but set the format to DXGI_FORMAT_B8G8R8A8_UNORM.

I don't use multisampling: Count is set to one and Quality to zero. I also tried setting the format of the texture to B8G8R8A8_UNORM, but OpenSharedResource still fails...

Maybe it is a problem with my two devices? I created each device with NULL as the argument for the adapter.

Yes, you need to use the same adapter, NULL doesn't work. I noticed this myself when I first tried it. =)
The documentation says that NULL means the same adapter as the first one returned from IDXGIFactory::EnumAdapters. So use CreateDXGIFactory and EnumAdapters(0, &pAdapter) to obtain the adapter. Or it might work by creating the D3D11 device with NULL and then checking which adapter it uses.
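A sketch of that fix (Windows-only, error handling and cleanup omitted, not compiled here, so treat this as an outline): enumerate adapter 0 explicitly and hand the same IDXGIAdapter to both device-creation calls. With an explicit adapter, D3D11CreateDevice requires D3D_DRIVER_TYPE_UNKNOWN; for D3D10CreateDevice1 I'm assuming D3D10_DRIVER_TYPE_HARDWARE is the right choice.

```cpp
// Sketch: put both devices on the same adapter instead of passing NULL.
IDXGIFactory* factory = NULL;
CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);

IDXGIAdapter* adapter = NULL;
factory->EnumAdapters(0, &adapter);  // the adapter NULL would have implied

// Explicit adapter => driver type must be UNKNOWN for D3D11.
ID3D11Device* d3d11 = NULL;
ID3D11DeviceContext* ctx = NULL;
D3D11CreateDevice(adapter, D3D_DRIVER_TYPE_UNKNOWN, NULL, 0,
                  NULL, 0, D3D11_SDK_VERSION, &d3d11, NULL, &ctx);

// Same adapter for the D3D10.1 device D2D will render through.
ID3D10Device1* d3d10 = NULL;
D3D10CreateDevice1(adapter, D3D10_DRIVER_TYPE_HARDWARE, NULL,
                   D3D10_CREATE_DEVICE_BGRA_SUPPORT,
                   D3D10_FEATURE_LEVEL_10_1, D3D10_1_SDK_VERSION, &d3d10);
```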

Is it absolutely not possible to share the backbuffer between a D3D10.1 and D3D11 device? I have tried all kinds of things and it just won't work. I really hope MS creates a D3D11 version of D2D soon, as I really can't use the render-to-texture-and-blend trick for this program I am working on.

IDXGIResource *DX11Resource = NULL;
EIF(g_pSwapChain->GetBuffer(0, IID_PPV_ARGS(&DX11Resource)));

HANDLE DX11ResourceHandle = NULL;
EIF(DX11Resource->GetSharedHandle(&DX11ResourceHandle));

The GetSharedHandle call fails with DXGI_ERROR_NOT_FOUND, and I tried creating the swapchain in various ways with the DXGI_USAGE_SHARED flag.
