# How to use D2D with D3D11?

## 58 posts in this topic

I write something like the following:

```cpp
// Use the texture to obtain a DXGI surface.
IDXGISurface *pDxgiSurface = NULL;
renderTarget->QueryInterface(&pDxgiSurface);
if (pDxgiSurface == NULL)
    return NULL;

// Create a D2D render target which can draw into the surface.
D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,
    D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED),
    96, 96);
HRESULT result = GetDirect2DFactory()->CreateDxgiSurfaceRenderTarget(
    pDxgiSurface, &props, &renderTargetDirect2D);
```

I get E_NOINTERFACE in result. It seems D2D integrates with D3D10.1 only. How can we render to a D3D11 surface using D2D with minimal effort? The D3D11 device was created with the flag D3D11_CREATE_DEVICE_BGRA_SUPPORT. The render target:

```cpp
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags = 0;
```

##### Share on other sites
In order to do this you will need to run two devices. D2D/DWrite will only work on a D3D10.x device, so you will have to share the resource between the two devices and use synchronization on the resource to control which device has read/write access. Interop with a D3D10 device is easier because you can use the same device for your 3D as for your D2D work and avoid all sync and sharing issues. There is a sample available in D2D showing this.

So, how to do this? There are some format restrictions on resources used with D2D, so it is safest to go with BGRA like you did above, as others may well fail.

Create your D3D10 and D3D11 devices. Create D2D with the D3D10 device you have a handle to. Make sure that you have the DXGI 1.1 interfaces; you need them for the synchronization part. Make sure that the two devices are using the same IDXGIAdapter. You can QueryInterface for the adapter (make sure it's a DXGI 1.1 version) from the D3D11 device and use it to create the D3D10 device.

Create the resource in D3D11 with the flag D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX.

Query Interface the resource for the IDXGIResource interface.

Get the shared handle using the IDXGIResource::GetSharedHandle() API.

Use ID3D10Device::OpenSharedResource() to get the D3D10 version of your resource.

QueryInterface for the IDXGISurface of the D3D10 handle of your resource.

Use the same CreateDxgiSurfaceRenderTarget() you mentioned above to get the D2D handle to the resource.

Now you need the IDXGIKeyedMutex handles for both the D3D11 and D3D10 versions of your shared resource. You will use these to lock the resource to whichever device is doing any read/write operations with it.

I can't recall which DXGI interface you have to Query Interface the mutexes from. It's either the IDXGIResource or the IDXGISurface.
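Pulled together, the steps above might look like the following minimal sketch; error handling is omitted, and the variable names (`device11`, `device10`, `d2dFactory`, `width`, `height`) are assumptions of mine, not from the original posts:

```cpp
// 1. Create the shared texture on the D3D11 device with the keyed-mutex flag.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
ID3D11Texture2D *tex11 = NULL;
device11->CreateTexture2D(&desc, NULL, &tex11);

// 2. Get the shared handle via IDXGIResource.
IDXGIResource *dxgiRes = NULL;
tex11->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiRes);
HANDLE sharedHandle = NULL;
dxgiRes->GetSharedHandle(&sharedHandle);

// 3. Open the same resource on the D3D10.1 device...
ID3D10Texture2D *tex10 = NULL;
device10->OpenSharedResource(sharedHandle, __uuidof(ID3D10Texture2D),
                             (void**)&tex10);

// 4. ...and query its IDXGISurface for D2D.
IDXGISurface *surface10 = NULL;
tex10->QueryInterface(__uuidof(IDXGISurface), (void**)&surface10);

// 5. Create the D2D render target on the D3D10.1 view of the texture.
D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
    96.0f, 96.0f);
ID2D1RenderTarget *d2dRT = NULL;
d2dFactory->CreateDxgiSurfaceRenderTarget(surface10, &props, &d2dRT);
```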

The mutex API is fairly easy to use. It is designed to ensure that the order in which the locks are acquired by each device can be controlled. When you release the lock, you provide a key that indicates who is allowed to get the lock next. This is important since you'll have submitted commands to both devices, so there will likely be several lock requests pending simultaneously for the resource. You won't have any other way to control who accesses the resource, or in what order, since the devices live on their own threads. Each future lock request will have its own key, and the code releasing the lock must indicate which key will win the next lock.
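A per-frame hand-off along those lines might look like this sketch, under the assumption that `mutex10` and `mutex11` are the IDXGIKeyedMutex interfaces queried from the D3D10.1 and D3D11 views of the shared texture:

```cpp
// D2D draws first; the shared resource starts out owned by key 0.
mutex10->AcquireSync(0, INFINITE);
// ... BeginDraw / draw calls / EndDraw on the D2D render target ...
mutex10->ReleaseSync(1);            // whoever requests key 1 wins the next lock

// Now the D3D11 device may read the texture for compositing.
mutex11->AcquireSync(1, INFINITE);
// ... draw the shared texture into the 3D scene ...
mutex11->ReleaseSync(0);            // hand ownership back for the next frame
```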

EDIT: fixed spelling errors.

[Edited by - DieterVW on September 20, 2009 6:50:37 PM]

##### Share on other sites
Some related questions =). (I'm on Vista 64, 8800GT).

Do you know if it will stay like this, or if D2D will work with D3D11 in later SDKs?
Is this only a problem on Vista, or also on Windows 7?

I have noticed another problem with this in the latest SDK, I can't create a D3D10.1 device with BGRA support with feature level 10.0, only with feature level 9.3.

In addition D2D is still slow, even when the device is hardware and 3D is accelerated. I assume it's drawn on the CPU and copied to the device. Is this also a beta/Vista thing?

##### Share on other sites
Quote:
 Original post by Erik Rufelt: Some related questions =). (I'm on Vista 64, 8800GT.) Do you know if it will stay like this, or if D2D will work with D3D11 in later SDKs?

You will likely get quite a bit of use from this method before something else comes along.
Quote:
 Is this only a problem on Vista, or also on Windows 7?

Works the same in both.
Quote:
 I have noticed another problem with this in the latest SDK, I can't create a D3D10.1 device with BGRA support with feature level 10.0, only with feature level 9.3.

This is almost certainly a driver issue. You need support for DXGI 1.1, and must use at least D3D10.1 or D3D11, to have access to BGRA in anything D3D10+.
Quote:
 In addition D2D is still slow, even when the device is hardware and 3D is accelerated. I assume it's drawn on the CPU and copied to the device. Is this also a beta/Vista thing?

EDIT: Formatting

[Edited by - DieterVW on September 21, 2009 12:35:23 PM]

##### Share on other sites
I got it working from your step-by-step guide, and it doesn't seem slow anymore, so that must have been a different issue. Thanks!

I still have to create the D3D10.1 device with feature level 9.x though, or I get E_NOINTERFACE from D3D10CreateDevice1. It's only with the D3D10_CREATE_DEVICE_BGRA_SUPPORT flag that I need to use 9.x (the same thing happens in the interoperability sample); 9.x gets BGRA support automatically even without the flag.
The D3D11 device is created with BGRA support at feature level 10.0. There is DXGI 1.1 support; I query the surfaces with IDXGISurface1, and use an IDXGIAdapter1 when creating the device. There is no problem using those with feature level 10.0 if I don't use the BGRA support flag, but then D2D creation fails. Also, any feature level 9.x D3D10.1 device crashes in OpenSharedResource if the texture is not in BGRA format, but works fine with feature level 10.0 without BGRA support. In the old March 2009 SDK with the beta DLLs, feature level 10.0 worked fine with BGRA support and D2D, though I never tried sharing it with D3D11 then.

##### Share on other sites
Are you saying that D3D10CreateDevice1 fails when you use the flag D3D10_CREATE_DEVICE_BGRA_SUPPORT? This would happen only if your driver hasn't been updated to include support for the flag. I don't know offhand which drivers work, but something fairly recent should, and if not, something in the very near future will. BGRA support is standard in DX9 but was removed in DX10... and then added back in -- right now you only get this with the beta files from the March 2009 SDK. At some point there will be a patch for Vista pushing these updates. The change is, however, part of Windows 7.

In case anyone tries it: WARP cannot do resource sharing, and neither can REF.

BGRA might be the only supported resource type for sharing with D2D. If you search the whole space you might get lucky and find something else that works but I'm not aware of anything. I think D2D is entirely restricted to using BGRA surfaces as render targets.

It sounds like you're doing everything correctly. Provided both devices are DXGI 1.1, both support BGRA, and you have a supporting driver, you should be golden.

##### Share on other sites
Thank you for the detailed info, DieterVW.
One more question: if I want to use D2D and D3D with WARP, I have to create a D3D10.1 device, right?

##### Share on other sites
Locking the synced surface seems to come with a performance penalty, about 2 ms per frame, compared to using D3D11_RESOURCE_MISC_SHARED without mutexes (which works but flickers). Perhaps it can be fixed by using an array of render targets and buffering them, drawing with D2D a couple of frames after D3D, replacing the driver-level buffering, but that seems like a big thing to implement just for this (and could add lag or interfere with the user's buffering settings). Also, it doesn't seem possible to share multisampled surfaces. Hopefully D2D will support D3D11 in the future, especially since D3D11 lacks a D3DX font.

##### Share on other sites
Quote:
 Original post by daVinci: Thank you for the detailed info, DieterVW. One more question: if I want to use D2D and D3D with WARP I have to create a D3D10.1 device, right?

Actually, you can use any of the D3D10, D3D10.1, or D3D11 devices with WARP; the only requirement is that the feature level is D3D10.

##### Share on other sites
Quote:
 Original post by Erik Rufelt: Locking the synced surface seems to come with a performance penalty, about 2 ms per frame, compared to using D3D11_RESOURCE_MISC_SHARED without mutexes (which works but flickers).

The flickering is because the draw order between the devices to the shared surface is not reliable without the mutexes. There will be a performance drop if you run all this on the same thread, since working with the lock causes flushes and blocks until the lock is acquired.
Quote:
 Perhaps it can be fixed by using an array of render targets and buffering them, drawing with D2D a couple of frames after D3D, replacing the driver level buffering, but it seems like a big thing to implement just for this (and could add lag or mess with the users settings for buffering).

An async solution is probably the only way to reduce or eliminate the performance loss due to the mutex. You can still use the mutex, just with zero as the dwMilliseconds parameter, to poll and see if D2D is done and then go on to make the final composite. If done right, my guess is that the lag would not be perceptible.
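A polling version of that could look like the following sketch, where `keyedMutex11` and the `CompositeD2DOverlay()` helper are hypothetical names of mine:

```cpp
// Ask for the lock with a 0 ms timeout instead of blocking.
HRESULT hr = keyedMutex11->AcquireSync(1, 0);
if (hr == S_OK) {
    CompositeD2DOverlay();          // D2D is done; composite it this frame
    keyedMutex11->ReleaseSync(0);
} else {
    // Not ready yet (e.g. a timeout result): reuse last frame's overlay
    // and poll again on the next frame.
}
```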
Quote:
 Also, it doesn't seem possible to share multisampled surfaces. Hopefully D2D will support D3D11 in the future, especially since it's lacking a D3DX font.

I don't think that MSAA resources are sharable between any devices.

##### Share on other sites
I want to use Direct2D1 to write to a render target created with a Direct3D 11 swap chain. Is this possible?

As I understand the above, it enables you to share a texture between Direct3D 11 and 10.1 devices. However, I can't see how to share a swap chain render target, because it does not have the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX (or equivalent) flag.

Am I missing something?

JB.

[Edited by - JB2009 on September 22, 2009 3:41:58 PM]

##### Share on other sites
I don't think that's possible, you have to draw to a texture and then copy that to the screen.

I have tried the following couple of ways, each with its problems. Perhaps DieterVW, who seems to know what he's talking about, can comment on whether any of these is the recommended way, and whether I have missed something obvious. =)

1. Use a shared texture of the same size as the screen as the render-target. Lock it with the mutex, draw all D3D11 graphics into it, then lock it for D2D and draw that on top, then copy it to the screen. Looks the same as using D2D with a D3D10.1 render-target, but suffers from lag from the syncing.

2. Draw D3D11 geometry directly to the screen, and use a shared texture as the D2D render-target. When everything is drawn in both D2D and D3D11, alpha-blend the D2D texture onto the screen. As far as I can see this can't work with antialiasing in D2D, since only premultiplied and ignored alpha are supported for DXGI render-targets (except for DXGI_FORMAT_A8_UNORM render targets, as stated here; a texture of that format can't be opened with OpenSharedResource from D3D10.1). This causes bad edges on the text when drawn with anti-aliasing (pre-blended to the background color, not to the screen contents), so text must be drawn with DWRITE_MEASURING_MODE_GDI_CLASSIC.
(There seems to be a bug with this, with performance deteriorating when drawing a lot of text. I'm not sure if the bug is in my program, but changing to DWRITE_MEASURING_MODE_NATURAL gets rid of the problem with no other change in the code.)

##### Share on other sites
Creating a backbuffer with the mutex isn't possible -- and as mentioned above, this would have performance issues since you have to sync usage between devices.

Though this won't solve every scenario, using a shared render target created by the developer will work best. This target can be cleared to transparent and then all D2D/DWrite drawing can be done. Pre-multiplied alpha is actually exactly what you want (more on this below). You can then composite the shared render target over the main 3D scene, or use it as a texture in future frames. The composite in 3D will also be done using pre-multiplied blending.

You can optionally render to an MSAA render target in D2D and then resolve and copy the contents to the shared resource to bring it back to D3D.
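That resolve step could be sketched as follows, assuming `msaaTex10` is the multisampled D2D target and `sharedTex10` is the D3D10.1 view of the shared single-sample texture (both names are mine):

```cpp
// Resolve the MSAA render target into the shared, non-MSAA texture so
// the D3D11 device can read it during compositing.
device10->ResolveSubresource(sharedTex10, 0, msaaTex10, 0,
                             DXGI_FORMAT_B8G8R8A8_UNORM);
```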

In this scenario you can avoid performance loss by using several shared resources. An update from D2D will likely take only a couple of frames at most (meaning that the frame will be drawn and available to the 3D device within a couple of frames), and then will be available for the composite in 3D. This should work for a lot of UI, though it is imperfect if your plan is to do something like putting text tags above moving units in a game. In such a case it's probably best to render that text to a cached texture which is used directly in drawing the 3D scene.

For a detailed analysis of why pre-multiplied alpha is always the way to go, read this: http://home.comcast.net/~tom_forsyth/blog.wiki.html#[[Premultiplied%20alpha]]

##### Share on other sites
That works perfectly, I didn't know how the premultiplied alpha works. Thanks again for all your explanations!
With D2D antialiasing it's not noticeable that the text isn't multisampled, even when composited over a multisampled render target.

##### Share on other sites
DieterVW, Erik - many thanks for your help.

What is the best way to combine ("composite") the shared render target with the D3D11 backbuffer? (I'm straight from D3D9).

JB.

##### Share on other sites
Draw a fullscreen textured quad with alpha-blending:

```cpp
// In the D3D11_BLEND_DESC used for the composite:
blendDesc.RenderTarget[0].SrcBlend  = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
```
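Numerically, that ONE / INV_SRC_ALPHA pair implements the premultiplied "over" operator: out = src + dest * (1 - srcAlpha). A quick CPU model of one channel (the function name is mine, for illustration only):

```cpp
#include <cassert>

// CPU model of SrcBlend = D3D11_BLEND_ONE, DestBlend = D3D11_BLEND_INV_SRC_ALPHA,
// where srcPremul is a source channel already premultiplied by its alpha.
float blendOver(float srcPremul, float dest, float srcAlpha)
{
    return srcPremul + dest * (1.0f - srcAlpha);
}
```

Note that a fully transparent pixel (srcPremul = 0, srcAlpha = 0) leaves the destination untouched, which is why clearing the D2D target to transparent works for the composite.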

##### Share on other sites
Thanks again DieterVW and Erik - especially for the instructions on the basic strategies involved.

I've got the D2D1+D311 strategy working well with white text onto a dark 3D scene, but not with black text onto a light 3D scene.

Am I aiming for a shared render target texture where all the pixels are transparent (i.e. alpha = 0) except where the 2D drawing/text is? If so, is this achieved by creating the D2D brushes with alpha = 1, and clearing the D2D render target to transparent using the ID2D1RenderTarget::Clear method with any color and alpha = 0?

JB.

[Edited by - JB2009 on September 25, 2009 1:47:26 PM]

##### Share on other sites
Yes, that is the method I use at least. Does the text not show up if it's black, and if so what happens with gray text?
Perhaps there's something wrong with the blend-state.

##### Share on other sites
Erik,

> Yes, that is the method I use at least.
> Does the text not show up if it's black,
> and if so what happens with gray text?
> Perhaps there's something wrong with the blend-state.

The problem was that the pixel shader used for the compositing was ignoring the texture's alpha component. With this fixed, the results are very impressive, particularly for larger fonts. The fonts blend well with any background colour. With very small fonts the results are less clear than using the D3D9 D3DXFont. Maybe there are options (e.g. ClearType) to fix this; I haven't looked yet.

JB.

[Edited by - JB2009 on September 30, 2009 10:21:48 AM]

##### Share on other sites
Remaining D2D1/D3D11 interop issues:

1) My biggest concern is the time that D2D1 is adding to the frame time. I'm seeing 1.5 to 2.7 ms of additional time (depending on window size) over the equivalent D3D9 approaches. I haven't yet established whether this is "per window" for multiple-window applications. The additional time is incurred as soon as the D2D1 BeginDraw/EndDraw calls are added (i.e. without actually drawing anything). The amount of drawing has only a very small effect on the time.

For my application this is time I cannot afford to lose. Caching the 2D content is not an option for me because at least some of the text changes every frame (and even a single item of text incurs the full time overhead).

An important question for me is: Will this problem continue to exist when (if?) D2D1 becomes compatible with D3D11? (And also, was this a problem with D2D1 and D3D10.1?).

2) If I add both GDI content (i.e. using IDXGISurface1::GetDC) AND D2D1 (using the DieterVW method), I either get an exception when drawing the D2D1 quad to the D3D11 back buffer, or the GDI content appears but not the D2D1. Can they work together? Is it to do with the key values used with the mutexes? I'm updating a D3D9 library to D3D11, and cannot prevent GDI and text (via D2D1) being used together.

JB.

##### Share on other sites
Quote:
 Original post by JB2009: Remaining D2D1/D3D11 interop issues. 1) My biggest concern is the time that D2D1 is adding to the frame time. I'm seeing 1.5 to 2.7 ms additional time (depending on window size) over the equivalent D3D9 approaches. I haven't yet established whether this is "per window" for multiple-window applications. The additional time is incurred as soon as the D2D1 BeginDraw/EndDraw calls are added (i.e. without actually drawing anything). The amount of drawing only has a very small effect on the amount of time. For my application this is time I cannot afford to lose. Caching the 2D content is not an option for me because at least some of the text changes every frame (and even a single item of text incurs the full time overhead).

The only way to avoid this is to run the D2D content in an async manner, which would result in the composite happening a few frames late. This shouldn't actually be noticeable to a user.

Quote:
 An important question for me is: Will this problem continue to exist when (if?) D2D1 becomes compatible with D3D11? (And also, was this a problem with D2D1 and D3D10.1?).

You shouldn't have this issue with D3D10 since all rendering will be against the same device (meaning you don't need mutex sharing).

Quote:
 2) If I add both GDI content (i.e. using IDXGISurface1.GetDC) AND D2D1 (using the DieterVW method), I either get an exception when drawing the D2D1 Quad to the D3D11 back buffer, or the GDI content appears but not the D2D1. Can they work together? Is it to do with the key values used with the mutexes? I'm updating a D3D9 library to D3D11, and cannot prevent GDI and text (via D2D1) being used together.

You can control the order of rendering to a shared surface using the mutex. If the order appears incorrect, then the algorithm you're using probably isn't working. Example: the initial draw locks the mutex with zero, and then releases it with 1. The next device you want to win the lock needs to be requesting the lock with a 1. Proceed in this manner, incrementing the release/lock key for each transition in drawing.
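That hand-off rule can be modeled in portable C++; this toy single-threaded class is mine (it is not the DXGI API) and only illustrates why the release key decides who locks next:

```cpp
#include <cassert>

// Toy model of IDXGIKeyedMutex hand-off: an acquire succeeds only when the
// caller's key matches the key stored by the most recent release.
class ToyKeyedMutex {
    unsigned currentKey = 0;  // the resource starts out owned by key 0
    bool held = false;
public:
    bool AcquireSync(unsigned key) {
        if (held || key != currentKey)
            return false;     // the real API would block or time out here
        held = true;
        return true;
    }
    void ReleaseSync(unsigned nextKey) {
        held = false;
        currentKey = nextKey; // dictates which request wins the next lock
    }
};
```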

Everything should be fine, provided the resource was flagged at creation time as GDI compatible. Also, you can't get a swap chain to create a back buffer that uses a mutex. You'd need a new render target created by D3D11. (Perhaps you already did this, though I can't tell.)

The resource types that can be shared are a really thin slice, and get thinner every time you add a functionality flag. I don't know enough about GDI to advise you on the best path to take to make this all work.

##### Share on other sites
Quote:
 DieterVW wrote: Everything should be fine, provided the resource was flagged at creation time as GDI compatible. Also, you can't get a swap chain to create a back buffer that uses a mutex. You'd need new rendertarget created by D3D11. (perhaps you already did this, though I can't tell.)

I'm writing to the D3D11 backbuffer using GDI. The backbuffer is created with the DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE flag. I do not use a mutex explicitly, as the resource is not shared, though from clues in "Output", I suspect a mutex is used internally.

I also have a shared resource (Texture2D) that is written to by D2D1 (with a mutex), and read from by D3D11 during compositing (also using a mutex).

(I could instead write to the shared resource using GDI, but this would break compatibility with existing code).

Pseudo code:

```
DrawToD3D11BackBufferUsingGDI(); // This gets drawn correctly.

KeyedMutexForSharedResource_10_1.AcquireSync(0, INFINITE);
try
  RenderTargetForDirect2D.BeginDraw();
  RenderTargetForDirect2D.Clear(...);
  RenderTargetForDirect2D.DrawText(...);
  RenderTargetForDirect2D.EndDraw();
finally
  KeyedMutexForSharedResource_10_1.ReleaseSync(0);
end;

KeyedMutexForSharedResource_11.AcquireSync(0, INFINITE);
try
  // Either this does nothing (without D3D debugging),
  // or an exception is thrown (with D3D debugging enabled).
  DrawSharedResourceToD3D11BackBuffer();
finally
  KeyedMutexForSharedResource_11.ReleaseSync(0);
end;
```

Note that all mutex keys are zero. With only two users of the shared resource, I don't understand the merit in other options.

The presence of the GDI drawing to the D3D11 backbuffer is somehow preventing the compositing from working.

With D3D debugging enabled, the Direct3DDeviceContext.DrawIndexed throws an exception, with no message in "Output". Without debugging, it does nothing.

JB.

##### Share on other sites
I tried that same thing and it works fine for me: drawing with D2D to a render-target texture, drawing with GDI directly to the D3D11 back-buffer, and blending the D2D texture on top afterwards.

##### Share on other sites
I noticed there is also the possibility of using a DC render target with D2D, and using BindDC() to bind to the GDI-compatible DC obtained for the D3D11 back-buffer. When using this, however, performance drops significantly when updating a large area of the screen with D2D, and best performance is with software D2D drawing. (I guess the performance is about the same as with GDI drawing.)

##### Share on other sites
Remember that in D2D and D3D the commands are queued and run asynchronously from the thread that is calling the API. In your case, with two devices, they are probably fighting over who gets the lock first, since both are asking for lock zero. You may or may not see this error, but certainly there are machines out there that will hit this bug and flicker wildly due to out-of-order rendering.

Release the D2D lock with 1, and acquire the D3D lock with 1; that way you've forced an order.
