How to use D2D with D3D11?



#1 daVinci   Members   -  Reputation: 122


Posted 19 September 2009 - 08:48 PM

I write something like the following:

    // Use the texture to obtain a DXGI surface.
    IDXGISurface *pDxgiSurface = NULL;
    renderTarget->QueryInterface(&pDxgiSurface);
    if (pDxgiSurface == NULL)
        return NULL;

    // Create a D2D render target which can draw into the surface.
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_DEFAULT,
        D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED),
        96, 96);
    HRESULT result = GetDirect2DFactory()->CreateDxgiSurfaceRenderTarget(
        pDxgiSurface, &props, &renderTargetDirect2D);

I got E_NOINTERFACE in result. It must be that D2D integrates with D3D10.1 only. How can we render using D2D to a D3D11 surface with minimal effort? The D3D11 device was created with the flag D3D11_CREATE_DEVICE_BGRA_SUPPORT. The render target:

    D3D11_TEXTURE2D_DESC desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = 0;


#2 DieterVW   Members   -  Reputation: 700


Posted 20 September 2009 - 06:50 AM

In order to do this you will need to be running two devices. D2D/DWrite will only work on a D3D10.x device, so you will have to share the resource between the two devices and use synchronization on the resource to control which device has read/write access. Interop with a D3D10 device is easier, because you can use the same device for your 3D as for your D2D work and avoid all the sync and sharing issues. There is a sample available in D2D showing this.

So, how to do this? There are some format restrictions on resources when using D2D, so it is safest to go with BGRA as you did above; other formats may well fail.

Create your D3D10 and D3D11 devices. Create D2D with the D3D10 device you have a handle to. Make sure that you have the DXGI 1.1 interfaces; you need them for the synchronization part. Make sure that the two devices are using the same IDXGIAdapter. You can Query Interface for the adapter (make sure it's a DXGI 1.1 version) from the D3D11 device and use it to create the D3D10 device. (A code sketch putting these steps together follows the last step below.)

Create the resource in D3D11 with the flag D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX.

Query Interface the resource for the IDXGIResource interface.

Get the shared handle using the IDXGIResource::GetSharedHandle() API.

Use ID3D10Device::OpenSharedResource() to get the D3D10 version of your resource.

Query Interface for the IDXGISurface of the D3D10 handle of your resource.

Use the same CreateDxgiSurfaceRenderTarget() you mentioned above to get the D2D handle to the resource.
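
Putting those steps together, a minimal sketch might look like the following (error handling omitted; d3d11Device, d3d10Device, and d2dFactory are placeholder names for the devices and factory created as described, and width/height are as in the original post):

    // 1. Create the shared texture on the D3D11 device.
    D3D11_TEXTURE2D_DESC desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
    ID3D11Texture2D *tex11 = NULL;
    d3d11Device->CreateTexture2D(&desc, NULL, &tex11);

    // 2. Get the shared handle via the IDXGIResource interface.
    IDXGIResource *dxgiRes = NULL;
    tex11->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiRes);
    HANDLE sharedHandle = NULL;
    dxgiRes->GetSharedHandle(&sharedHandle);

    // 3. Open the same resource on the D3D10.1 device.
    ID3D10Texture2D *tex10 = NULL;
    d3d10Device->OpenSharedResource(sharedHandle, __uuidof(ID3D10Texture2D), (void**)&tex10);

    // 4. Get the DXGI surface of the D3D10 version of the resource.
    IDXGISurface *surf10 = NULL;
    tex10->QueryInterface(__uuidof(IDXGISurface), (void**)&surf10);

    // 5. Create the D2D render target on that surface.
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_DEFAULT,
        D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED),
        96.0f, 96.0f);
    ID2D1RenderTarget *d2dRT = NULL;
    d2dFactory->CreateDxgiSurfaceRenderTarget(surf10, &props, &d2dRT);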

Now, you need the IDXGIKeyedMutex handles for both the D3D11 and D3D10 versions of your shared resource. You will use these to lock the resource for whichever device is doing any read/write operations on the resource.

I can't recall which DXGI interface you have to Query Interface the mutexes from. It's either the IDXGIResource or the IDXGISurface.

The mutex API is fairly easy to use. It is designed so that the order in which the locks are secured for each device can be controlled: when you release the lock, you provide a key that indicates who is allowed to acquire it next. This is important since you'll have submitted commands to both devices, so several lock requests will likely be pending simultaneously for the resource. Because the devices live on their own threads, you have no other way to control who accesses the resource and in what order. Each future lock request uses its own key, and it is up to the code releasing the lock to indicate which key wins the next lock.
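
To make the locking concrete, here is a minimal sketch, continuing from the names in the sketch above and using an arbitrary convention of key 0 for the D3D11 side and key 1 for the D2D/D3D10 side (a freshly created keyed-mutex resource starts out acquirable with key 0):

    IDXGIKeyedMutex *mutex11 = NULL, *mutex10 = NULL;
    tex11->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex11);
    surf10->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex10);

    // D3D11 draws first: acquire with key 0, then release the lock to key 1.
    mutex11->AcquireSync(0, INFINITE);
    // ... render the 3D scene into the shared texture ...
    mutex11->ReleaseSync(1);

    // D2D draws on top: acquire with key 1, then release back to key 0.
    mutex10->AcquireSync(1, INFINITE);
    d2dRT->BeginDraw();
    // ... draw 2D content ...
    d2dRT->EndDraw();
    mutex10->ReleaseSync(0);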

EDIT: fixed spelling errors.

[Edited by - DieterVW on September 20, 2009 6:50:37 PM]

#3 Erik Rufelt   Crossbones+   -  Reputation: 3517


Posted 20 September 2009 - 07:29 AM

Some related questions =). (I'm on Vista 64, 8800GT).

Do you know if it will stay like this, or if D2D will work with D3D11 in later SDKs?
Is this only a problem on Vista, or also on Windows 7?

I have noticed another problem with this in the latest SDK: I can't create a D3D10.1 device with BGRA support at feature level 10.0, only at feature level 9.3.

In addition D2D is still slow, even when the device is hardware and 3D is accelerated. I assume it's drawn on the CPU and copied to the device. Is this also a beta/Vista thing?

#4 DieterVW   Members   -  Reputation: 700


Posted 20 September 2009 - 08:35 AM

Quote:
Original post by Erik Rufelt
Some related questions =). (I'm on Vista 64, 8800GT).

Do you know if it will stay like this, or if D2D will work with D3D11 in later SDKs?

You will likely get quite a bit of use from this method before something else comes along.
Quote:
Is this only a problem on Vista, or also on Windows 7?

Works the same in both.
Quote:
I have noticed another problem with this in the latest SDK: I can't create a D3D10.1 device with BGRA support at feature level 10.0, only at feature level 9.3.

This is almost certainly a driver issue. You need support for DXGI 1.1 and at least D3D10.1 or D3D11 to have access to BGRA in anything D3D10+.
Quote:
In addition D2D is still slow, even when the device is hardware and 3D is accelerated. I assume it's drawn on the CPU and copied to the device. Is this also a beta/Vista thing?

I don't know much about this one, sorry.

EDIT: Formatting

[Edited by - DieterVW on September 21, 2009 12:35:23 PM]

#5 Erik Rufelt   Crossbones+   -  Reputation: 3517


Posted 20 September 2009 - 11:12 AM

I got it working from your step-by-step guide, and it doesn't seem slow anymore, so that must have been a different issue. Thanks!

I still have to create the D3D10.1 device with feature level 9.x though, or I get an E_NOINTERFACE from D3D10CreateDevice1. It's only with the D3D10_CREATE_DEVICE_BGRA_SUPPORT flag that I need to use 9.x (the same thing happens in the interoperability sample). 9.x gets BGRA support automatically without the flag too.
The D3D11 device is created with BGRA support on feature level 10.0. There is DXGI 1.1 support; I query the surfaces with IDXGISurface1, and use an IDXGIAdapter1 when creating the device. There is no problem using those with feature level 10.0 if I don't use the BGRA support flag, but then D2D creation fails. Also, any feature level 9.x D3D10.1 device crashes on OpenSharedResource if the texture is not in BGRA format, but works fine with feature level 10.0 without BGRA support. In the old March 2009 SDK with the beta DLLs, feature level 10.0 worked fine with BGRA support and D2D, though I never tried sharing it with D3D11 then.

#6 DieterVW   Members   -  Reputation: 700


Posted 20 September 2009 - 01:00 PM

Are you saying that D3D10CreateDevice1 fails when you use the flag D3D10_CREATE_DEVICE_BGRA_SUPPORT? This would happen only if your driver hasn't been updated to include support for the flag. I don't know offhand which drivers would work, but something fairly recent should, and if not, something in the very near future will. BGRA support is standard in DX9, but had been removed in DX10... and then added back in -- right now you only get this with the beta files from the March 2009 SDK. At some point there will be a patch for Vista pushing these updates. The change is, however, already part of Windows 7.

In case anyone tries it: WARP cannot do resource sharing, and neither can REF.

BGRA might be the only supported resource type for sharing with D2D. If you search the whole space you might get lucky and find something else that works, but I'm not aware of anything. I think D2D is entirely restricted to using BGRA surfaces as render targets.

It sounds like you're doing everything correctly. Provided both devices are DXGI 1.1 and both support BGRA, and you have a supporting driver, you should be golden.

#7 daVinci   Members   -  Reputation: 122


Posted 20 September 2009 - 07:01 PM

Thank you for the detailed info, DieterVW.
One more question: if I want to use D2D and D3D with WARP, I have to create a D3D10.1 device, right?

#8 Erik Rufelt   Crossbones+   -  Reputation: 3517


Posted 21 September 2009 - 03:06 AM

Locking the synced surface seems to come with a performance penalty, about 2 ms per frame, compared to using D3D11_RESOURCE_MISC_SHARED without mutexes (which works but flickers). Perhaps it can be fixed by using an array of render targets and buffering them, drawing with D2D a couple of frames after D3D, replacing the driver-level buffering, but it seems like a big thing to implement just for this (and could add lag or mess with the user's settings for buffering). Also, it doesn't seem possible to share multisampled surfaces. Hopefully D2D will support D3D11 in the future, especially since it's lacking a D3DXFont equivalent.

#9 DieterVW   Members   -  Reputation: 700


Posted 21 September 2009 - 05:39 AM

Quote:
Original post by daVinci
Thank you for the detailed info, DieterVW.
One more question: if I want to use D2D and D3D with WARP, I have to create a D3D10.1 device, right?


Actually, you can use any of the D3D10, D3D10.1, or D3D11 devices with WARP; the only requirement is that the feature level is D3D10.
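
For instance, a D3D10.1 WARP device at feature level 10.0 might be created like this (a sketch; note the adapter must be NULL for WARP):

    ID3D10Device1 *warpDevice = NULL;
    HRESULT hr = D3D10CreateDevice1(
        NULL,                             // adapter must be NULL for WARP
        D3D10_DRIVER_TYPE_WARP,           // software rasterizer
        NULL,                             // no software module
        D3D10_CREATE_DEVICE_BGRA_SUPPORT, // needed for D2D interop
        D3D10_FEATURE_LEVEL_10_0,
        D3D10_1_SDK_VERSION,
        &warpDevice);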


#10 DieterVW   Members   -  Reputation: 700


Posted 21 September 2009 - 06:04 AM

Quote:
Original post by Erik Rufelt
Locking the synced surface seems to come with a performance penalty, about 2 ms per frame, compared to using D3D11_RESOURCE_MISC_SHARED without mutexes (which works but flickers).

The flickering is because the draw order between the devices on the shared surface is not reliable without the mutexes. There will be a performance drop if you run all this on one thread, since working with the lock causes flushes and blocks until the lock is acquired.
Quote:
Perhaps it can be fixed by using an array of render targets and buffering them, drawing with D2D a couple of frames after D3D, replacing the driver-level buffering, but it seems like a big thing to implement just for this (and could add lag or mess with the user's settings for buffering).

An async solution is probably the only way to reduce or eliminate the performance loss due to the mutex. You can still use the mutex, just with zero as the dwMilliseconds parameter, to poll and see if D2D is done and then go on to make the final composite. If done right, my guess is that the lag would not be perceptible.
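
A sketch of that polling pattern, reusing the key convention from the earlier sketch (the D3D11 side acquires with key 0):

    // Each frame, try to take the shared surface without blocking.
    HRESULT hr = mutex11->AcquireSync(0, 0); // zero-millisecond timeout
    if (hr == S_OK)
    {
        // D2D is done with the surface; composite it over the 3D scene.
        // ... draw the fullscreen quad ...
        mutex11->ReleaseSync(1); // hand the surface back to the D2D side
    }
    // Otherwise the surface is still busy; reuse last frame's result and try again.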
Quote:
Also, it doesn't seem possible to share multisampled surfaces. Hopefully D2D will support D3D11 in the future, especially since it's lacking a D3DX font.

I don't think that MSAA resources are sharable between any devices.

#11 JB2009   Members   -  Reputation: 100


Posted 21 September 2009 - 10:41 PM

I want to use Direct2D1 to write to a render target created with a Direct3D 11 swap chain. Is this possible?

As I understand the above, it enables you to share a texture between Direct3D 11 and 10.1 devices. However, I can't see how to share a swap chain render target, because it does not have the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX (or equivalent) flag.

Am I missing something?

JB.

[Edited by - JB2009 on September 22, 2009 3:41:58 PM]

#12 Erik Rufelt   Crossbones+   -  Reputation: 3517


Posted 22 September 2009 - 01:08 AM

I don't think that's possible; you have to draw to a texture and then copy that to the screen.

I have tried the following couple of ways, each with its problems. Perhaps DieterVW, who seems to know what he's talking about, can comment on whether any of these is the recommended way, and whether I have missed something obvious. =)

1. Use a shared texture of the same size as the screen as the render-target. Lock it with the mutex, draw all D3D11 graphics into it, then lock it for D2D and draw that on top, then copy it to the screen. Looks the same as using D2D with a D3D10.1 render-target, but suffers from lag from the syncing.

2. Draw D3D11 geometry directly to the screen, and use a shared texture as the D2D render-target. When everything is drawn in both D2D and D3D11, alpha-blend the D2D texture onto the screen. As far as I can see this can't work with antialiasing in D2D, since only premultiplied and ignored alpha are supported for DXGI render-targets (except for DXGI_FORMAT_A8_UNORM render targets, as stated here; a texture of that format can't be opened with OpenSharedResource from D3D10.1). This causes bad edges on the text when drawn with anti-aliasing (pre-blended to the background color, not to the screen contents), so text must be drawn with DWRITE_MEASURING_MODE_GDI_CLASSIC.
(There seems to be a bug with this, with performance deteriorating when drawing a lot of text. I'm not sure if the bug is in my program, but changing to DWRITE_MEASURING_MODE_NATURAL gets rid of the problem, with no other change in the code).

#13 DieterVW   Members   -  Reputation: 700


Posted 22 September 2009 - 06:38 AM

Creating a backbuffer with the mutex isn't possible -- and as mentioned above, this would have performance issues since you have to sync usage between devices.

Though this won't solve every scenario, using a shared render target created by the developer will work best. This target can be cleared to transparent and then all D2D/DWrite drawing can be done. Premultiplied alpha is actually exactly what you want (more on this below). You can then composite the shared render target over the main 3D scene, or use it as a texture in future frames. The composite in 3D will also be done using premultiplied blending.

You can optionally render to an MSAA render target in D2D and then resolve and copy the contents to the shared resource to bring it back to D3D.
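
That resolve might look like this on the D3D10 side (a sketch; msaaTex10 and sharedTex10 are hypothetical textures of matching size and format):

    // Resolve the multisampled D2D target into the single-sample shared texture.
    d3d10Device->ResolveSubresource(sharedTex10, 0, msaaTex10, 0,
                                    DXGI_FORMAT_B8G8R8A8_UNORM);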

In this scenario you can avoid the performance loss by using several shared resources. An update from D2D will likely take only a couple of frames at most (meaning the frame will be drawn and available to the 3D device within a couple of frames), and then will be available for the composite in 3D. This should work for a lot of UI, though it is imperfect if your plan is to do something like putting text tags above moving units in a game. In such a case it's probably best to render that text to a cached texture used directly in drawing the 3D scene.

For a detailed analysis of why pre-multiplied alpha is always the way to go, read this: http://home.comcast.net/~tom_forsyth/blog.wiki.html#[[Premultiplied%20alpha]]

#14 Erik Rufelt   Crossbones+   -  Reputation: 3517

Like
0Likes
Like

Posted 22 September 2009 - 08:40 AM

That works perfectly; I didn't know how premultiplied alpha works. Thanks again for all your explanations!
With D2D antialiasing it's not noticeable that the text isn't multisampled, even when composited with a multisampled render target behind it.

#15 JB2009   Members   -  Reputation: 100


Posted 22 September 2009 - 09:21 AM

DieterVW, Erik - many thanks for your help.

What is the best way to combine ("composite") the shared render target with the D3D11 backbuffer? (I've come straight from D3D9.)

JB.

#16 Erik Rufelt   Crossbones+   -  Reputation: 3517


Posted 22 September 2009 - 09:30 AM

Draw a fullscreen textured quad with alpha-blending:

    blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_ONE;
    blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
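
For completeness, a fuller sketch of creating and binding that state (device and context are assumed names; the alpha channel uses the same premultiplied blend so the result can itself be layered later):

    D3D11_BLEND_DESC blendDesc;
    ZeroMemory(&blendDesc, sizeof(blendDesc));
    blendDesc.RenderTarget[0].BlendEnable = TRUE;
    blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_ONE;
    blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
    blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
    blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
    blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA;
    blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
    blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState *blendState = NULL;
    device->CreateBlendState(&blendDesc, &blendState);
    context->OMSetBlendState(blendState, NULL, 0xFFFFFFFF);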


#17 JB2009   Members   -  Reputation: 100


Posted 22 September 2009 - 12:47 PM

Thanks again DieterVW and Erik - especially for the instructions on the basic strategies involved.

I've got the D2D1+D3D11 strategy working well with white text on a dark 3D scene, but not with black text on a light 3D scene.

Am I aiming for a shared render target texture where all the pixels are transparent (i.e. Alpha=0) except for where the 2D drawing/text is? If so, is this achieved by creating the D2D brushes with Alpha=1, and clearing the D2D1 render target to transparent using the D2D1RenderTarget.Clear method with Color=Anything and Alpha=0?

JB.


[Edited by - JB2009 on September 25, 2009 1:47:26 PM]

#18 Erik Rufelt   Crossbones+   -  Reputation: 3517


Posted 22 September 2009 - 02:53 PM

Yes, that is the method I use at least. Does the text not show up if it's black, and if so what happens with gray text?
Perhaps there's something wrong with the blend-state.
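
For reference, that clear-to-transparent sequence might look like this (a minimal sketch; the brush color is arbitrary):

    d2dRT->BeginDraw();

    // Clear to fully transparent: any color, with alpha = 0.
    d2dRT->Clear(D2D1::ColorF(D2D1::ColorF::Black, 0.0f));

    // Brushes are created with alpha = 1 so the drawn content itself is opaque.
    ID2D1SolidColorBrush *brush = NULL;
    d2dRT->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::Black, 1.0f), &brush);
    // ... DrawText and other draw calls using the brush ...

    d2dRT->EndDraw();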

#19 JB2009   Members   -  Reputation: 100


Posted 25 September 2009 - 07:21 AM

Erik,

> Yes, that is the method I use at least.
> Does the text not show up if it's black,
> and if so what happens with gray text?
> Perhaps there's something wrong with the blend-state.

The problem was that the pixel shader used for the compositing was ignoring the texture alpha component. With this fixed, the results are very impressive - particularly for larger fonts. The fonts blend well with any background colour. With very small fonts the results are less clear than with the D3D9 D3DXFont. Maybe there are options (e.g. ClearType) to fix this - I haven't looked yet.

Thanks for your help,

JB.

[Edited by - JB2009 on September 30, 2009 10:21:48 AM]

#20 JB2009   Members   -  Reputation: 100


Posted 25 September 2009 - 07:22 AM

Remaining D2D1/D3D11 interop issues:

1) My biggest concern is the time that D2D1 is adding to the frame time. I'm seeing 1.5 to 2.7 ms of additional time (depending on window size) over the equivalent D3D9 approaches. I haven't yet established whether this is "per window" for multiple-window applications. The additional time is incurred as soon as the ID2D1 BeginDraw/EndDraw calls are added (i.e. without actually drawing anything); the amount of drawing has only a very small effect on the time.

For my application this is time I cannot afford to lose. Caching the 2D content is not an option for me because at least some of the text changes every frame (and even a single item of text incurs the full time overhead).

An important question for me is: Will this problem continue to exist when (if?) D2D1 becomes compatible with D3D11? (And also, was this a problem with D2D1 and D3D10.1?).

2) If I add both GDI content (i.e. using IDXGISurface1::GetDC) and D2D1 (using the DieterVW method), I either get an exception when drawing the D2D1 quad to the D3D11 back buffer, or the GDI content appears but not the D2D1. Can they work together? Is it to do with the key values used with the mutexes? I'm updating a D3D9 library to D3D11, and cannot prevent GDI and text (via D2D1) being used together.

JB.



