How to use D2D with D3D11?


I wrote something like the following:

    // Use the texture to obtain a DXGI surface.
    IDXGISurface *pDxgiSurface = NULL;
    renderTarget->QueryInterface(&pDxgiSurface);
    if (pDxgiSurface == NULL)
        return NULL;

    // Create a D2D render target which can draw into the surface.
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_DEFAULT,
        D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED),
        96, 96);
    HRESULT result = GetDirect2DFactory()->CreateDxgiSurfaceRenderTarget(pDxgiSurface, &props, &renderTargetDirect2D);

I get E_NOINTERFACE in result. It seems D2D integrates with D3D10.1 only. How, with minimal effort, can we render with D2D to a D3D11 surface? The D3D11 device was created with the flag D3D11_CREATE_DEVICE_BGRA_SUPPORT. The render target:

    D3D11_TEXTURE2D_DESC desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = 0;

In order to do this you will need to run two devices. D2D/DWrite will only work on a D3D10.x device, so you will have to share the resource between the two devices and use synchronization on the resource to control which device has read/write access. Interop with a D3D10 device is easier because you can use the same device for your 3D as for your D2D work and avoid all the sync and sharing issues. There is a sample available in D2D showing this.

So, how do you do this? There are some format restrictions on resources used with D2D, so it is safest to go with BGRA as you did above; other formats may well fail.

Create your D3D10 and D3D11 devices. Create D2D with the D3D10 device you have a handle to. Make sure that you have the DXGI 1.1 interfaces; you need them for the synchronization part. Make sure that the two devices are using the same IDXGIAdapter. You can QueryInterface for the adapter (make sure it's a DXGI 1.1 version) from the D3D11 device and use it to create the D3D10 device.
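
A rough sketch of that device setup, assuming an existing D3D11 device; error handling is omitted and the function and variable names are illustrative, not from any sample:

```cpp
#include <d3d11.h>
#include <d3d10_1.h>
#include <dxgi.h>

// Sketch only: create a D3D10.1 device on the same adapter as an
// existing D3D11 device, so resources can be shared between them.
ID3D10Device1* CreateD3D10DeviceOnSameAdapter(ID3D11Device *d3d11Device)
{
    // Find the adapter the D3D11 device was created on.
    IDXGIDevice *dxgiDevice = NULL;
    d3d11Device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
    IDXGIAdapter *adapter = NULL;
    dxgiDevice->GetAdapter(&adapter);

    // Create the D3D10.1 device on that same adapter, with BGRA support,
    // which D2D requires for its render targets.
    ID3D10Device1 *d3d10Device = NULL;
    D3D10CreateDevice1(adapter,
                       D3D10_DRIVER_TYPE_HARDWARE,
                       NULL,
                       D3D10_CREATE_DEVICE_BGRA_SUPPORT,
                       D3D10_FEATURE_LEVEL_10_0,
                       D3D10_1_SDK_VERSION,
                       &d3d10Device);
    return d3d10Device;
}
```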

Create the resource in D3D11 with the flag D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX.

Query Interface the resource for the IDXGIResource interface.

Get the shared handle using the IDXGIResource::GetSharedHandle() API.

Use ID3D10Device::OpenSharedResource() to get the D3D10 version of your resource.

QueryInterface the D3D10 handle of your resource for its IDXGISurface.

Use the same CreateDxgiSurfaceRenderTarget() you mentioned above to get the D2D handle to the resource.
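
Put together, the steps above look roughly like this; it is a sketch with illustrative names, COM error checks omitted, and the devices and D2D factory assumed to already exist:

```cpp
// Sketch only: d3d11Device, d3d10Device and d2dFactory are assumptions.
ID2D1RenderTarget* CreateSharedD2DTarget(ID3D11Device *d3d11Device,
                                         ID3D10Device1 *d3d10Device,
                                         ID2D1Factory *d2dFactory,
                                         UINT width, UINT height)
{
    // 1. Create the shared texture on the D3D11 device with a keyed mutex.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
    ID3D11Texture2D *tex11 = NULL;
    d3d11Device->CreateTexture2D(&desc, NULL, &tex11);

    // 2-3. Get the shared handle through IDXGIResource.
    IDXGIResource *dxgiResource = NULL;
    tex11->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiResource);
    HANDLE sharedHandle = NULL;
    dxgiResource->GetSharedHandle(&sharedHandle);

    // 4-5. Open the resource on the D3D10.1 device as a DXGI surface.
    IDXGISurface *surface10 = NULL;
    d3d10Device->OpenSharedResource(sharedHandle, __uuidof(IDXGISurface),
                                    (void**)&surface10);

    // 6. Wrap that surface in a D2D render target.
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_DEFAULT,
        D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED));
    ID2D1RenderTarget *d2dRT = NULL;
    d2dFactory->CreateDxgiSurfaceRenderTarget(surface10, &props, &d2dRT);
    return d2dRT;
}
```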

Now, you need the IDXGIKeyedMutex handles for both the D3D11 and D3D10 versions of your shared resource. You will use these to lock the resource for whichever device is doing read/write operations on it.

I can't recall which DXGI interface you have to Query Interface the mutexes from. It's either the IDXGIResource or the IDXGISurface.

The mutex API is fairly easy to use. It is designed so that the order in which the locks are granted to each device can be controlled: when you release the lock, you provide a key that indicates who is allowed to acquire the lock next. This is important since you'll have submitted commands to both devices, so several lock requests may be pending on the resource simultaneously. Since the devices live on their own threads, you have no other way to control who accesses the resource and in what order. Each future lock request has its own key, and the code releasing the lock indicates which key wins the next acquire.
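
A sketch of that handshake, assuming the shared texture and surface from the earlier steps (names are illustrative): each side acquires on one key and releases with the key the other side will wait on.

```cpp
// Sketch only: tex11/surface10 are the D3D11 and D3D10 views of the
// shared resource; error checks omitted.
IDXGIKeyedMutex *mutex11 = NULL, *mutex10 = NULL;
tex11->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex11);
surface10->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex10);

// D3D11 renders the 3D scene first: acquire on key 0, release with key 1.
mutex11->AcquireSync(0, INFINITE);
/* ... D3D11 draw calls into the shared texture ... */
mutex11->ReleaseSync(1);

// D2D (on the D3D10.1 device) waits for key 1, draws, then hands key 0 back.
mutex10->AcquireSync(1, INFINITE);
/* ... D2D BeginDraw/EndDraw on the shared surface ... */
mutex10->ReleaseSync(0);
```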


[Edited by - DieterVW on September 20, 2009 6:50:37 PM]

Some related questions =). (I'm on Vista 64, 8800GT).

Do you know if it will stay like this, or if D2D will work with D3D11 in later SDKs?
Is this only a problem on Vista, or also on Windows 7?

I have noticed another problem with this in the latest SDK: I can't create a D3D10.1 device with BGRA support at feature level 10.0, only at feature level 9.3.

In addition D2D is still slow, even when the device is hardware and 3D is accelerated. I assume it's drawn on the CPU and copied to the device. Is this also a beta/Vista thing?

Quote:
Original post by Erik Rufelt
Some related questions =). (I'm on Vista 64, 8800GT).

Do you know if it will stay like this, or if D2D will work with D3D11 in later SDKs?

You will likely get quite a bit of use from this method before something else comes along.
Quote:
Is this only a problem on Vista, or also on Windows 7?

Works the same in both.
Quote:
I have noticed another problem with this in the latest SDK, I can't create a D3D10.1 device with BGRA support with feature level 10.0, only with feature level 9.3.

This is almost certainly a driver issue. You need DXGI 1.1 support, and at least D3D10.1 or D3D11, to get BGRA access on anything D3D10+.
Quote:
In addition D2D is still slow, even when the device is hardware and 3D is accelerated. I assume it's drawn on the CPU and copied to the device. Is this also a beta/Vista thing?

I don't know much about this one, sorry.


[Edited by - DieterVW on September 21, 2009 12:35:23 PM]

I got it working from your step-by-step guide, and it doesn't seem slow anymore, so that must have been a different issue. Thanks!

I still have to create the D3D10.1 device with feature level 9.x though, or I get an E_NOINTERFACE from D3D10CreateDevice1. It's only with the D3D10_CREATE_DEVICE_BGRA_SUPPORT flag that I need to use 9.x (the same thing happens in the interoperability sample). 9.x gets BGRA support automatically without the flag too.
The D3D11 device is created with BGRA support on feature level 10.0. There is DXGI 1.1 support; I query the surfaces with IDXGISurface1 and use an IDXGIAdapter1 when creating the device. There is no problem using those with feature level 10.0 if I don't use the BGRA support flag, but then D2D creation fails. Also, any feature level 9.x D3D10.1 device crashes in OpenSharedResource if the texture is not in BGRA format, but works fine with feature level 10.0 without BGRA support. In the old March 2009 SDK with the beta DLLs, feature level 10.0 worked fine with BGRA support and D2D, though I never tried sharing it with D3D11 then.

Are you saying that D3D10CreateDevice1 fails when you use the flag D3D10_CREATE_DEVICE_BGRA_SUPPORT? That would happen only if your driver hasn't been updated to support the flag. I don't know offhand which drivers work, but something fairly recent should, and if not, something in the very near future will. BGRA support is standard in DX9 but was removed in DX10 and then added back in; right now you only get this with the beta files from the March 2009 SDK. At some point there will be a patch for Vista pushing these updates. The change is, however, already part of Windows 7.

In case anyone tries it: WARP cannot do resource sharing, and neither can REF.

BGRA might be the only resource format supported for sharing with D2D. If you search the whole space you might get lucky and find something else that works, but I'm not aware of anything. I think D2D is entirely restricted to BGRA surfaces as render targets.

It sounds like you're doing everything correctly. Provided both devices are DXGI 1.1, both support BGRA, and you have a supporting driver, you should be golden.

Thank you for the detailed info, DieterVW.
One more question: if I want to use D2D and D3D with WARP, do I have to create a D3D10.1 device?

Locking the synced surface seems to come with a performance penalty, about 2 ms per frame, compared to using D3D11_RESOURCE_MISC_SHARED without mutexes (which works but flickers). Perhaps it can be fixed by using an array of render targets and buffering them, drawing with D2D a couple of frames after D3D, replacing the driver-level buffering, but that seems like a big thing to implement just for this (and could add lag or mess with the user's buffering settings). Also, it doesn't seem possible to share multisampled surfaces. Hopefully D2D will support D3D11 in the future, especially since D3D11 lacks a D3DX font.

Quote:
Original post by daVinci
Thank you for the detailed info, DieterVW.
One more question: if I want to use D2D and D3D with WARP I have to create D3D10.1 device, right?


Actually, you can use any of the D3D10, D3D10.1, or D3D11 devices with WARP; the only requirement is that the feature level is D3D10.

Quote:
Original post by Erik Rufelt
Locking the synced surface seems to come with a performance penalty, about 2 ms per frame, compared to using D3D11_RESOURCE_MISC_SHARED without mutexes (which works but flickers).

The flickering is because, without the mutexes, the draw order between the devices on the shared surface is not reliable. There will be a performance drop if you run all this on the same thread, since working with the lock causes flushes and blocks until the lock is acquired.
Quote:
Perhaps it can be fixed by using an array of render targets and buffering them, drawing with D2D a couple of frames after D3D, replacing the driver level buffering, but it seems like a big thing to implement just for this (and could add lag or mess with the users settings for buffering).

An async solution is probably the only way to reduce or eliminate the performance loss due to the mutex. You can still use the mutex, just with zero as the dwMilliseconds parameter, to poll and see if D2D is done and then go on to make the final composite. If done right, my guess is that the lag would not be perceivable.
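
A short sketch of that non-blocking poll, assuming the keyed-mutex handle names from earlier in the thread (illustrative, not from any sample):

```cpp
// Sketch only: poll the keyed mutex instead of blocking on it.
// dwMilliseconds = 0 means AcquireSync returns immediately;
// WAIT_TIMEOUT indicates D2D hasn't released the key yet.
HRESULT hr = mutex11->AcquireSync(0, 0);
if (hr == S_OK) {
    /* ... composite the finished D2D overlay into the back buffer ... */
    mutex11->ReleaseSync(1);
} else {
    /* not ready yet; skip the composite this frame and poll again next frame */
}
```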
Quote:
Also, it doesn't seem possible to share multisampled surfaces. Hopefully D2D will support D3D11 in the future, especially since it's lacking a D3DX font.

I don't think that MSAA resources are sharable between any devices.
