mind in a box

DX11 [D3D11/D2D] Mutex seems to fail...


Hi GameDev!

I'm currently working on that D3D11-with-D2D interop thing, and there seems to be a problem with the keyed mutexes.

When I call this code:

//Sync the texture with D3D10
hr = SharedTexture->GetKeyedMutex10()->AcquireSync(0, INFINITE);
if(FAILED(hr))
{
    LE(hr);
    return hr;
}

D2D.GetRenderTarget()->BeginDraw();
RenderHUD();
D2D.GetRenderTarget()->EndDraw();

hr = SharedTexture->GetKeyedMutex10()->ReleaseSync(1);
if(FAILED(hr))
{
    LE(hr);
    return hr;
}

//Sync the texture with D3D11 and render to backbuffer
hr = SharedTexture->GetKeyedMutex11()->AcquireSync(1, INFINITE);
if(FAILED(hr))
{
    LE(hr);
    return hr;
}

ID3DX11EffectShaderResourceVariable* SRV = NULL;
ScreenQuad.GetQuadShader()->GetCustomVariable(SharedTextureIdx, &SRV);
if(SRV)
{
    SRV->SetResource(SharedTexture->GetShaderResourceView());
}

ScreenQuad.Render();

hr = SharedTexture->GetKeyedMutex11()->ReleaseSync(0);
if(FAILED(hr))
{
    LE(hr);
    return hr;
}



I get this:

D3D10: INFO: ID3D10Device::IASetPrimitiveTopology: Topology value D3D10_PRIMITIVE_TOPOLOGY_UNDEFINED is OK to set during IASetPrimitiveTopology; but will not be valid for any Draw routine. [ STATE_SETTING INFO #237: DEVICE_IASETPRIMITIVETOPOLOGY_TOPOLOGY_UNDEFINED ]
D3D10: INFO: As many as 1 previous debug layer message(s) may be a result of restoring device state saved by D2D. [ APPLICATION_DEFINED INFO #11: STRING_FROM_APPLICATION ]
D3D10: INFO: ID3D10Device::IASetPrimitiveTopology: Topology value D3D10_PRIMITIVE_TOPOLOGY_UNDEFINED is OK to set during IASetPrimitiveTopology; but will not be valid for any Draw routine. [ STATE_SETTING INFO #237: DEVICE_IASETPRIMITIVETOPOLOGY_TOPOLOGY_UNDEFINED ]
D3D10: INFO: ID3D10Device::IASetPrimitiveTopology: Topology value D3D10_PRIMITIVE_TOPOLOGY_UNDEFINED is OK to set during IASetPrimitiveTopology; but will not be valid for any Draw routine. [ STATE_SETTING INFO #237: DEVICE_IASETPRIMITIVETOPOLOGY_TOPOLOGY_UNDEFINED ]
D3D10: INFO: As many as 1 previous debug layer message(s) may be a result of restoring device state saved by D2D. [ APPLICATION_DEFINED INFO #11: STRING_FROM_APPLICATION ]
D3D10: INFO: ID3D10Device::IASetPrimitiveTopology: Topology value D3D10_PRIMITIVE_TOPOLOGY_UNDEFINED is OK to set during IASetPrimitiveTopology; but will not be valid for any Draw routine. [ STATE_SETTING INFO #237: DEVICE_IASETPRIMITIVETOPOLOGY_TOPOLOGY_UNDEFINED ]
D3D10: INFO: As many as 1 previous debug layer message(s) may be a result of restoring device state saved by D2D. [ APPLICATION_DEFINED INFO #11: STRING_FROM_APPLICATION ]



from the Begin/End calls of Direct2D. Yes, those are just info messages and I can disable them, but maybe they matter for my real problem:

The first time I call AcquireSync() on the D3D11 mutex it works fine, but the second time I get an E_INVALIDARG error, even though the inputs haven't changed since the last frame (as far as I can tell).

Why could that be? Any ideas?

I found something out!

The problem goes away when I comment out the Present() call on the swap chain of the D3D11 device, like this:

/** Presents the swapchain to the window */
HRESULT SceneRenderer_ToSwapchain::PresentSwapChain()
{
    return S_OK;
    //return SwapChain->Present(1, NULL);
}



Then everything works, but unfortunately I need that Present() call to see what my engine renders. [wink]

Maybe someone can help me now...

I don't think that return value should actually be possible. Double-check that it really is AcquireSync that is returning E_INVALIDARG. The only thing I can think of is that the resource was not unbound correctly from the DX10 pipeline. From the DX10 surface you can QI for the DX10 device and then clear its render targets yourself, just to see if that is the issue. You can do the same and unbind the shared texture SRV from the D3D11 pipeline before releasing it, too.

What hardware are you using? Maybe it's a hardware bug. You'd have to try your code on different hardware since REF and WARP don't support shared resources.

Thanks for your response!

Quote:

I don't think that return value should actually be possible. Double check that it is acquire that is returning E_INVALIDARG.


I was confused about that E_INVALIDARG too, since the documentation only says that these three are possible:
- WAIT_ABANDONED
- WAIT_TIMEOUT
- E_FAIL

There is no E_INVALIDARG in that list. But for some reason I keep getting it..

Here is a screenshot to prove that (screenshot not preserved):

See, "hr" just turned red after pressing F10.

As I said above, this only happens the second time I hit this code. I can even see the first frame rendered on screen.
And after that HUD-rendering function there is only the Present() call.

Oh, and you might have noticed that exception in the debug output. It seems to come from the AcquireSync() call. The full text is this:

First-chance exception at 0x7704b727 in WTech_2010.exe: Microsoft C++ exception: _com_error at memory location 0x0022de38..


Quote:

The only think I can think of is that the resource was not unbound correctly from the dx10 pipeline. From the dx10 surface you can QI for the dx10 device and then you can clear its render targets yourself - just to see if that is the issue.


How do I do that? I've never used QueryInterface much and I'm not familiar with it. However, I tried setting the render targets to NULL on my dummy D3D10.1 device.

Here is the updated code:

/** Renders the HUD to the backbuffer */
HRESULT Hud::RenderToBackbuffer()
{
    HRESULT hr = S_OK;

    //Sync the texture with D3D10
    hr = SharedTexture->GetKeyedMutex10()->AcquireSync(0, INFINITE);
    if(FAILED(hr))
    {
        LE(hr);
        return hr;
    }

    // Tell the debug output to be quiet..
    D3D10Device.GetDevice()->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_POINTLIST);

    // Draw hud
    D2D.GetRenderTarget()->BeginDraw();
    RenderHUD();
    D2D.GetRenderTarget()->EndDraw();

    //Unbind rendertargets? -------------------------------------------
    D3D10Device.GetDevice()->OMSetRenderTargets(0, NULL, NULL);

    //Release sync of D3D10 device
    hr = SharedTexture->GetKeyedMutex10()->ReleaseSync(1);
    if(FAILED(hr))
    {
        LE(hr);
        return hr;
    }

    //Sync the texture with D3D11 and render to backbuffer
    hr = SharedTexture->GetKeyedMutex11()->AcquireSync(1, INFINITE); //<------ Fails when we get here the second time!
    if(FAILED(hr))
    {
        LE(hr);
        return hr;
    }

    //Set custom shader variable
    ID3DX11EffectShaderResourceVariable* SRV = NULL;
    ScreenQuad.GetQuadShader()->GetCustomVariable(SharedTextureIdx, &SRV);
    if(SRV)
    {
        SRV->SetResource(SharedTexture->GetShaderResourceView());
    }

    //Render a screenquad
    ScreenQuad.Render();

    //Unbind shader resources? -------------------------------------------
    ID3D11ShaderResourceView* const NoSRV[2] = { NULL, NULL };
    DXUTGetD3D11DeviceContext()->PSSetShaderResources(0, 2, NoSRV);

    //Release sync of D3D11
    hr = SharedTexture->GetKeyedMutex11()->ReleaseSync(0);
    if(FAILED(hr))
    {
        LE(hr);
        return hr;
    }

    return S_OK;
}



Quote:

What hardware are you using? Maybe it's a hardware bug. You'd have to try your code on different hardware since REF and WARP don't support shared resources.


I can run the samples posted in this thread without problems...

However, here are my specs:
Win7 64-bit
3GB RAM
Nvidia GeForce 8400GS Go (Yes, it's a laptop)
AMD Turion64x2 2x2GHz

Right, you don't actually have to QI since you already have the DX10 device (I forgot that would be the case). The code you added should indeed have cleared all render targets on the DX10 device.

The code from the other post works for me. I'm assuming you did everything the same as in that example. Which SDK version are you using? Try creating your devices without the debug flag to see if perhaps the SDK layers are adding the unexpected return value, or perhaps causing some other error.

Right now nothing else comes to mind. If that other example ran fine on your laptop hardware then I'm not sure what the difference would be in this case.

Creating them without the debug flag didn't change anything. I'm using the February 2010 SDK, but I've downloaded the June 2010 SDK as well. A friend of mine ran into a few problems when compiling the engine with the new SDK, which is why I haven't installed it yet.
I could if that would fix the issue...

Maybe I messed something up in my object creation?

Here's code for them:

D3D10 device:

D3D10Device::D3D10Device(void)
{
    HRESULT hr = S_OK;
    Device = NULL;

    IDXGIFactory1* DXGIFactory;
    hr = CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)(&DXGIFactory));

    IDXGIAdapter* Adapter;
    DXGIFactory->EnumAdapters(0, &Adapter);

    LE(D3D10CreateDevice1(
        Adapter,
        D3D10_DRIVER_TYPE_HARDWARE,
        NULL,
        D3D10_CREATE_DEVICE_DEBUG |
        D3D10_CREATE_DEVICE_BGRA_SUPPORT |
        D3D10_CREATE_DEVICE_SINGLETHREADED,
        D3D10_FEATURE_LEVEL_9_3,
        D3D10_1_SDK_VERSION,
        &Device
    ));

    Adapter->Release();
    DXGIFactory->Release();
}




D3D11 device flags (Gets created via DXUT):

/** In this function you can modify the settings of the device */
bool Engine::ModifyDeviceSettings(DXUTDeviceSettings* pDeviceSettings)
{
    pDeviceSettings->d3d11.CreateFlags |= D3D11_CREATE_DEVICE_BGRA_SUPPORT;
    pDeviceSettings->d3d11.CreateFlags |= D3D11_CREATE_DEVICE_SINGLETHREADED;
    pDeviceSettings->d3d11.AdapterOrdinal = 0;
    return true;
}




SharedTexture creation:


D3D10_SharedTexture::D3D10_SharedTexture(ID3D11Device* D3D11Device, ID3D10Device1* D3D10Device, UINT SizeX, UINT SizeY, DXGI_FORMAT Format, HRESULT* hr)
{
    HRESULT hResult;

    D3D11_TEXTURE2D_DESC sharedTextureDesc;
    ID3D11Texture2D* pSharedTexture11;

    ZeroMemory(&sharedTextureDesc, sizeof(sharedTextureDesc));
    sharedTextureDesc.Width = SizeX;
    sharedTextureDesc.Height = SizeY;
    sharedTextureDesc.MipLevels = 1;
    sharedTextureDesc.ArraySize = 1;
    sharedTextureDesc.Format = Format;
    sharedTextureDesc.SampleDesc.Count = 1;
    sharedTextureDesc.Usage = D3D11_USAGE_DEFAULT;
    sharedTextureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
    sharedTextureDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;

    hResult = D3D11Device->CreateTexture2D(&sharedTextureDesc, NULL, &pSharedTexture11);
    if(FAILED(hResult)) {
        MessageBox(NULL, DXGetErrorDescription(hResult), TEXT("pDevice11->CreateTexture2D for shared texture"), MB_OK);
        if(hr) *hr = hResult;
        return;
    }

    // Get the keyed mutex for the shared texture (for D3D11)
    IDXGIKeyedMutex* pKeyedMutex11;

    hResult = pSharedTexture11->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&pKeyedMutex11);
    if(FAILED(hResult)) {
        MessageBox(NULL, DXGetErrorDescription(hResult), TEXT("pSharedTexture11->QueryInterface for IDXGIKeyedMutex"), MB_OK);
        if(hr) *hr = hResult;
        return;
    }

    // Get the shared handle needed to open the shared texture in D3D10.1
    IDXGIResource* pSharedResource11;
    HANDLE hSharedHandle11;

    hResult = pSharedTexture11->QueryInterface(__uuidof(IDXGIResource), (void**)&pSharedResource11);
    if(FAILED(hResult)) {
        MessageBox(NULL, DXGetErrorDescription(hResult), TEXT("pSharedTexture11->QueryInterface for IDXGIResource"), MB_OK);
        if(hr) *hr = hResult;
        return;
    }

    hResult = pSharedResource11->GetSharedHandle(&hSharedHandle11); // assign the result, otherwise the check below tests a stale HRESULT
    if(FAILED(hResult)) {
        MessageBox(NULL, DXGetErrorDescription(hResult), TEXT("pSharedResource11->GetSharedHandle"), MB_OK);
        if(hr) *hr = hResult;
        return;
    }

    pSharedResource11->Release(); // release exactly once; a second Release() would underflow the refcount

    // Open the surface for the shared texture in D3D10.1
    IDXGISurface1* pSharedSurface10;

    hResult = D3D10Device->OpenSharedResource(hSharedHandle11, __uuidof(IDXGISurface1), (void**)(&pSharedSurface10));
    if(FAILED(hResult)) {
        MessageBox(NULL, DXGetErrorDescription(hResult), TEXT("pDevice101->OpenSharedResource"), MB_OK);
        if(hr) *hr = hResult;
        return;
    }

    // Get the keyed mutex for the shared texture (for D3D10.1)
    IDXGIKeyedMutex* pKeyedMutex10;

    hResult = pSharedSurface10->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&pKeyedMutex10);
    if(FAILED(hResult)) {
        MessageBox(NULL, DXGetErrorDescription(hResult), TEXT("pSharedSurface10->QueryInterface for IDXGIKeyedMutex"), MB_OK);
        if(hr) *hr = hResult;
        return;
    }

    Texture2D = pSharedTexture11;
    KeyedMutex10 = pKeyedMutex10;
    KeyedMutex11 = pKeyedMutex11;
    Surface10 = pSharedSurface10;

    // Create the resource view
    D3D11_SHADER_RESOURCE_VIEW_DESC DescRV;
    DescRV.Format = Format;
    DescRV.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    DescRV.Texture2D.MipLevels = 1;
    DescRV.Texture2D.MostDetailedMip = 0;
    HRESULT hr2 = DXUTGetD3D11Device()->CreateShaderResourceView((ID3D11Resource*)Texture2D, &DescRV, &ShaderResourceView);

    if(FAILED(hr2))
    {
        LogError() << L"ID3D11ShaderResourceView for HUD texture";

        if(hr) *hr = hr2;
        return;
    }

    if(hr) *hr = S_OK;
}




Hopefully this clears up the situation.

Edit: When I specify some flag other than 0 (DXGI_PRESENT_DO_NOT_SEQUENCE or DXGI_PRESENT_TEST) in the Present() call, it works too, but then I get no output.

Strange that it manages to present correctly exactly once...
