About MarcinWSB

  1. The scenario is the following: I have a fairly old DX9 app that allows plugin integration. Unfortunately, the plugin interface goes through GDI, so I receive a Paint callback with an HDC. What I do is draw into a Texture2D using DX11 in another process and share the texture with the GDI plugin. It works fine when I share the texture bits through the CPU. Now I want to stay GPU-only and share the texture between DX11 and GDI, so I tried to create an ID2D1DCRenderTarget, but I cannot link that to any shared surface ...
  2. MarcinWSB

    Using Shared textures DX11

    Thanks. Indeed, DX12 is not yet implemented in the game I plug into, but I will keep it in mind for future integrations. I guess there is not much software that uses dual-GPU capabilities efficiently.
  3. It looks like CreateDxgiSurfaceRenderTarget cannot be used with D2D1_RENDER_TARGET_USAGE_GDI_COMPATIBLE, and CreateSharedBitmap cannot be used with an ID2D1DCRenderTarget, since it must be called on a render target created by CreateDxgiSurfaceRenderTarget. So it looks like there is no straightforward way to do it. Any tricks?
  4. I have found an example here: http://xboxforums.create.msdn.com/forums/t/66208.aspx about sharing surfaces/textures created by DX11 with an ID2D1DCRenderTarget. However, that example is based on a swap chain bound to a window (HWND). What I need is to draw a shared texture into GDI, as this is the only plugin interface the old software exposes. Any ideas? Here is what I do:

``` #!c++
SharedSurface::SharedSurface()
{
    // Initialize the ID2D1Factory object
    D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &pD2DFactory);
    //D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, &pD2DFactory);

    // Initialize the ID2D1DCRenderTarget
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_HARDWARE, // D2D1_RENDER_TARGET_TYPE_DEFAULT,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
        0, 0,
        D2D1_RENDER_TARGET_USAGE_GDI_COMPATIBLE,
        D2D1_FEATURE_LEVEL_DEFAULT);
    HRESULT hr = pD2DFactory->CreateDCRenderTarget(&props, &pRT);

    DWORD createDeviceFlags = 0;
    createDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
    ID3D11DeviceContext* context;
    D3D_FEATURE_LEVEL fl;

    DXGI_SWAP_CHAIN_DESC sd;
    ZeroMemory(&sd, sizeof(sd));
    sd.BufferCount = 1;
    sd.BufferDesc.Width = width;
    sd.BufferDesc.Height = height;
    sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    sd.BufferDesc.RefreshRate.Numerator = 60;
    sd.BufferDesc.RefreshRate.Denominator = 1;
    sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT | DXGI_USAGE_SHARED;
    sd.OutputWindow = 0; // g_hWnd;
    sd.SampleDesc.Count = 1;
    sd.SampleDesc.Quality = 0;
    sd.Windowed = FALSE; // TRUE;

    hr = D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                       createDeviceFlags, nullptr, 0, D3D11_SDK_VERSION,
                                       &sd, &pSwapChain, &mDevice, &fl, &context);
    hr = pSwapChain->GetBuffer(0, IID_PPV_ARGS(&pBackBuffer));
}

bool SharedSurface::CreateTexture(ID3D11Device* device, UINT width, UINT height)
{
    HRESULT hr;
    D3D11_TEXTURE2D_DESC desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX; // D3D11_RESOURCE_MISC_SHARED;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
    desc.CPUAccessFlags = 0; // D3D11_CPU_ACCESS_READ;

    if (device != nullptr)
        mDevice = device;
    hr = mDevice->CreateTexture2D(&desc, NULL, &pTexture);

    IDXGIResource* pDXGIResource = NULL;
    hr = pTexture->QueryInterface(__uuidof(IDXGIResource), (void**)&pDXGIResource);
    if (SUCCEEDED(hr))
    {
        hr = pDXGIResource->GetSharedHandle(&sharedHandle);
        pDXGIResource->Release();
        if (SUCCEEDED(hr))
        {
            hr = pTexture->QueryInterface(__uuidof(IDXGIKeyedMutex), (LPVOID*)&pMutex);
        }
    }
    hr = pTexture->QueryInterface(__uuidof(IDXGISurface), (void**)&pSurface);

    FLOAT dpiX;
    FLOAT dpiY;
    pD2DFactory->GetDesktopDpi(&dpiX, &dpiY);
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_HARDWARE, /*D2D1_RENDER_TARGET_TYPE_DEFAULT*/
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
        dpiX, dpiY);
    hr = pD2DFactory->CreateDxgiSurfaceRenderTarget(pBackBuffer, &props, &pBackBufferRT);

    DXGI_SURFACE_DESC sdesc;
    D2D1_BITMAP_PROPERTIES bp;
    ZeroMemory(&bp, sizeof(bp));
    pSurface->GetDesc(&sdesc);
    bp.pixelFormat = D2D1::PixelFormat(sdesc.Format, D2D1_ALPHA_MODE_PREMULTIPLIED);
    // riid must match the interface actually passed (an IDXGISurface here)
    hr = pBackBufferRT->CreateSharedBitmap(__uuidof(IDXGISurface), pSurface, &bp, &pBitmap);
    return SUCCEEDED(hr);
}

void SharedSurface::Draw()
{
    pBackBufferRT->BeginDraw();
    pBackBufferRT->DrawBitmap(pBitmap);
    pBackBufferRT->EndDraw();
    pSwapChain->Present(0, 0); // ID2D1RenderTarget has no Present(); present via the swap chain
}

void SharedSurface::BindDC(HDC hdc, int width, int height)
{
    RECT rct;
    rct.top = 0;
    rct.left = 0;
    rct.right = width;
    rct.bottom = height;
    pRT->BindDC(hdc, &rct);
}

// HOW TO EXCHANGE between pBackBufferRT and pRT ?
```
  5. MarcinWSB

    Using Shared textures DX11

    The mystery is solved. The code itself is OK (option 1); however, I was trying to share the resource across two different adapters, which cannot be achieved with GetSharedHandle() and has to be done through the CPU.
  6. Hi, I am trying to use shared textures in my rendering, but with no success. I create a texture that I share with another process, which draws on that texture. Later I want to use that texture in my rendering loop.

``` #!c++
// class members - once initialized, kept static during rendering
ID3D11ShaderResourceView* g_mTexture;
ID3D11Texture2D* mTexture;

bool MyTexture::CreateTexture(ID3D11Device* device, UINT width, UINT height, int targetWidth, int targetHeight)
{
    HRESULT hr;
    D3D11_TEXTURE2D_DESC desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED; // D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags = 0;

    hr = device->CreateTexture2D(&desc, NULL, &mTexture);
    if (SUCCEEDED(hr))
    {
        D3D11_RENDER_TARGET_VIEW_DESC rtDesc;
        ZeroMemory(&rtDesc, sizeof(rtDesc));
        rtDesc.Format = desc.Format;
        rtDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
        rtDesc.Texture2D.MipSlice = 0;

        D3D11_SHADER_RESOURCE_VIEW_DESC svDesc;
        ZeroMemory(&svDesc, sizeof(svDesc));
        svDesc.Format = desc.Format;
        svDesc.Texture2D.MipLevels = 1;
        svDesc.Texture2D.MostDetailedMip = 0;
        svDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        hr = device->CreateShaderResourceView(mTexture, &svDesc, &g_mTexture);
    }

    IDXGIResource* pDXGIResource;
    hr = mTexture->QueryInterface(__uuidof(IDXGIResource), (void**)&pDXGIResource);
    if (SUCCEEDED(hr))
    {
        hr = pDXGIResource->GetSharedHandle(&sharedHandle);
        pDXGIResource->Release();
        if (SUCCEEDED(hr))
        {
            OutputDebug(L"RequestSharedHandle: w=%d, h=%d, handle=%d", width, height, sharedHandle);
            return (unsigned long long)sharedHandle;
        }
    }
    ....
}
```

The problem is using that shared handle during my rendering loop: the texture is always black. Below are all the options I tried.

1) OPTION 1 (bare texture)

In this option I simply tried to use the mTexture object created with device->CreateTexture2D() that I shared with the other process, i.e. I left my legacy code untouched except for CreateTexture, where I modified the D3D11_TEXTURE2D_DESC options for the shared version. In my rendering loop I used the g_mTexture created during init by CreateShaderResourceView:

``` #!c++
pDeviceContext->PSSetShaderResources(0, 1, &(*it_region)->g_mTexture);
```

In the legacy version (without the shared texture) I simply mapped the texture, copied the bits, and unmapped, and it worked fine:

``` #!c++
D3D11_MAPPED_SUBRESOURCE mappedResource;
HRESULT hr = pDeviceContext->Map(mTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if (SUCCEEDED(hr))
{
    BYTE* mappedData = reinterpret_cast<BYTE*>(mappedResource.pData);
    if (mappedData != NULL)
    {
        for (UINT i = 0; i < h; ++i)
        {
            if (bUpsideDown)
            {
                memcpy(mappedData, bits, w * 4);
                mappedData -= mappedResource.RowPitch;
                bits += (UINT)w * 4;
            }
            else
            {
                memcpy(mappedData, bits, w * 4);
                mappedData += mappedResource.RowPitch;
                bits += (UINT)w * 4;
            }
        }
    }
    if ((*it_region)->mTexture != NULL)
        pDeviceContext->Unmap(mTexture, 0);
}
```

2) OPTION 2 - OpenSharedResource

In this version I tried to get a handle to the shared texture using OpenSharedResource (in two combinations):

``` #!c++
// Option 2.1 - get pTexture directly
ID3D11Texture2D* pTexture; // temp handle
HRESULT hr = mDevice->OpenSharedResource(sharedHandle, __uuidof(ID3D11Texture2D), (LPVOID*)&pTexture);

// Option 2.2 - get pTexture indirectly by using QueryInterface
// (riid must match the receiving interface, IDXGIResource here)
IDXGIResource* sharedResource = 0;
HRESULT hr = mDevice->OpenSharedResource(sharedHandle, __uuidof(IDXGIResource), (LPVOID*)&sharedResource);
hr = sharedResource->QueryInterface(__uuidof(ID3D11Texture2D), (void**)(&pTexture));
OutputDebug(L"OpenSharedResource:%d\n", hr);
```

Now, having the temporary pTexture handle, I tried the following options (combined with the above ways of retrieving the shared pTexture):

OPTION 2.3 - copy pTexture into mTexture

``` #!c++
pDeviceContext->CopyResource(mTexture, pTexture); // CopyResource is a device-context method, not a device method
pDeviceContext->PSSetShaderResources(0, 1, &g_mTexture);
```

OPTION 2.4 - create a new temporary shader resource view from the temporary pTexture

``` #!c++
ID3D11ShaderResourceView* g_pTexture; // temp handle
hr = device->CreateShaderResourceView(pTexture, &svDesc, &g_pTexture);
pDeviceContext->PSSetShaderResources(0, 1, &g_pTexture);
```

OPTION 3 - MUTEX version

Basically I tried all of the above options with the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag set at texture creation and the following code to acquire the mutex:

``` #!c++
UINT acqKey = 1;
UINT relKey = 0;
DWORD timeOut = 5;
IDXGIKeyedMutex* pMutex;
hr = pTexture->QueryInterface(__uuidof(IDXGIKeyedMutex), (LPVOID*)&pMutex);
DWORD result = pMutex->AcquireSync(acqKey, timeOut);
if (result == WAIT_OBJECT_0)
{
    // rendering using options 2.2 and 2.3
    ....
}
else
{
    // nothing here - skip the frame
}
result = pMutex->ReleaseSync(relKey);
if (result == WAIT_OBJECT_0)
    return S_OK;
```

NONE of these solutions worked for me. Any HINTS?
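For the keyed-mutex variant, a detail that often causes a black texture is the key handshake: an acquire with key K only succeeds after the previous holder released with that same key K, so producer and consumer must agree on an alternating key scheme (e.g. producer acquires 0 / releases 1, consumer acquires 1 / releases 0). The following is a tiny platform-independent simulation of that protocol; it models the key semantics with a condition variable and is not the DXGI API itself:

```cpp
#include <condition_variable>
#include <mutex>
#include <string>
#include <thread>

// Minimal model of keyed-mutex semantics: Acquire(K) blocks until the
// resource was last released with the same key K.
class KeyedMutex {
    std::mutex m;
    std::condition_variable cv;
    unsigned currentKey = 0; // key the resource was last released with
public:
    void Acquire(unsigned key) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return currentKey == key; });
    }
    void Release(unsigned key) {
        { std::lock_guard<std::mutex> lk(m); currentKey = key; }
        cv.notify_all();
    }
};

// Producer "renders", then hands the surface over; the keys alternate.
std::string RunHandshake() {
    KeyedMutex km;
    std::string log;
    std::thread producer([&] {
        km.Acquire(0);  // initial key is 0, so the producer goes first
        log += "P";     // stand-in for rendering into the shared texture
        km.Release(1);  // hand over with key 1
    });
    km.Acquire(1);      // consumer waits until the producer released key 1
    log += "C";         // stand-in for drawing the shared texture
    km.Release(0);      // give the surface back with key 0
    producer.join();
    return log;         // the handshake forces producer-before-consumer
}
```

If both sides acquire and release with the same fixed key, one side ends up reading the surface before the other has finished writing it, which on the real API typically shows up as a stale or black texture.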
  7. OK, I will take a closer look at the CPU memory. I would expect a leak on the GPU, since I memcpy into GPU memory and then create the command list... I thought a command list was just a kind of handle to GPU resources... though I am still a newbie in DX11.
  8. Hi, I have the following scenario. I am integrating my DX11 plugin with a game application that provides an open interface for mods.

One thread generates dynamic textures stored as a Texture2D behind a ShaderResourceView. Once the bitmap bits are prepared, on the deferred context I call Map(), then memcpy to GPU memory, then Unmap(), and finally FinishCommandList() on the deferred context.

In the main thread I have a Render callback that the 3D game application calls every visual frame, providing the target view my plugin is attached to. In this callback I call ExecuteCommandList() with the latest command list generated by my worker thread. It works pretty well.

Now the problem statement is the following. As these threads are asynchronous, the worker thread from time to time prepares bitmaps more often than the main thread renders a frame. That means per frame I might have 2-3 command lists that are never executed, since the rendering thread only needs the last snapshot of the texture. In the main thread I only call Release() on the command list I executed and do nothing with the previous ones. This is probably not a good approach; however, I ran tests with 50 FPS on the worker thread and roughly 25 FPS on the main thread, meaning every frame two command lists were created but only one was executed and released, yet I did not notice any increase in GPU memory, so there were no leaks. Why is that? Could someone explain? I ran the test for 30 minutes with a video rendered on the worker thread and shown in the main scene. Is there some "smart" memory management when the command list is bound to the same texture? Should I change my algorithm and manage the unexecuted command lists? The thing is, when I tried to do that I got crashes from time to time and could not figure out why. On the other hand, not releasing these "idle" command lists caused no issues ...
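One way to manage the unexecuted lists explicitly is a single-slot "mailbox": the worker publishes its newest command list into the slot, dropping (and releasing) whatever stale list was still there, and the render callback takes the slot's content at most once per frame. Below is a platform-independent sketch of that pattern; `CommandList` here is a stand-in type, not the D3D11 interface, and the class and method names are made up for illustration:

```cpp
#include <memory>
#include <mutex>

// Stand-in for ID3D11CommandList, carrying only a frame number.
struct CommandList {
    int frame;
    explicit CommandList(int f) : frame(f) {}
};

// Single-slot mailbox: publish() replaces (and thereby frees) any stale
// list; take() hands the newest list to the render thread, or nullptr.
class CommandListMailbox {
    std::mutex m;
    std::unique_ptr<CommandList> slot;
public:
    int dropped = 0; // counts stale lists that were superseded unexecuted

    void publish(std::unique_ptr<CommandList> cl) {
        std::lock_guard<std::mutex> lk(m);
        if (slot) ++dropped;  // previous list was never executed
        slot = std::move(cl); // unique_ptr destroys the stale list here
    }
    std::unique_ptr<CommandList> take() {
        std::lock_guard<std::mutex> lk(m);
        return std::move(slot); // nullptr if nothing new arrived this frame
    }
};
```

With the real API the slot would hold an `ID3D11CommandList*` and `publish()` would call `Release()` on the superseded pointer, which keeps the number of live, unexecuted lists bounded at one regardless of how far the worker runs ahead of the render thread.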