
DX11 CopyResource crash (raw data access)


I'm trying to take, basically, a screenshot (once per second, without saving to disk) of a Direct3D11 application. The code works fine on my PC (Intel CPU, Radeon GPU) but crashes after a few iterations on two others (Intel CPU + Intel integrated GPU, and Intel CPU + Nvidia GPU).

void extractBitmap(void* texture) {

    if (texture) {
        ID3D11Texture2D* d3dtex = (ID3D11Texture2D*)texture;
        ID3D11Texture2D* pNewTexture = NULL;

        D3D11_TEXTURE2D_DESC desc;
        d3dtex->GetDesc(&desc);

        desc.BindFlags = 0;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
        desc.Usage = D3D11_USAGE_STAGING;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;

        HRESULT hRes = D3D11Device->CreateTexture2D(&desc, NULL, &pNewTexture);

        if (FAILED(hRes)) {
            printCon(std::string("CreateTexture2D FAILED:" + format_error(hRes)).c_str());
            if (hRes == DXGI_ERROR_DEVICE_REMOVED)
                printCon(std::string("DXGI_ERROR_DEVICE_REMOVED -- " + format_error(D3D11Device->GetDeviceRemovedReason())).c_str());
        }
        else {
            if (pNewTexture) {
                D3D11DeviceContext->CopyResource(pNewTexture, d3dtex);

                // Working with texture

                pNewTexture->Release();
            }
        }
    }
    return;
}


D3D11SwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast< void** >(&pBackBuffer));
extractBitmap(pBackBuffer);
pBackBuffer->Release();

Crash log:

CreateTexture2D FAILED:887a0005
DXGI_ERROR_DEVICE_REMOVED -- 887a0020

Once I comment out 

D3D11DeviceContext->CopyResource(pNewTexture, d3dtex); 

the code works fine on all three PCs.
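For reference, the `// Working with texture` step would typically read the staging copy back with Map/Unmap. A minimal sketch, reusing desc, pNewTexture and D3D11DeviceContext from the code above (assuming D3D11DeviceContext is the immediate context):

```cpp
D3D11_MAPPED_SUBRESOURCE mapped = {};
HRESULT hr = D3D11DeviceContext->Map(pNewTexture, 0, D3D11_MAP_READ, 0, &mapped);
if (SUCCEEDED(hr)) {
    // mapped.pData points to the first row; rows are mapped.RowPitch bytes
    // apart, which may be larger than desc.Width * 4 for R8G8B8A8 formats.
    const BYTE* src = static_cast<const BYTE*>(mapped.pData);
    for (UINT row = 0; row < desc.Height; ++row) {
        const BYTE* rowStart = src + row * mapped.RowPitch;
        // ... copy desc.Width * 4 bytes of pixel data from rowStart ...
    }
    D3D11DeviceContext->Unmap(pNewTexture, 0);
}
```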

2 hours ago, ajmiles said:

Have you run with the Debug Layer turned on on any machines?

As I'm injecting into another application and the D3D device is already created, I did not.


What exactly are you passing in for "texture"? It's just a void pointer; you're not passing in the raw texture bytes, are you?

The source and destination of CopyResource must be the same type and have matching descriptions. I'm assuming this is your problem if "texture" is already an ID3D11Texture2D. If "texture" is not an actual resource, but instead a byte array of texture data, you will need to create a new ID3D11Texture2D with that data (not just cast it to an ID3D11Texture2D).
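In other words, the safest pattern is to change only the usage-related fields and leave everything else (format, dimensions, mip levels, sample count) exactly as GetDesc returned them. A sketch, reusing the names from the snippet above; note the original code overrides desc.Format, which can differ from the backbuffer's actual format:

```cpp
D3D11_TEXTURE2D_DESC desc;
d3dtex->GetDesc(&desc);

// Only change what a staging copy requires; keep Format, Width, Height,
// MipLevels, ArraySize and SampleDesc identical to the source.
desc.Usage          = D3D11_USAGE_STAGING;
desc.BindFlags      = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags      = 0;

ID3D11Texture2D* pStaging = nullptr;
if (SUCCEEDED(D3D11Device->CreateTexture2D(&desc, nullptr, &pStaging))) {
    D3D11DeviceContext->CopyResource(pStaging, d3dtex);
    // ... Map / read pixels / Unmap ...
    pStaging->Release();
}
```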

4 minutes ago, iedoc said:

What exactly are you passing in for "texture"? It's just a void pointer; you're not passing in the raw texture bytes, are you?

The source and destination of CopyResource must be the same type and have matching descriptions. I'm assuming this is your problem if "texture" is already an ID3D11Texture2D. If "texture" is not an actual resource, but instead a byte array of texture data, you will need to create a new ID3D11Texture2D with that data (not just cast it to an ID3D11Texture2D).

 

4 hours ago, haiiry said:

D3D11SwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast< void** >(&pBackBuffer));

extractBitmap(pBackBuffer);

pBackBuffer->Release();

 


OK, so in that case, what did you set your render target description as? It should match the description of the Texture2D you create in that function exactly.

As a side note, I would make that function take an ID3D11Texture2D* rather than a void*.
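The typed signature suggested here might look like this (a sketch, reusing names from the snippets above; GetBuffer itself still needs a void** per the COM convention):

```cpp
void extractBitmap(ID3D11Texture2D* d3dtex);

ID3D11Texture2D* pBackBuffer = nullptr;
if (SUCCEEDED(D3D11SwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D),
                                        reinterpret_cast<void**>(&pBackBuffer)))) {
    extractBitmap(pBackBuffer);  // no cast needed inside the function anymore
    pBackBuffer->Release();
}
```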


 

15 minutes ago, iedoc said:

OK, so in that case, what did you set your render target description as? It should match the description of the Texture2D you create in that function exactly.

To clarify, desc.SampleDesc.Count is always 1. The original texture is DXGI_FORMAT_R8G8B8A8_UNORM_SRGB as well. I use the format and dimensions from the original texture (GetDesc, then applied when creating my texture). I've also checked whether the texture is already _STAGING when it's passed to my function, but it never was.
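One quick way to double-check those claims on the failing machines is to log the source description right before creating the staging copy. A debugging sketch, assuming the printCon helper from the original code:

```cpp
D3D11_TEXTURE2D_DESC src;
d3dtex->GetDesc(&src);

// Log the fields that CopyResource cares about so they can be compared
// against the staging texture's description on the crashing systems.
char buf[256];
sprintf_s(buf, "src: %ux%u fmt=%d samples=%u usage=%d misc=0x%x",
          src.Width, src.Height, (int)src.Format,
          src.SampleDesc.Count, (int)src.Usage, src.MiscFlags);
printCon(buf);
```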

15 minutes ago, iedoc said:

As a side note, I would make that function take an ID3D11Texture2D* rather than a void*, and when you get the buffer from the swap chain, cast it to an ID3D11Texture2D* instead of a void**.

Same as https://msdn.microsoft.com/en-us/library/windows/desktop/bb174570(v=vs.85).aspx

Just tested on two other systems (Intel CPU + AMD GPU, and AMD CPU + AMD GPU) and it didn't crash. Seems like it's something with the Intel and Nvidia drivers.

Edited by haiiry



