[DX11] Update StructuredBuffer


Recommended Posts

I have an (RW)StructuredBuffer initialized to e.g. 50 MB. In each frame I need to fill it with different data whose size varies in the interval <10, 50> MB.

If I use

deviceContext->UpdateSubresource( gpuBuffer->buffer, 0, NULL, data, 0, 0 );

it works fine as long as the data is exactly 50 MB.

But if the data is smaller, the application crashes on this line, with no DX debug output at all.

So for smaller data I used a D3D11_BOX and filled it this way:

D3D11_BOX destRegion;
destRegion.left = 0;
destRegion.right = dataSize;
destRegion.top = 0;
destRegion.bottom = 1;
destRegion.front = 0;
destRegion.back = 1;

deviceContext->UpdateSubresource( gpuBuffer->buffer, 0, &destRegion, data, 0, 0 );

I hoped that the data would be written to the range 0 - dataSize... The application doesn't crash this time, but the buffer has incorrect content: I get "blank" results if I visualize its contents.

MSDN says nothing about using D3D11_BOX with StructuredBuffers, so I really don't know how to solve this; a compact sketch of my update path follows at the end of this post.

Thank you for all answers
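For reference, here is a minimal, buildable sketch of the partial update I am attempting (gpuBuffer, data and dataSize are my own variables; as far as I can tell from the documentation, for a buffer resource the box coordinates are byte offsets, so right must be a byte count, not an element count):

#include <d3d11.h>

void UpdateBufferRegion(ID3D11DeviceContext* ctx,
                        ID3D11Buffer* buffer,
                        const void* data,
                        UINT dataSizeBytes)
{
    // For buffer resources, D3D11_BOX left/right are byte offsets and the
    // remaining dimensions must describe a 1x1 range.
    D3D11_BOX box = {};
    box.left   = 0;
    box.right  = dataSizeBytes; // exclusive end, in bytes
    box.top    = 0;
    box.bottom = 1;
    box.front  = 0;
    box.back   = 1;

    ctx->UpdateSubresource(buffer, 0, &box, data, 0, 0);
}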

That sounds like a pointer error to me. Are you sure you aren't exceeding the size of the buffer?

Also, you could try using Map/Unmap instead of UpdateSubresource and see if you hit a similar issue. If you do, that would indicate a general problem with the overall setup; if it works, it would mean the issue is specific to UpdateSubresource. A sketch of the mapping approach is below.
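For illustration, a minimal sketch of that mapping approach; it assumes a buffer created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE (the function and variable names are placeholders):

#include <d3d11.h>
#include <cstring>

// Upload dataSizeBytes bytes into a DYNAMIC buffer via Map/Unmap.
bool UploadViaMap(ID3D11DeviceContext* ctx,
                  ID3D11Buffer* buffer,
                  const void* data,
                  size_t dataSizeBytes)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    HRESULT hr = ctx->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    if (FAILED(hr))
        return false;

    memcpy(mapped.pData, data, dataSizeBytes);
    ctx->Unmap(buffer, 0);
    return true;
}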

The size of the buffer is not exceeded; the data is always smaller than or the same size as it was initialized with.

I have tried Map/Unmap, but it requires

CPUAccessFlags = D3D11_CPU_ACCESS_WRITE

which fails for a StructuredBuffer with UAV | SRV bind flags.

I initialized the buffer this way:

D3D11_BUFFER_DESC bufferDesc;


bufferDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
bufferDesc.CPUAccessFlags = 0;
bufferDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
bufferDesc.ByteWidth = gpuBuffer.elementSize * elementsCount;
bufferDesc.StructureByteStride = gpuBuffer.elementSize;
bufferDesc.Usage = D3D11_USAGE_DEFAULT;

// Create the structured buffer so we can access it from within this class.
HRESULT hr = this->device->CreateBuffer(&bufferDesc, NULL, &gpuBuffer.buffer);
if (FAILED(hr))
{
MyUtils::Logger::LogError("Failed to create buffer %s", bufferName.GetConstString());
return;
}

Then I initialize an SRV and a UAV for it, as in the sketch below.
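For completeness, a minimal sketch of that view creation, assuming the buffer description above (elementsCount is the same value used to compute ByteWidth):

#include <d3d11.h>

HRESULT CreateViews(ID3D11Device* device,
                    ID3D11Buffer* buffer,
                    UINT elementsCount,
                    ID3D11ShaderResourceView** outSrv,
                    ID3D11UnorderedAccessView** outUav)
{
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format = DXGI_FORMAT_UNKNOWN;           // required for structured buffers
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
    srvDesc.Buffer.FirstElement = 0;
    srvDesc.Buffer.NumElements = elementsCount;

    HRESULT hr = device->CreateShaderResourceView(buffer, &srvDesc, outSrv);
    if (FAILED(hr))
        return hr;

    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
    uavDesc.Format = DXGI_FORMAT_UNKNOWN;           // required for structured buffers
    uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
    uavDesc.Buffer.FirstElement = 0;
    uavDesc.Buffer.NumElements = elementsCount;
    uavDesc.Buffer.Flags = 0;

    return device->CreateUnorderedAccessView(buffer, &uavDesc, outUav);
}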

In that case you will need to use two buffers: one with staging usage, and the other one as you have already created it. You write to the staging buffer, then use CopyResource to copy the data into the default-usage buffer (see the sketch below).

It isn't ideal to do this copying, but from experience it is pretty fast, since both resources are already on the GPU. In any case, you are only trying to isolate the problem, so this might still be helpful.
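A minimal sketch of that staging path, assuming the default-usage buffer created earlier in the thread (names are placeholders; in practice the staging buffer could be created once and reused rather than recreated per upload):

#include <d3d11.h>
#include <cstring>

HRESULT UploadViaStaging(ID3D11Device* device,
                         ID3D11DeviceContext* ctx,
                         ID3D11Buffer* defaultBuffer,
                         const void* data,
                         UINT byteWidth)
{
    // Staging buffers must have no bind flags and CPU write access.
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = byteWidth;
    desc.Usage = D3D11_USAGE_STAGING;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* staging = nullptr;
    HRESULT hr = device->CreateBuffer(&desc, nullptr, &staging);
    if (FAILED(hr))
        return hr;

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    hr = ctx->Map(staging, 0, D3D11_MAP_WRITE, 0, &mapped);
    if (SUCCEEDED(hr))
    {
        memcpy(mapped.pData, data, byteWidth);
        ctx->Unmap(staging, 0);
        ctx->CopyResource(defaultBuffer, staging); // GPU-side copy
    }

    staging->Release();
    return hr;
}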

I have found the bug...

If the source data comes from a plain array, it crashes.
If the data is stored in a vector and passed as &data[0], it works...

But I don't quite understand this behaviour.

That would indicate that it was an issue with the array size - at least I think so, anyway... At least you have a working solution now.

Did you try checking your vector's size at runtime to ensure that it is in fact smaller than your array?

I changed the data representation from a uint8 array to an int array (inside the shader I work with an int buffer; the uint8 representation was in byte form) and now it works fine.

Thank you
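For anyone who finds this later, my reading of the fix in code form. This is a hedged sketch, not a confirmed diagnosis: the key point seems to be that the CPU-side element type must match the StructureByteStride the buffer was created with (4 bytes for an HLSL int), so a byte-sized representation gives the GPU a garbled view:

#include <cstdint>
#include <vector>
#include <d3d11.h>

// HLSL side: StructuredBuffer<int> gData : register(t0);
// CPU side: element type must match StructureByteStride (4 bytes for int).
void FillBuffer(ID3D11DeviceContext* ctx, ID3D11Buffer* buffer, UINT elementsCount)
{
    std::vector<int32_t> cpuData(elementsCount, 0); // int32_t, not uint8_t
    ctx->UpdateSubresource(buffer, 0, nullptr, cpuData.data(), 0, 0);
}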
