
DX11: Update textures every frame


Hi,

 

I am trying to upload multiple (say n) BC7 textures to the GPU each frame (new data is read on the CPU every frame; there is no way around this), and I am trying to minimize the upload time as much as possible. I was wondering if anyone has insights beyond what I've already tried:

 

- Each texture is dynamic: I keep 2 copies (2n textures total) and alternate between a CPU-mapped (D3D11_MAP_WRITE_DISCARD) copy that I write data into and an unmapped copy that the GPU renders from (see the sketch after this list).

- Each texture has 2 corresponding resources, a default and a staging version (2n staging, 2n default); I map the staging textures with D3D11_MAP_WRITE and CopyResource (n times each) from staging to default.

- A staging and a default Texture2DArray (array size = n; 2 staging, 2 default); I call Map with D3D11_MAP_WRITE once per frame on the staging array, CopyResource once, and Unmap once.

- I also wanted to try 3D textures, but the 2048x2048x2048 dimension limit means I can't use them.
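
For reference, a minimal sketch of the first (dynamic, double-buffered) approach, assuming the textures were created with D3D11_USAGE_DYNAMIC, D3D11_CPU_ACCESS_WRITE and DXGI_FORMAT_BC7_UNORM; FillTextureData is a hypothetical placeholder for wherever the per-frame data comes from:

#include <d3d11.h>

// Hypothetical helper that produces one row of this frame's BC7 blocks.
void FillTextureData(BYTE* dst, UINT blockRow, UINT rowBytes);

// Double-buffered dynamic BC7 texture: the CPU writes one copy while the GPU reads the other.
void UploadFrame(ID3D11DeviceContext* ctx, ID3D11Texture2D* tex[2],
                 UINT width, UINT height, UINT frameIndex)
{
    const UINT writeIndex = frameIndex & 1;       // copy the CPU fills this frame
    const UINT blocksWide = (width + 3) / 4;      // BC7 = 4x4 texel blocks, 16 bytes each
    const UINT blocksHigh = (height + 3) / 4;

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(ctx->Map(tex[writeIndex], 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        for (UINT row = 0; row < blocksHigh; ++row)
        {
            // Copy one row of BC7 blocks, respecting the driver's RowPitch.
            BYTE* dst = static_cast<BYTE*>(mapped.pData) + row * mapped.RowPitch;
            FillTextureData(dst, row, blocksWide * 16);
        }
        ctx->Unmap(tex[writeIndex], 0);
    }
    // tex[1 - writeIndex], filled last frame, is the copy bound for rendering this frame.
}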

 

All of these take approximately the same time. Does anyone have thoughts on how I can hide/reduce this time?

I am aware the GPU has compute/copy/3D engines (exposed in D3D12), but is there any way in D3D11 to offload whatever Unmap/CopyResource is doing to a separate engine from the 3D engine? If not, any suggestions/thoughts?

 

Thanks


Copying textures from CPU to GPU memory every frame will be bottlenecked by bus bandwidth, so check your target platform's bandwidth (e.g. PCI-E) and do some theory-crafting about how many times you could theoretically transfer your textures from CPU memory to GPU memory. If that turns out to be the issue, rethink your approach.
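
As a rough worked example, using the ~40 MB-per-frame figure that comes up later in the thread: 40 MB x 60 frames/s is about 2.4 GB/s, which is well below the roughly 16 GB/s one-way bandwidth of a PCIe 3.0 x16 link, so raw bus bandwidth alone should not be the limit in that case.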

 

Data transfer will use DMA most of the time, so you can hide the transfer cost (i.e. avoid stalling your pipeline) if you can live with one or two frames of delay. If that is acceptable, look into double/triple buffering.

 

Alternatively, try to reduce the transferred data: update only parts of it, use some compression, or even do packing/unpacking.

 

Why is 2048x2048x2048 limiting? Do you need larger textures? I mean, 2k^3 is ~32 GB for an RGBA texture without mipmaps.

Edited by Ashaman73


I am not sure PCIe is the problem; I have maybe 40 MB per frame (PCIe 3.0 x16 gives roughly 16 GB/s per direction), and I am already double buffering (with a frame of delay) to hide the memcpy. However, I was thinking earlier: with the staging/default approach the time is not in the Unmap but in the CopyResource. Does D3D11's CopyResource automatically use the copy engine and avoid stalling the 3D engine (if there is no dependency)? Or would I have to use D3D12 for that? I'll have to test that out with triple buffering and a 2-frame delay, I guess. :D

 

2048^3 is limiting because my widths are > 2048 (height and depth are fine).

Edited by hiya83


D3D11 has no concept of a "copy engine", so the driver is free to implement CopyResource however it wants as long as it has the correct behavior. It might implement it with an asynchronous DMA; it might not. It might even be doing the same thing for all 3 of your approaches.

 

When you say that all of your approaches take "approximately the same time", what do you mean by that? Are you measuring CPU timing? GPU timing?


I am measuring GPU times. 

 

- In the dynamic case, I put GPU timestamps around the Unmap.

- In the default & staging case, the Unmap doesn't take time, but the CopyResource is where the time goes.

- In the default/staging Texture2DArray case, it's the same as the 2nd case.
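
For anyone measuring the same way, a minimal sketch of GPU timing with timestamp queries around the copy; ctx, disjoint, tsBegin, tsEnd, defaultTex and stagingTex are hypothetical objects created elsewhere (the queries via ID3D11Device::CreateQuery):

// Bracket the copy with D3D11_QUERY_TIMESTAMP queries inside a D3D11_QUERY_TIMESTAMP_DISJOINT.
ctx->Begin(disjoint);
ctx->End(tsBegin);                          // timestamp before the copy
ctx->CopyResource(defaultTex, stagingTex);  // the operation being measured
ctx->End(tsEnd);                            // timestamp after the copy
ctx->End(disjoint);

// Read the results back later (ideally a frame or two later, to avoid stalling on GetData):
D3D11_QUERY_DATA_TIMESTAMP_DISJOINT dj = {};
UINT64 t0 = 0, t1 = 0;
while (ctx->GetData(disjoint, &dj, sizeof(dj), 0) == S_FALSE) { /* not ready yet */ }
ctx->GetData(tsBegin, &t0, sizeof(t0), 0);
ctx->GetData(tsEnd,   &t1, sizeof(t1), 0);
if (!dj.Disjoint)
{
    const double copyMs = double(t1 - t0) / double(dj.Frequency) * 1000.0; // GPU time in ms
}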


- In the default & staging case, the Unmap doesn't take time, but the CopyResource is where the time goes.

Unmap will only trigger the upload, which, when done via DMA, does not involve the GPU's 3D engine. But CopyResource, when it needs to access that memory block, will spend time on:

1. waiting until the data has been uploaded (=> stalling your pipeline)

2. actually copying your data

 

To measure the first delay, try using a fence and measure the time spent waiting for it:

unmap buffer A -> fence A -> ... -> start GPU timer -> wait for fence B -> end GPU timer -> CopyResource -> ... -> unmap buffer B -> fence B -> ...
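
D3D11 has no explicit fence API, but an event query can play a similar role on the CPU side: it signals once the GPU has consumed everything issued before it. A minimal sketch, with the device/context/resource names being hypothetical:

// An event query as a CPU-visible "fence" in D3D11.
ID3D11Query* fence = nullptr;
D3D11_QUERY_DESC qd = { D3D11_QUERY_EVENT, 0 };
device->CreateQuery(&qd, &fence);

ctx->Unmap(stagingTex, 0);   // the upload being fenced
ctx->End(fence);             // "insert fence" after the unmap

// ... later, before touching the resource again:
BOOL done = FALSE;
while (ctx->GetData(fence, &done, sizeof(done), 0) != S_OK || !done)
{
    // Not signaled yet: either spin, or come back next frame to avoid stalling.
}
fence->Release();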



I also wanted to try 3D textures, but the 2048x2048x2048 limit means I can't use them

Perhaps this is not a relevant suggestion, but is it possible to use a texture array instead of one big volume texture?

Edited by vanka78bg


How much time exactly is your 40MiB copy operation currently taking with any of your methods?

2048^3 is limiting because my widths are > 2048 (height and depth are fine).

Well, another limit is that you'd need a video card with over 8GiB of RAM, which pretty much limits your min-spec hardware to the US$999 GeForce GTX Titan X :wink: :lol:


Unmap will only trigger the upload, which, when done via DMA, does not involve the GPU's 3D engine. But CopyResource, when it needs to access that memory block, will spend time waiting for the upload and then actually copying.

 

To measure the first delay, try using a fence and measure the time spent waiting for it:

 

Does that mean Unmap for dynamic textures triggers some sort of copy from CPU-accessible GPU memory to default GPU memory internally, since it takes about the same time as Unmap/CopyResource for staging/default textures?

Also, possibly a dumb question: how would you set up a memory fence in DX from the CPU? There is no query for that, and everything seems to be implicit...

 

 

Perhaps this is not a relevant suggestion, but is it possible to use a texture array instead of one big volume texture?

 

I did try texture arrays already; that was the 3rd thing I tried in my original post. Sorry if that was misleading.

 

 

How much time exactly is your 40MiB copy operation currently taking with any of your methods?

 

Well, another limit is that you'd need a video card with over 8GiB of RAM, which pretty much limits your min-spec hardware to the US$999 GeForce GTX Titan X :wink: :lol:

 

Hey, sorry, but I'm not sure what you meant by how long the 40 MB copy operation takes. If you mean the methods I tried above, they are all in the upper-3 ms ballpark (3.6 - 3.9 ms).

Yeah, I am aware of the large-memory video cards. I am working on other forms of compression as well, but I just want to get this down with BC7 for now. :D

Edited by hiya83


So I tried the triple-buffering approach, hoping CopyResource is an async DMA, but it still stalls the GPU command stream. :(

 

Also, since the D3D11 device is free-threaded, I tried something really "dumb": creating another thread and continuously deleting old / creating new textures (with new content) on that thread, hoping texture creation/deletion would be async from the GPU graphics engine. That plan fell flat as well. Even though the device is free-threaded (unlike the context), creating/deleting resources apparently still runs in the same pipeline as the context commands. :(

 

Any other thoughts/ideas would be appreciated.

Have you tried UpdateSubresource from a CPU memory pointer? In certain very specific circumstances I've found this efficient, despite the dire warnings about it in the documentation and elsewhere, because it manages resource contention automatically for you, which is where I suspect your primary bottleneck is.
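
For reference, a minimal sketch of that suggestion for a whole BC7 mip level, using the same headers as the earlier sketch; defaultTex (a DXGI_FORMAT_BC7_UNORM, D3D11_USAGE_DEFAULT texture) and srcData (tightly packed BC7 blocks) are hypothetical. Note that for block-compressed formats the source row pitch is counted per row of 4x4 blocks:

// Upload a full BC7 mip level with UpdateSubresource; no Map/Unmap involved.
void UploadWithUpdateSubresource(ID3D11DeviceContext* ctx, ID3D11Texture2D* defaultTex,
                                 const void* srcData, UINT width, UINT height)
{
    const UINT blocksWide    = (width + 3) / 4;           // BC7: 4x4 texel blocks
    const UINT blocksHigh    = (height + 3) / 4;
    const UINT srcRowPitch   = blocksWide * 16;            // 16 bytes per block
    const UINT srcDepthPitch = srcRowPitch * blocksHigh;   // bytes per 2D slice

    ctx->UpdateSubresource(defaultTex, 0 /*mip 0*/, nullptr /*whole subresource*/,
                           srcData, srcRowPitch, srcDepthPitch);
}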


Have you tried uploading less data? Depending on what your data looks like, you could compute dirty regions on the CPU and only upload those (potentially via UpdateSubresource, as called out above). Is your data really changing all over the place, non-uniformly, every frame?
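
A minimal sketch of the dirty-region idea with UpdateSubresource and a D3D11_BOX; the dirty rectangle and srcBlocks pointer are hypothetical, and for BC7 the box must be aligned to 4-texel block boundaries (assumed here):

// Upload only a dirty rectangle of a BC7 texture.
void UploadDirtyRegion(ID3D11DeviceContext* ctx, ID3D11Texture2D* defaultTex,
                       const void* srcBlocks,
                       UINT dirtyX, UINT dirtyY, UINT dirtyW, UINT dirtyH)
{
    D3D11_BOX box = {};
    box.left  = dirtyX;  box.right  = dirtyX + dirtyW;
    box.top   = dirtyY;  box.bottom = dirtyY + dirtyH;
    box.front = 0;       box.back   = 1;

    const UINT srcRowPitch   = (dirtyW / 4) * 16;           // one row of 4x4 BC7 blocks
    const UINT srcDepthPitch = srcRowPitch * (dirtyH / 4);

    ctx->UpdateSubresource(defaultTex, 0, &box, srcBlocks, srcRowPitch, srcDepthPitch);
}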


I'll be darned. UpdateSubresource is actually faster; low 3 ms instead of high 3 ms. Not ideal yet, but it's better. Thanks for the tip! :D


I've also been looking into this for days. My use case is slightly different: I'm writing a video application, and an external source is decoding the video, leaving me with a 4K RGBA texture. I need to display this texture in my 3D app (it's Unity, but I'm writing a native plug-in, which means I'm using DX11).

 

I'm always getting hitches, no matter what I do. The worst case is an Intel HD 4600, which can take up to 25 ms just to upload a 1080p texture. As Ashaman73 has mentioned, bus bandwidth is probably playing a large role in this.

 

I'm using the normally advocated method: a DYNAMIC texture, writing to that, then CopyResource into the real texture. Here's an article where someone has gone through all of the scenarios and benchmarked them: https://eatplayhate.me/2013/09/29/d3d11-texture-update-costs/

 

My problem is that even the memcpy() of a 1080p RGBA texture into Map()'d memory takes a really long time (5+ ms), so when I get up to 4K it's substantial. What I could really use, I think, is a way to begin this copy process asynchronously. Right now the copy blocks the GPU thread (since you must Map()/Unmap() on the GPU thread, I generally do my memcpy there too).

 

I've read this may be possible in OpenGL with some kind of PixelBufferObject? Is there anything like this in DirectX? I haven't tried reverting my code to UpdateSubresource for this case, but are there any other suggestions?



My problem is that even the memcpy() of a 1080p RGBA texture into Map()'d memory takes a really long time (5+ ms), so when I get up to 4K it's substantial. What I could really use, I think, is a way to begin this copy process asynchronously. Right now the copy blocks the GPU thread (since you must Map()/Unmap() on the GPU thread, I generally do my memcpy there too).

To be honest, I am more familiar with OpenGL, so a DX11 expert may have better tips.

 

For one thing, once the memory is mapped you can access it from any other thread; just avoid calling API functions from multiple threads. The basic setup for a memory-to-buffer copy could be:

  1. GPU thread: map buffer A
  2. Worker thread: decode video frame into buffer A
  3. GPU thread: when decoded, unmap buffer A

This will most likely trigger an asynchronous upload from CPU to GPU memory, or it might do nothing if DX11 decides to keep the texture in CPU memory for now (shared memory on the HD 4600?).

 

The next issue is when you access the buffer. If you access it too early, e.g. by copying the buffer contents to the target texture, the asynchronous upload will suddenly turn into a synchronous stall of your rendering pipeline. So I would test with multiple buffers, 3 at least. This kind of delay should not be critical for displaying a video.
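
A minimal sketch of that pattern, assuming a ring of three staging textures where Map/Unmap and CopyResource stay on the thread that owns the immediate context and only the memcpy runs on a worker thread; all names (StagingRing, SubmitFrame, videoTex, srcPixels) are hypothetical, and the copy ignores RowPitch for brevity:

#include <d3d11.h>
#include <future>
#include <cstring>

// Ring of 3 staging textures (D3D11_USAGE_STAGING, D3D11_CPU_ACCESS_WRITE).
struct StagingRing
{
    ID3D11Texture2D*         staging[3]  = {};
    D3D11_MAPPED_SUBRESOURCE mapped[3]   = {};
    std::future<void>        copyDone[3];
    UINT                     frame       = 0;
};

// Called on the render thread once per frame; srcPixels/srcSize is the decoded video frame
// and must stay valid until the async copy for it completes (two frames later here).
void SubmitFrame(ID3D11DeviceContext* ctx, StagingRing& ring,
                 ID3D11Texture2D* videoTex, const void* srcPixels, size_t srcSize)
{
    const UINT writeIdx = ring.frame % 3;        // buffer the worker fills this frame
    const UINT readIdx  = (ring.frame + 1) % 3;  // buffer that was filled two frames ago

    // Map here, memcpy on a worker thread. (With only 3 buffers, Map may still block if
    // the GPU falls more than a frame behind; add buffers in that case.)
    if (SUCCEEDED(ctx->Map(ring.staging[writeIdx], 0, D3D11_MAP_WRITE, 0, &ring.mapped[writeIdx])))
    {
        void* dst = ring.mapped[writeIdx].pData;
        ring.copyDone[writeIdx] = std::async(std::launch::async,
            [dst, srcPixels, srcSize] { std::memcpy(dst, srcPixels, srcSize); });
    }

    // Consume the oldest buffer; its memcpy has had two frames to finish, so this rarely waits.
    if (ring.copyDone[readIdx].valid())
    {
        ring.copyDone[readIdx].wait();
        ring.copyDone[readIdx] = {};             // mark as consumed
        ctx->Unmap(ring.staging[readIdx], 0);
        ctx->CopyResource(videoTex, ring.staging[readIdx]);
    }
    ++ring.frame;
}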

 

Another option would be to look for a codec that can be decoded on the GPU. I'm not familiar with video codecs, but there might be one that allows you to use the GPU for decoding. In that case it could work like this:

  1. map buffer X
  2. copy delta frame (whatever) to buffer (much smaller than full frame)
  3. unmap buffer X
  4. fence X
  5. ..
  6. if(fence X has been reached) start decode shader (buffer->target texture)
  7. swap target texture with rendered texture


I've read this may be possible in OpenGL with some kind of PixelBufferObject? Is there anything like this in DirectX? I haven't tried reverting my code to UpdateSubresource for this case, but are there any other suggestions?


An OpenGL PBO is the equivalent of using two textures in D3D, either via CopyResource or CopySubresourceRegion.

 

To summarise, in OpenGL the workflow with a PBO is (1) map the PBO, (2) write data to it, (3) unmap the PBO and (4) update the texture via glTexImage2D/glTexSubImage2D.

 

The D3D equivalent is (1) map a staging resource, (2) write data to it, (3) unmap the staging resource, and (4) update the texture via CopyResource/CopySubresourceRegion.
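
As a minimal fragment showing those four D3D11 steps (continuing with the same headers as the sketches above; ctx, stagingTex, defaultTex, srcRGBA, width and height are hypothetical, and the staging texture is assumed to match the default texture's format and size):

// (1) map the staging resource
D3D11_MAPPED_SUBRESOURCE mapped = {};
if (SUCCEEDED(ctx->Map(stagingTex, 0, D3D11_MAP_WRITE, 0, &mapped)))
{
    // (2) write the data, row by row, respecting the driver-chosen RowPitch
    for (UINT y = 0; y < height; ++y)
        std::memcpy(static_cast<BYTE*>(mapped.pData) + y * mapped.RowPitch,
                    srcRGBA + y * width * 4, width * 4);

    // (3) unmap the staging resource
    ctx->Unmap(stagingTex, 0);

    // (4) copy into the texture the shaders actually sample
    ctx->CopySubresourceRegion(defaultTex, 0, 0, 0, 0, stagingTex, 0, nullptr);
}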


Just a final update: I got it working using Ashaman73's approach: Map / memcpy / Unmap / CopyResource. For a bit better performance I've added multi-threading for the memcpy, and fences at the Unmap and CopyResource stages to ensure I never touch the texture until it's ready (avoiding all stalls). Performance went through the roof after enforcing no writes to the texture until the fence has finished.

 

I've talked with a few people who are much more familiar with the issue than I am, and they let me know that OpenGL does have a performance benefit here because you don't have to unmap the buffer when you perform the upload (you can leave it persistently mapped, reducing some of the complexity and contention). Another point is that for 4K textures it's better to upload in a more compact format (for video like I'm doing, that's a YUV format as opposed to RGBA, since it's about half the data depending on your encoding scheme). You can then perform the final conversion in a shader (this saves memory bandwidth and trades it for computation).
