Copy texture data in default heap to readback heap

14 comments, last by Adam Miles 7 years ago

Hi Guys,

Is there any straightforward way to copy Texture2D data into a CPU-readable array? IIRC, we can't create a Texture2D on a readback heap, right? Copying from an upload heap buffer to a Texture2D requires a lot of bookkeeping, so I guess doing the reverse is just as tricky.

Right now I use a compute shader to copy the Texture2D into a structured buffer in a default heap first, then use CopyBufferRegion to copy that into a buffer in a readback heap, and finally Map it to read the data back on the CPU.

I feel my approach is overly complicated; there must be a more convenient way to copy a Texture2D into a CPU-readable array (otherwise, how would we capture the screen to a file?).

Thanks in advance.


CopyTextureRegion() can describe either the source or the destination buffer as if it were a texture, and facilitate the copy. So what you'd do is pass the texture and its subresource info as the source, and the buffer with the texture's footprint (consider using GetCopyableFootprints() on the texture to generate the footprint) as the destination.

Jesse can correct me if I'm wrong, but the restriction regarding Textures in a Readback heap is not one shared by the 'Custom' heap.

You should be able to create a CUSTOM heap in L0 (System Memory) with a CPU_PAGE_PROPERTY of WRITE_BACK. With that heap type you shouldn't be prevented from creating a texture that can be persistently mapped by the CPU.
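A configuration sketch of what Adam describes, assuming a `device` and a `texDesc` for the 2D texture already exist (I haven't verified this end-to-end):

```cpp
// CUSTOM heap in L0 (system memory) with WRITE_BACK CPU pages.
D3D12_HEAP_PROPERTIES heapProps = {};
heapProps.Type                 = D3D12_HEAP_TYPE_CUSTOM;
heapProps.CPUPageProperty      = D3D12_CPU_PAGE_PROPERTY_WRITE_BACK;
heapProps.MemoryPoolPreference = D3D12_MEMORY_POOL_L0; // system memory

// texDesc is an ordinary D3D12_RESOURCE_DESC for the texture.
device->CreateCommittedResource(
    &heapProps, D3D12_HEAP_FLAG_NONE, &texDesc,
    D3D12_RESOURCE_STATE_COMMON, nullptr,
    IID_PPV_ARGS(&cpuVisibleTexture));
```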

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

Thanks Jesse, I will try that later~

You should be able to create a CUSTOM heap in L0 (System Memory) with a CPU_PAGE_PROPERTY of WRITE_BACK. With that heap type you shouldn't be prevented from creating a texture that can be persistently mapped by the CPU.

Thanks Adam, just curious: if a texture is mapped with its data laid out in a swizzled way, what are the advantages of a mapped Texture2D over a mapped structured buffer? (I feel we have to spend extra CPU cycles to 'decode' the data before further CPU processing.) And what is the potential use case for mapping a Texture2D on the CPU?

Thanks in advance

I haven't ever tried, but I'd be surprised if D3D12 let you map a texture with an UNKNOWN layout. Even if it did, you wouldn't know where to find your texels anyway. You can always make the WRITE_BACK texture in the ROW_MAJOR layout and write to that directly in system memory.

Does the 2D texture data have a lifetime on the GPU beyond the point where you generate it? i.e. Is the data only generated for the benefit of being read back on the CPU or is it read again by the GPU later in the frame? It feels to me like you'd be better off writing it directly back to system memory at the point that you generate the data rather than writing it to GPU local memory and then copying it in a separate step back to system memory.

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

I haven't ever tried, but I'd be surprised if D3D12 let you map a texture with an UNKNOWN layout. Even if it did, you wouldn't know where to find your texels anyway. You can always make the WRITE_BACK texture in the ROW_MAJOR layout and write to that directly in system memory.

Believe it or not, you can. If you pass a null pointer when you request to map the resource, we'll internally just store the memory location, after which you can use the ReadFromSubresource API to copy (some of) the texels into a ROW_MAJOR layout in standard CPU memory.
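A sketch of how I read Jesse's suggestion (assumed usage, not something I've verified): Map with a null output pointer, then let ReadFromSubresource de-swizzle into a row-major CPU buffer.

```cpp
// Map with a null ppData: the runtime records the residency but hands
// back no pointer, which is allowed even for an UNKNOWN texture layout.
texture->Map(0, nullptr, nullptr);

std::vector<uint8_t> rowMajor(height * dstRowPitch);
texture->ReadFromSubresource(
    rowMajor.data(),       // destination in standard CPU memory
    dstRowPitch,           // destination row pitch in bytes
    dstRowPitch * height,  // destination depth pitch (one 2D slice)
    0,                     // source subresource index
    nullptr);              // whole subresource; pass a D3D12_BOX for a region

texture->Unmap(0, nullptr);
```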

Believe it or not, you can.

You live and learn!

So it sounds like you can either pay the cost of detiling the system memory resource on the CPU using the ReadFromSubresource API Jesse mentioned, or create the system memory resource in a format you can access directly (ROW_MAJOR today, Standard Swizzle in the future).

There's a good chance the GPU would much prefer the resource to be natively tiled as it writes to system memory, but if there are other bottlenecks on that Draw/Dispatch it may make no difference what its layout is.

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

It feels to me like you'd be better off writing it directly back to system memory at the point that you generate the data rather than writing it to GPU local memory and then copying it in a separate step back to system memory.

Correct me if I am wrong, but I don't think a compute shader can write directly to a structured/typed buffer in a readback heap?

Not in a Readback heap, no. However, as per my first post in the thread, this restriction shouldn't apply to a "Custom" heap that shares all the flags/properties of a Readback heap.

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

Not in a Readback heap, no. However, as per my first post in the thread, this restriction shouldn't apply to a "Custom" heap that shares all the flags/properties of a Readback heap.

Thanks Adam, but if such a Custom heap can share all the flags/properties of a Readback heap without the aforementioned restriction, why does the Readback heap have that restriction in the first place?

This topic is closed to new replies.
