NikiTo

DX12 Reinterpret RTV as SRV in another format

Recommended Posts

Case 1:
device->CreatePlacedResource(heap, 0, resDesc1, state_render_target, nullptr, IID..(resource1))  // success
device->CreatePlacedResource(heap, 0, resDesc1, state_pixel_shader,  nullptr, IID..(resource2))  // success 
device->CreateCommittedResource(...)  // fails

Case 2: (what I want to accomplish)
device->CreatePlacedResource(heap, 0, resDesc1, state_render_target, nullptr, IID..(resource1))  // success
device->CreatePlacedResource(heap, 0, resDesc2, state_pixel_shader,  nullptr, IID..(resource2))  // fails
(the total size of the texture described in resDesc2 is equal to or less than that of the heap; it fails either way)

I need to render to an RTV in one format, then read from it as an SRV in another format without copying. Is this the way I should try to do it?


This most likely will not work. Memory aliasing of textures with preserved contents is not something that is widely supported.

What kind of different format are you talking about? Is it just a channel reordering? You can use the SRV channel swizzling functionality to accomplish that. If it's a different data interpretation (e.g. UNORM vs. UINT vs. FLOAT), you can accomplish that using typeless resources.
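For the typeless route, the setup looks roughly like this (an untested sketch, not a drop-in solution; `device`, `texture`, `rtvHandle` and `srvHandle` are assumed to exist already):

```cpp
// Sketch: create the resource as TYPELESS, then give each view its own
// concrete interpretation of the same bits.
D3D12_RESOURCE_DESC desc = {};
desc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
desc.Width            = width;
desc.Height           = height;
desc.DepthOrArraySize = 1;
desc.MipLevels        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_TYPELESS;  // no fixed interpretation
desc.SampleDesc.Count = 1;
desc.Flags            = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

// Render through a UINT view of the texture...
D3D12_RENDER_TARGET_VIEW_DESC rtvDesc = {};
rtvDesc.Format        = DXGI_FORMAT_R8G8B8A8_UINT;
rtvDesc.ViewDimension = D3D12_RTV_DIMENSION_TEXTURE2D;
device->CreateRenderTargetView(texture, &rtvDesc, rtvHandle);

// ...then read the same bits back through a different view. The SRV can also
// swizzle channels via Shader4ComponentMapping if only the ordering differs.
D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format                  = DXGI_FORMAT_R8G8B8A8_UNORM;
srvDesc.ViewDimension           = D3D12_SRV_DIMENSION_TEXTURE2D;
srvDesc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
srvDesc.Texture2D.MipLevels     = 1;
device->CreateShaderResourceView(texture, &srvDesc, srvHandle);
```

Both views reference the same allocation, so no copy is involved; the restriction is that every view format must belong to the same typeless family as the resource.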


I render to DXGI_FORMAT_R8G8B8A8_UINT and need to read it as DXGI_FORMAT_R8_UINT. (I spent three days trying to make it just read 4 DXGI_FORMAT_R8_UINTs as one DXGI_FORMAT_R8G8B8A8_UINT, but I can't do it; it's hard to explain...)


You want to render as R8G8B8A8_UINT RTV and then alias an R8_UINT SRV over the top of it?

What do you expect to happen (even if it did work, which it won't)? Are you making the R8 SRV twice as wide/tall so it has the same number of bytes as your render target?

Perhaps you can explain what you're trying to achieve and we may be able to suggest a zero-copy approach that will work across all hardware.


I have to render only part of the RTV further along the pipeline. So I create triangles that determine which part of the RTV to render, and the next shader renders only those pixels (like a mask). The problem is that the triangles are by default aligned to nothing coarser than one pixel, and my input data to the shader is one byte per pixel. I have no control over the triangles either; they could come in any shape. And it is even a little more complex than this, but I can't share more details. Initially, of course, I wanted to read 16 R8_UINTs from an R32G32B32A32_UINT, but I don't want to use if/else, and crossing the Bresenham line of a triangle's edge leaves me with up to 15 pixels of unneeded data. I have been organizing this for the last three days and can't find a better setup. I am at the very beginning of the pipeline; further on it will become more complex, and being able to reinterpret an RTV to any format without copying it would be almost a must-have.

Is R8G8B8A8_UINT RTV to R8_UINT SRV definitively not going to work?
I changed the height so that the total size matches the heap, but it still fails. Both Intel and AMD fail in exactly the same way.
Is there another way to accomplish it?

D3D12_HEAP_DESC has its "Flags" set to D3D12_HEAP_FLAG_NONE. Could this be the problem?


The problem is that you have no idea what the layout of either texture is, so simply aliasing an R8 SRV over the top of any other format isn't going to give you a predictable mapping between pixels in the RTV and texels in the SRV.

Let's suppose you rendered a 4x4 render target in the format R8G8B8A8_UINT.
Now let's suppose you alias a 4x16 SRV over the top of it in the format R8_UINT.

Logically both resources are the same size (64 bytes), but you have no way of knowing the in-memory layout of either of those resources. If you read texel (0,0) from the R8 SRV, that may well correspond to 'Red' from pixel (0,0) in the R8G8B8A8_UINT RTV, but equally it may not. When you create a resource in the layout "D3D12_TEXTURE_LAYOUT_UNKNOWN" you are telling the driver/hardware that you don't care about the underlying order of the data, for all intents and purposes it has a random layout.

In reality, GPUs tend to use different tiling algorithms depending on whether a resource is to be used as a Render Target or not, so a 4x4 32-bit render target is very unlikely to even be 64 bytes, let alone be laid out in a predictable order.
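You can even ask the device directly how big such a resource really is; a sketch of the query (assumes a valid `device` and the d3dx12.h helpers):

```cpp
// Sketch: ask the device how much memory a 4x4 R8G8B8A8 render target
// actually occupies. On typical hardware this comes back as the default
// 64KB placement alignment, nowhere near the 64 logical bytes of pixel data.
D3D12_RESOURCE_DESC desc = CD3DX12_RESOURCE_DESC::Tex2D(
    DXGI_FORMAT_R8G8B8A8_UINT,
    4, 4,            // width, height
    1, 1,            // array size, mip levels
    1, 0,            // sample count, quality
    D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET);

D3D12_RESOURCE_ALLOCATION_INFO info =
    device->GetResourceAllocationInfo(0 /* visible node mask */, 1, &desc);
// info.SizeInBytes and info.Alignment show the real footprint, padding and
// tiling included, which is why byte-level aliasing assumptions break down.
```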

Although you haven't provided any great amount of detail about what you're trying to do, I don't understand why you aren't a) Using Stencil to do the masking, or b) Rendering to an R8_UINT render target in the first place. Why render your mask to R8G8B8A8_UINT at all?

As SoldierOfLight pointed out, the only valid aliasing between formats is when both derive from the same "TYPELESS" family. You can't alias R8_UINT over the top of anything other than other R8_* formats.

Edited by ajmiles


I assumed the layouts would be similar. My bad!
I rendered to R8G8B8A8_UINT because I expected to be able to read the RTV easily in another format.
Thank you for the definitive answer that it will not work. I will have to reorganize it all now. It hurts my feelings to change the RTV from R8G8B8A8_UINT to R8_UINT, so first I will search for another way around the problem, and only if cornered will I change the RTV to match.

(is a triangle "painted" on the stencil faster discarding pixels than a triangle coming from the vertex shader?)

9 minutes ago, NikiTo said:

(is a triangle "painted" on the stencil faster discarding pixels than a triangle coming from the vertex shader?)

Discarding pixel shading work using a Stencil test is probably going to beat any other solution you can come up with. GPUs have dedicated hardware and metadata for Stencil testing that means it can cull pixels faster and much earlier in the pipeline than discarding pixels manually after the pixel shader has already started.
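The stencil path would be configured roughly like this for the second pass (a sketch only; it assumes the mask pass has already written stencil value 1 into a stencil-capable depth buffer such as DXGI_FORMAT_D24_UNORM_S8_UINT):

```cpp
// Sketch: only shade pixels where the mask pass wrote stencil == 1.
D3D12_DEPTH_STENCIL_DESC ds = {};
ds.DepthEnable      = FALSE;
ds.StencilEnable    = TRUE;
ds.StencilReadMask  = 0xFF;
ds.StencilWriteMask = 0x00;   // stencil is read-only in this pass
ds.FrontFace.StencilFunc        = D3D12_COMPARISON_FUNC_EQUAL;
ds.FrontFace.StencilPassOp      = D3D12_STENCIL_OP_KEEP;
ds.FrontFace.StencilFailOp      = D3D12_STENCIL_OP_KEEP;
ds.FrontFace.StencilDepthFailOp = D3D12_STENCIL_OP_KEEP;
ds.BackFace = ds.FrontFace;

// psoDesc.DepthStencilState = ds;  // when building the pipeline state

// At draw time, set the reference value the EQUAL test compares against:
commandList->OMSetStencilRef(1);
```

Pixels failing the test are culled by dedicated hardware before the pixel shader ever launches, which is what makes this cheaper than covering the region with geometry alone.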


No, I'm not discarding pixels in the pixel shader. My triangles cover only the pixels that need to be rendered, so pixels are discarded in a natural way. So it would be the Stencil dedicated hardware vs. the Rasterizer dedicated hardware, perhaps. I am not entering the pixel shader. But now I see that maybe Stencil will discard pixels even before the vertex shader. Is that so?

1 minute ago, NikiTo said:

No, I'm not discarding pixels in the pixel shader. My triangles cover only the pixels that need to be rendered, so pixels are discarded in a natural way. So it would be the Stencil dedicated hardware vs. the Rasterizer dedicated hardware, perhaps. I am not entering the pixel shader. But now I see that maybe Stencil will discard pixels even before the vertex shader. Is that so?

I don't think we're quite on the same page regarding your current approach then.

If your triangles already cover only the necessary pixels that need to be rendered, what purpose does the mask serve and when do you read it?

