Question about render target data and Texture2D write access

Started by
5 comments, last by phil_t 9 years, 11 months ago

Hello.

Yesterday I was trying to find a way to access render target data and use it for shader effects. Before checking the reference I did a quick search, figuring this is a common thing that should turn up at least a few results and some ideas on how to go about it. I got nothing except a few posts about Multiple Render Targets, which is really odd. The graphics card's main purpose is to create this resource and display it, but to get another copy of it I would apparently have to render the scene twice, which by any sane logic is far from acceptable.

So the next thing I tried was creating a 2D array inside the shader code. Since I'm passing this data to the target anyway and have all the info I need (the color value and its corresponding coordinate in homogeneous clip space), I might as well save it on the way (just before the "return color" in the pixel shader). Seemed simple enough, but the compiler doesn't let you create arrays larger than 65 thousand or so elements, and on top of that, a Texture2D can only be read (not modified) inside shader code. Why?!

I also tried a quick workaround test: setting the resolution to 960 x 600 and using ten [960][60] arrays, which turns out to be a big no-no because it makes the compiler just stand in awe when I press compile. I'm not kidding. I have to cancel the build to get back.

A 4096x4096 texture map - no problem; a 960x60 (float3) array - O.o

And why can't you write data to a Texture2D object from within a shader? I was planning to base all of my effects on this.

Have I searched for information incorrectly or is this basically it?


It's not clear what you're trying to do. Maybe this? :

http://msdn.microsoft.com/en-us/library/windows/desktop/bb205131(v=vs.85).aspx#Render_to_Texture

Assuming D3D10 or higher, you would need to create both a shader resource view and a render target view for the Texture2D in order to both render to it and use it as an input resource in another shader. phil_t's link gives the nitty-gritty of how.
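For reference, the setup looks roughly like this in D3D11 (a minimal sketch, not a complete program: error handling is omitted and `device` is assumed to be an existing ID3D11Device):

```cpp
// Sketch: a texture that can be both rendered to and sampled from.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 960;
desc.Height           = 600;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
// Both bind flags: render target for the first pass,
// shader resource for any pass that reads it back.
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D*          tex = nullptr;
ID3D11RenderTargetView*   rtv = nullptr;
ID3D11ShaderResourceView* srv = nullptr;
device->CreateTexture2D(&desc, nullptr, &tex);
device->CreateRenderTargetView(tex, nullptr, &rtv);
device->CreateShaderResourceView(tex, nullptr, &srv);
```

The one texture backs both views; which view is bound determines whether the pipeline writes to it or reads from it.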

I have already read that article on MSDN, and I will think about this a little more before replying again.

I just think it is very odd that you can't write anything to texture resources inside shader code. Explanation below:

Imagine you could declare a "float4 someArray[1920][1080];", for example - a 2D array of float4 members. I'll try to explain why that would be nice.

In the vertex shader, you normally multiply each vertex by the world, view and projection matrices, so in the pixel shader you have fragments in screen space, right? Let's call that variable outPosH. Then let's say you're doing per-pixel lighting, so you calculate your diffuse factor, multiply it with the diffuse map sample and light source color or whatever you have in there, and let's say your pixel shader returns a float4. So just before you do "return finalResult;" (finalResult being the float4 result of your pixel shader calculations above) you say: "someArray[outPosH.x][outPosH.y] = finalResult;". By the end of each frame you would have a filled map that never even left the GPU, and you could use it for each subsequent cycle. So before the same pixel gets overwritten you could use it in the pixel shader for whatever you want.

And since you can't declare large arrays, they could have at least let you create a "Texture2D screenOutput;" where you could use functions to write data into the map and then retrieve it with screenOutput.Sample(...).
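The supported pipeline gives you almost exactly this, just split across two passes. A rough HLSL sketch (the names sceneOutput, linearSampler etc. are made up for illustration; the shaders are assumed to be bound from passes set up as in phil_t's link):

```hlsl
// Pass 1: render the scene into a texture (bound as a render target)
// instead of the back buffer. The pixel shader itself is unchanged.
float4 ScenePS(float4 posH : SV_Position, float4 color : COLOR) : SV_Target
{
    float4 finalResult = color; // your lighting result goes here
    return finalResult;         // written into the render-target texture
}

// Pass 2: the same texture, rebound as a shader resource view,
// reads like any other Texture2D.
Texture2D    sceneOutput   : register(t0);
SamplerState linearSampler : register(s0);

float4 EffectPS(float2 uv : TEXCOORD0) : SV_Target
{
    float4 prior = sceneOutput.Sample(linearSampler, uv);
    return prior; // base your effect on the previous pass's pixels
}
```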

Is that more clear?


by the end of each frame you would have a filled map never even leaving the gpu, and you could use it for each subsequent cycle.

How is that different from using the render target as a texture input to a subsequent draw call? (Which is what the link I gave describes how to do). The data never leaves the GPU and can be used as an input for the next draw call.

The only thing you really can't do is both write to a texture and read from it during the same draw call (I suspect that would prevent a lot of the optimizations the GPU is able to do).



But wouldn't it still be faster to do the same thing without those optimizations than to render the scene twice? You do need to render twice, am I right? Because you need to pass one resource to the swap chain as the render target and pass the other to the shader as a ShaderResourceView on each frame, so you can sample it, I'm guessing. I didn't have time to read more right now, so correct me if I'm wrong.

You don't need to render it twice.
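To make that concrete: each frame, the scene is drawn once into the offscreen texture, and then a cheap full-screen pass (or the next frame's draw) samples it. A hedged D3D11 sketch, assuming `context`, `rtv`, `srv`, `depthStencilView` and `backBufferRTV` already exist (as created above):

```cpp
// Pass 1: draw the scene once, into the offscreen texture.
context->OMSetRenderTargets(1, &rtv, depthStencilView);
// ... scene draw calls ...

// Switch the output to the back buffer; this also unbinds the
// offscreen texture as a render target, which is required before
// reading it (no simultaneous read/write in one draw call).
context->OMSetRenderTargets(1, &backBufferRTV, nullptr);

// Pass 2: bind the same texture as a shader input and draw a
// full-screen quad that samples it. This is not a second render
// of the scene - just one extra quad.
context->PSSetShaderResources(0, 1, &srv);
// ... full-screen quad draw call ...

// Unbind the SRV afterwards so the texture can be a render target again.
ID3D11ShaderResourceView* nullSRV = nullptr;
context->PSSetShaderResources(0, 1, &nullSRV);
```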

This topic is closed to new replies.
