Making a texture render to itself


Hi everyone,

I'm working with post-processing in HLSL, i.e. I pass a texture containing the back buffer to my effect file and then render a sprite using my shader to create various effects. I'm doing a Gaussian blur right now, which means I need to use multiple steps:

- Render scene to texture

- Render a sprite with the texture blurred horizontally

- Render a sprite with the texture blurred vertically

I've seen a few tutorials which seem to imply that you should render the back buffer to one texture (let's call it t_A), then render the horizontal blur to another, separate texture (which I'll call t_B), and then render the final sprite by sampling t_B to perform the vertical blur. However, I decided to try it with only one texture: my program renders the scene to the texture, passes that texture straight into the shader to perform the horizontal blur, then immediately passes it back in to do the final vertical blur, all without changing the render target (and all within the same BeginScene()/EndScene() pair).

So my question is: is it safe to make a texture sample itself in my shader while also rendering onto itself? Of course I pass the texture into the shader before I start drawing again, so I guess it's stored in there, but there must be a reason why the tutorials are using different textures to do different parts of the post-processing.

Thanks!


Did you actually try it? You would realise that it's not possible at all ;)

So yes, there is a good reason why everybody is using two render targets (textures) ;) Even for a long chain of post-processing steps you need just two if you apply a ping-pong method (render to A, sample A and render to B, sample B and render to A, sample A and render to B...), but you need at least two.
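For what it's worth, the ping-pong sequence looks roughly like this in D3D9. This is only a sketch with my own placeholder names (texA/texB, DrawFullscreenQuad, the "sourceTex" parameter and the technique names are all assumptions, not anything from your code):

// Two render-target textures to ping-pong between (placeholder names).
IDirect3DTexture9 *texA = NULL, *texB = NULL;
IDirect3DSurface9 *surfA = NULL, *surfB = NULL;
device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texA, NULL);
device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texB, NULL);
texA->GetSurfaceLevel(0, &surfA);
texB->GetSurfaceLevel(0, &surfB);

// Pass 1: render the scene into A.
device->SetRenderTarget(0, surfA);
RenderScene();

// Pass 2: horizontal blur, reading A while writing B.
device->SetRenderTarget(0, surfB);
effect->SetTexture("sourceTex", texA);
DrawFullscreenQuad(effect, "HorizontalBlur");

// Pass 3: vertical blur, reading B while writing A again.
device->SetRenderTarget(0, surfA);
effect->SetTexture("sourceTex", texB);
DrawFullscreenQuad(effect, "VerticalBlur");

// Final pass: sample A while writing to the back buffer.
device->SetRenderTarget(0, backBufferSurf);
effect->SetTexture("sourceTex", texA);
DrawFullscreenQuad(effect, "Greyscale");

Note that the texture being sampled is never the one currently bound as the render target.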

You cannot read from a texture which currently is assigned as a render target.

Heh, it works when I try it! Here's a horizontal blur, vertical blur and greyscale effect rendered with just one render target:

[attachment=19967:rendering_bw_blur.png]

Here's what I'm doing (there's a rough code sketch after the steps):

1. Set the offscreen texture as the render target, clear it, begin scene

2. Render the scene in colour, un-blurred, to the offscreen target

3. Immediately send the offscreen texture to the shader and render a sprite by sampling the texture to add the horizontal blur

4. Repeat for the vertical blur

5. End the scene and set the back buffer as the render target

6. Render a sprite to the back buffer by sampling the double-blurred texture using the greyscale shader

7. Present the back buffer
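Roughly, in D3D9 calls (a condensed sketch of the sequence above; names like offscreenTex, offscreenSurf and DrawFullscreenQuad are simplified placeholders):

device->SetRenderTarget(0, offscreenSurf);                               // 1
device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
device->BeginScene();
RenderScene();                                                           // 2
effect->SetTexture("sourceTex", offscreenTex);                           // 3: offscreenTex is ALSO
DrawFullscreenQuad(effect, "HorizontalBlur");                            //    the current render target!
effect->SetTexture("sourceTex", offscreenTex);                           // 4
DrawFullscreenQuad(effect, "VerticalBlur");
device->EndScene();                                                      // 5
device->SetRenderTarget(0, backBufferSurf);
device->BeginScene();
effect->SetTexture("sourceTex", offscreenTex);                           // 6
DrawFullscreenQuad(effect, "Greyscale");
device->EndScene();
device->Present(NULL, NULL, NULL, NULL);                                 // 7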

...should this be happening??

EDIT: I get this though when I run in debug mode:

Direct3D9: (WARN) :Can not render to a render target that is also used as a texture. A render target was detected as bound, but couldn't detect if texture was actually used in rendering.

Does this mean that it's actually using a different target? Funny how the effect still happens...

You're provoking undefined behaviour. A blur (or any effect which doesn't do a pixel/texel-perfect match) wouldn't even work correctly if you actually could do this, so I really wonder whether your result is actually correct.

Edit: Use a test-picture (e.g. a single white pixel on a black background) and a big kernel radius.

I'd suspect that the driver is detecting the potentially trouble-causing situation and behind your back it's making a copy of the render target to use as the source texture. If that's the case you get correct rendering results on your machine, but there's no guarantee it'll work on any other driver/hardware and the copy means you get sub-optimal performance.

Just a guess though.

Typically, if you try to bind a texture as a render target while it is bound as an input at the same time, the binding will be released.

You should enable the debug runtime to see if any errors are produced (there should be).

Cheers!

In this situation, I think it's better to create two textures: one for the SRV and one for the RTV. After you render into the RTV texture, you can copy it to the SRV texture with CopyResource. It should be fast since it's a GPU-to-GPU copy. Also, both textures should use default usage, so there's no CPU reading/writing.
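Something like this (a rough D3D11 sketch; rtTex/srTex and the view names are placeholders):

// rtTex is created with D3D11_BIND_RENDER_TARGET and srTex with
// D3D11_BIND_SHADER_RESOURCE, both D3D11_USAGE_DEFAULT and with
// identical size, format and mip count (CopyResource requires it).
context->OMSetRenderTargets(1, &rtv, NULL);    // rtv is a view of rtTex
// ... draw the pass into rtTex ...
context->CopyResource(srTex, rtTex);           // GPU-to-GPU copy, no CPU round trip
context->PSSetShaderResources(0, 1, &srv);     // srv is a view of srTex
// ... the next pass samples srTex while rtTex stays bound as the target ...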

DirectX 11, C++


In this situation, I think it's better to create two textures: one for the SRV and one for the RTV. After you render into the RTV texture, you can copy it to the SRV texture with CopyResource. It should be fast since it's a GPU-to-GPU copy. Also, both textures should use default usage, so there's no CPU reading/writing.

Copying between resources isn't necessary. You may use the same texture as a render target and as a pixel shader resource, just not at the same time.
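In other words (another D3D11 sketch with placeholder names), rebinding is enough and no copy is needed:

context->OMSetRenderTargets(1, &rtvA, NULL);    // write pass: texture A is the output
DrawHorizontalBlur();
context->OMSetRenderTargets(1, &rtvB, NULL);    // binding B as the output unbinds A
context->PSSetShaderResources(0, 1, &srvA);     // so it is now legal to sample A
DrawVerticalBlur();
ID3D11ShaderResourceView* nullSRV = NULL;
context->PSSetShaderResources(0, 1, &nullSRV);  // unbind A before it becomes a target again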

Cheers!

Direct3D9: (WARN) :Can not render to a render target that is also used as a texture. A render target was detected as bound, but couldn't detect if texture was actually used in rendering.

Does this mean that it's actually using a different target? Funny how the effect still happens...

It means it is following the rules stating that a texture cannot be an input and an output at the same time and it is likely being unbound internally from either the output (render target) or the input (sampler) stage. Proceeding this way is undefined behavior, and no, your result is not correct just because you got a final image on the screen that isn’t a mess.


You are required to make 2 separate textures and ping-pong render between them.

Even if it did work, your results would be screwed up by it. A blur requires sampling neighbor texels. If you have already modified those texels (by reading from the texture and writing directly back to it for each pixel, for example) then later blur samples will no longer come from the original data; they will include previously blurred values, which corrupts the blur for the current sample.
So even if the hardware allowed you to read and write a single texture at the same time, you would still need 2 textures. There is no getting around it.
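A tiny CPU analogy of the problem (a self-contained sketch, nothing GPU-specific):

#include <stdio.h>

int main(void) {
    // Box blur of a single white texel into a separate destination: correct.
    float src[5] = { 0.0f, 0.0f, 1.0f, 0.0f, 0.0f };
    float dst[5] = { 0.0f };
    for (int i = 1; i < 4; ++i)
        dst[i] = (src[i - 1] + src[i] + src[i + 1]) / 3.0f;  // always reads original data
    // dst is now { 0, 1/3, 1/3, 1/3, 0 }: the expected result.

    // The same blur done in place: wrong.
    for (int i = 1; i < 4; ++i)
        src[i] = (src[i - 1] + src[i] + src[i + 1]) / 3.0f;  // src[i - 1] was already overwritten
    // src ends up { 0, 1/3, 4/9, 4/27, 0 }: each result feeds into the next.

    for (int i = 0; i < 5; ++i)
        printf("%f  %f\n", dst[i], src[i]);
    return 0;
}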


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

It will work on *some* GPUs as long as you only sample from the current pixel (so a greyscale filter is possible, but a blur filter is a potential race condition).
However, this is not possible on ALL GPUs, so D3D/GL are forced to disallow it and call it undefined behavior. Don't do it if you want your game to work on any PC configuration besides your current one...

