
Writing to Render Target from Itself


Vincent_M    969

I'm trying to do a simple blur shader for glow, and I've run into a snag. I render anything emissive to a texture, and that works fine. Then, I send that texture to my shader and render a fullscreen quad with it while that same texture's FBO is still active. In other words, I'm writing to what is currently being read by the shader.

 

The issue I run into is that it seems to blend the old fragments with the new ones. For example, if I render a pure-white object to the texture, and then use my shader to overwrite that texture with pure black and zero alpha (gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);), the pixels from the original color still show through. Now, if I change the alpha in gl_FragColor to 0.5, it blends the pure-white pixels from the first render with the pure-black pixels, giving me gray...

 

Should I be using two FBOs, and be ping-ponging back and forth?

Hodgman    51324

1) I'm writing to what is currently being read by the shader.

2) Should I be using two FBOs, and be ping-ponging back and forth?

1) In my experience, that's asking for trouble. Some GPUs do actually support this (as long as other pixels aren't reading from the one that you're writing to, which would cause a race-condition), but many GPUs don't support it at all and will do god-knows-what if asked to.

2) Yes.
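
Something along these lines (a rough, untested sketch in plain C with OpenGL; fbo[], tex[], blurProgram, drawEmissiveObjects(), drawFullscreenQuad() and NUM_BLUR_PASSES are all placeholders for your own code):

    GLuint fbo[2], tex[2];   /* two same-sized FBOs with colour textures, created elsewhere */
    int src = 0, dst = 1;

    /* Pass 0: render the emissive objects into tex[src]. */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[src]);
    glClear(GL_COLOR_BUFFER_BIT);
    drawEmissiveObjects();

    /* Blur passes: always read from one texture and write to the other. */
    for (int i = 0; i < NUM_BLUR_PASSES; ++i)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);  /* write target */
        glUseProgram(blurProgram);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, tex[src]);       /* read source  */
        glUniform1i(glGetUniformLocation(blurProgram, "u_texture"), 0);
        drawFullscreenQuad();

        /* Swap roles for the next pass. */
        int tmp = src; src = dst; dst = tmp;
    }

    glBindFramebuffer(GL_FRAMEBUFFER, 0);   /* tex[src] now holds the blurred result */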

spek    1240

You can read while writing (at least on all my nVidia cards), but when you grab neighbor pixels around your current location (thus pixels that might be processed at the same time), you'll get artifacts (read: weird colors). So for blurring effects, which typically grab a region around a source pixel, I'd say play ping-pong.
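
To make the hazard concrete, a typical blur pass runs a fragment shader something like this (GLSL embedded as a C string; the uniform and varying names are just examples). Every one of those neighbor fetches can race with another fragment's write when the source texture is also the current render target:

    static const char *blurFragSrc =
        "uniform sampler2D u_texture;\n"
        "uniform vec2 u_texelSize;   /* 1.0 / texture resolution */\n"
        "varying vec2 v_texCoord;\n"
        "void main()\n"
        "{\n"
        "    /* 3-tap horizontal blur: reads the left and right neighbors too */\n"
        "    vec4 sum = texture2D(u_texture, v_texCoord - vec2(u_texelSize.x, 0.0))\n"
        "             + texture2D(u_texture, v_texCoord)\n"
        "             + texture2D(u_texture, v_texCoord + vec2(u_texelSize.x, 0.0));\n"
        "    gl_FragColor = sum / 3.0;   /* only safe when u_texture is not the bound target */\n"
        "}\n";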

Geometrian    1810

As Hodgman said.

If you want to do it portably, you can use OpenCL to read/write directly to/from a read-write image. This gives you other advantages too (semi-free scatter operations, for one).
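
As a rough sketch of what that looks like (OpenCL C; read_write image arguments need OpenCL 2.0, and each work-item must only touch its own pixel):

    /* Hypothetical kernel: reads and overwrites the same image in place. */
    __kernel void halveBrightness(read_write image2d_t img)
    {
        int2 p = (int2)((int)get_global_id(0), (int)get_global_id(1));
        float4 c = read_imagef(img, p);   /* sampler-less read of this work-item's pixel */
        write_imagef(img, p, c * 0.5f);   /* overwrite it in place */
    }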

 

For your case, a simple shader algorithm, this is overkill. Just ping-pong.

Dark Helmet    173
Check out:

* GLSL : common mistakes#Sampling and Rendering to the Same Texture

NV_texture_barrier might be useful to you on NVidia specifically, but I don't know of a cross-vendor way to support this.
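
With that extension, the usage is basically just a barrier call between the pass that writes the texture and the pass that samples it (sketch only; fboGlow, glowTex and the draw calls are placeholders):

    glBindFramebuffer(GL_FRAMEBUFFER, fboGlow);  /* glowTex is attached to this FBO */
    glBindTexture(GL_TEXTURE_2D, glowTex);       /* ...and also bound for sampling  */

    drawEmissivePass();      /* writes glowTex */
    glTextureBarrierNV();    /* makes those writes visible to later texture fetches */
    drawFullscreenQuad();    /* may now sample glowTex, as long as each fragment only
                                reads/writes its own texel between barriers */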

OpenCL IMO is a non-starter except for limited use cases, as IIRC flipping back and forth requires a full pipeline flush/sync (that is, in the absence of cl_khr_gl_event / ARB_cl_event). An OpenGL Compute Shader is much more interesting in terms of avoiding that overhead, but I'm not an expert on those yet.

Vincent_M    969

Ok, thanks guys. I was thinking along those lines of corruption issues, but I wanted to make sure in case I didn't have to create more render targets and shaders. My system tries to be efficient on both Mac and mobile devices, but I'm thinking about dropping mobile support in favor of higher-end hardware due to performance concerns: branches in shader code, limited fillrate, and the lack of dedicated video memory on mobile devices.

 

Trying to support both seems to inflate development time and hinder the final results.
