
Writing to Render Target from Itself



#1 Vincent_M   Members   -  Reputation: 705


Posted 25 February 2013 - 05:56 AM

I'm trying to do a simple blur shader for glow, and I've run into a snag. I render anything emissive to a texture, and that works fine. Then, I send that texture into my shader and render it to a fullscreen quad while that same rendered texture's FBO is active. In other words, I'm writing to what is currently being read by the shader.

 

The issue I run into is that it seems to blend the old fragments with the new ones. For example, if I render a pure-white object to the texture, and then use my shader to overwrite that texture with pure black and zero alpha (gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);), the pixels from the original color still show through. If I change the alpha in gl_FragColor to 0.5, it blends the pure-white pixels from the first render with the pure-black pixels, giving me gray...

 

Should I be using two FBOs, and be ping-ponging back and forth?



#2 Hodgman   Moderators   -  Reputation: 30926


Posted 25 February 2013 - 06:27 AM

1) I'm writing to what is currently being read by the shader.

2) Should I be using two FBOs, and be ping-ponging back and forth?

1) In my experience, that's asking for trouble. Some GPUs do actually support this (as long as other pixels aren't reading from the one that you're writing to, which would cause a race-condition), but many GPUs don't support it at all and will do god-knows-what if asked to.

2) Yes.
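The ping-ponging Hodgman recommends can be sketched as follows (a minimal sketch in C: the fbo[]/tex[] names and the run_blur_passes helper are placeholders, and the GL calls are left as comments since they need a live context):

```c
#include <assert.h>

/* Ping-pong blur: two FBOs, each with its own color texture. Every pass
 * reads tex[src] and writes into fbo[dst], then the roles swap, so a pass
 * never samples the texture it is currently rendering into. Returns the
 * index of the texture holding the final result. */
int run_blur_passes(int passes)
{
    int src = 0, dst = 1;
    for (int i = 0; i < passes; ++i) {
        /* glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]); */
        /* glBindTexture(GL_TEXTURE_2D, tex[src]);      */
        /* ... draw the fullscreen quad with the blur shader ... */
        int tmp = src;  /* swap read/write roles for the next pass */
        src = dst;
        dst = tmp;
    }
    return src;  /* tex[src] now holds the blurred image */
}
```

After an odd number of passes the result lives in the second texture, after an even number in the first, so the swap bookkeeping matters when you later composite the glow.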



#3 spek   Prime Members   -  Reputation: 997


Posted 25 February 2013 - 07:08 AM

You can read while writing (at least on all my nVidia cards), but when you grab neighbor pixels around your current location (pixels that might be processed at the same time), you'll get artifacts (read: weird colors). So for blurring effects, which typically sample a region around the source pixel, I'd say play ping-pong.
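The neighbor-pixel hazard has a simple CPU analogy (a hypothetical 3-tap box blur in plain C, nothing GL-specific): blurring a buffer in place lets later taps read neighbors that were already overwritten, whereas writing into a second buffer, the 1D equivalent of ping-ponging FBOs, keeps every tap reading the untouched source.

```c
#include <stddef.h>

/* 3-tap box blur written back into the buffer being read: by the time
 * element i is computed, a[i-1] has already been overwritten. This is
 * the 1D analogue of sampling a texture you are rendering into. */
void blur_in_place(float *a, size_t n)
{
    for (size_t i = 1; i + 1 < n; ++i)
        a[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0f;  /* stale a[i-1]! */
}

/* Same blur, but reading src and writing dst ("ping-pong"): every tap
 * sees the original, untouched source values. */
void blur_ping_pong(const float *src, float *dst, size_t n)
{
    dst[0] = src[0];
    dst[n - 1] = src[n - 1];
    for (size_t i = 1; i + 1 < n; ++i)
        dst[i] = (src[i - 1] + src[i] + src[i + 1]) / 3.0f;
}
```

On a GPU the in-place case is worse than this deterministic corruption, since neighboring fragments run concurrently and the stale-vs-fresh read becomes a race.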



#4 Geometrian   Crossbones+   -  Reputation: 1575


Posted 25 February 2013 - 09:52 AM

As Hodgman said.

If you want to do it portably, you can use OpenCL to read/write directly to/from a read-write image. This gives you other advantages too (semi-free scatter operations, for one).

 

For your case, a simple shader algorithm, this is overkill. Just ping-pong.



#5 Dark Helmet   Members   -  Reputation: 173


Posted 25 February 2013 - 08:18 PM

Check out:

* GLSL : common mistakes#Sampling and Rendering to the Same Texture

NV_texture_barrier might be useful to you on NVidia specifically, but I don't know of a cross-vendor way to support this.

OpenCL IMO is a non-starter except for limited use cases, as IIRC flipping back and forth requires a full pipeline flush/sync (that is, in the absence of cl_khr_gl_event / ARB_cl_event). An OpenGL Compute Shader is much more interesting in terms of avoiding that overhead, but I'm not an expert on those yet.

#6 Vincent_M   Members   -  Reputation: 705


Posted 26 February 2013 - 01:31 AM

OK, thanks guys. I was thinking along those lines of corruption issues, but I wanted to make sure in case I didn't have to create more render targets and shaders. My system tries to be efficient on both Mac and mobile devices, but I'm thinking about dropping mobile support in favor of higher-end hardware due to performance concerns: branches in shader code, limited fillrate, and the lack of dedicated video memory on mobile devices.

 

Trying to support both seems to inflate development time and hinder the final results.





