Borisss

blending problem (low buffer precision?)


I'm working on a soft-shadow framework. To compare my fake realtime soft shadows I need one image of a physically correct soft shadow (sampling the area light with many samples, e.g. 1024). To draw the shadows I blend their colour with what is already in the framebuffer. The shadow colour for one sample is (0,0,0, 1.0/flatNum). Unfortunately for flatNum > 12^2 I'm not getting ANY shadow at all, though it works for flatNum = 144. Is this an issue of the GLUT_RGBA buffer having too low precision (8 bits, I assume)? If so, can I somehow set it to a higher precision (as in textures using GL_RGBA16) to avoid 1.0/1024 being rounded to 0? Or am I completely wrong?
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ALPHA | GLUT_DEPTH | GLUT_STENCIL);

float Shadow[4] = {0,0,0,1.0/flatNum};

...


[Edited by - Borisss on May 11, 2007 9:45:13 AM]

Share this post


Link to post
Share on other sites
Quote:
Original post by Borisss
Shadow color for one sample is (0,0,0, 1.0/flatNum). Unfortunately for flatNum > 12^2 I'm not getting ANY shadow at all, it works for flatNum = 144 though.

Is this an issue of GLUT_RGBA buffer with too low precision (8bit I assume)?

It does make sense: once 1.0/x falls below half a quantization step in the 0-255 range (roughly x >= 512), it rounds to zero. However, most hardware now performs these computations at much higher precision, so I would expect a minimal difference. Did you take a screenshot to sample the various colours? How exactly do you blend?
Quote:
Original post by Borisss
If yes, can I somehow set it to higher precision (like in textures using GL_RGBA16) to avoid 1.0/1024 being set to 0?
By using higher-precision render targets, but I've never heard of them being used for this reason. I would rather rethink the algorithm, since it doesn't sound right to me.

I'm using a GeForce 7600GT, so it shouldn't be a problem of old hardware...
(other stuff: glut 3.7.6, Cg toolkit 1.5)

For 144 samples the colour returned from the fragment shader is (0,0,0, 1.0/144); for 169 it is (0,0,0,0). I expected the allocated framebuffer to be as precise as the graphics chip allows; that was probably the first problem.

For other blending (PCF and other 'soft' shadow algorithms) I use basic blending with

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);



I first draw the lit scene, then add the shadows by blending with colour (0,0,0, shadow_alpha)... It can also be done the other way round. For realtime methods this is not a problem, since only a small number of passes is needed.

But this basic blending function doesn't work at all when blended over itself many times. Here I wanted something like (scene_color - scene_color/number_of_shadow_samples) per pass, so that a pixel from which the whole light is hidden ends up fully black. I know this is extremely slow; I only needed the resulting image to compare against the quality of the other methods (PCSS, PCF, etc.). It is more like a ray-tracing approach, where you sample the area light with MANY sample rays and subtract each sample's contribution from the resulting colour depending on whether the sample is visible from the fragment position.

It is implemented as follows:

//fragment shader returns the fragment colour of the lit scene with alpha set to 1.0/number_of_samples

//Cg code, with similar meaning to its OpenGL counterparts
BlendEnable = true;
BlendFunc = int2(SrcAlpha, One);          //srcFactor = srcAlpha, dstFactor = 1
BlendEquation = int(FuncReverseSubtract); //dst = dst - src*srcAlpha (= dst - sceneCol/N here)




I've done the shadow quality comparison with the 144-sample image as the reference. Yet I'm still curious how this ray-tracing approach could be done in OpenGL...

