blending problem (low buffer precision?)

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ALPHA | GLUT_DEPTH | GLUT_STENCIL);
float Shadow[4] = {0, 0, 0, 1.0/flatNum};
...
I'm working on a soft shadow framework. To compare my fake, realtime soft shadows I need a reference image of physically correct soft shadows (sampling the area light with many samples, for example 1024). To draw the shadows I blend their colour with what is already in the framebuffer.
Shadow colour for one sample is (0,0,0, 1.0/flatNum). Unfortunately, for flatNum > 12^2 I'm not getting ANY shadow at all; it works for flatNum = 144, though.
Is this an issue of the GLUT_RGBA buffer having too low precision (8 bits, I assume)? If so, can I somehow request higher precision (like GL_RGBA16 for textures) to avoid 1.0/1024 being rounded to 0?
Or am I completely wrong?
[Edited by - Borisss on May 11, 2007 9:45:13 AM]
Quote:Original post by Borisss
Shadow color for one sample is (0,0,0, 1.0/flatNum). Unfortunately for flatNum > 12^2 I'm not getting ANY shadow at all, it works for flatNum = 144 though.
Is this an issue of GLUT_RGBA buffer with too low precision (8bit I assume)?
It does make sense: in an 8-bit buffer one step is 1/255 ≈ 0.004, so a per-sample alpha of 1/144 ≈ 0.007 is only barely representable. However, most hardware now performs blending at much higher precision, so I would expect only a minimal difference. Did you take a screenshot to sample the various colours? How exactly do you blend?
Quote:Original post by Borisss
If yes, can I somehow set it to higher precision (like in textures using GL_RGBA16) to avoid 1.0/1024 being set to 0?
By using higher precision render targets, but I've never heard of them being used for this reason. I would rather rethink the algorithm, since it doesn't sound right to me.
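For reference, one way to get such a higher precision render target on this class of hardware is an FBO with a GL_RGBA16 colour texture. A minimal sketch, assuming GL_EXT_framebuffer_object is available; width, height and accumTex are placeholder names:

```c
GLuint fbo, accumTex;
glGenTextures(1, &accumTex);
glBindTexture(GL_TEXTURE_2D, accumTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, width, height, 0,
             GL_RGBA, GL_UNSIGNED_SHORT, NULL);     /* 16 bits per channel */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, accumTex, 0);
/* accumulate the shadow passes here, then draw accumTex to the window */
```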
I'm using a GeForce 7600 GT, so old hardware shouldn't be the problem...
(other stuff: glut 3.7.6, Cg toolkit 1.5)
For 144 samples the colour returned from the fragment shader is (0,0,0,1.0/144); for 169 it is (0,0,0,0). I expected the allocated framebuffer to be as precise as the graphics chip allows; that was probably the first problem.
For other methods (PCF and other 'soft' shadow algorithms) I use basic blending with
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I first draw the lit scene, then add the shadows by blending with colour (0,0,0,shadow_alpha)... It can also be done vice versa. For realtime methods this is not a problem, since only a small number of passes is needed.
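The single-pass blend described above would look something like this in plain GL (a sketch; drawShadowGeometry is a hypothetical placeholder for the actual shadow pass):

```c
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(0.0f, 0.0f, 0.0f, shadow_alpha);  /* result = black*a + dst*(1-a) */
drawShadowGeometry();                       /* placeholder for the shadow pass */
glDisable(GL_BLEND);
```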
But this basic blend function doesn't work at all when many blends are layered over each other. Here I wanted something like (scene_color - scene_color/number_of_shadow_samples) per pass, so that a fully occluded fragment ends up fully black. I know this is extremely slow; I only need the resulting image to compare against the quality of the other methods (PCSS, PCF, etc.)... This is more like a ray tracing approach, where you sample the area light with MANY sample rays and subtract each sample's contribution from the resulting colour depending on whether the sample is visible from the fragment position.
It is implemented as follows:
//fragment shader returns the fragment colour of the lit scene with alpha set to 1.0/number_of_samples
//Cg code, with similar meaning to the OpenGL counterparts
BlendEnable   = true;
BlendFunc     = int2(SrcAlpha, One);        //result = dstCol - srcCol*srcAlpha
BlendEquation = int(FuncReverseSubtract);
I've done the shadow quality comparison with only the 144-sample image as the reference. Yet I'm still curious how this ray-tracing-style approach can be done in OpenGL...
This topic is closed to new replies.