I'm copying a texture to another one using FBOs and glCopyTexImage2D / glCopyTexSubImage2D. Both textures are 1024*1024 pixels. It works, except that the image is offset by 27 pixels vertically in the destination texture. If I specify an offset of y = 27 in glCopyTexImage2D, the images match, but of course I lose a strip of the source. This happens with both glCopyTexImage2D and glCopyTexSubImage2D, with or without specifying the margin in width/height. Has anybody experienced something like this? I have an ATI Mobility Radeon X1400. Thanks.
Farfadet
Member Since 18 Mar 2007 · Last Active Aug 09 2011 12:36 AM
Topics I've Started
problem with glCopyTexSubImage
24 November 2010 - 09:13 AM
layers theory
01 November 2010 - 07:34 PM
I've been messing around for some time with rendering a mesh with multiple see-through textures, until I dived into the theory. This is what I came up with:
1) rendering multiple layers
We can define the visible colour Vi of layer i as the colour that would be seen if layers 0 to i were visible and the remaining layers were invisible. Vi is obtained by computing, from the bottom (layer 0) to the top (layer i), the successive values of Vi with the formula:
Vi = Ci·αi + Vi−1·(1 − αi)
with Ci the colour stored in layer i, and αi the alpha value stored in layer i.
This is the classical blending formula, but the destination colour is not the colour of the underlying layer, it is its visible colour. The lowest layer is the background; its alpha is by definition 1, so V0 = C0.
This is easy to implement in a pixel shader: we just need to compute the successive values of Vi, from layer 0 upwards, possibly skipping hidden layers. This is well-known stuff.
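The bottom-up recurrence can be checked with a tiny sketch (illustrative Python on a single colour channel, not the shader code itself):

```python
# Illustrative sketch: bottom-up compositing of single-channel layers,
# each given as a (colour, alpha) pair.
# Layer 0 is the background, whose alpha is 1 by definition, so V0 = C0.
def composite(layers):
    """Return the visible colour of the full stack, bottom to top."""
    visible = layers[0][0]  # V0 = C0
    for colour, alpha in layers[1:]:
        # Vi = Ci*ai + V(i-1)*(1 - ai)
        visible = colour * alpha + visible * (1 - alpha)
    return visible
```

For example, a 0.8 colour at alpha 0.5 over a 0.2 background composites to 0.5.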
2) merging layers
Sometimes we want to merge two layers together, or better said, replace two textures with their blended colours. This is the case, for example, when we paint with a textured brush on one layer. At first sight, all we have to do is use the classical blending formula with factors (αdest, 1 − αdest). Doing this, we get a "halo" around the brush (where the alphas are somewhere between 0 and 1). The reason is that we ignore the source alpha, and we shouldn't.
Let us see how to correctly merge two successive layers together. What we want is to replace the two layers with one and get the same visual result as when they are layered. With the two layers, say layers 3 and 4, the visible colour as defined above is given by:
V3 = C3·α3 + V2·(1 − α3)
V4 = C4·α4 + V3·(1 − α4)
and we want to replace this by layer 3′ (the merge of layers 3 and 4):
V′3 = C′3·α′3 + V2·(1 − α′3)
so that V′3 = V4.
Therefore we can write the identity:
C′3·α′3 + V2·(1 − α′3) = C4·α4 + V3·(1 − α4)
= C4·α4 + (C3·α3 + V2·(1 − α3))·(1 − α4)
= C4·α4 + C3·α3·(1 − α4) + V2·(1 − α3)·(1 − α4)
Since V2 is arbitrary, this can only hold if two conditions are satisfied:
C′3·α′3 = C4·α4 + C3·α3·(1 − α4)
1 − α′3 = (1 − α3)·(1 − α4)
Since α is the opacity, 1 − α is the transparency, so the merged layer's transparency is the product of the two layers' transparencies:
α′3 = 1 − (1 − α3)·(1 − α4)
And the merged color:
C′3 = (C4·α4 + C3·α3·(1 − α4)) / α′3
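As a sanity check, the merge formulas can be verified numerically (illustrative Python on a single channel; the division is undefined when both alphas are 0):

```python
# Numerical check of the merge formulas: compositing the merged layer over
# any background V2 must match layering 3 then 4.
def over(c_top, a_top, v_below):
    """Classical blend: V = C*a + V_below*(1 - a)."""
    return c_top * a_top + v_below * (1 - a_top)

def merge(c3, a3, c4, a4):
    """Merged layer (C'3, a'3); undefined when a'3 == 0."""
    a_m = 1 - (1 - a3) * (1 - a4)
    c_m = (c4 * a4 + c3 * a3 * (1 - a4)) / a_m
    return c_m, a_m

c3, a3, c4, a4, v2 = 0.3, 0.6, 0.9, 0.4, 0.5
v4 = over(c4, a4, over(c3, a3, v2))  # layered result, V4
c_m, a_m = merge(c3, a3, c4, a4)     # over(c_m, a_m, v2) equals v4
```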
So far so good. I'd welcome comments on this from more experienced people.
My problem is to implement this formula in GLSL. I cannot use the fixed OpenGL blending equations for this, so I need to do it in GLSL. The problem is that I need to write the result of the merge into texture 3, corresponding to layer 3, but I also need to do a lookup in texture 3. Can I do simultaneous reads and writes on the same texture? I guess not. Another solution would be to render into a temporary texture and replace texture 3 with this texture when the blending is done. Considering my brush application, I need this to be quick (of course). Is there another option? What's the best way to do this? Any help/advice welcome.
Tangents/normal calculation at the vertices of a subdivided mesh
08 July 2010 - 08:55 PM
Hi,
I've implemented Catmull-Clark subdivision in my app. The only thing left is to compute the tangent-space vectors at the subdivided mesh vertices. Temporarily, the vectors (tangent and normal) are interpolated from the vectors at the base mesh vertices. As expected, this gives noticeable artifacts where the surface curvature is high. Internet searches return a lot on Catmull-Clark and subdivision, but strangely very little on this topic.
What I'm looking for is a method that does two things :
1) provide tangent vectors at subd surface vertices close enough to the tangent plane of the (exact) subd surface (the normal is computed by cross product of the 2 tangent vectors)
2) make it so that these tangent vectors (or vectors derived from them) correspond to the tangent space needed to apply correct bump mapping.
I could of course compute the tangents and normals of the subdivision mesh the same way I compute them for the base mesh, but the CPU cost is prohibitive.
Some details regarding the app:
- polygonal mesh (triangles and quads)
- shaders supporting lighting, color textures and bump maps
- texture coordinates are simply linearly interpolated during subdivision
- sharp and semi-sharp creases (for the former, normal discontinuity across the crease)
- for the base mesh, tangent-space vectors at vertices are computed as follows:
  - compute normals for each polygon
  - compute tangents for each polygon as explained in http://www.terathon.com/code/tangent.html
  - average those values at the vertices, taking into account normal and texture discontinuities (creases, mesh edges and seams)
  - orthogonalize and normalize the vectors (only the tangent and normal are stored per vertex; the binormal is computed in the vertex shader).
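For reference, the per-polygon tangent step above, for a triangle, could be sketched as follows (illustrative Python following the approach in the terathon article; names and data layout are made up for the example, not taken from the app):

```python
# Tangent of one triangle from its positions and texture coordinates.
# Solves e1 = du1*T + dv1*B, e2 = du2*T + dv2*B for T (unnormalised).
def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    e1 = [p1[i] - p0[i] for i in range(3)]  # edge p0 -> p1
    e2 = [p2[i] - p0[i] for i in range(3)]  # edge p0 -> p2
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)  # assumes non-degenerate UVs
    return [(e1[i] * dv2 - e2[i] * dv1) * r for i in range(3)]
```

Per-vertex tangents then come from averaging these over the polygons sharing each vertex (splitting at creases and seams), followed by Gram-Schmidt orthogonalisation against the normal.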
Any link / suggestion / hint would be welcome.
linear system 4*4
02 June 2010 - 08:33 PM
Hi,
What would be the fastest algorithm to solve a linear system of 4 equations in 4 unknowns?
I have the feeling that it must be somewhere between brute-force methods (substitution) and more sophisticated iterative methods.
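For a system this small, a direct method is generally the answer: Gaussian elimination with partial pivoting (or Cramer's rule with hard-coded cofactors) runs in a fixed, tiny number of operations, while iterative methods only pay off for large sparse systems. A minimal sketch of the elimination route (illustrative Python, not tuned code):

```python
def solve4(A, b):
    """Solve the 4x4 system A x = b by Gaussian elimination with partial pivoting."""
    n = 4
    M = [list(A[i]) + [b[i]] for i in range(n)]  # augmented matrix [A | b]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # pivot row, for stability
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):  # eliminate column k below the pivot
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):  # back-substitution
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x
```

In a real-time app the same elimination can be unrolled by hand (or vectorised) since the size is fixed at 4.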
Maximum size of a glsl program
08 May 2010 - 10:25 PM
Hi,
Is there an implementation-dependent limit on the maximum size of a GLSL program? I had an exception in the ATI driver, and just by simplifying my (rather big) fragment shader (without changing uniforms, varyings... just simplifying the calculations), it works just fine.
Thanks