
Farfadet

Member Since 18 Mar 2007
Offline Last Active Aug 09 2011 12:36 AM

Posts I've Made

In Topic: unlimited detail back again

04 August 2011 - 01:59 AM

Right, there are arguments against their marketing approach. But what about the technology?

Polygons are great for rendering anything between large flat surfaces and extremely detailed, irregular shapes (like vegetation, rocks, clouds...). For the latter, you eventually end up rendering polygons that are smaller than a pixel. In that case, voxels/point clouds used in conjunction with a sparse voxel octree data structure (I group these under the term "voxels" in what follows) are better, at least in theory, for a few reasons:

1) the sparse voxel octree is very efficient for rendering. This explains how they could obtain that frame rate without GPU acceleration.

2) the sparse voxel octree is in itself a compressed data structure.

3) you get rid of the real burden of polygons: UV maps, textures, displacement mapping (especially heavy if you consider highly irregular surfaces, clouds, vegetation...). Instead, you just store colour information with each voxel.

The real limitation is that animation is virtually impossible with that technology. So the question is not "which is better?" but rather "which is better for which application?"

In Topic: OpenGL multi-threading problems.

29 December 2010 - 11:54 PM

Maybe this can help:

http://www.equalizergraphics.com/documentation/parallelOpenGLFAQ.html

In short, it says you can't manage one rendering context from multiple threads in OpenGL. I believe this is what you're trying to do.

In Topic: problem with glCopyTexSubImage

24 November 2010 - 09:02 PM

Well, I followed your idea and noticed that the offset is related to the window settings (such as the border margin), and even to the window size: the further the window dimensions are from the texture's (1024*1024), the bigger the offset. This means the copy of the texture attached to the FBO behaves as if the window framebuffer's dimensions were used, not the FBO's, even though it is the FBO/texture image that gets copied. I tried about every possible combination of glViewport, but nothing changes (which is logical, since glViewport affects the rendering pipeline, and here I use glCopyTexSubImage). The strange thing is that after copying, I use the same FBO to render to the texture, with the proper glViewport, and there it works perfectly well. This might simply be a bug in the ATI driver.

Next I'll try copying the texture by rendering it to the destination texture.

Thanks a lot

In Topic: layers theory

08 November 2010 - 07:46 PM

First, sorry it took me so long to answer.

Well, I'm using OpenGL and GLSL, and I render to texture. Putting aside the render-to-texture part (I guess there's not much difference between HLSL and GLSL), I still have a problem with the standard blending formula Cs*as + Cd*(1 - as).

Consider the case:
Cs = (1, 0, 0, 0.5)
Cd = (0, 0, 1, 0)
i.e. the source is partly transparent, and the destination is fully transparent at that pixel.
The standard formula gives:
C = (0.5, 0, 0.5, 0.25)   (a = as*as + ad*(1 - as) = 0.5*0.5 + 0*(1 - 0.5) = 0.25)

The resulting alpha is already strange. I can work around it with glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE), which adds up the alphas; that makes sense for a brush adding paint to a layer. But there is another problem: the resulting colour I get is a dark purple because of the blue contribution, yet there should be NO blue contribution, since the destination layer is fully transparent at that pixel. My conclusion: standard blending is only valid for a fully opaque destination layer. It implicitly assumes a destination alpha of 1, and the proof is that the destination alpha doesn't appear in the colour formula at all. Dividing by alpha will still not give me a good result: the blue will always be there.

My merging formula is fine for merging two layers without changing the visible result, but I'm still not sure it's correct for painting with a brush that has an alpha component.

In Topic: Weird light behaviour in bump mapping shader

12 July 2010 - 07:33 PM

A vertex shader transforms vertices into clip space (screen space) with the modelview-projection matrix, but it also needs to transform direction vectors if they are fixed in world space. This is not obvious: vectors need a different matrix, the normal matrix (gl_NormalMatrix), which is the transpose of the inverse of the modelview's upper-left 3x3. This is explained in the orange book, ch. 6.2.

When I want my light source to be fixed in global space (i.e. turning with the scene), I add the line:

LightPosition = vec3(gl_NormalMatrix * LightPosition);

And I just remove it when I want the light fixed in screen coordinates (always coming from the same apparent direction).

I also apply this transformation to the normal, tangent and binormal vectors:

vec3 t, n, b;
n = normalize(gl_NormalMatrix * gl_Normal);   // normal into eye space
t = normalize(gl_NormalMatrix * Tangent);     // tangent attribute into eye space
b = cross(n, t);                              // binormal; unit length if n and t are orthonormal

