GLSL not working at long distance

Started by
6 comments, last by Ignifex 11 years, 7 months ago
It looks like my shader behaves differently when my camera is at short range (camera ~5 units) versus long range (camera > 5 units).

In this picture, my blue cube light's toon shader is hitting the plane quad at about 40 degrees from the plane and produces a light pink, which is the correct color. At this point you can see my camera is ~5 units from the plane.

[screenshot: 2i0xc3t.jpg]

In this picture, the plane shows its base color, as if the light isn't even there, even though all I did was move my camera out 20 units. It SHOULD be light pink, just as it was when the camera was close to the plane.

[screenshot: 1440fo4.jpg]

This was not a problem with OpenGL's fixed-function lighting.

.vert

varying vec3 normal;
varying vec4 c;

void main()
{
    c = gl_Color;
    normal = gl_NormalMatrix * gl_Normal;
    gl_Position = ftransform();
}


.frag

varying vec3 normal;
varying vec4 c;

void main()
{
    vec3 n = normalize(normal);
    vec3 lightvector = normalize(vec3(gl_LightSource[0].position));
    float intensity = dot(lightvector, n);

    vec4 color;
    if (intensity > 0.90)
        color = c + vec4(1.0, 0.5, 0.5, 1.0);
    else if (intensity > 0.5)
        color = c + vec4(0.6, 0.3, 0.3, 1.0);
    else if (intensity > 0.2)
        color = c + vec4(0.4, 0.2, 0.2, 1.0);
    else
        color = c + vec4(0.2, 0.1, 0.1, 1.0);
    gl_FragColor = color;
}
I see you still have not implemented Ignifex's fix from your practically identical thread? I still believe the light vector should be something along the lines of
lightvector = normalize(lightPosition - vertexPosition);
Trust me, I'll attempt anything that has a possibility of working. I confirmed it didn't work in my program or in my shader designer; it just produced weird results.

[screenshot: 10xuaag.jpg]

His shader suggestion:

.vert

varying vec3 normal;
varying vec4 c;
varying vec4 vertPos;

void main()
{
    c = gl_Color;
    vertPos = gl_Vertex;
    normal = gl_NormalMatrix * gl_Normal;
    gl_Position = ftransform();
}


.frag

varying vec3 normal;
varying vec4 c;
varying vec4 vertPos;

void main()
{
    vec3 n = normalize(normal);
    vec3 lightvector = normalize(gl_LightSource[0].position.xyz - vertPos.xyz);
    float intensity = dot(lightvector, n);

    vec4 color;
    if (intensity > 0.90)
        color = c + vec4(1.0, 0.5, 0.5, 1.0);
    else if (intensity > 0.5)
        color = c + vec4(0.6, 0.3, 0.3, 1.0);
    else if (intensity > 0.2)
        color = c + vec4(0.4, 0.2, 0.2, 1.0);
    else
        color = c + vec4(0.2, 0.1, 0.1, 1.0);
    gl_FragColor = color;
}
Despite the fact that the results are not yet as desired, Ignifex's solution is definitely more correct. One thing that comes immediately to mind is that you are using the input vertex position for lighting. Usually I would expect the vertex position transformed by the modelview matrix to be used for lighting.
It's important to have some understanding of the lighting equation you are using.

The dot product you are computing is Lambert's cosine law, stating that the amount of light that hits a surface is proportional to the cosine of the angle at which it is hit. This dot product is computed between the light vector and the surface normal, both normalized so that the result is the cosine of the angle between them.

This light vector is not the light position "vector", which is hardly a vector at all, just a position. It is the vector pointing from the shaded point to the light source. The shaded point is easiest to find by an interpolation of your vertex positions. Since your normal and vertices are likely in world space, you will need your light source in world space as well, to get correct and view independent results.
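The cosine law and the shader's banding step can be sketched together in C. This is an illustration of the same math, not the poster's code; the `vec3` type and band indices are assumptions for the example:

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y, z; } vec3;

static float v3_dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Lambert's cosine law: with both inputs normalized, the dot product
 * is the cosine of the angle between the surface normal and the
 * direction to the light. */
static float lambert(vec3 n, vec3 l) { return v3_dot(n, l); }

/* Quantize the intensity into the same four bands the shader uses,
 * returning 3 for the brightest band down to 0 for the darkest. */
static int toon_band(float intensity) {
    if (intensity > 0.90f) return 3;
    if (intensity > 0.5f)  return 2;
    if (intensity > 0.2f)  return 1;
    return 0;
}
```

A light hitting the surface head-on gives intensity 1.0 (band 3), while a light at 60 degrees gives cos 60° = 0.5, which falls into band 1 because the 0.5 threshold is strict.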
Is this what you're saying I need to do? (have the light in world space)

.vert

varying vec3 normal;
varying vec4 c;
varying vec4 vertPos;

void main()
{
    c = gl_Color;
    vertPos = gl_Vertex;
    normal = gl_NormalMatrix * gl_Normal;
    gl_Position = ftransform();
}


.frag

varying vec3 normal;
varying vec4 c;
varying vec4 vertPos;

void main()
{
    vec4 lightPos = gl_LightSource[0].position * gl_ModelViewMatrix;
    vec3 n = normalize(normal);
    vec3 lightvector = normalize(lightPos.xyz - vertPos.xyz);
    float intensity = dot(lightvector, n);

    vec4 color;
    if (intensity > 0.90)
        color = c + vec4(1.0, 0.5, 0.5, 1.0);
    else if (intensity > 0.5)
        color = c + vec4(0.6, 0.3, 0.3, 1.0);
    else if (intensity > 0.2)
        color = c + vec4(0.4, 0.2, 0.2, 1.0);
    else
        color = c + vec4(0.2, 0.1, 0.1, 1.0);
    gl_FragColor = color;
}
No. Your light and vertex need to be in the same space. gl_Vertex is in model/object space; multiply it by the model matrix to put it into the world (each model you can see has its own matrix). From there you can multiply it into any camera's view space. The same goes for the sun: the sun vector starts in world space, and multiplying it by a camera's view matrix makes it relative to that camera. The light should be in camera space (the sun knows nothing about objects in the world, so there is no model matrix involved, just the view). Then take the vertex and normal in view space, and both vectors are in view space.
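The chain of transforms described above can be sketched with a column-major 4x4 matrix applied to a point, matching GLSL's `mat4 * vec4` convention. The translation matrix below stands in for an arbitrary model or view matrix; it is an illustration, not code from the thread:

```c
#include <assert.h>

/* Column-major 4x4 matrix, m[col*4 + row], as OpenGL uses. */
typedef struct { float m[16]; } mat4;
typedef struct { float x, y, z, w; } vec4;

/* Multiply a matrix by a point (w = 1 for positions). */
static vec4 mat4_mul_point(mat4 a, vec4 v) {
    vec4 r;
    r.x = a.m[0] * v.x + a.m[4] * v.y + a.m[8]  * v.z + a.m[12] * v.w;
    r.y = a.m[1] * v.x + a.m[5] * v.y + a.m[9]  * v.z + a.m[13] * v.w;
    r.z = a.m[2] * v.x + a.m[6] * v.y + a.m[10] * v.z + a.m[14] * v.w;
    r.w = a.m[3] * v.x + a.m[7] * v.y + a.m[11] * v.z + a.m[15] * v.w;
    return r;
}

/* A translation, standing in for a model or view matrix. */
static mat4 mat4_translate(float tx, float ty, float tz) {
    mat4 a = {{1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  tx, ty, tz, 1}};
    return a;
}
```

Applying a model matrix and then a view matrix built this way moves an object-space vertex into world space and then into camera space; once the light position has gone through the view matrix too, both are in the same space and can be subtracted meaningfully.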

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

You can also do all your computations in view space of course, which works perfectly fine. It is often even a little simpler because of the combined model and view matrices.
A brief overview of the spaces you are using:
Object Space -> World Space -> View Space -> Clip Space
You convert between these like so:
Object Space -[ Modelview Matrix ]-> View Space -[ Perspective Matrix ]-> Clip Space
(the model and view steps are combined into the single modelview matrix). At the moment your lighting computation seems to have all parts properly in view space.

One thing to look out for, though, is gl_LightSource[0].position. If you set this using something like glLightfv(GL_LIGHT0, GL_POSITION, mylightposition), the mylightposition you provide is multiplied by the current modelview matrix before being sent to the GPU. That gives you two options.
First, you can set it after setting up your camera's modelview matrix, in which case you don't have to transform it in your shader, but you will need to update it whenever your modelview matrix changes.
Second, you can use an identity modelview matrix when setting up your light position, which means you will be transforming it into view space in your shader, but you will only need to update it when the light source moves.
Note that multiplying in your shader for each shaded pixel is rather expensive, so I would go for the first choice.
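The cost argument behind that advice can be made concrete with a rough count of matrix multiplies per frame. The numbers here are illustrative assumptions (one multiply per shaded fragment in the worst case), not measurements:

```c
#include <assert.h>

/* Option 1: transform the light into view space once on the CPU,
 * then upload the result; one mat4 multiply per frame. */
static long multiplies_per_frame_on_cpu(void) { return 1; }

/* Option 2: transform it in the fragment shader; the same multiply
 * is repeated for every shaded fragment, up to one per pixel. */
static long multiplies_per_frame_in_shader(long width, long height) {
    return width * height;
}
```

At a 1280x720 resolution that is a single multiply versus roughly 900,000 of them per frame, which is why doing it once on the CPU is preferred.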

If any of this does not make sense to you, please let us know.

