*YouTube video* Bizarre GLSL reflection behavior (coordinate system problems?)

3 comments, last by TheChuckster 16 years, 2 months ago
I'm trying to code up a GLSL shader for water. What I am trying to do here is draw a normal-mapped plane using a cube environment map (and eventually the Fresnel equation). For now, I am just trying to get the reflections to work on a plane that isn't normal mapped at all.

At first, my reflections were moving and rotating with the camera, which isn't natural at all, so I found out that my vectors were in camera coordinates and needed to be in world coordinates. I wrote code to pass the inverse of the camera matrix into the vertex shader, along with the camera position for the view vector. That way, I can build a "model-world" matrix and use it to transform the normal vector and the view vector.

Well, this works well up to a point. The reflections no longer move or rotate with the camera, but when I zoom in or out, the reflections become extremely distorted. I'm not sure if this is natural physical behavior of light, or if I have some serious scaling issues. My bet is the latter. The problem is, I have no idea where to go from here. Perhaps it is caused by a lack of floating-point precision. More likely, I'm overlooking something involving matrix algebra in my code. I haven't taken a linear algebra course yet, so I know barely anything about matrix math or coordinate system transformations outside of what I taught myself from Google.

I recorded a YouTube video of the glitch so that you can see firsthand what is going on. Here is the link:
">
In the video, I look around and then move vertically up and away from the plane. Then I move toward the plane until I'm just barely on top of it. The reflections make it seem like I'm moving a lot faster than I actually am (as if I'm accelerating), but I'm not accelerating at all; I'm moving at a constant velocity the whole time. One more thing: when I zoom in on the plane, I should be seeing less of the environment, and when I zoom out, I should be seeing more of it. It seems like this is backwards in my demo.
uniform vec3 CameraPos;
uniform mat4 CameraInv;
varying vec3 normal;
varying vec3 view;

void main(void)
{
    gl_Position = ftransform();

    // Model-to-world transform: undo the camera part of the modelview matrix.
    mat4 ModelWorld4x4 = CameraInv * gl_ModelViewMatrix;
    mat3 ModelWorld3x3 = mat3(ModelWorld4x4);

    vec4 WorldPos = ModelWorld4x4 * gl_Vertex;

    // World-space normal and view vector (from the camera toward the vertex).
    normal = normalize(ModelWorld3x3 * gl_Normal);
    view = normalize(WorldPos.xyz - CameraPos.xyz);
}
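The fragment shader side is basically just a cube-map lookup with the reflected vector; roughly something like this (a sketch, not necessarily my exact code, assuming the cube map sampler is named EnvMap):

uniform samplerCube EnvMap;
varying vec3 normal;
varying vec3 view;

void main(void)
{
    // 'view' points from the camera toward the surface, so it can be used
    // directly as the incident vector for reflect().
    vec3 reflectDir = reflect(normalize(view), normalize(normal));
    gl_FragColor = textureCube(EnvMap, reflectDir);
}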



[Edited by - TheChuckster on February 1, 2008 3:05:08 PM]
---2x John Carmack
I'm not sure exactly what you're doing in the fragment shader with the information you get from the vertex shader, but AFAIK the standard way of doing environment mapping with a cube map is rather simpler:

// Vertex shader:
varying vec3 ReflectDir;

void main()
{
    gl_Position = ftransform();

    // Standard normal calculation
    vec3 normal = normalize(gl_NormalMatrix * gl_Normal);

    // The position in eye (camera) coordinate space.
    vec4 ecPos = gl_ModelViewMatrix * gl_Vertex;

    // The direction of the eye from the vertex (the incident vector)
    vec3 eyeDir = ecPos.xyz;

    // The reflection of our incident vector from the surface (eyeDir).
    ReflectDir = reflect(eyeDir, normal);
}

// Fragment shader:
uniform samplerCube EnvMap;
varying vec3 ReflectDir;

void main()
{
    // Get the right color from the map, based on our reflection dir.
    vec4 envColor = textureCube(EnvMap, ReflectDir);
    gl_FragColor = envColor;
}


(Code adapted from the Orange Book. I've ignored lighting here for simplicity.)
No, that doesn't work here: everything stays in camera coordinates, so with that code the reflections move (rotate and translate) with the camera. Perhaps the Orange Book assumes you are using a static scene. As a hack, I multiplied the ReflectDir vector by the inverse camera transformation matrix and got proper results:

uniform mat4 CameraInv;
varying vec3 ReflectDir;

void main()
{
    gl_Position = ftransform();

    // Standard normal calculation
    vec3 normal = normalize(gl_NormalMatrix * gl_Normal);

    // The position in eye (camera) coordinate space.
    vec4 ecPos = gl_ModelViewMatrix * gl_Vertex;

    // The direction of the eye from the vertex (the incident vector)
    vec3 eyeDir = ecPos.xyz;

    // The reflection of our incident vector from the surface (eyeDir),
    // rotated back into world space by the inverse camera matrix.
    ReflectDir = mat3(CameraInv) * reflect(eyeDir, normal);
}


However, this is unacceptable if I plan on extending this to support normal mapping and per-fragment lighting. It's important that all of my vectors are ALREADY in the correct coordinate systems, especially once I'm dealing with tangent space and light vectors as well. Also, my Fresnel term will "move with the camera" and be in the wrong coordinate system.
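For reference, the Fresnel term I have in mind is the usual Schlick approximation; a rough sketch of how it could eventually plug into the fragment shader (EnvMap, BaseColor, and F0 are placeholder names, not code I'm actually running yet):

// (Sketch) Schlick's approximation of the Fresnel factor, blending a cube-map
// reflection with a flat base water color.
uniform samplerCube EnvMap;
varying vec3 normal;
varying vec3 view;

const vec4 BaseColor = vec4(0.0, 0.2, 0.3, 1.0);  // deep-water tint (placeholder)
const float F0 = 0.02;                             // reflectance at normal incidence for water

void main(void)
{
    vec3 N = normalize(normal);
    vec3 V = normalize(view);                      // from the camera toward the surface

    vec4 reflection = textureCube(EnvMap, reflect(V, N));

    // Schlick: F = F0 + (1 - F0) * (1 - cos(theta))^5, with cos(theta) = dot(N, toEye).
    float fresnel = F0 + (1.0 - F0) * pow(1.0 - max(dot(N, -V), 0.0), 5.0);

    gl_FragColor = mix(BaseColor, reflection, fresnel);
}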
---2x John Carmack
I also tried bump mapping with per-fragment lighting, hoping to sidestep the problem by setting reflections aside for now. Same coordinate system problems. I have a rotating cube and a static light, but the lit and unlit sides of the cube rotate WITH the cube instead of staying put like they should. I tried multiplying the light vector and the eye vector by the modelview matrix without any luck. I'm trying to keep everything in eye coordinates here...

Here's the YouTube video for this one:
">


Once again, here are my flawed GLSL code listings. The obscure stuff near the beginning of the fragment shader just procedurally generates a spotted bump map.

Vertex:
varying vec3 LightDir;
varying vec3 EyeDir;
attribute vec3 Tangent;
uniform mat4 CameraInv;

void main()
{
    EyeDir = vec3(gl_ModelViewMatrix * gl_Vertex);
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;

    // Derive a tangent basis from the eye-space normal.
    vec3 tangent;
    vec3 binormal;
    vec3 c1 = cross(gl_NormalMatrix * gl_Normal, vec3(0.0, 0.0, 1.0));
    vec3 c2 = cross(gl_NormalMatrix * gl_Normal, vec3(0.0, 1.0, 0.0));

    if (length(c1) > length(c2))
    {
        tangent = c1;
    }
    else
    {
        tangent = c2;
    }

    tangent = normalize(tangent);
    binormal = normalize(cross(gl_NormalMatrix * gl_Normal, tangent));
    vec3 normal = gl_NormalMatrix * gl_Normal;

    // Project the light position and the eye vector onto the tangent basis.
    vec3 lightPos = vec3(gl_ModelViewMatrix * gl_LightSource[0].position);
    vec3 v;
    v.x = dot(lightPos, tangent);
    v.y = dot(lightPos, binormal);
    v.z = dot(lightPos, normal);
    LightDir = normalize(v);

    v.x = dot(EyeDir, tangent);
    v.y = dot(EyeDir, binormal);
    v.z = dot(EyeDir, normal);
    EyeDir = normalize(v);
}


Fragment:
#extension GL_ARB_draw_buffers : enable

uniform sampler2D colorMap;
uniform sampler2D normalMap;
varying vec3 LightDir;
varying vec3 EyeDir;

void main()
{
    // Procedurally generate a spotted bump map.
    vec3 litColor;
    vec2 c = 16.0 * gl_TexCoord[0].st;
    vec2 p = fract(c) - vec2(0.5);

    float d, f;
    d = p.x * p.x + p.y * p.y;
    f = 1.0 / sqrt(d + 1.0);

    if (d >= 0.15)
    {
        p = vec2(0.0);
        f = 1.0;
    }

    vec3 normDelta = vec3(p.x, p.y, 1.0) * f;
    vec4 base = texture2D(colorMap, gl_TexCoord[0].st);

    float Diffuse = max(dot(normDelta, LightDir), 0.0) * 2.0;

    vec3 reflectDir = reflect(LightDir, normDelta);
    float Specular = max(dot(EyeDir, reflectDir), 0.0);
    Specular = pow(Specular, 1.0);

    litColor = base.rgb * Diffuse + Specular;
    vec4 FinalColor = vec4(litColor, 1.0);

    // Clamped color goes to the first target; anything above 1.0 spills into the second.
    gl_FragData[0] = min(FinalColor, 1.0);
    gl_FragData[1] = max(FinalColor - 1.0, 0.0);
}
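For comparison, here is a sketch of how the tangent-space vectors are often set up in the vertex shader, assuming gl_LightSource[0].position was specified with only the camera transform on the modelview stack (so it is already stored in eye coordinates) and that the per-vertex Tangent attribute is used directly instead of the cross-product trick:

// (Sketch, not my current code.)
attribute vec3 Tangent;
varying vec3 LightDir;
varying vec3 EyeDir;

void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;

    // Eye-space position, normal, and tangent basis.
    vec3 ecPos    = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 normal   = normalize(gl_NormalMatrix * gl_Normal);
    vec3 tangent  = normalize(gl_NormalMatrix * Tangent);
    vec3 binormal = normalize(cross(normal, tangent));

    // Light vector from the vertex to the light, both already in eye space.
    vec3 lightVec = gl_LightSource[0].position.xyz - ecPos;

    // Project the eye-space vectors onto the tangent basis.
    LightDir = normalize(vec3(dot(lightVec, tangent),
                              dot(lightVec, binormal),
                              dot(lightVec, normal)));
    EyeDir   = normalize(vec3(dot(ecPos, tangent),
                              dot(ecPos, binormal),
                              dot(ecPos, normal)));
}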
---2x John Carmack
Good news! I sat there and thought about it for a few days, and I realized I could just apply a separate camera matrix when computing gl_Position in my vertex shader, instead of using gluLookAt() to bake the camera into the modelview matrix and mess up all of my light, normal, and position vectors.

gl_Position = gl_ProjectionMatrix * CameraMat * gl_ModelViewMatrix * gl_Vertex;


This makes things a LOT easier than perpetually scratching my head without any linear algebra courses.
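So the modelview matrix now carries only the model (object-to-world) transform, and the camera is applied separately when computing gl_Position. Roughly, the vertex shader ends up looking like this (a sketch; CameraMat and CameraPos are the uniforms I'm assuming get passed in):

uniform mat4 CameraMat;   // world -> camera transform, passed separately
uniform vec3 CameraPos;   // camera position in world space
varying vec3 normal;
varying vec3 view;

void main(void)
{
    // gl_ModelViewMatrix holds only the model transform, so this is world space.
    vec4 worldPos = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * CameraMat * worldPos;

    // World-space normal (assumes no non-uniform scaling in the model matrix)
    // and world-space view vector from the camera toward the vertex.
    normal = normalize(mat3(gl_ModelViewMatrix) * gl_Normal);
    view = normalize(worldPos.xyz - CameraPos);
}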

---2x John Carmack

