The problem is, I don't know if I am properly transforming my normal map to view space in preparation for deferred lighting. The transformed MRT image appears correct, in terms of the positions of the color variations, but the overall coloring of the result is different from what I've seen in tutorials and other examples. In the examples I've seen, the resulting output remains as vibrant as the original normal map image. That isn't the case for me, but I'm not sure what I'm doing wrong.

Here is a comparison image. The left one is a plain render of the normal map texture to the cube. The one on the right has the normal map transformed into view space.

The gradient positions look right in the transformed image, but the overall coloring seems off. I don't know how well gDEbugger visualizes RGBA16F textures, or how to check what the actual values are (all gDEbugger shows me is 0-255 color channel values).
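To sanity-check what those 0-255 values should be, I tried replicating the shader's decode -> normalize -> re-encode chain on the CPU. This is just plain Python with a made-up texel value, not anything from the actual render:

```python
import math

def decode(texel):
    # Mirrors the shader: normal = texture(...) - 0.5
    return [c - 0.5 for c in texel]

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def encode_8bit(n):
    # Mirrors the shader's n * 0.5 + 0.5, then what an 8-bit
    # viewer like gDEbugger would display
    return [round((c * 0.5 + 0.5) * 255) for c in n]

# A "flat" tangent-space normal map texel, i.e. roughly
# (128, 128, 255) in the source image
texel = [0.5, 0.5, 1.0]
print(encode_8bit(normalize(decode(texel))))  # -> [128, 128, 255]
```

So a flat texel should round-trip back to (128, 128, 255) before any TBN transform is applied, which at least gives a baseline to compare against in the viewer.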

Here are the (minimal) shaders I'm using currently, if it helps:

Vertex shader:

#version 150

uniform mat4 worldproj;
uniform mat4 world;

in vec4 position;
in vec3 normal;
in vec3 tangent;
in vec3 bitangent;
in vec2 texture;

out vec3 v2f_normal;
out vec3 v2f_tangent;
out vec3 v2f_bitangent;
out vec2 v2f_texture;

void main()
{
    gl_Position = worldproj * position;
    mat3 worldrot = mat3(world);
    v2f_normal = worldrot * normal;
    v2f_tangent = worldrot * tangent;
    v2f_bitangent = worldrot * bitangent;
    v2f_texture = texture;
}

Fragment shader:

#version 150

in vec3 v2f_normal;
in vec3 v2f_tangent;
in vec3 v2f_bitangent;
in vec2 v2f_texture;

out vec4 outColor;
out vec4 outNormal;
out vec4 outPosition;

uniform sampler2D texture0;
uniform sampler2D texture1;

void main()
{
    outColor = texture2D( texture0, v2f_texture );
    outColor.a = 1;

    vec3 normal = vec3( texture2D( texture1, v2f_texture ) ) - 0.5;

    mat3 t;
    t[0] = v2f_tangent;
    t[1] = -v2f_bitangent;
    t[2] = v2f_normal;
    normal = normalize( normal * t );

    outNormal = vec4( normal * 0.5 + 0.5, 1.0 );
    outPosition = texture2D( texture1, v2f_texture );
}
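One thing I wasn't sure about is the multiply order: GLSL treats `normal * t` as a row vector times the matrix (dotting the normal against each column), while `t * normal` is the usual column-vector transform. Here's a small Python sketch I used to convince myself they differ; the tangent basis and normal values are made up, and the columns follow the shader's `t[0] = tangent`, `t[1] = -bitangent`, `t[2] = normal` layout:

```python
def mat_vec(cols, v):
    # GLSL M * v: linear combination of the matrix columns
    return [sum(cols[j][i] * v[j] for j in range(3)) for i in range(3)]

def vec_mat(v, cols):
    # GLSL v * M: dot of v with each column (row-vector multiply)
    return [sum(v[i] * cols[j][i] for i in range(3)) for j in range(3)]

# Hypothetical rotated tangent frame (not from my actual mesh)
T = [0, 0, 1]
B = [1, 0, 0]
N = [0, 1, 0]
cols = [T, [-b for b in B], N]  # same column order as the shader's mat3 t

n = [0.2, 0.3, 0.9]  # made-up tangent-space normal
print(vec_mat(n, cols))  # what my shader computes: normal * t
print(mat_vec(cols, n))  # the column-vector transform: t * normal
```

For any basis that isn't symmetric the two results come out different, so which order the shader uses actually matters for the final coloring.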