I'm now at the ray-tracing part, which is based on the code you (WFP) posted earlier in this thread (thanks for that, by the way).
But I don't understand how the construction of the screen-space reflection vector
(positionPrimeSS4, positionPrimeSS and reflectSS in your code) can work properly for you.
Here is the code in my GLSL main function to construct that screen space vector:
/* Sample normal buffer and Hi-Z buffer */
// 'vertexTexCoord' is the tex-coord input from the vertex shader
vec3 normal = texture2D(normalBuffer, vertexTexCoord).rgb; // Normal vector in world-space.
float depth = texture2D(hiZBuffer, vertexTexCoord).r; // Take minimum Z value (post-projected Z values in the range [0.0 .. 1.0])
vec3 viewNormal = normalize(mat3(viewMatrix) * normal); // Normal vector in view-space ('normalVS' in your code)
/* Calculate view position in view space */
// This is different from your code. I don't use a view ray.
// Instead I make an inverse projection to reconstruct the pixel position (in view space).
vec4 projPos = vec4(vec2(-1.0, 1.0) + vertexTexCoord.xy*vec2(2.0, -2.0), depth, 1.0); // NOTE: assumes clip-space Z in [0.0 .. 1.0]; with a standard OpenGL projection the depth would have to be remapped to depth*2.0 - 1.0 here
projPos = invProjectionMatrix * projPos;
vec3 viewPos = projPos.xyz/projPos.w; // Pixel position in view-space ('positionVS' in your code)
vec3 viewDir = normalize(viewPos); // View direction ('toPositionVS' in your code)
/* Calculate position and reflection ray in screen space */
vec3 viewReflect = reflect(viewDir, viewNormal); // Reflection vector in view-space ('reflectVS' in your code)
vec4 screenReflectPos = projectionMatrix * vec4(viewPos + viewReflect, 1.0); // <-- PROBLEM
screenReflectPos.xyz /= screenReflectPos.w; // <-- PROBLEM (possible division by zero or negative values)
screenReflectPos.xy = screenReflectPos.xy * vec2(0.5, -0.5) + vec2(0.5);
vec3 screenPos = vec3(vertexTexCoord, depth); // Pixel position in screen-space ('positionSS' in your code)
vec3 screenReflect = screenReflectPos.xyz - screenPos; // Reflection vector in screen-space ('reflectSS' in your code)
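As a side note, one way to verify that the inverse-projection reconstruction above is set up correctly is to re-project "viewPos" and check that it lands back on the original texture coordinate. This is just a sanity-check sketch using the variables from the listing above:

```glsl
/* Optional sanity check (sketch): re-project the reconstructed
   view-space position; it should map back onto the original
   texture coordinate. */
vec4 reproj = projectionMatrix * vec4(viewPos, 1.0);
reproj.xyz /= reproj.w;
vec2 reprojTexCoord = reproj.xy * vec2(0.5, -0.5) + vec2(0.5);
// 'reprojTexCoord' should equal 'vertexTexCoord' (up to precision).
// If it doesn't, the depth convention used to build 'projPos' is
// probably wrong for the projection matrix in use.
```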
I already see the problem here:
"screenReflectPos" is wrong whenever "viewPos.z + viewReflect.z" is less than or equal to zero,
i.e. whenever the view-space endpoint "viewPos + viewReflect" lies behind (or exactly on) the camera plane.
In that case "screenReflectPos.w" is zero or negative, and the perspective divide (screenReflectPos.xyz / screenReflectPos.w) produces a wrong result.
How can I fix this? Or rather, how did you all solve this problem?
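For reference, here is a sketch of the kind of workaround I have in mind (my own assumption, not taken from this thread): shorten the view-space ray so its endpoint always stays in front of the camera before projecting. "nearClip" is a hypothetical epsilon, and the snippet assumes "viewPos.z > nearClip" (the pixel itself is in front of the camera), using the positive-Z-forward convention from the code above:

```glsl
/* Sketch of a workaround: clamp the ray endpoint to stay in front
   of the camera before the perspective divide.
   Assumptions: view-space z is positive in the viewing direction,
   viewPos.z > nearClip, and 'nearClip' is a user-chosen epsilon. */
float nearClip  = 0.01; // small distance in front of the camera (assumption)
float rayLength = 1.0;
if (viewPos.z + viewReflect.z < nearClip)
{
    // Scale the ray so its endpoint lands exactly on z == nearClip.
    rayLength = (nearClip - viewPos.z) / viewReflect.z;
}
vec3 viewReflectEnd   = viewPos + viewReflect * rayLength;
vec4 screenReflectPos = projectionMatrix * vec4(viewReflectEnd, 1.0);
screenReflectPos.xyz /= screenReflectPos.w; // w is now strictly positive
```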
My screen-space here is in the range { [0.0 .. 1.0], [0.0 .. 1.0], [0.0 .. 1.0] } for { x, y, z },
instead of { [0.0 .. resolutionWidth], [0.0 .. resolutionHeight], [0.0 .. 1.0] }.
Here is a visualization of the screen-space reflection vector's X and Y components (Red and Green);
it shows what happens when the camera comes too close to a wall:
In the circle, the vector is erroneously negated.
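A visualization like this can be produced along these lines (sketch; "fragColor" is a hypothetical "out vec4" fragment output, and the remap to [0.0 .. 1.0] is my assumption):

```glsl
/* Debug visualization (sketch): map the screen-space reflection
   vector's X and Y components into [0.0 .. 1.0] and write them
   to the Red and Green channels. */
fragColor = vec4(screenReflect.xy * 0.5 + 0.5, 0.0, 1.0);
```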