Help with GPU Pro 5 Hi-Z Screen Space Reflections

Started by
77 comments, last by WFP 9 years, 3 months ago

Hi everybody,

I also started to implement the Hi-Z Cone Tracing from GPU Pro 5.

As you already mentioned: the code samples are not correct.

The explanation of the Visibility Buffer by "jgrenier" was very helpful :-)

But I'm still wondering a bit about when to use the min versus the max values of the Hi-Z buffer.

I currently have a Hi-Z buffer with the minimum Z value in the red channel and the maximum Z value in the green channel of the texture's MIP maps.

So I compute the hierarchy for both min and max.

When I compute the visibility map I do it like "jgrenier" explained it:


float maxZ = max(max(fineZ.x, fineZ.y), max(fineZ.z, fineZ.w));
float4 integration = (fineZ / maxZ) * visibility;

My "fineZ" takes its Z values from the Red channel (i.e. the minimum values).

But is that correct?

And shouldn't "maxZ" be the maximum of all Green channels (i.e. the maximum values from the finer MIP level)?

I'm now confused about the Hi-Z buffer, because in the book they say to only use the "min" version but in the code samples, they use both.
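For the sake of clarity, here is a small CPU-side sketch (Python; the structure and names are my own, not from the book) of what I mean by computing the hierarchy for both min and max:

```python
# CPU-side sketch of building a min/max Hi-Z pyramid. In the real shader
# this is the 2x2 reduction done once per mip level; here the whole
# pyramid is built in one call for illustration.

def build_hiz(depth):
    """depth: square 2D list (power-of-two side) of post-projection Z.
    Returns a list of mip levels; each texel is a (min_z, max_z) pair,
    i.e. the red and green channels of the Hi-Z texture."""
    size = len(depth)
    mips = [[[(z, z) for z in row] for row in depth]]  # mip 0: min == max
    while size > 1:
        prev = mips[-1]
        size //= 2
        level = []
        for y in range(size):
            row = []
            for x in range(size):
                quad = [prev[2*y][2*x],     prev[2*y][2*x+1],
                        prev[2*y+1][2*x],   prev[2*y+1][2*x+1]]
                row.append((min(q[0] for q in quad),   # red: min Z of 2x2
                            max(q[1] for q in quad)))  # green: max Z of 2x2
            level.append(row)
        mips.append(level)
    return mips

mips = build_hiz([[0.1, 0.2], [0.3, 0.4]])
```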

Thanks in advance,

Lukas


Hi Lukas,

Glad to see there are still a few readers on this thread :). I was actually able to get the code working to walk behind objects using the min/max version of the Hi-Z buffer. There are still a few issues that can creep up when a ray simply doesn't have the data at its endpoint, but my solution resolved a great deal of the missing data (screenshots below).

You're right, in the book the author mentions only using the min version, but then obviously uses the min/max variant in the implementation snippets provided. So when you're building out your Hi-Z buffer, you simply want to track both the minimum and maximum values, and return something like (it sounds like you're doing this correctly, but for the sake of future readers):


return float2(
	min(min(minDepth.r, minDepth.g), min(minDepth.b, minDepth.a)),
	max(max(maxDepth.r, maxDepth.g), max(maxDepth.b, maxDepth.a)));

And then you make a few small updates to the ray tracing step, namely:


// note that this now returns a float2 and samples both the red and green components
float2 getMinimumDepthPlane(float2 ray, float level)
{
	return hiZBuffer.Load(int3(ray.xy, level)).rg;
}

and in the main ray-tracing loop, you update it like so:


// above as before

float2 zminmax = getMinimumDepthPlane(oldCellIdx, level);
		
// intersect only if ray depth is between the minimum and maximum depth planes
float3 tmpRay = intersectDepthPlane(o, d, min(max(ray.z, zminmax.r), zminmax.g));

// below as before

In regards to the visibility buffer, I actually ended up going with what the book provided. When I looked at the results of the visibility buffer in Visual Studio Graphics Debugger, RenderDoc, or NSight, I was seeing a lot of artifacts using the suggested updates. Your mileage may vary, but the provided implementation is what I recommend starting with.

Below are two screenshots. The first is before the code to walk behind objects was added (notice the large empty space between the soldier's hand and foot where rays were incorrectly being blocked), and the second is after the code was added. Like I said, you may still see a few ray misses with the code present, but they are generally due to data that is actually missing because of occlusion, rather than incorrect hits.

Before: https://www.dropbox.com/s/9nj523ieryvfb7a/screenshot_31.png?dl=0

After: https://www.dropbox.com/s/l245n2lpwwreihw/screenshot_32.png?dl=0

Thanks,

WFP

I'm now at the ray-tracing part, which is based on the code you (WFP) posted at the beginning of this thread (thanks for that, by the way :)).

But I don't understand how the construction of the screen-space reflection vector

(positionPrimeSS4, positionPrimeSS and reflectSS in your code) can work properly for you.

Here is the code in my GLSL main function to construct that screen space vector:


/* Sample normal buffer and Hi-Z buffer */
// 'vertexTexCoord' is the tex-coord input from the vertex shader
vec3 normal = texture2D(normalBuffer, vertexTexCoord).rgb; // Normal vector in world-space.
float depth = texture2D(hiZBuffer, vertexTexCoord).r; // Take minimum Z value (post-projected Z values in the range [0.0 .. 1.0])

vec3 viewNormal = normalize(mat3(viewMatrix) * normal); // Normal vector in view-space ('normalVS' in your code)

/* Calculate view position in view space */
// This is different from your code. I don't use a view-ray.
// Instead I make an inverse projection to reconstruct the pixel position (in view space).
vec4 projPos = vec4(vec2(-1.0, 1.0) + vertexTexCoord.xy*vec2(2.0, -2.0), depth, 1.0);
projPos = invProjectionMatrix * projPos;

vec3 viewPos = projPos.xyz/projPos.w; // Pixel position in view-space ('positionVS' in your code)
vec3 viewDir = normalize(viewPos); // View direction ('toPositionVS' in your code)

/* Calculate position and reflection ray in screen space */
vec3 viewReflect = reflect(viewDir, viewNormal); // Reflection vector in view-space ('reflectVS' in your code)

vec4 screenReflectPos = projectionMatrix * vec4(viewPos + viewReflect, 1.0); // <-- PROBLEM
screenReflectPos.xyz /= screenReflectPos.w; // <-- PROBLEM (possible division by zero or negative values)
screenReflectPos.xy = screenReflectPos.xy * vec2(0.5, -0.5) + vec2(0.5);

vec3 screenPos = vec3(vertexTexCoord, depth); // Pixel position in screen-space ('positionSS' in your code)
vec3 screenReflect = screenReflectPos.xyz - screenPos; // Reflection vector in screen-space ('reflectSS' in your code)

I already see the problem here:

"screenReflectPos" is already wrong when "viewPos.z + viewReflect.z" is less than or equal to zero.

That is to say, when the Z distance between a rendered pixel and the camera is smaller than the Z component of "viewReflect",

"screenReflectPos" ends up behind the camera.

In that case, the perspective divide (screenReflectPos.xyz / screenReflectPos.w) produces a wrong result.

How can I fix this? Or rather how did you all fix that problem?

My screen-space here is in the range { [0.0 .. 1.0], [0.0 .. 1.0], [0.0 .. 1.0] } for { x, y, z },

instead of { [0.0 .. resolutionWidth], [0.0 .. resolutionHeight], [0.0 .. 1.0] }.
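To make the issue concrete, here is a minimal sketch (Python; the simplified projection with w = z is my own assumption, just to isolate the divide) of how the perspective divide flips once a point ends up behind the camera:

```python
# Minimal sketch of the behind-the-camera problem, using a positive-Z-
# forward convention. For a point in front of the camera (z > 0) the
# perspective divide is well behaved; once z <= 0 the sign of w flips
# (or w is zero) and the projected position lands mirrored on the wrong
# side of the screen.

def project_x(x, z, focal=1.0):
    """Project the x coordinate of a view-space point; w equals z here."""
    w = z
    if w == 0.0:
        raise ZeroDivisionError("point lies exactly on the camera plane")
    return (focal * x) / w

in_front = project_x(1.0,  2.0)   # well-defined projection
behind   = project_x(1.0, -2.0)   # mirrored to the wrong side
```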

Here is a visualization of the screen-space reflection vector's X and Y components (Red and Green);

it shows what happens when the camera comes too close to a wall:

[Image: DevShot 03 (Screen Space Refle…)]

In the circle, the vector is erroneously negated.

Hi Lukas,

The way I handled rays that point back towards the camera is simply to disallow them. This is one of the concessions I made in trying to keep my sanity intact while battling some of the artifacts I was getting with the effect initially. Probably a better approach (and one I may revisit in the future) would be to fade the effect, where a ray pointing directly back at the camera would have zero contribution, and rays pointing almost perpendicular to the look vector but still pointing backwards slightly would only be faded a little. Then blend the faded result with your fallback method (local cubemaps, etc.).

Below is what I currently use to lop off rays pointing back to the camera


float3 reflectVS = reflect(toPositionVS, normalVS);
// reject pixels pointing back to camera - update to fade instead of outright cutoff
float rDotV = saturate(dot(reflectVS, toPositionVS));
if(rDotV < 0.1f)
{
	return 0.0f;
}

With that in place, a situation where viewPos.z + viewReflect.z <= 0 shouldn't be possible. A potentially better approach would be something like below (untested):


float3 reflectVS = reflect(toPositionVS, normalVS);
//fade pixels fading back
float rDotV = dot(reflectVS, toPositionVS);
float fade = 0.0f;
if(rDotV < 0.0f)
{
    // when rDotV is -1.0f, fade completely, else interpolate to no fade as ray becomes perpendicular
    fade = lerp(0.0f, 1.0f, abs(rDotV));
}

If you go that route, you would want to hold off applying the fade until after the ray-marching loop - in other words, you need something to apply the fade to - as opposed to the way I do now and use it as an early exit.
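For illustration, the same fade idea as a CPU-side sketch (Python, names mine):

```python
# Sketch of the fade idea: rays pointing straight back at the camera
# (rDotV near -1) contribute nothing, rays nearly perpendicular to the
# view direction (rDotV near 0) are barely faded, and forward-pointing
# rays are not faded at all.

def reflection_fade(r_dot_v):
    """r_dot_v: dot(reflectVS, toPositionVS), both normalized, in [-1, 1].
    Returns how much of the reflection to fade out (0 = keep all,
    1 = fade completely)."""
    if r_dot_v >= 0.0:
        return 0.0        # pointing away from the camera: no fade
    return -r_dot_v       # equivalent to lerp(0, 1, abs(r_dot_v))
```

The returned factor would then be applied after the ray-marching loop, as described above.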

Let me know if you have any questions or if I missed something.

Thanks,

WFP

Ok thanks. Now I fixed that problem by scaling the reflection offset position by a factor of 0.01:


vec3 viewReflect      = reflect(viewDir, viewNormal);
vec3 screenPos        = vec3(vertexTexCoord, depth);
vec3 screenReflectPos = ProjectPoint(viewPos + viewReflect * 0.01);
vec3 screenReflect    = screenReflectPos.xyz - screenPos;

Another thing I thought was wrong is the usage of "oldCellIdx":


// get the new cell number as well
const float2 newCellIdx = getCell(tmpRay.xy, cellCount);

// if the new cell number is different from the old cell number, a cell was crossed
if(crossedCellBoundary(oldCellIdx, newCellIdx))
{
	// intersect the boundary of that cell instead, and go up a level for taking a larger step next iteration
	tmpRay = intersectCellBoundary(o, d, oldCellIdx, cellCount.xy, crossStep.xy, crossOffset.xy);
	level = min(HIZ_MAX_LEVEL, level + 2.0f);
}

Shouldn't "newCellIdx" be used for the "intersectCellBoundary" function call instead of "oldCellIdx"?

But it looks even worse when I do that.

I still don't understand how to set the correct values for "crossOffset".

I changed HIZ_CROSS_EPSILON to a small value, since "epsilon" typically represents a small number such as 0.0001 or so.

So why do you use such a large value?


static const float2 HIZ_CROSS_EPSILON = float2(texelWidth, texelHeight);

Here is a picture of the current state of my Hi-Z ray-marching (everything is fully specular):

[Image: DevShot 05 (Wrong Ray Marching…)]

Hi Lukas,

Glad you got the first issue taken care of.

The reason to use oldCellIdx is that newCellIdx could be too far a jump (probably what you're seeing when you say it looks worse). newCellIdx not being equal to oldCellIdx simply tells you that you do in fact need to keep moving along the ray, but you pass oldCellIdx to intersectCellBoundary so you don't trace too far in one jump and miss something.

As far as HIZ_CROSS_EPSILON goes, remember that I'm using texel width and height, not texture width and height. In other words, I'm using the equivalent of:


static const float2 HIZ_CROSS_EPSILON = float2(1.0f / textureWidth, 1.0f / textureHeight);

But it turns out that even that was too large of a step in many cases. I've found the below to give a good result:


static const float2 HIZ_CROSS_EPSILON = float2(texelWidth, texelHeight) * 0.5f; // just needs to make sure in the next pixel, not skipping them entirely

You're right, you do want it to be a small value - in fact, it needs to be just enough to ensure you're in the next pixel along the ray.
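As a quick sanity check on the half-texel offset, here's the arithmetic in a small Python sketch (the mip width of 8 is just an assumed value):

```python
# Sketch: crossing a cell boundary with a half-texel epsilon.
# Cells are 1/width wide in UV space; nudging the position at a boundary
# by half a texel puts the sample inside the neighbouring cell without
# skipping past it.

width = 8                  # assumed mip width in texels
texel = 1.0 / width
epsilon = texel * 0.5      # HIZ_CROSS_EPSILON for one axis

def cell_index(u):
    """One-axis equivalent of getCell(): which cell a UV coordinate is in."""
    return int(u * width)

boundary = 3 * texel       # exact boundary between cells 2 and 3
```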

Hope that helps and let me know if you have any more questions.

Thanks,

WFP

I've found a bug in my Hi-Z Buffer Generation process:

I wrote min and max Z values but read back only the min values, so I never propagated the max values up through the MIP levels.

But I have a new issue:

I presume you all know the following equation from the book at page 161:


Ray(t) = O + D * t

The picture on the next page shows how this can be used. In the HiZTrace function we define our "D" like this:


float3 d = v.xyz / v.z;

This ensures that d.z is always 1.0, which means we can now interpolate easily through the entire Z buffer with this ray.

So far so good, but this only works for rays V whose Z component is not close to zero.

Look at the picture on page 162 and imagine a ray V whose Z component is very close to, or even equal to, zero.

But we will definitely have those rays: imagine a camera looking straight at a wall while the floor is still visible in screen space.

The floor should then be (partially) reflected in the wall. The reflection rays on the wall point back towards the camera, while the rays on the floor point towards the wall.

So we have both positive and negative Z values among the reflection rays.

I made a visualization of these Z values (red is positive, blue is negative and thus black is zero):

[Image: DevShot 08 (Reflect Z Visualiz…)]

With a simple ray-marching algorithm such a scenario is no problem, but with the Hi-Z ray tracing described in GPU Pro 5 this is not possible.

And this problem has not been mentioned in the book. Correct me if I'm wrong.
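To show the blow-up numerically, here is a small sketch (Python; the ray values are arbitrary):

```python
# Sketch: normalizing the ray direction by its Z component, as in
# "float3 d = v.xyz / v.z;". For a healthy ray this gives d.z == 1, so
# stepping in t walks the depth range directly. As v.z approaches zero,
# the XY slopes explode and a single depth step covers a huge
# screen-space distance.

def normalize_by_z(v):
    vx, vy, vz = v
    return (vx / vz, vy / vz, 1.0)

ok  = normalize_by_z((0.3, 0.1, 0.5))    # modest slopes
bad = normalize_by_z((0.3, 0.1, 1e-6))   # xy slopes explode
```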

How did you solve those situations?

Hi Lukas, I was also very annoyed that the case where the Z component approaches zero fails. I worked around it by changing the algorithm a bit on my side: I simply advance the ray from its current position instead of starting from the 'o' position reprojected onto the near clipping plane (which requires the division by the Z component). I also handle ray-marching backwards with only the min Hi-Z structure (i.e. supporting reflection rays with negative Z components). Here's a code snippet.


float3 intersectDepthPlane(float3 o, float3 d, float t)
{
	return o + d * t;
}

// Advance the ray from its current position by a depth delta t, instead of
// re-deriving it from the origin; only normalize by d.z when it is positive.
float3 advanceRay(float3 o, float3 d, float t)
{
	if(d.z > 0) {
		d /= abs(d.z);
		return o + d * t;
	} else {
		return o;
	}
}

float1 level = HIZ_START_LEVEL;
float2 hiZSize = getCellCount(level);

float3 ray = p.xyz;
float3 d = v;

float1 cross_epsilon = ((1.0 / 512.0) / 2.0) * ssr_cross_threshold * 0.01;
float2 cross_step = float2(d.x >= 0.0f ? 1.0f : -1.0f, d.y >= 0.0f ? 1.0f : -1.0f);
float2 cross_offset = float2(cross_step.xy) * cross_epsilon;
cross_step.xy = saturate(cross_step.xy);

float2 rayCell = getCell(ray.xy, hiZSize.xy);
ray = intersectCellBoundary(ray, d, rayCell, hiZSize.xy, cross_step.xy, cross_offset.xy);

// =======================================================================================
// RAY-MARCHING - RAY-MARCHING - RAY-MARCHING - RAY-MARCHING - RAY-MARCHING - RAY-MARCHING
// RAY-MARCHING - RAY-MARCHING - RAY-MARCHING - RAY-MARCHING - RAY-MARCHING - RAY-MARCHING

float min_z;
int iterations = 0;
while(level >= HIZ_STOP_LEVEL && iterations < MAX_ITERATIONS)
{
	// get the minimum depth plane in which the current ray resides
	min_z = getMinimumDepthPlane(ray.xy, level);
	float max_z = getMaximumDepthPlane(ray.xy, level);

	// get the cell number of the current ray
	float2 cell_count = getCellCount(level);
	float2 old_cell_id = getCell(ray.xy, cell_count);

	// intersect only if ray depth is below the minimum depth plane
	float3 debug_point_color = float3(0,0,0);
	float3 tmp_ray = ray;

	[branch]
	if(v.z > 0) {
		if((min_z - ray.z) > 0){
			tmp_ray = advanceRay(ray, d, min_z-ray.z);
		}
		float2 new_cell_id = getCell(tmp_ray.xy, cell_count);
		if(crossedCellBoundary(old_cell_id, new_cell_id)){
			tmp_ray = intersectCellBoundary(ray, d, old_cell_id, cell_count.xy, cross_step.xy, cross_offset.xy);
			level = min(HIZ_MAX_LEVEL, level + 2.0f);
		}

	}
	else{
		if(ray.z < min_z) {
			tmp_ray = intersectCellBoundary(ray, d, old_cell_id, cell_count.xy, cross_step.xy, cross_offset.xy);
			level = min(HIZ_MAX_LEVEL, level + 2.0f);
		}
	}

	ray.xyz = tmp_ray.xyz;
	--level;
	++iterations;

}

Ok thanks for your code sample.

Could you please post me the latest state of your "intersectCellBoundary" function implementation?

I still have what WFP posted at the very beginning of this thread (converted to GLSL):


vec3 IntersectCellBoundary(vec3 pos, vec3 dir, vec2 cellIndex, vec2 cellCount, vec2 crossStep, vec2 crossOffset)
{
    vec2 cell = cellIndex + crossStep;
    cell /= cellCount;
    cell += crossOffset;

    vec2 delta = cell - pos.xy;
    delta /= dir.xy;

    float t = min(delta.x, delta.y);

    return IntersectDepthPlane(pos, dir, t);
}

And I see another issue at the line:


delta /= dir.xy;

Guess what happens when dir.x or dir.y is nearly zero...

I'm currently writing lots of debugging code. Such as GLSL storage buffer objects to get more precise debug output from the shader.

And visualizations of the ray marching iterations.

But I still get stuck with the "intersectCellBoundary" function :(

Kind regards,

Lukas

EDIT:

I added more debugging visualizations again. In the following screenshot you can see, that the Ray Marching is moving too far.

The green line shows where the hit point should actually be detected on the floor. However, the last ray point is visible at the small white box on the lower right:

[Image: DevShot 09 (Ray March Visualiz…)]

The only thing I can imagine which is wrong, is the "intersectCellBoundary" function. But I don't understand what these lines mean:


vec2 delta = cell - pos.xy;
delta /= dir.xy;
float t = min(delta.x, delta.y);

Can anyone explain to me what these lines are for?

And could you please post me your latest version of the "intersectCellBoundary" function?

Thanks in advance,

Lukas

Hi Lukas,

Sorry you're still having trouble with the algorithm, but I'm not sure it's in your intersectCellBoundary method. This is what I'm currently using and the results I'm getting are pretty decent.


float3 intersectCellBoundary(float3 o, float3 d, float2 cellIndex, float2 cellCount, float2 crossStep, float2 crossOffset)
{
	float2 index = cellIndex + crossStep;
	index /= cellCount;
	index += crossOffset;
	float2 delta = index - o.xy;
	delta /= d.xy;
	float t = min(delta.x, delta.y);
	return intersectDepthPlane(o, d, t);
}

Regarding these lines


float2 delta = index - o.xy;
delta /= d.xy;
float t = min(delta.x, delta.y);
return intersectDepthPlane(o, d, t);

think of it like this: "index" is the position of the next cell boundary (nudged by crossOffset so you land inside the neighbouring cell), and "delta" is the vector from the ray's current XY position to that boundary. Dividing delta by d.xy converts each component into a parametric distance t along the ray, i.e. how far you have to travel (as a fraction of d) before crossing that axis's boundary. Taking the minimum of the two picks the boundary the ray crosses first.
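If it helps, that explanation can be checked numerically with a small CPU-side sketch (Python; the ray values are made up):

```python
# Sketch of the parametric step in intersectCellBoundary: for each axis,
# t is how far along the ray (as a fraction of d) the next cell boundary
# lies; the smaller t belongs to the boundary the ray crosses first.

def boundary_t(o_xy, d_xy, boundary_xy):
    """t value at which the ray o + d*t reaches the nearer cell boundary."""
    tx = (boundary_xy[0] - o_xy[0]) / d_xy[0]
    ty = (boundary_xy[1] - o_xy[1]) / d_xy[1]
    return min(tx, ty)

def step_ray(o, d, t):
    """intersectDepthPlane equivalent: advance the ray to parameter t."""
    return tuple(oc + dc * t for oc, dc in zip(o, d))

o = (0.10, 0.10, 0.0)
d = (0.40, 0.20, 1.0)                         # d.z normalized to 1
t = boundary_t(o[:2], d[:2], (0.25, 0.25))    # next boundary on both axes
hit = step_ray(o, d, t)                       # lands exactly on the x boundary
```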

At this time, I'm unsure where the artifacts in the image you posted are coming from, but I hope the above explanation helps clear some things up for you. :)

Thanks,

WFP

This topic is closed to new replies.
