Screen-space reflections camera angle issues

Started by
2 comments, last by Krypt0n 10 years, 7 months ago

I've tried to implement screen-space reflections and it seems to work but only at a very tight reflection angle:

[attachment=17858:giboxssr0.jpg]

Here's what happens when I increase the camera angle to the surface:

[attachment=17857:giboxssr1.jpg]

And if you look at the image below, when I am facing the surface, it just renders the entire scene from the camera's point of view, instead of a reflection:

[attachment=17856:giboxssr2.jpg]

I understand that SSR only renders what the camera can see, so at larger angles to a surface the reflection will disappear; however, in my case it only seems to work at very tight angles - I've seen implementations that still show a correct reflection at larger angles.

Here's my code:


	vec4 bColor = vec4(0.0);
	float reflDist = 0.0;
	vec3 screenSpacePos;
	E = normalize(camPos - gsout.worldPos.xyz);
	reflDir = normalize(reflect(-E, bumpN));

	for(int i = 0; i < 20; i++)
	{
		// March along the reflection ray in world space and project each sample.
		vec3 samplePos = gsout.worldPos.xyz + reflDir*reflDist;
		vec4 clipSpace = proj*view*vec4(samplePos, 1.0);
		vec3 NDCSpace = clipSpace.xyz/clipSpace.w;
		screenSpacePos = 0.5*NDCSpace + 0.5;

		// reflTex.w stores length(camPos - worldPos), so the ray sample must be
		// compared in the same metric - a camera-to-sample distance - rather than
		// the clip-space z the original code used.
		float sampleDepth = texture(reflTex, screenSpacePos.xy).w;
		float currDepth = length(camPos - samplePos);
		float diff = currDepth - sampleDepth;
		if(diff < 0.0)
			bColor.xyz = texture(reflTex, screenSpacePos.xy).xyz;
		reflDist += 0.1;
	}

The reflTex stores the screen-space rendering of the scene in its xyz components and length(camPos.xyz - worldPos.xyz) in its w component, in a 32-bit floating-point texture.
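For context, the pre-pass that fills reflTex that way might look roughly like this (a sketch only; the varying/uniform names are illustrative, not from my actual code):

```glsl
#version 330 core

uniform vec3 camPos;   // world-space camera position
in vec3 worldPos;      // interpolated world-space fragment position
in vec3 shadedColor;   // lit scene color computed earlier in the pipeline

out vec4 fragColor;    // bound to a 32-bit floating-point RGBA target

void main()
{
	// xyz: the rendered scene color; w: camera-to-surface distance,
	// which the SSR pass later compares against the marched ray's distance.
	fragColor = vec4(shadedColor, length(camPos - worldPos));
}
```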

Would anyone be able to give me some tips on what I may be doing wrong?


reflDist += 0.1;

that seems quite random, doesn't it? depending on the scene's scale and distance etc. you could pretty much sample the same pixel every time. the distance you add should be about the size of a pixel.

or the other way around, for optimal results: project the start and end of the reflection ray to the screen, then step by pixel distance and calculate for every pixel the depth of the ray.

(ok ok, to be really optimal, you'd need to extend the reflection ray to "infinity", clip it by the near or far plane depending on orientation, project it, clip it by the screen rect, and then step pixel by pixel with some line algorithm, e.g. Bresenham. But the above should already give you more correct results ;) )
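A rough GLSL sketch of that idea - projecting both ray endpoints and stepping roughly one pixel at a time - might look like this (all names such as maxRayDist and screenSize are illustrative assumptions, and the depth interpolation is a simplification, not perspective-correct):

```glsl
// Screen-space marching sketch. Assumes reflTex.w stores the
// camera-to-surface distance, as described in the original post.
vec3 startWorld = gsout.worldPos.xyz;
vec3 endWorld   = startWorld + reflDir * maxRayDist;  // maxRayDist: chosen ray length

vec4 startClip = proj * view * vec4(startWorld, 1.0);
vec4 endClip   = proj * view * vec4(endWorld,   1.0);
vec2 startUV   = 0.5 * (startClip.xy / startClip.w) + 0.5;
vec2 endUV     = 0.5 * (endClip.xy   / endClip.w)   + 0.5;

// Choose the step count from the projected length in pixels
// (screenSize is a uniform holding the render-target resolution).
float pixelCount = length((endUV - startUV) * screenSize);
int   steps      = int(min(pixelCount, 256.0));

for (int i = 1; i <= steps; i++)
{
	float t  = float(i) / float(steps);
	vec2  uv = mix(startUV, endUV, t);

	// Linear interpolation of the world position in t is not
	// perspective-correct, but is close enough for a sketch.
	vec3  rayWorld  = mix(startWorld, endWorld, t);
	float rayDist   = length(camPos - rayWorld);
	float sceneDist = texture(reflTex, uv).w;

	if (rayDist > sceneDist)  // ray passed behind the stored surface: hit
	{
		bColor.xyz = texture(reflTex, uv).xyz;
		break;
	}
}
```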

reflDist += 0.1;

that seems quite random, doesn't it? depending on the scene's scale and distance etc. you could pretty much sample the same pixel every time. the distance you add should be about the size of a pixel.

I'm not quite sure what you are saying, because reflDist += 0.1 means that I am adding 0.1 world-space units along the ray direction. I add this to the starting position of the reflection ray - i.e. the world position of the pixel I am looking at. To give you a sense of scale, my entire scene is about 2.0x2.0x2.0 world-space units.

I'm saying

1. 0.1 might advance you exactly 0 pixels in the worst case: even 1.0 in world space, at a glancing angle to the surface, might not move far in screen space if you are far away.

2. 0.1 might advance you 10 pixels at a time, making you skip the object you actually want to reflect.

besides that, your code looks algorithmically correct (ignoring optimization opportunities :) )

This topic is closed to new replies.
