We're not using screen-space normals though, they're regular world-space normals. The problem is coming from the lighting equation making it black on the side that's facing away from the light. Actually... in the process of writing this I had an idea: there's no reason I need to do the lighting equation with the normal that comes out of the normal mapping. I switched it to use the flat plane's normal instead and now the black is gone. Thanks for the suggestions, guys.
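In case anyone else runs into this, the change was roughly the following (a simplified sketch, not our exact code; the variable names are stand-ins for whatever your shader uses):

// Diffuse lighting uses the flat water-plane normal, so the back of a wave is
// never treated as facing away from the light; the reflection lookup still
// uses the normal-mapped normal so the surface detail is kept.
float3 flatNormal = float3(0, 1, 0);
float ndotl = saturate(dot(flatNormal, lightDirection));
float3 diffuse = LightColor.rgb * ndotl;
float3 reflection = EnvironmentMap.Sample(linearSampler, reflect(-viewDirection, bumpedNormal)).rgb;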
Ok, two more issues. The lighting equation is causing the backside of waves to turn black:
Also, this is something that's been around for a while: one side of the wave (I'm guessing the back) gets a strong reflection while the other side gets a strong refraction. Near the viewer it creates a kind of spotted, streaky effect where the water is mostly transparent with patches of strong blue (from the sky) mixed in. This seems pretty unnatural to me. Any suggestions?
And again with the water set to an orange color (in this case there are also some strong orange spots):
I'm working on tweaking our water shader to get a nicer look out of it, and there are a couple of issues I'm not sure how to solve. The main one is that the colorization, while nice during the day, makes the water look lit up at night, which is very unnatural:
All I'm doing is lerping between the sampled refraction color and a water color parameter. How can I make it so it isn't "creating light"?
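It's basically just this (a simplified sketch; WaterColor and the alpha blend weight are stand-ins for our actual parameters):

// WaterColor is a constant, so at night it can end up brighter than the
// refracted scene behind the surface, which is what makes the water "glow".
float3 refraction = RefractionMap.Sample(linearSampler, refractionUV).rgb;
float3 color = lerp(refraction, WaterColor.rgb, WaterColor.a);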
Another issue I'm having is the specular: it works during the day but creates black spots at night. Running it through the shader debugger, I found it was returning QNANs, but I can't understand why; I have an if-statement that should return 0 specular whenever the dot product comes out negative:
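Here's a simplified sketch of it (variable names are stand-ins for ours):

// The guard is supposed to keep pow() from ever seeing a negative base,
// but the debugger still shows QNANs coming out at night.
float ndoth = dot(normal, halfVector);
float specular = 0;
if (ndoth > 0)
    specular = pow(ndoth, SpecularPower);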
One last issue I'm working on is making the water color strength vary with the depth of the water. I haven't been able to find any documentation on this, so I've mostly been making something up and experimenting to see what looks good. What I've done so far is sample the terrain depth buffer along the view ray, subtract the depth from the camera to the water surface, and feed the difference into an exponential fog equation. It works pretty well, but it seems a bit strange that shallow water becomes very foggy when viewed at a grazing angle. Any advice for this?
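For reference, the code is roughly this (a simplified sketch; LinearizeDepth stands in for however we turn the depth sample back into a view distance):

// Distance the view ray travels through water = distance to the terrain
// behind the surface minus distance to the water surface itself, fed into
// exponential fog.
float terrainDistance = LinearizeDepth(TerrainDepthMap.Sample(pointSampler, screenUV));
float surfaceDistance = distance(CameraPosition, worldPosition);
float waterThickness = max(terrainDistance - surfaceDistance, 0);
float fog = 1 - exp(-FogDensity * waterThickness);
float3 color = lerp(refraction, WaterColor.rgb, fog);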
That's what I figured, but I wasn't able to get it to work. It just considers everything to be out of the shadow.
float4 position = //bunch of code to reconstruct world space position from log depth
float4 decalPos = mul(position, DecalViewProjection);
decalPos.xyz /= decalPos.w;                                      // perspective divide
float2 decalTexCoord = decalPos.xy * float2(0.5f, -0.5f) + 0.5f; // NDC -> UV
float shadowDepth = ProjectorDepthMap.Sample(pointSampler, decalTexCoord);
if (shadowDepth < decalPos.z) // a nearer surface occludes this pixel: skip the decal
I'm trying to implement projective texturing but I'm having some trouble getting it to a usable state. Right now it works but it projects to infinity (or more specifically, to the far plane). I can't just pull back the far plane because it could result in the texture being cut off on steep surfaces, and wouldn't solve the problem of projecting through surfaces. I've tried to mimic a sort of spotlight shadow technique but wasn't able to get that to work since there are pretty much no tutorials on shadows for deferred shading pipelines. So, my question: How do you get a projective texture to stop at the first surface it hits?
Edit: I forgot to add tags. I'm using DX11 & SharpDX.
I'm trying to calculate a view/projection/bounding frustum for the six directions of a point light, and I'm having trouble with the views pointing along the Y axis. Our game uses a right-handed, Y-up system. For the other four directions I create the LookAt matrix using (0, 1, 0) as the up vector. Obviously that doesn't work when looking along the Y axis, so for those I use an up vector of (-1, 0, 0) for -Y and (1, 0, 0) for +Y. The view matrix seems to come out correctly (and the projection matrix always stays the same), but the bounding frustum is definitely wrong.
This is the code I'm using:
camera.Projection = Matrix.PerspectiveFovRH((float)Math.PI / 2, ShadowMapSize / (float)ShadowMapSize, 1, 5);

for (var i = 0; i < 6; i++)
{
    var renderTargetView = shadowMap.GetRenderTargetView((TextureCubeFace)i);
    var up = DetermineLightUp((TextureCubeFace)i);
    var forward = DirectionToVector((TextureCubeFace)i);

    camera.View = Matrix.LookAtRH(Position, Position + forward, up);
    camera.BoundingFrustum = new BoundingFrustum(camera.View * camera.Projection);
}
private static Vector3 DirectionToVector(TextureCubeFace direction)
{
    // ... switch returning the axis-aligned forward vector for each face ...
    throw new ArgumentOutOfRangeException("direction");
}

private static Vector3 DetermineLightUp(TextureCubeFace direction)
{
    if (direction == TextureCubeFace.PositiveY) return new Vector3(1, 0, 0);
    if (direction == TextureCubeFace.NegativeY) return new Vector3(-1, 0, 0);
    return new Vector3(0, 1, 0); // the four horizontal faces
}
Ah, so the shadows are apparently supposed to be added into the lighting-buffer output. Since I wasn't the one who set this up originally, I hadn't done much research into the compositing stage for shadows, so I just assumed it was normal to add them in during the final gbuffer pass. Thanks for the help, guys.
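For anyone who finds this later, the fix amounts to something like this in the lighting pass (a simplified sketch; SampleShadowFactor stands in for whatever shadow lookup you use):

// The shadow factor scales the light's contribution as it's written into the
// lighting buffer, rather than being blended in during the final gbuffer pass.
float shadow = SampleShadowFactor(worldPosition); // 0 = fully shadowed, 1 = lit
float3 lightOutput = (diffuse + specular) * LightColor.rgb * shadow;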