I may give that a try, but I'm not so sure it'll help. I've been using the effects system for all my shaders so far with no problems. When I get a chance I'll set up the particle system in a separate project and switch between DX11 and DX11.1 to see if that makes any difference. Thanks for trying though, unbird, I appreciate it.
Well, as far as I can tell, that is what I'm doing. The gbuffer value I'm reading in should be converted to projection-space and compared with the projection-space particle depth. I must have done something wrong though, because the particles look exactly the same as before; there's no fade-out near the intersection area.
I've been trying to implement soft particles as described in the NVIDIA paper. I'm having a problem with this part: "If we want to compare consistent depth values, the fetched value Zbuf needs to be transformed into projection space". I'm using a logarithmic depth buffer, so I can't use the formula they give. They also never explain what any of the variables in the equation are, or why the comparison needs to be done in projection space rather than normalized-device-coordinate space.
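Whichever space the comparison happens in, the fade itself is just a clamped, scaled difference of the two depths once they're consistent. A minimal sketch of that step in Python (the `fade_distance` parameter and variable names are illustrative, not from the NVIDIA paper):

```python
def soft_particle_fade(z_scene, z_particle, fade_distance):
    """0 where the particle touches the scene geometry, 1 at fade_distance or beyond.
    Both depths must already be in the same space for the subtraction to mean anything."""
    t = (z_scene - z_particle) / fade_distance
    return min(max(t, 0.0), 1.0)  # saturate()
```

The fade result then multiplies the particle's alpha, so sprites vanish smoothly as they approach intersecting geometry.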
I know how to reconstruct a world-space position from a log-depth value from a previous topic, so I tried several different ways to get it to work for the soft particles.
For reference, this is the equation for the reconstruction:
// Sample the stored logarithmic depth (0..1)
float depthVal = DepthMap.Sample(depthSampler, texCoord).r;
// Undo the log encoding to recover a linear distance (C = 0.001)
depthVal = (pow(0.001 * FarPlane + 1, depthVal) - 1) / 0.001;
// Rescale by Near * Far / (Far - Near); the 1 is the near-plane value
depthVal /= (1 * FarPlane) / (FarPlane - 1);
// Unproject: scale the screen-space XY by the depth, then apply the inverse matrices
float2 invProjPos = mul(input.ScreenPosition.xy * depthVal, InverseProjection);
float4 position = mul(float4(invProjPos, -depthVal, 1), InverseView);
position /= position.w;
So I figured that to get the projection-space value, I should use just the first three lines. Next, I figured the particle's Z value should be calculated like this (in the vertex shader):
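As a sanity check, the log-depth decode above can be inverted numerically. A quick Python sketch (the FarPlane value here is an assumption for illustration) pairs the decode from the first line of the reconstruction with the matching encode and confirms they round-trip:

```python
import math

C = 0.001           # log-depth constant used in the snippet above
FAR_PLANE = 1000.0  # assumed far-plane distance, for illustration only

def encode_log_depth(z):
    """The encode that matches the decode used in the reconstruction."""
    return math.log(C * z + 1) / math.log(C * FAR_PLANE + 1)

def decode_log_depth(d):
    """First line of the reconstruction: stored value back to a linear distance."""
    return (math.pow(C * FAR_PLANE + 1, d) - 1) / C
```

If the round-trip holds but the particles still don't fade, the mismatch is more likely in how the particle's own Z is computed than in the gbuffer decode.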
We're not using screen-space normals though; they're regular world-space normals. The problem is coming from the lighting equation making the side that faces away from the light black. Actually... in the process of writing this I had an idea: there's no reason I need to do the lighting equation using the normal after the normal-mapping is applied. I switched it to just use the flat plane's normal and now the black is gone. Thanks for the suggestions, guys.
OK, two more issues. The lighting equation is causing the back side of waves to turn black:
Also, this is something that's been around for a while: one side of the wave (I'm guessing the back) gets a strong reflection while the other side gets a strong refraction. It creates a kind of spotted/streaky effect near the viewer, where the water is mostly transparent with patches of strong blue (from the sky) mixed in. This seems pretty unnatural to me. Any suggestions?
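For what it's worth, the reflection/refraction split is usually driven by a Fresnel term that depends on the angle between the view direction and the surface normal; on a wavy surface that angle changes sharply from one side of a wave to the other, which can produce exactly this patchiness. A sketch of Schlick's approximation, a common choice for this term (the F0 value of 0.02 is a typical figure for water, assumed here):

```python
def fresnel_schlick(cos_theta, f0=0.02):
    """Schlick's approximation of reflectance vs. view angle.
    cos_theta = dot(normal, view_dir); f0 = reflectance at normal incidence."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```

At grazing angles (cos_theta near 0) reflectance approaches 1, while looking straight down it drops to F0, so softening the normal used for the Fresnel term (not the lighting) is one way to tame the per-wave flipping.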
And again with the water set to an orange color (in this case there are also some strong orange spots):
I'm working on tweaking our water shader to get a nicer look out of it, and there are a couple of issues I'm not sure how to solve. The main one is that the colorization, while nice during the day, makes the water look lit up at night, which is very unnatural:
All I'm doing is lerping between the sampled refraction color and a water color parameter. How can I make it so it isn't "creating light"?
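A constant water colour is effectively emissive: at night the refraction sample goes dark but the lerp target doesn't. One common fix (a sketch under that assumption; `light_intensity` would come from your sun/ambient term and is not something in the original shader) is to scale the water colour by the incoming light before blending:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def water_color_out(refraction, water_color, light_intensity, blend):
    """Scale the tint by scene lighting so it darkens along with the refraction at night."""
    lit_tint = water_color * light_intensity
    return lerp(refraction, lit_tint, blend)
```

With `light_intensity` at 0 the blend can only darken the refraction, never brighten it, which is the behaviour you'd expect from water at night.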
Another issue I'm having is with the specular: it works during the day but creates black spots at night. Running it through the shader debugger, I found it was returning QNaNs, but I can't understand why; I have an if-statement that should return 0 specular when the dot product is negative:
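One likely culprit (an assumption, since the shader itself isn't shown): HLSL's pow(x, y) is implemented as exp2(y * log2(x)), which is NaN for x < 0, and the compiler is free to evaluate both sides of a branch, so the NaN can be produced even when the if-statement would discard it. Clamping the dot product before the pow avoids the NaN regardless of branching. A sketch of the safe form:

```python
def safe_specular(n_dot_h, shininess):
    """Clamp before the pow: a negative base is where the NaN comes from."""
    return max(n_dot_h, 0.0) ** shininess
```

With the clamp in place, the if-statement becomes unnecessary, since a clamped value of 0 already yields zero specular.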
One last issue I'm working on is making the water-colour strength vary with the depth of the water. I haven't been able to find any documentation online about this, so I've just been making something up and playing around with it to see what works well. What I've done so far: sample the terrain depth buffer along the view ray, subtract the distance from the camera to the water surface, and feed the difference into an exponential fog equation. It works pretty well, but it looks odd that shallow water becomes very foggy when viewed at a grazing angle. Any advice for this?
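The grazing-angle fogginess falls out of the math: the along-ray distance through shallow water grows roughly as 1/cos of the view angle, so the exponential term saturates even when the water is only inches deep. A sketch of the setup as described (the `density` constant is a made-up tuning value):

```python
import math

def water_fog(scene_dist, surface_dist, density=0.1):
    """Exponential fog over the underwater portion of the view ray."""
    water_path = max(scene_dist - surface_dist, 0.0)
    return 1.0 - math.exp(-density * water_path)
```

Blending toward the vertical water depth, or clamping the path length by it, is one hedge against the grazing-angle blow-up, though it trades some physical plausibility for a steadier look.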
That's what I figured, but I wasn't able to get it to work. It just considers everything to be out of shadow.
float4 position = // bunch of code to reconstruct the world-space position from log depth
// Into the projector's clip space
float4 decalPos = mul(position, DecalViewProjection);
// Perspective divide, then NDC -> texture coordinates (assumed mapping; D3D flips Y)
float2 decalTexCoord = decalPos.xy / decalPos.w * float2(0.5, -0.5) + 0.5;
float shadowDepth = ProjectorDepthMap.Sample(pointSampler, decalTexCoord.xy);
// A nearer surface occupies this texel: this fragment is behind the first hit
if(shadowDepth < decalPos.z / decalPos.w)
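The comparison itself can be checked with plain numbers. A minimal sketch of the projective depth test in Python (the small bias is an assumption, added to avoid self-shadow acne from depth quantization):

```python
def projector_covers(stored_depth, fragment_ndc_z, bias=0.001):
    """True if this fragment is the first surface the projector's ray hits,
    i.e. the projected texture should be applied here."""
    return fragment_ndc_z <= stored_depth + bias
```

If everything comes back "out of shadow", the usual suspects are a depth map that was never bound or cleared to 1.0, or comparing a [0,1] stored depth against an unconverted clip-space z.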
I'm trying to implement projective texturing but I'm having some trouble getting it to a usable state. Right now it works but it projects to infinity (or more specifically, to the far plane). I can't just pull back the far plane because it could result in the texture being cut off on steep surfaces, and wouldn't solve the problem of projecting through surfaces. I've tried to mimic a sort of spotlight shadow technique but wasn't able to get that to work since there are pretty much no tutorials on shadows for deferred shading pipelines. So, my question: How do you get a projective texture to stop at the first surface it hits?
Edit: I forgot to add tags. I'm using DX11 & SharpDX.
I'm trying to calculate a view/projection/bounding frustum for the 6 directions of a point light and I'm having trouble with the views pointing along the Y axis. Our game uses a right-handed, Y-up system. For the other 4 directions I create the LookAt matrix using (0, 1, 0) as the up vector. Obviously that doesn't work when looking along the Y axis so for those I use an up vector of (-1, 0, 0) for -Y and (1, 0, 0) for +Y. The view matrix seems to come out correctly (and the projection matrix always stays the same), but the bounding frustum is definitely wrong.
This is the code I'm using:
// 90-degree FOV, square faces (the aspect ratio expression is 1), near = 1, far = 5
camera.Projection = Matrix.PerspectiveFovRH((float)Math.PI / 2, ShadowMapSize / (float)ShadowMapSize, 1, 5);
for(var i = 0; i < 6; i++)
{
    var renderTargetView = shadowMap.GetRenderTargetView((TextureCubeFace)i);
    var up = DetermineLightUp((TextureCubeFace)i);
    var forward = DirectionToVector((TextureCubeFace)i);
    camera.View = Matrix.LookAtRH(Position, Position + forward, up);
    camera.BoundingFrustum = new BoundingFrustum(camera.View * camera.Projection);
    // ... render the scene into renderTargetView ...
}
private static Vector3 DirectionToVector(TextureCubeFace direction)
throw new ArgumentOutOfRangeException("direction");
private static Vector3 DetermineLightUp(TextureCubeFace direction)
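The face/up pairs described above can be sanity-checked numerically: a LookAt basis is only valid when the up vector is not parallel to the forward vector, and here the two should be exactly perpendicular for every face. A quick Python sketch using the post's choices (the standard cube-face directions for the other four axes are assumptions):

```python
# (forward, up) per cube face: (0,1,0) up for the four horizontal faces,
# (1,0,0) for +Y and (-1,0,0) for -Y, as described in the post.
faces = {
    "+X": ((1, 0, 0),  (0, 1, 0)),
    "-X": ((-1, 0, 0), (0, 1, 0)),
    "+Y": ((0, 1, 0),  (1, 0, 0)),
    "-Y": ((0, -1, 0), (-1, 0, 0)),
    "+Z": ((0, 0, 1),  (0, 1, 0)),
    "-Z": ((0, 0, -1), (0, 1, 0)),
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def check_basis(forward, up):
    """Valid when forward and up are non-parallel; here they are orthogonal."""
    return dot(forward, up) == 0
```

If all six bases check out, the remaining suspect is the frustum construction itself; SharpDX's BoundingFrustum expects the combined view x projection matrix in that order, which is what the code passes, so it may be worth dumping the six frustum corner sets and comparing the Y faces against the X/Z ones.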