# My DX9 HLSL water reflection is wrong

## Recommended Posts

Hi everyone.
In a desperate attempt to get this thing working, I am trying to get help here (since no one at StackExchange was able to help). My task is simple, or at least I think it should be: I want to add screen space reflections to my water shader. It is a technique that has been around longer than my grandma, yet it might as well be secret NSA technology, considering how poor the documentation on it is. I have a working shader that creates nice reflections on a planar water surface as long as the camera is very close to the surface, but the reflection starts to fail as soon as the camera moves anywhere else. I don't know exactly what is wrong and I have no idea how to fix it. My guess is that either the GetUV function returns screen coordinates instead of coordinates that fit the water texture (my shader is applied to the water surface; it is not a post-process effect), or the reflection direction is wrong for whatever reason.

Here are screenshots illustrating how it looks and what is wrong:
https://imgur.com/a/5kIGikU

Here is the full shader code. The only external variable is "screenInput", which is simply the whole screen as rendered by the game, used as the reflection source.

//-- Include some common stuff
#include "mta-helper.fx"

texture screenInput;
texture gDepthBuffer : DEPTHBUFFER;

///////////////////
// SAMPLE STATES //
///////////////////

sampler2D screenSampler = sampler_state
{
Texture = <screenInput>;
MinFilter = Linear;
MagFilter = Linear;
MipFilter = Linear;
};

sampler SamplerDepth = sampler_state
{
Texture = (gDepthBuffer);
MinFilter = Point;
MagFilter = Point;
MipFilter = None;
};

// code from https://habrahabr.ru/post/244367/
float3 GetUV(float3 position)
{
float4 pVP = mul(float4(position, 1), gViewProjection);
pVP.xy = float2(0.5f, 0.5f) + float2(0.5f, -0.5f) * pVP.xy / pVP.w;
return float3(pVP.xy, pVP.z / pVP.w);
}

struct VertexInputType
{
float4 position : POSITION;
float3 normal : NORMAL0;
float2 textureCoords : TEXCOORD0;
};

struct PixelInputType
{
float4 position : POSITION;
float2 textureCoords : TEXCOORD0;
float4 reflectionPosition : TEXCOORD1;
float Depth : TEXCOORD2;
float3 worldPosition : TEXCOORD3;
float3 worldNormal : TEXCOORD4;
float4 vposition : TEXCOORD5;
float3 Normal : TEXCOORD6;
};

////////////////////////////////////////////////////////////////////////////////
// Vertex shader (signature reconstructed from the structs; the original post lost it)
////////////////////////////////////////////////////////////////////////////////
PixelInputType VertexShaderFunction(VertexInputType input)
{
PixelInputType output;

// Calculate the position of the vertex against the world, view, and projection matrices.
output.position = MTACalcScreenPosition(input.position);
output.worldPosition = MTACalcWorldPosition(input.position);
output.worldNormal = MTACalcWorldNormal(input.normal);

// Store the texture coordinates for the pixel shader.
output.textureCoords = input.textureCoords;

// compute the eye vector
float4 vertexPosition = mul(input.position, gWorld);

float4 vPos = mul(vertexPosition, gView);
float4 pPos = mul(vPos, gProjection);

output.reflectionPosition.x = 0.5 * (pPos.w + pPos.x);
output.reflectionPosition.y = 0.5 * (pPos.w - pPos.y);
output.reflectionPosition.z = pPos.w;
output.reflectionPosition.w = vPos.z / vPos.w;

output.Depth = output.position.z;
output.vposition = mul(input.position, gWorldViewProjection);

output.Normal = input.normal;
return output;
}

////////////////////////////////////////////////////////////////////////////////
// Pixel shader (signature reconstructed from the structs; the original post lost it)
////////////////////////////////////////////////////////////////////////////////
float4 PixelShaderFunction(PixelInputType input) : COLOR0
{
float2 textureCoords = input.reflectionPosition.xy / input.reflectionPosition.z;

// per-pixel view direction
float3 viewDir = normalize(input.worldPosition - gCameraPosition);

float3 reflectDir = normalize(reflect(viewDir, float3(0, 0, -1)));

// cast rays
float L = 0.01 * input.reflectionPosition.w;
float3 currentRay;
float3 nuv;
for(int i = 0; i < 10; i++)
{
currentRay = input.worldPosition + reflectDir * L;
nuv = GetUV(currentRay);
L = distance(currentRay, gCameraPosition);
}

float4 reflectionColor = tex2D(screenSampler, nuv.xy);

return saturate(reflectionColor);
}

////////////////////////////////////////////////////////////////////////////////
// Technique
////////////////////////////////////////////////////////////////////////////////
technique WaterTechnique
{
pass pass0
{
ZEnable = true;
ZWriteEnable = true;
ZFunc = 2;
}
}

// Fallback
technique fallback
{
pass P0
{
// Just draw normally
}
}
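
Since GetUV is one of the prime suspects, it is worth checking in isolation on the CPU. Below is a minimal Python/numpy sketch of the same math, assuming a D3DX-style row-vector, left-handed projection; the `perspective_lh` helper and all test values are illustrative, not taken from the game:

```python
import numpy as np

def perspective_lh(fov_y, aspect, zn, zf):
    """Stand-in for gProjection: D3DX-style row-vector, left-handed perspective."""
    h = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [h / aspect, 0.0, 0.0, 0.0],
        [0.0, h, 0.0, 0.0],
        [0.0, 0.0, zf / (zf - zn), 1.0],
        [0.0, 0.0, -zn * zf / (zf - zn), 0.0],
    ])

def get_uv(position, view_proj):
    """CPU mirror of the shader's GetUV: world position -> (u, v, depth)."""
    p = np.append(position, 1.0) @ view_proj          # mul(float4(position, 1), gViewProjection)
    uv = 0.5 + np.array([0.5, -0.5]) * p[:2] / p[3]   # NDC -> [0, 1], V flipped
    return np.array([uv[0], uv[1], p[2] / p[3]])

# With an identity view matrix, a point straight ahead must land in the screen center.
vp = perspective_lh(np.pi / 3, 16.0 / 9.0, 1.0, 100.0)
center = get_uv(np.array([0.0, 0.0, 5.0]), vp)   # u = v = 0.5, depth between 0 and 1
```

If this mirror of GetUV produces the expected numbers, the failure when the camera moves would point at the ray marching or the reflection direction instead.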

Maybe some genius here knows what to do, and maybe there is a better way than casting rays, because I have seen shaders that do very cryptic things but still get a nice result.

Bumpedy bump? 😅

##### Share on other sites

This is not a "please fix my problem for free" site. You can discuss problems and possible solutions, but everybody solves his or her own problems. Also, please give it several days; people here are busy.

In general, if something doesn't work and you don't know why, it is too complicated at this point in time. Simplify it.

In this case, the likely candidate is the reflection position calculation. Invent a simple scene and decide what numbers you expect from it. Run the computation on the CPU, e.g. using glm. Verify each result against your expectations. Once you have verified that the numbers are computed correctly, move to the GPU.
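
The same CPU verification works without glm; a few lines of Python are enough to pin down what reflect() should return for the water plane. A sketch, assuming a Z-up world, which matches the (0, 0, ±1) plane normal used in the shader:

```python
import numpy as np

def reflect(i, n):
    """HLSL reflect(): i - 2 * dot(n, i) * n, with i pointing from the eye toward the surface."""
    return i - 2.0 * np.dot(n, i) * n

n = np.array([0.0, 0.0, 1.0])                         # up-facing water normal
view_dir = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)  # looking down at 45 degrees
r = reflect(view_dir, n)                              # bounces back up: (1/sqrt(2), 0, 1/sqrt(2))
```

Note that reflect(i, n) == reflect(i, -n) for any n, because the two sign flips cancel, so a (0, 0, -1) normal gives the same result as (0, 0, 1); a flipped plane normal by itself would not be the bug.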

##### Share on other sites

Isn't it more likely that reflectDir is wrong, because I am reflecting viewDir with a simple (0, 0, -1) normal vector?

What makes you think reflectionPosition is wrong?

##### Share on other sites

Sorry if this looks like "let me google that for you", but you wrote you couldn't find information about it.

Here are two tutorials that could help you, both use a ray tracing approach. Maybe it helps figuring out what's wrong.

Edited by Green_Baron

##### Share on other sites

It doesn't look like a LMGTFY, any help is welcome. The second link uses DX11, which is completely different from DX9 HLSL, so I will try to look into the first link.

EDIT: I don't know what language the code in the first link is written in, but it isn't HLSL either. It is already difficult enough to understand what is going on in an HLSL file; translating another pile of unknown code on top of that is practically impossible.

Edited by Einheit 101

##### Share on other sites
19 hours ago, Einheit 101 said:

Isn't it more likely that reflectDir is wrong, because I am reflecting viewDir with a simple (0, 0, -1) normal vector?

What makes you think reflectionPosition is wrong?

You think it is wrong, which is the only thing that counts.

I suggested that you debug the problem yourself, which is the only way to make progress once a problem goes beyond a certain level of complexity. Other people are not going to invest a lot of time solving your problems.

Waiting for a passer-by or a website to hand you the answer costs a lot of time, may never happen, and doesn't prepare you for the next problem that will pop up.

Edited by Alberth

##### Share on other sites

I fixed the reflection coordinate issues, but now I have to combat typical SSR artifacts. The screen borders have lost information, and here ( http://remi-genin.fr/blog/screen-space-plane-indexed-reflection-in-ghost-recon-wildlands/ ) they explain how to fill these gaps (they stretch the screen), but I can't simply add this code to my shader because it uses values that are never initialized or explained.

On the other hand, there are some flickering artifacts that may result from depth-related issues; it seems like the game cannot decide whether it wants to draw the reflection or the water color. It doesn't look healthy. No idea why. But the goal is getting closer.
https://i.imgur.com/jHTzNdW.png

//-- Include some common stuff
#include "mta-helper.fx"

float4 waterColor = float4(0.2, 0.4, 0.9, 1);
float2 sPixelSize = float2(0.00125,0.00166);
texture screenInput;
texture gDepthBuffer : DEPTHBUFFER;

///////////////////
// SAMPLE STATES //
///////////////////

sampler2D screenSampler = sampler_state
{
Texture = <screenInput>;
MinFilter = Linear;
MagFilter = Linear;
MipFilter = Linear;
};

sampler SamplerDepth = sampler_state
{
Texture = (gDepthBuffer);
MinFilter = Point;
MagFilter = Point;
MipFilter = None;
};

// code from https://habrahabr.ru/post/244367/
float3 GetUV(float3 position)
{
float4 pVP = mul(float4(position, 1), gViewProjection);
pVP.xy = float2(0.5f, 0.5f) + float2(0.5f, -0.5f) * pVP.xy / pVP.w;
return float3(pVP.xy, pVP.z / pVP.w);
}

//--------------------------------------------------------------------------------------
//-- Get value from the depth buffer
//-- Uses define set at compile time to handle RAWZ special case (which will use up a few more slots)
//--------------------------------------------------------------------------------------
float FetchDepthBufferValue( float2 uv )
{
float4 texel = tex2D(SamplerDepth, uv);
#if IS_DEPTHBUFFER_RAWZ
float3 rawval = floor(255.0 * texel.arg + 0.5);
float3 valueScaler = float3(0.996093809371817670572857294849, 0.0038909914428586627756752238080039, 1.5199185323666651467481343000015e-5);
return dot(rawval, valueScaler / 255.0);
#else
return texel.r;
#endif
}

//calculate pixel position
float3 GetPosition(float2 UV, float depth)
{
float4 position = 1.0f;

position.x = UV.x * 2.0f - 1.0f;
position.y = -(UV.y * 2.0f - 1.0f);

position.z = depth;

position = mul(position, inverseMatrix(gViewProjection));

position /= position.w;

return position.xyz;
}

//--------------------------------------------------------------------------------------
//-- Use the last scene projection matrix to linearize the depth value a bit more
//--------------------------------------------------------------------------------------
float Linearize(float posZ)
{
return gProjection[3][2] / (posZ - gProjection[2][2]);
}

struct VertexInputType
{
float4 position : POSITION;
float3 normal : NORMAL0;
float2 textureCoords : TEXCOORD0;
};

struct PixelInputType
{
float4 position : POSITION;
float2 textureCoords : TEXCOORD0;
float Depth : TEXCOORD1;
float4 worldPosition : TEXCOORD2;
float3 worldNormal : TEXCOORD3;
float4 vposition : TEXCOORD4;
float3 Normal : TEXCOORD5;
};

////////////////////////////////////////////////////////////////////////////////
// Vertex shader (signature reconstructed from the structs; the original post lost it)
////////////////////////////////////////////////////////////////////////////////
PixelInputType VertexShaderFunction(VertexInputType input)
{
PixelInputType output;

// Calculate the position of the vertex against the world, view, and projection matrices.
output.worldPosition = MTACalcWorldPosition(float4(input.position.xyz, 1));
float4 viewPos = mul(output.worldPosition, gView);
output.worldPosition.w = viewPos.z / viewPos.w;
output.position = mul(viewPos, gProjection);

MTAFixUpNormal(input.normal);
output.worldNormal = MTACalcWorldNormal(input.normal);

// Store the texture coordinates for the pixel shader.
output.textureCoords = input.textureCoords;

output.vposition = mul(input.position, gWorldViewProjection);

output.Depth = output.vposition.z / output.vposition.w;

output.Normal = mul(input.normal, (float3x3)gWorld);
return output;
}

////////////////////////////////////////////////////////////////////////////////
// Pixel shader (signature reconstructed from the structs; the original post lost it)
////////////////////////////////////////////////////////////////////////////////
float4 PixelShaderFunction(PixelInputType input) : COLOR0
{
float2 txcoord = (input.vposition.xy / input.vposition.w) * float2(0.5, -0.5) + 0.5;
txcoord += 0.5 * sPixelSize;
float cameraDepth = FetchDepthBufferValue(txcoord); // Isn't there an easier, more reliable way to get depth?

// per-pixel view direction (worldPosition.w stores view-space depth, so use .xyz here)
float3 viewDir = normalize(input.worldPosition.xyz - gCameraPosition);

// reflection direction
float3 reflectDir = normalize(reflect(viewDir, input.Normal));

// cast rays
float3 currentRay = 0;
float3 nuv = 0;
float L = 0.01 * input.worldPosition.w; // length of the starting ray
float d;
float3 newPosition;
for(int i = 0; i < 10; i++) // I think 10 iterations are enough
{
currentRay = input.worldPosition.xyz + reflectDir * L;
nuv = GetUV(currentRay);
d = FetchDepthBufferValue(nuv.xy); // I guess this function is not 100% correct, given all the depth-related issues

newPosition = GetPosition(nuv.xy, d); // What is the difference between this and currentRay...?
L = length(input.worldPosition.xyz - newPosition); // dynamically adjust the ray length
}

int err = 0;
if ((d > 0.99999) || (input.Depth > 0.99999)) err = 1; // Prevent reflecting the background and objects that are too far away, if you want
if (input.Depth > d) err = 1; // Prevent reflecting objects that are actually in front of the water rather than behind it
//if ((nuv.x > 1) || (nuv.x < 0)) err = 1; // Discard looped reflections at the screen borders; delete this once the gaps can be stretched
//You can leave this disabled and the resulting looped gaps won't look too bad once refraction is added to the reflection, but it would be better to find a way to stretch nuv.x

// Use a fresnel term to make reflections less intense when looking straight down at them; this also covers some obvious artifacts
float fresnel = saturate(1.5 * dot(viewDir, -input.Normal));

//TODO -----> fade reflections out softly when approaching a depth of 0.99999 (or whatever threshold you want)
//TODO -----> implement the edge stretching mentioned above to fill reflection gaps at the screen sides
//nuv.x *= 1 + screenStretchValue;

float4 reflectionColor = tex2D(screenSampler, nuv.xy);
reflectionColor = lerp(reflectionColor, waterColor, fresnel);
reflectionColor = lerp(reflectionColor, waterColor, err);
return saturate(reflectionColor);
}

////////////////////////////////////////////////////////////////////////////////
// Technique
////////////////////////////////////////////////////////////////////////////////
technique WaterTechnique
{
pass pass0
{
ZWriteEnable = true; // Required to keep the shader compatible with depth-related effects (dynamic lighting, ...)
ZFunc = 2; // 2 fixes depth issues with far-away reflected objects but creates issues with close ones; 4 does the opposite. Neither is really good.

// I guess the reflection flickering has to be fixed inside the pixel shader, but don't ask me how

}
}

// Fallback
technique fallback
{
pass P0
{
// Just draw normally
}
}
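
GetPosition and Linearize can both be verified numerically on the CPU: GetPosition is GetUV run backwards, so projecting a point and reconstructing it from the resulting UV and depth should round-trip, and Linearize should recover the view-space Z behind a given depth-buffer value. A Python/numpy sketch, assuming a D3DX-style row-vector, left-handed projection (the `perspective_lh` helper and all parameters are made up for illustration):

```python
import numpy as np

def perspective_lh(fov_y, aspect, zn, zf):
    """Stand-in for gProjection: D3DX-style row-vector, left-handed perspective."""
    h = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [h / aspect, 0.0, 0.0, 0.0],
        [0.0, h, 0.0, 0.0],
        [0.0, 0.0, zf / (zf - zn), 1.0],
        [0.0, 0.0, -zn * zf / (zf - zn), 0.0],
    ])

def get_uv(position, view_proj):
    """Mirror of the shader's GetUV: world position -> (u, v, depth)."""
    p = np.append(position, 1.0) @ view_proj
    uv = 0.5 + np.array([0.5, -0.5]) * p[:2] / p[3]
    return np.array([uv[0], uv[1], p[2] / p[3]])

def get_position(uv, depth, inv_view_proj):
    """Mirror of the shader's GetPosition: (u, v, depth) -> world position."""
    ndc = np.array([uv[0] * 2.0 - 1.0, -(uv[1] * 2.0 - 1.0), depth, 1.0])
    p = ndc @ inv_view_proj
    return p[:3] / p[3]

def linearize(d, proj):
    """Mirror of the shader's Linearize(): depth-buffer value -> view-space Z."""
    return proj[3][2] / (d - proj[2][2])

proj = perspective_lh(np.pi / 3, 16.0 / 9.0, 1.0, 100.0)  # view = identity for simplicity

# Round trip: projecting and then unprojecting must return the original point.
world = np.array([3.0, -2.0, 10.0])
c = get_uv(world, proj)
back = get_position(c[:2], c[2], np.linalg.inv(proj))

# Linearize must recover the view-space Z that produced a depth-buffer value.
z_view = 25.0
d = (z_view * proj[2][2] + proj[3][2]) / z_view  # post-projection z/w at that Z
```

If the round trip holds on the CPU but not in the shader, the discrepancy is in the inputs (the matrices or the depth fetch), not in the reconstruction math itself.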

##### Share on other sites
On 11/5/2019 at 6:34 PM, Einheit 101 said:

I fixed the reflection coordinate issues, but now I have to combat typical SSR artifacts. The screen borders have lost information, and here ( http://remi-genin.fr/blog/screen-space-plane-indexed-reflection-in-ghost-recon-wildlands/ ) they explain how to fill these gaps (they stretch the screen), but I can't simply add this code to my shader because it uses values that are never initialized or explained.

If you're just playing around as a hobby and want to test things out, don't expect other people to do the hard work for you. You're right that you can't simply add the code, as most shaders are designed for the system they are used in. You couldn't take the shaders from my engine without knowing how they are configured or parsed, what engine-specific information gets injected, or how the final programs are actually run. The article you posted does a decent job of explaining what the code is doing, but there are other articles out there that give a bit more insight. I suggest re-reading it, plus other articles, to understand the principle and how to convert it to code yourself, instead of just trying to insert someone else's code into your own.

On 11/5/2019 at 6:34 PM, Einheit 101 said:

On the other hand, there are some flickering artifacts that may result from depth-related issues; it seems like the game cannot decide whether it wants to draw the reflection or the water color. It doesn't look healthy. No idea why. But the goal is getting closer.
https://i.imgur.com/jHTzNdW.png

This looks like your ray steps are too large. The step size determines how often you sample, and if the steps are larger than the pixels of the final image, you get gaps. Note that this may or may not be the issue, but it looks like you're hitting the same problem with the trees. The solid walls work better because it doesn't matter where the ray hits, the target area always has data, while the trees and other objects are more sparse, so the rays miss them and cause false negatives.
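
The sampling argument can be illustrated with a toy 1D march; this is plain Python with made-up numbers, not the shader's actual geometry. A thin occluder is stepped over whenever the step size exceeds its width:

```python
def hits_occluder(x):
    """Toy 1D 'scene': a thin occluder (a tree trunk, say) covering x in [51.0, 52.5)."""
    return 51.0 <= x < 52.5

def march(step, limit=100.0):
    """Fixed-step ray march; returns True if the occluder is ever sampled."""
    x = 0.0
    while x < limit:
        if hits_occluder(x):
            return True
        x += step
    return False
```

march(1.0) finds the occluder, while march(10.0) steps right over it, which is the 1D analogue of rays missing thin trees while solid walls are always hit.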
