# Precomputing a normal map results in weird normals when sampled

## Recommended Posts

I am doing terrain tessellation and I have two ways of approaching normals:

1) Compute the normal in the domain shader using a Sobel filter

2) Precompute normals in a compute shader with the same Sobel filter, then sample them in the domain shader. The texture format is R10G10B10A2_UNORM.

These are the normals (in view space) from 1), which look correct:

These are the normals when sampled from the precomputed normal map:

This is what the computed normal map looks like:

This is the Sobel filter I use in the compute shader:

```hlsl
float3 SobelFilter( int3 texCoord )
{
    float h00 = gHeightmap.Load( texCoord, int2( -1, -1 ) ).r;
    float h10 = gHeightmap.Load( texCoord, int2( 0, -1 ) ).r;
    float h20 = gHeightmap.Load( texCoord, int2( 1, -1 ) ).r;

    float h01 = gHeightmap.Load( texCoord, int2( -1, 0 ) ).r;
    float h21 = gHeightmap.Load( texCoord, int2( 1, 0 ) ).r;

    float h02 = gHeightmap.Load( texCoord, int2( -1, 1 ) ).r;
    float h12 = gHeightmap.Load( texCoord, int2( 0, 1 ) ).r;
    float h22 = gHeightmap.Load( texCoord, int2( 1, 1 ) ).r;

    float Gx = h00 - h20 + 2.0f * h01 - 2.0f * h21 + h02 - h22;
    float Gy = h00 + 2.0f * h10 + h20 - h02 - 2.0f * h12 - h22;
    // generate missing Z
    float Gz = 0.01f * sqrt( max( 0.0f, 1.0f - Gx * Gx - Gy * Gy ) );

    return normalize( float3( 2.0f * Gx, Gz, 2.0f * Gy ) );
}

[numthreads(TERRAIN_NORMAL_THREADS_AXIS, TERRAIN_NORMAL_THREADS_AXIS, 1)]
void cs_main(uint3 groupID : SV_GroupID, uint3 dispatchTID : SV_DispatchThreadID, uint3 groupTID : SV_GroupThreadID, uint groupIndex : SV_GroupIndex)
{
    float3 normal = SobelFilter( int3( dispatchTID.xy, 0 ) );
    normal += 1.0f;
    normal *= 0.5f;

    gNormalTexture[ dispatchTID.xy ] = normal;
}
```
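
Since the compute shader above packs the [-1, 1] normal into [0, 1] before writing, whoever samples the map has to invert both steps, and on all three channels. A minimal decode sketch (the helper name is mine, not from the original code):

```hlsl
// Invert the encode from cs_main: packed = (n + 1) * 0.5  =>  n = packed * 2 - 1.
// All three channels (.rgb / .xyz) must be read back, not a single one.
float3 DecodeNormal( float3 packed )
{
    return normalize( packed * 2.0f - 1.0f );
}
```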

The snippet in the domain shader that samples the normal map:

```hlsl
const int2 offset = 0;
const int mipmap = 0;
ret.mNormal = gNormalMap.SampleLevel( gLinearSampler, midPointTexcoord, mipmap, offset ).r;
ret.mNormal *= 2.0;
ret.mNormal -= 1.0;
ret.mNormal = normalize( ret.mNormal );
ret.mNormal.y = -ret.mNormal.y;

ret.mNormal = mul( ( float3x3 )gFrameView, ( float3 )ret.mNormal );
ret.mNormal = normalize( ret.mNormal );
```

-----------------------------------------------

Now, if I compute the normals directly in the domain shader instead, the Sobel filter uses a different sampling method:

```hlsl
float3 SobelFilter( float2 uv )
{
    const int2 offset = 0;
    const int mipmap = 0;

    float h00 = gHeightmap.SampleLevel( gPointSampler, uv, mipmap, int2( -1, -1 ) ).r;
    float h10 = gHeightmap.SampleLevel( gPointSampler, uv, mipmap, int2( 0, -1 ) ).r;
    float h20 = gHeightmap.SampleLevel( gPointSampler, uv, mipmap, int2( 1, -1 ) ).r;

    float h01 = gHeightmap.SampleLevel( gPointSampler, uv, mipmap, int2( -1, 0 ) ).r;
    float h21 = gHeightmap.SampleLevel( gPointSampler, uv, mipmap, int2( 1, 0 ) ).r;

    float h02 = gHeightmap.SampleLevel( gPointSampler, uv, mipmap, int2( -1, 1 ) ).r;
    float h12 = gHeightmap.SampleLevel( gPointSampler, uv, mipmap, int2( 0, 1 ) ).r;
    float h22 = gHeightmap.SampleLevel( gPointSampler, uv, mipmap, int2( 1, 1 ) ).r;

    float Gx = h00 - h20 + 2.0f * h01 - 2.0f * h21 + h02 - h22;
    float Gy = h00 + 2.0f * h10 + h20 - h02 - 2.0f * h12 - h22;
    // generate missing Z
    float Gz = 0.01f * sqrt( max( 0.0f, 1.0f - Gx * Gx - Gy * Gy ) );

    return normalize( float3( 2.0f * Gx, Gz, 2.0f * Gy ) );
}
```

And then just computing it in the domain shader:

```hlsl
ret.mNormal = SobelFilter( midPointTexcoord );

ret.mNormal = mul( ( float3x3 )gFrameView, ( float3 )ret.mNormal );
ret.mNormal = normalize( ret.mNormal );
```

I am sure there is a simple answer to this and I am missing something... but what? Whether I sample a precomputed value or compute it in the shader, shouldn't the result be the same?

##### Share on other sites
Posted (edited)

Can't tell from the pictures or code. First I suspected mipmaps, but you're explicitly using level 0.

The only hint I can give is to further narrow down where the divergence happens. A GPU debugger might help; I recommend RenderDoc.

PS: Wait, can't tell for sure, but you might have some mixup with texcoords vs. pixel coords. Try outputting these; maybe you'll see something.
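
The "output the coords" suggestion can be sketched as a one-liner in the pixel shader (my own illustration, not code from the thread; `input.mTexcoord` assumes the domain shader's `mTexcoord` output is passed through):

```hlsl
// Visualize the interpolated texcoords: red = u, green = v.
// A clean gradient spanning [0,1] across the terrain means the coords are sane.
return float4( input.mTexcoord, 0.0f, 1.0f );
```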

Edited by unbird

##### Share on other sites
Posted (edited)
> 1 hour ago, unbird said:
>
> Can't tell from the pictures or code. First I suspected mipmaps, but you're explicitly using level 0.
>
> The only hint I can give is to further narrow down where the divergence happens. A GPU debugger might help; I recommend RenderDoc.
>
> PS: Wait, can't tell for sure, but you might have some mixup with texcoords vs. pixel coords. Try outputting these; maybe you'll see something.

```hlsl
[domain("quad")]
DomainOut ds_main(PatchTess patchTess, float2 uv : SV_DomainLocation, const OutputPatch<HullOut, 4> quad)
{
    DomainOut ret;

    float2 topMidpointWorld = lerp( quad[ 0 ].mWorldPosition.xz, quad[ 1 ].mWorldPosition.xz, uv.x );
    float2 bottomMidpointWorld = lerp( quad[ 3 ].mWorldPosition.xz, quad[ 2 ].mWorldPosition.xz, uv.x );
    float2 midPointWorld = lerp( topMidpointWorld, bottomMidpointWorld, uv.y );

    float2 topMidpointTexcoord = lerp( quad[ 0 ].mTexcoord, quad[ 1 ].mTexcoord, uv.x );
    float2 bottomMidpointTexcoord = lerp( quad[ 3 ].mTexcoord, quad[ 2 ].mTexcoord, uv.x );
    float2 midPointTexcoord = lerp( topMidpointTexcoord, bottomMidpointTexcoord, uv.y );

    const int2 offset = 0;
    const int mipmap = 0;
    ret.mNormal = gNormalMap.SampleLevel( gLinearSampler, midPointTexcoord, mipmap, offset ).r;
    ret.mNormal *= 2.0;
    ret.mNormal -= 1.0;
    ret.mNormal = normalize( ret.mNormal );
    ret.mNormal.y = -ret.mNormal.y;

    ret.mNormal = mul( ( float3x3 )gFrameView, ( float3 )ret.mNormal );
    ret.mNormal = normalize( ret.mNormal );

    float y = quad[ 0 ].mWorldPosition.y + ( SampleHeightmap( midPointTexcoord ) * gHeightModifier );

    ret.mPosition = float4( midPointWorld.x, y, midPointWorld.y, 1.0 );
    ret.mPosition = mul( gFrameViewProj, ret.mPosition );

    ret.mTexcoord = midPointTexcoord;

    return ret;
}
```

This is the full domain shader. If I output the texcoord to an output texture in a pixel shader, I see it does go from [0,1] across the whole terrain. The normal map is the same size as the heightmap. I do use RenderDoc (it's awesome) for debugging stuff like this 🙂

The vertex shader that computes the texture coordinates and vertex positions looks like this:

```hlsl
VertexOut vs_main(VertexIn input)
{
    VertexOut ret;

    const uint transformIndex = gTransformOffset + input.mInstanceID;
    // silly that we have to transpose this...
    const float4x4 worldTransform = transpose( gWorldTransforms.Load( transformIndex ) );

    ret.mWorldPosition = mul( worldTransform, float4( input.mPosition, 1 ) ).xyz;

    ret.mTexcoord = ( ret.mWorldPosition.xz - gWorldMin ) / ( gWorldMax - gWorldMin );
    ret.mTexcoord = clamp( ret.mTexcoord, 0.0f, 1.0f );

    return ret;
}
```

Edited by KaiserJohan

##### Share on other sites

Looks fine to me. So much for my guesswork.

Other than debugging the shader and/or further instrumenting it, I have no idea, sorry.

My next suspect would be the transformation, i.e. gFrameView.

##### Share on other sites

The transform is the same whether I compute the normals or sample them 🤔

Is there anything that could happen under the hood when sampling? The texture is bound correctly (as seen in RenderDoc). Is the sampler object the problem?

##### Share on other sites
```hlsl
ret.mNormal = gNormalMap.SampleLevel( gLinearSampler, midPointTexcoord, mipmap, offset ).r;
```

Why only .r? Is it a float3, and is gNormalMap R10G10B10A2_UNORM?

##### Share on other sites
> 3 hours ago, TeaTreeTim said:
>
> `ret.mNormal = gNormalMap.SampleLevel( gLinearSampler, midPointTexcoord, mipmap, offset ).r;`
>
> Why only .r? Is it a float3, and is gNormalMap R10G10B10A2_UNORM?

That's it! I was sampling only the red channel by mistake. Good catch; it works now as expected. Big thanks! 🙂
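
For completeness, the corrected domain-shader decode reads all three channels before remapping (a sketch based on the snippet earlier in the thread, not the poster's final code):

```hlsl
ret.mNormal = gNormalMap.SampleLevel( gLinearSampler, midPointTexcoord, mipmap, offset ).rgb;
ret.mNormal = normalize( ret.mNormal * 2.0f - 1.0f );
ret.mNormal.y = -ret.mNormal.y;
```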
