# DX11 Normal Interpolation issues on my generated terrain

## Recommended Posts

Hi all,
I am having a weird problem with the normals on my generated terrain. I am not sure whether it is a shader or a mesh issue, but here is how it looks:

As you can see, I get this pattern along the edges of the triangles. It reminds me of per-vertex shading, but I am aiming for per-pixel shading.

Here are my vertex and pixel shaders:

VS:

    cbuffer cbToProjection
    {
        float4x4 matToProj;
    };

    struct VS_IN
    {
        float4 Position : POSITION;
        float3 Normal   : NORMAL;
    };

    struct VS_OUT
    {
        float4 Position : SV_POSITION;
        float3 NormalWS : TEXCOORD1;
    };

    VS_OUT main(VS_IN IN)
    {
        VS_OUT OUT;
        OUT.Position = mul(IN.Position, matToProj);
        OUT.NormalWS = normalize(IN.Normal);
        return OUT;
    }

PS:

    struct PS_IN
    {
        float4 Position : SV_POSITION;
        float3 NormalWS : TEXCOORD1;
    };

    float4 main(PS_IN IN) : SV_TARGET0
    {
        float3 normal  = normalize(IN.NormalWS);
        float3 toLight = normalize(float3(1, 3, -2));
        float  NDotL   = saturate(dot(toLight, normal));
        float4 color   = float4(1.0f, 1.0f, 1.0f, 1.0f);
        color.rgb *= NDotL;
        return color;
    }

so what am I doing wrong?

##### Share on other sites

From your shader code, it looks like you are doing this correctly (although you could remove the normalize call on the normal in the VS, since you renormalize in the PS after rasterization). My guess is that your terrain defines three vertices for each triangle face, rather than one vertex at each grid point. You can verify this by checking the number of vertices you are passing in with your draw call, or by using PIX/the Graphics Debugger to see how many primitives are generated from how many input vertices.
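
If it helps, here is a minimal sketch (my own illustration with made-up names, not your code) of building such a shared-vertex grid on the CPU. It assumes one height value per grid point, stored row-major, leaves the normals to be filled in afterwards, and the winding may need flipping depending on your rasterizer state:

    // Sketch: indexed grid with one shared vertex per grid point, so the
    // rasterizer interpolates a single normal per point across all adjacent faces.
    #include <cstdint>
    #include <vector>

    struct Vertex { float px, py, pz; float nx, ny, nz; };

    void BuildGrid(const std::vector<float>& height, int cellsX, int cellsZ,
                   std::vector<Vertex>& vertices, std::vector<uint32_t>& indices)
    {
        const int vertsX = cellsX + 1;
        const int vertsZ = cellsZ + 1;

        vertices.resize(vertsX * vertsZ);
        for (int z = 0; z < vertsZ; ++z)
            for (int x = 0; x < vertsX; ++x)
                vertices[z * vertsX + x] = { (float)x, height[z * vertsX + x], (float)z,
                                             0.0f, 1.0f, 0.0f }; // placeholder normal

        indices.reserve(cellsX * cellsZ * 6);
        for (int z = 0; z < cellsZ; ++z)
            for (int x = 0; x < cellsX; ++x)
            {
                uint32_t i0 = z * vertsX + x;   // top-left
                uint32_t i1 = i0 + 1;           // top-right
                uint32_t i2 = i0 + vertsX;      // bottom-left
                uint32_t i3 = i2 + 1;           // bottom-right
                indices.insert(indices.end(), { i0, i2, i1, i1, i2, i3 });
            }
    }

With this layout an N x N cell grid always ends up with (N+1) * (N+1) vertices and N * N * 6 indices, which is easy to sanity-check against the values you pass to DrawIndexed.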

##### Share on other sites
> From your shader code, it looks like you are doing this correctly (although you could remove the normalize call on the normal in the VS, since you renormalize in the PS after rasterization). My guess is that your terrain defines three vertices for each triangle face, rather than one vertex at each grid point. You can verify this by checking the number of vertices you are passing in with your draw call, or by using PIX/the Graphics Debugger to see how many primitives are generated from how many input vertices.

So I moved to indexed rendering, so there is only one normal per vertex, and I still get the same result.

##### Share on other sites

Have you tried different light directions? The dark areas may simply look dark because they are facing away from the directional light you have hard-coded in your shader (i.e. the dark sections are always on the -z axis in your image).

It may also be worth outputting the normal to the frame buffer so you can visually check for any issues with the interpolation:

    color = float4(normal * 0.5f + 0.5f, 1.0f);
    return color;

You can also change your light vector so that it points straight up (float3 toLight = float3( 0.0f, 1.0f, 0.0f );), which should give you more even lighting across the terrain and again help show any issue with the normals. With the off-center light angle it is difficult to say what is actually wrong, sorry :) But your shader code certainly looks fine. If it isn't the light direction confusing you, then it may be the normals themselves.

My first guess is just that the light is at an angle really ;)

n!

##### Share on other sites

It could also be that I'm misunderstanding what you're complaining about.

Sometimes people build their terrain such that the vertices look like:

Whereas you can avoid some artifacts on terrain lighting if you structure your vertices like:
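
(The images seem to be missing here, so to be explicit about my reading of them: the first layout splits every quad along the same diagonal, while the second alternates the diagonal from quad to quad. A rough index-generation sketch of the alternating split, my own code rather than anything from the original post:)

    #include <cstdint>
    #include <vector>

    // Sketch: alternate the quad diagonal in a checkerboard pattern so the
    // triangulation (and therefore the lighting interpolation) has no single
    // preferred direction across the terrain.
    std::vector<uint32_t> BuildAlternatingIndices(int cellsX, int cellsZ)
    {
        const int vertsX = cellsX + 1;
        std::vector<uint32_t> indices;
        indices.reserve(cellsX * cellsZ * 6);

        for (int z = 0; z < cellsZ; ++z)
            for (int x = 0; x < cellsX; ++x)
            {
                uint32_t i0 = z * vertsX + x;   // top-left
                uint32_t i1 = i0 + 1;           // top-right
                uint32_t i2 = i0 + vertsX;      // bottom-left
                uint32_t i3 = i2 + 1;           // bottom-right

                if ((x + z) & 1)                // split along the i0-i3 diagonal
                    indices.insert(indices.end(), { i0, i2, i3, i0, i3, i1 });
                else                            // split along the i1-i2 diagonal
                    indices.insert(indices.end(), { i0, i2, i1, i1, i2, i3 });
            }
        return indices;
    }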

n!

##### Share on other sites
> From your shader code, it looks like you are doing this correctly (although you could remove the normalize call on the normal in the VS, since you renormalize in the PS after rasterization). My guess is that your terrain defines three vertices for each triangle face, rather than one vertex at each grid point. You can verify this by checking the number of vertices you are passing in with your draw call, or by using PIX/the Graphics Debugger to see how many primitives are generated from how many input vertices.
>
> So I moved to indexed rendering, so there is only one normal per vertex, and I still get the same result.

That may or may not mean that there is exactly one vertex normal being used at each grid point.  How many vertices are in your vertex buffer, how many indices in your index buffer, and how many primitives are you drawing?  Compare that with your grid size and make sure that you only have N+1 x N+1 vertices for a grid of size N x N.
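
For example, a grid of 16 x 16 cells should come out to 17 x 17 = 289 vertices, 16 * 16 * 2 = 512 triangles and 512 * 3 = 1536 indices; noticeably more vertices than that usually means the faces are not actually sharing vertices.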

##### Share on other sites

> Have you tried different light directions? The dark areas may simply look dark because they are facing away from the directional light you have hard-coded in your shader (i.e. the dark sections are always on the -z axis in your image).
>
> It may also be worth outputting the normal to the frame buffer so you can visually check for any issues with the interpolation:
>
>     color = float4(normal * 0.5f + 0.5f, 1.0f);
>     return color;
>
> You can also change your light vector so that it points straight up (float3 toLight = float3( 0.0f, 1.0f, 0.0f );), which should give you more even lighting across the terrain and again help show any issue with the normals. With the off-center light angle it is difficult to say what is actually wrong, sorry :) But your shader code certainly looks fine. If it isn't the light direction confusing you, then it may be the normals themselves.
>
> My first guess is just that the light is at an angle really ;)
>
> n!

Setting the light to (0, 1, 0) doesn't help; the problem persists, and rendering the normals into the frame buffer shows the same problem.

> It could also be that I'm misunderstanding what you're complaining about.
>
> Sometimes people build their terrain such that the vertices look like:
>
> Whereas you can avoid some artifacts on terrain lighting if you structure your vertices like:
>
> n!

I am building the mesh the way shown in the first picture; I will try the other way, thanks.

> From your shader code, it looks like you are doing this correctly (although you could remove the normalize call on the normal in the VS, since you renormalize in the PS after rasterization). My guess is that your terrain defines three vertices for each triangle face, rather than one vertex at each grid point. You can verify this by checking the number of vertices you are passing in with your draw call, or by using PIX/the Graphics Debugger to see how many primitives are generated from how many input vertices.
>
> So I moved to indexed rendering, so there is only one normal per vertex, and I still get the same result.
>
> That may or may not mean that there is exactly one vertex normal being used at each grid point. How many vertices are in your vertex buffer, how many indices in your index buffer, and how many primitives are you drawing? Compare that with your grid size and make sure that you only have N+1 x N+1 vertices for a grid of size N x N.

For a 16x16 grid, there are 256 vertices, and the index buffer holds 1350 indices.
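
(If I've counted that right, those numbers are at least self-consistent for shared vertices: 16 x 16 = 256 grid points give 15 x 15 = 225 quads, i.e. 450 triangles and 450 * 3 = 1350 indices.)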

##### Share on other sites

> It could also be that I'm misunderstanding what you're complaining about.
>
> Sometimes people build their terrain such that the vertices look like:
>
> Whereas you can avoid some artifacts on terrain lighting if you structure your vertices like:
>
> n!

So I tried building the mesh as in the second image; unfortunately, the problems did not go away.

16x16 mesh:

normals:

##### Share on other sites
Hey, thanks for the images. The lighting in the first one shows your problem much more clearly. It looks like your normals are incorrect. For terrain normals, I take the height sample at (x, y) and compute the normal for that sample with:

    float h0 = GetSample( x + 0, y - 1 );   // neighbour one sample "behind"
    float h1 = GetSample( x - 1, y + 0 );   // neighbour one sample to the left
    float h2 = GetSample( x + 1, y + 0 );   // neighbour one sample to the right
    float h3 = GetSample( x + 0, y + 1 );   // neighbour one sample "in front"

    Vector3 normal;

    normal.x = h1 - h2;      // slope across x (central difference)
    normal.y = separation;   // separation = distance between samples (I use 1.0f).
    normal.z = h0 - h3;      // slope across y (central difference)

    normal.Normalize();

    return normal;
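
In case it's useful, here is a rough CPU-side sketch (my own code, with made-up helper names) of applying that per-sample formula to a whole heightmap, clamping the lookups at the borders:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Clamp the lookup so edge vertices reuse the border sample.
    static float GetSample(const std::vector<float>& h, int w, int d, int x, int y)
    {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, d - 1);
        return h[y * w + x];
    }

    // One normal per heightmap sample, using the central differences above.
    void ComputeNormals(const std::vector<float>& h, int w, int d,
                        float separation, std::vector<float>& normals /* xyz per sample */)
    {
        normals.resize(w * d * 3);
        for (int y = 0; y < d; ++y)
            for (int x = 0; x < w; ++x)
            {
                float h0 = GetSample(h, w, d, x,     y - 1);
                float h1 = GetSample(h, w, d, x - 1, y    );
                float h2 = GetSample(h, w, d, x + 1, y    );
                float h3 = GetSample(h, w, d, x,     y + 1);

                float nx = h1 - h2;
                float ny = separation;   // distance between samples, as above
                float nz = h0 - h3;

                float len = std::sqrt(nx * nx + ny * ny + nz * nz);
                normals[(y * w + x) * 3 + 0] = nx / len;
                normals[(y * w + x) * 3 + 1] = ny / len;
                normals[(y * w + x) * 3 + 2] = nz / len;
            }
    }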

If that doesn't help, perhaps you could post your normal calculation code?

n! Edited by nfactorial

##### Share on other sites

> Hey, thanks for the images. The lighting in the first one shows your problem much more clearly. It looks like your normals are incorrect. For terrain normals, I take the height sample at (x, y) and compute the normal for that sample with:
>
>     float h0 = GetSample( x + 0, y - 1 );
>     float h1 = GetSample( x - 1, y + 0 );
>     float h2 = GetSample( x + 1, y + 0 );
>     float h3 = GetSample( x + 0, y + 1 );
>
>     Vector3 normal;
>
>     normal.x = h1 - h2;
>     normal.y = separation; // separation = distance between samples (I use 1.0f).
>     normal.z = h0 - h3;
>
>     normal.Normalize();
>
>     return normal;
>
> If that doesn't help, perhaps you could post your normal calculation code?
>
> n!

Are you using y-up for your normals, but z-up for your sample locations in this response?

EDIT: Also, I think the y value of the normal should be 2 * the separation, according to this post at least: http://www.gamedev.net/topic/163625-fast-way-to-calculate-heightmap-normals/
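
A quick sketch of where that factor comes from: for a height field y = h(x, z), an un-normalized normal is (-dh/dx, 1, -dh/dz). With central differences over a sample spacing s, dh/dx ≈ (h2 - h1) / (2s) and dh/dz ≈ (h3 - h0) / (2s), so scaling the whole vector by 2s gives (h1 - h2, 2s, h0 - h3); hence the y component should be 2 * separation before normalizing.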

Edited by Vexal
