RobMaddison

Per-pixel lighting looking like per-vertex lighting


Hi all. I'm using DirectX9 with vs_3_0/ps_3_0 shaders, and I'm having a bit of an issue with my normals. My terrain's patches are made up of quads like this:
*---*---*
|  /|  /|
| / | / |
|/  |/  |
*---*---*
I have, for now, uncompressed normals at each vertex, and when rendered it looks like the terrain is lit per vertex rather than per pixel. Here's a solid screenshot (link) and a wireframe one (link). It's mostly noticeable when the terrain slopes across the diagonal, or in other words, when the diagonal of the quads runs along the direction of the terrain 'hip' or bump. Do I need to switch anything on to use per-pixel lighting? I thought you just got shading per-pixel by using pixel shaders. Here's a snippet of my very simple pixel shader:
...
float4 m = float4(0.288f, 0.408f, 0.631f, 1.0f);
float s = saturate(dot(lightVec, In.Normal));
m = m + s; // add the scalar diffuse term to the base colour
return m;
}
All it is doing is using the light to change from light blue to white. Any ideas? Thanks all

Quote:
Original post by RobMaddison
I thought you just got shading per-pixel by using pixel shaders.


No, it doesn't work that way, because the quality of the normal is what's ultimately important. Here you are just taking the interpolated per-vertex normal and doing the dot product in the pixel shader.

What you need is a per-pixel normal. Usually this involves a normal map, which gives you a much more detailed normal very cheaply, but you don't have to use one.

Here is what you generally do:

1) Make sure your mesh vertices have a precalculated tangent vector.
2) In the vertex shader, create a tangent-binormal-normal matrix:

half3x3 worldToTangentSpace;
worldToTangentSpace[0] = mul(In.tangent, (float3x3)matWorld);                   // world-space tangent
worldToTangentSpace[1] = mul(cross(In.normal, In.tangent), (float3x3)matWorld); // world-space binormal
worldToTangentSpace[2] = mul(In.normal, (float3x3)matWorld);                    // world-space normal

3) transform your light and output
Out.light.xyz = mul(worldToTangentSpace, vecSunDir);

4) In the PIXEL SHADER normalize this light vector...the normalization is important:

float3 Sun=normalize(In.light);

5) then do your dot product...you can use a normal map like this:

float3 normalmap = tex2D(normalsamp, In.tex.xy).xyz * 2 - 1;
float diffuse = saturate(dot(normalmap, Sun));

If you don't want a normal map, just use a flat normal like:

float3 normalmap=float3(0.5,0.5,1)*2-1;
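
If it helps, here's how those pieces might fit together (a minimal sketch only, not drop-in code; matWorld, matWorldViewProj, vecSunDir and normalsamp are assumed to be set by the application, and the mesh is assumed to carry a TANGENT input):

// sketch of steps 1-5 together; all names here are assumptions
float4x4 matWorld;
float4x4 matWorldViewProj;
float3 vecSunDir;   // world-space light direction
sampler normalsamp;

struct VS_OUT
{
    float4 pos   : POSITION;
    float2 tex   : TEXCOORD0;
    float3 light : TEXCOORD1; // tangent-space light vector, left unnormalized
};

VS_OUT NormalMapVS(float3 pos : POSITION, float2 tex : TEXCOORD0,
                   float3 normal : NORMAL, float3 tangent : TANGENT)
{
    VS_OUT Out;
    Out.pos = mul(float4(pos, 1.0f), matWorldViewProj);
    Out.tex = tex;

    // step 2: tangent-binormal-normal matrix in world space
    float3x3 worldToTangentSpace;
    worldToTangentSpace[0] = mul(tangent, (float3x3)matWorld);
    worldToTangentSpace[1] = mul(cross(normal, tangent), (float3x3)matWorld);
    worldToTangentSpace[2] = mul(normal, (float3x3)matWorld);

    // step 3: move the light into tangent space
    Out.light = mul(worldToTangentSpace, vecSunDir);
    return Out;
}

float4 NormalMapPS(VS_OUT In) : COLOR0
{
    float3 Sun = normalize(In.light);                 // step 4
    float3 n = tex2D(normalsamp, In.tex).xyz * 2 - 1; // step 5
    return saturate(dot(n, Sun));
}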

---------------------------------------------------------------------------

Note: for a much simpler method without a normal map, I THINK you can do this: don't bother with the tangent stuff; just renormalize the normal in the pixel shader and then do the dot product... you should get reasonable per-pixel lighting.
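
In code that's just something like (sketch, using the names from the original snippet):

// no tangent basis: renormalize the interpolated normal, then dot
float3 N = normalize(In.Normal);
float s = saturate(dot(normalize(lightVec), N));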

Thanks Matt

I was under the impression that the normal passed from the Vertex shader to the pixel shader was already interpolated. So you're saying you have to manually do this?

I don't have any normal maps (yet), I simply have a [pre-calculated] averaged normal per vertex. This comes into the vertex shader via a stream.

I thought the whole point of vertex/pixel shaders was that whatever you passed from the vertex to the pixel shader was interpolated ready for the pixel shader to use. I guess not :)

I'll look into your suggestion, thanks.

Rob

Quote:
Original post by RobMaddison
Thanks Matt

I was under the impression that the normal passed from the Vertex shader to the pixel shader was already interpolated. So you're saying you have to manually do this?


No no, it's interpolated. What you're doing in your simple pixel shader is definitely per-pixel lighting. It looks like your problem there is that something is a little wacky with your normals, but I'm not sure what. Hopefully a terrain expert will see this and can help you out further; it's not something I'm particularly knowledgeable about. [smile]

EDIT: make sure you normalize your normal in the pixel shader; you don't get a unit vector when you interpolate between two unit vectors. For example, halfway between (1, 0, 0) and (0, 1, 0) the interpolated vector is (0.5, 0.5, 0), which has a length of only about 0.707, so the diffuse term is dimmed towards the middle of the triangle.

Vertex lighting means calculating the reflectance at each vertex and letting the interpolator compute the in-between values. If, on the other hand, the vertex normals and light vectors are interpolated and the reflectance is computed by the pixel shader, then per-pixel lighting is being done. That is because normal interpolation _does_ give you a normal per pixel (so to speak). Sure, a normal map allows more surface detail, but it is in no way a _requirement_ for per-pixel lighting. Normal maps are just a way to enhance the mesh's surface structure without raising the spatial resolution of the mesh.

Assume a small specular reflectance lobe that sits between 2 vertices. Using vertex lighting, the lobe is only sampled at its flanks by the 2 vertices, and hence the interpolated reflectance is, well, flat everywhere. But if the vectors are interpolated and the dot product is computed in the pixel shader, then the sampling is done (typically) at a much higher resolution, the lobe isn't missed, and a nice highlight will be rendered. q.e.d. :)
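
To make the distinction concrete, a sketch (hypothetical helper functions; lightDir is a normalized directional light):

float3 lightDir; // normalized directional light (assumed)

// vertex lighting: call this in the VERTEX shader and let the
// interpolator blend the resulting scalar; a lobe that sits between
// two vertices gets flattened out
float DiffusePerVertex(float3 vertexNormal)
{
    return saturate(dot(vertexNormal, lightDir));
}

// per-pixel lighting: interpolate the normal instead and call this
// in the PIXEL shader, so the lobe is sampled at pixel rate
float DiffusePerPixel(float3 interpolatedNormal)
{
    return saturate(dot(normalize(interpolatedNormal), lightDir));
}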

However, what does the vertex shader look like? What about the normalization Matt has mentioned? Can you show us all the relevant parts of the shaders?


Just for nit-picking: a "bi-normal" is part of the co-ordinate frame (e.g. the Frenet frame) on a curve. When dealing with a surface, we have a "bi-tangent" instead. I don't know when the term bi-normal came to be applied to surfaces, perhaps in the semantic declarations for vertex buffers? Never mind. It is often used synonymously, although it isn't really correct.

This looks suspicious:

float s = saturate(dot(lightVec, In.Normal));

The 2 vectors you dot must be normalized per-pixel. That is, it should be dot(normalize(lightVec),normalize(In.Normal)).

Also, *very important*: the light vector should *not* be normalized in the vertex shader. It should be interpolated unnormalized, and normalized only in the pixel shader. Normalizing it in the vertex shader (prior to interpolation) will definitely yield incorrect results.
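
For example, with a per-vertex light vector (a point light), the split might look like this (a sketch; matWorld, matViewProj and lightPos are assumed names):

float4x4 matWorld;
float4x4 matViewProj;
float3 lightPos; // world-space point-light position (assumed)

struct V2P
{
    float4 pos      : POSITION;
    float3 normal   : TEXCOORD0;
    float3 lightVec : TEXCOORD1; // deliberately NOT normalized here
};

V2P PointLightVS(float3 pos : POSITION, float3 normal : NORMAL)
{
    V2P Out;
    float4 worldPos = mul(float4(pos, 1.0f), matWorld);
    Out.pos = mul(worldPos, matViewProj);
    Out.normal = normal;
    Out.lightVec = lightPos - worldPos.xyz; // normalize per pixel, not here
    return Out;
}

float4 PointLightPS(V2P In) : COLOR0
{
    // both vectors normalized only after interpolation
    return saturate(dot(normalize(In.lightVec), normalize(In.normal)));
}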

You could post your full vertex and pixel shaders for more details.

The point I was trying to make is that with diffuse lighting in a situation like this, plain per-pixel lighting (certainly your normal is interpolated) is not always a big win in terms of quality; that's why I suggested normal maps. They're a low-cost way to correct almost any problem due to low vertex density (other than fixing the mesh itself).

Other than that, the result in your screenshot doesn't look like anything is wrong; it's just that the vertex density is too low and too discontinuous to produce very smooth lighting without higher-frequency normal data. Per-pixel interpolation itself isn't a magic bullet with meshes like this (although for specular lighting it is a huge improvement; try adding a specular term to your code and you will see. If it looks bad then you know something is wrong.)
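
A quick specular test could be as simple as this (sketch; eyePos is an assumed uniform, and lightVec is assumed to point toward the light):

float3 eyePos; // world-space camera position (assumed)

float SpecularTest(float3 worldPos, float3 normal, float3 lightVec)
{
    float3 N = normalize(normal);
    float3 V = normalize(eyePos - worldPos);
    float3 H = normalize(V + normalize(lightVec)); // Blinn half-vector
    return pow(saturate(dot(N, H)), 32.0f);        // arbitrary exponent
}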

Hi guys

Thanks again for all your valuable input, this site is brilliant.

Here's my cut-down shader (I've removed my texturing stuff for clarity; it wasn't being used for this issue anyway):

struct V_To_P
{
    float4 Position : POSITION;
    float3 Normal   : NORMAL;
};

float1 startPositionX;
float1 startPositionY;

float4x4 xViewProjection;
float4x4 xWorld;
float3 lightVec;

V_To_P SimpleVertexShader(float3 Position : POSITION, float4 YPos : TEXCOORD, float3 Normal : NORMAL)
{
    V_To_P Output = (V_To_P)0;

    float4 newPos = float4(Position.x, YPos.x, Position.z, 1.0);
    float4 xFormedPos = mul(newPos, xWorld);
    Output.Position = mul(xFormedPos, xViewProjection);
    Output.Normal = Normal;
    return Output;
}

float4 SimplePixelShader(V_To_P In) : COLOR0
{
    float4 m = float4(0.288f, 0.408f, 0.631f, 1.0f);

    float s = saturate(dot(normalize(lightVec), normalize(In.Normal)));

    return m + s;
}

technique Simplest
{
    pass Pass0
    {
        VertexShader = compile vs_3_0 SimpleVertexShader();
        PixelShader  = compile ps_3_0 SimplePixelShader();
    }
}


The Position parameter contains the x and z of the terrain, the YPos parameter contains the height, and Normal contains the normal.

I'm fairly confident that the normals are correct, but I will double-check them. My lightVec constant is normalized when it is initially set on the shader.

Is everything interpolated between the vertex shader and pixel shader? I mean in my V_To_P type, I've specified Normal, but I haven't explicitly told the interpolator to interpolate it. Does it work that out from the semantic?

Thanks
Rob

The results look correct to me. You'll notice a huge difference if you try to place a point light close to the terrain; but in this case your terrain is lit by a directional light and has no specular reflection, so per-pixel lighting won't improve the results much over per-vertex lighting.

Y.

Yes, use a normal map. If you are using LOD and/or uneven tessellation, this map can even be made just from the heightmap you are using; that would at least give the normals consistent density. Also try blending in a detail normal map. All this will considerably improve the lighting.

Also, terrain lighting can be improved using a slight fresnel term and specular.
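
Something along these lines, perhaps (a sketch only; the Schlick approximation and all constants are assumptions, not tested code):

float3 eyePos; // world-space camera position (assumed)
float3 sunDir; // normalized, pointing toward the sun (assumed)

float4 SnowLighting(float3 worldPos, float3 normal)
{
    float3 N = normalize(normal);
    float3 V = normalize(eyePos - worldPos);
    float3 H = normalize(V + sunDir);

    float diffuse  = saturate(dot(N, sunDir));
    float specular = pow(saturate(dot(N, H)), 16.0f);

    // Schlick fresnel approximation; F0 of roughly 0.02 for snow/ice
    float fresnel = 0.02f + 0.98f * pow(1.0f - saturate(dot(N, V)), 5.0f);

    float3 snow = float3(0.9f, 0.95f, 1.0f);
    return float4(snow * diffuse + fresnel * specular, 1.0f);
}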

Quote:
Original post by RobMaddison
float4 SimplePixelShader(V_To_P In) : COLOR0
{
    float4 m = float4(0.288f, 0.408f, 0.631f, 1.0f);

    float s = saturate(dot(normalize(lightVec), normalize(In.Normal)));

    return m + s;
}

Why are you adding m and s? This effectively gives you an ambient term of m and a white diffuse term; I would sort of expect you to multiply m and s.

My understanding is that this problem is standard. Basically, even though the colors match along the edges of the triangles, the color derivative can be discontinuous, which makes the edges visible. Mach banding also causes these color gradients to appear sharper than they are.

I have found that once the terrain is textured, these artifacts are not detectable. Smoothing the terrain also helps.

Quote:
Yes, use a normal map. If you are using LOD and/or uneven tessellation, this map can even be made just from the heightmap you are using; that would at least give the normals consistent density. Also try blending in a detail normal map. All this will considerably improve the lighting.

Also, terrain lighting can be improved using a slight fresnel term and specular.


Matt - I am essentially using a normal map, but on a per-vertex basis. My terrain is 4096x4096, so effectively I have a 4096x4096 normal map (which is currently uncompressed whilst I try and sort this issue). I can't really store a normal map at any finer granularity than that due to space constraints (if that's what you mean?). I'll take a look at using the fresnel term and specular - thanks.


Quote:
Why are you adding m and s? This effectively gives you an ambient term of m and a white diffuse term; I would sort of expect you to multiply m and s.


Swiftcoder - it doesn't really matter at this stage for me; I'm just aiming for my terrain to go from light blue to white (snow). If I start my terrain off as light blue and multiply m by s, I get dark blue (or black if s is zero) to bright white. I have tried variations on this and it doesn't improve the terrain; in fact it gets slightly worse, as the range is 0 to 1 rather than my shade of blue to 1.
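
(For what it's worth, a lerp gives the blue-to-white ramp while staying in range; a sketch against my existing variables:

float4 m = float4(0.288f, 0.408f, 0.631f, 1.0f);
float s = saturate(dot(normalize(lightVec), normalize(In.Normal)));
return lerp(m, float4(1.0f, 1.0f, 1.0f, 1.0f), s); // blue at s=0, white at s=1

Variations like this don't fix the diamond artifact, though.)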


Quote:
My understanding is that this problem is standard. Basically, even though the colors match along the edges of the triangles, the color derivative can be discontinuous, which makes the edges visible. Mach banding also causes these color gradients to appear sharper than they are.

I have found that once the terrain is textured, these artifacts are not detectable. Smoothing the terrain also helps.


Quat - I think you're right. I have been studying the Crysis editor and it suffers from the same issue I'm seeing (if you paint some hilly terrain white, you can clearly see the same diamond-shaped irregularities). Also, using dark textures does fix this issue (or at least hides it pretty well), but the intention for my terrain is mostly fresh white snow, so I'll need to think of another way.

If a fresnel term and specular don't improve things, my thought is to give in and use a pre-calculated lightmap.

One thing that still puzzles me: I've seen lots of pixel shader tutorials online and in books (I have two fairly in-depth shader books), and the ubiquitous teapot lit by per-pixel diffuse lighting never seems to suffer from this problem - although I guess the mesh density of the teapot is much higher than the low-LOD parts of my terrain.

Thanks

Quote:
Original post by RobMaddison
Matt - I am essentially using a normal map, but on a per-vertex basis. My terrain is 4096x4096, so effectively I have a 4096x4096 normal map (which is currently uncompressed whilst I try and sort this issue). I can't really store a normal map at any finer granularity than that due to space constraints (if that's what you mean?). I'll take a look at using the fresnel term and specular - thanks.

Your terrain is not evenly tessellated, as can be seen in your wireframe screenshots. Using a normal map (even if it's no finer than 4096; it can even be 2048) in the pixel shader will smooth out problems caused by this. Most of the discontinuities you pointed out occur at boundaries of tessellation quality.

Also, to improve the lighting you can use a tiled normal map with some detail in it; this can eliminate any further problems.
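
For instance (a sketch; the sampler names and tiling factor are made up):

sampler baseNormalSamp;   // your per-vertex-resolution terrain normal map
sampler detailNormalSamp; // small, tiling detail normal map

float3 BlendedNormal(float2 uv)
{
    float3 baseN   = tex2D(baseNormalSamp, uv).xyz * 2 - 1;
    float3 detailN = tex2D(detailNormalSamp, uv * 64.0f).xyz * 2 - 1;

    // cheap blend: perturb the base normal by the detail's xy, renormalize
    return normalize(float3(baseN.xy + detailN.xy, baseN.z));
}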

I had a very similar problem myself, though I'm not sure I remember the cause exactly. I think my normals were off by one; more precisely, I was using OpenGL immediate mode and was defining the normal AFTER the vertex, causing vertex 2 to use normal 1 and so on.

However you are sending the normals and vertices, I would definitely check that your normals and vertices match up. I will try and find my old code to see if I can recreate this.

Quote:
Your terrain is not evenly tessellated, as can be seen in your wireframe screenshots. Using a normal map (even if it's no finer than 4096; it can even be 2048) in the pixel shader will smooth out problems caused by this. Most of the discontinuities you pointed out occur at boundaries of tessellation quality.

Also, to improve the lighting you can use a tiled normal map with some detail in it; this can eliminate any further problems.


Matt - it is evenly tessellated; it's just the change in LODs that you can see in the screenshots. This also happens when you view an area of the terrain at the highest LOD (so evenly tessellated). It's when the diagonal of the quad goes across a ridge of terrain that this is most noticeable. I can also view my full terrain at the lowest LOD level (LODs are 1, 2, 4, 8 and 16 samples wide), which still yields this unsightly effect - I'll post some more screenshots shortly.

What do you mean by using another normal map? I already have a normal map that covers the whole terrain with a normal at each vertex. Are you talking about a normal map to use across each terrain quad? If so, how would you calculate those normals? I'm not sure I see what you mean.


Quote:
I had a very similar problem myself, though I'm not sure I remember the cause exactly. I think my normals were off by one; more precisely, I was using OpenGL immediate mode and was defining the normal AFTER the vertex, causing vertex 2 to use normal 1 and so on.

However you are sending the normals and vertices, I would definitely check that your normals and vertices match up. I will try and find my old code to see if I can recreate this.


ArThor - I have checked the normal creation carefully and everything lines up as it should.

I'll post some more screenshots in a short while.

Thanks

Here are some more detailed screenshots:

This one and in wireframe show the terrain at the highest (most detailed) level of detail. You can see areas (circled) where the diamonds appear. If you look at the wireframe version, you can see that these particular quads run against the direction of the terrain 'shelf'. By that, I mean that the quads at those points aren't really a true representation of the slope.

In this one and in wireframe, it's even more noticeable.

In this one, things look half-decent, but that's because it's taken from a position where the quads are a truer representation of the slope, i.e., they lie at a more favourable angle for the slope they represent.

You can still see areas where diamonds are showing, though, mostly at the bottom right. Whilst I appreciate that things might not look quite so good if a quad lies unfavourably for the terrain it's representing, I'd have thought per-pixel shading would have done a better job.

Thanks

Quote:
Original post by RobMaddison
Whilst I appreciate that things might not look quite so good if a quad lies unfavourably for the terrain it's representing, I'd have thought per-pixel shading would have done a better job.
I am thinking that you might be splitting quads into triangles in one direction, and generating normals with the split in the other direction. Any chance you can post your normal generation code?

Quote:
I am thinking that you might be splitting quads into triangles in one direction, and generating normals with the split in the other direction. Any chance you can post your normal generation code?


Swiftcoder - I've checked and double-checked the normal generation code, as I thought exactly the same as you've just suggested, but it's fine.

Essentially, this is how it works:


*---b---c
|  /|  /|
| / | / |
|/  |/  |
a---*---d
|  /|  /|
| / | / |
|/  |/  |
f---e---*



If we're considering the vertex in the middle, '*', I calculate normals for f-*-a, a-*-b, b-*-c, c-*-d, d-*-e and e-*-f and average them over the number considered (which could be fewer if we're on an edge or corner of the entire terrain). I then store the averaged normal in my normal [vertex] buffer as a D3DXVECTOR3.

I did try removing the averaging code and just setting the normal to that of the last triangle considered, which yielded _very_ per-vertex-lit results. That inclines me to think it's not the normals.

Incidentally, things look fine on a slope which runs along the terrain axis - so:


*---*---*
|  /|  /|
| / | / |
|/  |/  |
*---*---* <--- camera
|  /|  /|
| / | / |
|/  |/  |
*---*---*



If you imagine the terrain slopes down toward the camera position uniformly, it looks fine.

It's this situation:


l---h---h
|  /|  /|
| / | / |
|/  |/  |
h---l---h
|  /|  /|
| / | / |
|/  |/  |
h---h---l
    ^
    | camera



That yields the worst results: looking at the terrain across the diagonals, where 'l' is low and 'h' is high (like looking north-west along a mini-valley).

I'm starting to think this is literally just because my terrain quad format doesn't lend itself well enough to representing the terrain, as in some directions it looks fine.

Quote:
Original post by RobMaddison
That yields the worst results: looking at the terrain across the diagonals, where 'l' is low and 'h' is high (like looking north-west along a mini-valley).

I'm starting to think this is literally just because my terrain quad format doesn't lend itself well enough to representing the terrain, as in some directions it looks fine.
Well, yeah. If you have a valley across the direction of the triangle splits, it will be a problem. The reason most people never encounter this is that they generate terrain normals from the heightmap, rather than from the triangle mesh. I don't know if you are using a heightmap, but presumably you have some high-level representation of your terrain - try generating the normals from that.

You can also try just using the 4 vertices in a cross to generate the normals (really simple on a heightmap); the normals might not be quite as accurate, but they shouldn't exhibit these problems either.
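
On a regular grid the cross method collapses to a central difference, something like this (sketch; gridSpacing is the world-space distance between height samples, and y is up as in your diagrams):

// normal from the 4-neighbour cross: hL/hR are the heights to the
// left/right of the vertex, hD/hU the heights below/above it
float3 CrossNormal(float hL, float hR, float hD, float hU, float gridSpacing)
{
    return normalize(float3(hL - hR, 2.0f * gridSpacing, hD - hU));
}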

Edit: Just to clarify - you are using world-space normals intentionally, right?


Quote:
Well, yeah. If you have a valley across the direction of the triangle splits, it will be a problem. The reason most people never encounter this is that they generate terrain normals from the heightmap, rather than from the triangle mesh. I don't know if you are using a heightmap, but presumably you have some high-level representation of your terrain - try generating the normals from that.

You can also try just using the 4 vertices in a cross to generate the normals (really simple on a heightmap); the normals might not be quite as accurate, but they shouldn't exhibit these problems either.

Edit: Just to clarify - you are using world-space normals intentionally, right?


Swiftcoder - thanks for the post. I did try generating the normals from a cross of vertices, but the results weren't improved (I thought they would be). I used these vertices:


.---c---.
|  /|  /|
| / | / |
|/  |/  |
b---*---d
|  /|  /|
| / | / |
|/  |/  |
.---a---.



So for vertex '*', I average the surface normals of a-*-b, b-*-c, c-*-d and d-*-a. I'm not using a standard bitmap or image file for my heightmap, so it might be tricky to use something to create a normal map from it. Also, my normals need to be streamed into the vertex shader (later) for compression.

I am using world-space normals intentionally, yes - the terrain doesn't have any rotation or scale, only translation. And seeing as I'm using a directional light, translation shouldn't matter with regard to normals (unless I'm wrong there). Here's the code to calculate the normals; xPos and yPos contain the position of the vertex in question - apologies for the unoptimized code.


// calculate the normal
int x0 = -1;
int x2 = +1;
int y0 = -1;
int y2 = +1;

int triangles = 0;

// a vector array - this will hold the triangles around the point (xPos, yPos)
D3DXVECTOR3 vectors[4][2];

// bottom left triangle
int xPos2 = xPos + x0;
int yPos2 = yPos + y0;
if (xPos2 >= 0 && yPos2 >= 0)
{
    xPos2 = xPos;
    yPos2 = yPos + y0;
    vectors[triangles][0] = D3DXVECTOR3(0.0f, (float)heightMap->GetHeightAtPoint(xPos2, yPos2), -1.0f);
    xPos2 = xPos + x0;
    yPos2 = yPos;
    vectors[triangles][1] = D3DXVECTOR3(-1.0f, (float)heightMap->GetHeightAtPoint(xPos2, yPos2), 0.0f);
    triangles++;
}

// top left triangle
xPos2 = xPos + x0;
yPos2 = yPos + y2;
if (xPos2 >= 0 && yPos2 <= pTerrain->GetTerrainParams()->Width)
{
    yPos2 = yPos;
    xPos2 = xPos + x0;
    vectors[triangles][0] = D3DXVECTOR3(-1.0f, (float)heightMap->GetHeightAtPoint(xPos2, yPos2), 0.0f);
    xPos2 = xPos;
    yPos2 = yPos + y2;
    vectors[triangles][1] = D3DXVECTOR3(0.0f, (float)heightMap->GetHeightAtPoint(xPos2, yPos2), 1.0f);
    triangles++;
}

// top right triangle
xPos2 = xPos + x2;
yPos2 = yPos + y2;
if (xPos2 <= pTerrain->GetTerrainParams()->Width && yPos2 <= pTerrain->GetTerrainParams()->Width)
{
    xPos2 = xPos;
    vectors[triangles][0] = D3DXVECTOR3(0.0f, (float)heightMap->GetHeightAtPoint(xPos2, yPos2), 1.0f);
    xPos2 = xPos + x2;
    yPos2 = yPos;
    vectors[triangles][1] = D3DXVECTOR3(1.0f, (float)heightMap->GetHeightAtPoint(xPos2, yPos2), 0.0f);
    triangles++;
}

// bottom right triangle
xPos2 = xPos + x2;
yPos2 = yPos + y0;
if (xPos2 <= pTerrain->GetTerrainParams()->Width && yPos2 >= 0)
{
    xPos2 = xPos + x2;
    yPos2 = yPos;
    vectors[triangles][0] = D3DXVECTOR3(1.0f, (float)heightMap->GetHeightAtPoint(xPos2, yPos2), 0.0f);
    xPos2 = xPos;
    yPos2 = yPos + y0;
    vectors[triangles][1] = D3DXVECTOR3(0.0f, (float)heightMap->GetHeightAtPoint(xPos2, yPos2), -1.0f);
    triangles++;
}

// create the normals of each triangle and average them
D3DXVECTOR3 averageNormal = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
for (int i = 0; i < triangles; i++)
{
    D3DXVECTOR3 normal = CalculateNormal(vectors[i][1], D3DXVECTOR3(0.0f, (float)heightMap->GetHeightAtPoint(xPos, yPos), 0.0f), vectors[i][0]);
    D3DXVec3Normalize(&normal, &normal);
    averageNormal += normal;
}

averageNormal /= (float)triangles;
D3DXVec3Normalize(&averageNormal, &averageNormal);

// add the normal to the normal [vertex] buffer
pNormals[normalCount++] = averageNormal;

........

// calculates a normal from a triangle
D3DXVECTOR3 CalculateNormal(D3DXVECTOR3& v0, D3DXVECTOR3& v1, D3DXVECTOR3& v2)
{
    D3DXVECTOR3 edge1 = v2 - v1;
    D3DXVECTOR3 edge2 = v0 - v1;
    D3DXVECTOR3 normal;

    D3DXVec3Cross(&normal, &edge1, &edge2);
    D3DXVec3Normalize(&normal, &normal);
    return normal;
}



jpventoso - I did try that, but then all the normals change when I move my camera around; i.e., if I look at a point in shadow and then move the camera slightly, that same point becomes lighter (or darker).

Here's the latest screenshot, and in wireframe.

Thanks

Am I right in thinking that, as my terrain isn't transformed in any way other than translation (patches are reused and translated in the x/z directions), and my light (sun) vector is a normalized direction (i.e. a directional light), I do NOT need to convert any normals in my shaders to any other coordinate space? I'm assuming they can all stay in world space.

Thanks

