## Normal artifacting exposes underlying topology

14 replies to this topic

### #1 Promit (Senior Moderators)


Posted 28 June 2013 - 03:31 PM

I have a heightmap for which I've computed normals. When I simply output normals as color, I get this:

So basically the interpolation is not smooth and is showing up with artifacts that reflect the underlying topology. I don't understand why I'm getting this or what to do about it. Normalizing per fragment makes no difference. Is the actual normal computation wrong and showing up as this artifact, or is there something else going on here?
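
The original post doesn't include the normal-computation code, but a common approach for heightmaps is central differences over neighbouring samples; if the computation looks like the sketch below, the normals themselves are fine and the artifact comes from interpolation, as the replies explain. A minimal CPU-side sketch (plain Python; the function name and layout are illustrative, not Promit's actual code):

```python
import math

def heightmap_normal(h, x, z, scale=1.0):
    """Per-vertex normal from a heightmap via central differences.

    h     -- 2D list of heights, indexed h[z][x]
    scale -- horizontal spacing between samples
    """
    # Clamp neighbour indices at the borders.
    x0, x1 = max(x - 1, 0), min(x + 1, len(h[0]) - 1)
    z0, z1 = max(z - 1, 0), min(z + 1, len(h) - 1)
    dx = (h[z][x1] - h[z][x0]) / ((x1 - x0) * scale)
    dz = (h[z1][x] - h[z0][x]) / ((z1 - z0) * scale)
    n = (-dx, 1.0, -dz)  # negated gradient, y-up
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```

A flat heightmap yields (0, 1, 0) everywhere; a uniform ramp yields the same tilted normal at every vertex, yet the rendered interpolation still shows the triangle edges.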

SlimDX | Shark Eaters for iOS | Ventspace Blog | Twitter | Proud supporter of diversity and inclusiveness in game development

### #2 phil_t (Members)


Posted 28 June 2013 - 03:47 PM

That's actually expected, due to the way it's triangulated; there's probably nothing wrong with your calculations. If you generate your triangles in the opposite orientation, you'll get artifacts facing the other direction. The artifacts become less noticeable the smaller you make your triangles, or the more textured your terrain is.

You can also reduce it quite a bit if you orient each triangle depending on the direction the slope faces (I suspect you'll notice that on slopes rotated 90 degrees from the one you pictured above, the artifact is much less noticeable). If the edge that splits each quad into two triangles is aligned so that it runs between the two vertices with the least height difference between them, things will look better.
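
That diagonal-selection rule can be sketched as follows (plain Python; the corner naming and helper function are illustrative, working on heights only):

```python
def triangulate_quad(hA, hB, hC, hD):
    """Split a heightmap quad into two triangles along the 'flatter' diagonal.

    Corner heights laid out as:
        A --- B
        |     |
        D --- C
    Returns the two triangles as tuples of corner names.
    """
    # Pick the diagonal whose endpoints have the smaller height difference,
    # so the crease between the two triangles follows the slope.
    if abs(hA - hC) <= abs(hB - hD):
        return (("A", "B", "C"), ("A", "C", "D"))  # split along A-C
    return (("A", "B", "D"), ("B", "C", "D"))      # split along B-D
```

On a uniform slope this makes every quad fold along the slope direction, which hides the crease.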

I explain this a bit here, under "Additional optimizations":

http://mtnphil.wordpress.com/2011/09/22/terrain-engine/

You could probably also get something that looks better if you sampled the terrain normals in the pixel shader rather than the vertex shader, since then you'd get values interpolated between 4 normals instead of 3.

Edited by phil_t, 28 June 2013 - 03:51 PM.

### #3 marcClintDion (Members)


Posted 28 June 2013 - 04:01 PM

What you are seeing is normal, no pun intended.  I'm surprised that you are not seeing at least some improvement by normalizing the normals in the fragment processor.

You can either increase the detail of the model or you can post your shader code.   There are several people around who can help you configure your shaders to perform better, both visually and computationally.

Also, normal mapping will resolve this issue. For dynamic height mapping you'd have to use a tangent-space normal map (unless someone has come up with something clever that I'm not aware of). Most people only use tangent maps anyway.

There is an example in the Cg Toolkit from NVIDIA that shows how to use a normalization cube map to align tangent-space normals to your model's geometry.

https://developer.nvidia.com/cg-toolkit

In the installed directory, look for /cg/examples/OpenGL/basic/23_bump_map_floor. It will port to HLSL or GLSL with no issues.

Consider it pure joy, my brothers and sisters, whenever you face trials of many kinds, because you know that the testing of your faith produces perseverance. Let perseverance finish its work so that you may be mature and complete, not lacking anything.

### #4 phil_t (Members)


Posted 28 June 2013 - 04:05 PM

> Also, normal mapping will resolve this issue

How so? If that were true you could just simulate a normal map where everything is (0, 0, 1) and be done with it :-)

### #5 Promit (Senior Moderators)


Posted 28 June 2013 - 04:19 PM

Dang. I was really hoping you guys wouldn't say that. I've seen the artifact before and vaguely remembered hearing it was expected, but Friday afternoon was not kind to me. I guess I'll hope that normal maps add enough entropy to cover for it.

> You could probably also get something that looks better if you instead sampled the terrain normals in the pixel shader instead of the vertex shader, since then you'd get values interpolated between 4 normals instead of 3.

Hmm. Suppose instead of running the normals through the geometry, I uploaded to a texture and then sampled them per fragment? That would create a bilinear interpolation across all four corners that's independent of tessellation and might just get me out of trouble.
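
To illustrate why the texture route helps: a bilinear texture fetch blends all four corner normals, while per-vertex interpolation inside a triangle only ever blends that triangle's three corners; at the quad center, which lies on the shared diagonal, that means just two of them. A small sketch (plain Python; the corner values are illustrative):

```python
def bilerp(c00, c10, c01, c11, u, v):
    """Bilinear interpolation of 3-vectors over a quad (what a texture fetch does)."""
    lerp = lambda a, b, t: tuple(x + (y - x) * t for x, y in zip(a, b))
    return lerp(lerp(c00, c10, u), lerp(c01, c11, u), v)

# Four corner normals of one quad (left unnormalized for clarity).
n00, n10, n01, n11 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)

# Sampling a normal texture at the quad center blends all four corners:
center_tex = bilerp(n00, n10, n01, n11, 0.5, 0.5)

# Per-vertex interpolation: the center lies on the shared n10-n01 diagonal,
# so either triangle blends only those two corners there:
center_tri = tuple((a + b) / 2 for a, b in zip(n10, n01))
```

The two results differ exactly along the diagonal, which is where the artifact shows up; the texture fetch is also independent of how the quad was triangulated.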

Edited by Promit, 28 June 2013 - 04:21 PM.


### #6 cowsarenotevil (Members)


Posted 28 June 2013 - 05:38 PM

Like others have said, this is to be expected. If you find that it's a real problem in practice, though, you might be able to work around the "problem" by interpolating your normals non-linearly.

> I'm surprised that you are not seeing at least some improvement by normalizing the normals in the fragment processor.

I disagree.

Here's (I hope) a useful analogy: imagine that, instead of "topology," you are working with only a low-resolution normal map, where each pixel corresponds to a "vertex." The old, fixed-function Gouraud shading is analogous to computing the lighting at the resolution of the original normal map, then scaling the whole image up to the target size with linear interpolation. Per-pixel (Phong) shading would involve scaling the normal map to the target size and then computing the lighting.

Note that if all we're doing is rendering the normal map (ignoring lighting), these two processes won't produce different results, so there's no advantage to doing it "per-pixel". The only way you'll get a result that doesn't exhibit the "topology" (which is really just like the pixelation of a scaled-up image) is to use an interpolation algorithm whose artifacts you don't find unpleasant.

Ultimately, you're still interpolating, generating data where there is none; you just have to find a way to generate data that you like.
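
The point can be made concrete: a linear output (rendering the normal itself as a color) commutes with interpolation, so per-vertex and per-pixel give identical images, while a nonlinear step such as normalization does not commute. A small sketch (plain Python; vectors chosen for illustration):

```python
import math

def lerp(a, b, t):
    """Linear interpolation of two 3-vectors, as the rasterizer does."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

na, nb = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)  # two unit vertex normals
t = 0.5

# Output IS the normal (a linear map): interpolating the per-vertex outputs
# equals applying the output to the interpolated input.
color_per_vertex = lerp(na, nb, t)
color_per_pixel = lerp(na, nb, t)

# Insert a nonlinear step (normalize) and the two orders diverge:
shaded_per_vertex = lerp(normalize(na), normalize(nb), t)  # interpolate outputs
shaded_per_pixel = normalize(lerp(na, nb, t))              # interpolate, then apply
```

This is exactly why moving the normal *output* to the pixel shader changes nothing by itself: something about the interpolation (or a nonlinear step after it) has to change.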

-~-The Cow of Darkness-~-

### #7 Krypt0n (Members)


Posted 28 June 2013 - 06:32 PM

Interpolating per-vertex colors across triangles is actually called Gouraud shading, and if you look it up on Google Images you'll see that this is just the way it looks. The reason is that you interpolate linearly something that isn't linear: at the center of a quad, you effectively take the average of just two corners. The solution back then was to interpolate linearly only what really is linear (e.g. the normal, light vector, and eye vector), normalize those, and then do the shading per pixel (Phong). That actually looks correct; not physically based, but with no topology artifacts.
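
The Gouraud-vs-Phong difference in miniature (plain Python; vectors chosen for illustration): Gouraud lights at the vertices and interpolates the resulting intensities, while Phong interpolates the normals, renormalizes, and lights per pixel, and the two give different values at the midpoint:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def diffuse(n, light):
    """Lambertian term: clamped dot product of unit normal and unit light vector."""
    return max(sum(a * b for a, b in zip(n, light)), 0.0)

L = normalize((1.0, 1.0, 0.0))
n0 = (0.0, 1.0, 0.0)             # flat vertex
n1 = normalize((1.0, 1.0, 0.0))  # 45-degree sloped vertex

# Gouraud: light at the vertices, then linearly interpolate the colors.
gouraud_mid = (diffuse(n0, L) + diffuse(n1, L)) / 2

# Phong: interpolate the normals, renormalize, then light per pixel.
mid_normal = normalize(tuple((a + b) / 2 for a, b in zip(n0, n1)))
phong_mid = diffuse(mid_normal, L)
```

Gouraud averages cos(45°) and cos(0°) to about 0.854, while Phong evaluates cos(22.5°) ≈ 0.924 at the true halfway normal; the difference is the banding you see across triangle interiors.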

### #8 marcClintDion (Members)


Posted 29 June 2013 - 01:24 PM

> How so? If that were true you could just simulate a normal map where everything is (0, 0, 1) and be done with it :-)

I suppose you could do that instead of loading one of the two normal maps used in that example; personally I think the idea has merit, since it saves texture bandwidth. But that's beside the point: whether you use a flat blue texture or define all the unprocessed normals in the fragment processor as (0, 0, 1) is irrelevant to what the shader is ultimately doing.

The NVIDIA sample uses a normalization cube map, and it's this map that helps adjust the light for the contours of the model. The cube map is a world-space normal map; it's not (0, 0, 1) like the tangent map. But again, you are absolutely right: the tangent map is useless if there is no intention to add extra detail between the vertices of a low-poly model. It's just that this code example is set up that way. Your idea would optimize the shader for cases like this where the bump detail is not needed.


### #9 marcClintDion (Members)

435
Like
0Likes
Like

Posted 29 June 2013 - 02:23 PM

> Here's (I hope) a useful analogy: imagine that, instead of "topology," you are working with only a low-resolution normal map, where each pixel corresponds to a "vertex." The old, fixed-function Gouraud shading is analogous to computing the lighting at the resolution of the original normal map, then scaling the whole image up to the target size with linear interpolation. Per-pixel (Phong) shading would involve scaling the normal map to the target size and then computing the lighting.

I'm not surprised that you disagree. If a person were to shrink a normal map down to almost nothing and then resize it back up again, it would introduce such unsightly artifacts as to make the texture completely hideous and unusable. I would never consider doing this; there would be no point in using them at all.

Why in the world would you consider scaling down a normal map only to scale it back up with linear interpolation, or any interpolation for that matter? It made me think of an analogy: you spend the whole weekend polishing your car, only to finish by slinging mud at it and rubbing the mud into the paint until it's all gouged up.

The proper way to generate a normal map is to estimate roughly how many pixels the model will take up on screen, then set your 3D modeling program to generate the normal map at whatever size fills that much screen space; to be safe, generate one power-of-two size bigger. Only shrink it down when you're sure it's ready to go. Don't try to save space now and up-size later; if you end up in that situation, use the 3D modeling software to regenerate the larger size.

If you want to compress your normal maps, make sure you use a lossless format or you will damage them. Visually, it will look like you took sandpaper to your finely polished model.

Normals that are compressed to color space in an RGB texture have already been damaged by that process to begin with. Only floating-point textures can avoid this completely.

Personally, I cringe at the idea of using a normal map smaller than the screen area the model takes up. They can be mipmapped fine without much corruption if you get the filter settings right, but they cannot be sampled up in size without seriously damaging them; unless they are blurred, of course. Low-frequency noise can be scaled up without too much apparent aliasing creeping in.


### #10 marcClintDion (Members)


Posted 30 June 2013 - 01:48 PM

removed: didn't seem to be appreciated...  and I obviously did not read the original post properly

Edited by marcClintDion, 30 June 2013 - 09:44 PM.


### #11 marcClintDion (Members)


Posted 30 June 2013 - 03:56 PM

same...

Edited by marcClintDion, 30 June 2013 - 09:45 PM.


### #12 Digitalfragment (Members)


Posted 30 June 2013 - 07:28 PM

> Hmm. Suppose instead of running the normals through the geometry, I uploaded to a texture and then sampled them per fragment? That would create a bilinear interpolation across all four corners that's independent of tessellation and might just get me out of trouble.

Yeah, this works quite well. It has the advantage that you no longer need vertex normals / tangents / binormals passed through the vertex pipe, reducing bandwidth in the vertex shader too. The texture coordinate can be derived from the position, and as you pointed out, it's effectively bilinearly filtered across the quad, as opposed to the interpolation that happens across the triangle.

Another alternative, which I have tried before, is to triangulate the quads based on their slope direction: always cut the quad across the diagonal that's closest to being vertically flat.

### #13 cowsarenotevil (Members)


Posted 30 June 2013 - 07:40 PM

No, I'm wrong, I should have put that more tactfully.

You wrote that you tried normalizing in the fragment shader and that this had no effect.

Could it be that you only normalized one of the two terms that are used in the diffuse lighting calculation? As follows:

```hlsl
float3 N = normalize(normal);             // If you do not normalize both of these in the
float3 L = normalize(lightPosition - P);  // fragment shader, you will not see any benefit;
                                          // it will still look like the image you posted.
float diffuseLight = max(dot(L, N), 0.0);
```

The image that Promit posted represents, in his own words, the normals as colors. There are no lighting calculations being performed at all. Thus, the result will represent linear interpolation whether it is computed in the pixel shader, in the vertex shader, or in the fixed-function pipeline (unless specific steps are taken to use another type of interpolation).

That's why everyone is saying that the image represents the expected result.

-~-The Cow of Darkness-~-

### #14 skytiger (Members)

294
Like
1Likes
Like

Posted 02 July 2013 - 12:57 PM

I solved this problem by using Catmull-Rom interpolation for the terrain heightfield, combined with runtime calculation of normals using B-spline interpolation.

B-splines have C2 continuity... no artifacts!

But it is expensive...

http://skytiger.wordpress.com/2010/11/28/xna-large-terrain/
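
skytiger's full implementation is at the link above; for reference, the 1D Catmull-Rom kernel at the heart of such a sampler looks roughly like this (plain Python sketch of the standard formula; his actual shader code may differ):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom interpolation between p1 and p2, with p0/p3 as outer
    control points; t runs from 0 (at p1) to 1 (at p2)."""
    return 0.5 * (
        2.0 * p1
        + (p2 - p0) * t
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
        + (3.0 * (p1 - p2) + p3 - p0) * t * t * t
    )
```

The curve passes through p1 and p2 with continuous first derivatives across segments, so heights sampled this way have no creases at quad boundaries; applying it in two axes gives bicubic sampling of the heightfield.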

### #15 marcClintDion (Members)


Posted 03 July 2013 - 01:33 AM

Oh... I'm still embarrassed.

