#### Archived

This topic is now archived and is closed to further replies.

# Calculating vertex normals for LOD planet terrain


## Recommended Posts

Hi, I'm trying to implement lighting in my spherical terrain project. The project uses the LOD scheme described here; note that all vertices are reused and shared. I have lighting working fairly well, but it doesn't look quite right.

I'm trying to calculate vertex normals in the standard way: averaging the surface normals of the triangles that share the vertex. I'll try to explain my problem; consider the following situation. Triangles a, b, c and d are the new children that result from the split of the parent. Triangle e is an unsplit neighbour of the parent being split. How would I calculate the normal of vertex 1 (the vertex shared by triangles a, b and d)? Would I take the average of the surface normals of triangles a, b and d, or the average of a, b, d AND e?

Another question: is it necessary to recalculate the normals of vertices that belong to a triangle after the triangle is split? I had thought that the normals of the triangle's vertices would be affected by the orientation of the triangle's children.

If there is a better way to perform basic lighting for spherical LOD terrain then please say so. I'm not really familiar with shaders, as my graphics card doesn't support them. If anyone requires a more comprehensive diagram/description of the problem, then please just say so. Many thanks in advance, it is most appreciated.

[edited by - cheese on May 28, 2004 7:02:52 PM]

##### Share on other sites
If this really is just for tessellating triangles on a sphere, you could just use the mathematically correct normal (just normalize the vector from the center of the sphere to the vertex).
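A minimal sketch of that analytic sphere normal (the `Vec3` type and function names are illustrative, not from the original project):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Analytic normal for a vertex lying on a sphere: the direction from
// the sphere's center to the vertex, normalized to unit length.
Vec3 sphereNormal(const Vec3& center, const Vec3& vertex) {
    Vec3 d = { vertex.x - center.x, vertex.y - center.y, vertex.z - center.z };
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { d.x / len, d.y / len, d.z / len };
}
```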

The problem is that the lighting will be different for the large tri (e) and for the two smaller triangles sharing the edge (a, b). So you won't have physical gaps, but you will have lighting discontinuities (for lack of a better term). Depending on where this occurs it may not be noticeable.

If you need to keep the lighting coherent, you may just want to average the normals of the two endpoints, though I'm pretty sure that for a sphere that should be the same thing.

I say try them all and see which one you like best.

lonesock

Piranha are people too.

##### Share on other sites
Use all normals of the connected faces. The problem is that e is four times bigger than a, b and d, so it should weigh more.

n = (1*a.n + 1*b.n + 1*d.n + 4*e.n)/7

Or forget the 7 if you have to normalize the result anyway (and I guess you should).
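The area-weighted average above might look like this in code (a sketch; the `Vec3` type and the 1:1:1:4 weights follow the formula in this post, and normalizing at the end makes the exact divisor irrelevant):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Weighted average of the four face normals meeting at vertex 1:
// triangle e covers four times the area of a, b or d, so its normal
// gets four times the weight. The sum is then normalized.
Vec3 weightedVertexNormal(Vec3 a, Vec3 b, Vec3 d, Vec3 e) {
    Vec3 n = { a.x + b.x + d.x + 4.0 * e.x,
               a.y + b.y + d.y + 4.0 * e.y,
               a.z + b.z + d.z + 4.0 * e.z };
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```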

##### Share on other sites
Thanks for the replies. Like you suggested lonesock, I've tried a few different methods, and in the situation I explained, I seemed to get the best results by averaging the normals of the vertices at either end of the edge being bisected. If triangle e is split and has children, I calculate the normal in the correct way of averaging the 6 triangles that share the vertex.

I'll post some screenshots later on if I get the chance, just to get some feedback on how 'correct' the lighting looks.

Thanks again for the replies.

##### Share on other sites
I find basic dot3 normal mapping a lot better than vertex lighting for this sort of thing. Vertex lighting makes LOD transitions much more visible.

Anyway, you should avoid T-junctions as much as possible. This vertex shouldn't be there at all. For example, you could replace all references to "vertex 1" with the vertex at the top of triangle b, which would make triangles a and d connect properly to e. This will not only solve your problem with lighting, it'll also prevent the inevitable little cracks between the polygons caused by rounding errors and the like.

##### Share on other sites
First of all, you indeed need to re-calculate the normals of adjacent triangles if LOD changes. Otherwise, "high-level" vertices (created at an early LOD stage) and "low-level" vertices will be together in one low-level triangle, e.g. the triangle below vertex 1 if face a splits many times. This triangle will look ugly unless you adjust the normal of vertex 1.

In my first rendering approach I have used the same split method as you do. The big problem is to prevent the lighting jump if LOD changes.

Note that you can NOT prevent this jump if you set
normal_1 = normalize(normal_a+normal_b)
unless you use Phong lighting. If you use Gouraud lighting (the standard at the moment), you get for triangle e:

light value at vertex 1 = (light value at A + light value at B) / 2 (see ASCII art)

```
A...........C
 .         .
  .   e   .
   1     .
    .   .
     . .
      B
```

But for the triangle a and b you get at vertex 1:

light value at vertex 1 = light vector * normal_1

which might be DIFFERENT from the light value above, e.g. if normal_1 points directly at the light but normal_A and normal_B don't.
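A small numeric illustration of that mismatch (the vectors here are made up for the example; light points along +z, normal_1 points straight at the light, and normal_A, normal_B are tilted 45 degrees away):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Big triangle e: Gouraud computes lighting at A and B and then
// linearly interpolates, so the midpoint (vertex 1) gets the average.
double gouraudAtMidpoint(Vec3 light, Vec3 nA, Vec3 nB) {
    return 0.5 * (dot(light, nA) + dot(light, nB));
}

// Small triangles a, b: lighting is evaluated at vertex 1 directly
// from its own normal, which can give a different value.
double directAtVertex(Vec3 light, Vec3 n1) {
    return dot(light, n1);
}
```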

I found the lighting jump to be VERY noticeable. In fact, it looked horrible. I changed my engine to use ROAM-style diamonds, but I had to animate lots of normals all the time, creating a huge CPU overhead.

The big advantage of light/normal maps is that you don't have to bother about this stuff at all. Lighting is ALWAYS correct. So I would strongly recommend it.

##### Share on other sites
Thanks for the replies everyone, I've finally got the lighting to look satisfactory - as you suggested Lutz, I needed to re-calculate the normal of existing vertices.

However, the re-calculation does not involve another cross product. Say I was re-calculating the normal of the vertex at the top of triangle b in the diagram; it's:

normal = (normal of parent + normal of b) * 0.5

The surface normals of the parent (the triangle being split into a, b, c and d), and triangle b are already normalized, and so their average should also be a unit vector. Is this right? Either way, it looks correct in practice.

I would use normal maps, however I'm not sure that it is possible to do so, because I do not use a heightmap - all vertex heights are calculated during run-time using a noise algorithm. In addition, my terrain is spherical - and potentially the size of a planet (to scale). This is why I have to calculate vertex heights during runtime: the potential number of vertices means that precalculating a normal map would take incredibly long, and the normal map would be huge.

I agree that changes in LOD could be a big problem with lighting, however I see no other way of doing it right now (I'm still fairly early in development). To make LOD changes less obvious, I would use geomorphing, and recalculate a vertex's normal after its geomorphed height had been calculated.

[edited by - cheese on June 1, 2004 1:16:20 PM]

##### Share on other sites
I use dot3 in my planet renderer... admittedly it doesn't display full sized planets at 1-meter resolution, but if you're already generating a texture then it's not too hard to generate a normal map at the same time. My implementation spends a lot more time on the base texture (selecting terrain type based on slope, altitude, latitude etc) than the normal map. I just sort of assumed you'd be generating textures too if you're generating geometry on the fly (so the texture fits the geometry).

Btw, the average of two unit vectors is only a unit vector if the original two are the same; otherwise the average will be shorter than 1. (Consider two unit vectors pointing in opposite directions, giving a zero-length average.)
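This is easy to check numerically, which also shows why the averaged normal from the earlier post should be re-normalized before use (a sketch with an illustrative `Vec3` type):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

double length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Average two unit normals and re-normalize. The raw average is
// shorter than 1 whenever the inputs differ, so the division is
// needed to get a proper unit normal back.
Vec3 averagedNormal(Vec3 a, Vec3 b) {
    Vec3 avg = { 0.5 * (a.x + b.x), 0.5 * (a.y + b.y), 0.5 * (a.z + b.z) };
    double len = length(avg); // < 1 unless a == b
    return { avg.x / len, avg.y / len, avg.z / len };
}
```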

##### Share on other sites
I hope this thread wasn't too old to respond to - if so then I apologise.

I think perhaps I've misunderstood your explanation slightly - do you mean that you precalculate texture coordinates and normals in a single pass over the heightmap?

I'm pretty sure precalculation of a normal map can still be applied to procedural noise-based terrain, though perhaps only up to a certain level of accuracy. Perhaps I can use a combination of normal mapping for the low-resolution mesh, and then calculate normals on the fly for triangles at a higher resolution.

Thanks as always.

##### Share on other sites
Yes, I do it all in a precalculation process.

(for each triangular sector)
1. generate heightmap
2. generate vertices & calculate normals
3. generate texture & normal map
4. throw away vertex normals and heightmap
5. transfer everything else to VBO

My texture generator finds which triangle each texel is in and basically interpolates the normal and texture color parameters between the three vertices of the triangle. This is needed because the texture is a different size from the heightmap.
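The per-texel interpolation step described above might look roughly like this (a sketch, not the poster's actual code; it assumes the texel's barycentric coordinates u, v, w inside the triangle have already been found):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Blend the three vertex normals of the containing triangle using the
// texel's barycentric coordinates (u + v + w == 1 inside the triangle),
// then re-normalize, since the blend of unit vectors is generally
// shorter than unit length.
Vec3 interpolateNormal(Vec3 n0, Vec3 n1, Vec3 n2,
                       double u, double v, double w) {
    Vec3 n = { u * n0.x + v * n1.x + w * n2.x,
               u * n0.y + v * n1.y + w * n2.y,
               u * n0.z + v * n1.z + w * n2.z };
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```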

You could probably do this on the fly for the high-res patches if they are limited in size and you prevent multiple patches being created/textured per frame (queue the texture generation requests and use the normal map of the low-res mesh while waiting). It'll take some clever coding and optimization though.
