Calculating vertex normals for LOD planet terrain


Hi, I'm trying to implement lighting in my spherical terrain project. The project uses the LOD scheme described here. Note that all vertices are reused and shared. I have lighting working fairly well, but it doesn't look quite right.

I'm trying to calculate vertex normals in the standard way: averaging the surface normals of the triangles that share the vertex. I'll try to explain my problem. Consider the following situation: triangles a, b, c and d are the new children that result from the split of their parent, and triangle e is an unsplit neighbour of the parent being split. How would I calculate the normal of vertex 1 (the vertex shared by triangles a, b and d)? Would I take the average of the surface normals of triangles a, b and d, or the average of a, b, d AND e?

Another question: is it necessary to recalculate the normals of the vertices that belong to a triangle after the triangle is split? I had thought that the normals of the triangle's vertices would be affected by the orientation of the triangle's children.

If there is a better way to perform basic lighting for spherical LOD terrain then please say so. I'm not really familiar with shaders, as my graphics card doesn't support them. If anyone requires a more comprehensive diagram/description of the problem, then please just say so. Many thanks in advance, it is most appreciated.

[edited by - cheese on May 28, 2004 7:02:52 PM]

If this really is just for tessellating triangles on a sphere, you could just use the mathematically correct normal (just normalize the vector from the center of the sphere to the vertex).
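For the pure-sphere case, that suggestion is a one-liner. A minimal sketch (the names `center` and `vertex` are hypothetical; any 3-component vector representation would do):

```python
import math

def sphere_normal(center, vertex):
    """Exact normal on a sphere: the unit vector from the center to the vertex."""
    d = [v - c for v, c in zip(vertex, center)]
    length = math.sqrt(sum(x * x for x in d))
    return [x / length for x in d]
```

This gives the analytically correct normal regardless of how the sphere is tessellated, which is why it sidesteps the triangle-averaging question entirely.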

The problem is that the lighting will be different for the large triangle (e) and for the two smaller triangles sharing the edge (a, b). So you won't have physical gaps, but you will have lighting discontinuities (for lack of a better term). Depending on where this occurs it may not be noticeable.

If you need to keep the lighting coherent, you may just want to average the normals of the two endpoints, though I'm pretty sure that for a sphere that should be the same thing.

I say try them all and see which one you like best.

lonesock

Piranha are people too.

Use all the normals of the connected faces. The problem is that e is four times bigger than a, b and d, so it should weigh more:

n = (1*a.n + 1*b.n + 1*d.n + 4*e.n) / 7

Or forget the 7 if you have to normalize the result anyway (and I guess you should).
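The area-weighted average described above can be sketched like this (a sketch only; the weights 1, 1, 1, 4 correspond to the relative areas of triangles a, b, d and e from the diagram):

```python
import math

def weighted_vertex_normal(faces):
    """Average unit face normals weighted by face area, then renormalize.
    `faces` is a list of (unit_normal, area) pairs."""
    n = [0.0, 0.0, 0.0]
    for normal, area in faces:
        for i in range(3):
            n[i] += area * normal[i]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]
```

Usage would be `weighted_vertex_normal([(na, 1), (nb, 1), (nd, 1), (ne, 4)])`; as noted, the division by 7 is absorbed by the final renormalization.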

Thanks for the replies. Like you suggested, lonesock, I've tried a few different methods, and in the situation I explained I seemed to get the best results by averaging the normals of the vertices at either end of the edge being bisected. If triangle e is split and has children, I calculate the normal in the correct way, averaging the normals of the 6 triangles that share the vertex.

I'll post some screenshots later on if I get the chance, just to get some feedback on how 'correct' the lighting looks.

Thanks again for the replies.

I find a basic dot3 normal map a lot better than vertex lighting for this sort of thing. Vertex lighting makes LOD transitions much more visible.

Anyway, you should avoid T-junctions as much as possible. This vertex shouldn't be there at all. For example, you could replace all references to "vertex 1" with the vertex at the top of triangle b, which would make triangles a and d properly connect to e. This will not only solve your problem with lighting, it'll also prevent the inevitable little cracks between the polygons caused by rounding errors and the like.

First of all, you do indeed need to re-calculate the normals of adjacent triangles when the LOD changes. Otherwise, "high-level" vertices (created at an early LOD stage) and "low-level" vertices will end up together in one low-level triangle, e.g. the triangle below vertex 1 if face a splits many times. This triangle will look ugly unless you adjust the normal of vertex 1.

In my first rendering approach I used the same split method as you do. The big problem is preventing the lighting jump when the LOD changes.

Note that you can NOT prevent this jump if you set
normal_1 = normalize(normal_A + normal_B)
unless you use Phong lighting. If you use Gouraud lighting (the standard at the moment), then for triangle e you get
light value at vertex 1 = (light value at A + light value at B) / 2 (see the ASCII art)

A...........C
.         .
.   e   .
1     .
.   .
. .
B

But for the triangle a and b you get at vertex 1:

light value at vertex 1 = light vector * normal_1

which might be DIFFERENT from the light value above, e.g. if normal_1 points directly at the light but normal_A and normal_B don't.
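A small numeric illustration of that difference (a sketch with made-up normals, chosen so that the averaged normal at vertex 1 points straight at the light):

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

light = [0.0, 0.0, 1.0]            # directional light along +z
n_A = normalize([ 0.6, 0.0, 0.8])  # normal at A, tilted one way
n_B = normalize([-0.6, 0.0, 0.8])  # normal at B, tilted the other way

# Triangle e (Gouraud): the intensity at vertex 1's position is the
# interpolated average of the intensities computed at A and B.
gouraud_mid = (dot(light, n_A) + dot(light, n_B)) / 2   # 0.8

# Triangles a/b: vertex 1 has its own averaged normal, which here points
# straight at the light, so the intensity is computed directly from it.
n_1 = normalize([a + b for a, b in zip(n_A, n_B)])
direct_mid = dot(light, n_1)                            # 1.0
```

The two sides of the shared edge disagree by 20% of full brightness here, which is exactly the visible seam being described.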

I found the lighting jump to be VERY noticeable. In fact, it looked horrible. I changed my engine to use ROAM-style diamonds, but I had to animate lots of normals all the time, creating a huge CPU overhead.

The big advantage of light/normal maps is that you don't have to bother about this stuff at all. Lighting is ALWAYS correct. So I would strongly recommend it.

Thanks for the replies everyone, I've finally got the lighting to look satisfactory - as you suggested Lutz, I needed to re-calculate the normals of existing vertices.

However, the re-calculation does not involve another cross product. Say I was re-calculating the normal of the vertex at the top of triangle b in the diagram; it's:

normal = (normal of parent + normal of b) * 0.5

The surface normals of the parent (the triangle being split into a, b, c and d) and of triangle b are already normalized, and so their average should also be a unit vector. Is this right? Either way, it looks correct in practice.

I would use normal maps, however I'm not sure that it is possible to do so, because I do not use a heightmap - all vertex heights are calculated at run-time using a noise algorithm. In addition, my terrain is spherical - and potentially the size of a planet (to scale). This is why I have to calculate vertex heights during runtime: the potential number of vertices means that precalculating a normal map would take incredibly long, and the normal map would be huge.

I agree that changes in LOD could be a big problem with lighting, however I see no other way of doing it right now (I'm still fairly early in development). To make LOD changes less obvious, I would use geomorphing, and re-calculate each normal after the vertex's geomorphed height has been computed.
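That geomorphing idea can be sketched as a simple blend (an assumption about how the morph might work, not code from the thread): interpolate the normal along with the height as the morph factor advances, then renormalize.

```python
import math

def lerp(a, b, t):
    """Component-wise linear interpolation between vectors a and b."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def geomorph_normal(coarse_n, fine_n, t):
    """Blend a vertex normal from its coarse-LOD value to its fine-LOD
    value as the morph factor t goes from 0 to 1, renormalizing the result."""
    n = lerp(coarse_n, fine_n, t)
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]
```

Driving t from the same distance metric that drives the vertex-height morph would keep the lighting transition in step with the geometric one.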

Thanks again for your replies.

[edited by - cheese on June 1, 2004 1:16:20 PM]

I use dot3 in my planet renderer... admittedly it doesn't display full-sized planets at 1-meter resolution, but if you're already generating a texture then it's not too hard to generate a normal map at the same time. My implementation spends a lot more time on the base texture (selecting terrain type based on slope, altitude, latitude etc.) than on the normal map. I just sort of assumed you'd be generating textures too if you're generating geometry on the fly (so the texture fits the geometry).

Btw, the average of two unit vectors is only a unit vector if the original two are the same. Otherwise the average will be shorter than 1. (Consider two unit vectors pointing in opposite directions, giving a zero-length average.)
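That point is easy to verify (a trivial sketch):

```python
import math

def length(v):
    return math.sqrt(sum(x * x for x in v))

a = [1.0, 0.0, 0.0]
b = [0.0, 1.0, 0.0]
avg = [(x + y) / 2 for x, y in zip(a, b)]
# length(avg) is about 0.707, not 1, so the averaged
# normal must be renormalized before it is used for lighting.
```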

I hope this thread wasn't too old to respond to - if so then I apologise.

I think perhaps I've misunderstood your explanation slightly - do you mean that you precalculate texture coordinates and normals in a single pass over the heightmap?

I'm pretty sure precalculation of a normal map can still be applied to procedural noise-based terrain, though perhaps only up to a certain level of accuracy. Perhaps I can use a combination of normal mapping for the low-resolution mesh, and then calculate normals on the fly for triangles at a higher resolution.
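Calculating normals on the fly straight from a noise function is feasible with central differences. A sketch for a flat patch (`noise_height` is a hypothetical stand-in for the real noise routine; a spherical version would differentiate along the surface tangent directions instead):

```python
import math

def noise_height(x, z):
    # Stand-in for the real procedural noise; any smooth height function works.
    return math.sin(x) * math.cos(z)

def normal_from_height(x, z, eps=1e-3):
    """Approximate the normal of the surface y = h(x, z) by central
    differences of the height function, then normalize."""
    dhdx = (noise_height(x + eps, z) - noise_height(x - eps, z)) / (2 * eps)
    dhdz = (noise_height(x, z + eps) - noise_height(x, z - eps)) / (2 * eps)
    n = [-dhdx, 1.0, -dhdz]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]
```

Since this only evaluates the noise four extra times per vertex, it fits a runtime-generated mesh where no precomputed heightmap exists.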

Thanks as always.

Yes, I do it all in a precalculation process.

(for each triangular sector)
1. generate heightmap
2. generate vertices & calculate normals
3. generate texture & normal map
4. throw away vertex normals and heightmap
5. transfer everything else to VBO

My texture generator finds which triangle each texel is in and basically interpolates the normal and texture color parameters between the three vertices of the triangle. This is needed because the texture is a different size than the heightmap.
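That interpolation step might look like this (a sketch only, assuming 2-D barycentric coordinates over the triangle's heightmap footprint; all names are hypothetical):

```python
import math

def barycentric_normal(p, tri, normals):
    """Interpolate per-vertex normals at point p inside the 2-D triangle
    `tri` using barycentric coordinates, then renormalize the result."""
    (ax, ay), (bx, by), (cx, cy) = tri
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (p[0] - cx) + (cx - bx) * (p[1] - cy)) / denom
    v = ((cy - ay) * (p[0] - cx) + (ax - cx) * (p[1] - cy)) / denom
    w = 1.0 - u - v
    # Weight each vertex normal by its barycentric coordinate.
    n = [u * na + v * nb + w * nc for na, nb, nc in zip(*normals)]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]
```

Texture colors would be interpolated with the same u, v, w weights, just without the renormalization.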

You could probably do this on the fly for the high-res patches if they are limited in size and you prevent multiple patches being created/textured per frame (queue the texture generation requests and use the normal map of the low-res mesh while waiting). It'll take some clever coding and optimization though.

Thanks again fingers, you've been more than helpful in your responses. If possible, may I ask one more question (this is the last one, I swear)?

To what level of resolution do you precalculate? i.e. how many times do you split each of the triangles of the icosahedron? I assume the resolution in your precalculation is the maximum resolution that can be viewed during run time?

I've been carefully trying to understand the SOAR algorithm recently ('Visualization of Large Terrains Made Easy' by Lindstrom and Pascucci). It finally clicked last night, and I am now considering how I could possibly adapt it to spherical terrain (as opposed to flat).

SOAR is based on a tree of vertices as opposed to a tree of triangles, which eliminates the need for pointers to adjacent triangles in the mesh, because cracks can never be formed. Unfortunately, this makes it a lot more difficult to represent the entire planet as a single (continuous) mesh. This would also cause problems when trying to share vertices.

SOAR uses a binary tree of vertices, and so a cube is the obvious base shape. I thought that the best way would be to treat the 6 faces of the cube as 6 separate square terrains, and handle them as such.

If I used precalculation as you did, I could use the data layout schemes that are suggested by SOAR, which is one of its main performance advantages.

Maybe I'll start another thread to discuss the feasibility of SOAR for planet rendering. I don't know how many people are familiar with SOAR, or would want to discuss it. Best to try, I suppose.

Thanks again.

As it is, the icosahedron is tessellated 8 times, so each of its triangles turns into 256x256. Without big changes I could increase it to 10 at most, but then it uses up a lot of memory (that's 20M tris). It's not really suitable for ground-level viewing of real-scale planets. That requires something like what you're doing.

The difference between precalculated and on-the-fly calculated terrain isn't that great when it's a procedural terrain/texture. The same routines can be used for both. On the fly just requires lots of tweaking to prevent hitches as you move around and it generates new patches of terrain... You need to regulate how much work it does each frame.
