# Terrain Normals : Quick questions about calculating on the fly...


## Recommended Posts

I think I understand normals reasonably well, but I'm curious about some of the practical issues relating to using normals for terrain.

The simplest way to calculate normals appears to be precalculating them by taking the cross products of the edges of all the terrain's triangles and sending the results to the vertex shader, as in this tutorial:

http://www.mbsoftworks.sk/index.php?page=tutorials&series=1&tutorial=24
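The precalculation idea can be sketched like this (a minimal sketch, not the tutorial's code; it assumes a regular grid heightmap with unit spacing, where central differences give the same result as averaging the cross products of the surrounding triangle edges):

```python
def grid_normals(height, w, h):
    """Per-vertex normals for a w*h heightmap via central differences.

    height(x, z) returns the terrain height; samples are clamped
    at the borders.
    """
    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    normals = []
    for z in range(h):
        row = []
        for x in range(w):
            # Slope along x and z from the neighboring samples.
            dx = height(clamp(x + 1, 0, w - 1), z) - height(clamp(x - 1, 0, w - 1), z)
            dz = height(x, clamp(z + 1, 0, h - 1)) - height(x, clamp(z - 1, 0, h - 1))
            n = (-dx, 2.0, -dz)  # unnormalized normal
            length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
            row.append(tuple(c / length for c in n))
        normals.append(row)
    return normals

# Flat terrain: every normal points straight up.
flat = grid_normals(lambda x, z: 0.0, 4, 4)
print(flat[0][0])
```

The whole set would then be uploaded once as a vertex attribute alongside positions.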

But what about deformable terrain? Games like Magic Carpet or Populous the Beginning come to mind for me, where you have spells that can raise and lower the terrain.

The obvious naive solution is to recalculate the entire set of terrain normals after any change to the terrain, but what is a practical approach that might have been efficient enough to be used by those games ~20 years ago?

Is it as simple as splitting the map up into sectors, detecting the maximum area that might be affected by a spell, and then recalculating all of the normals in the triangles in just those sectors?
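The sector/region idea can be sketched roughly like this (my own illustration, with made-up names; it recomputes normals only inside the affected rectangle, grown by one sample because changing a height also changes the normals of its immediate neighbors):

```python
class Terrain:
    """Toy heightmap terrain with region-restricted normal updates."""

    def __init__(self, w, h):
        self.w, self.h = w, h
        self.height = [[0.0] * w for _ in range(h)]
        self.normals = [[(0.0, 1.0, 0.0)] * w for _ in range(h)]

    def normal_at(self, x, z):
        # Central differences, clamped at the borders.
        x0, x1 = max(x - 1, 0), min(x + 1, self.w - 1)
        z0, z1 = max(z - 1, 0), min(z + 1, self.h - 1)
        dx = self.height[z][x1] - self.height[z][x0]
        dz = self.height[z1][x] - self.height[z0][x]
        n = (-dx, 2.0, -dz)
        length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
        return tuple(c / length for c in n)

    def recalc_region(self, x_min, z_min, x_max, z_max):
        # Expand the rect by 1 so neighbor normals stay correct.
        for z in range(max(z_min - 1, 0), min(z_max + 2, self.h)):
            for x in range(max(x_min - 1, 0), min(x_max + 2, self.w)):
                self.normals[z][x] = self.normal_at(x, z)

t = Terrain(16, 16)
t.height[8][8] = 3.0          # a spell raised one vertex
t.recalc_region(8, 8, 8, 8)   # only ~9 normals are recomputed
print(t.normals[0][0])        # untouched vertex keeps its up normal
```

A sector grid is just a coarser version of the same thing: mark whole sectors dirty and recompute each dirty sector's rectangle.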

##### Share on other sites

If the spell has an area of effect, can't you just apply the normal recalculation to the area of effect, be it an AABB or circular radial effect?

Something to be aware of though is if your spell directly affects a specific area, is there any 'falloff' region past the edges of the area, where there is still slight deformation of the terrain, reducing to none?
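A falloff region like that is often done with a smoothstep-style curve; this is a sketch of one common shape (the exact curve is my assumption, not anything from the games discussed):

```python
def falloff(dist, inner, outer):
    """Deformation strength: 1 inside `inner`, 0 beyond `outer`,
    smoothly blended in between."""
    if dist <= inner:
        return 1.0
    if dist >= outer:
        return 0.0
    t = (outer - dist) / (outer - inner)  # 1 at inner edge, 0 at outer
    return t * t * (3.0 - 2.0 * t)        # smoothstep for a soft rim

print(falloff(0.0, 2.0, 5.0))  # 1.0: fully inside the effect
print(falloff(6.0, 2.0, 5.0))  # 0.0: outside, terrain untouched
```

Multiplying the spell's raw height change by this factor gives a rim that tapers to nothing instead of a hard cliff.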

Another thought that enters my mind, how do you prevent spells creating broken terrain, e.g. holes in the mesh and such?

This is an interesting subject...

##### Share on other sites

This is largely a question of update frequency and calculation cost.

For modifications that don't happen every frame, you usually offload the computation onto one frame and upload the new geometry to the GPU on the next. There is then a tradeoff between a dynamic buffer (faster to update, slower to render) and a static buffer; for occasional updates, a static buffer should be used, combined with on-the-fly geometry uploads.
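The "compute one frame, upload the next" pattern can be sketched as a dirty-sector queue with a per-frame budget (a minimal illustration; the class and names are made up, and the actual recompute/upload calls are elided):

```python
from collections import deque

class SectorUpdater:
    """Spread terrain updates over frames with a fixed work budget."""

    def __init__(self, budget_per_frame=2):
        self.budget = budget_per_frame
        self.dirty = deque()
        self.ready_for_upload = []  # recomputed this frame, upload next

    def mark_dirty(self, sector):
        if sector not in self.dirty:
            self.dirty.append(sector)

    def tick(self):
        # Upload whatever was computed last frame, then compute more.
        uploaded = self.ready_for_upload
        self.ready_for_upload = []
        for _ in range(min(self.budget, len(self.dirty))):
            sector = self.dirty.popleft()
            # ...recompute heights/normals for `sector` here...
            self.ready_for_upload.append(sector)
        return uploaded

u = SectorUpdater(budget_per_frame=2)
for s in [(0, 0), (0, 1), (1, 0)]:
    u.mark_dirty(s)
print(u.tick())  # []: nothing computed yet
print(u.tick())  # [(0, 0), (0, 1)]: first frame's batch gets uploaded
```

This keeps any single frame's cost bounded even when a spell dirties many sectors at once.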

On the GPU you can transform geometry instantly, but you cannot access adjacency information.

There is a lot to this subject; it depends heavily on what you want to achieve/simulate.

Edited by JohnnyCode

##### Share on other sites

> If the spell has an area of effect, can't you just apply the normal recalculation to the area of effect, be it an AABB or circular radial effect?

Well, that was the general idea of splitting the map up into sectors; to try to easily restrict the recalculations to the smallest area necessary.

Just hypothesizing that splitting the map into squares or whatever might be a practical way to do that.

> Something to be aware of though is if your spell directly affects a specific area, is there any 'falloff' region past the edges of the area, where there is still slight deformation of the terrain, reducing to none?

Possibly. I'm speaking in a very general sense, where all we know is that the terrain has changed somewhere, somehow, and the normals have been invalidated. Though I was thinking along the lines of any falloff area counting toward the total region of the spell's effect for terrain purposes.

> Another thought that enters my mind, how do you prevent spells creating broken terrain, e.g. holes in the mesh and such?

I assume you'd just allow raising and lowering vertices, probably cap changes to some reasonable value, and have some sort of minimum height, below which would be bedrock or perhaps water.

TBH, I find Magic Carpet a very impressive game for 1994 in retrospect. This was a time in the 16-bit era when even the Amiga was still very much alive (though only just...), and Magic Carpet had all kinds of terrain-deforming, castle-building weirdness. It was probably a bit ahead of its time, though, which may be why it suffered... haven't actually played it in many, many years tho, and even then only on PS1... but I digress.

> There is a lot to this subject; it depends heavily on what you want to achieve/simulate.

As a thought... I remember Tiberian Sun (which was 2D, granted, but if we wanted to achieve it in 3D...) had the ability to deform the terrain, e.g. by repeatedly bombarding the same spot with artillery.

Let's say we have a perfectly flat terrain mesh, and we just want to create some sort of simple crater or depression; it might not be necessary to create any new vertices, and the effect might be achieved just by lowering a few vertices by certain amounts.
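That crater idea, lowering a few existing vertices by varying amounts, can be sketched like this (my own illustration with a simple quadratic bowl; no new vertices are created):

```python
import math

def apply_crater(height, cx, cz, radius, depth):
    """Lower heightmap vertices within `radius` of (cx, cz) in place."""
    w, h = len(height[0]), len(height)
    for z in range(h):
        for x in range(w):
            d = math.hypot(x - cx, z - cz)
            if d < radius:
                t = 1.0 - d / radius           # 1 at center, 0 at rim
                height[z][x] -= depth * t * t  # quadratic bowl shape

grid = [[0.0] * 9 for _ in range(9)]
apply_crater(grid, 4, 4, 3.0, 2.0)
print(grid[4][4])  # -2.0 at the center
print(grid[0][0])  # 0.0, untouched outside the radius
```

After a change like this, only the normals inside (and one sample beyond) the crater's radius need recomputing.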

- In that case, would you just use a static VBO and simply give it new data, then recalculate the normals using whatever method you have of narrowing down which normals need recalculating? Or should the VBO be dynamic, or does it depend on how often you expect the terrain to change? (I see stream is another usage hint for data that changes often, but static doesn't necessarily seem to be the worst choice in every case where the data changes, if I'm reading it correctly.)

- If we wanted to increase the number of vertices to create a more detailed crater, I suppose that would involve complicated code in a geometry shader?

##### Share on other sites

> - In that case, would you just use a static VBO and simply give it new data, then recalculate the normals using whatever method you have of narrowing down which normals need recalculating? Or should the VBO be dynamic, or does it depend on how often you expect the terrain to change? (I see stream is another usage hint for data that changes often, but static doesn't necessarily seem to be the worst choice in every case where the data changes, if I'm reading it correctly.)

Recomputing smooth normals has to be done on the CPU (or in a geometry shader), since it isn't possible without adjacency information; extruding vertices in a vertex shader won't be sufficient if you also want smooth normals on the result. But then issuing/replacing the old vertices doesn't really demand anything more than a static buffer. In my experience, dynamic buffers render several times slower than static ones, and I have successfully used static buffers to stream 100K+ vertex meshes on the fly essentially instantly, even on cheap GPUs.

> - If we wanted to increase the number of vertices to create a more detailed crater, I suppose that would involve complicated code in a geometry shader?

Whether you use a geometry shader or CPU code depends on where you are bound (GPU or CPU), how big the geometry you compute is, and at what resolution. I'm not sure whether a geometry shader can add new vertices; I have very little experience with geometry shaders, and they are often not a better choice than the CPU, since on the CPU you can make good use of other cores (e.g. prepare the original terrain on another thread at startup, have it listen on a socket, send it the modification definition, use a non-blocking read on the socket from the main thread to peek for the computed geometry, then copy the geometry over to the main thread). You can also benefit from possibly knowing how a modification will happen before it is supposed to appear.

Geometry shaders might aid you with animated deformation (bending cloth, etc.).

Edited by JohnnyCode

##### Share on other sites

> Games like Magic Carpet or Populous the Beginning come to mind for me

As far as I know, those games didn't use normals or anything similar; they just used a predrawn tileset. There just wasn't any algorithm 20 years ago that could do that in realtime, so they faked it with predrawn images.
