Generating Height Maps for Quad-tree Spherical Terrain


I've been working on a project for a while that involves planetary/spherical terrain. I plan on using height maps to displace the surface vertices and for generating normal maps for the terrain as well. My spherical terrain setup consists of a quadrilateralized spherical cube - a cube morphed into a sphere (see the first image below) - that is subdivided into smaller patches by a quad-tree subdivision algorithm (see the second image below). To generate the heights for the height map I will be using a 3D Perlin noise function. However, I don't understand what positions I plug into the Perlin noise to get the heights for the height map (each terrain patch will have its own height map), nor how to afterwards apply the height map to the mesh to displace the vertices.

[Image: 27150-planets-1-cubemap.png]

[Image: 27152-quadtree+qlsc.png]

My end goal is to generate a height map with a size of 192x192, but I have to make some optimizations before I can do that. The vertex "resolution" for each terrain patch is 16x16 quads (17x17 = 289 vertices per patch); I don't know if this helps or not, though. To reiterate, my problem is that I don't know how to generate a height map (texture) using a 3D Perlin noise function so that the height map can then be mapped to the sphere to displace vertices. Anybody have any experience with this? Could someone shed some light on what I'm missing here or point me in the right direction? I've been searching for weeks and haven't found anything really helpful. I know it is some sort of U,V to X,Y,Z mapping, but I don't know how this would work for just the quadrilateral patches of the sphere, rather than the entire thing or one of the six faces of the cube.


Map a texture to each face of the cube; this gives you a U,V texture coordinate pair for each X,Y,Z vertex. Then, for each face, iterate over the U,V space and, for each pixel, pass the (normalised) X,Y,Z coordinates into the 3D noise function.
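Something along these lines, as a rough sketch (C#, Unity-style; "size", the face basis vectors, and Noise3D are placeholders for whatever you're actually using, not working project code):

// Fill one face's heightmap by walking U,V space and sampling 3D noise.
// faceRight/faceUp/faceNormal define the cube face; Noise3D stands in for
// your 3D noise function and is assumed to return a value in [0, 1].
Texture2D heightmap = new Texture2D(size, size, TextureFormat.ARGB32, false);
for (int y = 0; y < size; y++)
{
    for (int x = 0; x < size; x++)
    {
        float u = (float)x / (size - 1) * 2f - 1f;   // [-1, 1] across the face
        float v = (float)y / (size - 1) * 2f - 1f;
        Vector3 cubePos = faceNormal + u * faceRight + v * faceUp; // point on the cube face
        Vector3 spherePos = cubePos.normalized;                    // project onto the unit sphere
        float height = Noise3D(spherePos);
        heightmap.SetPixel(x, y, new Color(0f, 0f, 0f, height));   // store the height in alpha
    }
}
heightmap.Apply();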

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

One thing to make sure of is to use simplex noise as your basis function, which is more stable in 3 dimensions than old-school Perlin noise. There are some great journals on this site that go back a ways and discuss procedural terrain in detail.

Also, don't tessellate your cube faces uniformly. When you normalize the points onto the sphere, it distorts the tessellation and you end up with faces that are pretty far from uniform. If you instead tessellate your faces according to the tangent function over the 90 degrees each side spans (0,0 being at the center of each side), then after you normalize, your sphere will appear much more uniformly tessellated - see the sketch below.
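A rough sketch of the idea in C# (Unity's Mathf; i runs over the vertices along one side of a face, n is the number of segments - illustrative only):

// Uniform spacing in [-1, 1] - what you get from naive subdivision:
float LinearCoord(int i, int n)
{
    return (float)i / n * 2f - 1f;
}

// Tangent-adjusted spacing: space the *angles* uniformly over [-45, +45] degrees
// and take the tangent. Vertices end up further apart near the face edges in
// planar coordinates, which becomes roughly uniform once projected onto the sphere.
float TangentCoord(int i, int n)
{
    float angle = ((float)i / n * 2f - 1f) * (Mathf.PI / 4f);
    return Mathf.Tan(angle);
}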

Map a texture to each face of the cube; this gives you a U,V texture coordinate pair for each X,Y,Z vertex.

How does that work?

EDIT: Specifically, how do you get the X,Y,Z vertices from the U,V texture coordinates? I'm not too familiar with texture mapping.


You go the other direction - assign the correct U,V coordinates to each vertex.

It's easiest if you start with just the corner vertices of the cube - it should be pretty easy to construct the 6 faces from those vertices. Then, for each face, assign the right U,V coordinates to the corners, and you can subdivide the face as needed, interpolating both positions and texture coordinates - something like the sketch below.
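For a single face, the subdivision might look roughly like this (C#; c00..c11 are that face's four corner positions and the vertices/uvs arrays are assumed to exist - a sketch, not drop-in code):

// Subdivide one cube face, interpolating positions and U,V together.
int n = 17; // vertices per side, e.g. for a 16x16-quad patch
for (int j = 0; j < n; j++)
{
    for (int i = 0; i < n; i++)
    {
        float u = (float)i / (n - 1);
        float v = (float)j / (n - 1);
        // bilinear interpolation of the four corner positions
        Vector3 top    = Vector3.Lerp(c00, c10, u);
        Vector3 bottom = Vector3.Lerp(c01, c11, u);
        vertices[j * n + i] = Vector3.Lerp(top, bottom, v); // still on the cube; normalize later for the sphere
        uvs[j * n + i]      = new Vector2(u, v);
    }
}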

(but really, you need to go learn about texture mapping before tackling a terrain engine)

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

It took a while, but I finally got it working at full functionality, although some optimizations are in order (on average 14 fps when moving around the surface). At the moment all of the height map generation and noise calculation is being done on the CPU, but I do plan to utilize the GPU for most of that eventually. My system for generating the heightmaps takes the four corner vertices of a patch and interpolates between them, generating all of the necessary positions for the heightmap; UV's are generated in the same manner so they correspond to the generated positions. The UV's are then converted into texture space, while the generated positions are plugged into the noise function, which returns a value between 0 and 1. This returned value becomes the alpha component of the pixel at those U,V coordinates in the texture. The texture is then read by iterating through an array of the U,V coordinates, returning the alpha component, and applying the height to the actual mesh vertices.
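In rough, simplified C# the per-patch pass looks something like this (names simplified, and Noise3D stands in for my noise function - not my exact code):

// For each texel: interpolate the patch corners, project onto the sphere,
// sample the noise, and store the result in the alpha channel.
for (int y = 0; y < mapSize; y++)
{
    for (int x = 0; x < mapSize; x++)
    {
        float u = (float)x / (mapSize - 1);
        float v = (float)y / (mapSize - 1);
        Vector3 a = Vector3.Lerp(corner00, corner10, u);
        Vector3 b = Vector3.Lerp(corner01, corner11, u);
        Vector3 pos = Vector3.Lerp(a, b, v).normalized; // position on the unit sphere
        float h = Noise3D(pos);                         // value in [0, 1]
        heightmap.SetPixel(x, y, new Color(0f, 0f, 0f, h));
    }
}
heightmap.Apply();
// The mesh is then displaced by reading the alpha back at each vertex's U,V.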

The system allows heightmaps of arbitrary size, which will also let me generate normal maps for lighting whenever I get to it. Some wireframe images of the results can be found below.

[Image: RidgedMultifractals1.png]
Ridged multifractals were used here for this terrain.
[Image: RidgedMultifractals2.png]
I also used ridged multifractals here, except inverted.
My next step is to generate normal maps and vertex colors. I'll post the results here once I get them working. :)
EDIT: Normal maps weren't hard at all, but a problem with the mesh tangents completely ruins the lighting, which seems like it will take more time to sort out.

Would there be a way to calculate tangents for the terrain patches using the height map data or even the normal map? I've found some things involving terrain, but they all relied on "up" being the positive-y direction. For reasons, I can't use object-space normal maps, which is a bit of a headache, as I have to calculate mesh tangents for tangent-space normal mapping.


See my old thread about the topic (apologies for the missing images).

TL;DR: it's not possible to have a continuous tangent space over a sphere (the hairy ball theorem). You either use object-space normal maps, or you cheat and try to hide the discontinuities.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

So, in order to generate object-space normal maps, I tried storing the terrain positions, after the height has been applied, in the heightmap as the r,g,b components (the height value is the alpha). From there I find the edges between the centre pixel/position and its four neighbours, take the cross products of the edges in pairs, sum the results, and normalize to get the final normal. Here is the code (C#) to better explain:


// Sample the centre pixel and its four neighbours and convert them back
// into object-space positions (height already applied).
Vector3 p0 = PixelToPos(heightmap.GetPixel(x - 1, y), posArr); // left (ignore the posArr thing)
Vector3 p1 = PixelToPos(heightmap.GetPixel(x + 1, y), posArr); // right
Vector3 p2 = PixelToPos(heightmap.GetPixel(x, y - 1), posArr); // below
Vector3 p3 = PixelToPos(heightmap.GetPixel(x, y + 1), posArr); // above
Vector3 p4 = PixelToPos(heightmap.GetPixel(x, y), posArr);     // centre

// Edges running from each neighbour towards the centre position.
Vector3 e0 = p4 - p0;
Vector3 e1 = p4 - p1;
Vector3 e2 = p4 - p2;
Vector3 e3 = p4 - p3;

// Cross the edges in pairs, then sum and normalize to get the normal.
Vector3 n0 = Vector3.Cross(e0, e2);
Vector3 n1 = Vector3.Cross(e1, e3);

Vector3 n = (n0 + n1).normalized;

// Write the object-space normal into the normal map.
normalmap.SetPixel(x, y, new Color(n.x, n.y, n.z, 1.0f));

This produces a normal map that is mostly black with specks of color in it - it doesn't work. Is there another way to generate the object-space normal maps, or are there just some errors I overlooked in my code?

EDIT: Um... help? I tried picking apart other parts of my code to check for errors; to me it all seems as if it should work. By the way, my code was "inspired" by the "formula" for normal map calculation found in this paper (my code is only somewhat different because I tried an exact implementation of the formula in the paper and it didn't work): http://www.student.itn.liu.se/%7Echral647/tnm084/tnm084-2011-real-time_procedural_planets-chral647.pdf

EDIT 2: Alright, so I found a couple of errors in my code that I've resolved; the code above has been updated to reflect that. I also found and resolved some problems with my PixelToPos function - in short, the function was returning positions that were way off (not being able to use ARGB Float32 textures is a real nightmare for storing positions), so it easily could have been the main problem, but unfortunately it wasn't. I've pretty much fixed all of the problems that I recognized, and this is my result:
[Image: Object%20Space%20Normal%20Maps%20Gone%20]
EDIT 3: I'm not trying to spam or anything, but I fixed a few more problems and ran a few more tests. Anyhow, I found that there is a weird texture tiling issue with the normal maps that was causing them to be used for rendering in a strange way. I've sort of fixed it and now have this (which at least looks like... something):
[Image: Tiling%20Issue.png]
The tiling issue causes the texture to tile differently depending on the quad-tree depth of the terrain patches. The normal map calculations are still a bit off because the PixelToPos code doesn't return exact positions, but at least I've figured out what the problems are. Also, the black lines around every patch were expected, as I don't have any special cases for the edge pixels in the heightmap/normal map.

So... it took a really long time to get everything to work with 8-bit-per-channel ARGB textures, but this is my result:

[Image: Um...Yay.png]

Finally, time for texturing...

PS: Thanks to swiftcoder and y2kiah for the help! :)

