
## random midpoint displacement self-intersection problem


2 replies to this topic

### #1 carangil (Members)

Posted 16 November 2012 - 04:30 PM

I'm having trouble with detecting self-intersecting geometry. Sorry for the long prelude, but there are a lot of points to cover before I get to the actual problem.

I have a very simple procedurally-generated planet renderer:
• Each patch of terrain is a quad patch at a fixed tessellation level (such as 33x33 vertices).
• Patches are arranged in a quadtree; when the average size of a quad in a patch takes up too many pixels, the patch is split into 4 smaller patches; likewise, when polygons get too small, a patch and its siblings are removed and the parent node is put back into the render list.
• The root level of the quadtree is simply a 6-faced cube; each face is one of these patches that has been deformed to fit on a sphere.
• Yes, at the root level the points are not evenly distributed; that's a common problem when squishing a cube into a sphere. But as you get closer, certain areas will split before others, and it's not long before things even out. I don't really care that things are a little uneven when zoomed out; it's much less annoying than dealing with polar coordinates!

This is what works:
1. Plain sphere: When it's time to split a patch, I generate a more-detailed patch. In the simple case of a sphere with no added detail, I compute the points on the sphere exactly for the new patches. This works fine; I get a perfect sphere I can zoom into all I want.
2. Bilinear filtering: Instead of computing the points from the sphere, I cut the patch into 4 subpatches (so 33x33 becomes 4x (17x17)), and then I up-scale the 17x17 patches to 33x33 by linearly interpolating the in-between vertices. This works fine when detailing the sphere; the sphere does not become smooth (because it started as a polygonal model), but it does become more tessellated.
3. Bilinear filtering with random detail: Same as above, but I add a random bit of vertical (relative to the center of the sphere) displacement to the interpolated vertices. This results in rocky terrain that gets more and more detailed as you get closer to it. This is what I want. Each successive generation, I pass down a lower 'detail scale' value, so that the deeper the subdivision, the smaller the random displacement details are. So each generation, I divide the detail height value by 2. This seems to scale properly. I don't have any problems with this.
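To make the subdivision scheme concrete, here is a minimal sketch of steps 2 and 3 combined: linearly interpolate a patch up to the next tessellation level, then jitter only the new vertices radially, halving the amplitude each generation. The function name and the NumPy representation are my own assumptions, not the poster's actual code.

```python
import numpy as np

def refine_patch(patch, detail_scale, rng):
    """Upscale an (n, n, 3) vertex grid to (2n-1, 2n-1, 3) by linear
    interpolation, then jitter only the newly created vertices radially
    (away from a sphere centered at the origin) by +/- detail_scale.
    Hypothetical sketch of the scheme described in the post."""
    n = patch.shape[0]
    out = np.zeros((2 * n - 1, 2 * n - 1, 3))
    out[::2, ::2] = patch                                       # keep originals
    out[1::2, ::2] = 0.5 * (patch[:-1, :] + patch[1:, :])       # row midpoints
    out[::2, 1::2] = 0.5 * (out[::2, :-1:2] + out[::2, 2::2])   # column midpoints
    out[1::2, 1::2] = 0.25 * (patch[:-1, :-1] + patch[1:, :-1]
                              + patch[:-1, 1:] + patch[1:, 1:])  # face centers
    # Mask selecting only the interpolated (new) vertices.
    new = np.ones((2 * n - 1, 2 * n - 1), dtype=bool)
    new[::2, ::2] = False
    radial = out / np.linalg.norm(out, axis=2, keepdims=True)
    out[new] += radial[new] * rng.uniform(-detail_scale, detail_scale,
                                          size=(new.sum(), 1))
    return out, detail_scale / 2   # next generation uses half the amplitude
```

Because the displacement is purely radial here, the result stays a heightfield over the sphere; that is exactly the "#3" case that works.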
This is what does not work, and this is where I have a problem:

The problem with #3 above, is that the terrain is little more than a heightfield. It can become more detailed, but each point on the sphere has no other points above it. It's kind of 'standard.' I wanted to go a step further than what all the other terrain generators do. So, instead of displacing a random height in the direction away from the center of the sphere, I displace along the point's normal vector. The result is more realistic; sometimes detail will cut into the side of a cliff. It gives the surface some undercuts. Sometimes the undercuts have undercuts. If you move the camera into these undercut areas, some places are almost like mini-caves. For the most part, it looks pretty cool.
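The change from radial to normal-vector displacement can be sketched as follows. The normal estimation via grid tangents is an assumption on my part; the post doesn't say how normals are computed.

```python
import numpy as np

def displace_along_normals(grid, detail_scale, rng):
    """Displace each vertex of an (n, n, 3) grid along its local surface
    normal rather than the radial direction, which is what allows
    overhangs and undercuts to form. Hypothetical sketch."""
    # Approximate per-vertex tangents by finite differences along the grid.
    du = np.gradient(grid, axis=0)   # tangent along rows
    dv = np.gradient(grid, axis=1)   # tangent along columns
    normals = np.cross(du, dv)
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    offsets = rng.uniform(-detail_scale, detail_scale,
                          size=grid.shape[:2] + (1,))
    return grid + normals * offsets
```

Once the normal tilts away from the radial direction (on a cliff face, say), repeated displacement can push one part of the surface through another, which is the self-intersection problem described next.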

The problem I am having is that now that I have given the surface this 'extra' degree of freedom it never had before, there are some areas where the terrain will intersect itself. In the case of a cave-like area, it's fine, I don't care; the ceiling is touching the floor or something, and it's a place where the player can't go past. The barrier doesn't look bad. But it leads to weirdness at the tops of mountains: instead of coming to a point, imagine a column where both sides are undercut enough that they pinch past each other, leaving the top of the column connected to the bottom by an infinitely thin piece of rock; something completely impossible. Is there any way to prevent, hide, or mitigate this?

I can post images later tonight probably.

### #2 JTippetts (Moderators)


Posted 16 November 2012 - 10:56 PM

I have run into exactly the situation you are describing.

To my knowledge, there is no easy way to prevent it. Fractal functions (either subdivision-based such as midpoint displacement, or implicit such as fractal noise) are not "self aware". There is no easy way for point (x,y) to make adjustments to itself to avoid intersecting with the displacement provided by point (x1,y1). The points are mutually unaware of each other, effectively speaking; at least in this context. The problem is compounded by the fact that the heightmap plane is topologically a planar membrane that is being stretched and forced into self-intersecting forms.

There are some things that you can do to try to mitigate the problem. Here is an image of a distorted spire, formed as a cone displaced in X and Z by a couple of fractal methods:

You can see it has a lot of the deviant behavior you describe, and that behavior only gets worse as the scale of the displacement is increased. In this next one, I calculated a scaling factor as (MaxHeight-PointHeight)^2, and used that to scale the amount of X and Z displacement applied to the geometry. The higher up the spire a point is, the smaller the scale and thus the smaller the displacement.
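That scaling factor can be written down directly. The post gives the factor as (MaxHeight-PointHeight)^2; the normalization by MaxHeight below is my own assumption, added so the factor lies in [0, 1], and the function names are hypothetical.

```python
def lateral_scale(point_height, max_height):
    """Damping factor for X/Z displacement: the higher up the spire a
    point is, the smaller the factor, reaching 0 at the very top.
    Normalization by max_height is an assumption; the post only gives
    (MaxHeight - PointHeight)^2."""
    t = (max_height - point_height) / max_height
    return t * t

def displace_xz(x, z, dx, dz, point_height, max_height):
    """Apply fractal X/Z displacement (dx, dz) scaled by altitude."""
    s = lateral_scale(point_height, max_height)
    return x + dx * s, z + dz * s
```

Points near the base keep almost the full displacement, while the peak is barely perturbed, which suppresses the pinched-column artifacts at summits.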

This is a bit of a hack to avoid the weird mountaintops, and you still get plenty of weird folding on lower-elevation peaks and ridges.

Basically, once you try to introduce overhangs and other elements of a 3D nature, then the displaced heightmap plane is no longer a sufficient abstraction to work with, and you need to shift your thinking to a volumetric method. If you think of your ground surface as simply the threshold between solid ground and sky, and don't try to force it into a heightmap representation by displacing a 2D plane, then it doesn't matter how the volume is folded and distorted by the displacement functions; the generated mesh geometry will not be self-intersecting. An example:

In places where the threshold of the ground is pushed through itself, it might bore holes, and it still might result in some unrealistic geometry (floating chunks, etc...) but at least the mesh geometry itself isn't weirdly folded and self-intersecting. And the strange formations can be controlled by the parameters of the system, such that floating islands are reduced, if not eliminated altogether.
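The volumetric idea above can be sketched as a signed density function: ground is wherever the density is positive, sky wherever it is negative, and the surface is the zero isosurface you would extract with something like marching cubes. The noise function below is a cheap deterministic stand-in (a real implementation would use Perlin or simplex noise), and all names here are illustrative.

```python
import math

def fbm_stub(x, y, z):
    """Cheap deterministic stand-in for fractal noise, bounded roughly
    in [-0.75, 0.75]. A real renderer would use Perlin/simplex fBm."""
    return (math.sin(1.7 * x + 0.3 * z) * math.cos(2.1 * y)
            + 0.5 * math.sin(3.9 * x * y + z)) * 0.5

def density(x, y, z, planet_radius=10.0, displacement=1.5):
    """Signed density for a noisy planet: > 0 inside the ground,
    < 0 in the sky. However the noise folds this field, extracting
    the zero isosurface cannot produce a self-intersecting mesh."""
    r = math.sqrt(x * x + y * y + z * z)
    return (planet_radius - r) + displacement * fbm_stub(x, y, z)
```

The key point is that self-intersection simply has no meaning for a scalar field: where the displaced "surface" would have pushed through itself, the field just merges into one solid region (or bores a hole), and the extracted mesh stays manifold.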

### #3 carangil (Members)


Posted 18 November 2012 - 01:42 AM

Thanks for your response. Your pictures point out pretty well the issues I'm having. I was going to post pics of my own last night, but got caught up in stuff, and your pics illustrate the problem well enough for anyone else looking at the thread. What kind of algorithm did you use for the last picture? My current algorithm is based on 2D grids. I have a square 'sheet' of n-by-n vertices connected together into triangles. I can freely move each vertex in x/y/z, and each sheet can specify which sheets are its neighbors (so that two adjacent sheets can see each other's edges and the neighbors' vertices can line up, eliminating gaps). Sheets can be split and detailed as needed at runtime quite fast, since a sheet is just plain old polygon data.

To move to a volumetric approach, I'm assuming some kind of voxel structure (probably octrees), and then I would have to either render the octree directly OR run marching cubes on it and then render the polygons. It sounds like it's getting too heavy for runtime. Unless anyone has a better, easier volumetric approach?
