Planet Generation Plans

nukomod


Hey all!

This time I'm going to write a little about my plans for generating planets. This entry won't go into any specific details of the terrain generation; it's just a very high-level overview of the basic framework. I don't think any of the ideas in here are new. The procedural generation community are a very clever bunch indeed! I'm always amazed how, when an idea pops into my head, I'm able to find an existing article that goes into fine detail on the subject. Anyway, let's crack on!

Heightmaps & Voxels

When it comes to generating terrain, an easy first choice is to generate a height map and build the terrain mesh from that. You can get some good-looking results quickly by layering noise functions and tweaking things here and there.

[Image: terrain generated from layered noise height maps]

Though these can give some nice results, they have limitations. For example, tunnels and overhanging terrain can't be represented in a simple height map. You also have to choose a projection scheme to map a 2D height map onto a sphere.

[Image: a height map projected onto a sphere]

There’s another approach to terrain generation using voxels, and that’s what I’m aiming to use. With voxels you can define all sorts of complex terrain, and you can define what’s hiding under the ground too, whether it's seams of coal, ore or underground rivers. Many games use voxel terrains to great effect, most famously Minecraft. Sign me up!

In Minecraft the voxels are arranged in a grid, keeping everything nice and simple… until you want to model a whole planet. Then you’d get something like this:

[Image: a cube-shaped voxel planet]

Though that does look really cool, I don’t want my worlds to be great big cubes floating in space, so what do I do? There are all sorts of solutions for building a spherical world from a voxel grid, but they seem to be full of difficult space warping, caveats and rendering complications. I’d rather not deal with those kinds of complications, so I’m going to try a different approach.

Instead of arranging the voxels in a grid I’m planning to arrange them as points on a geodesic sphere like this (imagine the vertices are voxel points):

[Image: a geodesic sphere]

It’s like taking the space-warping issues you’d otherwise have to handle at the cube’s edges and spreading them evenly across the entire surface of the sphere. Instead of being a difficult edge case it becomes a constant low-level annoyance, which keeps things consistent at all times. It will make the mesh generation a little more fun later too ;)

Voxel Planet Generation

The plan is to use an icosahedron as a starting point. Each vertex here is a voxel. This can be considered the lowest possible Level Of Detail (LOD) of a planet.


[Image: an icosahedron]

The space is chopped up into tetrahedrons running from the planet’s surface into its core. There is an extra vertex at the centre of the planet for this.

[Images: tetrahedra running from the planet's surface to its core]

These tetrahedrons can be subdivided through to the core of the planet as required.

[Image: subdivided tetrahedra]

These illustrations are somewhat misleading though, as this isn’t just generating a spherical mesh but a bunch of voxels, some of which may be empty space. The voxel points (vertices) hold information about what's actually in the space they occupy, whether it’s atmosphere, rock, magma etc. It is the job of the terrain generator to define what a voxel contains; this way I'm keeping a clear divide between the planet voxel generation code and the terrain generation code.

I have some uncertainties about how best to manage the subdivision of the planet’s voxels as required, but I’ll bash it out in a prototype.

Dynamic Changes

The game also needs to be able to make permanent changes to the terrain during play. Large explosions should create craters, and I want them to persist. To do this I will need to be able to address the voxels and save state changes about them. I'm not 100% sure of the approach for this yet either. One train of thought is to create an algorithm for the voxel generation that is able to assign each possibly generated vertex a unique 64-bit number. That would have no waste and allow the maximum number of voxels, and some napkin maths makes me believe it would give good detail on Earth-sized planets. Another approach could be some kind of hash of their XYZ or polar coordinates, though that would be more wasteful, so it would bring the total addressable voxels well below what a 64-bit unique ID could theoretically hold.

Ok that’s enough for now!

Comments

I've had some success applying the usual cube-to-sphere transformations to a cube where each face is a voxel shell with some predetermined thickness. It's not terribly elegant, but it's easy to get up and running. Your tetrahedral approach might avoid some of the distortion and the apparent grid; I'm interested to see how it turns out.


Very interesting.  I'll be curious to see how you realise the plan.


Looks neat. In fact I'm working on almost the exact same thing :)  However, I'm not sure you can divide tetrahedrons evenly. You get four new ones on the corners, and then you're left with this sort of odd center space. In your picture it looks to me like you divided up the faces of the original tetrahedrons, which you can do easily; chopping up the insides might be tricky. If you figure out a good way to do it, it would be way cool.

I decided to use prisms, just because you can make an octree out of them like you can with a cube. The downside is you can't go all the way to the center of your sphere, and the inner ones are a bit smaller than the outer ones. But if you build your geometry around some reasonable band of the world it shouldn't be a huge problem, and it also gets rid of the problem of geometry looking different in different corners of the world, as you would get if you generated your world in a simple subdivided cube.

I had an old version working a few years back, but it just used height maps built from simplex noise applied to a subdivided icosahedron. However, it did let you walk around a huge planet without generating any data in advance.

19 minutes ago, Gnollrunner said:

However, I'm not sure you can divide tetrahedrons evenly. You get four new ones on the corners, and then you're left with this sort of odd center space.

That odd sort of space in the centre is an octahedron. Decomposing the octahedron is tricky, but not impossible.


You guys are spot on about dividing up the space. I didn't realise at first that tetrahedrons don't tessellate 3D space; it would have been great if they did. The picture of the sphere where you can see inside is quite misleading, isn't it! Gnollrunner, I'm playing with some options now, including some with prisms, so it seems I may be going down the same route you did :) Do you have any links related to what you did?

1 hour ago, nukomod said:

[…] Do you have any links related to what you did?

I really don't, I'm just kind of winging it. I'm basically doing the same thing as marching cubes, except with prisms. The hard part is really the LOD transitions. I think surface nets or dual contouring handles that better, but then you get other issues to deal with. I haven't found any magic bullets. I'm using octrees with chunking such that all cells in a chunk that have any geometry must be at the same level AND neighboring chunks can have at most one level difference. Even with that, the edge conditions are a pain. I worked out an algorithm that seems to work, but I'm still debugging all the cases. I have table lookups with pointers to little subroutines that handle the various cases. It's a lot of code, but I'm trying to avoid a lot of ifs.

On 6/19/2018 at 6:03 PM, nukomod said:

I didn't realise at first that tetrahedrons don't tessellate 3D space; it would have been great if they did. […]

Yeah, I was working on the same kind of problem (I started with an octahedron, but...) and found the same issue. I don't have my notes at the moment, but I seem to remember splitting edges into three (or maybe four?) rather than two and thus getting a vertex in the center, which IIRC solved it. Maybe you can experiment along those lines; I'd love to know what you end up with in either case!


You could consider a more complex cube-to-sphere thingy, similar to the results a hexahedral meshing algorithm would produce; see the left of the image below. The only real advantage is that the interior does not tessellate to infinity, which matters if you want destruction of whole planets, for example. The increasing-scale issue at the surface remains the same.

[Image: hexahedral sphere meshing examples]

5 hours ago, JoeJ said:

You could consider a more complex cube-to-sphere thingy, similar to the results a hexahedral meshing algorithm would produce […]

The first one looks similar to what I'm doing, except you start out with a cube, so you end up with hexahedrons instead of prisms. In my case I won't have destruction of whole planets (no Death Star in my game :P) so I don't worry about the middle. You can easily use marching cubes, surface nets etc. with this, but you still have the same issues of LOD transitions, chunking and so forth, which in my experience are the hardest part.

I think if I wanted to support full destruction I might be tempted to just forget about trying to make the voxel orientation uniform around the world and just build everything on a regular grid.


Thinking of it, my proposal is pretty stupid: using a simple subdividing cube grid projected to a sphere has the same advantages at the core and the same number of singularities at the surface. (Ooops :) )

Edit: But there may be advantages for supporting Minecraft-like building eventually? Not sure; my world looks flat when I look out of the window and I'm happy with that :)

Edited by JoeJ


Just got a look at my old handwritten notes (dear lord in heaven....). I completely forgot that I started experimenting on something you miiiight have a use for. Basically, treating existing objects as point clouds and converting anything near the player to visible graphics via a standard marching cubes algorithm. If there are a lot of gold points in a section, that section is depicted as a gold thingie, and so on.

Just a thought. I'll go into my padded cell now and count the silence.....


As far as topology goes, one method I've found that looks sort of neat is as follows (see the sketch below):

1. Create a random line (for a sphere, a great circle splitting it into hemispheres); raise one random side of the line by +1 and lower the other side by -1.
2. Repeat the process until the desired shape is achieved (100-1000+ iterations).

The problem with this is that it sometimes leads to really unround, lopsided planets. Other than that it can look pretty dynamic. I did not come up with this idea, though, and I don't remember where I got it. You may also need to smooth or scale the result.

Edited by h8CplusplusGuru

4 hours ago, h8CplusplusGuru said:

that looks like it exactly 😃

I implemented this years ago. The main problem I have with it is that it doesn't easily let you generate local terrain at one point on the planet. While it's procedural, you end up having to store everything rather than just keeping the stuff you need around the camera. For me simplex noise works a lot better, since you can easily throw away and regenerate local data as needed to whatever level you want, and you can also customize your functions locally in infinite ways.

I suppose you could use it as a base function, storing your terrain at a predetermined level, and then apply simplex-noise stuff on top of it to fill in the details.


@Embassy of Time, you brought the idea of procedurally tessellated planetary generation to my attention years ago. I didn't really understand then, and perhaps now, the criteria of the obstacle. However, I've got an observation to make regarding attempts to create a procedurally-tessellate-able sponge-world without the use of voxels and why it may not be possible. The reason is the conflict between what a GPU can process and what the CPU can process. I'm going to kinda make shit up here because I don't have my info 100%, but from the bits I do know about CPUs and GPUs I'm going to make the following claim: in order to compute a procedurally-tessellate-able non-voxel sponge-planet you'd require GPUs that are designed to process information in higher dimensions, maybe 4 but maybe only 3. The problem is that modern GPUs, as I see it, are designed for quick processing of individual screen pixels. So whatever higher-dimensional processing the CPU is doing, the GPU is squishing it all into 2D and spitting out the results; therefore, you couldn't utilise the GPU for any higher-dimensional computations for the intended goal. You'd need to rely entirely on the CPU, and that would result in a quick, noticeable bottleneck that would prevent the result from looking real.

2 hours ago, Awoken said:

I'm going to claim that in order to compute a procedurally-tessellate-able non-voxel sponge-planet you'd require GPUs that are designed to process information in higher dimensions […] You'd need to rely entirely on the CPU, and that would result in a quick, noticeable bottleneck that would prevent the result from looking real.

Not sure I get your chain of logic here, but I am trying. I think the issue you worry about is, in essence, CPU capacity. The core information about a landscape perhaps needs to be made on the CPU, since its math is more flexible (I don't know how GPUs differ from CPUs in this regard, so I'm going from what you described as best I can). The issue becomes how to translate raw landscape information into actual graphics. But I think that's less of a bottleneck than you suggest. The CPU would end up with a polygon landscape plus some added data to enhance details, just like using graphics shaders to build upon basic graphics data. While it requires some careful math, I doubt it would make or break the process. But I have no definitive evidence for that.

