studentTeacher

Spherical Worlds using Spherical Coordinates


Hello everybody! I have a question about creating spherical planets, and I thought that I'd bounce a couple ideas off you all before trying it out.

 

I am working on a space-sandbox RPG, and part of this game will be realistic space travel: LOD is handled so that you physically leave a planet, fly to a new one, and land, with no scene change -- only a change of LOD as you leave or approach a planet. I want the planets smaller than Earth, but when you're on the surface the "roundness" of the planet should be barely noticeable, if at all.

 

My question is this: has anyone used or thought about spherical coordinates to procedurally create LOD and terrain on planets? I know about the cube-map approach, but I also know about the distortions and such that occur at the corners, on top of other problems as well. That's why I've been toying with the idea of a spherical planet generation that isn't a cube-map! What do you guys think? Do you see any issues that might arise that I just can't foresee?

 

Right now, I think it might be a little too computationally expensive to compute planets this way, but I still want to look into it if there aren't too many issues that might arise. I also need to figure out how to input the parameters (phi, theta, r) -- would it work like the (x, y, z) inputs to 3D Perlin noise, or some other type of function? That's part of the fun. :)
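One common trick (just a sketch of what the inputs could look like, not a claim about any particular library): rather than feeding (phi, theta, r) into the noise directly -- which reintroduces seams and pole pinching -- convert the direction to a point on the unit sphere and sample ordinary 3D noise there. Here `noise3` stands in for whatever Perlin/simplex implementation you end up using:

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """theta: polar angle from +z, phi: azimuth (physics convention)."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

def terrain_height(theta, phi, base_radius=1000.0, amplitude=50.0, noise3=None):
    # Sample 3D noise at the unit-sphere point for this direction, so the
    # height field is seam-free and pole-free. `noise3` is a stand-in for
    # any 3D noise function (Perlin, simplex, ...).
    p = spherical_to_cartesian(1.0, theta, phi)
    n = noise3(*p) if noise3 is not None else 0.0
    return base_radius + amplitude * n
```

The payoff is that two directions that are close on the sphere are always close in noise space, no matter how their (theta, phi) values wrap.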

 

Thanks,

ST


However you define "distortion", it's going to be worse at the poles with spherical coordinates than in the cube map.
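To put a rough number on that: the area of a latitude/longitude cell scales with the cosine of latitude, so equal angular steps give wildly unequal patches near the poles. A quick back-of-the-envelope check:

```python
import math

def patch_area(lat_deg, dlat_deg=1.0, dlon_deg=1.0, radius=1.0):
    # Area of a small lat/lon cell on a sphere is approximately
    # R^2 * cos(lat) * dLat * dLon (angles in radians).
    lat = math.radians(lat_deg)
    return radius ** 2 * math.cos(lat) * math.radians(dlat_deg) * math.radians(dlon_deg)

# A 1 x 1 degree patch at 89 degrees latitude is under 2% the area of
# one at the equator -- far worse than the cube map's corner squash.
ratio = patch_area(89.0) / patch_area(0.0)
```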

Exactly what actual concrete design/development have you done on this? Because the difficulty in this task simply isn't in creating the function used to represent/generate the world, but rather lies in the abstraction that you use to translate the outputs of this function into concrete vertex buffers to feed to the GPU. Therein lies the difficulty. Sure, you could construct a noise function that takes (phi, theta, r) as an input. But what then? How is the underlying geometry structured? How are your LoDs handled? How are they stitched together?
 
The fundamental difficulty lies in sphere tessellation. The most common forms of sphere tessellation are (pictured here): cube map (top), uv sphere (left) and icosphere (right).

[image: cube map (top), UV sphere (left), and icosphere (right) tessellations]

You can see that in the cube map there are the distortions of patch size/shape that increase as you draw nearer the corners. In the uv sphere representation (which most closely approximates polar coordinates), you still see distortion of patches, this time as you near the north and south poles. The icosphere is the only one of the three that has non-distorted, uniform patch sizes, and if you can figure out techniques for dealing with LoD on the triangular patches, this is probably your best bet for side-stepping distortion issues. Simply thinking you can use polar coordinates to magically eliminate distortion just won't work.
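For reference, building an icosphere is not much code: start from an icosahedron and repeatedly split each triangle into four, pushing the new edge midpoints back onto the unit sphere. A rough Python sketch (illustrative only -- it caches edge midpoints to weld shared vertices, but does nothing about LoD):

```python
import math

def icosphere(subdivisions=1):
    """Return (vertices, faces) of a unit icosphere."""
    t = (1.0 + math.sqrt(5.0)) / 2.0
    def norm(v):
        l = math.sqrt(sum(c * c for c in v))
        return tuple(c / l for c in v)
    # 12 vertices of an icosahedron, projected onto the unit sphere
    verts = [norm(v) for v in [
        (-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
        (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
        (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    for _ in range(subdivisions):
        mid = {}  # edge -> midpoint index, so shared edges reuse one vertex
        def midpoint(i, j):
            key = (min(i, j), max(i, j))
            if key not in mid:
                a, b = verts[i], verts[j]
                verts.append(norm(tuple((a[k] + b[k]) / 2.0 for k in range(3))))
                mid[key] = len(verts) - 1
            return mid[key]
        new_faces = []
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            # one triangle becomes four
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces
    return verts, faces
```

Each subdivision quadruples the face count (20, 80, 320, ...), and the triangles stay close to uniform in size everywhere on the sphere.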

Personally, I think your best bet is to ditch the spherical basis and go with a volumetric approach. You can use dual contouring to stitch between LoD levels, and the shape of the planet is 100% determined by the shape of the function used to generate it, rather than being forced by the underlying abstraction onto a sphere. This means that if you want planets shaped like mountainous donuts, then no problem. The techniques underlying both the sphere planet and Planet Donut are exactly the same. Plus, a volumetric approach works out-of-the-box with a standard 3D noise function basis, so you don't have to do any weird tricks in constructing your basis.
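To make the "shape comes from the function" point concrete: with a volumetric approach the planet is just the zero isosurface of a density field, and swapping sphere for donut is a one-line change of base shape. A sketch (the `noise3` argument stands in for any 3D noise; the isosurface extraction -- dual contouring, marching cubes -- is a separate step):

```python
import math

def sphere_density(x, y, z, radius=100.0):
    # Negative inside, positive outside; the zero level set is the planet shell.
    return math.sqrt(x * x + y * y + z * z) - radius

def torus_density(x, y, z, major=100.0, minor=30.0):
    # Donut planet: signed distance to a circle of radius `major` in the xy-plane.
    q = math.sqrt(x * x + y * y) - major
    return math.sqrt(q * q + z * z) - minor

def planet_density(x, y, z, base, noise3=lambda *p: 0.0, amplitude=8.0):
    # Perturb any base shape with 3D noise to get terrain, caves, overhangs.
    return base(x, y, z) + amplitude * noise3(x * 0.01, y * 0.01, z * 0.01)
```

The mesher never knows or cares whether `base` was a sphere or a torus -- exactly the "shape is 100% determined by the function" property.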


I think it's more a matter of using phi and theta to get a different "r" for the direction they define -- that r defines the height of the world along the ray emanating from the sphere's center. Possibly some other value (r) could allow for 3D noise on the terrain, creating overhangs and such, just like when noise is used with a 3D grid and x, y, z coordinates. The main thing I'm going for here is to create spherical planets without distortion -- something the icosphere might solve; the UV sphere does a *better* job than the cube map, and the cube map does a not-so-good job.

 

As for what I've dabbled in, I've dealt with 3D noise and dual contouring, and I've applied them to cube maps to make planets. Cube maps haven't made me happy enough, given the distortion that occurs at the corners of the cube. You mention that a volumetric approach will rely on the shape of the function used to generate it... can you elaborate on that a little? What I think it means is this: I use a linear gradient to define the ground versus the sky, and perturb this surface with noise to generate terrain, overhangs, etc. -- this creates terrain based on a flat world. If I use a spherical gradient instead, would that be the "shape of the function" you're talking about? I would perturb the surface made from the spherical gradient?
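To spell out the two gradients I mean (a sketch only -- `noise3` stands in for whatever 3D noise I'd actually use):

```python
import math

def flat_density(x, y, z, ground_y=0.0, noise3=lambda *p: 0.0, amp=10.0):
    # Linear gradient: negative below the ground plane, positive above,
    # with 3D noise perturbing the surface (overhangs come for free).
    return (y - ground_y) + amp * noise3(x, y, z)

def spherical_density(x, y, z, radius=100.0, noise3=lambda *p: 0.0, amp=10.0):
    # Spherical gradient: same idea, but the "ground" is the shell |p| = radius.
    return (math.sqrt(x * x + y * y + z * z) - radius) + amp * noise3(x, y, z)
```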

 

--ST
