
I-Novae, how do they do Earth sized planets?


I have been working on solving the issues involved with planet-sized terrain meshes for a while now, and it appears that I-Novae has solved them:

 

The main issues are precision and jitter when using large coordinate values on the GPU, and morphing LoD detail smoothly into the next LoD level.
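To make the precision problem concrete, here is a minimal sketch (the helper name is mine, not from any engine) showing the gap between representable 32-bit floats at Earth's radius:

```cpp
#include <cassert>
#include <cmath>

// Smallest representable step of a 32-bit float near a given value.
// At Earth's radius (~6,371,000 m) this comes out to 0.5 m, which is
// far coarser than the sub-metre precision terrain vertices need.
float floatStepAt(float value) {
    return std::nextafterf(value, INFINITY) - value;
}
```

So `floatStepAt(6371000.0f)` is 0.5f: once world coordinates reach planetary scale, any two vertices closer than half a metre collapse onto the same float value, which is exactly where the jitter comes from.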

 

It appears that I-Novae is using some sort of quadtree cubesphere, but the leaf-node meshes seem to be generated on the GPU so that they can morph smoothly between LoD levels.
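One common way such GPU-side morphing is done is CDLOD-style blending, where each vertex of the finer grid lerps toward the position it would have on the coarser parent grid as the camera recedes. This is only a guess at the technique, not a confirmed description of I-Novae's engine:

```cpp
#include <algorithm>
#include <cassert>

// 0 = full detail, 1 = fully morphed onto the coarser parent grid.
// morphStart/morphEnd bracket the distance band where blending happens.
float morphFactor(float distToCamera, float morphStart, float morphEnd) {
    float t = (distToCamera - morphStart) / (morphEnd - morphStart);
    return std::clamp(t, 0.0f, 1.0f);
}

// Lerp a vertex attribute (position or height) toward its parent-grid value.
float morphVertex(float fine, float coarse, float factor) {
    return fine + (coarse - fine) * factor;
}
```

Because the factor is a continuous function of camera distance, the transition between LoD levels never pops: halfway through the band (`morphFactor(150, 100, 200)` returns 0.5) each vertex sits exactly between its fine and coarse positions.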

 

I'm not sure if they are even doing that, but I also don't see how it is possible on an Earth sized planet.

 

If anyone has any wisdom to share on the topic, I'd love to hear it.


I-Novae's version looks quite artificial, not to mention that the morphing is clearly apparent.

 

Outerra (http://www.outerra.com/), in my opinion, does a much better job - and I think if you browse the forums you'll get a pretty good idea of how they achieved it. Don't forget to download their tech demo.


Why would a planet-sized mesh be a problem? They are probably using some real-time procedural mesh generation technique, seeded from a texture-based or hand-painted scheme that defines how the meshes are generated (mostly mountains, craters, sand dunes, etc.). Couple this with realistic ground shaders and no vegetation. Surface voxel-based terrain generators have been around for a long time; slap on nice-looking shaders and a proper atmospheric lighting model for the sky, and you've got yourself a wallpaper-worthy engine.

ddn3

I believe the biggest challenge with a planet-sized mesh is solving for vertex positions on the GPU: those positions won't fit into a float with metre/sub-metre precision. And without solving for positions on the GPU, morphing between LoD levels, or splitting/unsplitting meshes on the GPU, becomes difficult (I haven't found a simple solution).

I'm going to check out the outerra.com demo and forums, but for now I'm slowly implementing a quadtree cubesphere that solves this problem by generating the vertex positions on the CPU in double precision and storing them in a float texture, so that I can morph between LoD levels on the GPU.
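A minimal illustration of why CPU-side doubles rescue GPU-side floats here (the coordinates are made up): subtract a per-patch origin in double precision, and the small remainder fits comfortably in a float.

```cpp
#include <cassert>
#include <cmath>

// Compute a vertex coordinate relative to its patch origin in double
// precision on the CPU, then store the small remainder as a float
// (e.g. in a float texture). The GPU only ever sees small values.
float patchRelative(double worldCoord, double patchOrigin) {
    return static_cast<float>(worldCoord - patchOrigin);
}
```

Casting 6371000.123456 directly to a float rounds it to 6371000.0 (the fractional part is lost entirely), while `patchRelative(6371000.123456, 6371000.0)` keeps the 0.123456 remainder to sub-micrometre accuracy.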



I'm slowly implementing a quadtree cubesphere

That's the approach I'm using at the moment. It's non-GPU so it is easier for me to understand.

Even with the few "pops" of the terrain, that video looks AWESOME!

mark ds

I found this, which is very interesting for creating noise functions, though it's something I was already planning on doing: when a node splits, just create a single new octave for it, interpolate the point's location from the parent, then add the new octave of noise to the new child. http://forum.outerra.com/index.php?topic=245.0
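The split-and-add-an-octave idea can be sketched like this (a toy 1D illustration; the `noise` stand-in and the halving/doubling constants are assumptions, where a real engine would use Perlin or simplex noise):

```cpp
#include <cassert>
#include <cmath>

// Toy deterministic "noise" stand-in (hypothetical, for illustration only).
float noise(float x) { return std::sin(x * 12.9898f); }

// When a quadtree node splits, a new child midpoint's height is the
// interpolation of its parent's endpoint heights, plus one new octave:
// amplitude halves and frequency doubles at each deeper level.
float childHeight(float h0, float h1, float x, int level) {
    float interpolated = 0.5f * (h0 + h1);    // parent contribution
    float amplitude = std::pow(0.5f, level);  // finer level, smaller bumps
    float frequency = std::pow(2.0f, level);  // finer level, higher frequency
    return interpolated + amplitude * noise(x * frequency);
}
```

The nice property is that each split only has to evaluate the one new octave; all the coarser octaves are already baked into the parent's heights.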

But I don't see how they're taking a cube-face position like (4000000.1, 4000000.1, 637100) and projecting it onto a sphere on the GPU. And they'd still have to apply the heightmap offset from their noise functions.
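For reference, the standard cube-to-sphere projection is just normalise-and-scale, which is exactly the step in question: in double precision on the CPU it's exact enough, but doing the final multiply by the planet radius in 32-bit floats on the GPU reintroduces metre-scale error.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Project a point on the cube onto the sphere of the given radius,
// with a heightmap offset applied along the radial direction.
Vec3 cubeToSphere(Vec3 p, double radius, double height) {
    double len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    double s = (radius + height) / len;  // normalise, then scale out
    return { p.x * s, p.y * s, p.z * s };
}
```

A point already at the radius along an axis maps to itself: `cubeToSphere({6371000, 0, 0}, 6371000, 0)` returns (6371000, 0, 0).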


A pretty good trick for any massive coordinate system is to represent position using both integers and floats.

 

Keep the player (camera) centred at [0.0f, 0.0f, 0.0f] and move the world around him/her. You represent the large scale using integers (which are fast to add/subtract) and keep local 'sub-coordinates' in floats. For example (in 2D), the player could be in cell [14, 7] with an offset into that cell of [0.87f, 0.13f]. Assuming the integers represent 100-metre increments, to find out where to draw cell [16, 2], simply subtract the integer parts (maybe on the CPU) to get a relative offset of [2, -5]. Then pass the floating-point coordinate [0.87f, 0.13f] to the GPU, where you subtract again to get [1.13f, -5.13f]. Multiply this by the scaling factor (100 metres) and you have your render position. This way you're never dealing with massive floating-point numbers, only the offsets, which avoids floating-point rounding errors. This works just as well in 3D.
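The scheme above can be sketched as a small C++ example (the struct names and the 100 m cell size are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Hybrid integer/float world coordinates: coarse position in whole
// cells, fine position as a fraction of a cell.
struct WorldPos {
    int cellX, cellY;    // coarse position, in whole cells
    float offX, offY;    // fine position within the cell, in [0, 1)
};

struct Vec2 { float x, y; };

// Position of a cell's origin relative to the player, in metres.
// The integer subtraction happens first, so the float arithmetic
// only ever touches small numbers.
Vec2 relativeToPlayer(const WorldPos& player, int cellX, int cellY,
                      float cellSize) {
    float dx = static_cast<float>(cellX - player.cellX) - player.offX;
    float dy = static_cast<float>(cellY - player.cellY) - player.offY;
    return { dx * cellSize, dy * cellSize };
}
```

With the player at cell [14, 7] and offset [0.87f, 0.13f], cell [16, 2] comes out at roughly [113 m, -513 m] relative to the camera, and every intermediate value stays small enough for full float precision.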

mark ds

That seems perfectly logical and reasonable, except for a quadtree cubesphere: if you're using a GPU solution, you have to solve for the positions on the face of a sphere in the GPU, and there's no way to do that without using the radius of the sphere. So maybe you could use a cubesphere down to a certain height, and once the camera is below that height, switch every node in the quadtree to a plane. Maybe have the furthest planes bend towards the centre of the planet to simulate the curvature of the Earth (though that's probably not needed). Do you have a way to use spherical projections and still use floating point on the GPU?
