Hello,
This is my first time posting here, so nice to meet you all!
For some time now I've been trying to find a good solution for rendering a procedurally generated universe.
I've read most of the articles on the subject (or at least what Google provided me) and I'm still not satisfied with the information I have.
Since tutorials on this subject are almost non-existent or outdated, I began trying to think my way out of this.
I was thinking that if we can use octrees for collision detection, frustum culling and raytracing, why can't we use this beautiful data structure to solve the universe rendering problem?
So, from what we know, we have four types of "spaces":
1. Intergalactic space
2. Interstellar space
3. Interplanetary space
4. Planet surface
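To make the idea a bit more concrete, here's a rough sketch of what a node in such a hierarchy might look like (the names and the fixed four-level split are just my own placeholders, not an established technique):

```cpp
#include <array>
#include <cassert>
#include <memory>

// Hypothetical scale levels, one per "space" from the list above.
enum class SpaceLevel { Intergalactic, Interstellar, Interplanetary, Surface };

struct UniverseNode {
    SpaceLevel level;   // which "space" this cuboid represents
    double halfSize;    // half the cuboid's edge length, in this level's unit
    std::array<std::unique_ptr<UniverseNode>, 8> children; // standard octree fan-out

    bool isLeaf() const {
        return level == SpaceLevel::Surface; // surface cuboids are not subdivided further
    }
};
```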
The root cuboid would be the universe itself, using the parsec scale.
Its child cuboids would represent intergalactic space, using the light-year scale, recursing all the way down to the surface of a planet.
Now I know that I would have to somehow interpolate the scales and camera projection settings between these spaces in order to get a smooth transition.
After getting inside a cuboid (interplanetary space), I would render the surroundings into a cubemap and use it as my "skybox".
Using the octree method would, I believe, give me plenty of advantages, one of them being the ease with which I can generate galaxies, star systems and planets. For instance, I could use a noise function to generate children for the root cuboid, thus generating procedural galaxies. I could apply the same algorithm for generating interplanetary bodies, and so on.
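A rough sketch of that generation step, with the advantage that the same seed always produces the same universe, so nothing has to be stored (the hash-based "noise" and the density threshold are placeholders for whatever real noise function ends up being used):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Cheap deterministic hash "noise" in [0, 1) -- stands in for Perlin/simplex/etc.
double hashNoise(int64_t x, int64_t y, int64_t z) {
    uint64_t h = static_cast<uint64_t>(x) * 0x9E3779B97F4A7C15ULL
               ^ static_cast<uint64_t>(y) * 0xBF58476D1CE4E5B9ULL
               ^ static_cast<uint64_t>(z) * 0x94D049BB133111EBULL;
    h ^= h >> 31;
    h *= 0xD6E8FEB86659FD93ULL;
    h ^= h >> 32;
    return static_cast<double>(h % 1000000ULL) / 1000000.0;
}

// Decide which of a cuboid's 8 octants get a child (a galaxy at the root
// level; star systems or planets further down). Purely a function of the
// cuboid's integer coordinates, so the result is reproducible on demand.
std::vector<int> populatedOctants(int64_t cx, int64_t cy, int64_t cz, double density) {
    std::vector<int> result;
    for (int i = 0; i < 8; ++i) {
        int64_t ox = cx * 2 + (i & 1);        // octant coordinates one level down
        int64_t oy = cy * 2 + ((i >> 1) & 1);
        int64_t oz = cz * 2 + ((i >> 2) & 1);
        if (hashNoise(ox, oy, oz) < density)
            result.push_back(i);
    }
    return result;
}
```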
While inside a planetary space cuboid, I could project the distant stars onto the cubemap based on their luminosity. For instance, brighter stars would get rendered as brighter spots on the cubemap; I think you get the point.
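For that star-to-cubemap step, I imagine something like an inverse-square falloff mapping luminosity and distance to a pixel intensity (the reference flux here is an arbitrary tuning constant I made up, not a physical value):

```cpp
#include <algorithm>
#include <cassert>

// Map a star's luminosity (solar units) and distance (parsecs) to a
// cubemap pixel intensity via the inverse-square law, clamped to [0, 1].
double starPixelIntensity(double luminosity, double distanceParsecs,
                          double referenceFlux = 1.0) {
    const double kPi = 3.14159265358979323846;
    double flux = luminosity / (4.0 * kPi * distanceParsecs * distanceParsecs);
    return std::clamp(flux / referenceFlux, 0.0, 1.0);
}
```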
I know that in order to overcome floating-point precision issues I would have to somehow scale the surrounding cuboids (spaces) to keep the depth buffer happy; this should be pretty easy, at least in theory.
Now the real problem is the transition of the camera from a higher-level space to a lower-level space. Some interpolation should be used here to make it smooth, so that when we move from the root cuboid into a child cuboid, objects don't just instantly scale up.
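One way I can picture that transition: blend the effective world scale with a smoothstep over a boundary region, working in log space since the scales differ by orders of magnitude (the blend width would be something to tune by eye):

```cpp
#include <cassert>
#include <cmath>

// Classic smoothstep: 0 at edge0, 1 at edge1, smooth in between.
double smoothstep(double edge0, double edge1, double x) {
    double t = (x - edge0) / (edge1 - edge0);
    t = t < 0.0 ? 0.0 : (t > 1.0 ? 1.0 : t);
    return t * t * (3.0 - 2.0 * t);
}

// Interpolate between the parent cuboid's scale and the child's as the
// camera's penetration depth into the child goes from 0 to blendWidth.
// The scales differ by orders of magnitude, so blend their logarithms.
double transitionScale(double parentScale, double childScale,
                       double penetration, double blendWidth) {
    double t = smoothstep(0.0, blendWidth, penetration);
    return std::exp((1.0 - t) * std::log(parentScale) + t * std::log(childScale));
}
```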
I was thinking of subdividing the universe to a point where a cuboid could give me enough floating-point precision to render its contents without any unpleasant surprises, and using a logarithmic depth buffer to overcome Z-fighting at very large distances.
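From what I've read, the logarithmic depth buffer boils down to replacing the standard depth with something like log2(1 + w) rescaled to [0, 1]; here's a CPU-side sketch of that remap (the exact formulation varies between write-ups, so treat this one as an assumption):

```cpp
#include <cassert>
#include <cmath>

// Map eye-space distance w to a logarithmic depth in [0, 1] using
// depth = log2(1 + w) / log2(1 + far). This spreads depth precision
// evenly across orders of magnitude instead of piling it up near the
// near plane, which is what fights Z-fighting at astronomical ranges.
double logDepth(double w, double farPlane) {
    return std::log2(1.0 + w) / std::log2(1.0 + farPlane);
}
```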
The interplanetary space cuboid would use doubles for positioning, giving me precision ranging from 1 mm to about 1 trillion km, as Sean O'Neil states in one of his articles on the subject.
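A quick demonstration of the float vs. double gap that motivates this: at around 1e12 m (roughly a billion km) from the origin, single precision can no longer resolve even a 1 m step, while double precision still resolves sub-millimetre offsets:

```cpp
#include <cassert>

// True if adding `step` to `base` produces a value distinguishable from
// `base` at that precision -- i.e. the step survives rounding.
bool floatResolves(float base, float step)    { return base + step != base; }
bool doubleResolves(double base, double step) { return base + step != base; }
```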
I'm sorry if I was not clear enough on some of the things I said here; it's just that there are so many things in my head right now that I can't really synthesize this into a better question.
Now, what do you people think of this? Is it doable? Has it been done before? What would be the challenges, the good and the bad?
Thank you for taking the time to read this.