Sparse Voxel Octree Max Depth?

Started by
5 comments, last by ChristOwnsMe 11 years, 2 months ago

I have been reading about sparse voxel octrees, and I was interested in using one to render a large surface, where each point in space is 1 meter. If I wanted to render a surface that was, let's say, 100 miles across, I would have to have an octree that could represent 160934 units across (which would be 18 levels deep in the octree, if I am not mistaken). Is this feasible? I would be procedurally generating the data in the octree as it was needed (as the camera gets closer to the surface). Thank you for any input or guidance.


You don't need to make the tree deep enough to cover all the data with a single root node. A regular grid of root nodes is just as feasible. In my case I wanted to represent a standard landscape, so the area to be covered is much wider than it is high. I hit the 4GB mark at 20480 x 20480 x 4096, using a regular grid of 5x5x1 root nodes, each being 12 levels deep. I *think* I can make that about 10 times larger with some optimizations I have in mind, but I currently don't have the time to study this topic any further.

Still, 160000² sounds way outside what current technology could handle, but maybe your procedural technique can make it feasible. Mine took 40 minutes just to generate a landscape at 20480².

----------
Gonna try that "Indie" stuff I keep hearing about. Let's start with Splatter.

It's feasible, but you might want to consider a few things:

- Since you want a flat-ish object, an octree probably isn't the best choice. Consider using a quadtree or multiple octrees. (An octree does work, though, and is likely simpler than some hybrid solution in case you want to be able to have a floating island 10 km in the air.)

- You should consider not making the leaf nodes so small. If possible, have each leaf node be a regular grid of tiles/voxels. If the per-node bookkeeping data is large compared to the actual tile/voxel data, being able to skip storing a single tile won't help much, because the nodes themselves add far more data. (For example, if you have 1-byte tiles/voxels and each node is 64 bytes, it doesn't make sense to have each leaf node contain 1 tile when it could instead contain a 16*16 grid of tiles.)

Especially since your tree will likely just represent a big sphere/circle, the memory savings from having tiny tiles/leaves are small.

o3o

Just make sure that your octree is actually a binary tree or a BVH and the anisotropy issue is gone.

However, 160k * 160k = 25.6 billion, so it won't fit in memory. You'll have to stream data from disk, and you can never see your entire world at once at full resolution.

It's the same problem as visualizing point data taken from 3D laser scanners (30 GB of points is common). There are MANY research papers on the subject. People often use levels of detail: when the viewing distance has to become very large, most of the finer data can be ignored and left on disk.

The partitioning structure itself should not go down to 1 cell = 1 voxel. Like waterlimon said, what good would it bring? If each leaf contains 100 to 1000 voxels, that's already a nice balance. You could even imagine the leaves containing more, say 10000, and then within a leaf you use a grid that can be streamed in and out of memory at will.

Anyway, how will you render all of that? A graphics card can handle around these numbers:

- 10000 draw calls per frame (single objects), or

- 1 million hardware instances, or

- 20 million polygons in one object (surely much more with the 680 series using tessellation; I imagine around 90 million would still be real time), or

- a buffer of 1 million point primitives.

I suggest you take a look at the research on visualizing heavy point data; it looks like your problem, and partitioning structures are discussed at length in those papers.

Thanks for the responses, everyone. My goal is to make a planet eventually, so I thought octrees would work best, since one could go inside the planet as well. From a distance, only very large voxels would be generated for the upper levels of the octree, but as you approached the surface, it would eventually display 1-meter cubes. So what I am trying to say is that I do NOT need to generate all of it at once. I thought I might be able to procedurally produce detail ONLY when more detail is needed at a given depth in a given node of the octree.

Waterlimon

Thanks for that. This makes perfect sense. So each leaf node in the octree could be a 16x16x16 grid of voxels, correct?

Lightness1024

For rendering, I was thinking of raycasting into a sparse voxel octree on the GPU. If I was at the surface, let's say, I would only be rendering as many cubes as you might see in a Minecraft scene. And if I looked 100 miles to the east, I might see a mountain range rendered with very large cubes (as it is very far away). This is what I am going for.

Yeah, something like that. Whatever gives the best performance and least visual artifacts.

You might want to implement it using plain old polygons, though; raycasting is quite expensive, and it'll take a lot of work to get it all working properly.

Perhaps aim for geometric detail where each surface is maybe a few pixels across, with a tiny texture (virtual texturing of some sort?).

o3o

I see. I feel like I need a GPU-accelerated way to render the data at some octree level, so I was just playing around with good ways to do that. I don't need to render billions of cubes at one time, just enough to make a surface scene look something like Minecraft. Even approaching the surface would render lower-detail blocks in all leaves, until one is within 50 meters of the surface or so and the full 1-meter resolution has to be displayed. Thanks for all the advice. I shall research more.

This topic is closed to new replies.
