Minecraft planet using Level of Detail?


Hey. So I am a newbie with graphics programming, but I was wondering: can LoD handle billions of objects? If I had a planet generated out of Minecraft-style boxes and viewed it from space, would it even render? The other option is to just take the average color over a section and create a texture for the planet's surface until you get close enough, thereby making a fake LoD. Thanks for the input.

The best answer would probably come from Mister Minecraft himself, but I'd guess rendering millions of individual cubes is crazy, even when batched, instanced, or the like.

The particular shape lends itself to all kinds of tricks, though. For example, maybe they combine blocks into bigger ones at a distance? Let's say 1 cubic meter is the smallest cube you encounter. After 20 meters or so, you could start combining 8 blocks into one 2 m³ cube, if all 8 are present.
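As a sketch of that merging rule (the data layout and names here are assumptions for illustration, not Minecraft's actual internals): given a map from block coordinates to material, a 2×2×2 cell can be replaced by one bigger cube only when all 8 blocks exist and share a material:

```python
def merge_octet(blocks, x, y, z):
    """If all 8 blocks of the 2x2x2 cell anchored at (x, y, z) exist
    and share one material, return that material; otherwise None
    (meaning the cell cannot be merged into a bigger cube)."""
    materials = set()
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                m = blocks.get((x + dx, y + dy, z + dz))
                if m is None:
                    return None  # a block is missing: cannot merge
                materials.add(m)
    return materials.pop() if len(materials) == 1 else None
```

Applied recursively, the same test turns 8 merged 2 m cubes into a 4 m cube, and so on, which is exactly the octree idea mentioned later in this thread.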

Another trick is culling whatever is not visible. Culling can be pretty difficult, but in this case all you need to know is whether the neighbouring blocks are present. A cube surrounded by 6 others can never be visible, unless a neighbouring block is transparent. So the majority of underground blocks don't have to be rendered at all, until they get revealed by removing a cube above or beside them.
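A minimal sketch of that neighbour test, assuming opaque blocks are stored in a set of coordinates (a hypothetical layout):

```python
# The six face-adjacent offsets of a cube.
NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def is_hidden(solid, x, y, z):
    """A block whose six face neighbours are all opaque can never be
    seen, so it can be skipped entirely when building render geometry.
    `solid` is a set of (x, y, z) coordinates of opaque blocks."""
    return all((x + dx, y + dy, z + dz) in solid for dx, dy, dz in NEIGHBOURS)
```

When a block is removed, only its six neighbours need to be re-tested, which keeps the "reveal" step cheap.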

Maybe they aren't rendering cubes at all for distant stuff. How about voxels? I have no experience with them, but drawing colored squares instead of cubes into a buffer may be a boost as well. Only the foreground cubes would then be actual 3D shapes.


Just some thoughts...

If you stored your landscape as a sparse voxel octree, maybe you could use marching cubes or simply raycasting.


> If you stored your landscape as a sparse voxel octree, maybe you could use marching cubes or simply raycasting.

Indeed, and since SVOs are hierarchical structures they only have an O(log n) traversal cost, which means that, assuming infinite memory, you can render any number of voxels very quickly. Of course, the assumption of infinite memory is invalid, but considering how efficiently voxels can be encoded, you could potentially get a draw distance at least 8-10 times the current distance supported in Minecraft (which is limited by the efficiency of the hardware rasterizer).

Rendering any number of polygons/voxels is pretty much a solved problem with octrees, kd-trees, BVHs, etc. The real challenge nowadays is how to store all that data.

I don't believe Minecraft uses any sort of advanced LOD (blocks don't get combined at a distance, although they should, which would allow a much greater draw distance), but I do believe some tricks are used to reduce the number of polygons pushed to the screen (occlusion culling, etc.).

Remember that the fastest polygons are those that aren't drawn.

But doesn't the tree become invalid when the terrain changes? Players would be able to change the landscape like in MC, so does this change things?

> But doesn't the tree become invalid when the terrain changes? Players would be able to change the landscape like in MC, so does this change things?

Yes, but this isn't like a BSP tree where adding one thing shifts the balance and invalidates everything. An octree is more like a mipmap. The area immediately near the player/camera should be 'expanded' out to the most detailed leaf nodes of the tree. Turning individual cells on/off only affects these nodes, plus their immediate parents all the way up to the top of the tree.
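To make that concrete, here is a toy octree (hypothetical names, not Minecraft's actual data structure) where flipping a single voxel only visits the nodes on one root-to-leaf path, i.e. log₂(world size) of them:

```python
class Node:
    def __init__(self):
        self.children = [None] * 8  # one slot per octant
        self.filled = False         # leaf payload

def set_voxel(root, size, x, y, z, filled):
    """Set one voxel in a cube of side `size` (a power of two).
    Returns the number of nodes visited: the root-to-leaf path length."""
    node, visited = root, 1
    while size > 1:
        size //= 2
        # Pick the child octant from the high bit of each coordinate.
        octant = (x >= size) | ((y >= size) << 1) | ((z >= size) << 2)
        x, y, z = x % size, y % size, z % size
        if node.children[octant] is None:
            node.children[octant] = Node()  # expand lazily, only where edited
        node = node.children[octant]
        visited += 1
    node.filled = filled
    return visited
```

For a 256-block-wide region that path is just 9 nodes; everything outside the path, including all the coarse "mipmap" levels elsewhere, is untouched by the edit.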

Great, thanks a lot for the replies. I did some calculations about the size of a planet that takes 1 hour to run across, and this is what I got.


1 hour to run across (assuming they run 37 km an hour):

37 km circumference
radius of planet = 37 / (2π) ≈ 6 km
diameter ≈ 12 km
Formula for the surface area of a sphere = 4 × π × R² ≈ 452 km²

1 MC block = 1 m
1 kilometer = 1000 meters

so an area of 452,000,000 m² to cover.

At 1 byte per "voxel" block:

452,000,000 blocks ≈ 0.42 GB

This could be lowered by creating a smaller planet, or by using less data per voxel, of course.
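The arithmetic above in a few lines of Python (the radius is rounded to 6 km as in the post; note that 452,000,000 bytes comes out to roughly 0.42 GiB):

```python
import math

run_speed_kmh = 37                                   # assumed sprint speed
circumference_km = run_speed_kmh * 1                 # distance covered in 1 hour
radius_km = round(circumference_km / (2 * math.pi))  # ~5.9, rounded to 6
surface_km2 = 4 * math.pi * radius_km ** 2           # 4*pi*36 ~ 452 km^2
blocks = surface_km2 * 1000 ** 2                     # one surface block per m^2
gib = blocks / 1024 ** 3                             # at 1 byte per block
```

This counts only a one-block-thick surface shell; storing any depth multiplies the figure accordingly.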

So the idea was to create the surface of a planet so the player could see it from space, and then only generate new blocks as they explore underground. It seems that it is indeed possible to do this then.

Hey Shadowman, the idea is that the game only generates terrain within range of the player. That way the amount of data grows pretty slowly unless you're a crazed explorer; generating everything in one shot is not usually practical. Consider that a 6 km radius planet is extremely small: a player can run a full revolution in a couple of hours, and the planet's curvature will be obvious from the surface (you will not be able to see the base of a mountain 2 km ahead).

That said, what you can do is generate high-resolution voxels (1 m³ per voxel) near the player and the rest of the planet at lower resolution (which would still allow it to be seen from space with good enough detail). Then, as the player moves, low-resolution voxels are refined to high resolution. It is quite a bit harder to implement, though.

It is, in fact, possible to render an infinitely large landscape with zero memory, using implicit voxel data (generated procedurally from scalar fields), but that's not worth much outside of tech demos, since it's impossible to interact with the voxels and it's significantly more computationally intensive to render (still O(log n) complexity, though).

With simple RLE (run-length encoding, i.e. 11111111 -> (8)1) compression, people have achieved rates of around 1.5 bits per voxel, and better schemes attain even higher compression ratios. Of course, this depends on the underlying voxel distribution: natural landscape is very redundant, but not everything is (alien-like terrain is significantly more random).
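That run-length scheme in miniature, over any sequence of voxel values:

```python
from itertools import groupby

def rle(values):
    """Collapse each run of identical values into a (count, value)
    pair, matching the "11111111 -> (8)1" example above."""
    return [(len(list(run)), v) for v, run in groupby(values)]
```

Long uniform runs (air above ground, stone below) are exactly why natural voxel terrain compresses so well with this.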


> at 1 byte per "voxel" block
> 452,000,000 blocks ≈ 0.42 GB

Totally wrong. Most of Minecraft's content is procedurally generated. A whole world, before interactions, can be stored in something like 200 bytes, if memory serves. I think Notch has noted this multiple times.
No need to compress anything.
Personally, I'd build LODs as impostors of the original geometry.


Totally wrong. Most of minecraft content is procedurally generated. A whole world, before interactions, can be stored in like 200 bytes if memory serves. I think Notch noted this multiple times.
No need to compress anything.

8 bytes for the seed. However, pretty much every chunk generated from it has inherent "interactions" in the form of liquid flows, and hence every chunk must be saved. So compressing that chunk data to preserve it is essentially mandatory, due to the amount of data involved.


> can handle billions of objects?


Sure.

Here's another, based on Perlin noise (by Perlin himself), that allows infinite zoom. It's the closest to voxels one can get, just with a slightly simplistic rendering process. It runs in Java and has done so for a decade, so it doesn't require absurdly powerful hardware or even a GPU.

> So I am a newbie with graphics programming

That however is a problem.


Either way, the question is asking the wrong thing. A better question would be: "I want to render an Earth-like planet using Minecraft-like voxels at 1 m resolution. How could that be done?"

The first observation would obviously be that regardless of how big the planet is, a single person can never possibly see all of it. So whichever process is chosen should favor a terrain representation centered on the user's view.

Since all we care about are surfaces, the amount of data grows much more slowly (area = n^2, volume = n^3). At any given point in time, the user can only see around 2 million voxels (1080p or so). The problem now becomes efficiently generating several million voxels per frame. Even in the worst case, moving at a steady pace and assuming the entire view is replaced over one second, things become fairly doable on decent hardware with procedural generation. Things can also be simplified by not iterating fully, reducing the number of voxels per frame by a factor of 8 or more.

> at 1 byte per "voxel" block
> 452,000,000 blocks ≈ 0.42 GB

You don't need that. Even if trying to simulate the real Earth, there is no data covering anything more than the land surface, and even that is available only at fairly low resolution; 1 m is likely pushing it. The rest would need to be faked, at which point we need procedural generation.

As for changes to terrain: with some meaningful limiting, a user simply cannot modify enough to matter. If a user modifies 10,000 blocks per second and spends a year doing it, we only end up with ~300 GB of changes. In Minecraft, a user modifies about 1 block every 2 seconds.


The naive solution of simply having an array representing every voxel obviously doesn't work yet (it will in a decade or so). But for everything else, it's perfectly doable.

I think I am starting to understand. I was getting confused because, with LOD, the data needs to be there to render at a lower resolution, so I thought the entire surface needed to be generated already. So let me see if I am getting this right.

You are saying that if I am looking at a planet, I can procedurally render very low-resolution voxels, and then, if a player sees, say, a mountain on the planet, as he zooms in to that specific part of the planet you generate more and more detail, until he gets close enough to the surface that you start generating the Minecraft-sized 1 m blocks. But the rest of the planet is still just a blurry voxel mess, because no 1 m blocks have been generated for areas the player never viewed at surface level. Is this correct?

Thanks a lot for the replies, they are really helping.

> ...but the rest of the planet is still just a blurry voxel mess because no 1 m blocks have been generated for that area

You always display, for example, 100k voxels. They just vary in size. At planet level, each of them will be 2 km across. As you zoom in, they get refined. Because of the distance to each block, their relative size in pixels stays about the same, perhaps 2-4 pixels across.

It's not guaranteed to be the best approach, but it is one way.

There are other techniques; impostors/billboards would likely work.

The two biggest takeaways here:
- We simply don't have enough data to simulate Earth, except possibly at the surface. So ~(6.4e6)³ 1 m voxels aren't really a problem; we need to generate them procedurally or in some other way, and may as well generate them on the fly.
- The total number of voxels isn't really interesting; you can never see more than ~2 million pixels anyway.

Such an approach doesn't work directly for existing datasets, but it does work for procedurally generated content.

It's about the same as that infinite resolution engine being advertised.

It obviously also comes with possibly unreasonable limitations: very limited physics simulation, potentially difficult higher-quality rendering (shadows and such), and only localized effects (nothing can affect the entire volume of the Earth).

These limitations are a much bigger deal than the rendering itself, since many aspects are unlikely to be fakeable. You cannot simulate erosion on a mountain while that block is still 20 km in size.

The only thing that still confuses me is how the data is generated consistently. For instance, if I have 100k voxels and I see a mountain range, and I begin to zoom in on that mountain range, how do you generate a more detailed version of it? Do you basically use Perlin noise at low detail (at the planet level) and then, as you zoom in, use more Perlin noise for the next level of detail (adding to the noise already generated), and so on until you are at the 1 m block level?

> I see a mountain range,

Do not see a mountain range, that's impossible. Instead, realize there is no mountain range.

> how do you generate a more detailed version of that mountain range?

A voxel is a voxel.

Say a world is represented by the following heights: [2, 6, 4, 3]. The "6" would be the peak of a mountain.

At the next level we refine it. We split each value in two and add or subtract 1 randomly: [2+1, 2+1, 6+1, 6-1, 4+1, 4-1, 3-1, 3+1] = [3, 3, 7, 5, 5, 3, 2, 4]. And presto, the world became more refined (7 and 5 are now the peak of the mountain).

For volumetric worlds, one might instead choose to turn on or off certain voxels.

Now it just comes down to choosing a suitable refinement method. There is no prescribed method here, it's mostly a matter of experimentation.
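A minimal sketch of that refinement step (everything here is illustrative, not a prescribed method). One detail worth noting for the consistency question: the random choices must be seeded deterministically per region, so zooming into the same mountain twice reproduces the same detail:

```python
import random

def refine(heights, rng):
    """Double the resolution of a 1D heightfield: each parent height
    spawns two children, each perturbed by +/-1. Passing an rng seeded
    from the region's coordinates makes the result reproducible."""
    out = []
    for h in heights:
        out.append(h + rng.choice((-1, 1)))
        out.append(h + rng.choice((-1, 1)))
    return out
```

For volumetric worlds the same idea applies, with the perturbation turning child voxels on or off instead of nudging heights.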

I've been theory crafting about this, but I don't really know what I'm talking about because I've never done anything close to this yet.

Start with a tree. A "node" would be a block: eight blocks make a second-level block, eight second-level blocks make a third-level block.
You could alternatively pair along one axis per level: first levels would be north-south pairs, second levels east-west pairs, third levels up-down pairs.

Wherever you can, render pairs and pairs of pairs instead of individual blocks. It's really simple: if someone breaks a high-level block, subdivide it on the fly.
If a block is made entirely of one material, you only store one instance of that material.


In addition to that, try to cheat on background objects by skipping details you might not even see.
Make points in space instead of cubes, using only every N^Mth cube, and connect them to make triangles. Really simplistic-looking mountains could look great in an artistic way.

At N× distance (I'd recommend 3×) you could render a pair instead of a block.
At 2× distance you can see 2× as many blocks along any axis, and thus 4× as many if you look at a wall.
So, to compensate, render 1/4 of the blocks and you'll be seeing about the same number of cubes at any distance.

The best part is that if you know a huge block is entirely air, you can pass right through it without much thought (don't forget to render creatures).
Because any line with any slope acts the same at any scale, you can skip entire chunks of air just as if they were small cubes.
You won't have to examine every cube of air to see whether you're occluded.
