# 3D Grid Rendering

## Recommended Posts

I am trying to render a tile-based landscape in 3D. The landscape is represented as a 3D array, with one byte per tile: if the tile is 0, it is regarded as empty space; otherwise it is the material ID. In many ways this is similar to a voxel setup, but it differs in that the tiles are much larger, and they are rendered whole rather than subdivided. Think of the terrain of Dwarf Fortress, but rendered in 3D.

So I need an efficient way to render a fairly large (say, 512^3 tiles) map with full 6 DoF. Ray casting seems to be out, because I can't accept skipped tiles in the distance. That brings us to marching cubes, but all the existing implementations seem to want to render partial voxels, which I guess makes sense for a normal terrain renderer. I need instead to generate full voxels (well, the ~3 visible sides).

Does anyone have any suggestions, links, etc. as to good ways to accomplish this? While I can find plenty of literature on how to use marching cubes to subdivide each cube by the voxels it contains, I can find none on the actual traversal of the cube field, which is the only part I need.

##### Share on other sites
Partial voxels? What? Marching cubes isn't a rendering method, it's just an algorithm to go from voxels to polygons. There are some GPU implementations, is that what you're talking about?

Why not use marching cubes on the 512^3 to turn your voxel field into a polygon mesh, and then render the mesh afterward?

##### Share on other sites
Well, let's think of it in simplest terms. The most important area when rendering is, in fact, empty space, because a tile being empty space means there are 0 to 6 "walls" that need to be rendered.

This means that, at simplest, you want to generate a mesh that represents the walls around all empty spaces. Do whatever subdivision, spatial sorting, etc that you want, but you'll basically have a mesh.

This mesh needs to be updated whenever a tile changes, so use your spatial subdivision to limit the areas that need regenerating. Say, create 16x16x16 tile "batches".
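A minimal sketch of this wall-extraction idea, assuming a dense `grid[x][y][z]` of bytes with 0 meaning empty; the `BATCH` size, function names, and quad format are my own illustrative choices, not from the thread:

```python
# Emit one quad per wall between an empty tile and a solid neighbour,
# one mesh per 16x16x16 batch.

BATCH = 16
NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def solid(grid, size, x, y, z):
    """Out-of-bounds counts as solid, so the map edge produces no walls."""
    if not (0 <= x < size and 0 <= y < size and 0 <= z < size):
        return True
    return grid[x][y][z] != 0

def batch_walls(grid, size, bx, by, bz):
    """Return (position, facing-normal, material) for every wall quad
    surrounding the empty tiles of one batch."""
    quads = []
    for x in range(bx * BATCH, min((bx + 1) * BATCH, size)):
        for y in range(by * BATCH, min((by + 1) * BATCH, size)):
            for z in range(bz * BATCH, min((bz + 1) * BATCH, size)):
                if grid[x][y][z] != 0:
                    continue  # only empty tiles grow walls
                for dx, dy, dz in NEIGHBOURS:
                    nx, ny, nz = x + dx, y + dy, z + dz
                    if solid(grid, size, nx, ny, nz):
                        # material comes from the solid neighbour; 0 for map edge
                        in_bounds = 0 <= nx < size and 0 <= ny < size and 0 <= nz < size
                        mat = grid[nx][ny][nz] if in_bounds else 0
                        quads.append(((x, y, z), (dx, dy, dz), mat))
    return quads
```

For example, a single empty tile fully enclosed by solid tiles yields exactly six wall quads, one per face.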

Marching cubes isn't intended for polygonizing voxels; it's meant for polygonizing functional space of arbitrary precision. With voxels, you already have your upper bound for detail: 1x1x1.

By using batches, you also get a couple other potential benefits for nearly free; by generating all facets in the batch, and not just the "far" three, you can render them with clockwise winding to show just the "far" side, and then either don't draw the backfaces or enable alpha-blending and render with counterclockwise winding to show a translucent "near" side. You can also wind up the level of detail; instead of one square per wall, you can add a rough look to rock and dirt, cobblestones to finished floors, patches to grass, torches and tables to finished walls, stalactites and stalagmites to cave ceilings and floors, etc.

Lastly, you can deal with distant batches in a down-scaling fashion. You can even run a background thread to constantly run through mid-distance and far batches, and optimize the meshes (merge coplanar walls of the same surface or whatnot).
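One way to picture the background "merge coplanar walls" pass: collapse runs of same-material wall cells into longer strips. This is a 1D sketch under my own naming; a full greedy mesher applies the same idea per 2D plane:

```python
def merge_runs(cells):
    """Merge consecutive same-material wall cells in one row into
    (start, length, material) strips. `cells` is a list where None
    marks a gap (no wall there)."""
    strips = []
    run_start, run_mat, run_len = None, None, 0
    for i, mat in enumerate(cells + [None]):  # sentinel flushes the last run
        if mat == run_mat and mat is not None:
            run_len += 1  # extend the current strip
        else:
            if run_mat is not None:
                strips.append((run_start, run_len, run_mat))
            run_start, run_mat, run_len = i, mat, 1
    return strips
```

Three adjacent walls of material 1 followed by two of material 2 collapse from five quads to two strips, which is the kind of saving that matters for mid- and far-distance batches.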

##### Share on other sites
Quote:
 Original post by Wyrframe: Well, let's think of it in simplest terms. The most important area when rendering is, in fact, empty space, because a tile being empty space means there are 0 to 6 "walls" that need to be rendered. This means that, at simplest, you want to generate a mesh that represents the walls around all empty spaces. Do whatever subdivision, spatial sorting, etc that you want, but you'll basically have a mesh.
Good stuff! Generating holes rather than solids makes a lot of sense, and simplifies things a bit.

Quote:
 This mesh needs to be updated whenever a tile changes, so use your spatial subdivision to make smaller the areas that need changing. Say, create 16x16x16 tile "batches".
So we regard our 3D grid as some type of implicit octree?

Quote:
 By using batches, you also get a couple other potential benefits for nearly free; by generating all facets in the batch, and not just the "far" three, you can render them with clockwise winding to show just the "far" side, and then either don't draw the backfaces or enable alpha-blending and render with counterclockwise winding to show a translucent "near" side.
I take it it doesn't make sense to do view-dependent generation? I assume most graphics cards will be pixel-bound on large scenes, so the number of triangles generated isn't really going to be a problem.

Quote:
 You can also wind up the level of detail; instead of one square per wall, you can add a rough look to rock and dirt, cobblestones to finished floors, patches to grass, torches and tables to finished walls, stalactites and stalagmites to cave ceilings and floors, etc.
Instead of generating a quad for each face, we use a predefined mesh based on the material? That sounds handy, although transitions between materials could be a bit tricky. That also sounds like it could create something similar to the terrain in StarCraft 2.

Quote:
 Lastly, you can deal with distant batches in a down-scaling fashion. You can even run a background thread to constantly run through mid-distance and far batches, and optimize the meshes (merge coplanar walls of the same surface or whatnot).
This would be a view-dependent rendering optimisation?

Since the terrain is already an implicit octree, we can obviously do efficient frustum culling. Would it make sense to also perform some type of occlusion culling, or just to render the 'patches' from near to far, and let the Z-buffer deal with overdraw?

I also realised that pure cubes aren't going to cut it. I need both ramps and water levels, but those should be trivial if I regard (for instance) 0 as empty, 1-6 as water levels, and use the sign bit to represent ramps. Because ramps and water don't necessarily take up the whole cube, I suppose they need to be regarded as empty by the generation algorithm and rendered with custom geometry instead.

##### Share on other sites
Personally, I would have ramps be considered normal geometry, and generated thus.

Just to get my own head around it, let the high bit represent solidity:

- High bit clear (0): non-solid presence in that cube. The tile uses the material of the cube directly below it. The next 4 bits are corner flags: set for high, clear for low. Flat ground, or an "empty" cube, has all corner flags zero. Now we can have ramps approachable from diagonals and multiple sides, making overland terrain look nicer in the process. The last three bits indicate water level, 0 to 7.
- High bit set (1): fully solid. The next 7 bits are terrain type, with two reserved ranges. We need an "empty" space for thinking about caverns taller than 1 cube, which still needs a water level 0 to 7, so set aside (0000xxx) from the other terrain. We can also have complex underground water tables by setting aside another part of the solid space (say, 1111xxx) representing "sand which has absorbed xxx of water", using the normal interpretation of water level. The other 14 "core" terrain-type prefixes then have those three "water" bits to play with, giving us air plus sand plus 112 solids that involve no water level.
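The layout above can be sketched as a few pack/unpack helpers. The exact bit positions here are one plausible reading of the post, not a fixed spec:

```python
# Byte layout (one reading of the scheme above):
#   non-solid (bit 7 clear): bits 6-3 = corner-high flags, bits 2-0 = water 0-7
#   solid     (bit 7 set)  : bits 6-0 = terrain type, with prefixes 0000xxx
#                            (tall air) and 1111xxx (saturated sand) reserved

def pack_nonsolid(corners, water):
    """corners: 4-bit corner-flag mask; water: level 0-7."""
    assert 0 <= corners < 16 and 0 <= water < 8
    return (corners << 3) | water

def pack_solid(terrain7):
    """terrain7: 7-bit terrain type, reserved prefixes included."""
    assert 0 <= terrain7 < 128
    return 0x80 | terrain7

def is_solid(tile):
    return bool(tile & 0x80)

def water_level(tile):
    """Water level applies to non-solid tiles (the reserved solid
    ranges are ignored here for brevity)."""
    return tile & 0x07 if not is_solid(tile) else 0
```

Everything still fits in the original one byte per tile, which keeps the 512^3 map at 128 MiB.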

But for rendering, water should be separate, and I completely forgot about it. You'll want separate water-facet and solid-facet meshes for each batch, so the water can change often without interfering with the solids, and so you can exclude or optimize water separately at a distance. Water will usually be shown only as a "top" surface, so this can be optimized heavily at range, and can probably be rendered with an animated, gently waving surface up-close.

Damn, I like that idea. Now I feel like making a 3D Dwarf Fortress... race you? :)

##### Share on other sites
Occlusion culling would be rather inefficient unless the camera is above "terrain height", where the ground obscures everything underground. You'd spend a lot of time figuring out which batches are fully occluded by what cells in nearer batches. Just doing near-to-far and letting the Z-buffer do its job should be enough, but your draw range should be reasonably limited either way. Even rendering optimized-mesh batches, grouping your rendering buckets by material, etc, if your map starts looking like worst-case scenario (uniform swiss cheese; each face of 50% of the cubes), you're going to bog down.
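The near-to-far ordering is cheap to get: sort visible batches by distance from the camera to the batch centre before drawing. A sketch, with my own names and a hard-coded 16-tile batch size:

```python
def near_to_far(batch_coords, camera, batch_size=16):
    """Sort batch grid coordinates by squared distance from the camera
    to each batch centre, so nearer batches draw first and the Z-buffer
    rejects most occluded fragments from farther batches."""
    def dist2(b):
        cx = (b[0] + 0.5) * batch_size
        cy = (b[1] + 0.5) * batch_size
        cz = (b[2] + 0.5) * batch_size
        return (cx - camera[0]) ** 2 + (cy - camera[1]) ** 2 + (cz - camera[2]) ** 2
    return sorted(batch_coords, key=dist2)
```

Squared distance avoids a square root per batch and sorts identically; grouping by material would then happen within each distance bucket.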

##### Share on other sites
http://www.flipcode.com/archives/10-31-2002.shtml

##### Share on other sites
To some extent. Our data representation is quite similar to voxels, but we are generating polygonal geometry directly (using some form of marching cubes attached to LOD), rather than raytracing the image, as is done there.

Quote:
 Original post by Wyrframe: Even rendering optimized-mesh batches, grouping your rendering buckets by material, etc, if your map starts looking like worst-case scenario (uniform swiss cheese; each face of 50% of the cubes), you're going to bog down.
I think the Dwarf Fortress rules for cave-ins largely prevent the worst case from occurring. Even if the player spent all their time tunnelling out swiss cheese, you couldn't go above 25% of the cubes.

As for occlusion culling, I wasn't thinking of traditional methods. Since most of the time will be spent above the terrain, looking down, and we probably need a 'slice level' to see into hills, we can of course discard all the geometry above the slice, but we can also discard any rooms entirely below the surface. However, I can't think of a good way to accomplish the latter, apart from a flood-based algorithm.
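The flood-based idea can be sketched as a BFS from the open sky: any empty tile not reachable from the top of the map belongs to a sealed room and can be skipped while the camera is above ground. Names and the z-up axis convention are my own assumptions:

```python
from collections import deque

def visible_empty(grid, size):
    """Flood-fill empty tiles reachable from the open top of the map.
    Sealed underground rooms are never reached, so their batches can
    be culled when the camera is above the terrain. Assumes
    grid[x][y][z] with z vertical and 0 meaning empty."""
    seen = set()
    queue = deque()
    top = size - 1
    # seed with every empty tile on the top layer (open to the sky)
    for x in range(size):
        for y in range(size):
            if grid[x][y][top] == 0:
                seen.add((x, y, top))
                queue.append((x, y, top))
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (x + dx, y + dy, z + dz)
            if (all(0 <= c < size for c in n)
                    and n not in seen
                    and grid[n[0]][n[1]][n[2]] == 0):
                seen.add(n)
                queue.append(n)
    return seen
```

The fill only needs re-running for batches whose tiles changed, so it could live on the same background thread as the mesh optimizer. A 'slice level' view would seed from the slice plane instead of the top layer.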

Of course, the tricky part of all this is the need to keep performance very high - Dwarf Fortress can bring a modern dual-core to its knees with ease, so adding significant rendering overhead would probably kill it.
