Best way to do terrain?

9 comments, last by Norman Barrows 11 years, 2 months ago

This is my first time working on a large, open, terrain-type level (not Skyrim large, just bigger than any indoor scene would ever be) and I'm struggling to find the right way to do the terrain visuals.

My game is low-poly with hand-painted textures (sort of WoW style). At the moment I'm using a standard heightmap-based terrain with an RGBA mask that stores texture-splatting weights. This is an easy enough approach (and is the one favoured by my physics engine - the game is very reliant on physics). It seems to incur the following limitations (please correct me if I'm wrong here):

1) restrictions on level geometry (e.g. I'd like to have a rock arch, some sharp ledges and a hollowed-out volcano)

2) wasted vertices on areas that need less detail (e.g. a flat beach)

3) texture splatting means restricted texturing detail. Increasing the vertex count would help, but it would be bad for rendering and bad for physics.
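For reference, the RGBA-weight splatting described above boils down to a per-texel weighted average of the terrain layers. A minimal CPU-side sketch (struct and function names are my own invention, not from any particular engine):

```cpp
#include <array>
#include <cstdint>

// One texel of the RGBA splat mask: weights for 4 terrain layers (0-255).
struct SplatWeights { uint8_t r, g, b, a; };

// Blend one colour channel of the 4 layer texels by the mask weights.
// Weights are normalized by their sum, so the result doesn't darken
// where the painted weights don't add up to exactly 255.
inline uint8_t blendChannel(const std::array<uint8_t, 4>& layers,
                            const SplatWeights& w)
{
    const int sum = w.r + w.g + w.b + w.a;
    if (sum == 0) return layers[0];          // nothing painted: base layer
    const int v = (layers[0] * w.r + layers[1] * w.g +
                   layers[2] * w.b + layers[3] * w.a) / sum;
    return static_cast<uint8_t>(v);
}
```

In practice a pixel shader does exactly this per fragment, which is why the splat granularity is tied to the mask resolution rather than the vertex count.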

So instead, I'm thinking of going for something more akin to the mario galaxy type levels: (http://images1.fanpop.com/images/image_uploads/Super-Mario-Galaxy-Screens-super-mario-galaxy-815698_1280_720.jpg)

Using regular models for the terrain would solve the first two problems above (I think?), but I'm not clear on the best way to texture such a level. Obviously a lot of the texturing needs to be repeated tiles (before, I was only using four 256x256 textures). But at the same time, if you want the model to have the right texture coordinates to repeat a small texture many times, you need those extra vertices! So it seems to be a bit of a trade-off.

Advice?

Thanks in advance.


I'd explore the possibility of using generic meshes by going through the standard level geometry system, if you have one. That appears to me to be the only solution flexible enough to do what you want to accomplish.

Keep in mind we're in 2013. If you're worrying about "wasted vertices", you're overthinking it. Now, we could go to great lengths discussing how rasterizing small triangles causes issues and all sorts of things, but the truth is that vertex transform is rarely a problem in itself.

It is acceptable to use a render mesh with higher detail than the physics mesh. My convex hulls, for example, have roughly 1/4 the geometric complexity of the render meshes.

Previously "Krohm"

Thanks for the reply Krohm.

But could you elaborate on what you mean by a "standard level geometry system"? I'm completely self-taught in this area, so a lot of terminology goes over my head. Do you mean just separating the level into different meshes and then handling them through a quadtree/octree along with all the other models?

Hope someone can give me more info.

could you elaborate on what you mean by a "standard level geometry system"

I'll try my best.
From a low-level, close-to-the-metal point of view, we only have drawcalls.
At a higher level of abstraction, those drawcalls come from various sources. They could come from a particle system or from meshes. There's one mesh that is way more important than the others: the "world mesh". It's typically so large it needs intra-mesh culling - some sort of partitioning scheme. The system managing this "big mesh" is the "level geometry system".
Note that nobody says this system must exist; it is indeed possible to use an assembly of generic meshes and make them appear contiguous. But I think having a "level geometry system" is still very valuable, and I don't think it's going to go away soon.

For example, in the Quake and Unreal engines the "level geometry system" is based on BSP trees, while generic meshes are not (though the BSP helps cull them, AFAIK).
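A hypothetical bare-bones sketch of what such a system does per frame: the world mesh is split into cells, and each cell is culled independently before its drawcall. A real system would test against the full view frustum; distance alone is used here to keep the sketch short (all names are illustrative):

```cpp
#include <vector>

// One cell of the partitioned world mesh.
struct Cell { float centerX, centerZ; bool visible; };

// Mark every cell within clip range of the camera; return how many
// would actually be drawn this frame.
int cullByDistance(std::vector<Cell>& cells,
                   float camX, float camZ, float clipRange)
{
    int drawn = 0;
    for (auto& c : cells) {
        const float dx = c.centerX - camX, dz = c.centerZ - camZ;
        c.visible = (dx * dx + dz * dz) <= clipRange * clipRange;
        if (c.visible) ++drawn;   // a real renderer would issue the drawcall here
    }
    return drawn;
}
```

A BSP tree or quadtree replaces the linear scan with a hierarchical one, but the contract is the same: feed the renderer only the parts of the big mesh that can be seen.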

The whole point of my message was: instead of trying to turn a heightmap-based system into something that does not look like a heightmap, work on something more flexible.

Previously "Krohm"

Ok yeah, I understand what you mean by a level geometry system. I wonder if anyone has any examples or code of any such systems?

So how exactly would that work for a regular model? The only way I can see that being effective is to partition the level mesh, as you say, and then work it into the frustum culling.

I should point out that my game is fairly low-poly. I anticipate that the terrain will come out at around 2,000 polys, which is obviously very low by modern standards. On any decent computer I don't really need to worry about it, but it'd be nice if I could set my minimum requirements quite low, and it's always good practice to nail these techniques for future usage.

I don't suggest partitioning regular models. They are culled as a whole; they either render or they don't (mostly because of locality properties). At best, their bounding box is culled using the level geometry system.

For 2k tris, I'd rather put everything in a std::vector and live with it.

In my tests, an Athlon XP 2800+ with Radeon 9600 can render about 20k tris with a linear scan and still lock easily at 40+ fps. Optimize your life, get the job done first. Profile. Think about it.

Previously "Krohm"

in my current project, the "world mesh" is 2500 miles x 2500 miles in size at a resolution of 10 ft x 10 ft quads. that's 3 trillion, 484 billion, 800 million triangles (2500 miles x 5280 ft / 10 ft = 1,320,000 quads on a side, squared, times 2 tris per quad). it uses a procedural height map. to draw the ground, i draw all quads within clip range. for each quad, i lock a quad mesh, height map it, unlock it, then draw it. i use a circular buffer of 20 ground quad meshes. this ensures i have a free quad mesh to height map while the card is drawing the previous quad mesh. plains, hills, mountains, streams, rivers, lakes, cliffs, canyons, etc. are all done using the ground mesh. unusual terrain such as rock shelters, caves, volcanoes, etc. is drawn as separate meshes over top of the ground mesh.

the entire world mesh is drawn with just 20 quads, and some formulas that return the y value at a given x,z location based on the terrain there. terrain info for the entire world is stored in a 500x500 array of records (structs), one for each 5x5 mile square in the world, and contains info about elevation, vegetation coverage, water, and special terrain features present.
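The two moving parts described above can be sketched like this. The actual height formula isn't shown in the post, so the sine-based one here is a stand-in; the ring of reusable quad meshes is the part that lets 20 meshes cover the whole world:

```cpp
#include <cmath>
#include <cstddef>

// Stand-in for the procedural height formula (the real one isn't shown);
// any y = f(x, z) function plays the same role.
float groundHeight(float x, float z)
{
    return 10.0f * std::sin(x * 0.01f) + 5.0f * std::sin(z * 0.02f);
}

// Circular buffer of reusable quad-mesh slots, as described above: with
// N slots in rotation there's always a free mesh to lock and re-heightmap
// while the card is still drawing the previously submitted ones.
template <std::size_t N>
struct QuadMeshRing {
    std::size_t next = 0;
    std::size_t acquire() {          // index of the next mesh slot to reuse
        std::size_t i = next;
        next = (next + 1) % N;
        return i;
    }
};
```

Each frame you acquire a slot, fill its vertices from groundHeight() for the quad being drawn, and submit it; by the time the slot comes around again, the card has long finished with it.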

i too felt rather daunted when i first contemplated how to do this. i'm now on my 4th version of drawground(). and it doesn't look like i'll need a drawground5(). : )

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

For 2k polygons, I would just brute-force render the whole terrain. That is a pretty small amount, just around the size of a 32x32 heightmap (1024 squares * 2 tris per square). I didn't start looking for more efficient methods until I used maps of at least 500x500 in size.

New game in progress: Project SeedWorld

My development blog: Electronic Meteor

Kd-tree rendering is a good option. Voxels and fractals are also good options.

But I personally enjoy using tessellation in DX11 for my terrain rendering, since I can let the GPU do almost all the work.

2000 polys (tris) = 1000 quads.

say 5 textures to blend, one base, 4 splat.

and 4 tiles per texture.

and say 4 types of blendmaps (different blending patterns - alphamaps).

texture size 256x256.

one texture per quad, no wrap, etc.

you'd need 20 texture tiles and 4 alphamaps.

one pass in HLSL per quad, or 5 passes in retained mode (i think - double check me on these numbers; it may be 5 passes in HLSL too if you want to use any alphamap for any splat operation).

the size of your quads will then determine the granularity of your splats.

make the quads as small as needed to get the splat size you want.

if the resulting mesh is too many triangles, then it's time to think about LOD strategies, I'd say - such as a high-rez mesh around the camera surrounded by a ring of lower-rez meshes out to the horizon, or perhaps something as simple as only splatting out to 1/2 the distance to the far clip plane.
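The ring idea above reduces to picking a mesh resolution per terrain chunk from its distance to the camera. A hypothetical sketch (the thresholds and three-level scheme are illustrative, not from the post):

```cpp
// Choose an LOD level for a terrain chunk by camera distance.
int lodForChunk(float distance, float highRezRange, float farClip)
{
    if (distance <= highRezRange) return 0;    // full-rez mesh, fully splatted
    if (distance <= farClip * 0.5f) return 1;  // lower-rez ring, splats optional
    return 2;                                  // lowest rez, base texture only
}
```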

my drawground() routine doesn't do splatting. the game uses the fixed function pipeline (so far) for maximum compatibility, and there's so much vegetation in most areas you can't even see the ground texture. that whole one extra pass per texture thing put me off. i may end up doing a little bit of it though, where the ground is visible and boring (and not supposed to be boring, like endless dunes of sand).

in the end, splatting is just another technique for procedurally generating a texture for a ground quad. ground quad size and the number of times you repeat the texture across the quad will still determine the final resolution of the ground tiles. the method of texture generation is irrelevant.

another trick for getting small splats while using large underlying quads:

i used this technique in "Combat racer" to draw dirt roads etc:

1. draw the underlying ground mesh with the base texture.

2. instead of splatting a second texture onto a quad, you draw a quad any size you want, anywhere you want, just above the ground, and texture it with your splat texture, using either an alpha blend or an alpha test.

this way you only overdraw quads where you want a splat, splat quads can be any size, and at any location relative to the underlying ground quads. the only trick is to watch out for z-fighting if you draw too close to the ground mesh. you can also use a 3d mesh for the splat instead of a flat quad, to do effects like a slightly raised brick path, etc.

in the end, this is simply drawing special terrain as separate meshes over top the ground mesh.
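A hypothetical sketch of the decal-quad trick above: build a textured quad lifted a small epsilon above the sampled ground height so it doesn't z-fight with the ground mesh (the names and the epsilon value are illustrative):

```cpp
struct Vec3 { float x, y, z; };

// Build the 4 corners of a splat quad centered at (cx, cz), each corner
// lifted a small epsilon above the ground height sampled at that point.
void buildSplatQuad(Vec3 out[4], float cx, float cz, float halfSize,
                    float (*groundY)(float, float), float epsilon = 0.05f)
{
    const float xs[4] = { cx - halfSize, cx + halfSize, cx + halfSize, cx - halfSize };
    const float zs[4] = { cz - halfSize, cz - halfSize, cz + halfSize, cz + halfSize };
    for (int i = 0; i < 4; ++i)
        out[i] = { xs[i], groundY(xs[i], zs[i]) + epsilon, zs[i] };
}
```

Sampling the height per corner keeps the decal hugging sloped ground; a depth bias in the render state is an alternative to the epsilon.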

if you want to get really fancy, blend the textures yourself, then draw. that way you can blend a small splat onto a big underlying ground texture, and use big ground quads.

in the previous version of my current project i used a 10 channel real time weighted texture blender for ground textures, with a 10 texture cache.

you'd make a call to the texture blender, passing it, say, 4 textures (all i needed for that project) and 4 weights (blend factors, 0-100%). if it already had one like that blended in the cache, it returned that texture; otherwise it would blend the new texture and add it to the cache (LRU algo). then you lock a blank texture, copy the whole thing into it, set texture, and draw. texture data structures were not used for the blender or cache - just 2d arrays of RGB char structs. you only mess with textures, locking, etc. when you copy the results.

you could use a similar technique.

start with your underlying ground texture in the blending buffer.

have a routine that does a splat. you pass it the tex coords for the center or UL corner of the splat, the splat texture, and the alphamap. it does the blend:

buf = buf*(1 - blendfactor) + splat*blendfactor (is that right?) for r, g, b, for each texel.

make a call for each desired splat.

then copy the buffer to a texture and draw your quad.

you would have to draw quads one at a time, or in sections - say 4x4 quads, blending 16 textures first, then using them to texture the 16 quads in your 4x4-quad ground mesh.
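The per-texel blend above (buf = buf*(1 - blendfactor) + splat*blendfactor, with the factor read from the alphamap) can be sketched in one channel like this; the names are mine, not from the original post, and a real version would run over r, g and b of every texel in the splat's footprint:

```cpp
#include <cstdint>
#include <vector>

// Blend a splat texture into the working buffer, one byte per texel,
// using a per-texel alpha (0 = keep buffer, 255 = full splat).
void splatBlend(std::vector<uint8_t>& buf,
                const std::vector<uint8_t>& splat,
                const std::vector<uint8_t>& alpha)
{
    for (std::size_t i = 0; i < buf.size(); ++i) {
        const int a = alpha[i];
        buf[i] = static_cast<uint8_t>((buf[i] * (255 - a) + splat[i] * a) / 255);
    }
}
```

So yes, the formula in the post is the standard alpha lerp; the only subtlety is doing the math in an int wide enough to hold the intermediate product before dividing back down.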

if you wanted to get really crazy, you could just have one huge texture for the whole ground mesh. blend the whole thing in a big buffer first, then copy it to a huge empty texture, set texture, draw ground mesh, done! but the vidcard may not like your huge texture. i've tested using high-rez textures up to about 1024x1024 with no problems.

a 2048-triangle ground mesh is 32 quads across. at 256x256 texture size, that's 32x256 = 8K texels across. so a texture for the whole ground mesh would only be an 8K by 8K texture.

your statement of "low poly, <2000" is somewhat vague.

if in fact you have something more like a 50x50-quad ground mesh, that's 2500 quads (polys), or 5000 tris. at 256x256 texture size, it would be 50*256 = 12,800 texels (12K) across. the vidcard may or may not be able to handle a 12K by 12K texture. i come up with 576 meg for the texture size at 12K by 12K (12K x 12K x 4 bytes).

me personally, i'd probably write a blender/splatter thing like i described that does a one-quad texture, and just draw each quad. if splat patterns repeat, i'd use a cache too.

gets you everything you want at the price of a single texture lock per splat pattern (a pattern of splats on a quad, not a single splat), and a single mesh lock per quad draw (remember, you have to height map each quad before drawing it).

i'd use a circular buffer to hold blended textures, a circular buffer of empty D3DXtextures to copy the blend results into, and a circular buffer of ground quads to height map.

for storing the splat info, i'd probably use a sparse matrix, an array of structs with: quad coords of the splat, texel coords of the center of the splat in the quad, splat texture ID #, and alpha map ID #.

to draw a quad, you'd go through the sparse matrix and look for records with matching quad coords. for each match, use the texel coords and the texture and alphamap ID#s to call your splat blend routine. copy the result to a texture, then set texture, height map the quad, and draw.

sorting the sparse matrix on quad x,z and stopping your search once you've passed the x,z of the quad in question is the standard optimization for this type of search.
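The record layout and the early-out search just described could look like this (field names are mine; the layout follows the description above):

```cpp
#include <vector>

// One splat record in the sparse matrix.
struct SplatRec {
    int quadX, quadZ;     // which ground quad the splat lands on
    int texelU, texelV;   // splat center within the quad's texture
    int textureId, alphaId;
};

// Gather every splat for one quad from a list sorted on (quadX, quadZ),
// stopping as soon as the scan passes the quad in question.
std::vector<SplatRec> splatsForQuad(const std::vector<SplatRec>& sorted,
                                    int qx, int qz)
{
    std::vector<SplatRec> out;
    for (const auto& s : sorted) {
        if (s.quadX > qx || (s.quadX == qx && s.quadZ > qz))
            break;                              // passed the quad: stop early
        if (s.quadX == qx && s.quadZ == qz)
            out.push_back(s);
    }
    return out;
}
```

A binary search (std::lower_bound with the same lexicographic ordering) would skip the leading records too, but for a few hundred splats the linear scan with an early break is already cheap.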

wow!! i think i may just have to build that, sounds cool! <g>.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

This topic is closed to new replies.
