Distant Terrain Rendering

That's true, I hadn't thought of using vertex streams yet, thanks.

Regarding the index buffers, I have one per level which is used for every chunk in that level, so the footprint is relatively low. I am having to use 32-bit index buffers for the minute though, as some of my chunks are 257x257, meaning over 65,536 vertices to index. It's been a couple of years since I've done any work on this and, at that time, it was considered best practice to split up index buffers if they went over 65,536. I may do that at some point, because using 32-bit indices for a buffer only slightly over the 16-bit limit seems like a bit of a waste of memory.
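For what it's worth, a minimal sketch of the index-format decision for a regular grid chunk (the function and type names are illustrative, not from this thread) - a 257x257 chunk has 66,049 vertices, which is why 16-bit indices can't address it:

```cpp
#include <cstdint>
#include <vector>

// Build triangle-list indices for an N x N quad chunk ((N+1) x (N+1) vertices)
// and pick the smallest index format that can address every vertex.
struct ChunkIndices
{
    bool use16Bit;                   // true if a 16-bit index buffer is enough
    std::vector<std::uint32_t> data; // convert to uint16_t before upload if use16Bit
};

ChunkIndices BuildChunkIndices(std::uint32_t quadsPerSide)
{
    const std::uint32_t vertsPerSide = quadsPerSide + 1;
    const std::uint32_t vertexCount  = vertsPerSide * vertsPerSide;

    ChunkIndices out;
    out.use16Bit = (vertexCount <= 65536);   // 16-bit indices address 0..65535
    out.data.reserve(quadsPerSide * quadsPerSide * 6);

    for (std::uint32_t z = 0; z < quadsPerSide; ++z)
    {
        for (std::uint32_t x = 0; x < quadsPerSide; ++x)
        {
            const std::uint32_t i0 = z * vertsPerSide + x;   // top-left
            const std::uint32_t i1 = i0 + 1;                 // top-right
            const std::uint32_t i2 = i0 + vertsPerSide;      // bottom-left
            const std::uint32_t i3 = i2 + 1;                 // bottom-right
            out.data.insert(out.data.end(), { i0, i2, i1,  i1, i2, i3 });
        }
    }
    return out;
}
```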

I can't really compress the heightmap as I need the values to be exact, and DXT is a lossy compression IIRC (although I appreciate it would be lossy whether it's decompressed in the vertex shader or on the CPU). The reason for this is that I need the ability to quickly pull out arbitrary heights from the map in order to a) create sporadic splatted textures and b) calculate accurate sloping (for the player(s)).
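As an aside, a minimal sketch of the kind of CPU-side lookup that lossy compression would interfere with - bilinear sampling of an uncompressed float heightmap plus a central-difference slope estimate (the class layout and world scale here are assumptions for illustration):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

class Heightmap
{
public:
    Heightmap(const std::vector<float>& heights, int size, float worldScale)
        : m_heights(heights), m_size(size), m_scale(worldScale) {}

    // Height at an arbitrary (worldX, worldZ), bilinearly interpolated.
    float SampleHeight(float worldX, float worldZ) const
    {
        const float fx = std::clamp(worldX / m_scale, 0.0f, float(m_size - 1));
        const float fz = std::clamp(worldZ / m_scale, 0.0f, float(m_size - 1));
        const int   x0 = int(fx), z0 = int(fz);
        const int   x1 = std::min(x0 + 1, m_size - 1);
        const int   z1 = std::min(z0 + 1, m_size - 1);
        const float tx = fx - x0, tz = fz - z0;

        const float h00 = At(x0, z0), h10 = At(x1, z0);
        const float h01 = At(x0, z1), h11 = At(x1, z1);
        const float top    = h00 + (h10 - h00) * tx;
        const float bottom = h01 + (h11 - h01) * tx;
        return top + (bottom - top) * tz;
    }

    // Approximate slope (rise over run) by central differences,
    // e.g. for deciding whether the player can stand here.
    float SampleSlope(float worldX, float worldZ) const
    {
        const float d  = m_scale;
        const float dx = SampleHeight(worldX + d, worldZ) - SampleHeight(worldX - d, worldZ);
        const float dz = SampleHeight(worldX, worldZ + d) - SampleHeight(worldX, worldZ - d);
        return std::sqrt(dx * dx + dz * dz) / (2.0f * d);
    }

private:
    float At(int x, int z) const { return m_heights[z * m_size + x]; }

    std::vector<float> m_heights; // row-major, m_size * m_size samples
    int   m_size;
    float m_scale;                // world units between adjacent samples
};
```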

Thanks for the advice (I realise this thread has now drifted slightly from its title).
I haven't actually implemented such a scheme, but as I was reading through all of these great replies, it occurred to me that dual paraboloid mapping would work well in this circumstance. The far-away terrain mesh is inherently overly complex - or put another way, it is highly tessellated with respect to the viewpoint.

This is exactly what paraboloid mapping needs: it would allow you to render the distant terrain into the paraboloid maps, which means two render passes with highly complex terrain. However, since the terrain is far away, you would only need to render it once in a while due to the spatial coherence in the viewer's position with respect to the terrain. So the two extra rendering passes would be amortized over several hundred frames or more, and there wouldn't be any tessellation issues with the DPM generation.

Has anybody tried this out before? The distant geometry rendering basically becomes a simple lookup (similar to the cube mapping example mentioned above), and you could dynamically decide the near and far clipping planes to allow for very good depth resolution in the mid range. I'll have to add this to the list of things to try out...
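For reference, the standard dual-paraboloid projection maths behind that lookup (a hedged sketch, not code from this thread) - a normalized direction picks one of the two maps and a UV within it:

```cpp
// Map a normalized direction to UV coordinates in either the front (+Z) or
// back (-Z) paraboloid map; the distant-terrain colour would then be fetched
// from the corresponding pre-rendered texture.
struct Vec3 { float x, y, z; };
struct ParaboloidUV { float u, v; bool frontMap; };

ParaboloidUV DirectionToParaboloidUV(Vec3 dir) // dir must be normalized
{
    ParaboloidUV out;
    out.frontMap = (dir.z >= 0.0f);

    // Reflect back-facing directions into the back map's hemisphere.
    const float z = out.frontMap ? dir.z : -dir.z;

    // Standard paraboloid projection: divide by (1 + z), then remap to [0,1].
    const float denom = 1.0f + z;
    out.u = dir.x / denom * 0.5f + 0.5f;
    out.v = dir.y / denom * 0.5f + 0.5f;
    return out;
}
```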
Quote:Original post by stephanh
I don't know how it's done in Oblivion, never played it.
But as you said, either use a terrain LOD scheme which handles very large datasets well (e.g. geoclipmapping), or render the far-away landscape into a cubemap which gets updated every now and then (every 20 frames or so...). Distant level geometry was done that way in Shadow of the Colossus (PS2).

The Making Of "Shadow Of The Colossus"

Edit: typo geomipmapping -> geoclipmapping




Hmmm, cube map OR skydome (if there's an issue with the corners)

That's an interesting idea for optimising even intermediate-range scenery for long views...

--------------------------------------------
Ratings are Opinion, not Fact
Quote:Original post by RobMaddison
Some great replies...

For my requirements (4km x 4km), I've decided to go with a hybrid of chunked LOD and brute force. Here's my plan:

All in all, I'm estimating static geometry and base textures taking up ~150MB. Is this far too much for the Xbox 360 (which is where I ultimately would like this to go)? It has 512MB of system memory so I assume it's fine.


Just had a look at my old geomipmap terrain and plugged in a pieced-together 4096x4096 version of the Oblivion map. Total memory usage is 100MB with a chunk size of 64x64 and no textures.

Things to do:

- just store height values in the buffers and offsets for the chunks (you'll probably have to use 4-byte floats anyway, so unfortunately you waste about 3 bytes per vertex). Put them together in a shader.

- for the tree, don't use pointers or textbook tree traversal - ugly overhead. Instead, exploit the fact that your tree will be complete. Store it as an array (level by level). Moving to the child/parent is then a matter of a simple bit shift on the current index, and "collecting" all leaf nodes is a matter of calculating the offset and count, then just fetching them (in nice, cache-friendly order, as they are all in one consecutive block) - see the sketch after this list.

- just use spheres for culling. They aren't precise, but they are good enough and fast.
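A minimal sketch of the flat, complete-tree layout described in the second point (a quadtree here, so parent/child moves shift by two bits; the struct and field names are illustrative, not anyone's actual code):

```cpp
#include <cstdint>
#include <vector>

// Each level is stored contiguously, so parent/child moves are bit shifts and
// all leaves under a node form one consecutive block (a Morton-like ordering).
struct TerrainNode
{
    float centreX, centreZ;
    float boundingRadius;   // for the sphere culling mentioned in the last point
};

struct FlatQuadtree
{
    int levels;                                   // e.g. 6 levels for a 32x32 leaf grid
    std::vector<std::vector<TerrainNode>> nodes;  // nodes[level][indexInLevel]

    // Index of the parent of node i (one level up).
    static std::uint32_t Parent(std::uint32_t i)     { return i >> 2; }

    // Index of the first of the four children of node i (one level down).
    static std::uint32_t FirstChild(std::uint32_t i) { return i << 2; }

    // All leaves under node i at 'level' occupy [first, first + count) on the
    // leaf level - one consecutive, cache-friendly block.
    void LeafRange(int level, std::uint32_t i,
                   std::uint32_t& first, std::uint32_t& count) const
    {
        const int shift = 2 * (levels - 1 - level); // 2 bits per level of descent
        first = i << shift;
        count = 1u << shift;
    }
};
```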

Rendering the whole thing (view is corner to corner) is 2.5 million triangles at 70fps (7900GT). Fiddling with the LOD threshold etc. and reducing that to a sane amount should be good to go. Though I wouldn't recommend geomipmapping for really large terrain with "infinite" view distance.

Also, did you consider a method to hide the gaps between different LODs? Skirts seem to be pretty widely accepted, though after stumbling over a Chinese-wall-like monster skirt from hell in Oblivion, I'm not so happy with them - any V- or U-shaped terrain can look pretty bad. On the other hand, I hate having my code automatically create index buffers for all the linking pieces, and it means a lot of extra render calls.
f@dz
http://festini.device-zero.de
I've changed my original terrain renderer to use the Oblivion heightmaps with procedurally generated texturing to determine which areas are grass/sand/rock/snow/water. The LOD method I'm using is completely different from my first attempt, as that was unnecessarily resource-hungry. Following are some shots:



The terrain is divided into 'patches', each corresponding to a 1024x1024 heightmap (as output by Oblivion's construction set), and each patch is further divided into 'cells' of size 128x128 (this size was used as it was the highest power of 2 that could be used with 16-bit index buffers after skirts were added to each side). LOD meshes are computed for each cell by generating a mesh for each of the heightmap mipmap levels, so the highest LOD is 128x128, then 64x64, and so on. There were 8 LOD meshes for each of the 896 cells, each of which was saved as a .x file in a Data folder - so quite a lot of them!
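(For reference, the 16-bit limit works out like this: a 128x128 cell has 129x129 = 16,641 grid vertices, and even with a duplicated border ring for the skirts (roughly another 4 x 129 ≈ 516) the total stays well under the 65,536 vertices a 16-bit index can address, whereas a 256x256 cell would already need 257x257 = 66,049 vertices before any skirt is added.)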

Each heightmap patch has its normals computed and saved in a 1024x1024 texture, which is used in the pixel shader's lighting function. Each heightmap patch also has two 'terrain textures' of size 512x512 (I'm going to try 256x256, as there was almost no quality loss from halving it from 1024 to 512), each color channel of which is used to represent a single terrain type (grass, rock, etc). I tried using bitfields so I could compress several terrain types into a single color channel, but the interpolation didn't quite work. The values of each channel are used as blending factors for the individual terrain textures in the pixel shader.
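One plausible shape for the code that fills such a blend texture from height and slope (the thresholds, channel assignments and names below are illustrative assumptions, not the actual rules used here):

```cpp
#include <algorithm>
#include <cstdint>

// One terrain type per colour channel; the channel value is the blend weight
// the pixel shader uses when mixing the detail textures.
struct BlendTexel { std::uint8_t grass, rock, sand, snow; }; // R, G, B, A

BlendTexel ClassifyTexel(float height, float slope,
                         float sandMaxHeight = 2.0f,
                         float snowMinHeight = 120.0f,
                         float rockMinSlope  = 0.7f)
{
    // Very simple rules: steep -> rock, low -> sand, high -> snow,
    // everything else -> grass.
    float rock  = std::clamp((slope - rockMinSlope) * 4.0f, 0.0f, 1.0f);
    float sand  = std::clamp((sandMaxHeight - height) * 0.5f, 0.0f, 1.0f);
    float snow  = std::clamp((height - snowMinHeight) * 0.1f, 0.0f, 1.0f);
    float grass = std::max(0.0f, 1.0f - rock - sand - snow);

    // Normalise so the weights sum to ~255 and interpolate cleanly.
    const float sum = std::max(rock + sand + snow + grass, 1e-5f);
    return { std::uint8_t(grass / sum * 255.0f),
             std::uint8_t(rock  / sum * 255.0f),
             std::uint8_t(sand  / sum * 255.0f),
             std::uint8_t(snow  / sum * 255.0f) };
}
```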

For visibility determination I'm using a ridiculously simple method involving a sphere intersection test between each cell and the view frustum - there is no quad tree of cells (although there could easily be), as looping through all 900 cells doing this very simple test is pretty quick.
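A minimal sketch of that per-cell test (plane extraction from the view-projection matrix is assumed to happen elsewhere, and the struct/function names are illustrative):

```cpp
// Test a cell's bounding sphere against the six frustum planes.
// Plane normals point inwards, so n.p + d >= 0 means "inside" that plane.
struct Vec3f  { float x, y, z; };
struct Plane  { Vec3f n; float d; };
struct Sphere { Vec3f centre; float radius; };

bool SphereInFrustum(const Sphere& s, const Plane frustum[6])
{
    for (int i = 0; i < 6; ++i)
    {
        const float dist = frustum[i].n.x * s.centre.x +
                           frustum[i].n.y * s.centre.y +
                           frustum[i].n.z * s.centre.z + frustum[i].d;
        if (dist < -s.radius)
            return false;    // completely behind one plane -> not visible
    }
    return true;             // intersects or is fully inside (conservative)
}

// Usage idea: loop over all ~900 cells each frame; the test is cheap enough
// that no quadtree is strictly needed at this cell count.
// for (const Cell& c : cells)
//     if (SphereInFrustum(c.bounds, frustumPlanes)) Draw(c);
```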

Water is done by drawing a very big quad over each cell. The quad is planar and has a Y value of 0, so using this method there is no way of getting rivers at different heights :( Implementing that would require something clever to be done to construct the vertices representing the surface of the water. The big water quad does a texture lookup to determine its alpha factor, which allows me to get a smooth transition between sand and water rather than the instant change you see in many older games.

Next on the to-do list is to improve the water by bump mapping it with a water-like texture which will change over time in some way so as to simulate waves/ripples/motion on the water surface. Once I add sky boxes I will do some very simple skybox reflection, but doing some kind of environment-based reflection involving local/global terrain will probably be more challenging considering the vast amount of terrain.

I tried implementing Screen Space Ambient Occlusion and made pretty good progress, but it began to really harm the framerate when I tried to get antialiasing working. I was using render-to-texture for the scene (you can't do antialiasing with render-to-texture, so you have to render to a really big texture and then shrink it down, which involves 4x more calls to your pixel shader function), storing linear Z values in the alpha channel, then passing this texture to an SSAO shader that operates on a full-screen quad. The results were kind of what SSAO should look like, apart from auras above some peaks, but I suspect large-scale terrain is not the best way to test an SSAO shader - a small enclosed scene with models, e.g. people, would be far better.

EDIT: See how much better a bit of bump mapping makes the water look? :) The screenshots don't show it, but the normal map is animated so as to simulate a rippling/wavy/choppy effect - the animation is done entirely in the pixel shader, which is parameterised by a 'water phase' variable (-1 to +1, incremented slightly every frame) that is used in some trig equations applied to the normals. I don't think I've got specular lighting working properly yet though - you can't see any at all when you're at the water's edge.


[Edited by - EvilDonut on July 7, 2008 8:52:26 PM]

