
# Level of detail confusion


## Recommended Posts

Hello,

I'm a little confused about level-of-detail algorithms. I've been trying to write a small procedural landscape generator (marching cubes in the geometry shader), and I'm aiming at a view distance of at least five kilometers, so clearly I need some sort of LOD scheme.

Is it true that a triangle viewed at twice the distance appears roughly half as large on screen? Based on that assumption I whipped up a quick Python script and calculated that a view distance of two kilometres, with a base resolution of one triangle per 0.1 metre in world coordinates (so near the camera the terrain has a resolution of roughly 0.1 metres between vertices), would be achievable with around 350 000 vertices.

I don't have much experience in computer graphics, so I'm wondering: is this estimate completely off, or is it reasonable? I'm really struggling to picture all this 3D stuff mentally, and I could use some guidance on how level of detail actually works.
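For what it's worth, a back-of-envelope version of that estimate can be scripted. This is a sketch, not the original script: the 10 m full-resolution radius is my assumption, and the scheme simply halves world-space resolution each time the distance from the camera doubles, which keeps screen-space triangle size roughly constant.

```python
import math

def lod_vertex_estimate(view_distance=2000.0,   # metres
                        base_resolution=0.1,    # metres between vertices near the camera
                        full_res_radius=10.0):  # assumed radius kept at full resolution
    """Rough vertex budget for concentric LOD rings on flat terrain."""
    # full-resolution disc around the camera: ~one vertex per res*res cell
    total = math.pi * full_res_radius ** 2 / base_resolution ** 2
    near, res = full_res_radius, base_resolution * 2
    while near < view_distance:
        far = min(near * 2, view_distance)        # each ring doubles in radius...
        ring_area = math.pi * (far ** 2 - near ** 2)
        total += ring_area / res ** 2             # ...and halves in resolution
        near, res = far, res * 2
    return int(total)

print(lod_vertex_estimate())  # a few hundred thousand vertices
```

With these (assumed) parameters the result lands in the low hundreds of thousands, the same ballpark as the 350 000 figure above; the exact number is very sensitive to how far out you keep full resolution.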

##### Share on other sites
> is this estimate completely off or is it reasonable?

Neither. Your first stop is to brute-force it, for prototyping reasons, then iterate. So far, my main problem with terrain has been (1) the workflow and (2) spending too much time on something that later proved unnecessary.

Look, 64*1024 vertices were not a problem... on a GeForce 4. Much less on a 6. So brute-force it first. Seriously.

##### Share on other sites

> is this estimate completely off or is it reasonable?
>
> Neither. Your first stop is to brute-force it, for prototyping reasons, then iterate. So far, my main problem with terrain has been (1) the workflow and (2) spending too much time on something that later proved unnecessary.
>
> Look, 64*1024 vertices were not a problem... on GeForce 4. Much less on 6. So brute-force it first. Seriously.
The brute-force approach didn't work. I ran out of video memory (on a 2 GB card, lol): without level of detail I hit four hundred million vertices. I think I'm going to strip it down to a minimum, for instance start by rendering a tiny grid without worrying about shading (perhaps just displace the vertices with noise), then increase the grid's area and see which combinations of tessellation and mesh reduction preserve image quality at particular distances. 3D graphics development is quite an amazing domain!

With 350 kVerts?

##### Share on other sites
No, 400 million. I "brute-forced" it without using any level of detail, so now I'm trying to determine which method to use to reduce the scene complexity without losing too much detail. I did a few tests and established that my graphics card can draw about 1.6 million vertices in real time (60 fps) with vertex transforms and some basic pixel shading (such as diffuse + ambient lighting). I'm trying to get an idea of how far I can go in terms of triangle count while retaining the ability to actually shade things (since GPUs have used a unified architecture for a while now). I know many people think vertex count is not very relevant, and I agree, but I think it's good to have a rough idea of the hardware's capabilities.

##### Share on other sites

> No, 400 million. I "brute-forced" it without using any level of detail, so now I'm trying to determine which method to use to reduce the scene complexity without losing too much detail. I did a few tests and established that my graphics card can draw about 1.6 million vertices in real time (60 fps) with vertex transforms and some basic pixel shading (such as diffuse + ambient lighting). I'm trying to get an idea of how far I can go in terms of triangle count while retaining the ability to actually shade things (since GPUs have used a unified architecture for a while now). I know many people think vertex count is not very relevant, and I agree, but I think it's good to have a rough idea of the hardware's capabilities.

400 million vertices need about 12 GiB of memory if they are just position data, so it is no wonder you ran out: 400 000 000 * 4 * 8 = 12 800 000 000 bytes ≈ 11.92 GiB. If you add normals, that's an additional 400 million * 3 * 8 bytes on top.
You definitely want to LOD this, or reduce the resolution to something manageable.
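The arithmetic above, spelled out (the 4 * 8 figure assumes positions stored as four double-precision components; a tighter, more typical layout of three 4-byte floats per position is roughly an eighth of that):

```python
verts = 400_000_000

# position data as a double-precision vec4 (4 components * 8 bytes), in GiB
heavy = verts * 4 * 8 / 2**30
# more typical layout: 3 single-precision floats per position, in GiB
light = verts * 3 * 4 / 2**30

print(round(heavy, 2))  # ~11.92 GiB
print(round(light, 2))  # ~4.47 GiB, still far too much for a 2 GB card
```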

Was the test you did done with only a single vertex buffer holding all 1.6 million verts? Try again with the verts batched into buffers of around 10K-25K verts and you will find you can draw more. Batching can improve throughput quite dramatically, as cache lines on the GPU are used more effectively. It's the copy from system memory to VRAM that can dramatically slow down your drawing code for vertex buffers and textures.
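The batching idea, sketched in Python (the 16 384-vertex batch size is just one value inside the 10K-25K range suggested above, and `make_batches` is a hypothetical helper, not a real graphics API; each tuple would drive one draw call over one smaller vertex buffer):

```python
def make_batches(vertex_count, batch_size=16_384):
    """Split a big mesh into (first_vertex, count) ranges, one per draw call."""
    batches = []
    start = 0
    while start < vertex_count:
        end = min(start + batch_size, vertex_count)
        batches.append((start, end - start))  # e.g. glDrawArrays-style range
        start = end
    return batches

# 1.6 million vertices -> 98 batches of at most 16K verts each
print(len(make_batches(1_600_000)))
```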

##### Share on other sites
Whoops.
Excuse me, I didn't quite get the bigger picture.
I think you will find ClipMaps useful. They basically involve working with a fixed set of vertices which are dynamically mapped to a texture containing the height samples you need for a certain set of triangles.
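A minimal sketch of that fixed-grid idea (the function name, the 255-vertex grid size, and the base spacing are all hypothetical; grid spacing doubling per clipmap level is the usual convention). The vertex positions are derived from the camera each frame, snapped to the grid spacing so they don't swim; only the height texture they sample is updated:

```python
def clipmap_vertex(camera_x, camera_z, i, j, level, n=255, spacing=0.1):
    """World-space (x, z) of grid vertex (i, j) in one clipmap ring level."""
    s = spacing * (2 ** level)          # spacing doubles per level
    # snap the grid origin to multiples of the spacing, centred on the camera
    ox = (camera_x // s) * s - (n // 2) * s
    oz = (camera_z // s) * s - (n // 2) * s
    return ox + i * s, oz + j * s
```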

##### Share on other sites
> I think you will find ClipMaps useful. They basically involve working with a fixed set of vertices which are dynamically mapped to a texture containing the height samples you need for a certain set of triangles.

Indeed, ClipMaps seem useful and should work very well if I use chunked processing, for instance. Much better than my initial idea of not moving the camera at all and instead offsetting my density function by the camera position.

I'm going to look at the diamond-square algorithm to generate some terrain first. It has intrinsic tessellation, so it will hopefully be simple to implement LOD on top of it and understand how it works.
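For reference, a minimal diamond-square sketch in Python (it assumes a (2^n + 1)-sized grid with zeroed corners; the halving of the noise amplitude each pass is an arbitrary but common choice):

```python
import random

def diamond_square(n, roughness=1.0, seed=0):
    """Generate a (2**n + 1) x (2**n + 1) heightmap via diamond-square."""
    size = 2 ** n + 1
    rng = random.Random(seed)
    h = [[0.0] * size for _ in range(size)]
    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # diamond step: centre of each square = average of its 4 corners + noise
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half][x - half] + h[y - half][x + half] +
                       h[y + half][x - half] + h[y + half][x + half]) / 4
                h[y][x] = avg + rng.uniform(-scale, scale)
        # square step: edge midpoints = average of up to 4 diamond neighbours + noise
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                total, count = 0.0, 0
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        total += h[ny][nx]
                        count += 1
                h[y][x] = total / count + rng.uniform(-scale, scale)
        step, scale = half, scale * 0.5   # halve step and noise amplitude per pass
    return h

grid = diamond_square(4)  # 17 x 17 heightmap
```

Note how each pass halves the step size: the intermediate passes are exactly the coarser tessellation levels, which is what makes this algorithm a convenient playground for LOD experiments.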
