LOD determination

3 comments, last by Nairou 16 years, 2 months ago
In a 3D scene, is LOD typically determined entirely by an object's distance from the player camera? For example, when traversing the scene graph to select the geometry to be rendered, do you determine then and there which geometry LOD to use based on raw distance alone? Or are there other factors that make the LOD determination less simple than it sounds? I know some have suggested taking other variables into account, such as object movement speed or reflection distance, but that is still data available to the scene graph, not something you have to wait and determine later in the rendering process.

I was originally starting to think that the renderer would need to query each object and let it select its own proper geometry LOD based on certain variables. Not only did that seem like overkill, but I started to wonder whether there was really any reason to give objects the option. If LOD is based only on distance, then you can just calculate it and know ahead of time which LOD you want from each object.

Any thoughts?
Distance is a good start, but on its own it makes a poor measure (unless all objects have identical world-space size). For instance, a needle pin and a supertanker, both consisting of say 2000 vertices, would require very different LOD values. What usually works better is the size of the object on the screen (its size in screen space). A fast approximation is to divide the distance to the object by the size of its bounding volume. A more elaborate test would take the view direction into consideration or compute the projected size (e.g. a candle seen directly from above or below can probably be rendered with fewer vertices than when seen from the side). It is also possible to count the number of fragments the rendered object produces every nth frame (for instance with the ARB_occlusion_query extension) and derive a LOD value from that.
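A minimal sketch of the fast distance/bounding-size ratio described above. The thresholds and function name are illustrative assumptions, not from the post; a smaller ratio means the object covers more of the screen and deserves more detail:

```cpp
// Hypothetical sketch: divide camera distance by the bounding-sphere
// radius. A smaller ratio means the object looms larger on screen.
// The threshold values below are purely illustrative.
int selectLod(float distanceToCamera, float boundingRadius)
{
    float ratio = distanceToCamera / boundingRadius;

    if (ratio < 10.0f) return 0;   // large on screen: full detail
    if (ratio < 50.0f) return 1;   // medium detail
    return 2;                      // small on screen: lowest detail
}
```

With this metric, the needle pin and supertanker example works out naturally: at the same 10-unit distance, a pin with a tiny bounding radius lands in the lowest LOD while the tanker stays at full detail.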
In my engine, I have a LODSwitch node. This node basically loads N other nodes (meshes, particle systems, billboards, whatever) and a set of rules.
Then during the update pass, the switch node applies the rules and selects the correct LOD node. When the render visitor is called, the correct node is given to the renderer.

The advantage is that each LOD set can have its own rules. If you only take distance into account, a car's LOD switch distances will be closer than a building's, for instance (since a car is smaller on screen than a building, you won't notice if you start using lower LODs sooner).
I let the artists specify the rules for each LOD. It will always look better than using any sort of equation. And it's also faster :)

In my case, rules are simply distances to camera.
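The switch-node idea above could be sketched like this. The `LodSwitch` and `LodRule` names are hypothetical stand-ins, not the poster's actual classes, and the child nodes are simplified to integers:

```cpp
#include <vector>
#include <cstddef>

// Hypothetical per-LOD distance rule: use this LOD while the camera
// is no farther than maxDistance.
struct LodRule {
    float maxDistance;
};

// Hypothetical switch node holding N children and one rule per child,
// as described in the post. Rules are assumed sorted by maxDistance.
struct LodSwitch {
    std::vector<int>     children;  // stand-ins for mesh/billboard nodes
    std::vector<LodRule> rules;

    // Called during update: pick the first LOD whose rule accepts the
    // current camera distance; fall back to the coarsest LOD.
    int select(float cameraDistance) const {
        for (std::size_t i = 0; i < rules.size(); ++i)
            if (cameraDistance <= rules[i].maxDistance)
                return children[i];
        return children.back();
    }
};
```

This keeps each asset's artist-tuned distances independent: a car might switch at 20 and 60 units while a building switches at 100 and 400.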
Distance to camera doesn't work too well, IMO.
You need to consider the FOV (i.e. LODs based on distance alone wouldn't look too good
through a sniper rifle sight, since the object is magnified by the narrow FOV but stays at the same camera distance).

Also, one could argue that screen resolution makes a difference:
LOD model X might look very good at 640x480 but terribly low-poly at 1920x1080.

I've always used the projected size as the LOD selector; this "solves" both of the issues above.
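One way to estimate projected size is to convert the bounding sphere's angular size into screen pixels, which makes both the FOV and the resolution part of the selection. This is a sketch under assumed names and illustrative thresholds, not the poster's actual code:

```cpp
#include <cmath>

// Hypothetical estimate of an object's on-screen diameter in pixels,
// from bounding radius, distance, vertical FOV, and screen height.
float projectedSizePixels(float boundingRadius, float distance,
                          float verticalFovRadians, float screenHeightPixels)
{
    // Angle subtended by the bounding sphere, as a fraction of the
    // vertical field of view, scaled to the screen height in pixels.
    float angularSize = 2.0f * std::atan(boundingRadius / distance);
    return (angularSize / verticalFovRadians) * screenHeightPixels;
}

// Illustrative thresholds: more pixels covered means more detail.
int selectLod(float pixels)
{
    if (pixels > 250.0f) return 0;  // full detail
    if (pixels > 60.0f)  return 1;  // medium
    return 2;                       // lowest
}
```

Zooming a sniper scope shrinks `verticalFovRadians`, which inflates the pixel estimate and promotes the object to a higher LOD; rendering at a taller resolution does the same, addressing both objections above.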
Awesome info, thanks! I hadn't thought about calculating the projected size, but that sounds perfect.

The projected size sounds like it can also be calculated during the scene graph traversal, rather than during rendering, so that answers the other part of my question about when LOD is determined and who determines it.

ndhb mentioned counting fragments as another method, which sounds like an in-renderer LOD method. I'm a little curious why one would use it if you have to wait until render time to determine which LOD to use.

I'd like to perform all LOD determination at the scene graph stage (before rendering), but I want to make sure I'm not missing something that would require LOD determination within the renderer to pull it off.

This topic is closed to new replies.
