Pilpel

Quadtrees for terrain culling and more terrain questions


I wanted to start reading this article on quadtrees but then I saw it was published almost 15 years ago.

 

- Is it still a valid tutorial? I want to use quadtrees to frustum cull parts of my terrain.

- Are there any more uses for quadtrees?

- In my generateTerrain() function, where I insert all the vertices into a buffer object, I could generate normals (and UVs) as well, but is that really necessary? Or is it preferable to let the shader generate these attributes?


The first use for a quadtree I can think of, in addition to your link, is 2D collision detection. Wikipedia has a small list of other uses as well (here). I think quadtrees for 2D are pretty common, though I could be wrong. A more up-to-date article on quadtree terrain (in C#, but you can translate it) is here.

 

There are more advanced terrain culling/LOD systems you can use, but I don't know their exact details. The node-based LOD found on Rastertek has worked for my needs and was pretty easy to implement: I gave each terrain node/chunk a simple bounding box and tested those boxes against the frustum to decide which chunks to render. It may not be as efficient as other techniques, but I didn't have any noticeable problems.
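For illustration, here's a minimal sketch of that per-chunk bounding-box test. All the type and function names are made up (not from Rastertek), and it assumes the frustum has already been extracted as six inward-facing planes:

```cpp
#include <array>

// Hypothetical minimal types; a real engine would use its own math library.
struct Vec3 { float x, y, z; };

// Plane in the form n.x*x + n.y*y + n.z*z + d = 0, with the normal
// pointing toward the inside of the frustum.
struct Plane { Vec3 n; float d; };

struct AABB { Vec3 min, max; };

// A chunk survives culling unless its box lies entirely behind one of the
// six frustum planes. We test the "positive vertex" (the corner farthest
// along the plane normal): if even that corner is behind the plane, the
// whole box is.
bool boxIntersectsFrustum(const AABB& box, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        Vec3 v{
            p.n.x >= 0.0f ? box.max.x : box.min.x,
            p.n.y >= 0.0f ? box.max.y : box.min.y,
            p.n.z >= 0.0f ? box.max.z : box.min.z,
        };
        if (p.n.x * v.x + p.n.y * v.y + p.n.z * v.z + p.d < 0.0f)
            return false; // fully outside this plane: cull the chunk
    }
    return true; // inside or intersecting the frustum
}
```

You'd extract the six planes from your view-projection matrix each frame and run this test once per chunk (or per quadtree node).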

 

I generate normals when I create the terrain and just store them in the buffer. If the terrain isn't changing, there doesn't seem to be a reason to regenerate the normals each frame in a shader. In the case where my terrain did change (when using a simple brush), I just recalculated the normals and updated the buffer.
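A sketch of what that bake-time step can look like for a regular heightmap grid, using central differences. The function and parameter names are hypothetical; gridScale is assumed to be the world-space spacing between neighboring height samples:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Compute one normal per heightmap sample via central differences.
std::vector<Vec3> generateNormals(const std::vector<float>& height,
                                  int w, int h, float gridScale) {
    auto at = [&](int x, int z) {
        // Clamp at the borders so edge vertices still get a normal.
        x = std::max(0, std::min(x, w - 1));
        z = std::max(0, std::min(z, h - 1));
        return height[z * w + x];
    };
    std::vector<Vec3> normals(w * h);
    for (int z = 0; z < h; ++z) {
        for (int x = 0; x < w; ++x) {
            // Slope along X and Z; the normal of the heightfield
            // y = f(x, z) is (-df/dx, 1, -df/dz), normalized.
            float dx = (at(x + 1, z) - at(x - 1, z)) / (2.0f * gridScale);
            float dz = (at(x, z + 1) - at(x, z - 1)) / (2.0f * gridScale);
            Vec3 n{-dx, 1.0f, -dz};
            float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            normals[z * w + x] = {n.x / len, n.y / len, n.z / len};
        }
    }
    return normals;
}
```

After a brush edit you'd rerun this only over the touched region and update that range of the buffer, rather than rebaking the whole terrain.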

Yes, quadtrees are still relevant, though not necessarily in their basic form. Using them well requires a lot of tuning and minor algorithmic tweaks, though that will likely only matter if you're making something particularly big and complex.

This can be combined with the technique that AnnaMarie mentions: take the view frustum, query the quadtree, and determine which chunks to draw. Comparing every chunk will work if you have a particularly small set of chunks (in fact, that'll probably be _faster_ than a quadtree, because big-O only really tells the story for large values of N!) but it will collapse if you have a large or infinite terrain.
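A minimal sketch of that query. For brevity it uses an axis-aligned rectangle as a stand-in for the frustum's 2D footprint; a real implementation would test the actual frustum, and all names here are hypothetical:

```cpp
#include <vector>

// 2D footprint of a terrain region (the quadtree only partitions X/Z).
struct Rect { float minX, minZ, maxX, maxZ; };

bool overlaps(const Rect& a, const Rect& b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

struct QuadNode {
    Rect bounds;
    int chunkId = -1;               // leaf payload: a renderable chunk
    std::vector<QuadNode> children; // empty for leaves
};

// Walk the tree, descending only into nodes whose bounds touch the view
// region. Skipping whole subtrees is the entire point of the quadtree:
// the number of nodes visited stays small even for huge terrains.
void queryVisible(const QuadNode& node, const Rect& view,
                  std::vector<int>& out) {
    if (!overlaps(node.bounds, view))
        return; // whole subtree is outside the view; prune it
    if (node.children.empty()) {
        out.push_back(node.chunkId);
        return;
    }
    for (const QuadNode& child : node.children)
        queryVisible(child, view, out);
}
```

The collected chunk IDs are then the only draw calls you issue that frame.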

Quadtrees by themselves only deal with the top-down 2D layout of the terrain, and only make sense with certain world layouts/designs. More advanced systems want to take occlusion or vast differences in height into account (e.g. a mountain may block rendering of a large and expensive forest, and your world design may even need to be built around those kinds of occlusions for perf reasons). Some of those advanced solutions build off of a quadtree-like approach (or spatial hashing or the like), while others do something else entirely, e.g. voxelization of chunks (which can be GPU-parallelized much better than a tree data structure).

... short version: yes, quadtrees are still fine, depending on your specific needs, which only you are in a position to evaluate, profile, and test.

There aren't many other (good) uses that I know of. Using them for 2D collision testing _works_ but is very far from the state of the art; especially in scenes with lots of movement, updating a quadtree every frame is possibly going to be too expensive for use with physics. One use that does come to mind, though, is some forms of hierarchical path-finding.

Regarding the generation of normals: if your target hardware is recent enough, I'd recommend doing that at shader time. It can drastically reduce the GPU memory usage - and hence the GPU memory bandwidth requirements - of your terrain. Memory bandwidth is by far the most precious and limited resource we have today on modern CPUs and GPUs; calculating normals at runtime each frame can be practically free in comparison. Of course, __profile your target hardware__ and find out for sure which technique works best for you! It'll also come down a lot to how you organize the raw terrain data in your buffers and how well the data plays with GPU caches (e.g. you want vertices to be processed locally amongst neighbors so terrain texture/data reads are exceedingly likely to be warm in the cache).

If you're doing your first terrain system, though, keep it simple. You can always move the normal generation from baking/loading time to GPU runtime later if you need to.
