DeathFry

Geomipmapping and Index Buffers


Recommended Posts

Hello, I've been trying to implement my own geomipmapping. I know that by now there are more advanced techniques for CLoD and all, but I still want to grow and go at it step by step. I had a complete implementation that recreated the index buffers on each update, properly fixing cracks. Of course, since this was done way too often it killed performance, so I'm rewriting it to make it usable.

What I'm thinking of doing now is, at load time, create a vertex buffer covering the whole terrain. Then, for each LoD, I pre-calculate 16 short arrays holding the indices for each situation of the patch having lower-LoD neighbors, and store them in system memory. At run time I simply update the patch's LoD and its neighbors' LoDs, and then set the index buffer to the matching array. I haven't got it quite working yet, but I expect it to perform better than recalculating the index arrays on each update.

One of my questions is: is this the best approach? Having the index arrays pre-calculated and stored in system memory costs... well... memory, heh. I know even netbooks nowadays have over 512MB of RAM, but that's no excuse to go on wasting memory.

Another one: I'm using TriangleStrips. It seems to me that will help, since it reduces the size of the index arrays, and since I'm going to have a whole bunch of them stored in memory, I think they'll save memory over TriangleLists. In the previous version I was actually using TriangleLists; I switched to TriangleStrips this time, and I'm having some trouble with the math to calculate the new index arrays. With TriangleLists I had it all figured out, heh. So I'm wondering if the memory I'd save is worth the trouble.

Oh, and one limitation of this implementation is that a patch's neighbors can only differ from it by one LoD level. Is that a crippling limitation? Before, I could have a 65x65 patch sit next to a 5x5 patch.

Well, thanks in advance for your input!
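To make that concrete, here's a rough sketch of the lookup I have in mind; the class and member names are just placeholders, not my actual code:

// Sketch only: names are placeholders.
class PatchIndexCache
{
    const int NeighborConfigs = 16; // 2^4 combinations of N/E/S/W edges

    // cache[lod][mask] = pre-calculated indices for that situation.
    short[][][] cache;

    public void Build(int lodLevels)
    {
        cache = new short[lodLevels][][];
        for (int lod = 0; lod < lodLevels; lod++)
        {
            cache[lod] = new short[NeighborConfigs][];
            for (int mask = 0; mask < NeighborConfigs; mask++)
                cache[lod][mask] = BuildPatchIndices(lod, mask);
        }
    }

    // Each bool says whether that neighbor currently sits at a lower LoD.
    public short[] Select(int lod, bool north, bool east, bool south, bool west)
    {
        int mask = (north ? 1 : 0) | (east ? 2 : 0)
                 | (south ? 4 : 0) | (west ? 8 : 0);
        return cache[lod][mask];
    }

    // Placeholder for the actual index generation: walk the patch grid at
    // this LoD's step and stitch any edge whose bit is set in the mask.
    short[] BuildPatchIndices(int lod, int mask)
    {
        return new short[0];
    }
}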

I found that the CPU cost could be brought down to very reasonable levels (extremely low, in fact) with a couple of tricks; a sketch of the first one follows the list:
* Cache an index buffer for each patch, and only invalidate ones that are affected when an LoD changes.
* Limit the frequency at which LoD can change, especially downwards. In other words, keep track of the last time the LoD of a patch changed, and if it's under a threshold, don't change. Consider using a longer threshold value for decreasing the poly count of a patch.
* Prevent the LoD of a patch from changing when you're standing on it. This is usually coupled with locking the patch detail to maximum in this situation. Doing this really helps cut down on visual jitter as well.
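A minimal sketch of the first point, reusing the PatchIndexCache lookup sketched in the first post; all names here are placeholders rather than real code:

// Sketch of per-patch index caching with dirty flags.
class Patch
{
    public int Lod;                          // higher value = coarser here
    public bool IndicesDirty = true;         // needs its index array re-selected
    public Patch[] Neighbors = new Patch[4]; // N, E, S, W; entries may be null
    public short[] Indices;                  // index array currently in use
}

static class LodCache
{
    // Called whenever the LoD metric decides a patch should change level.
    public static void SetLod(Patch patch, int newLod)
    {
        if (patch.Lod == newLod)
            return;

        patch.Lod = newLod;

        // Only this patch and its four edge neighbors are affected, so only
        // those get invalidated; every other patch keeps its cached indices.
        patch.IndicesDirty = true;
        foreach (Patch n in patch.Neighbors)
            if (n != null)
                n.IndicesDirty = true;
    }

    // Re-selects indices only for patches that were actually flagged.
    public static void RebuildDirty(Patch[] patches, PatchIndexCache cache)
    {
        foreach (Patch p in patches)
        {
            if (!p.IndicesDirty)
                continue;   // untouched patches cost nothing per frame

            p.Indices = cache.Select(p.Lod,
                IsCoarser(p, 0), IsCoarser(p, 1),
                IsCoarser(p, 2), IsCoarser(p, 3));
            p.IndicesDirty = false;
        }
    }

    // True if the given neighbor exists and is at a coarser LoD.
    static bool IsCoarser(Patch p, int side)
    {
        Patch n = p.Neighbors[side];
        return n != null && n.Lod > p.Lod;
    }
}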

Quote:
Original post by Promit
* Cache an index buffer for each patch, and only invalidate ones that are affected when an LoD changes.


From this, what I understand is that I should only change/re-create the index array when the LoD changes. Is my interpretation correct? That is, if there is no LoD change, I leave it as it is and move on to the next patch.

But, in that case, I'll also have to be on the lookout for neighbor LoD changes, no? Because if a neighbor patch changes its LoD, then I have to fix the cracks.

Quote:
* Limit the frequency at which LoD can change, especially downwards. In other words, keep track of the last time the LoD of a patch changed, and if it's under a threshold, don't change. Consider using a longer threshold value for decreasing the poly count of a patch.


I'm at a loss on this one. Currently I'm doing something akin to what's described in de Boer's paper, in section 2.3.1.2, "Pre-calculating d". Your suggestion is to modify this threshold/algorithm into something that will help reduce the frequency of LoD changes?

Quote:
* Prevent the LoD of a patch from changing when you're standing on it. This is usually coupled with locking the patch detail to maximum in this situation. Doing this really helps cut down on visual jitter as well.


Hadn't considered this one! Good call ;)

I see that in one of your implementations you also allow neighboring patches to differ by more than one LoD level. Are there any particular benefits to that?

Quote:
Original post by DeathFry
Quote:
Original post by Promit
* Cache an index buffer for each patch, and only invalidate ones that are affected when an LoD changes.


From this, what I understand is that I should only change/re-create the index array when the LoD changes. Is my interpretation correct? That is, if there is no LoD change, I leave it as it is and move on to the next patch.

But, in that case, I'll also have to be on the lookout for neighbor LoD changes, no? Because if a neighbor patch changes its LoD, then I have to fix the cracks.
Yup. Basically you'll flag the patch and its 4 neighbors as dirty when the detail level changes.
Quote:
Quote:
* Limit the frequency at which LoD can change, especially downwards. In other words, keep track of the last time the LoD of a patch changed, and if it's under a threshold, don't change. Consider using a longer threshold value for decreasing the poly count of a patch.


I'm at a loss on this one. Currently I'm doing something akin to what's described in de Boer's paper, in section 2.3.1.2, "Pre-calculating d". Your suggestion is to modify this threshold/algorithm into something that will help reduce the frequency of LoD changes?
Nope, I'm telling you to manually ignore the LoD changes if they're happening too fast. The way I implemented this was to converge -- each patch stored a current LoD and a target LoD. Every update, I check how long it's been since the last time that patch changed detail level. If the time exceeds the minimum threshold (iirc something in the vicinity of 0.25 seconds works well), I step the current LoD one level closer to the target level, and mark it and all its neighbors for an index update. The benefits are that it slows down the rate at which indices are recomputed, and also cuts back massively on visual jitter, because the steps are more gradual and slower than if you simply snapped to the ideal detail.
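A rough sketch of that update, assuming the Patch sketch above gains a current LoD, a target LoD, and a timer (the names and the 0.25 second value are only illustrative):

// Sketch of the converge-toward-target approach.
static class LodConverge
{
    const float MinChangeInterval = 0.25f; // roughly the value mentioned above;
                                           // a longer interval could be used when
                                           // stepping toward a coarser level

    public static void Update(Patch patch, float dt)
    {
        patch.TimeSinceChange += dt;

        if (patch.CurrentLod == patch.TargetLod)
            return;                               // already at the ideal level
        if (patch.TimeSinceChange < MinChangeInterval)
            return;                               // changing too fast; skip it

        // Step one level toward the target instead of snapping straight to it.
        patch.CurrentLod += (patch.TargetLod > patch.CurrentLod) ? 1 : -1;
        patch.TimeSinceChange = 0f;

        // The patch and its edge neighbors need their indices re-selected.
        patch.IndicesDirty = true;
        foreach (Patch n in patch.Neighbors)
            if (n != null)
                n.IndicesDirty = true;
    }
}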

As an added trick, you can globally restrict the number of patches to recompute per frame. That is, let's say you've got 17 patches that are not currently at their ideal level and are eligible to be stepped based on time. Maybe empirical testing has shown that you can only afford to do 6 a frame before spilling out of your frame time. Instead of doing them all at once, you can spread them over three frames. (You may or may not choose to temporarily allow cracks for this.)
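And a sketch of the per-frame cap; the budget of 6 is just the example number above, and RebuildIndices stands in for whatever routine picks or builds the indices for one patch:

// Sketch: spread index rebuilds over several frames.
using System.Collections.Generic;

static class RebuildScheduler
{
    const int RebuildBudgetPerFrame = 6;   // tune from profiling

    public static void Process(Queue<Patch> dirtyQueue, PatchIndexCache cache)
    {
        int budget = RebuildBudgetPerFrame;
        while (budget-- > 0 && dirtyQueue.Count > 0)
        {
            Patch p = dirtyQueue.Dequeue();
            RebuildIndices(p, cache);
            p.IndicesDirty = false;
        }
        // Anything still queued simply waits for a later frame.
    }

    // Placeholder for the per-patch index selection from the earlier sketches.
    static void RebuildIndices(Patch p, PatchIndexCache cache) { }
}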

Oh, and you can use frustum/occlusion culling results to determine what to update, too. There's no need to recompute the indices for a patch that isn't actually visible.
Quote:
I see that in one of your implementations you also allow neighboring patches to differ by more than one LoD level. Are there any particular benefits to that?
So-so. It allows the terrain to converge to a more optimal detail setting overall, and eliminates the need to adjust patches to fit the difference restriction. On the other hand, it means the actual crack patching code is more complex and expensive.

I found in testing that 65x65 was the optimal patch size for performance with my scaling setup. The thing is, there's really no point sliding down past 17x17 anymore. My implementation was more flexible, but it's not helpful. If you set 17x17 and 65x65 to be the boundaries, then patches can only be a max of 2 levels apart anyway, so restricting the difference to a single level is no big deal. If I were to write it now, I'd restrict it to a single level difference and strip a lot of the modulos and multiplications out of my version of the crack patching, which would lean it out a lot.
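For reference, the level counts work out like this for a 65x65 patch (a quick check, stepping by powers of two):

using System;

class LodCountCheck
{
    static void Main()
    {
        // A 65x65 patch stepped by powers of two, stopping at the 17x17 floor.
        for (int step = 1, lod = 0; 64 / step + 1 >= 17; step *= 2, lod++)
            Console.WriteLine("LoD {0}: {1}x{1} vertices", lod, 64 / step + 1);
        // Prints 65x65, 33x33, 17x17: only three levels, so two patches can
        // be at most two levels apart.
    }
}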

In fact, I'd probably just build and store all the permutations, too. It'd actually save memory over the caching approach.

Quote:
... strip a lot of the modulos and multiplications out of my version of the crack patching, which would lean it out a lot.


Actually, that's one of the things I noticed that killed performance in my previous implementation. When I ran the CLR Profiler, it turned out that those particular pieces of code, with all the modulos, Math.Pow(2, x) calls and other operations thrown inside several for() loops, generated a lot of garbage... and by a lot I mean around 200-300MB of it that the garbage collector had to go through.

So I remembered that somewhere I read that if something can be pre-calculated, it better be pre-calculated. Maybe a few more KB or MB are used, but like I said, most computers have a good amount of RAM on them.
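For example, the per-LoD step size is one of those values; it can be built once up front instead of calling Math.Pow(2, x) inside the inner loops (a sketch, with made-up names):

// Sketch: pre-calculate the per-LoD step sizes once.
static class LodTables
{
    public static int[] BuildSteps(int lodLevels)
    {
        int[] steps = new int[lodLevels];
        for (int lod = 0; lod < lodLevels; lod++)
            steps[lod] = 1 << lod;   // integer power of two, no double math
        return steps;
    }
}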

Alright, I'll continue on my pre-calculated way and see how it fares ;)

Thanks a lot!

Quote:
Original post by DeathFry
Actually, that's one of the things I noticed that killed performance in my previous implementation. When I ran the CLR Profiler, it turned out that those particular pieces of code, with all the modulos, Math.Pow(2, x) calls and other operations thrown inside several for() loops, generated a lot of garbage... and by a lot I mean around 200-300MB of it that the garbage collector had to go through.
Um...none of those things should be allocating memory.

Not them directly, but some of those values I stored in ints/shorts, and over time they would create some garbage. I could probably have optimized all that, and quite probably will after this other implementation is done, so that I can compare the two and see which one yields better results.
