Dirk Gregorius

Fastest way to lookup pointers


I need to look up pointers based on some integer key. The number of entries is known beforehand (no resizing) and I also don't need to remove entries. What is the fastest way to implement this? I was thinking of a hash table with linear probing. Since I know the number of entries up front, I can choose the table size such that the load factor never exceeds 70%. Is there a recommended standard way for this? I know there are std::map and friends, but I want to implement this myself as an exercise!

Thanks,
-Dirk

If the key is an integer, especially a 32-bit integer, a very fast method is a multi-level index.
Split the 32 bits into multiple levels (such as 3 levels of 10, 10, and 12 bits), or whatever split suits you.

First, allocate a table of 2^10 pointers and initialize them to null.
For a given integer, use its top 10 bits as an index into that table and check whether the pointer there is null. If it is null, allocate a new table of 2^10 pointers and store its address in that slot of the parent table.
Then index with the next 10 bits in the same way, and finally with the remaining 12 bits to reach the value.

This is how x86 CPUs manage virtual memory pages, AFAIK.

The time complexity is always O(1), and depending on your keys' value distribution and how you split the bits, the memory overhead may or may not be large.
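A minimal sketch of the three-level index described above, using the 10/10/12 split; the struct and member names are mine, not from the post:

```cpp
#include <cstdint>

struct Leaf { void* ptrs[1 << 12]; };    // bottom level: 4096 value slots
struct Mid  { Leaf* leaves[1 << 10]; };  // middle level: 1024 leaf pointers

struct MultiLevelIndex
{
    Mid* mids[1 << 10] = {};  // top level: 1024 pointers, null-initialized

    void Insert(uint32_t key, void* value)
    {
        uint32_t i0 = key >> 22;           // top 10 bits
        uint32_t i1 = (key >> 12) & 0x3FF; // next 10 bits
        uint32_t i2 = key & 0xFFF;         // remaining 12 bits

        // Allocate missing levels on demand; value-init zeroes the arrays.
        if (!mids[i0]) mids[i0] = new Mid();
        if (!mids[i0]->leaves[i1]) mids[i0]->leaves[i1] = new Leaf();
        mids[i0]->leaves[i1]->ptrs[i2] = value;
    }

    void* Lookup(uint32_t key) const
    {
        Mid* m = mids[key >> 22];
        if (!m) return nullptr;
        Leaf* l = m->leaves[(key >> 12) & 0x3FF];
        if (!l) return nullptr;
        return l->ptrs[key & 0xFFF];
    }
};
```

A lookup is always exactly three indexed loads plus two null checks, which is the O(1) behavior mentioned above; the trade-off is up to 32 KB per allocated leaf even if only one key falls in it.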


This will perform very well if you're using a large number of integer keys which are relatively dense. But if you have few keys (compared to 2^32) and they're sparse, cache misses will probably kill your performance. If that's the case, then a hash table with linear probing is your best bet. I use the FNV-1a algorithm here: http://isthe.com/chongo/tech/comp/fnv/

Hash lookups are very fast as there is no modulo operation (with a power-of-two table size, a bit-mask suffices). I use a table size equal to double the number of keys; however, linear probing results in more collisions than the linked-list method I'm using, so you might get better performance with a larger table size.
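A sketch of the fixed-size table described above, assuming a power-of-two capacity of at least twice the entry count and a 32-bit FNV-1a hash; all names here are illustrative, not from the post:

```cpp
#include <cstdint>
#include <vector>

// 32-bit FNV-1a over the four bytes of the key.
static uint32_t Fnv1a(uint32_t key)
{
    uint32_t h = 2166136261u;           // FNV offset basis
    for (int i = 0; i < 4; ++i)
    {
        h ^= (key >> (8 * i)) & 0xFF;
        h *= 16777619u;                 // FNV prime
    }
    return h;
}

struct PtrTable
{
    struct Slot { uint32_t key; void* value; bool used; };
    std::vector<Slot> slots;
    uint32_t mask;

    explicit PtrTable(uint32_t maxEntries)
    {
        // Round up to a power of two >= 2 * maxEntries (load factor <= 50%).
        uint32_t size = 1;
        while (size < 2 * maxEntries) size <<= 1;
        slots.assign(size, Slot{0, nullptr, false});
        mask = size - 1;  // bit-mask replaces the modulo
    }

    void Insert(uint32_t key, void* value)
    {
        uint32_t i = Fnv1a(key) & mask;
        while (slots[i].used && slots[i].key != key)
            i = (i + 1) & mask;  // linear probing
        slots[i] = Slot{key, value, true};
    }

    void* Lookup(uint32_t key) const
    {
        uint32_t i = Fnv1a(key) & mask;
        while (slots[i].used)
        {
            if (slots[i].key == key) return slots[i].value;
            i = (i + 1) & mask;
        }
        return nullptr;  // hit an empty slot: key is absent
    }
};
```

Since entries are never removed, there is no need for tombstones, which keeps both insert and lookup simple.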

The use case is an edge map for creating a doubly connected edge list, to match the twin edges. Currently I use an O(n^2) search, since I first wanted to see if my construction from a triangle soup works. For the test meshes this was fast enough, but I can imagine larger meshes in the future, though nothing with more than 2^16 vertices since I use 16-bit indices.
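Since twin half-edges connect the same pair of vertices, one way to form the integer key with 16-bit indices is to pack the unordered vertex pair into a single 32-bit value, smaller index first (a hypothetical helper, not from the thread):

```cpp
#include <cstdint>
#include <algorithm>

// Both half-edges of a twin pair produce the same key, so inserting one
// half-edge and looking up the reversed pair finds its twin.
static uint32_t EdgeKey(uint16_t v0, uint16_t v1)
{
    uint16_t lo = std::min(v0, v1);
    uint16_t hi = std::max(v0, v1);
    return (uint32_t(lo) << 16) | hi;
}
```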

Any suggestions for this specific use case?

Thanks for the help so far!
-Dirk
