boost::unordered_map vs. std::map

Posted by asp_


I ran into a somewhat peculiar issue with boost::unordered_map today. The situation is basically that I have a giant calculation cache of approximately 300,000,000 entries, each about 8 bytes, with a hash function that is 5 SHLs and 6 XORs on size_t values. The program runs in 64-bit mode for obvious reasons, and boost uses size_t for its hash results, which is a 64-bit integer on AMD64. The results aren't pre-calculated; they are calculated on request, cached and reused.

Now for the strange issue: boost::unordered_map is about 2 times slower than std::map with these data sets. The operator<() for the std::map is about 10 ifs, and each lookup should be about 28 steps if my quick and dirty math isn't off. The only reason I could think of would be an extremely poor hash causing a lot of collisions and linear searches, but that isn't the case, since all bits of the hash except for 5 are guaranteed to be unique. Furthermore, reads are much more common than writes, by a factor of maybe 1000:1. Does anyone have any input as to what else could be slowing down the hash map so much?
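For reference, the access pattern is essentially compute-on-miss. A minimal sketch, where Key, Value, KeyHash and expensive_calculation() are hypothetical stand-ins for the real types and the real work:

#include <boost/unordered_map.hpp>
#include <utility>

typedef boost::unordered_map<Key, Value, KeyHash> CacheMap;

Value lookup(CacheMap& cache, const Key& key)
{
    CacheMap::iterator it = cache.find(key); // hot path: ~1000 reads per write
    if (it != cache.end())
        return it->second;
    Value v = expensive_calculation(key);    // the slow work the cache exists to avoid
    cache.insert(std::make_pair(key, v));
    return v;
}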

What are you using for the Hash and Pred parameters?

And when you say "2 times slower", doing what? What's the typical usage and what are you timing?

Quote:
Original post by asp_
The only reason I could think of would be an extremely poor hash causing a lot of collisions and linear searches, but that isn't the case, since all bits of the hash except for 5 are guaranteed to be unique.
A hash with 5 mostly-constant bits would make a terrible bucketing function. It would mean you'd only ever use about 1/2^5 = 1/32, i.e. roughly 3.1%, of your buckets.
Quote:
Furthermore, reads are much more common than writes, by a factor of maybe 1000:1.
Reads, or lookups? Bad hashing hits lookups harder than anything else.

Quote:

5 mostly-constant bits would make a terrible bucketing function

They're not constant. The function is basically


(x0 << 0) ^ (x1 << 5) ^ (x2 << 10) ^ (x3 << 15) ^ (x4 << 20) ^ (x5 << 25)


Where x0..x5 each contain 6 bits of information. Since it's 64-bit, I could make the hashes completely unique by shifting by 6 instead, but I don't think that minor overlap is the issue?
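In functor form, that would look something like the following sketch (Key and its field names are hypothetical stand-ins for the real key type; the shift-by-6 variant shown is the non-overlapping version):

#include <cstddef>

struct Key
{
    unsigned x0, x1, x2, x3, x4, x5; // each carries 6 bits of information
};

struct KeyHash
{
    std::size_t operator()(const Key& k) const
    {
        // 6 fields * 6 bits = 36 bits, so the shifted fields never
        // overlap and fit comfortably in a 64-bit size_t.
        return (std::size_t(k.x0) << 0)  ^ (std::size_t(k.x1) << 6)  ^
               (std::size_t(k.x2) << 12) ^ (std::size_t(k.x3) << 18) ^
               (std::size_t(k.x4) << 24) ^ (std::size_t(k.x5) << 30);
    }
};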

Quote:

Reads, or lookups

Well, these are lookups, but I read the stored result. Key -> Value :) Inserts are what I called writes. I never delete.

Quote:

And when you say "2 times slower"

The entire application relies heavily on this cache because each of those 300,000,000 results is slow to calculate. With the hash map the whole application literally takes twice as long per work unit. I do at least 20,000,000 - 30,000,000 lookups per second; I don't have the actual number handy.

I have not used the boost hash table, but I remember using the TR1 one, or the GNU one from ext/. I noted that it initialized the bucket array to a size of around 50, and I assume it grows by doubling when the ratio of elements to buckets exceeds some constant factor (probably 1:1). It might make sense to preallocate/reserve the bucket array to something close to the expected number of contained elements in order to avoid automatic resizing, if the boost API supports it. It is also worth investigating how the bucket buffer behaves when it is resized/reallocated, i.e. whether it grows exponentially. See the sketch below for what reserving up front might look like.
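Something like this, assuming boost's TR1-style interface (cache, Key, Value and KeyHash are placeholders, and the exact bucket count is a guess you would tune):

#include <boost/unordered_map.hpp>

boost::unordered_map<Key, Value, KeyHash> cache;
cache.max_load_factor(1.0f); // 1.0 is typically the default; lower it to trade memory for shorter chains
cache.rehash(300000000);     // request at least this many buckets up front

Note that 300,000,000 buckets is itself gigabytes of pointer array on a 64-bit machine, so the up-front allocation is not free either.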

Additionally, if the API offers it, see if you can get some sort of metric for collisions, i.e. whether the hash is distributing elements efficiently. The one I used allowed you to get iterators into the buckets to easily determine how many elements each bucket held, and I would assume the boost variant also permits this; a sketch of what that measurement could look like follows.
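A rough sketch, assuming the container exposes the TR1-style bucket interface (bucket_count() and bucket_size()):

#include <cstddef>
#include <iostream>

// Report how evenly a hash container spreads its entries: how many
// buckets are occupied at all, and how long the worst chain is.
template <class Map>
void report_buckets(const Map& m)
{
    std::size_t used = 0, worst = 0;
    for (std::size_t b = 0; b < m.bucket_count(); ++b)
    {
        std::size_t s = m.bucket_size(b);
        if (s > 0) ++used;
        if (s > worst) worst = s;
    }
    std::cout << used << " of " << m.bucket_count()
              << " buckets in use; longest chain: " << worst << '\n';
}

If only a few percent of the buckets are in use, or the longest chain is large, the hash (or the way the table maps hashes to buckets) is the problem.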

Anecdotally, I have been really surprised at how hard hash table performance gets hit by the hash function, even when it appears simple. I think I found that for strings a std::map was around the same speed as a hash table, which is not what you would expect given the complexity of the two approaches, i.e. O(1) vs. O(log N) for insertion. I guess the short circuit involved in comparing two strings in a map means it is unlikely they need to be iterated to their full length to determine their ordering, while hashing always has to touch every byte. It is a similar thing even with integers, where there is an assembler primitive to compare, but hashing means byte-accessing the value. A quick way to check this for yourself is a micro-benchmark along the lines of the sketch below.
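A rough micro-benchmark sketch, using boost::unordered_map as the hash table; the key shapes and counts are made up, so treat the numbers as indicative only:

#include <boost/unordered_map.hpp>
#include <ctime>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    std::map<std::string, int> tree;
    boost::unordered_map<std::string, int> hash;
    std::vector<std::string> keys;
    for (int i = 0; i < 100000; ++i)
    {
        std::ostringstream os;
        os << "key_" << i; // shared prefix: the tree must compare past it
        keys.push_back(os.str());
        tree[keys.back()] = i;
        hash[keys.back()] = i;
    }

    long sum = 0; // checksum so the loops can't be optimized away
    std::clock_t t0 = std::clock();
    for (int r = 0; r < 100; ++r)
        for (std::size_t i = 0; i < keys.size(); ++i)
            sum += tree.find(keys[i])->second;
    std::clock_t t1 = std::clock();
    for (int r = 0; r < 100; ++r)
        for (std::size_t i = 0; i < keys.size(); ++i)
            sum += hash.find(keys[i])->second;
    std::clock_t t2 = std::clock();

    std::cout << "std::map:             " << (t1 - t0) << " clocks\n"
              << "boost::unordered_map: " << (t2 - t1) << " clocks\n"
              << "(checksum " << sum << ")\n";
}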

Edit: hopefully more readable
