C++ unordered_set's .insert slow, even with preallocation

Hi,

I have an array of structs, each containing three elements:

class A { public: unsigned int x, y, z; };
//...
A* long_list; //Has at least 250,000 elements

. . . and I need to find a set of the unique structs in long_list.

I try to construct the set as follows:

unordered_set<A> mapping(/*passing in long_list's length to preallocate*/);
//Now .insert(...) each struct into the set

The problem is that this takes a ridiculously long time (around 30 seconds in a debug build, 0.5 seconds in release).

Also, the following:

namespace std {

template<> struct hash<my::A> {
    std::size_t operator()(const my::A& vr) const {
        std::hash<unsigned int> func;
        size_t h1 = func(vr.x);
        size_t h2 = func(vr.y);
        size_t h3 = func(vr.z);
        return h1 + h2 + h3;
    }
};

}

And:

bool operator==(const A& a, const A& b) {
    if (a.x != b.x) return false;
    if (a.y != b.y) return false;
    if (a.z != b.z) return false;
    return true;
}

Maybe the hashing combination is leading to a lot of collisions? Anyway, why is this so slow?

Thanks,
-G

Edited by Geometrian

Share on other sites
Adding hash values together is a poor way to generate a composite hash. In your case the three elements (0, 0, 1), (0, 1, 0) and (1, 0, 0) will all collide. Since one legal way of implementing std::hash&lt;int&gt; is to return the integer itself, you could also get collisions between triples like (1, 2, 3), (2, 2, 2) and (6, 0, 0). One way to handle this is boost::hash_combine(), which combines hash values in a more or less sane way. If you don't want to use boost, its hash_combine implementation looks like:
template<typename T> void hash_combine(size_t& seed, T const& v) {
    seed ^= hash_value(v) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}

(or at least one of its versions, anyway)

Share on other sites

Computing the hash function as:

size_t h  = func(vr.x);
size_t h2 = func(vr.y);
size_t h3 = func(vr.z);
hash_combine(h,h2);
hash_combine(h,h3);
return h;

or:

size_t h = 0u;
hash_combine(h,vr.x);
hash_combine(h,vr.y);
hash_combine(h,vr.z);
return h;

. . . seems to give no noticeable improvement.
Thanks,

Edited by Geometrian

Share on other sites
That's not an element count to pre-allocate; it's a bucket count. It's implementation-defined, but very likely the container still has to allocate nodes as it goes. And since you will probably have many duplicates, that's far too many buckets, so it's not going to help at all and could even make things slower.

I don't have a quick fix. You could write, or try to find, a hash map that pre-allocates its elements, or use a custom allocator with unordered_set.

Share on other sites

Oops. Well, removing the "preallocation" doesn't seem to do much. If anything, it's a bit faster, but not enough.

Share on other sites
I could add that writing a hash map is not much work if all you care about is insert and find, but writing allocators for the STL can be a nightmare, so I know which I would do.

Share on other sites
Some possibilities: 1) if you're inserting elements one by one, try using range insert instead. 2) Switch to a pooled allocator like boost::pool.

Share on other sites
I would use something like FNV-1a for the hash.

What kind of container is the data coming from? If it starts off in a linked list, much of the time could easily be spent traversing that list, especially if it's home-grown or you are removing items from it as you go.
