

Flat hash containers


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

8 replies to this topic

#1 Ryan_001   Prime Members   -  Reputation: 1461


Posted 23 February 2013 - 01:31 AM

A few months ago I put together a few simple hash containers for my own use. They are similar to unordered_set (and its multiset, map, and multimap variants), but use contiguous memory like a vector; I called them unordered_flat_set, unordered_flat_multiset, unordered_flat_map, and unordered_flat_multimap.
 
On another forum a few people mentioned that this would be useful to them and that I should consider submitting it to boost. That seems like a rather daunting task to me at the moment, so I thought I'd post what I have here to see what people think. The latest files can be found here: https://sourceforge.net/projects/unorderedflatse/files/?. It compiles with no errors or warnings on both VC2012 with the November CTP compiler and MinGW/gcc v4.7.2. I don't have proper benchmarking or unit tests yet, but there is some simple example code.
 
My questions then are:
1) Should I even bother attempting to submit it to boost?
2) Does the interface make sense? Is it something you would want to use?
3) Where is the unit test code for boost::container::vector? I can find the unit testing framework, but I can't actually find the unit tests. That seemed like a perfect starting point for putting together proper tests of my own, yet I can't seem to find it.
 
There are a few smaller questions/issues, but they're minor compared to questions 1 and 2, so I'll leave it at that.

Edited by Ryan_001, 23 February 2013 - 01:33 AM.



#2 edd   Members   -  Reputation: 2105


Posted 23 February 2013 - 05:48 AM

In response to your first question, about submission to boost:

 

The work you will have to do to get your library to the point of submission (assuming there's sufficient interest -- ask on the boost mailing list) includes:

  • providing documentation in a manner consistent with boost's style and infrastructure
  • ditto for tests
  • ditto for build scripts
  • modifying your code to use boost's portability macros/conventions
  • getting it to build and perform well on platforms you may have little or no experience with. I'm almost certain you will be required to provide a version that builds cleanly on POSIX systems such as Linux and OS X.

Libraries undergoing submission are assigned a review manager. At some point, your library will be reviewed publicly on the boost mailing list. It can take quite a long time, depending on interest, to get your library through the review backlog.

 

Once the review starts, a lot of people will tear your code apart, often with little or no tact, so make sure you're comfortable with criticism. Revisions will probably have to be made to what you have now until people are satisfied. I haven't looked at your code; this is just what usually happens.

 

Assuming submission is successful, you will have to maintain your library for the foreseeable future. This will probably mean:

  • familiarizing yourself with boost's release process
  • responding to bug reports, even those where you don't have access to a system similar to the reporter's. Of course, this is true wherever you 'publish' your code, but the exposure provided by boost will mean more bug reports (even invalid ones).
  • adapting to infrastructure changes: there's talk on the mailing list of boost moving to a new version control/modularization/distribution system soon, so you will almost certainly have more maintenance work to do in future.
  • more generally, staying up to date with boost's infrastructure changes, just to keep your library's head above water.

 

Exposure is nice of course, but submission to boost isn't a one-shot thing.


Edited by edd, 23 February 2013 - 05:50 AM.


#3 EWClay   Members   -  Reputation: 659


Posted 23 February 2013 - 06:03 AM

My argument would be that chained hash tables have the same algorithmic complexity but much better worst-case performance. If you are worried about allocations, you can plug a pool allocator into the existing containers.

Would the difference in performance (mainly from better caching) matter enough to add another type of container?

I may be wrong though. If you submit it you might at least get some constructive criticism.

#4 EWClay   Members   -  Reputation: 659


Posted 23 February 2013 - 08:53 AM

Regarding the interface: the use of sentinel values is awkward and could lead to surprising behaviour.

No standard container invalidates all iterators or changes element order on erase, so that could be surprising too.

I appreciate the efficiency concerns that led to this, but maybe for a general-purpose container the trade-off isn't the right one.

#5 jwezorek   Crossbones+   -  Reputation: 1981


Posted 23 February 2013 - 10:30 AM

I haven't reviewed the code, but I don't see a reason why a hash table would need an interface any different from the standard interface of std::unordered_x, regardless of its implementation (i.e. chaining vs. open addressing). I would expect any library that is part of boost to use the std interface where one exists, if possible.


Edited by jwezorek, 23 February 2013 - 01:03 PM.


#6 Cornstalks   Crossbones+   -  Reputation: 6991


Posted 23 February 2013 - 12:18 PM

Dang. Impressive work. Building with Clang++ 4.2 I get the following warnings (with -Wall and -pedantic):

 

 
$ clang++ -Wall -pedantic -std=c++11 -stdlib=libc++ main.cpp
In file included from main.cpp:4:
./sentinel.hpp:107:77: warning: '&&' within '||' [-Wlogical-op-parentheses]
   bool operator() (const T& x, const T& y) const { return x == y || x != x && y != y; }
                                                                  ~~ ~~~~~~~^~~~~~~~~
./sentinel.hpp:107:77: note: place parentheses around the '&&' expression to silence this warning
   bool operator() (const T& x, const T& y) const { return x == y || x != x && y != y; }
                                                                            ^
                                                                     (               )


#7 Hodgman   Moderators   -  Reputation: 31920


Posted 23 February 2013 - 07:03 PM

EWClay said:
> My argument would be that chained hash tables have the same algorithmic complexity but much better worst case performance. If you are worried about allocations you can plug in a pool allocator to the existing containers.

In my engine, my use cases for flat (and non-resizable) hash containers are mainly for:
* Ones that are "deserialized" from files, as I use in-place loading where the file is loaded into RAM and cast to a struct and is ready to use without parsing. My flat hash table uses only offsets (instead of pointers) and supports only POD types to allow for this.
* Similar to the above, if a hash table is required on a NUMA core, then it needs to be memcpy-able (and needs to keep working if memcpy'ed to a different address space), which is the same requirement as above.
* If a hash table requires simultaneous insertions from multiple cores, then a flat lock-free implementation is much simpler than one that uses traditional allocations.

Given these quirky requirements, I don't think boost would care for my own implementation.


Edited by Hodgman, 23 February 2013 - 07:04 PM.


#8 Ryan_001   Prime Members   -  Reputation: 1461


Posted 24 February 2013 - 01:15 AM

Edd, I appreciate the info. You have a rather extensive repertoire; have you tried to submit anything to boost yourself?
 
EWClay, you are right about the sentinel: using the sentinel value in an insert, erase, or find operation causes undefined behaviour. I check for it in the debug build, but there are no checks in the release build. The problem is that a 'flag'-style implementation loses a lot of the performance advantage; right now, for small data types you get on average one cache-line read per insert, erase, or find, which is really where the performance advantage comes from. Clearly I'll need to write up some benchmarks. Do you know of any popular benchmarks online comparing vector versus set, or similar?

jwezorek, I did try to stay as close as possible to unordered_set, but there are a few spots where I had to change or add things. I tried to document where it differs from the standard at the top of "unordered_flat_base.hpp". It's a mix of both the unordered_set and vector interfaces, but not a perfect drop-in for either.

 

Cornstalks, thank you very much, that was quite helpful, and a clever catch on the part of the Clang compiler. I've fixed it on my end but haven't yet uploaded the change to sourceforge.



#9 EWClay   Members   -  Reputation: 659


Posted 24 February 2013 - 04:35 AM


Hodgman said:
> > My argument would be that chained hash tables have the same algorithmic complexity but much better worst case performance. If you are worried about allocations you can plug in a pool allocator to the existing containers.
>
> In my engine, my use cases for flat (and non-resizable) hash containers are mainly for:
> * Ones that are "deserialized" from files, as I use in-place loading where the file is loaded into RAM and cast to a struct and is ready to use without parsing. My flat hash table uses only offsets (instead of pointers) and supports only POD types to allow for this.
> * Similar to the above, if a hash table is required on a NUMA core, then it needs to be memcpy-able (and needs to keep working if memcpy'ed to a different address space), which is the same requirement as above.
> * If a hash table requires simultaneous insertions from multiple cores, then a flat lock-free implementation is much simpler than one that uses traditional allocations.
>
> Given these quirky requirements, I don't think boost would care for my own implementation.

Oh, they have their uses, I don't deny that.

Actually, a lock-free hash table is something boost is missing.



