cmac

Unity Implementing a C# Sorting Network Library


Hello,
 
As a game programmer who often prototypes and designs in Unity, I've been digging into general C# optimizations for the sake of good practice. Recently I've been using sorting networks instead of the common algorithms for small arrays, and since I couldn't locate a simple C# library with Bose-Nelson sorts, I decided to centralize the logic into one. However, as a beginner programmer I have a few questions regarding it:
 
Note: I've been generating all networks from http://pages.ripco.net/~jgamble/nw.html
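For anyone unfamiliar with what the generator emits, here's a minimal sketch of the compare-exchange pattern for n = 4 (the standard 5-comparator network: (0,1)(2,3), then (0,2)(1,3), then (1,2)). The class and method names below are placeholders rather than the library's actual API:

```csharp
using System;

public static class NetworkSortSketch
{
    // Swap a[i] and a[j] if they are out of order.
    static void CompareExchange<T>(T[] a, int i, int j) where T : IComparable<T>
    {
        if (a[i].CompareTo(a[j]) > 0)
        {
            T tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    // Sorts exactly four elements in place using three levels of comparisons.
    public static void Sort4<T>(T[] a) where T : IComparable<T>
    {
        CompareExchange(a, 0, 1); CompareExchange(a, 2, 3); // level 1
        CompareExchange(a, 0, 2); CompareExchange(a, 1, 3); // level 2
        CompareExchange(a, 1, 2);                           // level 3
    }
}
```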
 
 
Parallelism:
 
Since the comparisons are grouped into parallel levels, is there a more efficient way of structuring them to take advantage of multithreading? My understanding is that there's a time cost to the hardware switch between threads, and since these are generic, low-cost comparisons, I figured running the ifs in sequence would be faster anyway. But I'm a multithreading noob, so if I'm wrong please correct me.
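To make the question concrete, here's roughly what "one thread per comparison within a level" would look like using Parallel.Invoke. The int-only CompareExchange helper and the class name are throwaway placeholders, not my actual generic code, and I suspect the scheduling overhead alone makes this slower than plain ifs for cheap comparisons:

```csharp
using System.Threading.Tasks;

public static class ParallelNetworkSketch
{
    // Throwaway int-only helper for illustration.
    static void CompareExchange(int[] a, int i, int j)
    {
        if (a[i] > a[j]) { int tmp = a[i]; a[i] = a[j]; a[j] = tmp; }
    }

    public static void Sort4Parallel(int[] a)
    {
        // Level 1: (0,1) and (2,3) touch disjoint indices, so they can run concurrently.
        Parallel.Invoke(
            () => CompareExchange(a, 0, 1),
            () => CompareExchange(a, 2, 3));

        // Level 2: (0,2) and (1,3) are also disjoint.
        Parallel.Invoke(
            () => CompareExchange(a, 0, 2),
            () => CompareExchange(a, 1, 3));

        // Level 3: a single comparison, nothing left to run in parallel.
        CompareExchange(a, 1, 2);
    }
}
```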
 
Sorting Network Optimization:
 
I've been struggling to find comprehensive information on sorting networks outside of dense university papers, though an old article I encountered seems pretty straightforward and definitive: http://www.drdobbs.com/sorting-networks/184402663
 
I'm already fairly certain that Bose-Nelson networks are optimal (in comparison count) when n <= 8, but I wasn't able to locate definitive information on larger sizes (and I'm too lazy to run all the test cases right now). According to the article linked above, there are still guaranteed-optimal networks for 9 <= n <= 16 that come from non-Bose-Nelson constructions. When the algorithm choice is "Best" on the generator I linked above (http://pages.ripco.net/~jgamble/nw.html), it produces a lighter network than its Bose-Nelson counterpart, which I'm assuming is in fact better than traditional algorithms.
 
I'm a bit less certain about the cases of 17 <= n <= 32. I'm assuming that the larger n gets, the smaller the advantage networks have over traditional algorithms. For that reason, my tentative conclusion is that they should only be used when the input order is entirely unknown and thus an appropriate algorithm can't be chosen with any insight. But I'm wondering whether they have any use in this range, or whether you're generally better off with any conventional sorting algorithm. I'm planning to run some test cases, but any expert input to help narrow the criteria is welcome.
 
General Library Structure:
 
I'm less familiar with building libraries in C# than in C++, so since this is a single source file I figured the best method of distribution is just the source. But is it considered good practice to wrap it in a namespace, compile it into a .dll, etc.? It's obviously mostly for personal use, but considering it's an optimal solution for the n <= 16 cases and it saves a lot of mindless legwork, I figured it may as well be open source.
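Concretely, I'm picturing a wrapper along these lines (SortingNetworks and NetworkSort are placeholder names, not the library's real identifiers). The same file could then either be dropped into a project as source or compiled into a .dll with csc /target:library:

```csharp
namespace SortingNetworks
{
    public static class NetworkSort
    {
        // Comparison pairs taken from the generator at
        // http://pages.ripco.net/~jgamble/nw.html for n = 4.
        public static void Sort4(int[] a)
        {
            if (a[0] > a[1]) { int t = a[0]; a[0] = a[1]; a[1] = t; }
            if (a[2] > a[3]) { int t = a[2]; a[2] = a[3]; a[3] = t; }
            if (a[0] > a[2]) { int t = a[0]; a[0] = a[2]; a[2] = t; }
            if (a[1] > a[3]) { int t = a[1]; a[1] = a[3]; a[3] = t; }
            if (a[1] > a[2]) { int t = a[1]; a[1] = a[2]; a[2] = t; }
        }
    }
}
```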
 
 
Apologies for the variety of questions; don't feel obligated to address every single one if you respond.

I would think (not know) that, since you would have to synchronise the threads so often - e.g. 4 times in the Sort_8 case, relative to only doing 7 levels of comparison - the overhead of multithreading would be a net cost rather than a benefit. This may change if the comparison itself is expensive - e.g. "which of these 2 remote websites has the most images on it?" - and where it's possible to effectively perform the comparisons in parallel (which may not be true, even of the previous example, if retrieving the websites saturates the single shared network connection).

 

Probably the simplest way to implement slow operations in parallel that need to synchronise at certain points is to use some sort of 'futures' system (https://msdn.microsoft.com/en-gb/library/ff963556.aspx). You can fire off 4 operations, wait for them all to complete, then move on to the next stage. If the futures are backed by a thread pool then the threading overheads can be fairly minimal. But again, the overhead of the extra code and function calls alone is going to vastly outstrip the benefits if comparisons are cheap (e.g. strings, integers).
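Here's a rough sketch of that "fire off a stage, wait, move on" pattern using the Task Parallel Library (Task.Run draws on the thread pool). The names are mine, and ExpensiveCompareExchange is a stand-in for a comparison that genuinely costs something - for ints or strings this structure would only add overhead:

```csharp
using System;
using System.Threading.Tasks;

public static class StagedSortSketch
{
    // Run one stage's comparisons on the thread pool, then block until
    // every comparison in that stage has finished.
    public static void RunStage(params Action[] comparisons)
    {
        var tasks = new Task[comparisons.Length];
        for (int i = 0; i < comparisons.Length; i++)
            tasks[i] = Task.Run(comparisons[i]);

        Task.WaitAll(tasks); // synchronise before moving on to the next stage
    }

    // Usage (ExpensiveCompareExchange is hypothetical):
    //   RunStage(() => ExpensiveCompareExchange(a, 0, 1),
    //            () => ExpensiveCompareExchange(a, 2, 3));
    //   RunStage(() => ExpensiveCompareExchange(a, 0, 2),
    //            () => ExpensiveCompareExchange(a, 1, 3));
}
```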

 

I'd be very surprised if this algorithm, multi-threaded or not, outperformed the standard sort algorithms provided by C#. Most standard library sort algorithms are already optimised for special cases such as small values of N. When they say it's 'optimal' for N <= 16, that's referring to the number of comparisons, not the actual execution time. In fact I'd expect (again, my personal conjecture) execution on cheap comparisons to be poorer with this system, because the code size is larger and there are more conditional statements, both of which are often culprits in high-performance code.
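If you want to check that conjecture yourself, a crude Stopwatch micro-benchmark along these lines would do. Treat the numbers as indicative only - JIT warm-up and the random-number generation both skew small measurements - and note that SortingNetworks.NetworkSort.Sort4 is just a placeholder for whatever your library actually exposes:

```csharp
using System;
using System.Diagnostics;

public static class TimingSketch
{
    public static void Main()
    {
        const int iterations = 1000000;
        var rng = new Random(1);
        var data = new int[4];

        // Time the standard library sort on many small arrays.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            for (int j = 0; j < data.Length; j++) data[j] = rng.Next();
            Array.Sort(data);
        }
        sw.Stop();
        Console.WriteLine("Array.Sort:    " + sw.ElapsedMilliseconds + " ms");

        // Time the sorting network on the same workload.
        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            for (int j = 0; j < data.Length; j++) data[j] = rng.Next();
            SortingNetworks.NetworkSort.Sort4(data); // placeholder call
        }
        sw.Stop();
        Console.WriteLine("Network Sort4: " + sw.ElapsedMilliseconds + " ms");
    }
}
```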

 

Regarding distribution, I like C# code to be in its own namespace, but beyond that I just like to get the plain source code. A unit test suite is also a good idea if you want to be taken seriously (i.e. executable proof that your sort returns the same results as a standard sort).
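Something along these lines would serve as that executable proof: generate random arrays, sort one copy with the network and one with Array.Sort, and require identical output. This assumes NUnit (which Unity's Test Runner is built on) and the hypothetical SortingNetworks.NetworkSort.Sort4 from earlier in the thread:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class NetworkSortTests
{
    [Test]
    public void Sort4_MatchesArraySort()
    {
        var rng = new Random(12345); // fixed seed so failures are reproducible

        for (int trial = 0; trial < 1000; trial++)
        {
            int[] input = new int[4];
            for (int i = 0; i < input.Length; i++)
                input[i] = rng.Next(-100, 100);

            int[] expected = (int[])input.Clone();
            Array.Sort(expected);

            int[] actual = (int[])input.Clone();
            SortingNetworks.NetworkSort.Sort4(actual); // placeholder call

            CollectionAssert.AreEqual(expected, actual);
        }
    }
}
```

For small n you could even test exhaustively: by the zero-one principle, verifying the network on all 2^n arrays of zeros and ones is enough to prove it sorts every input.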
