sorting question

Started by
15 comments, last by CProgrammer 20 years, 2 months ago
Mergesort doesn't have bad cases like Quicksort AFAIK. And in my tests it's nearly as fast as far as the constant goes, and of course it's O(n lg n) even in the worst case, not just on average.

For short lists, bubble sort *will* be faster than Quicksort or Mergesort written recursively, whatever the data. The value of "short" depends on how well your compiler optimizes tail recursion, and details of implementation. I found the crossover point to be around 40 elements when I did tests for my undergrad thesis (on a Sparc architecture compiling with an outdated version of gcc and no optimization flags).

If you *know* your data is "nearly sorted"... is it possible that you could maintain it in a completely sorted state? Encapsulate your list, with sortedness being an invariant in your class design. If you take that approach, insertion sort is almost certainly your only sensible option - just find the right place to insert via binary search every time you add something.
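A minimal sketch of that keep-it-sorted approach - the helper name `sorted_insert` is mine, not a standard function. It binary-searches for the insertion point with `std::upper_bound`, then inserts there (O(log n) to search, O(n) to shift):

```cpp
#include <algorithm>
#include <vector>

// Keep a vector permanently sorted: binary-search for the insertion
// point, then insert. Sortedness of `v` is the class invariant.
void sorted_insert(std::vector<int>& v, int value) {
    auto pos = std::upper_bound(v.begin(), v.end(), value);
    v.insert(pos, value);
}
```

Wrap the vector and this function in a class so nothing else can break the sorted invariant.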
quote:Original post by Anonymous Poster
Believe it or not, in some cases it is faster to randomize the data before using quicksort.
Only if your implementation of quicksort always chooses the first or last element as a pivot. There are far better ways to choose a pivot.
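For example, a median-of-three pick - look at the first, middle, and last elements and pivot on their median, which defeats the already-sorted worst case that a first/last-element pivot hits. The function name is mine, a sketch rather than any particular library's code:

```cpp
#include <algorithm>

// Median of a[lo], a[mid], a[hi]: a cheap pivot choice that avoids
// the O(n^2) behavior of quicksort on already-sorted input.
int median_of_three(const int* a, int lo, int hi) {
    int mid = lo + (hi - lo) / 2;
    int x = a[lo], y = a[mid], z = a[hi];
    return std::max(std::min(x, y), std::min(std::max(x, y), z));
}
```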
I've seen this page around for a while, and from the looks of it the O/E Transpose and the Shear sort blow the pants off the quick sort.

http://www.cs.rit.edu/~atk/Java/Sorting/sorting.html

Is the performance this much greater and if so, why is the shear sort not more recommended over quicksort?

-=[ Megahertz ]=-
Did you notice that Shear sort and O/E Transpose are both parallel algorithms? (i.e. use more than one processor. A lot more than one processor to be effective.)
The worst case for Quick Sort being sorted order is only true when using a naïve way to pick the pivot. Most STL implementations use far better approaches.

I would really advise you to benchmark it. For linear arrays (i.e. vectors), beating most std::sort implementations on small datasets (sub-million size) can be done with a very naïve merge sort, or even just a simple variant of quicksort if you can live with the worst case - but that doesn't seem to be possible in this case.
HardDrop - hard link shell extension. "Tread softly because you tread on my dreams" - Yeats
I did notice that. Perhaps I'm confused as to how these applets run. Are they running on the client side, or is there some sort of server-side trickery involved that I'm not aware of? My assumption, obviously, is that it's client side. (I'm not real knowledgeable in Java, so I honestly couldn't tell you.)

If it is client side, I'm then confused as to how it still beats the other sorts. I AM on a single-processor machine. I think there's some code there for it; I'll take a look and see if I can spot what I'm missing about this.


-=[ Megahertz ]=-
Also, people should remember that std::sort in pretty much EVERY C++ compiler now is not using plain quicksort, it's using introsort, which avoids the O(n^2) behavior of quicksort even on pathologically bad inputs. And further, once your list size is small enough, that std::sort will switch over to insertion sort automatically. Unless you have some staggering evidence that std::sort is too slow for whatever you're doing, don't reinvent the wheel.

This topic is closed to new replies.
