Quote:Original post by ToohrVyk
Eelco: radix sort is linear in the number of entries. What I want to point out is that O-notation will hide constants away from you, and that in the case of radix sort, that constant is somewhere around log N. This is why in almost all cases, where n < N, I might prefer using a heap sort with O( n log n ) complexity instead of a sort in O( n ) that is actually a hidden O( n log N ).
And how about pushing that reasoning to the extreme? Here:
I recently coded an algorithm that sorts, in place, using a heapsort-like algorithm, a table of *at most* 2^32 numbers (you won't need more for most applications). Of course, you can submit smaller datasets.
The sort occurs in O(N log N), where N = 2^32, because that's how heap sort performs on my table. Since the algorithm doesn't depend on the number of entries, it works in O(1).
Yes, I see your point: my algorithm only allows a number of entries smaller than 2^32. Indeed, I did make that approximation here: a pointer is only 32 bits, hence the limited amount of memory. But radix sort does the same approximation: an integer is only 32 bits, hence the limited amount of passes.
Do you understand my point now? (And do you agree?)
No, I still don't understand you completely. Although I get your point about hidden constants, my experience is that they are relatively small in a decent radix sort, and still insignificant compared to the extra log(n) term.
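To make the hidden-constant point concrete, here is a minimal sketch (not either poster's actual code) of an LSD radix sort over 32-bit keys, 8 bits per pass. The outer loop runs exactly 4 times regardless of len(a); that pass count comes from the key width (32 bits), which is the "hidden log N" factor being argued about:

```python
# Sketch only: LSD radix sort for non-negative 32-bit integers.
# The number of passes (32 / 8 = 4) depends on the key width,
# not on the number of entries -- the "hidden log N" constant.
def radix_sort32(a):
    for shift in range(0, 32, 8):            # 4 passes over the data
        buckets = [[] for _ in range(256)]   # one bucket per byte value
        for x in a:
            buckets[(x >> shift) & 0xFF].append(x)  # stable distribution
        a = [x for b in buckets for x in b]  # collect in bucket order
    return a
```

So the total work is roughly 4n bucket operations: linear in n, but with a constant tied to the key size, which is the approximation both sides of this argument are making in different places.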
Quote:
[BTW, don't say someone's opinion is wrong if you admitted, in the same post, that you didn't understand it]
Sorry, it was a joke, probably a bad one. However, you are clearly wrong about the memory requirements; of that much I'm entirely sure.