n*log(n) VS log(n!)

Started by
41 comments, last by grhodes_at_work 17 years, 11 months ago
Some here said that O(log n!) is approximately O(n log n - n).
For ANY input, (n log n - n) is faster than (n log n).

In its best case, my algorithm takes O(n). Worst case is O(log n!).
So far, I haven't heard of anything faster than this.

One bad thing about my algorithm is that it requires a kind of linked list, which takes a little extra memory.

Unfortunately, I cannot post my algorithm here at the moment.
First of all, I want to show it to my university professor.
Quote:Original post by cpp_boy
Unfortunately, I cannot post my algorithm here at the moment.
Why not?
a) O(log(n!)) = O(n log(n) - n) = O(n log(n))
b) log(n!) ≈ n log(n) - n

BIG DIFFERENCE
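Point (b) is Stirling's approximation. As a quick numerical sanity check (a Python sketch of my own, not from the thread), log(n!) can be computed without overflow via `math.lgamma`:

```python
import math

# Stirling: log(n!) ~ n*log(n) - n (natural logs; the base only shifts a constant).
for n in [10, 100, 1000, 10000]:
    exact = math.lgamma(n + 1)        # log(n!) without computing n! itself
    stirling = n * math.log(n) - n
    print(f"n={n:>6}  log(n!)={exact:12.2f}  n*log(n)-n={stirling:12.2f}  "
          f"ratio={exact / stirling:.4f}")
```

The ratio tends to 1 as n grows, which is exactly why the two expressions land in the same O-class.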

Sorry, I feel the need to clarify your notation. No argument as to the speed.
But you keep using O() notation, and O() notation covers a class of functions.
Not all O(n log(n)) functions run at the same speed. There are lots of sorting functions
that beat qsort for some inputs.

I'm not arguing that your sort is the same speed as qsort. But you keep trying to make the notation say something
it isn't ever going to say. As far as you have said:
the best case is x*n, the worst case is x*log(n!).
So you likely beat the best case of qsort, and didn't hit the x*n^2 worst case of qsort.
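That x*n^2 worst case is easy to demonstrate: a quicksort that always picks the first element as pivot degenerates on already-sorted input, because every partition is maximally lopsided. A Python sketch of my own (not the poster's algorithm):

```python
import random

def naive_quicksort(a, stats):
    """Quicksort with a first-element pivot. On already-sorted input every
    partition puts all remaining elements on one side, so comparisons grow as n^2."""
    if len(a) <= 1:
        return list(a)
    pivot, rest = a[0], a[1:]
    stats["cmps"] += len(rest)              # one pivot comparison per element
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    return naive_quicksort(lo, stats) + [pivot] + naive_quicksort(hi, stats)

for name, data in [("sorted", list(range(300))),
                   ("random", random.sample(range(300), 300))]:
    stats = {"cmps": 0}
    naive_quicksort(data, stats)
    print(name, stats["cmps"])    # sorted input hits n*(n-1)/2 = 44850 comparisons
```

Production qsort implementations avoid this with median-of-three or randomized pivots, which is why the worst case is rarely hit in practice.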

Like others have said, you may have rediscovered another sorting algorithm, or actually come up with something new.

What kinds of benchmarks have you done? Sorted, partially sorted, totally random, reverse sorted, all the same element?

If you really have done extensive benchmarking, I hope to see what you have done after you have had whoever look it over.
There is a chance you have something that could be good in some applications.
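The input classes listed above can be generated systematically. A small Python sketch (the function name and 10% shuffle factor are my own choices):

```python
import random

def make_inputs(n, seed=0):
    """Generate the standard input classes used to benchmark sorting algorithms."""
    rng = random.Random(seed)
    rand = [rng.randrange(n) for _ in range(n)]
    partially = sorted(rand)
    # Swap ~10% of positions to get a "partially sorted" input.
    for _ in range(n // 10):
        i, j = rng.randrange(n), rng.randrange(n)
        partially[i], partially[j] = partially[j], partially[i]
    return {
        "sorted": list(range(n)),
        "reverse sorted": list(range(n, 0, -1)),
        "totally random": rand,
        "partially sorted": partially,
        "all the same element": [42] * n,
    }

for name, data in make_inputs(1000).items():
    print(name, data[:5])
```

Timing a sort against all five classes catches both best-case and worst-case behavior, which a purely random benchmark would miss.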
It might be a good idea to post your algorithm here before presenting it, because there is a non-zero chance (mildly put) that it is a variation of an already-known algorithm. No one will try to 'steal' your algorithm (if anything, you'll help the programming community if you have made a real breakthrough).

I have a small benchmark written in C sitting on my computer, which times several sorting algorithms and prints the results. I could run some benchmarks for you if you are interested.


Quote:Original post by cpp_boy
Some here said that O(log n!) is approximately O(n log n - n).
For ANY input, (n log n - n) is faster than (n log n).


Despite your claims, you do not understand O notation. O(n log n - n) equals O(n log n). Really, it does. They are the same set.

QuickSort is an O(n log n - n) function. QuickSort is an O(log n!) function. All O(n log n) functions are O(log n!) functions.

O(n log(n)) is a set of functions.
O(log n!) is a set of functions.
As it happens,
O(n log(n)) is the same set as O(log n!).

In other words, there exist constants K and C such that for all sufficiently large n:
K * n log(n) < log(n!) < C * n log(n)

So, based on the knowledge that your sorting algorithm is O(log n!), no conclusion about its relative performance compared to standard sorting algorithms can be made.

I'm not saying your algorithm is faster or slower than other functions. I'm saying that your function being O(log n!) doesn't tell you anything at all about its relative speed compared to other sorting functions.

Oh, and beware the Turing tarpit. Just because two things are equivalent does not make them the same.
I suppose a more layman's way of saying the same thing is that Big-O notation "gets rid of" lower-order terms and terms that have a diminishing relative effect as n tends toward infinity. For instance, for some polynomial x^n + x^(n-1) + x^(n-2) + ... + x^2 + x^1, Big-O notation would classify this as O(x^n), without regard to the other terms. So the "real" polynomial associated with that complexity could have been the polynomial I posted, or maybe it was just x^n. The idea is you don't know, so it makes no sense to compare complexities in that way and say that one is faster than another, because you have no idea what other terms used to be there but were "discarded" because of the Big-O.
I know perfectly well what the O, Theta, and Omega notations are.
But still, the average case of my algorithm is somewhere between n and log(n!).
And this is really fast. It beats merge and quick in any possible case!
If your algorithm determines the sort order by pairwise comparisons (as the majority of algorithms, such as quicksort and merge sort, do) then it can't do better than O(n log(n)). If not, then you may have something similar to radix sort.
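For example, LSD radix sort on non-negative integers does O(k*n) work for k-byte keys, sidestepping the comparison lower bound entirely. A minimal Python sketch (mine, not from the thread):

```python
def radix_sort(nums):
    """LSD radix sort for non-negative integers, one byte per pass.
    Runs in O(k*n) for k-byte keys; no element-vs-element comparisons."""
    if not nums:
        return []
    nums = list(nums)
    shift = 0
    while max(nums) >> shift:                # more digits remain in some key
        buckets = [[] for _ in range(256)]
        for x in nums:
            buckets[(x >> shift) & 0xFF].append(x)   # stable: keeps prior order
        nums = [x for bucket in buckets for x in bucket]
        shift += 8
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# -> [2, 24, 45, 66, 75, 90, 170, 802]
```

The trade-off is that it only applies to keys with digit structure (integers, fixed-length strings), not to arbitrary comparable types.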
Quote:Original post by Fiddler
It might be a good idea to post your algorithm here before presenting it, because there is a non-zero chance (mildly put) that it is a variation of an already-known algorithm. No one will try to 'steal' your algorithm (if anything, you'll help the programming community if you have made a real breakthrough).


I'd bet 3-to-1 odds that he shows this algorithm to his professor, gets told its real name, and then never comes back to this thread.
Quote:Original post by cpp_boy
I know perfectly well what the O, Theta, and Omega notations are.
But still, the average case of my algorithm is somewhere between n and log(n!).
And this is really fast. It beats merge and quick in any possible case!


Natural merge sort is n in the best case and n log(n) in the worst, so it has the same properties as your algorithm.
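For reference, natural merge sort gets its O(n) best case by exploiting runs that already exist in the input: already-sorted data is a single run and needs no merging. A minimal Python sketch of my own (real implementations such as Timsort are far more refined):

```python
def merge(left, right):
    """Standard two-way merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def natural_merge_sort(a):
    """Split into maximal ascending runs, then merge pairs of runs until one
    remains. Sorted input is one run: O(n). Worst case: O(n log n)."""
    a = list(a)
    if len(a) < 2:
        return a
    runs, start = [], 0
    for i in range(1, len(a)):
        if a[i] < a[i - 1]:          # run boundary: order breaks here
            runs.append(a[start:i])
            start = i
    runs.append(a[start:])
    while len(runs) > 1:
        merged = [merge(runs[i], runs[i + 1]) for i in range(0, len(runs) - 1, 2)]
        if len(runs) % 2:
            merged.append(runs[-1])  # odd run carries over to the next pass
        runs = merged
    return runs[0]

print(natural_merge_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 3, 4, 5, 6, 9]
```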

This topic is closed to new replies.
