How to correctly pronounce big-oh notation?

People call it "big-O notation" because "small-o" has another meaning, mathematically: it denotes something that is negligible compared to the term in parentheses:

o(n) means "negligible compared to n". It's often used in series expansions: cos(x) = 1 - (x^2)/2 + o(x^2), which spares you from having to specify the exact power the remainder term is comparable to (here, O(x^4)).


Generally, here we use "order of n" or "linear time" (the "time" matters, because there is also space complexity) or just "it's linear", since it's usually obvious you're talking about complexity. For complexities other than O(n), we sometimes just say "it is n log n", again because the meaning is obvious (whereas saying "it's n" doesn't sound quite right).
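For reference, a sketch of the standard textbook definitions behind the two notations (my paraphrase in LaTeX, not part of Fruny's post):

```latex
% big-O: f is bounded above by g, up to a constant factor, from some point on.
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 : |f(n)| \le c\,|g(n)| \ \text{for all } n \ge n_0
% little-o: f vanishes relative to g near the point of interest
% (x -> 0 in the cosine expansion above).
f(x) = o(g(x)) \iff \lim_{x \to a} \frac{f(x)}{g(x)} = 0
```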
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." — Brian W. Kernighan
Quote:Original post by Fruny
People call it "big-O notation" because "small-o" has another meaning, mathematically [...]

I concur with Fruny. Use "big-O of n squared" if you want to be precise, or one of the shorter versions if you want to be quick about it.
- k2
The funky thing is, many people use "big-O" when what they really mean is "big-theta". For instance: Quicksort is O(2^n), but Θ(n log n). (It is also O(n log n), of course.)
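For readers who haven't met the other symbols: big-Theta is the "tight" version. Sketching the standard definitions in LaTeX (my addition, not Sneftel's):

```latex
% big-Omega: f is bounded below by g, up to a constant factor, from some point on.
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 : f(n) \ge c\,g(n) \ \text{for all } n \ge n_0
% big-Theta: a tight bound, i.e. both O and Omega at once.
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n))
```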
Quote:Original post by Sneftel
The funky thing is, many people use "big-O" when what they really mean is "big-theta". For instance: Quicksort is O(2^n), but Θ(n log n). (It is also O(n log n), of course.)

Bad example: Quicksort is O(n^2) in the worst case. Replace "Quicksort" with "Mergesort" and it's correct.
There's another problem, of course... saying "big-O" without saying "best case" or "worst case" or "average case", all of which are important in different situations. But good point.
Without qualification, it means "worst case" by definition.
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." — Brian W. Kernighan
What Fruny said. If you say an algorithm runs in order-n time, it doesn't mean it will take any specific amount of time (a search, for example, could find its goal on the first "operation"). It simply means the algorithm is guaranteed to complete within that time, up to a constant factor. It's an upper bound.
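To make the upper-bound point concrete, here is a minimal Python sketch (function name and data invented for illustration): linear search is O(n), yet the best case returns after a single comparison.

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent.

    Worst case: len(items) comparisons, hence O(n).
    """
    for i, item in enumerate(items):
        if item == target:
            return i          # best case: target is first, one comparison
    return -1                 # worst case: target absent, n comparisons

data = list(range(1_000_000))
linear_search(data, 0)        # finishes after one comparison
linear_search(data, -1)       # scans all one million elements
```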

Big-Omega and Big-Theta are, of course, entirely different beasts.

By the way...

Trap, how did you come to the conclusion that mergesort is O(2^n)? I think its upper bound and lower bound are n log(n).
Quote:Original post by smitty1276
Trap, how did you come to the conclusion that mergesort is O(2^n)? I think its upper bound and lower bound are n log(n).

No, that's the point. A function which has time complexity O(n log n) also has time complexity O(2^n). It is, as you said, an upper bound.
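Concretely (my note, not Sneftel's): n log n ≤ 2^n for every n ≥ 1, so any bound of the form c·n log n is also a bound of the form c·2^n:

```latex
n \log n \le 2^n \ \ (n \ge 1)
\quad\Longrightarrow\quad
f(n) = O(n \log n) \implies f(n) = O(2^n)
```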
No, the upper bound AND the lower bound are the same, n log n (i.e. it's Θ(n log n)). That's why it's an optimal algorithm.

Wikipedia mentions that the best case is about 1/2 of the worst case. But it is still n log n.
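That factor-of-two claim is easy to check empirically. A hedged Python sketch, using my own toy mergesort (not code from the thread), that counts element comparisons on already-sorted versus random input:

```python
import random

def merge_sort(a):
    """Sort a; return (sorted_list, number_of_element_comparisons)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, c_left = merge_sort(a[:mid])
    right, c_right = merge_sort(a[mid:])
    merged, c_merge, i, j = [], 0, 0, 0
    while i < len(left) and j < len(right):
        c_merge += 1                      # one comparison per loop iteration
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]        # leftovers are copied without comparing
    return merged, c_left + c_right + c_merge

n = 1024
_, c_sorted = merge_sort(list(range(n)))              # near best case
_, c_random = merge_sort(random.sample(range(n), n))  # close to worst case
print(c_sorted, c_random, c_sorted / c_random)        # ratio comes out near 1/2
```

Both counts stay proportional to n log n; only the constant in front changes.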

Unless you are meaning to say that an algorithm that is guaranteed to run in n log n time is ALSO guaranteed to run in exponential time. This is of course true, but is somewhat meaningless.

EDIT: Also, that can't be what Trap meant.

Quote:Original post by trap
Quote:Original post by Sneftel
The funky thing is, many people use "big-O" when what they really mean is "big-theta". For instance: Quicksort is O(2^n), but Θ(n log n). (It is also O(n log n), of course.)

Bad example: Quicksort is O(n^2) in the worst case. Replace "Quicksort" with "Mergesort" and it's correct.

Quote:Original post by smitty1276
[...]Unless you are meaning to say that an algorithm that is guaranteed to run in n log n time is ALSO guaranteed to run in exponential time. This is of course true, but is somewhat meaningless.[...]
This is exactly why Big-O notation isn't practical on its own: it defines an upper bound. If an algorithm is O(n), that means it will never be worse than c*n beyond some point n0. In practice, either the parts that were abstracted away, or parts that are entirely implementation-dependent and not part of the algorithm itself (such as the cache on modern computers and the need to cater to its whims for efficiency), will dominate. An algorithm that is O(ln(n)) isn't better than one that is O(n) if each step of the former involves reading 100GB of data while each step of the latter is a single register modification, any more than an O(n) algorithm (radix sort, for example) is better than an O(n ln(n)) one (a comparison sort) when the former requires more extra memory than is available on the target device. Not all orders of complexity were created equal =-)
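A toy Python illustration of that trade-off, with per-step costs invented purely for the example (the numbers are assumptions, not measurements): the O(log n) algorithm only wins once n is large enough to amortize its enormous per-step constant.

```python
import math

# Hypothetical per-step costs, chosen only to illustrate the point:
# the O(log n) algorithm pays a huge cost per step (say, a slow disk read),
# the O(n) algorithm a tiny one (say, a register operation).
COST_PER_LOG_STEP = 10_000.0   # assumed, arbitrary units
COST_PER_LIN_STEP = 1.0        # assumed, arbitrary units

for n in (10, 1_000, 100_000, 10_000_000, 1_000_000_000):
    cost_log = COST_PER_LOG_STEP * math.log2(n)   # total cost, O(log n) algorithm
    cost_lin = COST_PER_LIN_STEP * n              # total cost, O(n) algorithm
    winner = "O(log n)" if cost_log < cost_lin else "O(n)"
    print(f"n={n:>13,}  log-algo={cost_log:>12,.0f}  lin-algo={cost_lin:>15,.0f}  winner: {winner}")
```

Asymptotics win eventually, but "eventually" can lie far past the problem sizes you actually face.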
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk

