How do you correctly pronounce big-O notation?

I've been reading lots of data structures and algorithms books, but I have yet to take a class. So I have a simple question: how do you pronounce big-O notation, such as O(1), O(n), etc.? The only reason I ask is that I'll be giving a presentation and I don't want to say it incorrectly.

Every computer scientist I have ever talked to pronounces it "O of n" or "big O of n squared". "O to the n" would imply exponentiation.

O(n) == "Order n" or "Order of n"
O(n^2) == "Order n squared"... etc.

Big-O is meant to indicate the highest-order term in the expression representing the time or space complexity of an algorithm. If the time complexity is 3n^2 + log n + 5n + 20, or some other arbitrary expression, then the algorithm runs in "order n squared" time. The n^2 term dominates as n approaches infinity, so you can generally disregard the others.
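
To see that domination concretely, here's a throwaway Python sketch (the cost function is just the example expression above, not anything standard):

import math

def cost(n):
    # The example expression from above: 3n^2 + log n + 5n + 20.
    return 3 * n**2 + math.log(n) + 5 * n + 20

for n in (10, 1000, 100000, 10000000):
    # The ratio settles toward the constant 3, i.e. cost(n) grows like n^2,
    # which is why the algorithm is "order n squared".
    print(n, cost(n) / n**2)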

I was just thinking about it, and sometimes you can use entirely different language.

For example, an O(log n) algorithm can be said to have "order log n" complexity, or you can just say that it has logarithmic complexity.

O(n) has linear complexity. O(n^k) has polynomial complexity and O(k^n) has exponential complexity (for a constant k)... etc.


Every computer scientist I have spoken to pronounced it "Ordo", like O(n^2) goes "Ordo n squared". So did all my algebra and calculus lecturers. Might there be a difference in language?

Quote:
Original post by sordid
All my professors in University prefix "Big Oh of". I suspect it's because it is a capital O rather than a lowercase O, to avoid confusion.


Ditto.

People call it "big-O notation" because "small-o" has another meaning, mathematically: it represents something that is negligible compared to the term in parentheses:

o(n) is "negligible compared to n". It's often used in series expansions: cos(x) = 1 - (x^2)/2 + o(x^2) and spares you from having to specify the exact power the remaining term should be comparable to (here O(x^4)).
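
As a quick numerical sanity check of that expansion (a throwaway Python sketch, not part of the original post), the remainder cos(x) - (1 - x^2/2) does shrink like x^4; the ratio tends to 1/24, the next Taylor coefficient:

import math

for x in (0.1, 0.01, 0.001):
    remainder = math.cos(x) - (1 - x**2 / 2)
    # remainder / x^4 approaches 1/24 ~= 0.04166..., confirming that the
    # o(x^2) term here is in fact O(x^4).
    print(x, remainder / x**4)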


Generally here, we use "order of n" or "linear time" (time, because there's also space complexity) or just "it's linear", since it's usually obvious that you're talking about complexity. For complexities other than O(n), sometimes we just say "it's n log n", again because the meaning is obvious (whereas saying "it's n" doesn't sound quite right).

Quote:
Original post by Fruny
[...] we use "order of n" or "linear time" (time, because there's also space complexity) or just "it's linear" [...]

I concur with Fruny. Use "big-O of n squared" if you want to be precise, or one of the shorter versions if you want to be quick about it.

The funky thing is, many people use "big-O" when what they really mean is "big-theta". For instance: Quicksort is O(2^n), but Θ(n log n). (It is also O(n log n), of course.)

Quote:
Original post by Sneftel
The funky thing is, many people use "big-O" when what they really mean is "big-theta". For instance: Quicksort is O(2n), but Θ(n log n). (It is also O(n log n), of course.)

Bad example: Quicksort is O(n^2) in the worst case. Replace "Quicksort" with "Mergesort" and it's correct.

There's another problem, of course... saying "big-O" without saying "best case" or "worst case" or "average case", all of which are important in different situations. But good point.

What Fruny said. If you say an algorithm runs in order n time, it doesn't mean it will take any specific amount of time (a search, for example, could find its goal on the first "operation"). It simply means the algorithm is guaranteed to finish within that time bound. It's an upper bound.

Big-Omega and Big-Theta are, of course, entirely different beasts.
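
For reference, the textbook definitions (stated this way in any algorithms text, e.g. CLRS):

\begin{align*}
f(n) = O(g(n)) &\iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \text{ for all } n \ge n_0 \\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 : f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0 \\
f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
\end{align*}

In particular, n <= 1*n^2 for all n >= 1, so anything that is O(n) is also O(n^2); that is why statements like "mergesort is O(2^n)" are technically true, just uninformative.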

By the way...

Trap, how did you come to the conclusion that mergesort is O(2^n)? I think its upper bound and lower bound are n log(n).

Quote:
Original post by smitty1276
Trap, how did you come to the conclusion that mergesort is O(2^n)? I think its upper bound and lower bound are n log(n).

No, that's the point. A function which has time complexity O(n log n) also has time complexity O(2^n). It is, as you said, an upper bound.

No, the upper bound AND the lower bound are the same, O(n log n). That's why it's an optimal algorithm.

Wikipedia mentions that the best case is about half of the worst case, but it is still n log n.

Unless you are meaning to say that an algorithm that is guaranteed to run in n log n time is ALSO guaranteed to run in exponential time. This is of course true, but is somewhat meaningless.

EDIT:
Also, that can't be what Trap meant.

Quote:
Original post by trap
Quote:
Original post by Sneftel
The funky thing is, many people use "big-O" when what they really mean is "big-theta". For instance: Quicksort is O(2^n), but Θ(n log n). (It is also O(n log n), of course.)

Bad example: Quicksort is O(n^2) in the worst case. Replace "Quicksort" with "Mergesort" and it's correct.

Quote:
Original post by smitty1276
[...]Unless you are meaning to say that an algorithm that is guaranteed to run in n log n time is ALSO guaranteed to run in exponential time. This is of course true, but is somewhat meaningless.[...]
This is exactly why Big-O notation isn't practical: it defines an upper bound. If an algorithm is O(n), that means it will never be worse than c*n after some point n0.

In practice, either the parts that were abstracted away, or parts that are entirely implementation-dependent and not part of the algorithm itself (such as the cache on modern computers and the need to cater to its whims for efficiency), will dominate. An algorithm that is O(ln(n)) isn't better than one that is O(n) if each step of the former involves reading 100GB of data while each step of the latter is only a single register modification, any more than an O(n) algorithm (radix sort, for example) is better than an O(n ln(n)) comparison sort when the former requires more extra memory than is available on the target device. Not all orders of complexity were created equal =-)
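
Here's a toy Python sketch of that trade-off (the per-step costs are made-up numbers, purely illustrative): an O(log n) algorithm with an enormous per-step cost loses to an O(n) algorithm with a tiny one until n gets astronomically large:

import math

SLOW_STEP = 1e9  # hypothetical cost of one "read 100GB of data" step
FAST_STEP = 1.0  # hypothetical cost of one register modification

for n in (10**3, 10**6, 10**9, 10**12):
    log_algo = SLOW_STEP * math.log2(n)  # O(log n), but expensive steps
    lin_algo = FAST_STEP * n             # O(n), but cheap steps
    print(n, "log n wins" if log_algo < lin_algo else "linear wins")

The crossover doesn't happen until somewhere past n = 10^10; below that, the "worse" O(n) algorithm is the better choice.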

Extrarius,
Quote:
An algorithm that is O(ln(n)) isn't better than one that is O(n) if each step of the former involves reading 100GB of data while each step of the latter is only a single register modification

Big-O characterizes that perfectly well. You know that the time to perform a single register modification is a small constant, regardless of the algorithm. However, as the size of the dataset grows to 100GB or larger (say 10^11 elements, assuming an "element" is 1 byte for simplicity), you would much, much rather use the algorithm whose step count grows like log n than the one whose step count grows like n^2: log2(10^11) is about 37, while (10^11)^2 is 10^22. That's the whole point: Big-O indicates how the time complexity behaves as the dataset grows arbitrarily large. That's its whole purpose. Of course, if n is small, none of this matters.

I'm not sure I follow you. How is this ambiguous at all?

Also, saying that an O(n) algorithm is also an O(n^2) algorithm is not really correct. It is true that an O(n) algorithm is guaranteed to finish within an O(n^2) bound, but it is not mathematically correct to say that an order-1 polynomial is an order-2 polynomial.

And regardless, it still doesn't answer my question of why Trap said mergesort was O(2^n). I suppose he just mistyped.

smitty1276: You misunderstood me. The "100GB" is not part of the data being operated on by the algorithm (for example, it could be a lookup table used as a 'sub-step' when manipulating an array), and would not be included in n in any way. It would be one of the many things that are thrown out because of the way asymptotic notation works. At best, it would be included as some kind of 'm' term (one based on something other than the number of elements) that can be left out without invalidating the Big-O expression.

Big O is great for theoretical analysis and for studying similar variations of a single algorithm, but it leaves out all the important details when used for practical comparisons of vastly different algorithms meant to complete the same task.

Also, Big O notation only indicates an upper bound. It says 'this algorithm will not run worse than _ as long as certain conditions are met'. Thus, it is entirely correct to say an O(n) algorithm is O(n^2) and is O(2^n). It isn't as descriptive or as useful, certainly, but it is just as correct.

Unless he made a quick edit, Trap actually stated that Quicksort is O(n²) and Mergesort is O(n log n), which is correct for a bog-standard quicksort (there are various clever techniques you can apply to Quicksort, but I can't remember whether any of them actually affect the complexity or just the probability of hitting the worst case). The worst case for Quicksort occurs when there are no duplicates and you always select the smallest or largest element as the pivot. In that case each level of recursion sorts just one element and passes on n - 1 elements to the next level of recursion, giving a time complexity of O(n²).
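
Here's a minimal Python sketch of that worst case (a naive first-element-pivot quicksort with a comparison counter; purely illustrative, not anyone's posted code):

import random

def quicksort(a, counter):
    # Naive quicksort: always pivot on the first element.
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    counter[0] += len(rest)  # elements partitioned at this level
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

n = 500
for label, data in (("random", random.sample(range(n), n)),
                    ("sorted", list(range(n)))):
    counter = [0]
    quicksort(data, counter)
    print(label, counter[0])  # random: roughly n log n; sorted: n(n-1)/2 = 124750

On already-sorted input the left partition is always empty, so the recursion degrades to the quadratic case described above.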

Enigma

Quote:
Unless you are meaning to say that an algorithm that is guaranteed to run in n log n time is ALSO guaranteed to run in exponential time. This is of course true, but is somewhat meaningless.

Not as meaningless as you'd think. Several algorithms were initially shown to have a particular time or space upper bound, and that upper bound was mathematically proven. Then, later, someone else came along and proved an even lower upper bound. That, of course, didn't invalidate the earlier reasoning; both bounds were correct. The fact that an algorithm can have multiple upper bounds is essential to complexity theory.

Quote:
No, the upper bound AND the lower bound are the same, O(n log n). That's why it's an optimal algorithm.

No, O(n lg n) is just one upper bound; for any algorithm there exist infinitely many upper bounds.
Also, what do you mean by saying that the lower bound is O(n lg n)? Do you mean Omega(n lg n) or something else?
