GDKnight (posted December 5, 2005): I've been reading lots of data structures and algorithms books, but I have yet to take a class. So I have a simple question: how do you pronounce big-O notation, such as O(1), O(n), etc.? The only reason I ask is that I'll be giving a presentation and I don't want to say it incorrectly.
Nice Coder (posted December 5, 2005): "O to the one", "O to the n", "O to the log 2 n", or thereabouts, IIRC. From, Nice Coder
Sneftel (posted December 5, 2005): Every computer scientist I have ever talked to pronounces it "O of n" or "big O of n squared". "O to the n" would imply exponentiation.
smitty1276 (posted December 5, 2005): O(n) == "order n" or "order of n"; O(n^2) == "order n squared"; etc. Big-O is meant to indicate the highest-order term in the polynomial representing the time or space complexity of an algorithm. If the time complexity is 3n^2 + log n + 5n + 20, or some other arbitrary polynomial, then the algorithm runs in "order n squared" time. The n^2 term dominates as n approaches infinity, so you can generally disregard the others.
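That dominance is easy to check numerically. A minimal sketch (not from the thread), using the hypothetical polynomial 3n^2 + log n + 5n + 20 from the post above: the ratio f(n)/n^2 settles toward the leading coefficient 3 as n grows, which is why the lower-order terms can be disregarded.

```python
import math

def f(n):
    # the hypothetical complexity polynomial from the post
    # (math.log is the natural log; the base doesn't affect the argument)
    return 3 * n**2 + math.log(n) + 5 * n + 20

# As n grows, f(n) / n^2 approaches the leading coefficient 3,
# so the lower-order terms become negligible: f is O(n^2).
for n in [10, 1000, 100000]:
    print(n, f(n) / n**2)
```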
smitty1276 (posted December 5, 2005): I was just thinking about it, and sometimes you can use entirely different language altogether. For example, an O(log n) algorithm can be said to have "order log n" complexity, or you can just say that it has logarithmic complexity. O(n) has linear complexity, O(n^x) has polynomial complexity, O(x^n) has exponential complexity, etc.
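To put rough numbers on two of those classes, here's a small illustration (a sketch with idealized step counts, not part of the original post): a worst-case scan of an unsorted array touches all n elements, while a binary search on a sorted array halves the range each step, so its step count grows logarithmically.

```python
import math

def linear_search_steps(n):
    # worst case for an unsorted scan: look at every element -> O(n), "linear"
    return n

def binary_search_steps(n):
    # worst case for binary search: halve the range each step -> O(log n), "logarithmic"
    return math.ceil(math.log2(n))

# the gap widens dramatically as n grows
for n in [16, 1024, 1_000_000]:
    print(n, linear_search_steps(n), binary_search_steps(n))
```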
Staffan E (posted December 5, 2005): Every computer scientist I have spoken to pronounces it "ordo", so O(n^2) goes "ordo n squared". So did all my algebra and calculus lecturers. Might there be a difference in language?
Promit (posted December 5, 2005): "Order n", "order n squared", "order n log n", etc. I usually stick to "constant time" for O(1), but that's not really that important.
Anonymous Poster (posted December 5, 2005): It's pronounced like "Oh my cactus".
sordid (posted December 5, 2005): All my professors at university prefix it with "big O of". I suspect that's because it is a capital O rather than a lowercase o, to avoid confusion.
Billr17 (posted December 5, 2005): Quoting sordid: "All my professors in University prefix 'Big Oh of' [...] to avoid confusion." Ditto.
Fruny (posted December 5, 2005): People call it "big-O notation" because "small-o" has another meaning, mathematically: it represents something that is negligible compared to the term in parentheses. o(n) is "negligible compared to n". It's often used in series expansions: cos(x) = 1 - (x^2)/2 + o(x^2), which spares you from having to specify the exact power the remaining term should be comparable to (here O(x^4)). Generally, we say "order of n", or "linear time" (because there's also space complexity), or just "it's linear", since it's usually obvious you're talking about complexity. For complexities other than O(n), sometimes we just say "it is n log n", again because the meaning is obvious (whereas saying "it's n" doesn't sound quite right).
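That expansion can be verified numerically (an illustration, not part of the original post): the remainder cos(x) - (1 - x^2/2) is in fact x^4/24 minus higher-order terms, so dividing it by x^4 should settle near 1/24 as x shrinks, which is exactly what "the remainder is O(x^4)" promises.

```python
import math

# remainder of the expansion cos(x) = 1 - x^2/2 + o(x^2);
# the next term of the series is x^4/24, so remainder / x^4
# should approach 1/24 ~= 0.0416667 as x -> 0
for x in [0.1, 0.01]:
    r = math.cos(x) - (1 - x**2 / 2)
    print(x, r / x**4)
```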
kSquared (posted December 5, 2005): Quoting Fruny: "People call it 'big-O notation' because 'small-o' has another meaning, mathematically [...]" I concur with Fruny. Use "big-O of n squared" if you want to be precise, or one of the shorter versions if you want to be quick about it.
Sneftel (posted December 5, 2005): The funky thing is, many people use "big-O" when what they really mean is "big-Theta". For instance: Quicksort is O(2^n), but Θ(n log n). (It is also O(n log n), of course.)
Trap (posted December 5, 2005): Quoting Sneftel: "For instance: Quicksort is O(2^n), but Θ(n log n)." Bad example: Quicksort is O(n^2) in the worst case. Replace "Quicksort" with "Mergesort" and it's correct.
Sneftel (posted December 5, 2005): There's another problem, of course... saying "big-O" without saying "best case", "worst case", or "average case", all of which are important in different situations. But good point.
Fruny (posted December 5, 2005): Without qualification, it means "worst case" by definition.
smitty1276 (posted December 5, 2005): What Fruny said. If you say an algorithm runs in order n time, it doesn't mean that it will take any specific amount of time (a search, for example, could find its goal on the first "operation"). It simply means that the algorithm is guaranteed to run within that time. It's an upper bound. Big-Omega and Big-Theta are, of course, entirely different beasts. By the way... Trap, how did you come to the conclusion that mergesort is O(2^n)? I think its upper bound and lower bound are both n log(n).
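As a sanity check on the n log n claim for mergesort (a quick sketch, not from the thread): a top-down mergesort instrumented to count element comparisons stays below n * log2(n) comparisons even on adversarial input.

```python
import math
import random

def merge_sort(a, counter):
    # top-down mergesort; element comparisons are tallied in counter[0]
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

n = 4096
counter = [0]
result = merge_sort([random.random() for _ in range(n)], counter)
print(counter[0], "comparisons; n log2 n =", n * math.log2(n))
```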
Sneftel (posted December 6, 2005): Quoting smitty1276: "Trap, how did you come to the conclusion that mergesort is O(2^n)? I think its upper bound and lower bound are n log(n)." No, that's the point. A function which has time complexity O(n log n) also has time complexity O(2^n). It is, as you said, an upper bound.
smitty1276 (posted December 6, 2005): No, the upper bound AND the lower bound are the same, O(n log n). That's why it's an optimal algorithm. See Wikipedia; it mentions that the best case is about half of the worst case, but it is still n log n. Unless you are meaning to say that an algorithm guaranteed to run in n log n time is ALSO guaranteed to run in exponential time. This is of course true, but somewhat meaningless. EDIT: Also, that can't be what Trap meant. Quoting Trap: "Bad example: Quicksort is O(n^2) in the worst case. Replace 'Quicksort' with 'Mergesort' and it's correct."
Extrarius (posted December 6, 2005): Quoting smitty1276: "[...] Unless you are meaning to say that an algorithm that is guaranteed to run in n log n time is ALSO guaranteed to run in exponential time. This is of course true, but is somewhat meaningless. [...]" This is exactly why big-O notation isn't practical: it defines an upper bound. If an algorithm is O(n), that means it will never be worse than c*n after some point n_0. In practice, either the parts that were abstracted away or the parts that are entirely implementation-dependent and not part of the algorithm itself (such as the cache on modern computers and the need to cater to its whims for efficiency) will dominate. An algorithm that is O(ln(n)) isn't better than one that is O(n) if each step of the former involves reading 100GB of data while each step of the latter is only a single register modification, any more than an O(n) algorithm (radix sort, for example) is better than an O(n ln(n)) one (comparison sort) when the former requires more extra memory than is available on the target device. Not all orders of complexity were created equal =-)
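The point about hidden constants can be made concrete with made-up per-step costs (both numbers below are invented purely for illustration): the asymptotically "worse" O(n) algorithm wins for small n, and only past some crossover does the O(log n) one pull ahead, which is exactly the "after some point n_0" in the definition.

```python
import math

COST_PER_LOG_STEP = 1_000_000  # made-up: each step reads a huge lookup table
COST_PER_LIN_STEP = 1          # made-up: each step is a single register op

def cost_log(n):
    # total cost of the O(log n) algorithm with its huge per-step constant
    return COST_PER_LOG_STEP * math.log2(n)

def cost_lin(n):
    # total cost of the O(n) algorithm with its tiny per-step constant
    return COST_PER_LIN_STEP * n

print(cost_lin(1000) < cost_log(1000))    # True: for small n, O(n) is cheaper here
print(cost_lin(10**9) < cost_log(10**9))  # False: for large n, O(log n) wins
```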
smitty1276 (posted December 6, 2005): Extrarius, quoting you: "An algorithm that is O(ln(n)) isn't better than one that is O(n) if each step of the former involves reading 100GB of data while each step in the latter is only a single register modification." Big-O perfectly characterizes that. You know that the time to perform a register modification will be negligible regardless of the algorithm. However, as the size of the dataset grows to 100GB or larger (assuming an "element" is 1 byte for simplicity), you would much, much rather use an algorithm with an upper bound of O(log 100000000000) than one with an upper bound of O(100000000000^2). That's the whole point: big-O indicates the time complexity as the dataset grows arbitrarily large. That's its whole purpose. Of course it doesn't tell you much if n is small. I'm not sure I follow you; how is this ambiguous at all? Also, saying that an O(n) algorithm is also an O(n^2) algorithm is not really correct. It is true that the O(n) algorithm is guaranteed to run within O(n^2) time, but it is not mathematically correct to say that an order-1 polynomial is an order-2 polynomial. And regardless, it still doesn't answer my question of why Trap said mergesort was O(2^n). I suppose he just mistyped.
Extrarius (posted December 6, 2005): smitty1276: You misunderstood me. The "100GB" is not part of the data being operated on by the algorithm (think of a lookup table used as a 'sub-step' when manipulating an array), and would not be included in n in any way. It would be one of the many things thrown out because of the way asymptotic notation works. At best, it would be included as some kind of 'm' term (one based on something other than the number of elements) that can be left out without invalidating the big-O expression. Big-O is great for theoretical analysis and for studying similar variations of a single algorithm, but it leaves out all the important details when used for practical comparisons of vastly different algorithms meant to complete the same task. Also, big-O notation only indicates an upper bound. It says "this algorithm will not run worse than _ as long as certain conditions are met". Thus, it is entirely correct to say an O(n) algorithm is O(n^2) and is O(2^n). It isn't as descriptive or as useful, certainly, but it is just as correct.
Enigma (posted December 6, 2005): Unless he made a quick edit, Trap actually stated that Quicksort is O(n^2) and Mergesort is O(n log n), which is correct for a bog-standard Quicksort (there are various clever techniques you can apply to Quicksort, but I can't remember whether any of them actually affect the complexity or just the probability of hitting the worst case). The worst case for Quicksort occurs when there are no duplicates and you always select the smallest or largest element as the pivot. In that case each level of recursion sorts just one element and passes on n - 1 elements to the next level, giving a time complexity of O(n^2). Enigma
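That worst case is easy to reproduce (a sketch, not part of the original post): a naive quicksort that always picks the first element as pivot performs exactly n(n-1)/2 partition comparisons on already-sorted input, versus roughly n log2 n on shuffled input.

```python
import random
import sys

def quicksort_comparisons(a):
    # naive quicksort, first element as pivot; returns the comparison count
    # (counting one comparison per non-pivot element per partition)
    if len(a) <= 1:
        return 0
    pivot = a[0]
    less = [x for x in a[1:] if x < pivot]
    geq = [x for x in a[1:] if x >= pivot]
    return (len(a) - 1) + quicksort_comparisons(less) + quicksort_comparisons(geq)

# sorted input recurses n levels deep, so raise the recursion limit
sys.setrecursionlimit(10_000)

n = 1000
print(quicksort_comparisons(list(range(n))))              # sorted: n(n-1)/2 = 499500
print(quicksort_comparisons(random.sample(range(n), n)))  # shuffled: around n log2 n
```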
Sneftel (posted December 6, 2005): Quoting smitty1276: "Unless you are meaning to say that an algorithm that is guaranteed to run in n log n time is ALSO guaranteed to run in exponential time. This is of course true, but is somewhat meaningless." Not as meaningless as you'd think. Several algorithms were initially shown to have a particular upper bound on time or space complexity, and that upper bound was mathematically proven. Then someone else came along later and proved an even lower upper bound. This, of course, didn't disprove the earlier reasoning; both were correct. The fact that algorithms can have multiple upper bounds is essential to complexity theory.
MDI (posted December 6, 2005): Quoting smitty1276: "No, the upper bound AND the lower bound are the same, O(n log n). That's why it's an optimal algorithm." No: one upper bound, in this case O(n lg n), is the same. However, for any algorithm there exist infinitely many upper bounds. Also, what do you mean by saying that the lower bound is O(n lg n)? Do you mean Omega(n lg n) or something else?