Big-O is pretty meaningless for you *most of the time*, unless you are going to publish a scientific paper.

It tells you, in a rather abstract manner, how your algorithm's performance changes as n gets bigger. O(f(n)) tells you that as n grows, the number of "operations" (whatever that is) grows no faster than f(n), up to a constant factor. It's an **upper bound**. Note that it does not tell you anything about how expensive a single "operation" (whatever that is) actually is.

Opposed to that, big-Omega and big-Theta (mentioned by Alvaro) are lower and "tight" (upper and lower) bounds, respectively.

Still, even these are pretty meaningless for you *most of the time*. You will of course generally favor an algorithm with a better big-O (or big-Theta) if you have one at hand, and you will usually want to avoid O(n²) or O(n³) or worse, if that's easily possible.

Why? Because it is obvious that n² or n³ grows "pretty fast", and *obviously* fewer operations are faster than more operations. But is the obvious also true?

For small n and a low constant factor, you may find that n and n² and even n³ are *kind of the same*. Of course it's a big difference whether you have $10 or $1,000 in your pocket, but doing 10, 100, or 1,000 iterations for something that's done a few times per frame makes no real difference to a modern processor, if the iteration as such isn't too complicated.

On the other hand, an O(n) algorithm that does 10 iterations and produces 10 cache misses may indeed very well be slower than an O(n³) algorithm that does 1,000 iterations but accesses memory in a more optimal way (or doesn't need to access memory at all!). An algorithm that has to do one extra seek on the hard disk will almost certainly be slower than any other algorithm, even if there are two orders of magnitude of difference in big-O.

A good example of this is sorting. Quicksort is a relatively "poor" algorithm if you compare its average-case (and even more so its worst-case) big-O to some more modern competitors. However, in practice, it has been one of the top performers (or *the* general-purpose top performer) for five decades, and it still is.

Bubblesort is well known as one of the worst possible sorting algorithms. Nevertheless, it does have its valid uses, for example when you do coarse front-to-back sorting for occlusion culling. You can do a single pass of Bubblesort per frame. This will eventually, distributed over several frames, get most things mostly sorted (which is good enough for that case) at a 100% fixed, 100% predictable budget. You know exactly in advance how much data you'll be touching, and you know it will happen in a linear pattern, too. It's an "almost free" operation.

The "surprising" performance characteristics of std::vector, std::deque, and std::list that this young man shows on his website are another such example. How come std::list performs not just a little, but significantly (3-5 times!) worse than the other two, even where it is O(1) and the other two are O(n)?
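A sketch of the sorted-insertion scenario behind such benchmarks (my own illustration, not the site's actual code); `insert_sorted` is a hypothetical helper:

```cpp
#include <algorithm>
#include <list>
#include <vector>

// The list's insert is O(1) once the position is found; the vector's insert
// shifts everything behind it, O(n). Yet in practice the vector tends to win:
// the dominant cost is walking to the position, and the vector walks
// contiguous memory while the list chases scattered pointers, potentially
// missing the cache at every hop.
void insert_sorted(std::vector<int>& v, int value) {
    v.insert(std::lower_bound(v.begin(), v.end(), value), value);
}

void insert_sorted(std::list<int>& l, int value) {
    auto pos = l.begin();
    while (pos != l.end() && *pos < value) ++pos;  // O(n) pointer chase
    l.insert(pos, value);                          // O(1) link update
}
```

Both overloads produce identical sorted sequences; only the memory access pattern differs, and that is exactly the part big-O doesn't see.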

That's because big-O does not really tell you anything about the *time* it takes to do something, nor about such "petty" details as cache misses or allocations.