This is completely made-up and a very forced example, but imagine two different algorithms with loops like the ones below, where both call some inner function (DoAlgorithmInner1/DoAlgorithmInner2) that we assume costs the same in each case.
for( i=0; i < n; ++i ) for( j=0; j < 100; ++j ) DoAlgorithmInner1( i, j );
The first is O(n*100), which is O(n*constant), which is just O(n).
for( i=0; i < n; ++i ) for( j=0; j < n; ++j ) DoAlgorithmInner2( i, j );
The second is O(n^2).
Say that n is only 10 -- the first one calls the inner function 10*100 times, the second calls it 10*10 times.
Say that n is 10000 -- the first one calls the inner function 10000*100 times, the second calls it 10000*10000 times.
So for any case where n is less than 100, the n^2 algorithm actually does less work.
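To make that concrete, here's a minimal, self-contained sketch of the two loops. The inner functions here are hypothetical stubs that just count how many times they're called, so you can see the counts cross over around n = 100:

#include <cstdio>

static long g_calls1 = 0;
static long g_calls2 = 0;

// Hypothetical stand-ins for the real inner work; they just count calls.
void DoAlgorithmInner1( int i, int j ) { (void)i; (void)j; ++g_calls1; }
void DoAlgorithmInner2( int i, int j ) { (void)i; (void)j; ++g_calls2; }

void Algorithm1( int n )  // O(n*100) == O(n)
{
    for( int i = 0; i < n; ++i )
        for( int j = 0; j < 100; ++j )
            DoAlgorithmInner1( i, j );
}

void Algorithm2( int n )  // O(n^2)
{
    for( int i = 0; i < n; ++i )
        for( int j = 0; j < n; ++j )
            DoAlgorithmInner2( i, j );
}

int main()
{
    const int sizes[] = { 10, 99, 100, 1000 };
    for( int n : sizes )
    {
        g_calls1 = g_calls2 = 0;
        Algorithm1( n );
        Algorithm2( n );
        // Below n == 100 the "worse" O(n^2) version makes fewer calls.
        std::printf( "n=%4d  O(n) version: %7ld calls  O(n^2) version: %7ld calls\n",
                     n, g_calls1, g_calls2 );
    }
    return 0;
}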
For a real-world example -- try implementing a comparison sort, like quicksort, which is O(n log n), and also a radix sort, which is O(n).
The constants, which get dropped in the big-O analysis, are actually very high for the radix sort, which makes it behave like the example above.
e.g. n*1000 might be more expensive than n*n*10.
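If you want to measure that yourself, here's a rough sketch that times std::sort against a simple, un-tuned LSD radix sort (RadixSort below is just my own illustrative helper, not a production implementation, and the actual numbers will depend heavily on your compiler, hardware and data):

#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Basic LSD radix sort on 32-bit keys: four counting passes plus four copy
// passes (base 256). Those fixed passes are the "constant" that big-O hides.
void RadixSort( std::vector<uint32_t>& v )
{
    std::vector<uint32_t> tmp( v.size() );
    for( int shift = 0; shift < 32; shift += 8 )
    {
        size_t count[257] = { 0 };
        for( uint32_t x : v ) ++count[ ((x >> shift) & 0xFF) + 1 ];
        for( int i = 0; i < 256; ++i ) count[i + 1] += count[i];
        for( uint32_t x : v ) tmp[ count[ (x >> shift) & 0xFF ]++ ] = x;
        v.swap( tmp );
    }
}

int main()
{
    std::mt19937 rng( 12345 );
    const size_t sizes[] = { 1000, 100000, 10000000 };
    for( size_t n : sizes )
    {
        std::vector<uint32_t> a( n );
        for( uint32_t& x : a ) x = rng();
        std::vector<uint32_t> b = a;

        auto t0 = std::chrono::steady_clock::now();
        std::sort( a.begin(), a.end() );   // O(n log n), small constant
        auto t1 = std::chrono::steady_clock::now();
        RadixSort( b );                    // O(n), larger constant
        auto t2 = std::chrono::steady_clock::now();

        auto us = []( auto d ) {
            return (long long)std::chrono::duration_cast<std::chrono::microseconds>( d ).count();
        };
        std::printf( "n=%9zu  std::sort: %8lld us  radix: %8lld us\n",
                     n, us( t1 - t0 ), us( t2 - t1 ) );
    }
    return 0;
}

Typically the comparison sort wins at small n and the radix sort pulls ahead somewhere at larger n -- where exactly that crossover sits is precisely the constant-factor question that big-O deliberately ignores.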