The other posters are right. Unfortunately, they're not always so simple. I don't know if you've learned the technical definition of big-O yet, but you have to be careful in more complex situations. In your first example, c, d, and e are all correct answers. In this case, the number of times you perform the operation is exactly n. From the formal definition, f(n) is O(g(n)) if there is some constant k > 0 and some number of elements p (we actually used n naught, but I can't type that), such that f(n) <= k*g(n) for all n >= p. Here f(n) = n, and n <= k*n holds for all n >= 1 with k = 1 and p = 1, so it is O(n). However, n <= k*n^2 also holds for k = 1 and p = 1, so it is also O(n^2). This is where it gets more complicated, because any big-O bound also implies every larger big-O bound. For example, if you had:
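To make the definition concrete, here's a minimal sketch that counts the loop's operations and checks both bounds numerically. The `countOps` body is just a stand-in for whatever operation you're counting, and k = 2, p = 1 are one arbitrary choice of witnesses, not the only one:

```java
public class BigOCheck {
    // T(n) = n: the loop body runs exactly n times
    static long countOps(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            count++; // stand-in for the real operation
        }
        return count;
    }

    public static void main(String[] args) {
        int k = 2, p = 1; // one possible choice of witnesses
        boolean linearHolds = true, quadraticHolds = true;
        for (int n = p; n <= 1000; n++) {
            long t = countOps(n);
            linearHolds &= t <= (long) k * n;        // T(n) <= k*n   -> O(n)
            quadraticHolds &= t <= (long) k * n * n; // T(n) <= k*n^2 -> O(n^2)
        }
        System.out.println("O(n) bound holds: " + linearHolds);     // true
        System.out.println("O(n^2) bound holds: " + quadraticHolds); // true
    }
}
```

Both bounds check out, which is exactly the point: a correct big-O claim doesn't have to be the tightest one.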
for (int x = 0; x < n; x += 2) { doStuff(); }
This obviously executes about n/2 times. However, you can still find k and p for both O(n) and O(n^2). Generally, however, you want the tightest bound, which is what the other posters said. Basically, you're trying to find how many operations it would take if n were really big. n and n/2 + 1 may as well be the same when n = 1,000,000, at least in terms of running time. If you're sorting some small amount of data, and you have an algorithm that's O(n^2) and one that's O(n*log(n)), it won't really matter which you choose. However, when n is big, even though the O(n*log(n)) one may take longer to set up, it is much more efficient on a large data set, which is what big-O tries to convey. Hopefully a somewhat more formal definition coupled with the other descriptions will help.
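To see just how different those growth rates are at large n, here's a quick sketch. It compares raw operation counts only, under the simplifying assumption big-O itself makes: that every operation costs about the same.

```java
public class GrowthCompare {
    // ratio of n^2 "work" to n*log2(n) "work" for the same n
    static double ratio(long n) {
        double nLogN = n * (Math.log(n) / Math.log(2));
        return ((double) n * n) / nLogN;
    }

    public static void main(String[] args) {
        long n = 1_000_000;
        // at n = 1,000,000 the O(n^2) algorithm does roughly 50,000x more work
        System.out.printf("n = %d: n^2 / (n*log2(n)) ~ %.0f%n", n, ratio(n));
    }
}
```

So any constant setup cost on the O(n*log(n)) side gets swamped very quickly once n grows.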
tj963