DevFred

time complexity


DevFred    840
Once I read an interesting question. Given:
- a problem size n
- an algorithm with a certain time complexity O(f(n))
- a certain time a processor needs to solve the problem

Question: if you upgrade to a processor twice as fast, how much can the problem size grow so that you can still solve the new, larger problem in the same time?

For O(n), the answer is obviously 2n. If you could previously search for an element in an unordered array of size 1000, you can now search a 2000-element array in the same time.

For O(log n), the answer seems to be n^2: the faster processor buys twice the time budget, so log m = 2 log n, which gives m = n^2. A binary search tree twice as deep holds roughly the square of the original number of nodes.

For O(1), you can't really tell, because the problem size doesn't influence the running time. Answers that seem to make sense are infinity and 1.

For O(n^2), I'm pretty sure it's sqrt(2)*n ≈ 1.41n, because (1.41n)^2 ≈ 2n^2.

The one complexity class I cannot solve is O(n log n). It has to be something between 1.41n and 2n, and some simple numeric tests with OpenOffice Calc seem to indicate ~1.7n. But how exactly do I get to that factor? I tried setting (m log m)/(n log n) = 2 and solving for m, but I got stuck.
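The spreadsheet experiment can be redone in a few lines of Python. This is a sketch under the assumption that the running time is exactly proportional to n*log(n), with no constant overhead or lower-order terms:

```python
import math

def grown_size(n, speedup=2.0):
    """Solve m * ln(m) = speedup * n * ln(n) for m by bisection.

    Models how far an O(n log n) problem can grow when the processor
    gets `speedup` times faster, assuming T(n) = c * n * log(n) exactly.
    """
    target = speedup * n * math.log(n)
    lo, hi = float(n), speedup * n  # m must lie between n and speedup * n
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * math.log(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in (10, 10**3, 10**6, 10**9):
    print(n, round(grown_size(n) / n, 3))
```

The printed ratio m/n is the "factor" the spreadsheet was hinting at; note that it depends on n rather than being a single constant.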

Sneftel    1788
Quote:
Original post by DevFred
For O(n), the answer is obviously 2n. If you could previously search for an element in an unordered array of size 1000, you can now search a 2000-element array in the same time.

Nope. Consider the following algorithm for finding the maximum of n integers.

Step 1. Initialize "max" to negative infinity.
Step 2. Find pi to 1 billion places.
Step 3. For each integer, if that integer is greater than "max", set "max" to that integer.

This algorithm is O(n). It'll take a few hours to find the maximum of 5 elements. Double the processor speed, and you can find the maximum of well over 10 elements in that period of time.

Your other assumptions are incorrect for similar reasons. This may seem specious, but the important thing to remember is that big-O notation is not an exact metric of a function's complexity or performance. It's an asymptotic upper bound on its growth.
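The pi example above can be put in toy numbers. C below is a made-up stand-in for the cost of the pi step, measured in the same work units as one comparison of the scan:

```python
# Toy cost model of the algorithm above: T(n) = C + n work units,
# where C is the (huge, made-up) constant cost of the pi step and
# each element of the maximum scan costs one unit.
C = 10**9

def solvable_in(budget):
    """Largest n for which C + n <= budget."""
    return max(0, budget - C)

old_budget = C + 5             # time the old CPU needs for n = 5
new_budget = 2 * old_budget    # same wall-clock time on a CPU twice as fast
print(solvable_in(new_budget)) # C + 10: vastly more than 10 elements
```

The doubled budget mostly goes toward paying the constant cost a second time, which is why the answer is nowhere near 2n here.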

Zipster    2365
To expand on Sneftel's post, Big-O notation deals with asymptotic complexity -- asymptotic meaning that you let 'n' go to infinity, and complexity meaning that you deal with the nature of the algorithm itself, regardless of how fast you can actually execute it.

If you want to deal with the actual amount of time (or number of instructions) it takes to solve a problem, you should use notation like T(n).

DevFred    840
Quote:
Original post by Sneftel
Quote:
Original post by DevFred
For O(n), the answer is obviously 2n.

Nope. Consider the following algorithm for finding the maximum of n integers.

Step 1. Initialize "max" to negative infinity.
Step 2. Find pi to 1 billion places.
Step 3. For each integer, if that integer is greater than "max", set "max" to that integer.

Finding pi's first billion places is a constant time operation that at some point won't matter anymore as n grows bigger and bigger.

Further numerical toying around suggests that for O(n log n), there is no constant factor to describe the growth in problem size. The "factor" starts at 1.6 and approaches 2.0 slowly. It must be some formula other than x*n.
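That observation has a closed form. Solving m * ln(m) = 2 * n * ln(n) via the Lambert W function gives m = x / W(x) with x = 2 * n * ln(n); since W(x) grows like ln(x), the ratio m/n approaches 2 only in the limit. A sketch with a hand-rolled Newton iteration for W (scipy.special.lambertw would serve equally well):

```python
import math

def lambert_w(x, iters=60):
    """Principal branch of Lambert W for x > 0: solves w * e^w = x."""
    w = math.log(x + 1.0)  # reasonable starting guess for x > 0
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))  # Newton step on w*e^w - x
    return w

def grown_size(n, speedup=2.0):
    """m with m * ln(m) = speedup * n * ln(n), i.e. m = x / W(x)."""
    x = speedup * n * math.log(n)
    return x / lambert_w(x)

for n in (10, 10**3, 10**6, 10**9):
    print(n, round(grown_size(n) / n, 3))
```

So the growth in problem size is not x*n for any constant x; it is n -> 2*n*ln(n) / W(2*n*ln(n)), whose ratio to n creeps toward 2 without reaching it.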

Sneftel    1788
Quote:
Original post by DevFred
Finding pi's first billion places is a constant time operation that at some point won't matter anymore as n grows bigger and bigger.

Exactly. So the right question to ask is, "how many more elements will a CPU which is seventy bazillion and one times as fast as my current CPU be able to process than a CPU which is a bazillion times as fast as my current CPU?" As for the O(n*log n) case, the relationship is roughly reciprocal. That's another good example of why big-O notation isn't capable of providing an answer to your question.

Driv3MeFar    1080
Quote:
Original post by DevFred
Finding pi's first billion places is a constant time operation that at some point won't matter anymore as n grows bigger and bigger.


If O(n) times only become meaningful as n grows to infinity, how can you meaningfully theorize about increasing the size of n?

DevFred    840
Quote:
Original post by Sneftel
Quote:
Original post by DevFred
Finding pi's first billion places is a constant time operation that at some point won't matter anymore as n grows bigger and bigger.

Exactly. So the right question to ask is, "how many more elements will a CPU which is seventy bazillion and one times as fast as my current CPU be able to process than a CPU which is a bazillion times as fast as my current CPU?"

What? I'm not following you. I think you are confusing a constant with a constant factor.

Quote:
Original post by Driv3MeFar
Quote:
Original post by DevFred
Finding pi's first billion places is a constant time operation that at some point won't matter anymore as n grows bigger and bigger.

If O(n) times only become meaningful as n grows to infinity

I didn't say that. I said "at some point".

f(n) ∈ O(g(n)) means that for some constants c and n_0, f(n) <= c*g(n) for every n >= n_0.

Once you reach that n_0, f doesn't grow faster than g anymore. So you can certainly argue about how 2n behaves relative to n, because 2n >= n_0 if n >= n_0.

I mean, that's what complexity theory is all about, right? How well an algorithm scales to larger problems.
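The definition above can be checked numerically for a concrete pair. The witness constants below (c = 3 and n_0 = 1000 for f(n) = 2n + 1000 and g(n) = n) are one valid choice, not the only one:

```python
def bounded_above(f, g, c, n0, upto=10**5):
    """Check f(n) <= c * g(n) for every n in [n0, upto)."""
    return all(f(n) <= c * g(n) for n in range(n0, upto))

f = lambda n: 2 * n + 1000   # f is in O(n) despite the +1000 constant
g = lambda n: n

print(bounded_above(f, g, c=3, n0=1000))  # True: the bound holds from n_0 on
print(bounded_above(f, g, c=3, n0=999))   # False: 2*999 + 1000 > 3*999
```

Below n_0 the bound may simply fail, which is exactly why the definition only promises anything for n >= n_0.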

Sneftel    1788
Quote:
Original post by DevFred
Quote:
Original post by Sneftel
Quote:
Original post by DevFred
Finding pi's first billion places is a constant time operation that at some point won't matter anymore as n grows bigger and bigger.

Exactly. So the right question to ask is, "how many more elements will a CPU which is seventy bazillion and one times as fast as my current CPU be able to process than a CPU which is a bazillion times as fast as my current CPU?"

What? I'm not following you. I think you are confusing a constant with a constant factor.
Heh... what? No. Do you understand the mathematical meaning of big-O notation? That is, a definition of it that does not mention algorithmic complexity?

DevFred    840
Quote:
Original post by Sneftel
Do you understand the mathematical meaning of big-O notation?

As already stated, my understanding of big-O is this:
Quote:
Original post by DevFred
f(n) ∈ O(g(n)) means that for some constants c and n_0, f(n) <= c*g(n) for every n >= n_0.

Once you reach that n_0, f doesn't grow faster than g anymore.

Sneftel    1788
Right. And your "solutions" place very strong assumptions on the value of n_0.

BTW, your interpretation of that equation is just a bit off. You say "f doesn't grow faster than g anymore" when n > n_0... but there's no guarantee that f grows faster than g at values below n_0.

