
# Big O Notation


## Recommended Posts

I am having some difficulty fully grasping big O notation. I have been reading a book that goes through a few examples. I have no problem with the concept that f(n) = O(g(n)) if and only if the limit as n → ∞ of f(n)/g(n) = c, with the exception of one detail: what exactly is g(n)? Here are some examples I have come across.

Constant Time - O(1) Algorithms

The time to write a value to an array memory location:
- It takes a constant time, say c1.
- time(n) is O(1) since the limit as n → ∞ of c1 / 1 = c1.

data[0] = 1;

The time to add two int array elements:
- Reading each int from memory takes a constant time, say c2.
- Adding the two ints takes a constant time, say c3.
- time(n) is O(1) since the limit as n → ∞ of (c2 + c3) / 1 = c2 + c3.

data[i] + data[j]

Now in both these situations g(n) appears to be equal to 1. In both these examples they say that time(n) = O(1), the 1 being g(n). So where do they get g(n) = 1? Does anyone have any good resources on this that I could look up? Does anyone have a straightforward explanation of Big O notation?

##### Share on other sites
My apologies for the messy post. This is due to cut and paste.

The ..C is a bullet, and the funny symbols under "limit" represent n → ∞.

##### Share on other sites
g(n) is the measure you compare f(n) against.

When you say an object is 3'' long, you are comparing it to a 3'' measure.

Same thing here: you are comparing, say, the time complexity of your function to a set of reference functions, usually polynomials, logs, and the like.

The whole trick is to determine the most accurate g(n) possible.

Hope I was clear.

##### Share on other sites
So then in the above examples, g(n) is actually referring to the number of times the operation occurs. I.e., when writing to an array in the example, they have a single line data[0] = 1;

Now if this line was in a loop, say:

for(int i = 0; i < n; i++) {
    data[i] = 1;
}

then g(n) would actually be equal to n, and therefore we would have f(n) = O(n).

Is that a valid assumption?

##### Share on other sites
DRAT, my for loop got all buggered up! SWINE forum.

For my example I had meant for(int i = 0; i < n; i++)

What escape keys do I have to use to get the semicolons? \ or /
perhaps?
/;/;/;/;
\;\;\;\;


##### Share on other sites
I'm only kinda learning Big O stuff now (cuz class), but I'll try to explain what I think g(n) is... If I'm wrong, I hope someone corrects me.

I believe that Big O is an upper bound on the function f(n).
Big O and the other 2 notations describe the growth rate of f(n).
Big O, for example, in algorithms can be thought of as the "worst-case" running time of an algorithm. I think.

So a Big O(1) algorithm will always execute within a constant time.

There are little details hidden by the Big O notation. For example, two algorithms with the same growth rate, Big O(n lg n), say quicksort and mergesort, won't have the same actual running time. That's because although the growth rate is the same, mergesort has a big constant c, while quicksort has a small constant c.

I really hope that helps... hope that's correct

zel0

##### Share on other sites
Big O is a measure of running time, expressed in terms of the number of times an operation occurs. Constant-time operations are denoted by O(1), while other running times are proportional to how many times a generic O(1) statement executes.

int i = 45679878;                    // O(1)

for(int idx = 0; idx < n; ++idx)     // O(n): single loop
    do_something();

for(int idx = 0; idx < n; ++idx)     // O(n^2): nested loop
    for(int idy = 0; idy < n; ++idy)
        do_something();

You get non-powers of n when you employ recursive/divide-and-conquer algorithms, particularly those that divide the problem in half each time. You then obtain results like O(n log n) and O(n^2 log n).

To answer your question about f(n) and g(n): f(n) is the original function (or algorithm) in question, and g(n) is the approximation given by Big-O, which discards lower-order terms. For example, if f(n) = n^2 + n, then f(n) = O(n^2). Why? Because n^2 grows much faster than n, meaning that its value quickly makes the lower-order terms unimportant.


##### Share on other sites
Big-O notation is how you categorize the growth rate of your functions... more or less.

f(n) = O(g(n)) if and only if there exist two positive constants c and n0 ("n-naught") such that
***|f(n)| <= c|g(n)| for all n >= n0*** <- this is what you have to understand

"f(n) is a big-O of g(n)"

Basically, as n increases, f(n) grows no faster than g(n). In other words, g(n) is an asymptotic upper bound on f(n).

Urm... this means that for some function f(n) with input size "n", there exists some constant "c" such that f(n) *cannot* cross c*g(n) past the point n0.

Imagine two lines, one is f(n) and the other is c*g(n). There comes a point (the point of some number n0) after which f(n) can no longer cross it... an upper bound.

In order for f(n) to be 'categorized' into that g(n) (be it linear n, or n log n, or log n, or n^128), these two positive real numbers must exist.

Don't worry... this stuff is tough to grasp at first... wait till you do Theta and Omega notation... it should all come together.

Edited by - x-bishop on February 1, 2002 8:07:41 PM
