Big O


Some Guy    100
May I ask what O(n) is and what it means? What is the Big O function? I'm really not sure how to ask this or which forum I should have put it in, but please help.

Waverider    169
It's a way of representing how an algorithm's performance is affected by the number of elements it must operate on.

O(n) means the running time is a direct multiple of the number of elements n.

O(n^2) means the time grows with the square of the element count, so it gets much longer much faster (7 elements = 49, 8 elements = 64).

There are other varieties, too: O(n log n), etc.

This notation is often used to describe the performance of sorting algorithms, but it applies to algorithms in general.

There isn't really an O function per se. Honestly, I don't know how they calculate what the parameter is for a given algorithm.
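To make that concrete, here is a rough C++ sketch of my own (the function names are just made up for illustration): the first routine touches each element once, so it is O(n); the second compares every pair of elements, so it is O(n^2).

#include <cstddef>
#include <vector>

// O(n): one pass over the data - the work grows in direct proportion to n.
long long sumAll(const std::vector<int>& v)
{
    long long total = 0;
    for (int x : v)          // n iterations
        total += x;
    return total;
}

// O(n^2): nested loops over the data - the work grows with the square of n.
int countEqualPairs(const std::vector<int>& v)
{
    int pairs = 0;
    for (std::size_t i = 0; i < v.size(); ++i)           // n iterations
        for (std::size_t j = i + 1; j < v.size(); ++j)   // up to n iterations each
            if (v[i] == v[j])
                ++pairs;
    return pairs;
}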

Guest Anonymous Poster
(I might be a bit rusty; I haven't really used big O notation in a while.)
In simple terms, it's a way of expressing how fast an algorithm is without referring to actual CPU speed, while taking into account how well it performs on different-sized datasets ("n" is an abstraction of the dataset/input size).

Let's look at some sorting algorithms and see how "fast" they are.

SortA

Numbers (N) Time taken
1000 1
2000 2
3000 3
...
1000000 1000


SortB

Numbers (N) Time taken
1000 1
2000 4
3000 9
...
1000000 1000000


As you can see, SortA is linear, while SortB is quadratic. SortA could be expressed simply as O(n), while SortB would be O(n^2). Obviously it gets more complex, but the main objective is to measure how well the algorithm scales with the data size (for instance, an algorithm taking 10n steps is slower than one taking n^2 steps on small datasets, where n < 10, but faster on large ones). It also lets you compare algorithms without regard to actual implementation. This way you avoid being misled by one algorithm having a better implementation than another (if one is optimized in assembler and the other is written in a scripting language, the measured times would not accurately reflect the algorithms' relative performance except on very large datasets).
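If you wanted to produce tables like those yourself, a rough C++ sketch might look like the following (mySort is just a stand-in name for whatever algorithm you want to measure; here it simply calls std::sort):

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

// Stand-in for the algorithm under test - swap in your own sort to measure it.
void mySort(std::vector<int>& v) { std::sort(v.begin(), v.end()); }

int main()
{
    std::mt19937 rng(42);
    for (int n = 1000; n <= 1000000; n *= 10) {
        std::vector<int> data(n);
        for (int& x : data) x = static_cast<int>(rng());   // fill with pseudo-random values

        auto start = std::chrono::steady_clock::now();
        mySort(data);
        auto stop = std::chrono::steady_clock::now();

        double ms = std::chrono::duration<double, std::milli>(stop - start).count();
        std::printf("N = %7d  time = %.3f ms\n", n, ms);
    }
}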

wizard341    144
Big O notation is a simplified expression of how a program's running time grows, no matter how big the data set may be.

Take for example this expression: 2 + 3 + 4.

It will be evaluated in constant time, because there are no variables present that could change how many values there are. There will always be 3 values, so we call this O(1), meaning it will always take a constant number of steps to compute the sum.

Now take this expression: 2 + 3 + N!, where N is a variable.

N can hold any value, from 0 to infinity, and computing N! naively takes about N multiplications. So as N gets really, really large, those extra 2 steps to add 2 and 3 don't make any difference (in computing time), and we simplify this to say that it takes O(n) steps to evaluate the whole thing.

Now take this one: N^2 + N! + N! + 3 + 2 + 1.

It may make sense at first to call this O(N^2 + N + N), because N can get really big. But if we remember what happened in the last example, the biggest term will eventually overrule everything.
Take N to be 10,000, not a very large number in computing terms. Then N^2 = 100 million steps. We could add on the other steps, getting roughly 100 million plus 20 thousand-odd steps, but in proportion the extra 20 thousand doesn't really matter.

So in general, Big O notation is not accurate to a T, per se, but it is a good indication of how your program might behave when things get very large.
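To see the "biggest term wins" idea with actual numbers, here is a throwaway sketch of my own that counts the steps for an N^2 part, two roughly-N parts, and a few constant steps, then prints how little the non-squared work contributes as N grows:

#include <cstdio>

int main()
{
    for (long long n = 10; n <= 100000; n *= 10) {
        long long squaredSteps  = n * n;   // the N^2 term
        long long linearSteps   = 2 * n;   // the two N-ish terms (N! computed naively, etc.)
        long long constantSteps = 3;       // the 3 + 2 + 1 part
        long long total = squaredSteps + linearSteps + constantSteps;

        std::printf("N = %6lld  total = %12lld steps, non-squared share = %.4f%%\n",
                    n, total, 100.0 * (linearSteps + constantSteps) / total);
    }
}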


mstein    122
Big-O... O(n) is an upper bound on the time that an algorithm (or, I guess, anything you are modeling, for that matter) could take. In this case, O(n) would mean that whatever you are modeling takes at most linear time. This is useful when comparing algorithms: an algorithm that runs in O(n*n) will take much longer than one running in O(n) or O(n log n). I'm sure someone else will elaborate more, but this is the gist. Also:

O(3n) == O(n+5) == O(3n + 4n) . . . they all boil down to linear time.

O(log n) < O(n) < O(n log n) < O(n*n) . . .
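A quick way to see that ordering is just to print each of those functions for growing n (a throwaway sketch, not tied to any particular algorithm):

#include <cmath>
#include <cstdio>

int main()
{
    std::printf("%10s %10s %12s %14s\n", "n", "log n", "n log n", "n*n");
    for (double n = 10; n <= 1000000; n *= 10) {
        std::printf("%10.0f %10.1f %12.0f %14.0f\n",
                    n, std::log2(n), n * std::log2(n), n * n);
    }
}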

hope that sheds some light

Zipster    2365
And in terms of calculating the O function for a given algorithm... I'd say you step through it several times until you see a pattern, and then write a formula for it.
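One practical way to do that is to instrument the algorithm with a counter and watch how the count grows. For example, here is a rough sketch of my own using a simple selection sort with a comparison counter; the counts come out to n*(n-1)/2, i.e. roughly n^2/2, which is the pattern that tells you it is O(n^2):

#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

// Selection sort, instrumented to count element comparisons.
long long selectionSortCountingCompares(std::vector<int> v)
{
    long long compares = 0;
    for (std::size_t i = 0; i + 1 < v.size(); ++i) {
        std::size_t smallest = i;
        for (std::size_t j = i + 1; j < v.size(); ++j) {
            ++compares;
            if (v[j] < v[smallest]) smallest = j;
        }
        std::swap(v[i], v[smallest]);
    }
    return compares;
}

int main()
{
    for (int n = 100; n <= 10000; n *= 10) {
        std::vector<int> data(n, 0);
        std::printf("n = %5d  compares = %lld\n", n, selectionSortCountingCompares(data));
    }
}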

63616C68h    122
O(c) == "y = constant". An algorithm denoted O(c) takes the same time to complete no matter what the size of the data set is. You get this kind of performance when, for instance, accessing cells in an array.
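For example (a trivial sketch of my own), indexing into an array takes the same handful of operations whether it holds ten elements or ten million:

#include <cstddef>
#include <vector>

// O(1): one address calculation and one load, regardless of how big v is.
int getElement(const std::vector<int>& v, std::size_t i)
{
    return v[i];
}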




Sneftel    1788
It's not fuzzy logic; this stuff is mathematically provable. But most people just settle for experimental values.



Karg    133
Big O (uppercase omicron, but O is close enough) is the asymptotic upper bound on a function. The formal definition is:
f(n) = BigO(g(n)) if,
for some constants c and k, 0 <= f(n) <= cg(n) for all n >= k.
Basically, once you get n big enough (past k), cg(n) will always be greater than or equal to f(n).
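A quick worked example of that definition (numbers picked just for illustration): take f(n) = 3n + 5 and g(n) = n, and pick c = 4, k = 5. For every n >= 5 we have 3n + 5 <= 4n (since 4n - (3n + 5) = n - 5 >= 0), so 0 <= f(n) <= cg(n) holds once n is past k, and therefore 3n + 5 = BigO(n).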

For computer algorithms, what is measured is usually speed or space. Space can easily be measured in bytes or bits:
an algorithm with a space requirement of O(n^2) bytes.
Speed needs to be quantified somehow; it can be by arithmetic operations, element swaps or compares, or some other measure:
an algorithm with O(n * log(n)) element compares.

A little quirk: if f(n) = O(n^c), then also f(n) = O(n^k) for any k > c, because the bigger functions n^k will eventually be greater than n^c. Therefore any function that is O(n^3) will also be O(n^4), O(n^3.1), O(n^234), etc.

Some of BigO's relatives are:
LittleO, written o(n) - the same idea as BigO, but with a strict inequality (f(n) < cg(n)) holding for every constant c, instead of for some c.
Omega - the mirror image: cg(n) <= f(n). It is the asymptotic lower bound of a function.
Theta - a function that is both an upper and a lower bound; the intersection of all O(g(n)) and Omega(g(n)) for a given f(n). In the same form as above:
f(n) = Theta(g(n)) if,
for some constants c1, c2, and k,
0 <= c1g(n) <= f(n) <= c2g(n) for all n >= k.

And there's more than you ever wanted to know about BigO!

Karg

