
What's the "Big O" of this algo?

 

Need a little help from a math junkie out there! <g>

 

If I have a sorting algo where the number of iterations for n elements is the sum of all whole numbers 1 through n, what's the Big O of that algo?

 

Muchas gracias in advance!

Edited by Norman Barrows


So do you mean this?

 

https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF

 

I.e., for 5 elements:

NumIterations(5): 1 + 2 + 3 + 4 + 5 = 15

If so, the runtime is O(n^2). Looking at the formula given on the page, it's n * (n+1) / 2 = n^2/2 + n/2, but since we do not care about constant factors or lower-order terms in big-O notation, n^2 is what's left over.
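If you want to sanity-check that, here is a minimal C++ sketch (my own illustration, not the original poster's code) of a nested loop whose i-th pass does i units of work, so the total iteration count is exactly 1 + 2 + ... + n:

#include <cstdio>

// Nested loop whose i-th pass does i units of work, so the total number of
// iterations for n elements is 1 + 2 + ... + n = n * (n + 1) / 2.
long long countIterations(int n)
{
    long long iterations = 0;
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= i; ++j)
            ++iterations;
    return iterations;
}

int main()
{
    for (int n : { 5, 10, 20, 40 })
        std::printf("NumIterations(%d) = %lld, closed form = %d\n",
                    n, countIterations(n), n * (n + 1) / 2);
    // Prints 15, 55, 210, 820 -- matching the values used below.
}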

 

We can also deduce this by looking at how the number of iterations grows when we double the number of elements:

 

NumIterations(10): 55

NumIterations(20): 210

NumIterations(40): 820

 

And now let's look at how they scale in comparison:

 

55 / 15 = 3.66666

210 / 55 = 3.818181

820 / 210 = 3.904762

 

So whenever we double n, the number of iterations roughly quadruples, which means O(n^2).
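The same doubling argument in code, using the closed form n * (n + 1) / 2 (again just a sketch for illustration):

#include <cstdio>

// Doubling argument: each time n doubles, the iteration count grows by a
// factor that climbs toward 4, which is the signature of O(n^2) growth.
int main()
{
    auto tri = [](long long n) { return n * (n + 1) / 2; };
    for (long long n = 10; n <= 5120; n *= 2)
        std::printf("n = %5lld  iterations = %9lld  ratio vs. half the size = %.6f\n",
                    n, tri(n), double(tri(n)) / double(tri(n / 2)));
    // Prints ratios 3.666667, 3.818182, 3.904762, ... approaching 4.0.
}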

Edited by Juliean


Or, to put it another way: "Not Good"

That depends on your problem and the constants in your complexity. For small n, algorithms with smaller constants may be faster even though they are n^2 asymptotically. You can see this by comparing n*log(n) + 300*n with n^2; even though the latter is worse asymptotically, it performs better for small n.
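As a rough sketch of that trade-off (the constant 300 is just the one from the example above, nothing universal), you can compute where n^2 stops being the cheaper of the two cost functions:

#include <cmath>
#include <cstdio>

// Compare f(n) = n^2 with g(n) = n * log2(n) + 300 * n and find the first n
// where the asymptotically worse f(n) is no longer the cheaper one.
int main()
{
    for (long long n = 2; n <= 100000; ++n)
    {
        double quadratic    = double(n) * double(n);
        double linearithmic = double(n) * std::log2(double(n)) + 300.0 * double(n);
        if (quadratic > linearithmic)
        {
            // With these particular constants the crossover lands around n = 309;
            // below that, the O(n^2) cost function is actually the smaller one.
            std::printf("n^2 overtakes n*log2(n) + 300*n at n = %lld\n", n);
            break;
        }
    }
}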


>> Or, to put it another way: "Not Good"

 
And offhand, what's a good big O for a search algo? I'm being lazy here - could google it myself. But what, for example, might be a good big O for small N, and what might be good when N is large?

I'm wondering where the tradeoff point is between an O(n^2) algo and a good big O algo for large N. I.e., up to how many N the O(n^2) algo is faster (theoretically).

I definitely appreciate all the help with this. I can do it. I just don't really get into it for some reason. More a means to an end for me, I guess.


For search or for sorting?

 

For search, a binary search on sorted data is O(log N). And with a hashing system, if appropriate, lookups are constant time.
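A minimal C++ sketch of both options, with made-up data, in case it helps:

#include <algorithm>
#include <cstdio>
#include <unordered_set>
#include <vector>

// Binary search on a sorted range (O(log N)) and a hash-based lookup
// (expected O(1)). The values here are just placeholders.
int main()
{
    std::vector<int> sorted = { 2, 3, 5, 7, 11, 13, 17, 19 };

    // O(log N): std::binary_search requires the range to already be sorted.
    bool foundBinary = std::binary_search(sorted.begin(), sorted.end(), 11);

    // Expected O(1) per lookup: build a hash set once, pay some extra memory.
    std::unordered_set<int> hashed(sorted.begin(), sorted.end());
    bool foundHashed = hashed.count(11) != 0;

    std::printf("binary search found: %d, hash lookup found: %d\n",
                (int)foundBinary, (int)foundHashed);
}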

 

For sorting, most algorithms are O(N log N) (that's theoretically the fastest possible for comparison-based sorting, unless you're able to use something like bucket sort, which is O(n)), but they vary in terms of best/worst case performance, performance on mostly-sorted data, and whether they are a stable sort or not.

>> what's a good big O for a search algo? I'm being lazy here - could google it myself. But what, for example, might be a good big O for small N, and what might be good when N is large?

The "big O" for most algorithms is a worst case of O(n^2) and a best case around O(n log n). They have varying lesser terms, and their performance depends on details about the data they are working on.

How large is "large"? Thousands? Millions? Billions? Does it all fit in memory? Does it all fit in cache?

How is the data arranged? Do you need to maintain any existing ordering with a stable sort? If it is already nearly sorted, there are faster algorithms for that; if you know exactly which items are not sorted, you may be best off with something like an insertion sort. There are many different sorting routines for varying situations: algorithms like heap sort, insertion sort, stack sort, radix sorts, quicksort, and even bubble sort all have scenarios where they are good choices.

For anything under a few thousand items that fits in your cache, it really doesn't matter what you pick. Even good old bubble sort can handle it in a trice thanks to cache effects.

Quicksort is usually amazing on near-random data, often tending to O(n log n), but with bad pivot selection it can be terrible and degrade to O(n^2). Its performance also tends to plummet when the data is far larger than the CPU cache, as there are other algorithms that are more cache-friendly for very large data sets. And if your data is so large that it doesn't fit in memory, or is otherwise kept on external media or even spread across multiple media, there are algorithms specializing in that, too.
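To see the bad-pivot case concretely, here is a sketch of a deliberately naive quicksort that always picks the first element as its pivot (this is my own illustration, not how any real library implements it). On already-sorted input its comparison count approaches n^2/2, while on shuffled input it stays near n log n:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

// Naive quicksort with a fixed first-element pivot and a comparison counter,
// purely to illustrate the worst-case behaviour described above.
static long long g_comparisons = 0;

void naiveQuicksort(std::vector<int>& a, int lo, int hi)
{
    if (lo >= hi)
        return;
    const int pivot = a[lo];
    int i = lo;
    for (int j = lo + 1; j <= hi; ++j)
    {
        ++g_comparisons;
        if (a[j] < pivot)
            std::swap(a[++i], a[j]);
    }
    std::swap(a[lo], a[i]);               // pivot into its final position
    naiveQuicksort(a, lo, i - 1);
    naiveQuicksort(a, i + 1, hi);
}

int main()
{
    const int n = 2000;

    // Worst case for this pivot choice: input that is already sorted.
    std::vector<int> data(n);
    std::iota(data.begin(), data.end(), 0);
    g_comparisons = 0;
    naiveQuicksort(data, 0, n - 1);
    std::printf("sorted input:   %lld comparisons (n^2/2 = %d)\n",
                g_comparisons, n * n / 2);

    // Near-random input: comparisons stay on the order of n log n.
    std::iota(data.begin(), data.end(), 0);
    std::shuffle(data.begin(), data.end(), std::mt19937{ 12345 });
    g_comparisons = 0;
    naiveQuicksort(data, 0, n - 1);
    std::printf("shuffled input: %lld comparisons (n*log2(n) = %d)\n",
                g_comparisons, (int)(n * std::log2((double)n)));
}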

In general, your standard library's default sort routine is smart. Most good libraries have a sort routine that defaults to quicksort, can detect when it is performing poorly or when the input is a known problematic size, and switches over to a better algorithm automatically.
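In practice that usually just means calling the library sort and moving on; a minimal example:

#include <algorithm>
#include <cstdio>
#include <vector>

// Minimal sketch of "just use the library sort". std::sort is typically
// O(N log N); std::stable_sort additionally preserves the relative order of
// equal elements if you need that. The data here is made up for illustration.
int main()
{
    std::vector<int> values = { 42, 7, 19, 3, 88, 1, 56 };

    std::sort(values.begin(), values.end());   // ascending by default

    for (int v : values)
        std::printf("%d ", v);
    std::printf("\n");                          // prints: 1 3 7 19 42 56 88
}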


For reference, the sort algorithm that the standard library implements as std::sort() is usually https://en.wikipedia.org/wiki/Introsort which has O(n log n) average and worst-case performance, but it could also be something else.

 

https://en.wikipedia.org/wiki/Sorting_algorithm has a big table comparing performance and memory use of a variety of sort algorithms.

Edited by Adam_42
