C++ STL Algorithms



#1 Darkbouncer4689   Members   -  Reputation: 110


Posted 11 December 2011 - 10:38 PM

Hey all,

I've been brushing up on my algorithms and data structures lately and today I was messing around with everything in the STL. I had always known about qsort, but it seems like there are some easier functions available if you happen to already be using an STL container such as vector.

Namely something like:
#include <algorithm>
#include <vector>

std::vector<int> myVector;
// ... fill myVector ...
std::sort(myVector.begin(), myVector.end());

I'm guessing that sort is implemented as a quicksort, but there is one part that I'm not sure about. The template for sort is

template <class RandomAccessIterator>
void sort(RandomAccessIterator first, RandomAccessIterator last);

I know that for the basic quicksort (taking the first element as the pivot), the worst case is a sorted or reverse-sorted list, on which it runs in O(n^2). This can be avoided by choosing a random pivot, which makes its expected run time O(n log n). I'm guessing this is what RandomAccessIterator is about, but I'm not familiar with them. If I simply use vector.begin() and vector.end(), are they converted to a RandomAccessIterator, or do I need to do something special to ensure an O(n log n) run time?

Thanks in advance!


#2 Chris_F   Members   -  Reputation: 2437


Posted 11 December 2011 - 10:52 PM

I'm guessing this is what RandomAccessIterator is about


No, it's not. RandomAccessIterator is simply a template parameter; std::sort expects you to pass it iterators that support random access. This means, for instance, that std::sort cannot be used on a std::list.

http://www.cplusplus...algorithm/sort/

Approximately N*logN comparisons on average (where N is last-first).
In the worst case, up to N^2, depending on specific sorting algorithm used by library implementation.


As you can see, the details are left to the implementation, so there is no single right answer. If you are really concerned about the details, you can check your STL headers for yourself.
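
For instance, a minimal sketch of what this means in practice (the container contents are arbitrary):

#include <algorithm>
#include <list>
#include <vector>

int main()
{
    std::vector<int> v;
    std::list<int> l;
    // ... fill both containers ...
    std::sort(v.begin(), v.end()); // OK: vector iterators are random access
    //std::sort(l.begin(), l.end()); // won't compile: list iterators are only bidirectional
    l.sort(); // std::list provides its own member sort for exactly this reason
}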

#3 Álvaro   Crossbones+   -  Reputation: 13658


Posted 11 December 2011 - 11:10 PM

Some modern implementations use introsort, which is as fast as quicksort in practice and is O(n*log(n)) even in the worst case.
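
As a rough sketch of the idea (illustration only, not any particular library's implementation): run quicksort, but track the recursion depth, and once it exceeds about 2*log2(n) -- the sign of a degenerate case -- fall back to heapsort for that sub-range:

#include <algorithm>
#include <iterator>

// Sketch of introsort's control flow only. Real implementations also finish
// small ranges with insertion sort and pick pivots via median-of-three.
template <typename RandomAccessIterator>
void introsort_impl(RandomAccessIterator first, RandomAccessIterator last,
                    int depth_limit)
{
    typedef typename std::iterator_traits<RandomAccessIterator>::value_type T;
    if (last - first <= 1)
        return;
    if (depth_limit == 0) {
        // Quicksort is degenerating, so finish with heapsort,
        // which is O(n log n) even in the worst case.
        std::make_heap(first, last);
        std::sort_heap(first, last);
        return;
    }
    T pivot = *(first + (last - first) / 2); // simplistic pivot choice
    RandomAccessIterator split = std::partition(
        first, last, [&pivot](const T& x) { return x < pivot; });
    introsort_impl(first, split, depth_limit - 1);
    introsort_impl(split, last, depth_limit - 1);
}

template <typename RandomAccessIterator>
void introsort(RandomAccessIterator first, RandomAccessIterator last)
{
    int depth = 0;
    for (auto n = last - first; n > 1; n /= 2)
        ++depth;
    introsort_impl(first, last, 2 * depth); // ~2*log2(n) is the usual cutoff
}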

#4 Darkbouncer4689   Members   -  Reputation: 110


Posted 11 December 2011 - 11:18 PM

No, it's not. RandomAccessIterator is simply a template parameter; std::sort expects you to pass it iterators that support random access. This means, for instance, that std::sort cannot be used on a std::list.


I'm a bit confused here -- what exactly is an iterator that supports random access?

Thanks!

#5 Chris_F   Members   -  Reputation: 2437


Posted 11 December 2011 - 11:51 PM

Well, a forward iterator is one that can be incremented, meaning you can always iterate forward and get the next object, but you can't iterate backwards and get the previous one. A bidirectional iterator is one that can iterate in both directions, meaning you can get the next object or the previous. A random access iterator means you can get any object: the next, the previous, one 5 places ahead or one 5 places behind...
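
To make the categories concrete, a minimal sketch (the containers and sizes are arbitrary):

#include <list>
#include <vector>

void demo()
{
    std::vector<int> v(10);
    std::list<int> l(10);

    std::vector<int>::iterator vi = v.begin();
    std::list<int>::iterator li = l.begin();

    vi += 5;    // OK: vector iterators are random access, a constant-time jump
    ++li;       // OK: list iterators are bidirectional...
    --li;       // ...so they can also step backwards
    //li += 5;  // won't compile: bidirectional iterators cannot jump
    // (std::advance(li, 5) works, but walks node by node in linear time)
}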

#6 Hodgman   Moderators   -  Reputation: 31046


Posted 12 December 2011 - 12:01 AM

An iterator is just a "concept" -- it's kind of like an "interface" or an "abstract base class", but it's invisible. It's a contract that you make, saying "this class satisfies these requirements".

e.g. to satisfy the RandomAccessIterator contract, a type needs to be able to (among other things):
* be advanced by an integer: e.g. i = i + 7 -- should move the iterator forwards 7 places
* be incremented/decremented with the ++/-- operators
* support the square bracket operator, e.g. foo = i[7]
* support dereferencing, e.g. foo = *i;

N.B. standard raw pointers satisfy all of these requirements, which means that a raw pointer is a RandomAccessIterator, and thus can be passed to any template-algorithm that requires one:
const char* text = "Hello!";
const char* begin = text;
const char* end = text+6;
std::sort( begin, end );
printf(text);

Basically, if an algorithm says that it operates on RandomAccessIterators, it actually means that it will operate on any type that you pass to it, but it expects this type to satisfy the above list of requirements.

#7 Darkbouncer4689   Members   -  Reputation: 110


Posted 12 December 2011 - 12:02 AM

Okay, this is making sense now. I appreciate the help. I'm having trouble finding any examples that use RandomAccessIterator; for regular iterators I could do vector<int>::iterator itr. How can this be done with a random access iterator?

Edit: Okay, so the random access is handled for me then? If I make a sorted vector and call sort on it, passing in the vector.begin() and vector.end() iterators, the expected run time will still be O(n log n)?

#8 iMalc   Crossbones+   -  Reputation: 2313


Posted 12 December 2011 - 12:10 AM

The iterators belong to a conceptual hierarchy. Because a Random Access Iterator satisfies the requirements of a Bidirectional Iterator, in that it can definitely go forwards and backwards, RandomAccessIterator is derived from BidirectionalIterator. Also, because a Bidirectional Iterator satisfies the requirements of a plain old Iterator, in that it can definitely go forwards, BidirectionalIterator is derived from an ordinary Iterator.

e.g. if a function takes a BidirectionalIterator then it can be passed a RandomAccessIterator or a BidirectionalIterator, but not an ordinary forwards-only iterator.

Since a lot of algorithms out there don't actually require random access, they don't force you to have a RandomAccessIterator. That doesn't mean that you can't give them one, as it still fits the requirements even if it goes above and beyond them.
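
For example (a minimal sketch), std::reverse only requires bidirectional iterators, so it accepts both kinds, while std::sort demands random access:

#include <algorithm>
#include <list>
#include <vector>

void demo()
{
    std::vector<int> v(10);
    std::list<int> l(10);

    std::reverse(v.begin(), v.end()); // OK: random access exceeds the requirement
    std::reverse(l.begin(), l.end()); // OK: bidirectional is exactly what's required
    std::sort(v.begin(), v.end());    // OK
    //std::sort(l.begin(), l.end());  // won't compile: random access required
}
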
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms

#9 Wooh   Members   -  Reputation: 637


Posted 12 December 2011 - 03:47 AM

Hodgman's example doesn't work because the string is const. This works better:
#include <algorithm>
#include <cstdio>
int main()
{
    char text[] = "Hello!";
    char* begin = text;
    char* end = text + 6;
    std::sort(begin, end);
    printf(text); // prints "!Hello"
}


#10 taz0010   Members   -  Reputation: 275


Posted 12 December 2011 - 05:38 AM

An iterator is considered "random access" if you can move an arbitrary number of elements forward or back in constant time. For example, an array pointer is random access because you can look up any element in the array through simple pointer arithmetic. In some containers, such as list, you need to visit each element in order to find the element adjacent to it, so the container only supports bidirectional iteration. Certain structures, such as singly-linked lists or connections to external resources, are unidirectional, so you can only move forward.

So "random access iterators" have nothing to do with avoiding quicksort's worst case whatsoever.

The actual implementation of quicksort can vary considerably, but the objective is to choose pivots as close to the median of the range as often as possible. A particular implementation MAY hit its worst case when the elements are already sorted. Or it may check for that, possibly making a sorted range the best case instead. A naive implementation might overflow the stack in its worst case, but this is easily countered by recursing only into the smaller of the two partitions. Finally, pure quicksort results in too many recursive function calls, so smarter implementations opt to switch to some other sort (e.g. insertion sort) once the partitions become small enough.

The main purpose of random pivot selection is actually to prevent deliberate exploitation of quicksort's weakness; simpler issues, such as poor performance on an already sorted range, can be solved by other means.
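
A sketch of the two countermeasures mentioned above -- median-of-three pivot selection, and recursing only into the smaller partition while looping on the larger, so the stack depth stays O(log n). This is an illustration for plain int arrays, not any library's actual code:

#include <algorithm>

// Illustration only; not tuned for production use.
int median_of_three(int a, int b, int c)
{
    return std::max(std::min(a, b), std::min(std::max(a, b), c));
}

void quicksort(int* first, int* last)
{
    while (last - first > 1) {
        int pivot = median_of_three(*first, *(first + (last - first) / 2),
                                    *(last - 1));
        // Three-way split: [first,lo) < pivot, [lo,hi) == pivot, [hi,last) > pivot.
        // The middle block is already in its final position.
        int* lo = std::partition(first, last,
                                 [pivot](int x) { return x < pivot; });
        int* hi = std::partition(lo, last,
                                 [pivot](int x) { return x == pivot; });
        if (lo - first < last - hi) {
            quicksort(first, lo); // recurse into the smaller side...
            first = hi;           // ...and loop on the larger one
        } else {
            quicksort(hi, last);
            last = lo;
        }
    }
}

(Raw int pointers serve as the iterators here for the reason given earlier in the thread: pointers satisfy the random access iterator requirements.)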




#11 tbrick   Members   -  Reputation: 109


Posted 12 December 2011 - 09:38 AM

Maybe a dumb question, but why do you need to ensure an O(n log n) run time?

Unless you are sorting VERY large lists in a real-time environment, and you have profiled that this is a verified choke-point in your program (and it matters), you are simply doing "optimization for optimization's sake." And that is nearly always a bad thing.

#12 Adam_42   Crossbones+   -  Reputation: 2564


Posted 12 December 2011 - 10:30 AM

Actually, lists don't need to be that big for an N log N algorithm to be much faster than an N^2 one. For example, if you have 10,000 objects, the N log N algorithm should be about 750 times quicker (using a base-2 logarithm and ignoring the constant factors).
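
Spelling out the arithmetic:

\[
\frac{N^2}{N \log_2 N} = \frac{10{,}000^2}{10{,}000 \times \log_2 10{,}000} \approx \frac{10^8}{132{,}877} \approx 752
\]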

#13 Matt-D   Crossbones+   -  Reputation: 1467


Posted 12 December 2011 - 10:39 AM

Regarding the iterators -- just read this:
http://www.sgi.com/t.../Iterators.html
and then this:
http://www.cplusplus...e/std/iterator/
This should be all that you need to get started! :-)

Interestingly, you might notice that (even though the asymptotic algorithmic complexity is the same) C++ std::sort is faster than C qsort, since it (or, strictly speaking, the comparison function) can be inlined and thus more effectively optimized by the compiler:
http://stackoverflow...sort-vs-stdsort
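
The difference is visible in the two call styles (a minimal sketch, with timings omitted): qsort goes through a function pointer on every comparison, while std::sort's comparison is fixed at compile time, so the optimizer can inline it.

#include <algorithm>
#include <cstdlib>
#include <vector>

// qsort must call the comparison through a function pointer for every
// comparison, which the compiler generally cannot inline.
int compare_ints(const void* a, const void* b)
{
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

void demo(std::vector<int>& v)
{
    std::qsort(v.data(), v.size(), sizeof(int), compare_ints);

    // std::sort's comparison (here the default operator<) is known at
    // compile time, so it can be inlined and optimized.
    std::sort(v.begin(), v.end());
}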

#14 Antheus   Members   -  Reputation: 2397


Posted 12 December 2011 - 11:41 AM

Maybe a dumb question, but why do you need to ensure an O(n log n) run time?

Unless you are sorting VERY large lists in a real-time environment, and you have profiled that this is a verified choke-point in your program (and it matters), you are simply doing "optimization for optimization's sake." And that is nearly always a bad thing.


Because having your application randomly hang for no apparent reason, in a way that's almost impossible to debug, is bad practice. Having an average time of 0.1 seconds but spikes of 2.5 minutes is not acceptable.

Ruby on Rails exhibited some such fundamental flaws in parsing, which caused no end of problems in production due to naive or incorrect algorithms being used for some fundamental operations. The biggest pain was the unexpected delays, since the average times were as expected.

For something as fundamental as sorting, the excuse of premature optimization simply doesn't hold.

Another example: I once used a third-party multi-threading task library. It crashed hard and fast. It turned out to use a recursive algorithm for partitioning, resulting in n^2 stack growth. If the developer of a library makes such a fundamental mistake, the library loses any and all credibility on the quality of the rest of its code. In my case it led to the automatic rejection of any third-party threading library not published by one of the proven players in the field.

#15 Hodgman   Moderators   -  Reputation: 31046


Posted 12 December 2011 - 05:40 PM

Actually, lists don't need to be that big for an N log N algorithm to be much faster than an N^2 one. For example, if you have 10,000 objects, the N log N algorithm should be about 750 times quicker (using a base-2 logarithm and ignoring the constant factors).

In that case, why not go the whole hog and use an O(N) sorting algorithm instead, which would be about 13 times faster than the O(N log N) one, and 10,000 times faster than the O(N^2) one?

#16 Matt-D   Crossbones+   -  Reputation: 1467


Posted 12 December 2011 - 08:53 PM


Actually, lists don't need to be that big for an N log N algorithm to be much faster than an N^2 one. For example, if you have 10,000 objects, the N log N algorithm should be about 750 times quicker (using a base-2 logarithm and ignoring the constant factors).

In that case, why not go the whole hog and use an O(N) sorting algorithm instead, which would be about 13 times faster than the O(N log N) one, and 10,000 times faster than the O(N^2) one?


AFAIK, there's no such thing as a general sorting algorithm with worst-case time complexity below O(n log n) (linearithmic) that remains practical:
http://en.wikipedia....n_of_algorithms

You might fare better in special (not general) cases where you know something extra about the data to be sorted, such as integer sorting.

In theory, there are some "general" algorithms w/ O(n) complexity, but they "are impractical for real-life use due to extremely poor performance or a requirement for specialized hardware."
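
As a concrete special case: if you happen to know the values are small non-negative integers, counting sort runs in O(n + k) with no element comparisons at all (a minimal sketch; k is the number of possible values):

#include <cstddef>
#include <vector>

// Counting sort: O(n + k) for n values known to lie in [0, k).
// It does no comparisons between elements -- it exploits extra knowledge
// about the data, which is exactly the trade-off described above.
void counting_sort(std::vector<int>& v, int k)
{
    std::vector<int> counts(k, 0);
    for (std::size_t i = 0; i < v.size(); ++i)
        ++counts[v[i]];
    std::size_t out = 0;
    for (int value = 0; value < k; ++value)
        for (int c = 0; c < counts[value]; ++c)
            v[out++] = value;
}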

EDIT: this abstract explains the situation pretty well:
"For all the sorting algorithms, it is an accepted performance limit that sorting algorithms are linearithmic or O(N lg N). The linearithmic lower bound in performance stems from the fact that the sorting algorithms use the ordering property of the data. The sorting algorithm uses comparison by the ordering property to arrange the data elements from an initial permutation into a sorted permutation.
Linear O(N) sorting algorithms exist, but use a priori knowledge of the data to use a specific property of the data and thus have greater performance. In contrast, the linearithmic sorting algorithms are generalized by using a universal property of data-comparison, but have a linearithmic performance lower bound. The trade-off in sorting algorithms is generality for performance by the chosen property used to sort the data elements. "

From the above-cited paper:
"Linearithmic Sorting

A sort algorithm performs sorting, and most sorting algorithms are comparison-based sorting algorithms. The comparison sorting algorithms include such algorithms as the merge sort, quick sort, and heap sort. These sorting algorithms use comparison to arrange the elements in the sorted permutation, and are general-purpose in nature.

The comparison sorting algorithms have a well-known theoretical [Johnsonbaugh and Schaefer 2004] performance limit that is the least upper bound for sorting, which is linearithmic or O(N lg N) in complexity. This theoretical lower bound follows from the basis of the comparison sorting algorithm: using comparison to arrange the data elements. A decision tree for N elements is of logarithmic height lg N. Thus the time complexity involves the cost of using a decision tree to compare elements, lg N, and the number of elements, N. Hence the theoretical least upper bound is the product of the two costs involved, O(N lg N), which is linearithmic complexity."

#17 Hodgman   Moderators   -  Reputation: 31046


Posted 12 December 2011 - 09:39 PM

AFAIK, there's no such thing as a comparison-based sorting algorithm with worst-case time complexity below O(n log n) (linearithmic)
...

^^ fixed that for you (bold bit).

Any data where you can express ordering as a bitwise greater/less-than comparison can be sorted with an integer sort. This includes integers, floating-point numbers, text, etc. -- pretty much all the fundamental data types. Composite sorting rules simply place the higher-priority rules in the more significant bits. However, the complexity of most of these algorithms is related to the size of your integer 'key', so once it grows past a certain point, you'd be better off with the 'inferior' O(N log N) algorithms.

It's very common to see these kinds of O(K*N) sorting algorithms used inside game engines on decently sized data sets. On smaller data sets, introsort is good enough (and has lower 'constant' overhead).
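
For example, here is one common way (a sketch; it assumes 32-bit IEEE-754 floats and no NaNs) to turn a float into an unsigned key whose integer order matches the float order, ready for a radix sort:

#include <cstdint>
#include <cstring>

// Maps a 32-bit float to an unsigned key that sorts in the same order.
std::uint32_t float_to_key(float f)
{
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u); // type-pun without aliasing issues
    // Negative floats: flip all bits (their raw bit patterns sort backwards).
    // Non-negative floats: set the sign bit so they sort above all negatives.
    if (u & 0x80000000u)
        return ~u;
    return u | 0x80000000u;
}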

#18 Matt-D   Crossbones+   -  Reputation: 1467


Posted 12 December 2011 - 09:51 PM


AFAIK, there's no such thing as a comparison-based sorting algorithm with worst-case time complexity below O(n log n) (linearithmic)
...

^^ fixed that for you (bold bit).

Any data where you can express ordering as a bitwise greater/less-than comparison can be sorted with an integer sort. This includes integers, floating-point numbers, text, etc. -- pretty much all the fundamental data types. Composite sorting rules simply place the higher-priority rules in the more significant bits. However, the complexity of most of these algorithms is related to the size of your integer 'key', so once it grows past a certain point, you'd be better off with the 'inferior' O(N log N) algorithms.

It's very common to see these kinds of O(K*N) sorting algorithms used inside game engines on decently sized data sets. On smaller data sets, introsort is good enough (and has lower 'constant' overhead).


Well, if you're concerned about using precise terminology (a worthy concern, so don't take this as criticism), I believe you wanted to say that you "specified" or "explained", rather than "fixed", that. ;-)
Correct me if I'm wrong, but you're not disagreeing that, in the context of sorting algorithms, "general-purpose" is interchangeable with "comparison-based", while integer sorting is still special-purpose (even "pretty much all the fundamental data types" is not "all the data types", which matters to anyone concerned with precision), right?
The complexity, and the dependence on the key with the trade-offs this introduces, is what I meant by "general" in conjunction with the "remains practical" part; I'm not disagreeing with your explanation here.

#19 Hodgman   Moderators   -  Reputation: 31046


Posted 12 December 2011 - 10:27 PM

Should've put a cheeky smiley on the FTFY.

I wouldn't necessarily agree that comparison-based is general-purpose whereas bit-based is special-purpose -- I've personally not run into a case where I couldn't convert between the one method and the other.

#20 iMalc   Crossbones+   -  Reputation: 2313


Posted 13 December 2011 - 12:12 AM

My sources tell me that C++0x requires a guaranteed worst-case complexity of O(n * log(n)) for std::sort.

Though it was only specified as average-case O(n * log(n)) in C++03, most implementations tended to use introsort anyway, which meets the C++0x requirement.

Bottom line: you should probably not be concerned about the worst case if you use std::sort on any compiler made or updated this millennium.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms



