ordering table vs quicksort

I have tried various ways to optimize polygon sorting, and I came across the ordering table. It's said to run in O(n). Is this true? Maybe I just don't quite understand it yet. Currently I'm using quicksort. I need your help.

darookie: quicksort has a thoroughly evil worst case; in that case it will probably be slower than bubble sort. Mergesort, although slower in the best case, is better if your data is likely to be in reverse order. Radix sort is indeed close to O(n), whereas quicksort and mergesort are about O(n log n) in the average case. Quicksort is O(n^2) in the worst case.

In the first two examples I think Theta notation is more accurate: O gives an upper bound, Omega gives a lower bound, and Theta gives both upper and lower bounds.

Anyway, the big thing is that radix sort should be faster, but it'll use more memory. In a software engine you could divide Z into the number of unique Z values in your Z buffer, then assemble all the polygons into linked lists, one list per value (see the sketch below). This is not radix sort, but it is close; it is essentially a one-pass bucket sort.
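A minimal sketch of that linked-list idea, assuming Z has already been quantized to an integer bucket index (the Z_BUCKETS value, the Poly struct, and the field names are all made up for illustration):

#include <stddef.h>

#define Z_BUCKETS 1024            /* assumed number of distinct quantized Z values */

typedef struct Poly {
    int z;                        /* quantized Z, assumed in [0, Z_BUCKETS) */
    struct Poly *next;            /* chains polygons that share a Z value */
} Poly;

static Poly *bucket[Z_BUCKETS];   /* one list head per Z value */

/* O(n): push each polygon onto the list for its Z value */
static void bucket_polys(Poly *polys, int n)
{
    int i;
    for (i = 0; i < Z_BUCKETS; i++)
        bucket[i] = NULL;
    for (i = 0; i < n; i++) {
        polys[i].next = bucket[polys[i].z];
        bucket[polys[i].z] = &polys[i];
    }
}

Walking bucket[0] through bucket[Z_BUCKETS - 1] afterwards visits the polygons in Z order, so the whole pass is O(n + Z_BUCKETS), and duplicate Z values simply chain instead of colliding.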

Guest Anonymous Poster
for (i = 0; i < number_of_polygons; i++)
    sorting_table[polygon[i].z] = i;

These two lines summarize an O(N) sort using a table. You can see that a lot of memory is used.
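Fleshed out slightly so it compiles (a sketch; the MAX_Z range, the Polygon struct, and the -1 empty marker are assumptions, and note that duplicate z values overwrite each other, which a later reply picks up on):

#define MAX_Z 65536                       /* assumed range of integer z values */

typedef struct { int z; } Polygon;        /* assumed: z is already an integer index */

static int sorting_table[MAX_Z];          /* one slot per possible z value */

static void build_table(const Polygon *polygon, int number_of_polygons)
{
    int i;
    for (i = 0; i < MAX_Z; i++)
        sorting_table[i] = -1;            /* -1 marks an empty slot */
    for (i = 0; i < number_of_polygons; i++)
        sorting_table[polygon[i].z] = i;  /* the O(N) fill from the post */
}

Reading the table back in index order then yields the polygons sorted by z; the memory cost is one slot per possible z value rather than one per polygon.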

If you want to get rid of the evil quicksort case, choose a random reference element for splitting (the pivot); the probability of hitting the evil case drops as the array size increases.
If your data is almost sorted already, use the middle element as your reference for better performance.
Ordering tables can run in O(n) if the data is evenly distributed along the sorting scalar.
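A sketch of the random-pivot variant (it sorts a bare array of z values; the function name and the use of rand() are assumptions):

#include <stdlib.h>

static void swapf(float *a, float *b) { float t = *a; *a = *b; *b = t; }

/* quicksort with a randomly chosen pivot, so no fixed input ordering
   can trigger the O(n^2) case every time */
static void quicksort_z(float *z, int lo, int hi)
{
    if (lo >= hi)
        return;

    /* move a random element into the pivot position */
    swapf(&z[lo + rand() % (hi - lo + 1)], &z[hi]);

    float pivot = z[hi];
    int i = lo, j;
    for (j = lo; j < hi; j++)
        if (z[j] < pivot)
            swapf(&z[i++], &z[j]);
    swapf(&z[i], &z[hi]);

    quicksort_z(z, lo, i - 1);
    quicksort_z(z, i + 1, hi);
}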

Quote:
Original post by Anonymous Poster
for (i = 0; i < number_of_polygons; i++)
    sorting_table[polygon[i].z] = i;

These two lines summarize an O(N) sort using a table. You can see that a lot of memory is used.

If any polygon[].z values are equal, polygons will be dropped from the table. You can fix it with linear probing:

while (sorting_table[polygon[i].z + offset].occupied)
    offset++;
sorting_table[polygon[i].z + offset].index = i;
sorting_table[polygon[i].z + offset].occupied = 1;

but then it degrades to O(N^2) when all polygon[].z values are equal.

-Morten-

How about quicksort falling back to insertion sort? Just return from the current recursion depth once a partition is smaller than five elements or so, then run a single insertion sort over the whole array when quicksort is done. It's actually pretty fast, and it should help ease the worst case if it happens.

You could also take a median of three, using the middle value of the first, last, and middle elements of the partition as the pivot. These two methods together would probably increase the speed, considering that you are sorting maybe a few thousand polygons at a time. A sketch of both is below.
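A sketch combining both suggestions (the cutoff of five comes from the post; the helper names are assumptions):

#define CUTOFF 5    /* "smaller than five elements or so" */

static void swapf(float *a, float *b) { float t = *a; *a = *b; *b = t; }

/* order a[lo], a[mid], a[hi]; the median ends up at a[mid] */
static int median_of_three(float *a, int lo, int hi)
{
    int mid = lo + (hi - lo) / 2;
    if (a[mid] < a[lo]) swapf(&a[mid], &a[lo]);
    if (a[hi]  < a[lo]) swapf(&a[hi],  &a[lo]);
    if (a[hi]  < a[mid]) swapf(&a[hi], &a[mid]);
    return mid;
}

/* quicksort that simply returns on small partitions, leaving them
   almost sorted for the final insertion-sort pass */
static void quick_pass(float *a, int lo, int hi)
{
    if (hi - lo + 1 < CUTOFF)
        return;

    swapf(&a[median_of_three(a, lo, hi)], &a[hi]);  /* median as pivot */
    float pivot = a[hi];
    int i = lo, j;
    for (j = lo; j < hi; j++)
        if (a[j] < pivot)
            swapf(&a[i++], &a[j]);
    swapf(&a[i], &a[hi]);

    quick_pass(a, lo, i - 1);
    quick_pass(a, i + 1, hi);
}

static void insertion_sort(float *a, int n)
{
    int i, j;
    for (i = 1; i < n; i++) {
        float key = a[i];
        for (j = i - 1; j >= 0 && a[j] > key; j--)
            a[j + 1] = a[j];
        a[j + 1] = key;
    }
}

/* usage: quick_pass(z, 0, n - 1); insertion_sort(z, n); */

After quick_pass returns, every element is within CUTOFF of its final position, so the closing insertion sort runs in roughly O(n * CUTOFF).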

Guest Anonymous Poster
Radix sort is definitely faster on high polygon counts and uses only 2^radix extra memory elements. The downside is that quicksort is more generic; with radix you need some kind of conversion for floating-point numbers. One such conversion is sketched below.
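One known conversion for 32-bit IEEE 754 floats maps each value to an unsigned integer with the same ordering, which radix sort can then chew through 8 bits at a time (a sketch; the function name is an assumption):

#include <stdint.h>
#include <string.h>

/* flip the sign bit of positive floats and all bits of negative ones,
   so unsigned integer order matches the original float order */
static uint32_t float_to_key(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);   /* reinterpret the bits without aliasing issues */
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}

With 8-bit digits, each counting pass needs 2^8 = 256 buckets, which is the "2^radix memory elements" figure above.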

