ordering table vs quicksort

davenirline:


I have been trying to optimize polygon sorting and came across the ordering table. It's said to run in O(n). Is this true? Maybe I just don't quite understand it yet. Currently I'm using quicksort. Need your help.

darookie: quicksort has a completely evil worst case - it will probably be slower than bubble sort there. Mergesort, although slower in the best case, is better if you are likely to have stuff in reverse order. Radix sorting is indeed close to O(n), whereas quicksort and mergesort are about O(n ln n) in the average case. Quicksort is O(n^2) in the worst case.

In the first two examples I think theta notation is more accurate: O indicates an upper bound, Ω (big omega) indicates a lower bound, and Θ (theta) indicates both an upper and a lower bound, IIRC.
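For reference, the standard definitions are:

$f(n) = O(g(n))$: there exist constants $c, n_0 > 0$ such that $f(n) \le c\,g(n)$ for all $n \ge n_0$ (asymptotic upper bound).
$f(n) = \Omega(g(n))$: there exist constants $c, n_0 > 0$ such that $f(n) \ge c\,g(n)$ for all $n \ge n_0$ (asymptotic lower bound).
$f(n) = \Theta(g(n))$: both of the above hold at once.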

Anyway, the big thing is that radix sort should be faster, but it'll use more memory. If you have a software engine, you could divide Z into the number of unique Z values in your Z buffer, then assemble all the polygons into linked lists, one list per Z value. This is not radix sort, but it is close.
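(A minimal sketch of that per-depth linked-list scheme, assuming z is already quantized to a fixed range; Polygon, Z_BUCKETS, and draw_polygon are illustrative names, not anything from an actual engine.)

#include <stddef.h>

#define Z_BUCKETS 4096                 /* assumed depth resolution */

typedef struct Polygon {
    int z;                             /* quantized depth, 0 .. Z_BUCKETS-1 */
    struct Polygon *next;              /* chain link for its depth bucket */
    /* ... vertex and material data ... */
} Polygon;

extern void draw_polygon(Polygon *p);  /* assumed renderer entry point */

/* Drop every polygon into a per-depth linked list: O(n) over the
   polygons plus O(Z_BUCKETS) to clear the table. */
void bucket_by_depth(Polygon *polys, int count, Polygon *table[Z_BUCKETS])
{
    for (int i = 0; i < Z_BUCKETS; i++)
        table[i] = NULL;
    for (int i = 0; i < count; i++) {
        Polygon *p = &polys[i];
        p->next = table[p->z];         /* push onto the bucket's list */
        table[p->z] = p;
    }
}

/* Painter's algorithm: walk the table far-to-near and draw. */
void draw_back_to_front(Polygon *table[Z_BUCKETS])
{
    for (int z = Z_BUCKETS - 1; z >= 0; z--)
        for (Polygon *p = table[z]; p != NULL; p = p->next)
            draw_polygon(p);
}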

Guest Anonymous Poster
for (i = 0; i < number_of_polygons; i++)
    sorting_table[polygon[i].z] = i;

This loop summarizes an O(N) sort using a table. You can see that a lot of memory is used.

If you want to get rid of the evil quicksort case, choose a random reference element for splitting; the density of evil cases drops as the array size increases.
If your data is almost sorted already, use the middle value as your reference for better performance.
Ordering tables can run in O(n) if the data is evenly distributed along the sorting scalar.
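(A minimal sketch of the random-pivot idea, assuming a plain float array; call srand() once at startup to seed rand().)

#include <stdlib.h>

/* Quicksort with a random pivot: no fixed input can reliably trigger
   the O(n^2) evil case, and the chance of hitting it by accident
   shrinks as the array grows. Sorts a[lo..hi] ascending. */
void quicksort_random(float *a, int lo, int hi)
{
    while (lo < hi) {
        /* pick a random element and move it to the front so the
           classic Hoare partition below is safe */
        int p = lo + rand() % (hi - lo + 1);
        float tmp = a[lo]; a[lo] = a[p]; a[p] = tmp;
        float pivot = a[lo];

        int i = lo - 1, j = hi + 1;
        for (;;) {
            do { i++; } while (a[i] < pivot);
            do { j--; } while (a[j] > pivot);
            if (i >= j) break;
            float t = a[i]; a[i] = a[j]; a[j] = t;
        }

        /* recurse into the smaller half and loop on the larger one,
           keeping worst-case stack depth at O(log n) */
        if (j - lo < hi - j) {
            quicksort_random(a, lo, j);
            lo = j + 1;
        } else {
            quicksort_random(a, j + 1, hi);
            hi = j;
        }
    }
}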

Quote:
Original post by Anonymous Poster
for (i = 0; i < number_of_polygons; i++)
    sorting_table[polygon[i].z] = i;

This loop summarizes an O(N) sort using a table. You can see that a lot of memory is used.

If any polygon[].z values are equal, it'll drop polygons out of the table. You can fix it with linear probing:

offset = 0;
while (sorting_table[polygon[i].z + offset].occupied)
    offset++;

but then it's O(N^2) when all the polygon[].z values are equal.
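(A sketch of an alternative that avoids probing altogether: a counting sort over the z key, which stays O(N + Z_RANGE) even when every z is equal. Z_RANGE and the parameter names here are made up for illustration.)

#define Z_RANGE 4096  /* assumed number of distinct quantized z values */

/* Counting sort on the z key: histogram, prefix-sum, scatter.
   `order` receives the polygon indices in ascending z, and equal
   keys cost nothing extra. */
void counting_sort_by_z(const int *z, int n, int *order)
{
    int count[Z_RANGE] = {0};           /* histogram of z values */
    int sum = 0;

    for (int i = 0; i < n; i++)         /* count each z value */
        count[z[i]]++;
    for (int i = 0; i < Z_RANGE; i++) { /* prefix sums -> start offsets */
        int c = count[i];
        count[i] = sum;
        sum += c;
    }
    for (int i = 0; i < n; i++)         /* stable scatter of indices */
        order[count[z[i]]++] = i;
}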

-Morten-

How about quicksort falling back to insertion sort? Just return from the current recursion depth once a partition is smaller than five elements or so, then run a single insertion sort over the whole array when quicksort is done. It's actually pretty fast, and it should help ease the worst case if it happens.

You could also take a median of three, using the middle value of the first, last, and middle elements of the partition as the pivot. These two methods would probably increase the speed, considering that you are sorting maybe a few thousand polygons at a time; a sketch of both is below.
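(A sketch combining both suggestions, again assuming a plain float array; CUTOFF, median3, and hybrid_sort are hypothetical names, and five is just one reasonable threshold.)

#define CUTOFF 5  /* partitions this small are left for insertion sort */

/* Middle value of three: the median-of-three pivot choice. */
static float median3(float a, float b, float c)
{
    if (a < b) {
        if (b < c) return b;
        return (a < c) ? c : a;
    }
    if (a < c) return a;
    return (b < c) ? c : b;
}

/* Quicksort that stops recursing once a partition is under CUTOFF. */
static void quicksort_cutoff(float *a, int lo, int hi)
{
    if (hi - lo + 1 <= CUTOFF)
        return;                        /* small runs are finished later */

    float pivot = median3(a[lo], a[(lo + hi) / 2], a[hi]);

    int i = lo - 1, j = hi + 1;        /* Hoare partition around pivot */
    for (;;) {
        do { i++; } while (a[i] < pivot);
        do { j--; } while (a[j] > pivot);
        if (i >= j) break;
        float t = a[i]; a[i] = a[j]; a[j] = t;
    }
    quicksort_cutoff(a, lo, j);
    quicksort_cutoff(a, j + 1, hi);
}

static void insertion_sort(float *a, int n)
{
    for (int i = 1; i < n; i++) {
        float v = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > v) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = v;
    }
}

/* Quicksort leaves every element within CUTOFF-1 slots of its final
   position, so the single insertion-sort pass at the end is O(n). */
void hybrid_sort(float *a, int n)
{
    quicksort_cutoff(a, 0, n - 1);
    insertion_sort(a, n);
}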
