# sort algorithm

This topic is 4378 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

Hi, I've run into a little problem with sorting. I have a bunch of structures containing unsigned ints, stored in a std::vector:
```cpp
struct data
{
    unsigned int vb;
    unsigned int ib;
    unsigned int texture;
};
```

I'd like to sort those structs by vb, ib and texture. For the moment I do it this way:
```cpp
// the function used by std::sort :
bool dataGreater(data * left, data * right)
{
    if (left->vb > right->vb)   return(true);
    if (left->vb < right->vb)   return(false);

    if (left->ib > right->ib)   return(true);
    if (left->ib < right->ib)   return(false);

    if (left->texture > right->texture)   return(true);
    if (left->texture < right->texture)   return(false);
}

// the std::vector is declared like this :
std::vector<data *> m_Data;

// and I sort with this line :
std::sort(m_Data.begin(), m_Data.end(), dataGreater);
```


The problem is: if I have a lot of structs with different vb and ib but the same texture, this algorithm will sort by descending vb, then ib. After thinking about it a bit, I realized I don't really want a "sort" algorithm but a "group" algorithm, one that groups the structs by vb, ib and texture, in that priority order. But I don't really know how to implement that. I thought about scanning the vector to group by vb, then re-scanning each group to sort by ib, and so on, but that would be pretty slow since I'd have to traverse the vector once per parameter :/ Any help welcomed ^^
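For reference, the same three-key descending comparison can be written more compactly with std::tie (C++11), which compares the fields lexicographically and cannot fall off the end of the function; a sketch:

```cpp
#include <algorithm>
#include <cassert>
#include <tuple>
#include <vector>

struct data
{
    unsigned int vb;
    unsigned int ib;
    unsigned int texture;
};

// Descending lexicographic comparison over (vb, ib, texture):
// std::tie builds tuples of references and compares them field by field.
bool dataGreater(const data* left, const data* right)
{
    return std::tie(left->vb, left->ib, left->texture) >
           std::tie(right->vb, right->ib, right->texture);
}
```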

##### Share on other sites
I don't understand what you mean by grouping and how that differs from sorting by the criteria you have.

##### Share on other sites
At any rate, though, if you convert your comparison function to an overloaded less-than operator, you can call std::sort without passing a comparator:

```cpp
#include <algorithm>

std::sort( vector.begin(), vector.end() );
```

Illco
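If the vector held data objects by value, the overload Illco suggests might look like this (a sketch; note this member operator< defines an ascending order, unlike the descending dataGreater above):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct data
{
    unsigned int vb;
    unsigned int ib;
    unsigned int texture;

    // Strict weak ordering: compare vb first, then ib, then texture.
    bool operator<(const data& other) const
    {
        if (vb != other.vb) return vb < other.vb;
        if (ib != other.ib) return ib < other.ib;
        return texture < other.texture;
    }
};

// With the operator defined, no explicit comparator is needed:
// std::sort(m_Data.begin(), m_Data.end());
```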

##### Share on other sites
Following is an example.

Not sorted:

```
  vb  |  ib  |  tex
----------------------
   1      1      0
   2      2      1
   3      3      0
   3      4      1
```

Sorted with the previous algo:

```
  vb  |  ib  |  tex
----------------------
   1      1      0
   2      2      1
   3      3      0
   3      4      1
```

I'd like to have something like that:

```
  vb  |  ib  |  tex
----------------------
   1      1      0
   3      3      0
   3      4      1
   2      2      1
```

When I have my data, I loop through it and set its resources (vb, ib, texture). Each resource type has a certain cost (cost(vb) > cost(ib) > cost(texture)), and I'm looking for an algorithm to organize my data so as to minimize the number of resource changes.

I hope it's a bit clearer.

Edit: Illco, I'm storing pointers to the structures, so the less-than operator would apply to the (data *). So I would have to declare a global operator<(data * left, data * right), and that would be the same as what I use, right? (correct me if I'm wrong)

##### Share on other sites
Quote:
```cpp
// the function used by std::sort :
bool dataGreater(data * left, data * right)
{
    if (left->vb > right->vb)   return(true);
    if (left->vb < right->vb)   return(false);

    if (left->ib > right->ib)   return(true);
    if (left->ib < right->ib)   return(false);

    if (left->texture > right->texture)   return(true);
    if (left->texture < right->texture)   return(false);
}
```

is your problem. You need to move the texture comparison up if it's the most significant thing.

Alternatively, work out a single index to sort by. It depends how many textures/ibs/vbs you need, but you could always do something like this:

```cpp
unsigned int l = left->vb  + (left->ib  << 8) + (left->texture  << 16);
unsigned int r = right->vb + (right->ib << 8) + (right->texture << 16);
return l > r;
```

This gives 8 bits each for vb and ib, and 16 for textures. More than enough.
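If 8 bits per field ever gets tight, the same packing idea extends to a 64-bit key; a sketch (the bit widths here are illustrative assumptions, pick whatever your resource counts need):

```cpp
#include <cassert>
#include <cstdint>

struct data
{
    unsigned int vb;
    unsigned int ib;
    unsigned int texture;
};

// Pack texture into the most significant bits so it is the primary sort key
// (matching the layout above): 24 bits texture, 20 bits ib, 20 bits vb.
// Each field must fit in its allotted width.
std::uint64_t sortKey(const data& d)
{
    return (static_cast<std::uint64_t>(d.texture) << 40) |
           (static_cast<std::uint64_t>(d.ib)      << 20) |
            static_cast<std::uint64_t>(d.vb);
}
```

Sorting ascending by sortKey then groups by texture first, ib second, vb last; swap the shifts around to change the priorities.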

##### Share on other sites
I don't know if it is worth the effort to spend too much time on regrouping the items. I seem to remember that the number of state changes should be minimized, but the actual type of state change is not that important. However, you could look into decision trees, especially the ID3 algorithm (used in AI for classifying data). It's been a while since I worked with it, but I think you could tune the algorithm to produce the results you are looking for.

##### Share on other sites
As a bit of an aside: how many vertex and index buffers do you have? (I'm taking a wild guess at what vb and ib stand for here.) I'd think that under most circumstances an index buffer is associated with a vertex buffer, so usually each vb would correspond to one ib. You should probably also be packing your VBs and IBs together; somebody on the graphics forum mentioned an optimal size a while back, but I can't remember what it was.

##### Share on other sites
Quote:
 Each resource type has a certain cost (cost(vb) > cost(ib) > cost(texture)) and I'm looking for an algorithm to organize my datas, such that I would minimize the resources changes.

If I understand that correctly, changing vb has a bigger cost than changing ib, for instance. So this means you want to sort by vb first. For all vb which are the same, you want to sort by ib. For all vb and ib which are the same, you want to sort by texture. This will reduce the total cost of changing values.

However the problem you have is that this sorting will only reduce the total cost, but not minimize it, as shown in your example:
Quote:
```
sorted with the previous algo:
  vb  |  ib  |  tex
----------------------
   1      1      0
   2      2      1
   3      3      0
   3      4      1

I'd like to have something like that:
  vb  |  ib  |  tex
----------------------
   1      1      0
   3      3      0
   3      4      1
   2      2      1
```

Total cost for the sorted version is: 2*cost(vb) + 3*cost(ib) + 3*cost(tex)
Total cost for the second version is: 2*cost(vb) + 3*cost(ib) + 1*cost(tex)

What you want to do is calculate a distance between the data items, similar to the Hamming distance, but with weights:

d(i, j) = cost(vb)*(vb_i != vb_j ? 1 : 0) + cost(ib)*(ib_i != ib_j ? 1 : 0) + cost(tex)*(tex_i != tex_j ? 1 : 0)

This reduces your problem to the Traveling Salesman Problem.

Edit:
Corrected the distance formula: it's either 0 or the cost. Maybe you can try different distance formulas. Also, reusing a previous salesman route may speed things up, i.e. only apply changes to that route.
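A sketch of that weighted distance in code (the cost constants are placeholder values; real ones would come from profiling):

```cpp
#include <cassert>

struct data
{
    unsigned int vb;
    unsigned int ib;
    unsigned int texture;
};

// Hypothetical per-state-change costs, ordered cost(vb) > cost(ib) > cost(tex).
const unsigned int COST_VB  = 4;
const unsigned int COST_IB  = 2;
const unsigned int COST_TEX = 1;

// Weighted Hamming-style distance: the cost of switching state from i to j.
unsigned int distance(const data& i, const data& j)
{
    return (i.vb      != j.vb      ? COST_VB  : 0) +
           (i.ib      != j.ib      ? COST_IB  : 0) +
           (i.texture != j.texture ? COST_TEX : 0);
}
```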

##### Share on other sites
Thx both of you.

@dave_ : I thought about that, but the problem is that the program will run for a long time, and I can't guarantee that the resources will always stay below 2^8 (in your example). Moreover, I need to sort by a few more parameters, so I can't use that method unless I can work with 32x10-bit integers ^^ But I heard that some libraries allow this, so I should take a look.

@goodbyte : you should always try to minimize state changes, especially vertex buffers, index buffers, shaders and textures. Changing shaders many times each frame really has an impact on performance, and textures can have an even bigger impact, especially really big ones. So spending a few ms per frame to sort the primitives is really worth the effort (I gained a +10% frame rate when I simply sorted by shader ^^)
Anyway, I'll look into this decision tree algorithm.

Any other idea ?

Edit : woops, I took too long to reply ^^
Thx for the other replies.

@nmi : sounds interesting. I hope this algorithm is fast enough for my use.

##### Share on other sites
Quote:
Original post by paic:
@dave_ : I thought about that, but the problem is that the program will run for a long time, and I can't guarantee that the resources will always stay below 2^8 (in your example). Moreover, I need to sort by a few more parameters, so I can't use that method unless I can work with 32x10-bit integers ^^ But I heard that some libraries allow this, so I should take a look.

Well, you've got to be careful if that's your excuse. It doesn't matter how big a number you use, it will eventually run out!

If you go for another approach you won't get the same precision.

##### Share on other sites
Sorry, but 8 bits for vb is really low ^^ I'm currently way above that (in a town I already have hundreds of buildings, and that's without counting all the trees, city lights, etc.)
And anyway, your method would result in the same order as the std::sort I use (well, in fact, the reverse order, since texture uses the most significant bits in your example).

to nmi : I've looked into what you said. It's exactly what I want to obtain, but it's too slow for a real-time approach :/
Maybe I can solve that by precomputing the order of all the entities in my scene, and then during runtime simply doing a table look-up.

Isn't there any method that allows minimizing state changes in real time??

##### Share on other sites
Quote:
Original post by paic:
Isn't there any method that allows minimizing state changes in real time??

Maybe, but it will take time to run - possibly more time than you can save versus a plain sort (which you probably shouldn't do very often either).

Do you know this is a problem anyway? Have you profiled the application?

##### Share on other sites
Quote:
Original post by paic:
to nmi : I've looked into what you said. It's exactly what I want to obtain, but it's too slow for a real-time approach :/ Maybe I can solve that by precomputing the order of all the entities in my scene, and then during runtime simply doing a table look-up.

There is an "iterative improvement" solution for the salesman problem. This may work in real time if you can reuse a former solution.
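One way such iterative improvement could be sketched: start from the current order (e.g. last frame's result) and keep applying pairwise swaps while any swap lowers the total state-change cost. This is a hill-climbing sketch with made-up costs, not a full TSP solver; it can get stuck in local minima:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

struct data
{
    unsigned int vb;
    unsigned int ib;
    unsigned int texture;
};

// Weighted distance as nmi described; the costs 4/2/1 are assumptions.
unsigned int distance(const data& a, const data& b)
{
    return (a.vb != b.vb ? 4u : 0u) +
           (a.ib != b.ib ? 2u : 0u) +
           (a.texture != b.texture ? 1u : 0u);
}

// Total state-change cost of drawing the items in the given order.
unsigned int totalCost(const std::vector<data>& order)
{
    unsigned int cost = 0;
    for (std::size_t i = 1; i < order.size(); ++i)
        cost += distance(order[i - 1], order[i]);
    return cost;
}

// Try every pairwise swap, keep the ones that lower the total cost, and
// repeat until no swap helps. Starting from a previous frame's order keeps
// the number of passes small in practice.
void improve(std::vector<data>& order)
{
    bool improved = true;
    while (improved)
    {
        improved = false;
        unsigned int best = totalCost(order);
        for (std::size_t i = 0; i < order.size(); ++i)
            for (std::size_t j = i + 1; j < order.size(); ++j)
            {
                std::swap(order[i], order[j]);
                unsigned int cost = totalCost(order);
                if (cost < best) { best = cost; improved = true; }
                else             std::swap(order[i], order[j]); // revert
            }
    }
}
```

Recomputing totalCost for every candidate swap is O(n) per swap; a delta update (only the edges touching i and j change) would make this much cheaper for long orders.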

##### Share on other sites
Zahlman: Yes, I know state changes are a problem in my case: when DirectX outputs something like "texture thrashing: blahblahblah" although I have enough video memory to handle the textures correctly, I can safely assume that I should avoid useless texture switches ^^
And +10% speed gain when I sort by shader is a lot (as I already wrote in a previous post).

What I'm trying to do now is to have a function that minimizes state changes instead of sorting by only one state (at the moment I can sort by texture, by shader, by whatever, but not by 2 or more criteria at the same time).

I think I'll precompute the optimal ordering just after I load my scene (using nmi's suggestion), and then re-use the result in real time. It seems the best approach for the moment (until I find something better ^^)

##### Share on other sites
Something tells me that treating the input data as Gray codes, and converting them to binary inside the compare routine, might help minimise state changes. You would need to combine the bits somewhat like dave_ showed, though. However, I'm not sure there are very efficient Gray-code conversions for very large numbers. Obviously you wouldn't be able to do it in one table lookup.

Regardless of the above, you must put a return false; at the end of your dataGreater function, so that it (a) keeps the compiler happy, and (b) compares an item equal with itself.
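The comparator with that missing final return added might read (a sketch based on the code in the original post):

```cpp
#include <cassert>

struct data
{
    unsigned int vb;
    unsigned int ib;
    unsigned int texture;
};

// Descending three-key comparison; the final return false makes equal
// elements compare "not greater", giving a valid strict weak ordering.
bool dataGreater(const data* left, const data* right)
{
    if (left->vb > right->vb)           return true;
    if (left->vb < right->vb)           return false;

    if (left->ib > right->ib)           return true;
    if (left->ib < right->ib)           return false;

    if (left->texture > right->texture) return true;
    return false;
}
```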
