

What's the fastest way to bucket sort pointers?



#1 thedodgeruk   Members   -  Reputation: 124


Posted 20 October 2011 - 11:46 AM

I have shaders (only 12 of them so far, but the number will grow) and entities.

Every entity has a pointer to a shader.

I want to bucket the entities into groups by shader.

I used std::map, but this seems slow. Is there a faster way?


#2 Telastyn   Crossbones+   -  Reputation: 3726


Posted 20 October 2011 - 12:29 PM

You'll need to clarify; I can't understand what you're asking.

#3 thedodgeruk   Members   -  Reputation: 124


Posted 20 October 2011 - 12:51 PM

> I have shaders (only 12 of them so far, but the number will grow) and entities.
> Every entity has a pointer to a shader.
> I want to bucket the entities into groups by shader.
> I used std::map, but this seems slow. Is there a faster way?

I have entities (i.e. models), and they all have shader pointers, so each one points to one of the different shaders.

I want to sort all the models into separate vectors for fast access. They need to be sorted via the shaders' pointers.



#4 Telastyn   Crossbones+   -  Reputation: 3726


Posted 20 October 2011 - 12:58 PM

And std::map<shader*, std::vector<model*>> (using smart pointers where appropriate) is insufficient?

Is it too slow to populate, to iterate over, to search through?
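For concreteness, a minimal sketch of the map-of-vectors approach being suggested here. Shader and Model are hypothetical stand-ins for the poster's actual types:

#include <map>
#include <vector>

struct Shader { /* shader state */ };
struct Model  { Shader* shader; /* mesh data, etc. */ };

// Rebuild one bucket per distinct shader pointer.
void bucketByShader(const std::vector<Model*>& models,
                    std::map<Shader*, std::vector<Model*>>& buckets)
{
    buckets.clear();
    for (Model* m : models)
        buckets[m->shader].push_back(m);
}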

#5 thedodgeruk   Members   -  Reputation: 124


Posted 20 October 2011 - 05:57 PM

> And std::map<shader*, std::vector<model*>> (using smart pointers where appropriate) is insufficient?
> Is it too slow to populate, to iterate over, to search through?

Tried that; it was way too slow.

I had to reconfigure my engine to use enums. Got the speed now, though.

#6 ApochPiQ   Moderators   -  Reputation: 16079


Posted 20 October 2011 - 06:01 PM

Did you profile your code to see what was slow? What exactly do you mean by "using enums"? How would that gain you speed?

#7 Hodgman   Moderators   -  Reputation: 31143


Posted 20 October 2011 - 06:11 PM

Just sort them all into the one vector. Your "buckets" are then different ranges within that vector.
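A minimal sketch of that single-vector idea, using the same hypothetical Shader/Model types as above (bindShader and drawModel are placeholder calls, not a real API):

#include <algorithm>
#include <cstddef>
#include <vector>

struct Shader { /* shader state */ };
struct Model  { Shader* shader; };

// Sort once; models sharing a shader become adjacent, so each
// contiguous run of equal shader pointers is one "bucket".
void sortByShader(std::vector<Model*>& models)
{
    std::sort(models.begin(), models.end(),
              [](const Model* a, const Model* b) { return a->shader < b->shader; });
}

void drawAll(const std::vector<Model*>& models)
{
    std::size_t i = 0;
    while (i < models.size()) {
        Shader* current = models[i]->shader;
        // bindShader(current);          // placeholder: one state change per run
        while (i < models.size() && models[i]->shader == current) {
            // drawModel(models[i]);     // placeholder draw call
            ++i;
        }
    }
}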

#8 Telastyn   Crossbones+   -  Reputation: 3726


Posted 20 October 2011 - 06:44 PM


>> And std::map<shader*, std::vector<model*>> (using smart pointers where appropriate) is insufficient?
>> Is it too slow to populate, to iterate over, to search through?
>
> Tried that; it was way too slow.
> I had to reconfigure my engine to use enums. Got the speed now, though.

Enums aren't any smaller or easier to hash than pointers. If you're not using pointers and are copying your entire object every time... yeah, that's going to suck.

But since you won't actually tell us anything meaningful... best of luck with that.

#9 iMalc   Crossbones+   -  Reputation: 2313


Posted 21 October 2011 - 12:12 AM

Maps can be slow if you don't know how to use them properly, and fast if you do. There are various tricks, like making use of swap and const references, that you need to know to use them efficiently.

Without seeing your code, my experience tells me to assume that you used them poorly, because that assumption is most often correct.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms
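One example of the kind of trick being alluded to, using the map type from earlier in the thread (a sketch, not the poster's actual code): iterating the map by value copies every bucket on every pass, while a const reference does not.

#include <map>
#include <vector>

struct Shader { };
struct Model  { };

void drawBuckets(const std::map<Shader*, std::vector<Model*>>& buckets)
{
    // Good: const reference binds each pair without copying it.
    for (const auto& entry : buckets) {
        const std::vector<Model*>& bucket = entry.second; // no vector copy
        (void)bucket; // ... submit every model in `bucket` here ...
    }
    // Bad: "for (auto entry : buckets)" would deep-copy each
    // key/vector pair on every pass, which profiles terribly.
}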

#10 thedodgeruk   Members   -  Reputation: 124


Posted 21 October 2011 - 05:06 AM



>>> And std::map<shader*, std::vector<model*>> (using smart pointers where appropriate) is insufficient?
>>> Is it too slow to populate, to iterate over, to search through?
>>
>> Tried that; it was way too slow.
>> I had to reconfigure my engine to use enums. Got the speed now, though.
>
> Enums aren't any smaller or easier to hash than pointers. If you're not using pointers and are copying your entire object every time... yeah, that's going to suck.
> But since you won't actually tell us anything meaningful... best of luck with that.

Erm, I need to sort my entities so that I have fewer state changes on the GPU, so I need to bucket sort all my entities via the shader pointer. When that's done, I have one bucket for all entities that use the plain-colour shader, another for plain-texture, another for Phong, another for normal mapping, etc.

I ran the analyzer, and with the map it was saying the slowest thing in my engine was iterating through the map once I'd collected all my info into the buckets.




#11 __Homer__   Members   -  Reputation: 58


Posted 21 October 2011 - 05:26 AM

Use assembly language and stop screwing around; if you want speed, size, or both, come to the dark side @ asmcommunity.net.
No, we do not support malicious stuff; we are good people who help each other, and welcome novices and experts alike.
I was forced to program in C and C++ all this year, and I learned a few things, like: MSVC IS CRAP, and I like Code::Blocks, and so on.
In C++, friends have access to your privates.
In ObjAsm, your members are exposed!

#12 rip-off   Moderators   -  Reputation: 8527


Posted 21 October 2011 - 05:29 AM

How were you profiling? Were you profiling a Debug or Release build? If iterating through a 12 element std::map was the most expensive thing in your "engine", then you mustn't be doing a lot of work elsewhere in your program.

Can you show us some code? Maybe you are making a minor mistake that ends up doing unnecessary work.

For small numbers of keys, a map has a lot of constant and hidden* overheads. It is only when the number of keys is large that you see the benefits. I agree with Hodgman: I think a sorted, linear, contiguous structure like std::vector<> would be much more efficient, and not too hard to code.

* Hidden overhead includes cost of cache misses and allocations, which is ignored by big O analysis.

#13 Hodgman   Moderators   -  Reputation: 31143


Posted 21 October 2011 - 06:46 AM

A map (i.e. a balanced binary tree) of vectors is total overkill. Implementing it in assembly also won't help, as the inefficiency is in the algorithm / data structure, not the implementation.

All you need is one std::vector plus std::sort (or a custom radix sort if you've got thousands of entities and want that little bit of extra speed).
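A minimal sketch of the radix-style variant for large entity counts, assuming each model carries a small dense shader index such as the enum value the original poster switched to (shaderIndex is a hypothetical field):

#include <vector>

struct Model { int shaderIndex; }; // dense 0..shaderCount-1 value, e.g. an enum

// Counting sort by shader index: O(n), stable, no comparisons at all.
std::vector<Model*> countingSortByShader(const std::vector<Model*>& models,
                                         int shaderCount)
{
    std::vector<int> offsets(shaderCount + 1, 0);
    for (Model* m : models)
        ++offsets[m->shaderIndex + 1];   // histogram, shifted by one slot
    for (int i = 1; i <= shaderCount; ++i)
        offsets[i] += offsets[i - 1];    // prefix sums = bucket start positions
    std::vector<Model*> sorted(models.size());
    for (Model* m : models)
        sorted[offsets[m->shaderIndex]++] = m;
    return sorted;
}

With only a dozen shaders the offsets array is tiny, and one pass over the entities places every model directly into its bucket's range.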



