snowmanZOMG

Posted 29 April 2013 - 08:22 AM

Sorting data is a huge way to get gains in performance.  This is due to a number of effects, the chief ones being caching and driver or state-switching overhead.  When you sort data, you're more likely to hit the instruction cache, since you're probably going to be doing similar operations on a large group of data at once.  You may see data cache hit rate improvements as well.

 

With respect to graphics in particular, the GPU is very intolerant of state switching.  Its entire performance potential is contingent on not having special cases!  Whether it's branching or swapping out entire buffers/textures doesn't really matter; they all carry significant costs in execution speed.  The reasons come down to GPU architecture, which I'm not very well versed in, but if you have a basic understanding of CPU architectures and pipelines, and extrapolate that to hundreds of simple execution units, you can get an idea of why the GPU is so intolerant of handling "uncommon cases" (branching, state switching, etc.).

 

The article you refer to touches on many ideas people use to improve the performance of game engines; more specifically, rendering speed.  Suppose you weren't sorting your draw calls and instead drew every item in your game individually, kicking off a draw call for each one.  Since your items aren't sorted, you may have to set up state just to draw a single item.  When you finish with that, the next item could be totally different and require a different set of state to be enabled.  This is very costly.

 

Remember, the GPU wants to be fed HUGE batches of data at once to munch on.  Processing and feeding data item-by-item like that is completely at odds with what the GPU wants.  When you sort the data/state, you can hand it larger batches to draw.  Because you're switching state much less often and giving it more data to process on each draw, rendering performance should go up.
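 

To make the batching idea concrete, here's a minimal sketch (the DrawItem struct, the key layout, and the BindState/Draw calls are all hypothetical; real engines pack their sort keys in whatever way suits their state model):

#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical draw request.  The key packs the expensive state (shader,
// texture) into the high bits so that sorting groups compatible draws.
struct DrawItem
{
    uint64_t sortKey;   // e.g. (shaderId << 48) | (textureId << 32) | depth
    const void* mesh;   // stand-in for whatever payload the renderer needs
};

void SubmitSorted(std::vector<DrawItem>& items)
{
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b)
              {
                  return a.sortKey < b.sortKey;
              });

    uint64_t lastState = ~uint64_t(0);
    for (const DrawItem& item : items)
    {
        const uint64_t state = item.sortKey >> 32;  // shader + texture bits
        if (state != lastState)
        {
            // BindState(state);  // state switch only on a batch boundary
            lastState = state;
        }
        // Draw(item.mesh);
    }
}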

 

Additionally, you may need to sort just to get a correct result.  Transparency requires drawing back to front.  If you violate this, the image simply does not look right.  There are many ways people solve this problem, but they pretty much all revolve around doing some kind of sort so that primitives are fed to the GPU in the right order for alpha blending.
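 

As a sketch of the simplest back-to-front sort (the Transparent struct and its viewDepth field are made up for illustration; note that per-object depth sorting still breaks down for intersecting geometry):

#include <algorithm>
#include <vector>

// Hypothetical transparent renderable with a precomputed view-space depth.
struct Transparent
{
    float viewDepth;  // distance from the camera along the view direction
    int   id;         // stand-in for the actual renderable
};

// Draw the farthest objects first so alpha blending composites correctly.
void SortBackToFront(std::vector<Transparent>& objects)
{
    std::sort(objects.begin(), objects.end(),
              [](const Transparent& a, const Transparent& b)
              {
                  return a.viewDepth > b.viewDepth;  // larger depth first
              });
}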

 

Now, to your main question:

 

Do people use std::map for sorting?  Almost never.  Not if you care about performance.  std::map is incredibly slow.  Its slowness can be attributed to the nature of the data structure: a pointer-based tree.  Almost certainly, the implementation allocates a new node on the heap for every single element, and you have no real guarantee where in the address space each allocation lands.  This leads to poor cache behavior.  Additionally, each node carries extra linkage information just to make the tree work, so it's pretty wasteful of memory as well.

 

If you're only interested in the data and its sorted version, there is no reason to use a map.  Just throw the data into an array and sort the array.  This will be significantly faster.  Your example of 2000 items is simply far too small to show any difference given the resolution of your timer.

 

Here's an example piece of code I wrote to illustrate (requires C++11 support; I used g++ 4.8.0 to compile):

#include <vector>
#include <map>
#include <algorithm>
#include <chrono>
#include <random>
#include <utility>
#include <cstddef>
#include <cstdint>
#include <cstdio>

using namespace std;

// Small helper that records its construction time and reports elapsed
// wall-clock seconds via std::chrono.
class Clock
{
public:
    Clock()
        : m_constructTime(GetTime())
    {
    }

    static double GetTime()
    {
        TPSeconds now(HighResClock::now());

        return now.time_since_epoch().count();
    }

    double GetTimeSinceConstruction() const
    {
        return GetTime() - m_constructTime;
    }

private:
    typedef std::chrono::high_resolution_clock HighResClock;
    typedef std::chrono::duration<double, std::ratio<1>> Seconds;
    typedef std::chrono::time_point<HighResClock, Seconds> TPSeconds;

    double m_constructTime;
};

int main()
{
    const size_t N = 5000000;
    mt19937 rng(0);
    vector<pair<uint32_t, size_t>> randomData;
    randomData.reserve(N);

    // Generate random data.
    {
        Clock clock;

        for (size_t i = 0; i < N; ++i)
        {
            randomData.emplace_back(rng(), i);
        }

        printf("Random data generation took %f seconds!\n", clock.GetTimeSinceConstruction());
    }

    // Insert random data into a map.
    {
        multimap<uint32_t, size_t> data;
        Clock clock;

        for (auto iter = randomData.cbegin(); iter != randomData.cend(); ++iter)
        {
            data.insert(*iter);
        }

        printf("Map sort time: %f seconds.\n", clock.GetTimeSinceConstruction());

        double start = Clock::GetTime();
        int sum = 0;

        for (auto iter = data.cbegin(); iter != data.cend(); ++iter)
        {
            sum += iter->second;
        }

        double end = Clock::GetTime();
        printf("Map iteration time: %f seconds, sum %d.\n", end - start, sum);
    }

    // Sort our random data.
    {
        Clock clock;

        sort(randomData.begin(), randomData.end());
        printf("Vector sort time: %f seconds.\n", clock.GetTimeSinceConstruction());

        double start = Clock::GetTime();
        int sum = 0;

        for (auto iter = randomData.cbegin(); iter != randomData.cend(); ++iter)
        {
            sum += iter->second;
        }

        double end = Clock::GetTime();
        printf("Vector iteration time: %f seconds, sum %d.\n", end - start, sum);
    }

    return 0;
}

 

Compiled with:

 

g++ --std=c++11 sortspeed.cpp -O3 -o sortspeed

 

This code generates 5 million random pieces of data (just some unsigned ints), inserts them into a map, and sorts an array version of the same data.  Each version is timed and the result printed.  The sum computed in the loops is bogus; it's only there to give the iteration something to do and to force the compiler to generate code that actually touches every element in each container.

 

On my Intel Q9550 @ 2.83 GHz, I get the following output:

Random data generation took 0.096909 seconds!
Map sort time: 5.418803 seconds.
Map iteration time: 0.550815 seconds, sum 1642668640.
Vector sort time: 0.563086 seconds.
Vector iteration time: 0.013663 seconds, sum 1642668640.

 

As you can see, the map is significantly slower than the vector, both in sorting speed and in iteration speed, by a factor of 10 or more.  You need to carefully determine the requirements of the algorithm you're trying to execute and use the right data structures to maximize performance.  Toy code though this is, it illustrates that a structure may provide what you need while being far more than the minimum you actually require.  A map might be the better fit if you needed to insert into and delete from the collection many times.

In games, especially for rendering, insertion and deletion are often not required.  Many engines simply keep a memory block for queuing up draw requests, and once the frame is finished, they reset a pointer back to the beginning of the block, effectively erasing it in constant time.  The engine then just requeues everything that needs to be drawn for the next frame.  Because the engine doesn't always know what needs to be drawn at any given time, it's often simpler and faster to rebuild the list of drawables each frame than to apply deltas to it (inserting/removing the elements that become visible or hidden).
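 

Here's a minimal sketch of that constant-time-reset queue (the FrameQueue class and its names are hypothetical; many engines use a raw memory block and a bump pointer instead of a std::vector):

#include <cstddef>
#include <vector>

template <typename T>
class FrameQueue
{
public:
    void Push(const T& item)
    {
        if (m_count < m_storage.size())
            m_storage[m_count] = item;   // reuse a slot kept from last frame
        else
            m_storage.push_back(item);   // grow only on the worst frame so far
        ++m_count;
    }

    // "Erase" everything in constant time; capacity is kept for next frame.
    void Reset() { m_count = 0; }

    const T* Data() const { return m_storage.data(); }
    std::size_t Count() const { return m_count; }

private:
    std::vector<T> m_storage;
    std::size_t m_count = 0;
};

Each frame you'd push the visible drawables, sort them, submit, and then call Reset(); nothing is ever deallocated, so rebuilding the list costs little beyond the pushes themselves.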

 

Edit:  I decided to run some perf tools on the code to provide a little more context.

 

Map only:

Random data generation took 0.097099 seconds!
Map sort time: 5.419488 seconds.
Map iteration time: 0.553151 seconds, sum 1642668640.

 Performance counter stats for './sortspeed':

       6846.409743 task-clock                #    1.000 CPUs utilized          
       263,232,170 cache-references          #   38.448 M/sec                   [33.36%]
        81,311,169 cache-misses              #   30.890 % of all cache refs     [33.40%]
    19,219,819,177 cycles                    #    2.807 GHz                     [33.38%]
     3,390,728,460 instructions              #    0.18  insns per cycle         [50.03%]
       762,888,168 branches                  #  111.429 M/sec                   [49.96%]
        80,913,764 branch-misses             #   10.61% of all branches         [49.97%]

       6.849046381 seconds time elapsed

Vector only:

Random data generation took 0.097330 seconds!
Vector sort time: 0.595129 seconds.
Vector iteration time: 0.013621 seconds, sum 1642668640.

 Performance counter stats for './sortspeed':

        716.372024 task-clock                #    0.997 CPUs utilized          
        22,809,109 cache-references          #   31.840 M/sec                   [33.01%]
           849,197 cache-misses              #    3.723 % of all cache refs     [33.01%]
     1,845,627,135 cycles                    #    2.576 GHz                     [33.88%]
     1,508,337,419 instructions              #    0.82  insns per cycle         [50.53%]
       352,990,250 branches                  #  492.747 M/sec                   [50.24%]
        48,613,614 branch-misses             #   13.77% of all branches         [50.26%]

       0.718218847 seconds time elapsed

Notice the cache miss rate of each: 30.890% (map) vs 3.723% (vector).

 

Edit 2: I decided to run my code with 2000 elements.

Random data generation took 0.000035 seconds!
Map sort time: 0.000456 seconds.
Map iteration time: 0.000042 seconds, sum 1999000.
Vector sort time: 0.000149 seconds.
Vector iteration time: 0.000004 seconds, sum 1999000.

Your timer does not have enough resolution to detect the difference.  Again, 2000 items is tiny for a computer.  You need to be thinking in hundreds of thousands or millions of elements before it really matters (unless your algorithm is very slow...).

