

snowmanZOMG

Member Since 29 Aug 2010

#5126994 A* polishing - optimising implementation for large map

Posted by snowmanZOMG on 28 January 2014 - 11:24 AM

Correct me if I am wrong (I am not a C++ programmer), but I don't think you can use a priority queue because you need to update the values.

 

Sure you can.  The STL heap functions don't provide an update function, but you don't exactly need one in the context of A* if you keep enough extra state.  A properly written heap update runs in O(log n) time, but the only way to perform an update using the STL-provided functions is to modify the contents directly yourself and then call make_heap(), which runs in O(n) time.  This is not desirable.  However, you don't need to do a full heap rebuild; as long as you keep flags to know when a node is opened or closed, you can simply keep pushing updated node costs onto your heap, and every time you pop off the heap you check whether the node you retrieved is still open.  If it isn't, just skip it!

 

This is how I've implemented my A* function.  You may be concerned that you'll end up with multiple instances of a node in the heap with different costs, but this doesn't affect the correctness of the algorithm as long as you have a flag (or some other structure) to determine whether the node is in the open list.  The heap serves only to keep track of the next min-cost node to explore, not to tell you whether a node is open.  And because of the edge relaxation inherent in shortest-path algorithms, if you have multiple copies of a node you will always pop the one with the most current cost estimate first.
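Roughly, the open list handling looks something like this (a simplified sketch, not my actual code; the names and graph representation are made up):

#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Sketch of the "push duplicates, skip stale entries" open list.
typedef std::pair<float, int> OpenEntry; // (f cost, node index)

typedef std::priority_queue<OpenEntry,
                            std::vector<OpenEntry>,
                            std::greater<OpenEntry> > OpenList;

void Search(OpenList& open, std::vector<bool>& closed, std::vector<float>& g)
{
    while (!open.empty())
    {
        OpenEntry entry = open.top();
        open.pop();

        int node = entry.second;
        if (closed[node])
            continue; // Stale duplicate: a cheaper copy was already expanded.

        closed[node] = true;

        // Relax each neighbour here.  Whenever a neighbour's g value improves,
        // just push a new (f, neighbour) entry instead of updating in place:
        //     open.push(std::make_pair(newF, neighbour));
    }
}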

 

Using a good structure to retrieve the min-cost node is critical to good performance when you have a large map to path through, but the initialization time can be just as important.  I haven't looked thoroughly at your code, but I suspect your graph initialization could also be a bottleneck.  You should profile or instrument your code so you know exactly which parts take the most time.  If your main loop takes up a lot of time, chances are you need a faster structure to retrieve your min-cost nodes.  Currently, I'm inclined to believe that most of your performance depends on the open list data structure, simply because you stated earlier that the performance depends heavily on the length of the path.




#5114515 Should I make my game engine cross platform?

Posted by snowmanZOMG on 05 December 2013 - 02:36 AM

  • Have you made a game before?
  • Do you have experience with the operating systems on all the platforms you are thinking about supporting?
  • Are you comfortable with getting into the guts of setting up a proper build system and understand the compilers you'll need to use for each platform?
  • Do you know which language features are supported by the compilers?
  • Are you using libraries which are available on all platforms?

If you've answered no to any of these questions, you almost certainly should not try to make a game that is cross platform.   You'll waste so much time and energy on making something work on all the systems rather than finishing a game on just one of them.




#5111060 Pathfinding Algorithm, grid to regions

Posted by snowmanZOMG on 21 November 2013 - 01:55 PM

You might want to look up rectangulation and rectangular partitioning algorithms.  They have been actively researched in other areas, specifically VLSI and chip manufacturing, so you may need to search the literature with those terms.  This is a very active area of research (basically anything that has to do with computational geometry is).  In the general case, rectangulation can be quite difficult and most of those algorithms are probably not fast enough for your needs; quite frankly, you probably don't need something that general or optimal.  Check out http://www.math.unl.edu/~s-dstolee1/Presentations/Sto08-MinRectPart.pdf

 

I think you'll find that an optimal solution will be quite difficult to achieve.  Without a lot of digging and careful reading of the literature, I suspect this problem is NP-complete.  This isn't exactly the same problem, but it's very similar to yours: http://mathoverflow.net/questions/132603/algorithms-for-covering-a-rectilinear-polygon-using-the-same-multiple-rectangles

 

Am I correct in understanding that your search space is 16 * 16 * 64 = 16384 nodes?  If that's the size of your A* search space, it's large but still manageable without significant optimization.  My game currently has levels with around 10000 nodes if I force a very fine nav mesh resolution, and with ~80 agents I still get ~300 FPS (I can't tell you the A* search time since I haven't instrumented that).  It's not completely smooth, but it definitely isn't as bad as 30 ms, so I don't agree with your assertion that "this is of course to be expected".  My A* implementation isn't hyper-optimized or anything.

 

I think your best strategy right now is to drop the quest for "minimum" or optimality and go for a reasonable reduction in the search space.  Try something dead simple first, like building the largest square/box you can and always growing in the same direction.  Perhaps before even that, make sure your A* search is reasonably performant.  It doesn't sound like you have a very good implementation, but I'd need a lot more context to know.  Are your data structures designed for good cache locality?  Are you using a decent priority queue to maintain your open set?
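As a sketch of that dead-simple greedy idea (the grid representation here is an assumption, not your code): starting from each unassigned walkable cell, grow a rectangle right and down as far as it goes, then mark everything it covers.

#include <vector>

struct Rect { int x, y, w, h; };

std::vector<Rect> GreedyRectangles(const std::vector<bool>& walkable,
                                   int width, int height)
{
    std::vector<bool> used(walkable.size(), false);
    std::vector<Rect> rects;

    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            if (!walkable[y * width + x] || used[y * width + x])
                continue;

            // Grow right while cells are walkable and unused.
            int w = 0;
            while (x + w < width &&
                   walkable[y * width + x + w] && !used[y * width + x + w])
                ++w;

            // Grow down while every cell in the next row segment qualifies.
            int h = 1;
            bool canGrow = true;
            while (y + h < height && canGrow)
            {
                for (int i = 0; i < w; ++i)
                {
                    int idx = (y + h) * width + x + i;
                    if (!walkable[idx] || used[idx])
                    {
                        canGrow = false;
                        break;
                    }
                }
                if (canGrow)
                    ++h;
            }

            // Mark the covered cells and record the rectangle.
            for (int dy = 0; dy < h; ++dy)
                for (int dx = 0; dx < w; ++dx)
                    used[(y + dy) * width + x + dx] = true;

            Rect r = { x, y, w, h };
            rects.push_back(r);
        }
    }

    return rects;
}

It won't be anywhere near minimal, but it's a big reduction over one node per cell and it's trivial to implement.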

 

I've found that in my own A* implementation, the biggest gains came from simply making the initialization of the graph faster.  My first implementation had a terrible Array of Structs format where each node contained all of its A* state information.  Initializing the whole graph required individually setting the f, g, and parent state on each node.  Very bad for memory performance.  I rearranged the data structures into a Struct of Arrays format where the graph contains entire arrays of f, g, and parent values, and now initializing the graph is a few very fast memset() calls on entire blocks of memory.  There are some other clever ways to make the initialization closer to O(1) time, but I didn't want to deal with that.
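Schematically, the change looked something like this (a simplified sketch with made-up field layout, not my actual code):

#include <cstring>

// Array of Structs: resetting the search walks every node and pays for the
// interleaved data it doesn't care about.
struct NodeAoS
{
    float g, f;
    int parent;
    // ... other per-node data interleaved here makes the reset worse.
};

// Struct of Arrays: the whole search state clears with a few memsets.
struct GraphSoA
{
    int nodeCount;
    float* g;       // one entry per node, allocated elsewhere
    float* f;
    int* parent;

    void ResetSearchState()
    {
        // 0x7f7f7f7f interpreted as a float is ~3.4e38, which works as an
        // "infinity" sentinel for costs; 0xffffffff as an int is -1 (no parent).
        std::memset(g, 0x7f, sizeof(float) * nodeCount);
        std::memset(f, 0x7f, sizeof(float) * nodeCount);
        std::memset(parent, 0xff, sizeof(int) * nodeCount);
    }
};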

 

This was a very simple transformation for me to make and yielded enormous performance improvements, so much so that I basically have come to a point where I just don't have to worry about A* performance in my game.




#5110100 Need advice on pathfinding implementation

Posted by snowmanZOMG on 18 November 2013 - 01:06 AM

Sticking to a rectangular grid/mesh can be easy from an initial implementation point of view, but it isn't without its own problems.  You have to consider the needs of your game to reach a good solution.  If your game's actual play space is grid based, then the rectangular grid solution should work quite well, but if you have a full 3D continuous game space, you'll probably find lots of potential problems with using a grid (How do you generate the grid?  What resolution?  How many nodes are necessary to get good coverage/accuracy?).

 

Going with a triangulated mesh isn't a clear win either.  Given a 3D game, you could use some of the level geometry as a starting point for your triangulation, but you have to solve a lot of different problems (many of which don't have clear right answers/solutions) to get something that would be usable for your game.  If you have the expertise and the capability, you can have the mesh and it will give you a lot of nice properties, but getting the mesh in the first place is often very difficult.

 

Check out Recast/Detour https://github.com/memononen/recastnavigation. It's a very nice (so I've heard, never used it myself, but the features look impressive) navigation mesh generation library and if I'm not mistaken, it was used in Killzone: Shadow Fall!  The source is available so you can take a look at some of the techniques used.

 

If you're learning the basics of A*, pathfinding, and path following, I would highly recommend sticking to the rectangular grid as a starting point.  If you do in fact have a 3D game, it probably won't be ideal but it's simple enough that it should get you going and have you focused on getting a complete workable solution for AI agents to reason about your world and path through it.  Once you complete this, you could move on to having some sort of triangulated mesh.  If so, I would recommend reading up on Delaunay triangulations and more specifically, constrained Delaunay triangulations.




#5109774 Need advice on pathfinding implementation

Posted by snowmanZOMG on 16 November 2013 - 02:53 PM

I'm not quite sure what your question is.  If it's about getting pathfinding to work in 3D games, your rectangular grid should be workable, as most pathing cases (even in 3D games) tend to have a topology that maps down easily to 2D space.

 

What are you trying to consider as a part of your A* search and what is necessary for your game?  You don't always need to have full 3D world knowledge represented in your search space.  Most of the time, the 3D information involved is just portions of the walkable geometry so the game can map from the playable 3D space down to a representative graph for the pathing problem that is being solved, which often ignores most of the other 3D aspects of the game.




#5102192 Pro and Cons to building a game engine

Posted by snowmanZOMG on 17 October 2013 - 11:48 AM

Part of the problem here is the way you're using the vocabulary to describe different things.

 

It is true that you need to write some piece of code to run the logic that actually executes on a computer and, as a whole, constitutes some sort of interactive game.  This code is often referred to as the "engine", but it's really pretty broad and somewhat difficult to come up with a concise definition of what an engine really is.  However, when someone talks about "engine programming", they typically mean working on the core infrastructure that the entire game is built on top of.  Core engine systems typically don't require huge changes from game to game if an engine is reused.  But the engine isn't particularly useful to developers if it isn't tailored toward some design goals.

 

There's a reason why the vast majority of games developed with the Unreal Engine tend to be shooters; that engine was designed to handle those kinds of cases.  If you're talking about engine programming in the sense of creating and working on core infrastructure, you could work on that forever.  There's always something you can improve.  But you won't ever finish a game.  You'll just end up having some piece of code that can draw and update stuff but there probably won't be anything fun to do.

 

But if you're going to aim to finish a game, you'll still have to do some of the same things an engine programmer would do, since you need that code.  The subtle difference is that you're only going to work on the infrastructure you need, up to the point that it satisfies the needs of your game.  Once you're there, you should stop working on it and work on making the actual gameplay interesting and fun.

 

In summary, the typical image of an engine programmer is someone who works solely on core software systems that let a game be possible to be built/run.  They might work on graphics, physics, audio, asset loading/management, memory systems, debugging tools, gameplay scripting systems.  But if you took all these systems together and ran that software, you still won't have a game.  And each of those systems could be worked on and improved indefinitely.  All games need some subset of these things, there's no question.  The point is, if you don't have a full blown team behind you where you can dedicate an entire person to each of these systems, you need to prioritize your efforts.

 

This is why large companies like Valve, Epic, Crytek, and Unity are even able to sell their software (engine) at all.  The engine isn't a game; it's just a piece of software that can simulate an interactive system.  The actual game part people write themselves within whatever framework the engine provides.  Most small teams aren't able to invest the time and energy to build a piece of software that modular.  You might be writing some code right now that could be reusable in another project, but that's very different from the kind of modularity and reusability offered by engines like Unreal.




#5098777 Pro and Cons to building a game engine

Posted by snowmanZOMG on 04 October 2013 - 11:23 AM

Biggest pro:

 

You get to build everything yourself.

 

Biggest con:

 

You get to build everything yourself.

 

It's staggering how much work you can create for yourself when you try to do everything.  You get absolute control, but you pay for it by having to do it all yourself.  If you want to actually finish games, you would be very wise to aggressively seek out libraries to handle the crap that's just not that interesting in terms of game development (namely, the things EVERYBODY has to implement in one way or another).  Focus on what makes your game unique; that's where you need to spend your time, energy, and investment to maximize the strengths of your game.

 

My question is: if building an engine takes too long, eventually when you need to use it, you can start popping out games using that engine when it is done because of the reusable code in the engine, no?

 

This is very dangerous, in my opinion.  The amount of work you can put into just building a game engine is literally unending.  It is a software project unto itself that can be worked on indefinitely in the absence of any actual game.  Aim to build a game, not an engine.  For God's sake, build a game, especially if you're just starting out.  You need a lot of experience building games before writing software that's suitable for reuse across multiple games is even remotely feasible.




#5057628 Draw Call Sorting Using std::map

Posted by snowmanZOMG on 28 April 2013 - 08:44 PM

Sorting data is one of the biggest ways to gain performance.  This is due to a number of effects, the chief ones being caching and driver or state-switching overhead.  When you sort data, you're more likely to hit the instruction cache, since you're probably going to be doing similar operations for a large group of data at once.  You may get data cache hit rate improvements as well.

 

With respect to graphics in particular, the GPU is very intolerant of state switching.  Its entire performance potential is contingent on not having special cases!  Whether that means branching or switching off entire buffers/textures, it doesn't really matter; they all have pretty significant costs on execution speed.  The reasons lie in GPU architecture, which I'm not very well versed in, but if you have a basic understanding of CPU architectures and pipelines and extrapolate that to hundreds of simple execution units, you can get an idea of why the GPU is so intolerant of handling "uncommon cases" (branching, state switching, etc.).

 

The article you refer to touches upon many different ideas used by people to improve the performance of game engines; more specifically, rendering speed.  Suppose you weren't sorting your draw calls and instead you just drew every item in your game individually and kicked off a draw call for each item.  Since your items may not be sorted, you may have to set up state just to draw the item.  When you finish with that, you end up going to the next item which could be totally different and requires a different set of state to be enabled.  This is very costly.

 

Remember, the GPU wants to be fed HUGE batches of data at once for it to munch on.  It's completely against what the GPU wants for you to process data and feed it in this manner.  When you sort the data/state, you can give it larger batches for it to draw.  Because you're switching state much less often and you're giving it larger amounts of data to process on each draw, it should lead to higher performance during rendering.
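Concretely, "sorting the data/state" often boils down to something like this (a rough sketch; the key layout and names are just an example, not from the article):

#include <algorithm>
#include <cstdint>
#include <vector>

// Each queued draw carries a packed key so that draws sharing shader,
// material, and mesh state end up adjacent after sorting.
struct DrawCall
{
    uint64_t sortKey;
    // ... whatever the renderer needs to actually issue the draw
};

inline uint64_t MakeSortKey(uint16_t shader, uint16_t material, uint32_t mesh)
{
    return (uint64_t(shader) << 48) | (uint64_t(material) << 32) | mesh;
}

void FlushDrawCalls(std::vector<DrawCall>& queue)
{
    std::sort(queue.begin(), queue.end(),
              [](const DrawCall& a, const DrawCall& b)
              { return a.sortKey < b.sortKey; });

    // Walk the sorted queue; only touch GPU state when the key changes,
    // so runs of identical state become one big batch.
}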

 

Additionally, it may be required for you to sort to even get a correct result.  Transparency requires you to draw back to front.  If you violate this, you get an image that does not look correct.  There are many ways people solve this problem, but they pretty much all revolve around doing some kind of sort to get the primitives fed to the GPU in the right order for alpha blending.

 

Now, to your main question:

 

Do people use std::map for sorting?  Almost never.  Not if you care about performance.  std::map is incredibly slow.  Its slowness can be attributed to the nature of the data structure: a pointer-based tree.  Almost certainly, the STL is allocating a new node on the heap for every single element.  You have no real guarantees about where that allocation lands in the memory space, which leads to poor cache behavior.  Additionally, each node contains a lot of extra linkage information just to make the tree work, so it's pretty bad in memory use as well.

 

If you're only interested in the data and its sorted version, there is no reason to use the map.  Just throw the data into an array and then sort the array.  This will be significantly faster.  Your example of 2000 items is just far too small to show any difference given the resolution of your timer.

 

Here's an example piece of code I wrote to illustrate (requires C++11 support, I use g++ 4.8.0 to compile):

#include <vector>
#include <map>
#include <algorithm>
#include <chrono>
#include <random>
#include <utility>
#include <cstddef>
#include <cstdint>
#include <cstdio>

using namespace std;

class Clock
{
public:
    Clock()
        : m_constructTime(GetTime())
    {
    }

    static double GetTime()
    {
        TPSeconds now(HighResClock::now());

        return now.time_since_epoch().count();
    }

    double GetTimeSinceConstruction() const
    {
        return GetTime() - m_constructTime;
    }

private:
    typedef std::chrono::high_resolution_clock HighResClock;
    typedef std::chrono::duration<double, std::ratio<1>> Seconds;
    typedef std::chrono::time_point<HighResClock, Seconds> TPSeconds;

    double m_constructTime;
};

int main()
{
    const size_t N = 5000000;
    mt19937 rng(0);
    vector<pair<uint32_t, size_t>> randomData;
    randomData.reserve(N);

    // Generate random data.
    {
        Clock clock;

        for (size_t i = 0; i < N; ++i)
        {
            randomData.emplace_back(rng(), i);
        }

        printf("Random data generation took %f seconds!\n", clock.GetTimeSinceConstruction());
    }

    // Insert random data into a map.
    {
        multimap<uint32_t, size_t> data;
        Clock clock;

        for (auto iter = randomData.cbegin(); iter != randomData.cend(); ++iter)
        {
            data.insert(*iter);
        }

        printf("Map sort time: %f seconds.\n", clock.GetTimeSinceConstruction());

        double start = Clock::GetTime();
        int sum = 0;

        for (auto iter = data.cbegin(); iter != data.cend(); ++iter)
        {
            sum += iter->second;
        }

        double end = Clock::GetTime();
        printf("Map iteration time: %f seconds, sum %d.\n", end - start, sum);
    }

    // Sort our random data.
    {
        Clock clock;

        sort(randomData.begin(), randomData.end());
        printf("Vector sort time: %f seconds.\n", clock.GetTimeSinceConstruction());

        double start = Clock::GetTime();
        int sum = 0;

        for (auto iter = randomData.cbegin(); iter != randomData.cend(); ++iter)
        {
            sum += iter->second;
        }

        double end = Clock::GetTime();
        printf("Vector iteration time: %f seconds, sum %d.\n", end - start, sum);
    }

    return 0;
}

 

Compiled with:

 

g++ --std=c++11 sortspeed.cpp -O3 -o sortspeed

 

This code generates 5 million random pieces of data (just some unsigned ints) and then inserts them into a map and sorts an array version of the data.  Time is measured for each version and printed out.  The sum performed in the code is bogus.  It's just there to give the iteration something to look at and forces the compiler to actually generate code that touches each element in the containers.

 

On my Intel Q9550 @ 2.83 GHz, I get the following output:

Random data generation took 0.096909 seconds!
Map sort time: 5.418803 seconds.
Map iteration time: 0.550815 seconds, sum 1642668640.
Vector sort time: 0.563086 seconds.
Vector iteration time: 0.013663 seconds, sum 1642668640.

 

As you can see, the map is significantly slower than the vector, both in sorting speed and iteration speed, by a factor of 10 or more.  You need to carefully determine the requirements of the algorithm you're trying to execute and use the correct data structures to maximize performance.  Although this is a toy piece of code, it illustrates that certain structures may provide what you need while being more than the minimum requirement.

The map might be more suitable if you needed to insert into and delete from the list many times.  In games, especially for rendering, insertion and deletion are often not required.  Many engines simply keep a memory block for queuing up draw requests, and once the frame is finished they reset a pointer back to the beginning of the list to effectively erase it in constant time.  Then the engine just re-queues all the elements that need to be drawn for the next frame.  Because the engine doesn't always know what needs to be drawn at any given time, it's often simpler and faster to just rebuild the list of drawables instead of trying to apply deltas to it (inserting/removing the elements that become visible or hidden).
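That constant-time reset looks roughly like this (a sketch; the names and layout are made up):

#include <cstddef>
#include <vector>

struct DrawCall { unsigned int sortKey; /* ... */ };

struct FrameDrawQueue
{
    std::vector<DrawCall> calls;   // reserve() once with a generous capacity
    std::size_t count;

    FrameDrawQueue() : count(0) {}

    void BeginFrame() { count = 0; }   // "erases" the previous frame in O(1)

    void Push(const DrawCall& dc)
    {
        if (count < calls.size())
            calls[count] = dc;         // reuse existing storage
        else
            calls.push_back(dc);       // grow only when needed
        ++count;
    }
};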

 

Edit:  I decided to run some perf tools on the code to provide a little more context.

 

Map only:

Random data generation took 0.097099 seconds!
Map sort time: 5.419488 seconds.
Map iteration time: 0.553151 seconds, sum 1642668640.

 Performance counter stats for './sortspeed':

       6846.409743 task-clock                #    1.000 CPUs utilized          
       263,232,170 cache-references          #   38.448 M/sec                   [33.36%]
        81,311,169 cache-misses              #   30.890 % of all cache refs     [33.40%]
    19,219,819,177 cycles                    #    2.807 GHz                     [33.38%]
     3,390,728,460 instructions              #    0.18  insns per cycle         [50.03%]
       762,888,168 branches                  #  111.429 M/sec                   [49.96%]
        80,913,764 branch-misses             #   10.61% of all branches         [49.97%]

       6.849046381 seconds time elapsed

Vector only:

Random data generation took 0.097330 seconds!
Vector sort time: 0.595129 seconds.
Vector iteration time: 0.013621 seconds, sum 1642668640.

 Performance counter stats for './sortspeed':

        716.372024 task-clock                #    0.997 CPUs utilized          
        22,809,109 cache-references          #   31.840 M/sec                   [33.01%]
           849,197 cache-misses              #    3.723 % of all cache refs     [33.01%]
     1,845,627,135 cycles                    #    2.576 GHz                     [33.88%]
     1,508,337,419 instructions              #    0.82  insns per cycle         [50.53%]
       352,990,250 branches                  #  492.747 M/sec                   [50.24%]
        48,613,614 branch-misses             #   13.77% of all branches         [50.26%]

       0.718218847 seconds time elapsed

Notice the cache miss rate on each: 30.89 % (map) vs 3.723 % (vector).

 

Edit 2: I decided to run my code with 2000 elements.

Random data generation took 0.000035 seconds!
Map sort time: 0.000456 seconds.
Map iteration time: 0.000042 seconds, sum 1999000.
Vector sort time: 0.000149 seconds.
Vector iteration time: 0.000004 seconds, sum 1999000.

Your timer does not have enough resolution to detect the difference.  Again, 2000 items is very small for a computer.  You need to be thinking about hundreds of thousands or millions of elements before it really matters (unless your algorithm is very slow...).




#5047891 How to update a lot of random tiles in a 2D array?

Posted by snowmanZOMG on 28 March 2013 - 11:55 PM

You haven't provided nearly enough detail about your problem to really pin down the cause.  Based on what I've read, you may have one or more of the following problems (among others not listed here):

  1. Bad algorithm.
  2. Poor memory access patterns.
  3. Bad memory allocation and usage causing excess garbage collection pressure.

Updating only the tiles within your "update radius" should be very fast unless your radius is so large that it contains the entire map.  Even then, you'd have to have a colossal map for it to take enough time to cause a really long pause (~1 second or more).  How are you determining which tiles are in your "update radius"?
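For example, a sketch of an update-radius check (written in C++ here for illustration, though your code is presumably C#; the names are made up): clamp a bounding box to the grid, then reject the corners with a distance test.

#include <algorithm>

void UpdateTilesInRadius(int centerX, int centerY, int radius,
                         int gridWidth, int gridHeight)
{
    int minX = std::max(centerX - radius, 0);
    int maxX = std::min(centerX + radius, gridWidth - 1);
    int minY = std::max(centerY - radius, 0);
    int maxY = std::min(centerY + radius, gridHeight - 1);

    for (int y = minY; y <= maxY; ++y)
    {
        for (int x = minX; x <= maxX; ++x)
        {
            int dx = x - centerX;
            int dy = y - centerY;
            if (dx * dx + dy * dy > radius * radius)
                continue; // inside the bounding box but outside the circle

            // UpdateTile(x, y); // mutate the existing tile in place
        }
    }
}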

 

Maintaining homogeneous lists of each tile type could be advantageous, but you pay for it with memory.  Suppose you put each type of tile in its own list and each list element points directly to the actual tile in your grid.  This gives you the advantage of not having to search the entire grid for all elements of a certain type.  But you're paying for that speed advantage (not having to search) by spending memory to keep that information precomputed.  I would say it's a worthy trade as long as your game performs many operations/queries on subsets of your data that match your tile types.  If you have operations that require iterating over the entire 2D grid anyway, this may not be worth doing, since you could perform the operation at each tile as you visit it.

 

Creating new tiles entirely for the update is probably a bad idea.  C# is garbage collected.  If you create tons of new tiles every frame and remove references to the old tiles, you could be creating enormous pressure on the garbage collector.  You should avoid orphaning data at all costs and just mutate existing memory if you want to avoid garbage collection hiccups.

 

Your description of "update" is woefully vague, but I suspect your update is performing something pathologically expensive or your overall algorithm for iteration is just far too excessive for what you want to accomplish.  Even for large 2D grids, you shouldn't have much of a problem updating the entire grid if you've carefully designed your algorithms and data structures.




#5042735 How to analyze run time complexity in code in a trivial way

Posted by snowmanZOMG on 13 March 2013 - 10:37 AM

For a very large number of the algorithms you'll end up writing in everyday work, it's pretty easy.  The analysis tends to be simply counting up the number of times a loop iterates and accounting for nested loops.
 
O(n):

for (int i = 0; i < n; ++i)
{
    ...
}

 
O(n^2):

for (int i = 0; i < n; ++i)
{
    for (int j = 0; j < n; ++j)
    {
        ...
    }
}

 
The same method extends to any function calls made within the loop bodies.  Say, for instance:

for (int i = 0; i < n; ++i)
{
    for (int j = 0; j < n; ++j)
    {
        f(j, n); // f() is O(n).
    }
}

The function call is basically another nested loop, hence O(n^3).

But this kind of analysis really only works for very simple algorithms that do simple iteration. Once you get into recursion, you need to start doing some more math to be able to come to asymptotically tight analyses. For some illustrative examples, take a look at some of the material here: http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/

 

In particular:

 

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/99-recurrences.pdf

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/98-induction.pdf

 

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/00-intro.pdf

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/05-dynprog.pdf

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/06-sparsedynprog.pdf

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/14-amortize.pdf

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/16-unionfind.pdf

http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/27-lowerbounds.pdf

 

The first two links are particularly important/relevant.  The rest sort of just show you the analyses in action.  I would highly recommend knowing how to solve for recurrences.  You cannot hope to analyze anything but the simplest algorithms without knowing how to solve for recurrences.
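As a quick example of the kind of recurrence solving those notes cover, here is the standard mergesort recurrence unrolled (not taken from the links above, just the classic textbook case):

T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     = ...
     = 2^k T(n/2^k) + k*c*n

After k = log2(n) levels the recursion bottoms out at T(1), giving T(n) = n*T(1) + c*n*log2(n), which is O(n log n).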




#5032718 Performance & Struct sizes

Posted by snowmanZOMG on 15 February 2013 - 10:57 AM

* Optimal struct size
On a 32bit CPU, are there advised sizes (32,64, 128, ...)? And, do these double on a 64bit CPU? When dealing with vectors and structs, the size often grows quickly.

 

Optimal struct size depends entirely on the nature of the memory access patterns and the memory system of the machine on which the algorithm is executed.  Your question seems to imply that if you choose a certain struct size, you'll get good performance (or some other nice win by some other metric), when no such thing is guaranteed.  What matters to the CPU is that you're always grabbing memory locations in an order that gives a high likelihood of touching something already in the cache.  A small struct size can make that easier to achieve, but it can be worthless if you grab array[0] and then grab array[1024].

 

 

* Filling records
Lets say an optimal size would be 64 bytes, but I only have 60. Is it wise to add some dummy bytes to fill (not concerned about memory usage)?

I guess this is what Delphi does by default btw, unless you declare a "packed record".

 

Let your compiler handle this.  In fact, I would say don't worry about this at all unless you truly have some time critical piece of code.  The only places I've seen that consistently had gains from something like this were acceleration structures for graphics algorithms where you could literally be doing millions of queries.  I remember stressing over bits to try to put a Kd-tree node into a cache line for a particular system, and in the end, it only netted me 10%-15% performance improvement.  Don't get me wrong, that kind of performance improvement can be huge, but it's one of those last resort optimizations and all you're doing is trying to reduce the size of the constant in your asymptotic analysis.

 

 

* Splitting in multiple records
In case 64bytes would be an optimal size, but I need at least 100 bytes... Would it be smart to divide the data in 2 records, each 64 bytes each?

 

Again, this depends entirely on your memory access pattern and the nature of the machine you're running on.  The only correct answer here is: "Maybe".  If the algorithm touches one set of data much more often than another, you may find gains by splitting the data into hot and cold parts.  You put things you commonly access into the hot data struct and things you access less frequently into the cold data.  The goal is to pack more of the hot data into cache lines so you can take advantage of the caching system.  The rationale for the performance gain is that you access the hot data much more frequently, so a cache miss there is very costly; you're likely to miss when you touch cold data, but since you don't access it very often, it shouldn't have a huge negative impact on performance.  It's using Amdahl's law to arrange memory.
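As a rough illustration of a hot/cold split (the field names here are made up for the example):

#include <vector>

// Fields touched every frame live in a compact "hot" struct so more of them
// fit per cache line; rarely touched fields move to a parallel "cold" array.
struct EntityHot
{
    float x, y, z;      // position, read/written every frame
    float vx, vy, vz;   // velocity, read/written every frame
};

struct EntityCold
{
    char  name[64];     // only needed by tools/UI
    int   spawnPointId; // only needed on respawn
    float totalDamageDealt;
};

struct EntityStore
{
    std::vector<EntityHot>  hot;   // iterated every frame
    std::vector<EntityCold> cold;  // indexed only when actually needed
};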

 

 

* Dunno how caching works exactly, but I guess these rules apply especially for looping through arrays of structs. But does it also apply when passing just a single struct to a function?

 

The memory hierarchy of a modern computer system is both simple and very complicated.  It is simple in the high-level concept, but very complicated in the nitty-gritty details.  I would highly recommend reading up on computer architecture to get a better idea of how the hardware works in general.  You can find a great amount of detail on memory and caches in http://www.akkadia.org/drepper/cpumemory.pdf.  But to answer your question: caching always applies.  Every single data access you do will be affected by the cache hierarchy on modern CPUs (since just about every CPU now has a cache).  You have your instruction cache, your data cache, and nowadays multiple levels of data cache (sometimes shared between cores, sometimes not).  The CPU works pretty much exclusively with registers and the cache, not with main memory.

 

Whether or not these things matter to you in your particular situation is another question.  I'm of the opinion that performance always matters, but you only have so much time in your life to implement your algorithms.  The real issue is which parts of your program have performance implications that actually matter to your end user.  Again, Amdahl's law.  Use it, use it everywhere.




#5016258 Relation between memory used and frames per second

Posted by snowmanZOMG on 31 December 2012 - 09:53 PM

You really should strive to have no memory leaks though.  Even though desktop platforms can be forgiving, it can be a huge nuisance to end users if you leak memory since it will cause the whole system to drag even though it's just your application that's leaking.  Mozilla has been fighting this exact fight for years now, and only recently have they made significant improvements to the way Firefox reclaims memory, specifically from addons, which are a prime source of memory leaks.

 

There are some good points made in this GDC 2008 presentation by Elan Ruskin about a bunch of things related to game development: http://www.valvesoftware.com/publications/2008/GDC2008_CrossPlatformDevelopment.pdf.  I would highly recommend taking a look at slide 25 and onwards for some treatment of memory in games.  A lot of those things are really only necessary for large studios or games that really push limits of systems, but there are a lot of useful tips in there, such as knowing where all the memory is going.




#5016052 Relation between memory used and frames per second

Posted by snowmanZOMG on 31 December 2012 - 07:45 AM

It depends on the platform.  On console platforms, it's bad.  You almost cannot leak any memory at all, because consoles don't have the sophisticated virtual memory system that modern desktop PCs do.  When you run out of memory on a console, you just crash and burn.  But it's unlikely you're working on a console.

 

On desktop PCs, because of their fancy virtual memory systems, memory leaks are still bad, but they're not nearly as catastrophic as on consoles.  The amount of "fast" working memory the operating system can hand out to running processes will slowly decrease, and it will start to page processes out to disk, which is extremely slow.  I like to think of memory use on desktop platforms as mostly about being what I like to call "a good software citizen".  Use your share of memory; don't be greedy, and give back what you don't need.  If you do end up using too much, it's usually not the end of the world, but everyone is going to hate you, especially the user of the computer.




#5013657 Relation between memory used and frames per second

Posted by snowmanZOMG on 23 December 2012 - 07:40 AM

You should definitely profile.  Optimizing without a profiler is just hopeless and foolish for all but the simplest of programs.  What profiler to use depends on your system.  I haven't used Visual Studio in quite some time, but if you're using that then you may be able to use the Performance Analyzer.

 

Another thing you could do is add timers to your code: enclose pieces of code you want to time so you can figure out how long that portion took.  Be sure to also note the total frame time, so you can see how much time that portion takes in relation to the entire frame.
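A simple way to do this is a scope-based timer; here's a rough sketch (requires C++11, and the names are made up):

#include <chrono>
#include <cstdio>

// RAII timer that prints how long its enclosing scope took.
class ScopedTimer
{
public:
    explicit ScopedTimer(const char* label)
        : m_label(label), m_start(std::chrono::high_resolution_clock::now())
    {
    }

    ~ScopedTimer()
    {
        auto end = std::chrono::high_resolution_clock::now();
        double seconds = std::chrono::duration<double>(end - m_start).count();
        std::printf("%s took %f seconds\n", m_label, seconds);
    }

private:
    const char* m_label;
    std::chrono::high_resolution_clock::time_point m_start;
};

// Usage:
// {
//     ScopedTimer timer("Physics update");
//     UpdatePhysics(dt);
// }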

 

If you want a poor man's sampling profiler:

 

http://stackoverflow.com/questions/375913/what-can-i-use-to-profile-c-code-in-linux/378024#378024

 

The profiler will tell you what portion of the code to focus your attention on, but it doesn't really tell you anything about why something is slow.  You need to at least have an understanding of algorithms and computer architecture (probably also a little dash of operating systems) to be able to know what that "why" is.  Typically, people inspect the algorithm first, since that usually yields the largest gains for relatively minimal effort.  A terrible algorithm replaced with a good one can yield huge gains, especially as problem sizes increase.  But once you're at a fast algorithm, you may be stuck at a wall that's limited by your particular implementation.  This is where computer architecture knowledge often becomes useful.  People then proceed to optimize out slow instruction sequences with fast ones and also rearrange data to allow for faster access.  Sometimes, people flat out "cheat" because they know something specific about the problem and can precompute things and start some computation further along because of those precomputed results.

 

The top answer to this question gives a pretty good account of how optimizations usually go: http://stackoverflow.com/questions/926266/performance-optimization-strategies-of-last-resort




#5013619 How do you make an AI follow an A* path in a 2D platformer game

Posted by snowmanZOMG on 23 December 2012 - 03:08 AM

IADaveMark basically just told you the answer you're looking for.

 

You have to remember that A* is just a search algorithm that goes through different states, pruning out the states that aren't desirable, so you can find the path you're looking for.  It doesn't care about the exact details of how you get to each location in the state graph.  It just matters that you can get from one state to another in some way that reflects the cost of that transition.

 

If you properly set up the graph, the graph itself retains the information about where you can jump to or drop to.  The A* search simply tells you to get from A to B to F to G, where there might be a drop from B to F.  It is up to your agent to realize that it needs to drop down from B to F when it reaches B.
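One way to encode that is to tag each edge with how it has to be traversed; a rough sketch (the names are made up):

// Annotate graph edges with their traversal type so the agent knows how to
// follow each leg of the path returned by A*.
enum TraversalType
{
    Traversal_Walk,
    Traversal_Jump,
    Traversal_Drop
};

struct Edge
{
    int           target;    // node index this edge leads to
    float         cost;      // what A* actually uses
    TraversalType traversal; // what the agent uses when following the path
};

// When the agent reaches the node at the start of an edge, it switches
// movement based on edge.traversal: walk toward the target, trigger a jump,
// or simply step off the ledge for a drop.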





