Mystery of the Game Loop: How high can the 'n' in Big O go?

17 comments, last by wodinoneeye 10 years, 4 months ago

I'm processing them.

All I have been trying to do from the very beginning is link the relationship between complexity and nested loops, that's all there is to it. If this relationship doesn't exist, then I really don't understand how I passed my exams in my Algorithms courses. I'm not happy with the way I am.


All I have been trying to do from the very beginning is link the relationship between complexity and nested loops, that's all there is to it.


The first post answers it:

"n" might not be the same in every loop (and in general it isn't).


for( int a = 0; ++a != n; )
    for( int b = 0; ++b != n; )
        for( int c = 0; ++c != n; )
            for( int d = 0; ++d != n; )
                ; // This would be O(n^4)
for( int a = 0; ++a != w; )
    for( int b = 0; ++b != x; )
        for( int c = 0; ++c != y; )
            for( int d = 0; ++d != z; )
                ; // This is NOT O(n^4), because w, x, y and z are independent bounds (it's O(w*x*y*z))
"I would try to find halo source code by bungie best fps engine ever created, u see why call of duty loses speed due to its detail." -- GettingNifty

You have to look at how many times each of those loops runs, using variables when a count depends on n, maybe adding a few more variables if it depends on a few other things, and multiply it all together. Then see whether you can substitute those other variables, like the w, x, y, z above, with an expression that depends on n, or whether they are just constants. Then the ivory-tower computer scientists discard most of that information, keep only the parts that depend on n, and call it O notation. That's the fundamentals of it.
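
As a rough illustration (a sketch of my own, not taken from the posts above), you can literally count the iterations and then do the substitution. Here I assume one bound happens to work out to n/2 and the inner bound is a small constant k:

#include <cstdio>

int main()
{
    const int n = 1000;
    const int k = 8;                     // constant, does not depend on n
    long long iterations = 0;

    for (int a = 0; a < n / 2; ++a)      // "w" substituted by n/2
        for (int b = 0; b < k; ++b)      // constant inner bound
            ++iterations;

    // iterations == (n/2) * k == 4n here; dropping the constant factor leaves O(n).
    std::printf("%lld iterations for n = %d\n", iterations, n);
    return 0;
}
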

They also tend to count only comparisons and/or exchanges for sorting algorithms and neglect everything else to ease their calculations.
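
For example (again a minimal sketch of my own, and the function name is mine, not something from the thread), you can instrument a simple insertion sort so it counts only those two kinds of operations:

#include <vector>
#include <cstdio>

// Insertion sort that counts only comparisons and element moves,
// the way textbook analyses usually do.
static void insertion_sort_counted(std::vector<int>& v,
                                   long long& comparisons,
                                   long long& moves)
{
    for (size_t i = 1; i < v.size(); ++i)
    {
        int key = v[i];
        size_t j = i;
        while (j > 0 && (++comparisons, v[j - 1] > key))
        {
            v[j] = v[j - 1];   // shift the larger element one slot right
            ++moves;
            --j;
        }
        v[j] = key;
    }
}

int main()
{
    std::vector<int> v = { 5, 2, 4, 6, 1, 3 };
    long long comparisons = 0, moves = 0;
    insertion_sort_counted(v, comparisons, moves);
    std::printf("comparisons = %lld, moves = %lld\n", comparisons, moves);
    return 0;
}
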

The problem is that counting operations this way may have been a good enough measure of performance 40 years ago, when a memory read always took one cycle and a comparison plus branch always took the same number of cycles. But all that discarded information is getting more valuable in real life every year, because of branch prediction (and mis-prediction, with long pipelines to refill), practically free CPU cycles, and the less and less predictable cost of memory access with many levels of caching.
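
A classic illustration of the caching point (my own example, not from the thread): both loops below sum the same matrix and both are O(n^2), but on most hardware the second one is noticeably slower because it strides through memory and misses the cache far more often:

#include <vector>
#include <cstdio>

int main()
{
    const int n = 2000;
    std::vector<int> m(static_cast<size_t>(n) * n, 1);   // n x n matrix, stored row-major
    long long sum = 0;

    // Row-major traversal: walks memory sequentially, cache-friendly.
    for (int row = 0; row < n; ++row)
        for (int col = 0; col < n; ++col)
            sum += m[static_cast<size_t>(row) * n + col];

    // Column-major traversal: same O(n^2), but jumps n elements every step.
    for (int col = 0; col < n; ++col)
        for (int row = 0; row < n; ++row)
            sum += m[static_cast<size_t>(row) * n + col];

    std::printf("sum = %lld\n", sum);
    return 0;
}
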

There is also the problem of counting average case, when worst case can be much longer and/or more relevant.

It feels a bit like the popularity of quick sort, in spite of its worst-case behaviour, is influenced in some way by its name and not only by its properties (someone needs any sorting algorithm that runs fast, looks at a book or types "quick sort algorithm" into a search engine, and only ever finds quick sort). I also find it a bit weird that people often say bubble sort is good if you need something simple to program quickly, when there is, for example, insertion sort.

There are many other sorting algorithms and for different uses one may be happier with radix sort, merge sort, bucket sort, heap sort, ...

I wouldn't put O-notation purely into the realm of the ivory tower. You can get some useful knowledge from it. And if nothing else it really simplifies communication with your colleagues (like "I found out why our startup was so slow, the data loading happened in O(n²) when it could have been done in O(n)").
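
To make that concrete, here is a hypothetical sketch of my own (Asset, find_linear and build_index are made-up names, not the actual startup code being described): the classic way loading ends up O(n²) is a linear search inside a loop, and building an index first brings it back to roughly O(n):

#include <string>
#include <vector>
#include <unordered_map>

struct Asset { std::string name; /* ... */ };

// O(n^2) overall: for each of n records, scan the whole asset list to find a match.
const Asset* find_linear(const std::vector<Asset>& assets, const std::string& name)
{
    for (const Asset& a : assets)
        if (a.name == name)
            return &a;
    return nullptr;
}

// Roughly O(n) overall: build an index once, then each lookup is about O(1).
std::unordered_map<std::string, const Asset*>
build_index(const std::vector<Asset>& assets)
{
    std::unordered_map<std::string, const Asset*> index;
    for (const Asset& a : assets)
        index[a.name] = &a;
    return index;
}

int main()
{
    std::vector<Asset> assets = { {"tree"}, {"rock"}, {"house"} };

    const Asset* slow = find_linear(assets, "rock");   // one O(n) scan per lookup

    auto index = build_index(assets);                  // O(n) once
    auto it = index.find("rock");                      // ~O(1) per lookup afterwards
    const Asset* fast = (it != index.end()) ? it->second : nullptr;

    return (slow && fast) ? 0 : 1;
}
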

Of course, if you take it as the measure of be-all-end-all, then it will fail you. But then, most tools and concepts will fail you when you apply them without thinking.

I'll just go ahead and point this out now: there is no algorithmic way to profile a game engine. Game engines are trees of function calls doing greatly varying amounts of work; in fact, the calls themselves can vary between iterations depending on the kind of work they are doing. I'm not even really sure what you're trying to accomplish by attaching any sort of algorithmic number to them.

In fact, when profiling games, the most direct way to measure performance is to check the time spent in each function as a fraction of the total. Big-O notation is only useful for comparing amounts of work in the abstract; it doesn't say, or even imply, anything about what happens inside a complex computation.
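
Even a crude timer gives you that per-function share when a real profiler isn't handy. A minimal sketch, assuming a hypothetical update_physics() is the function you care about:

#include <chrono>
#include <cstdio>

// Hypothetical stand-in for a real engine function.
static void update_physics() { /* ... */ }

int main()
{
    using clock = std::chrono::steady_clock;

    auto frame_start = clock::now();

    auto t0 = clock::now();
    update_physics();
    auto t1 = clock::now();

    // ... rest of the frame ...

    auto frame_end = clock::now();

    auto physics_us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    auto frame_us   = std::chrono::duration_cast<std::chrono::microseconds>(frame_end - frame_start).count();

    std::printf("physics: %lld us (%.1f%% of frame)\n",
                static_cast<long long>(physics_us),
                frame_us ? 100.0 * physics_us / frame_us : 0.0);
    return 0;
}
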

So basically it's not that algorithms are bad or that you don't understand them; it's that you're trying to apply them to an example they don't apply to.

I don't think the analysis of the asymptotic behavior of an algorithm as a number grows is useless. For instance, if you are writing a game engine, or a library to do a specific task for games (physics, pathfinding...), you probably want to know how well it scales as the number of objects or the size of the scene grows. But if you are working on a specific game, I tend to agree with Satharis: Listen to your profiler.



for( int a = 0; ++a != n; )
    for( int b = 0; ++b != n; )
        for( int c = 0; ++c != n; )
            for( int d = 0; ++d != n; )
                ; // This would be O(n^4)

for( int a = 0; ++a != w; )
    for( int b = 0; ++b != x; )
        for( int c = 0; ++c != y; )
            for( int d = 0; ++d != z; )
                ; // This is NOT O(n^4), because w, x, y and z are independent bounds (it's O(w*x*y*z))

This helped me immensely. And it's sad, because I can only understand code and not texts and paragraphs.

Big-O

I saw that. It's an introduction to algorithms and Big-O.

My problem is about how code and Big-O relate, which was answered by TheComet.

The complexity is actually either O(n^3) or, more likely, O(n^2).

Since that outer loop is infinite (it just keeps going until you escape out and the program ends??), it doesn't count.

It isn't specified how 'unprocessed time' works. If it's an interval limiter for some periodic processing (whose processing isn't shown here), then this appears to be a timing loop, so it also shouldn't count towards the processing complexity (which is usually based on the processing done per 'turn' of the game).

Processing EVERY unit per tick (which is some turn interval?), where each one is compared with every other unit (testing the distance for some interaction...), that part is O(n^2) for your brute-force method.

((Then plus whatever else gets done with that distance... which again is not specified.))
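
For reference, the brute-force pattern being described would look something like this (my own sketch, assuming the same totalUnits and calculate_distance names as in the snippet further down; interactionRange is mine too, not the original poster's code):

// Brute force: every unit checks its distance to every other unit, every tick.
for (int i = 0; i < totalUnits; ++i)
{
    for (int j = 0; j < totalUnits; ++j)
    {
        if (i == j)
            continue;                           // skip comparing a unit with itself
        float d = calculate_distance(i, j);     // n*(n-1) distance calculations per tick
        if (d < interactionRange)
        {
            // ... handle the interaction ...
        }
    }
}
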

---

A simple optimization:

I've sometimes used half-table caching where the distance calculation is symmetrical: I find the distance once per pair of units and fill in both entries of the table (Distance[i][j] = Distance[j][i] = calculated distance), using the mutated loops below so that only about n*n/2 distance calculations are done (it also skips calculating a unit's distance from itself, which is always 0).


for (int i = 0; i < totalUnits; ++i)
{
    Distance[i][i] = 0;                         // note: a unit is always at distance 0 from itself
    for (int j = i + 1; j < totalUnits; ++j)    // note: starts just past where i is
    {
        float d = calculate_distance(i, j);     // whatever the distance function is
        Distance[i][j] = Distance[j][i] = d;    // fill both symmetric entries
    }  // end j loop
}  // end i loop


    0   1   2   3   4
0   0   *   *   *   *
1   .   0   *   *   *   these are traversed and calculated
2   .   .   0   *   *
3   .   .   .   0   *
4   .   .   .   .   0
    these are filled with the symmetrical data

So effectively you calculate only about half of what the simple looping does...

I usually use that when the majority of interactions come to nothing (but still need testing for) and the distance calculation winds up being the majority of the processing. (I'll use some space partitioning if the unit count goes high enough to warrant implementing it.)
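
The simplest space partitioning for this kind of thing is a uniform grid: bucket the units by cell, then only test a unit against units in its own and neighbouring cells. A rough sketch of my own (the Unit struct, the cell size and the key packing are all assumptions, not the poster's code):

#include <cmath>
#include <unordered_map>
#include <vector>

struct Unit { float x, y; };   // hypothetical unit with a 2D position

int main()
{
    const float cellSize = 10.0f;   // should be at least the interaction radius
    std::vector<Unit> units = { {1, 1}, {2, 3}, {55, 60}, {56, 61} };

    // Map a coordinate to an integer cell index, and pack two cell indices into one 64-bit key.
    auto cellOf = [&](float v) { return static_cast<int>(std::floor(v / cellSize)); };
    auto key = [](int cx, int cy) {
        return (static_cast<unsigned long long>(static_cast<unsigned int>(cx)) << 32)
             | static_cast<unsigned int>(cy);
    };

    // Bucket unit indices by the cell they fall into.
    std::unordered_map<unsigned long long, std::vector<int>> grid;
    for (int i = 0; i < static_cast<int>(units.size()); ++i)
        grid[key(cellOf(units[i].x), cellOf(units[i].y))].push_back(i);

    // Each unit only examines the 3x3 block of cells around it.
    for (int i = 0; i < static_cast<int>(units.size()); ++i)
    {
        int cx = cellOf(units[i].x);
        int cy = cellOf(units[i].y);
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy)
            {
                auto it = grid.find(key(cx + dx, cy + dy));
                if (it == grid.end())
                    continue;
                for (int j : it->second)
                    if (j != i)
                    {
                        // distance test only against nearby candidates here
                    }
            }
    }
    return 0;
}
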

---

Sometimes (depending on the simulation) the interaction between the two units can be carried out once, so that half of the table needs no traversal at all, while other times the interactions are asymmetrical. (If the interactions are really sparse, building a list of only the pairs within the threshold is also possible, which is useful if the game mechanics require sorting by interaction distance or some such.)
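
That last idea might look something like this (a rough sketch under my own names and assumptions, not something from the thread):

#include <algorithm>
#include <cmath>
#include <vector>

struct Unit { float x, y; };

struct Interaction { int a, b; float distance; };

int main()
{
    std::vector<Unit> units = { {0, 0}, {3, 4}, {50, 50}, {2, 1} };
    const float threshold = 10.0f;

    // Collect only the pairs that are actually close enough to interact.
    std::vector<Interaction> interactions;
    for (int i = 0; i < static_cast<int>(units.size()); ++i)
        for (int j = i + 1; j < static_cast<int>(units.size()); ++j)
        {
            float dx = units[i].x - units[j].x;
            float dy = units[i].y - units[j].y;
            float d = std::sqrt(dx * dx + dy * dy);
            if (d <= threshold)
                interactions.push_back({ i, j, d });
        }

    // Sort the sparse list by distance, if the game mechanics need that order.
    std::sort(interactions.begin(), interactions.end(),
              [](const Interaction& lhs, const Interaction& rhs) { return lhs.distance < rhs.distance; });

    return 0;
}
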


This topic is closed to new replies.
