Why is collision cast expensive?

32 comments, last by EWClay 11 years, 1 month ago

It's not about memory; it's about how much CPU time doing a lot of ray casting takes.

Could you elaborate in more detail and give an example pertaining to CPU time? Interesting stuff.

As cr88192 pointed out, it's really just the CPU having to do "more stuff." CPUs aren't magic; they work through instructions one after another, and the only real limit on them is how much they can get done in a given period of time.

Often in games you have something like the logic processing stage of the game loop. This is where the CPU does the bulk of its work: mathematical comparisons, collision detection, AI decisions, ray casting in this case, and so on. Basically it does everything needed to make game objects interact, and the more time the CPU spends on one task, the less time is available for the others. If you do 100k checks instead of 10k checks, you are spending 10x as much CPU time on them.

CPUs these days are pretty robust; it takes a lot of math work to bog them down. But in higher-end games it certainly becomes a problem. If you think about something like an AAA FPS, you may have an entire level loaded with geometry, AI soldiers or something running around and interacting, and the game is constantly checking their collision with things near them on top of every other task.

This is why games are often described as "CPU-bound" or "GPU-bound": the steps run in sequence (one game may have very little for the video card to process but a ton of CPU work), and for simplicity's sake most games do the logic processing, then the graphics updates. If the CPU is slow, it may take a long time to do all the processing required, and then the GPU only has so much time left to render while maintaining the framerate. That's what it comes down to: the two real ways to optimize when coding are to do things that take less CPU time, or to do fewer things in general. As an example, one simple addition takes far fewer cycles than casting and checking 10k rays every logic update (which may run every 20 ms or so).
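
As a rough illustration of that sequential structure, here is a minimal sketch of a frame loop; all helper names here are hypothetical stand-ins, not any particular engine's API:

// minimal sketch of one sequential frame: logic first, then rendering,
// both sharing a single frame budget (helper functions are hypothetical)
void game_main_loop(void)
{
    const double FRAME_BUDGET_MS = 16.7;   // ~60 fps

    while (game_running)
    {
        double start = time_ms();

        update_input();
        update_ai();        // CPU: decisions
        update_physics();   // CPU: collision checks, ray casts, etc.
        render_frame();     // submit draw work to the GPU

        double elapsed = time_ms() - start;
        if (elapsed < FRAME_BUDGET_MS)
            sleep_ms(FRAME_BUDGET_MS - elapsed);   // time to spare this frame
        // if elapsed exceeds the budget, the frame is late: too much logic
        // means CPU-bound, too much rendering means GPU-bound
    }
}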

In a modern engine, ray casts are much more affordable, but often they have to be deferred.

Take a typical game logic thread. Usually it is single threaded and it has a budget to stick to. Doing thousands of ray casts would slow the thread down; they are not that cheap. But suppose the code can request a ray cast ahead of time? Then another thread can pick up the request, do the calculations and store the result to be used later.

Pretty much every system has multiple cores these days. It's not just how much work you can do, it's whether you can parallelise it.
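
A minimal sketch of that request/consume pattern, assuming a simple request queue drained by worker threads; the queue, types, and scene_raycast are hypothetical stand-ins:

// logic thread, frame N: file a request instead of casting immediately
void request_visibility_check(Entity *e)
{
    e->req.origin = e->eye_pos;
    e->req.dir    = e->aim_dir;
    e->req.done   = 0;
    queue_push(&raycast_queue, &e->req);   // worker threads drain this queue
}

// worker thread: performs the expensive cast and publishes the result
void raycast_worker(void)
{
    RayCastRequest *r;
    while ((r = queue_pop(&raycast_queue)) != NULL)
    {
        r->hit  = scene_raycast(r->origin, r->dir);
        r->done = 1;   // result becomes visible to the logic thread
    }
}

// logic thread, frame N+1: consume the (one-frame-stale) answer
void use_visibility_result(Entity *e)
{
    if (e->req.done)
        react_to_hit(&e->req.hit);   // hypothetical consumer
}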

Quote

In a modern engine, ray casts are much more affordable, but often they have to be deferred.

Take a typical game logic thread. Usually it is single threaded and it has a budget to stick to. Doing thousands of ray casts would slow the thread down; they are not that cheap. But suppose the code can request a ray cast ahead of time? Then another thread can pick up the request, do the calculations and store the result to be used later.

Pretty much every system has multiple cores these days. It's not just how much work you can do, it's whether you can parallelise it.

Unfortunately that entirely depends on the thing in question and whether it can run concurrently. There are actually a lot more things that can't run concurrently than can, due to the simple fact that even if you thread them, they just end up doing a task that the main thread is waiting on to complete; so you actually end up with worse performance, paying for context switches on top of the same wait.

Of course some things can run concurrently, but that's where smart optimization comes in. Probably one of the worst things people who discover the power of threads do is to just try to thread everything, and then they end up with worse performance, deadlocks, memory corruption, and so on.

Doing less of something is always a straight performance gain, unless it somehow causes more work later on (which doesn't happen very often). Threading is a lot more... dependent.
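
In the degenerate case, that trap looks something like this (pthreads, with hypothetical pass functions): the main thread pays for thread creation and a context switch and still waits just as long.

#include <pthread.h>

void *do_collision(void *world)
{
    run_collision_pass(world);   // hypothetical; AI needs this finished first
    return NULL;
}

void update(void *world)
{
    pthread_t t;
    pthread_create(&t, NULL, do_collision, world);
    pthread_join(&t, NULL);   // main thread blocks here anyway
    run_ai_pass(world);       // hypothetical; depends on collision results
}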

"Suppose the code can request a ray cast ahead of time"

It doesn't sit there and do nothing; it gets on with other work until the result is ready. Yes, it's a bit more complicated, but that's the world we live in, and that's what games have to do to get the best performance.

An old article, but it describes the current state of affairs:

http://www.gotw.ca/publications/concurrency-ddj.htm

Quote

In a modern engine, ray casts are much more affordable, but often they have to be deferred.

Take a typical game logic thread. Usually it is single threaded and it has a budget to stick to. Doing thousands of ray casts would slow the thread down; they are not that cheap. But suppose the code can request a ray cast ahead of time? Then another thread can pick up the request, do the calculations and store the result to be used later.

Pretty much every system has multiple cores these days. It's not just how much work you can do, it's whether you can parallelise it.

while partly true, there are limits:
if a person manages to fill up all the cores with work, they will not get more performance out of more threads;
inter-thread communication typically introduces a certain level of latency (with tradeoffs regarding how things are organized, ...).

for a CPU-bound program, this means around a 2x-4x max speedup on a quad-core.
IMO, threads are generally better for coarse-grained asynchronous work, rather than fine-grained or synchronous tasks.
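
Amdahl's law puts a number on that ceiling: if only a fraction p of the work can be spread across n cores, the best-case speedup is 1 / ((1 - p) + p / n). A quick worked example (standard formula, illustrative numbers):

#include <stdio.h>

// upper bound on speedup when a fraction p of the work parallelises across n cores
double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    printf("%.2f\n", amdahl(0.75, 4));   // 75% parallel on a quad-core -> ~2.29x
    printf("%.2f\n", amdahl(0.90, 4));   // 90% parallel on a quad-core -> ~3.08x
    return 0;
}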

...

"Suppose the code can request a ray cast ahead of time"

It doesn't sit there and do nothing; it gets on with other work until the result is ready. Yes, it's a bit more complicated, but that's the world we live in, and that's what games have to do to get the best performance.

An old article, but it describes the current state of affairs:

http://www.gotw.ca/publications/concurrency-ddj.htm

I honestly don't understand the point you're trying to make. You're using a very arbitrary example and completely ignoring my point that code (especially for games) often does things in linear steps and has to wait on different parts to finish, so threading may not be an optimal solution. I.e., there is no point in threading AI and collision detection, because neither can do its job at the same time if the state of the objects hasn't been set by the opposite step already.

You can't really "request a ray cast ahead of time": when it checks the collision, it does it all at once, broad phase then narrow phase on all the objects. Beyond that, it depends on the implementation, but that is the gist of it. It especially becomes impossible to run concurrently if you move objects as you loop through, because that would skew later collisions if run in separate parallel loops.
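
Roughly, that two-phase structure looks like this (hypothetical types and helpers, not any particular engine's API); note that the second loop writes the very positions the first loop read, which is why the phases can't naively overlap:

// one physics step: read-only detection, then mutating resolution
void physics_step(World *world)
{
    // phase 1: only reads object state
    PairList    pairs    = broadphase_find_pairs(world);    // cheap AABB/grid culling
    ContactList contacts = narrowphase_test_pairs(&pairs);  // exact shape-vs-shape tests

    // phase 2: mutates the state phase 1 depended on
    for (int i = 0; i < contacts.count; i++)
        resolve_contact(&contacts.items[i]);   // moves objects, applies impulses
}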

Also, that article: I really don't get the point of linking it. It has no information relevant to software development, not even any code or examples; it's basically a rhetorical piece on "the magic of concurrency." That's like a "magic of classes" document: classes have many limitations we have found more and more restrictive as time goes on and coding practices evolve. Thinking we can just magically thread things that cannot logically be threaded is a dreamer's statement. Threading has a time and place, like every other tool.

Satharis:

You say ray casts cannot run concurrently with other game code. I say they can and they do.

Beyond that, I can't be bothered to argue. I've made my point to anyone who cares to listen.

Edit: I lost patience here, sorry. Normal service will be resumed shortly.

Quote

the main difference is that an oct-tree divides the space into 8 regions in each step, so requires less recursion, but has to do a few more checks at each step (since there are 8 possible regions to step into). dividing the space in a binary-fashion allows simpler logic, but a larger number of internal nodes.

That's not necessarily true; you can implement an "oct-tree" with a binary tree under the hood, where the recursion step cycles over each axis. Thus the number of recursive steps would be similar in complexity to a BSP-tree.

But I think my question was poorly worded. Simply put, I'm curious whether it's worth it to "choose" a splitting plane versus naively cutting at some midpoint in the dynamic, real-time situation you were describing. The obvious difference is that the naive midpoint would be O(1), while choosing one would be (presumably) at least O(N); that cost would of course pay off later, since searching would be faster. If the tree were generated beforehand and kept static, then of course it would be better to load the cost up front, but in the case of a real-time, dynamically generated tree, I'm wondering if such an approach is generally still worth it.

It's a bit of a subjective question and it begs more for intuition/experience than a purely academic response, I suppose.
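
A minimal sketch of that axis-cycling binary split, assuming a simple node type with item counts and bounds (Node, MAX_LEAF_ITEMS, and partition_items are all hypothetical):

// binary split that cycles the split axis with depth (x -> y -> z -> x ...)
void build_tree(Node *n, int depth)
{
    if (n->count <= MAX_LEAF_ITEMS)
        return;                          // few enough items: make this a leaf

    int   axis = depth % 3;              // 0 = x, 1 = y, 2 = z
    float mid  = 0.5f * (n->min[axis] + n->max[axis]);   // O(1): no plane "choice"

    partition_items(n, axis, mid);       // route items into n->left / n->right
    build_tree(n->left,  depth + 1);
    build_tree(n->right, depth + 1);
}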

Quote

It's not about memory; it's about how much CPU time doing a lot of ray casting takes.

I'm wondering if there's a subtle point here that you're also describing. If you're just talking about memory allocations, then of course it's almost never an issue, but isn't memory bandwidth a large bottleneck for speed these days? I don't work in the games industry, so I'm not aware of the current state, but isn't it a challenge to maintain cache coherency in collision detection systems, too? Or is cache coherency kind of a solved issue at this point?

Polarist, on 18 Feb 2013 - 19:22, said:

cr88192, on 15 Feb 2013 - 19:07, said:
the main difference is that an oct-tree divides the space into 8 regions in each step, so requires less recursion, but has to do a few more checks at each step (since there are 8 possible regions to step into). dividing the space in a binary-fashion allows simpler logic, but a larger number of internal nodes.

That's not necessarily true; you can implement an "oct-tree" with a binary tree under the hood, where the recursion step cycles over each axis. Thus the number of recursive steps would be similar in complexity to a BSP-tree.

But I think my question was poorly worded. Simply put, I'm curious whether it's worth it to "choose" a splitting plane versus naively cutting at some midpoint in the dynamic, real-time situation you were describing. The obvious difference is that the naive midpoint would be O(1), while choosing one would be (presumably) at least O(N); that cost would of course pay off later, since searching would be faster. If the tree were generated beforehand and kept static, then of course it would be better to load the cost up front, but in the case of a real-time, dynamically generated tree, I'm wondering if such an approach is generally still worth it.

It's a bit of a subjective question and it begs more for intuition/experience than a purely academic response, I suppose.

ok.

well, it depends on whether or not the oct-tree has a calculated mid-point (as opposed to simply dividing into 8 equal-sized regions).

if it does, then both methods will involve a similar cost (a loop over all the items to find the midpoint), though with a potential difference that an oct-tree doesn't have to (also) calculate a distribution vector.

interestingly, regardless of how exactly it is done, the total complexity would remain the same: O(n log n) (each level of the tree does O(n) work, and there are O(log n) levels).


this is because (unlike a traditional BSP), my approach doesn't "choose" the plane, it calculates it.
basically, all you really need is an averaged center-point, and a vector describing how the "mass" is distributed relative to the point.

so, pseudocode:


// pass 1: average all origins to get the center point
point=vec3(0,0,0); count=0; cur=list;
while(cur)
{
    point=point+cur->origin;
    count++;
    cur=cur->next;
}
point=point/count;

// pass 2: accumulate how the "mass" is distributed around the center
cur=list;
dirx=vec3(0,0,0);
diry=vec3(0,0,0);
dirz=vec3(0,0,0);
while(cur)
{
    dir=cur->origin-point;
    dirx=dirx+dir*dir.x;
    diry=diry+dir*dir.y;
    dirz=dirz+dir*dir.z;
    cur=cur->next;
}
dir=v3norm(v3max(dirx, v3max(diry, dirz)));  //normalized greatest-vector
plane=vec4(dir, v3dot(point, dir));          //split plane through the center

// pass 3: partition objects into left/mid/right chains
left=NULL; right=NULL; mid=NULL; cur=list;
while(cur)
{
    f=v3ndot(cur->origin, plane);    //signed distance from the plane
    if(fabs(f)<cur->radius)          //straddles the plane
        {cur->chain=mid; mid=cur; }
    else if(f<0)                     //'else' needed, or the mid link gets clobbered
        {cur->chain=left; left=cur; }
    else
        {cur->chain=right; right=cur; }
    cur=cur->next;
}
// (the left/mid/right chains would then be recursed on to build child nodes)
...

Quote

Satharis, on 16 Feb 2013 - 06:07, said:
It's not about memory; it's about how much CPU time doing a lot of ray casting takes.

I'm wondering if there's a subtle point here that you're also describing. If you're just talking about memory allocations, then of course it's almost never an issue, but isn't memory bandwidth a large bottleneck for speed these days? I don't work in the games industry, so I'm not aware of the current state, but isn't it a challenge to maintain cache coherency in collision detection systems, too? Or is cache coherency kind of a solved issue at this point?

personally, I haven't usually found cache to be a huge issue on recent PC hardware.

even then, it isn't usually as much of an issue at present, as memory bandwidth has increased considerably over the past several years (relative to CPU speed increases), making the limit harder to run into (currently it typically only really happens during bulk memory-copies and similar, AFAICT, rather than in general-purpose code).

it was much worse of a problem 10 years ago though.


ATM, branch-prediction failures seem to have become a much bigger issue (making conditionals very costly in some algorithms where the CPU's ability to accurately predict branches is fairly low).

a particular example in my case was devising a branch-free version of the Paeth filter, mostly as the conditionals inside the filter were eating lots of clock-cycles.
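
For reference, the Paeth predictor (from PNG filtering) is normally written with branches; one possible branch-free formulation turns the comparisons into bit masks. This is an illustrative version, not necessarily the one described above:

#include <stdlib.h>

// branchy original: three data-dependent jumps
int paeth(int a, int b, int c)
{
    int p  = a + b - c;
    int pa = abs(p - a), pb = abs(p - b), pc = abs(p - c);
    if (pa <= pb && pa <= pc) return a;
    if (pb <= pc)             return b;
    return c;
}

// branch-free variant: comparisons yield 0/1, negation turns them into
// 0/all-ones masks, and AND/OR selects the result with no jumps
int paeth_nobranch(int a, int b, int c)
{
    int p  = a + b - c;
    int pa = abs(p - a), pb = abs(p - b), pc = abs(p - c);
    int ma = -((pa <= pb) & (pa <= pc));   // mask: pick a
    int mb = -(pb <= pc) & ~ma;            // mask: else pick b
    int mc = ~(ma | mb);                   // mask: else pick c
    return (a & ma) | (b & mb) | (c & mc);
}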

Quote

this is because (unlike a traditional BSP), my approach doesn't "choose" the plane, it calculates it.
basically, all you really need is an averaged center-point, and a vector describing how the "mass" is distributed relative to the point.

Ah, I see, that sort of calculation should be relatively cheap compared to what I was imagining. Thanks for explaining.

Quote

even then, it isn't usually as much of an issue at present, as memory bandwidth has increased considerably over the past several years (relative to CPU speed increases), making the limit harder to run into (currently it typically only really happens during bulk memory-copies and similar, AFAICT, rather than in general-purpose code).

it was much worse of a problem 10 years ago though.

This is good to know. A lot of what I know about game programming is, unfortunately, dated to roughly 10 years ago. It doesn't help that I was recently reading a bunch of articles from Intel to catch up, and they may be blowing the bandwidth issue out of proportion (I don't know, just a guess).

This topic is closed to new replies.
