

Polarist

Member Since 08 Feb 2013
Offline Last Active Feb 24 2013 12:47 PM

Posts I've Made

In Topic: Why is collision cast expensive?

19 February 2013 - 06:48 PM

this is because (unlike a traditional BSP), my approach doesn't "choose" the plane, it calculates it.
basically, all you really need is an averaged center-point, and a vector describing how the "mass" is distributed relative to the point.

 

Ah, I see, that sort of calculation should be relatively cheap compared to what I was imagining.  Thanks for explaining.
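If I'm picturing it right, the whole thing boils down to something like this sketch (Vec3, Object, and the axis-aligned choice of normal are my own placeholders and simplifications, not necessarily your exact formula):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical per-object data for the broad phase: just a center point.
struct Object { Vec3 center; };

// One O(n) pass to average the centers, and one O(n) pass to measure how the
// "mass" is spread around that average; the axis with the largest spread
// becomes the split-plane normal.
static void computeSplitPlane(const std::vector<Object>& objs,
                              Vec3& outPoint, Vec3& outNormal)
{
    if (objs.empty())
        return;

    Vec3 avg = {0.0f, 0.0f, 0.0f};
    for (const Object& o : objs) {
        avg.x += o.center.x;
        avg.y += o.center.y;
        avg.z += o.center.z;
    }
    const float inv = 1.0f / static_cast<float>(objs.size());
    avg.x *= inv;  avg.y *= inv;  avg.z *= inv;

    // Rough 3D analogue of a standard deviation: how far the centers lie
    // from the average along each axis.
    Vec3 spread = {0.0f, 0.0f, 0.0f};
    for (const Object& o : objs) {
        spread.x += std::fabs(o.center.x - avg.x);
        spread.y += std::fabs(o.center.y - avg.y);
        spread.z += std::fabs(o.center.z - avg.z);
    }

    outPoint = avg;
    if (spread.x >= spread.y && spread.x >= spread.z)
        outNormal = {1.0f, 0.0f, 0.0f};
    else if (spread.y >= spread.z)
        outNormal = {0.0f, 1.0f, 0.0f};
    else
        outNormal = {0.0f, 0.0f, 1.0f};
}
```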

 

even then, it isn't usually as much of an issue at present, as memory bandwidth has increased considerably over the past several years (relative to CPU speed increases), making the limit harder to run into (currently typically only really happens during bulk memory-copies and similar AFAICT, rather than in general-purpose code).

 

it was much worse of a problem 10 years ago though.

 

This is good to know.  A lot of what I know about game programming is, unfortunately, dated to roughly 10 years ago.  It doesn't help that I was recently reading a bunch of articles from Intel to catch up, and they may be blowing the bandwidth issue out of proportion (I don't know, just a guess).


In Topic: Why is collision cast expensive?

18 February 2013 - 07:16 PM

the main difference is that an oct-tree divides the space into 8 regions at each step, so it requires less recursion, but it has to do a few more checks at each step (since there are 8 possible regions to step into). Dividing the space in a binary fashion allows simpler logic, but gives a larger number of internal nodes.

 

That's not necessarily true: you can implement an "oct-tree" with a binary tree under the hood by having each recursion step cycle over the axes (something like the sketch below).  The number of recursive steps would then be similar in complexity to a BSP-tree.
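To illustrate what I mean by cycling the axes, here's a rough sketch (the Bounds/Node types and the fixed-depth midpoint split are just for the example):

```cpp
#include <memory>

struct Bounds {
    float min[3];
    float max[3];
};

struct Node {
    Bounds bounds;
    std::unique_ptr<Node> back;   // lower half along the split axis
    std::unique_ptr<Node> front;  // upper half along the split axis
};

// Binary subdivision that cycles the split axis with depth (x, y, z, x, ...).
// Three successive binary splits cover the same space as one oct-tree step,
// so the overall work is comparable; picking the midpoint is O(1).
static void subdivide(Node& node, int depth, int maxDepth)
{
    if (depth >= maxDepth)
        return;

    const int axis = depth % 3;  // cycle x -> y -> z
    const float mid = 0.5f * (node.bounds.min[axis] + node.bounds.max[axis]);

    node.back  = std::make_unique<Node>();
    node.front = std::make_unique<Node>();
    node.back->bounds  = node.bounds;
    node.front->bounds = node.bounds;
    node.back->bounds.max[axis]  = mid;
    node.front->bounds.min[axis] = mid;

    subdivide(*node.back,  depth + 1, maxDepth);
    subdivide(*node.front, depth + 1, maxDepth);
}
```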

 

But I think my question was poorly worded.  Simply put, I'm curious whether it's worth it to "choose" a splitting plane versus naively cutting along some midpoint in the dynamic real-time situation you were describing.  The obvious difference is that the naive cut would be O(1), while choosing a plane would (presumably) be at least O(N); the extra cost of choosing would of course pay off later, since searching would be faster.  If the tree were generated beforehand and kept static, then of course it'd be better to load the cost up front, but in the case of a tree generated dynamically in real time, I'm wondering if such an approach is still generally worth it.

 

It's a bit of a subjective question, and it calls more for intuition/experience than a purely academic response, I suppose.

 

It's not about memory, it's about how much CPU time doing a lot of ray casting really takes.

 

I'm wondering if there's a subtle point here that you're also describing.  If you're just talking about memory allocations, then of course it's almost never an issue, but isn't memory bandwidth a large bottleneck for speed these days?  I don't work in the games industry, so I'm not aware of the current state, but isn't it a challenge to maintain cache coherency in collision detection systems, too?  Or is cache coherency kind of a solved issue at this point?


In Topic: A question on style regarding includes (C++)

18 February 2013 - 06:38 PM

So of course, it doesn't *really* matter if you include everything as long as it compiles.

 

The typical rule of thumb, however, as mentioned above, is to include as little as possible in each place (both in the .h and in the .cpp, meaning that most includes per translation unit should end up in the .cpp file).  This is primarily for two reasons: managing compile time and managing dependencies.

 

Compile time is the obvious one: every time you modify a header, everything that includes it needs to be recompiled.  And clearly, you'd rather spend more time coding and testing than waiting for things to compile.

 

Managing dependencies is the other: your includes document how many dependencies each file has.  As a program grows complex, you generally want the pieces of your program to be as independent as possible from each other, and you can gauge how many dependencies a file has by looking at how many includes there are at the top.  (This heuristic only works if you follow the rule above of having as few includes as possible.)
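For example (Renderer and Mesh are just made-up names for illustration), a header can often get away with a forward declaration and leave the real include to the .cpp:

```cpp
// Renderer.h -- the interface only passes Mesh by reference, so a forward
// declaration is enough; no need to pull in Mesh.h (or whatever it includes).
#pragma once

class Mesh;                       // forward declaration, not an #include

class Renderer {
public:
    void draw(const Mesh& mesh);  // references/pointers don't need the full type
};

// Renderer.cpp -- the definition actually uses Mesh, so the include lives here,
// keeping the dependency out of every file that includes Renderer.h.
#include "Renderer.h"
#include "Mesh.h"

void Renderer::draw(const Mesh& mesh)
{
    // ... draw using the mesh data ...
}
```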

 

But of course, this rule shouldn't always be followed 100%, especially if you're working in a small team or individually.  There's a competing idea of optimizing for coding time ("optimizing for life"), which means you should do these things only to the point where you're actually gaining time in the long run.

 

If you find that having some often-used things in a big "globals.h" include saves you time in the end, and you know you can manage your project's complexity well, then you should consider putting that stuff in the global include file.  For instance, if you wrote a run-time debugger, a profiler, or even a global game state that you need to query almost everywhere, just put it in globals.h to save yourself typing time.  If you know where your dependencies are, you can always fix up your includes later if you wish.  Especially towards the beginning of a project, when you are still prototyping a lot of different systems, I think it makes sense to use larger global includes.
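As a rough illustration (the names and members here are made up, not from any particular engine):

```cpp
// globals.h -- one include for the handful of things queried almost everywhere.
#pragma once

struct GameState {
    bool  paused    = false;
    float timeScale = 1.0f;
};

struct Profiler {
    void beginFrame() {}
    void endFrame()   {}
};

// Declared here, defined exactly once in some .cpp (e.g. globals.cpp).
extern GameState g_gameState;
extern Profiler  g_profiler;
```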

 

Other people may have other wisdom to share in this regard though, as when and where to use global includes is pretty subjective.


In Topic: Why is collision cast expensive?

15 February 2013 - 03:50 PM

but, if a person instead simply calculates a plane that "roughly divides things in half" (while not giving a crap how "good" the division plane is), then it is *much* faster (as the whole *expensive* part is simply removed).

and, it only takes O(n) to calculate an average and a relative distribution (or a 3D analogue of a standard deviation), and a single O(n) pass to divide the set, ...

stated another way: a few linked-list walks, and some vector math.

 

 

 

I'm about to implement some broad-phase checks in my engine, and I'm curious about the approach you've described here.  In practice, how much better is it to "sensibly" construct a BSP-tree than to "naively" divide the scene at some notion of a midpoint (as in a typical quad- or oct-tree)?

 

So far, I've been leaning towards a purely naive oct-tree construction.  Has your experience with the "sensible" BSP-tree shown that it's been worth it over a simple oct-tree?
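For reference, here's roughly how I picture the dividing pass you described, just to make sure I'm reading it right (the types and names are my own placeholders, and I'm using vectors rather than the linked lists you mentioned, purely for brevity):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct Object { Vec3 center; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// One O(n) walk over the objects: everything on the positive side of the
// plane goes in 'front', everything else in 'back'.  (Objects straddling the
// plane would need extra handling; this sketch just splits by center point.)
static void partition(const std::vector<Object>& objs,
                      const Vec3& planePoint, const Vec3& planeNormal,
                      std::vector<Object>& front, std::vector<Object>& back)
{
    for (const Object& o : objs) {
        const Vec3 rel = { o.center.x - planePoint.x,
                           o.center.y - planePoint.y,
                           o.center.z - planePoint.z };
        if (dot(rel, planeNormal) >= 0.0f)
            front.push_back(o);
        else
            back.push_back(o);
    }
}
```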


In Topic: What effects are being used in this game?

13 February 2013 - 01:41 PM

Take a look at Screen-Space Ambient Occlusion (SSAO); it's a relatively cheap way to achieve AO.

 

I'm not sure what effect you're referring to in particular, as there are a lot of things going on in those screenshots: specular maps, reflections, and lens flares, to name a few.

