Data structure ideas needed
I'm looking for ideas on how to maintain some information efficiently. It's the sort of problem I expect would have been solved in database programming, but I'm working in C++ and not SQL! ;) It's game related... here's the scenario.
I have a set of moving objects X. For any member x in X I want to know all other members y in X that satisfy some constraint C(x,y) (for example, all objects closer than a given distance d, which might be written as
Z(x) = {y : y in X\x, C(x,y) < d}). Clearly, for the distance case, satisfaction of the constraint is a function of time and the velocities of the objects.
Now, obviously, the brute force approach is to compute, every time step, the distance between all pairs of objects and then update the set Z. However, there is clearly some work being done here that doesn't need to be done (if an object satisfies the constraint at one instant, it will continue to satisfy it for some finite number of instants afterwards, until it doesn't). A more complex update of Z that would save some of that work would be
Z(x,t) = (Z(x,t-1) U {y : y in X \ ({x} U Z(x,t-1)), C(x,y) < d}) \ {z : z in Z(x,t-1), C(x,z) >= d}
Can anyone think of a good data structure (perhaps an STL container + STL algorithm) that would provide some of this functionality easily? I'd rather not reinvent the wheel... and I'm sure the data management performed in the STL is far better than what I could write.
The actual problem is slightly more complex than this, but this is the core of it. I should be able to work out the extraneous details if I can solve the above problem.
Any advice and guidance would be appreciated.
Cheers,
Timkin
I'm not sure there is a general solution for this since it would require knowing more about the constraint. I also don't think there is all that much in the standard library to help you - you get associative containers and sorted containers, and that's pretty much it.
You could use some sort of priority queue for a 'not in Z' group, where you can calculate the lower bound on when it could be in Z, and schedule another check for that time. If it is now in Z, you promote it to the 'in Z' group, which may well be another priority queue that allows you to postpone a check for when it is possible to drop out of that group. The more accurate your estimate is, the fewer checks you end up making. Obviously it can't be an overestimate or you have objects in the wrong group; underestimates are safe but the bigger the error, the more unnecessary checks you make (although the worst-case scenario is no worse than the naive/brute force method thankfully).
I don't think SQL would have a better answer for this than C++, either. In my experience most databases just store everything in a big contiguous chunk and use cunning boolean logic and indices to optimise your queries.
The way that I handle this sort of thing is by tiling. Say you have a 10 by 10 grid (a 10 by 10 array), and the object you want to test sits at C-6. The only cells you need to test against are B-5, B-6, B-7, C-5, C-6, C-7, D-5, D-6 and D-7. So instead of testing 100 cells you are only testing nine, and the performance gain grows as the grid gets larger.
So you create a group class to hold all of the objects for a cell, and a group manager class that holds the group array and manages movement between the groups. As an object passes out of one group, it triggers an event in the manager, which then passes the object on to its new group.
This takes care of macro testing of your objects while the brute force methods take care of the micro testing.
Just my two cents.
theTroll
These problems sound like the area of collision detection and range and distance queries.
This:
Quote:Original post by Timkin
I have a set of moving objects X. For any member x in X I want to know all other members y in X that satisfy some constraint C(x,y) (for example, all objects closer than a given distance, which might be written as
Z(x)={y: y in X\x, C(x,y)<d}).
Looks like a distance query problem; you could have a look at the GJK algorithm. Range queries can be sped up using kd-trees.
Whether or not you can use standard library containers depends on specifically what you're doing/implementing, but by themselves the standard library containers are meant for single-dimensional data; spatial data structures are used for storing multi-dimensional data.
You'll probably need dynamic intersection routines as well. If you're looking for something to just use, then look into collision detection libraries.
Quote:Original post by snk_kid
These problems sounds like the area of collision detection, range and distance queries.
Thanks for the feedback guys. Yes, the problem I'm working on is very similar to what's at the heart of collision detection (I'm working on an information flow problem). Both share essentially the same core problem: I need to identify and keep track of dynamic entities based on functional constraints. In the context of solving such problems in a spatial sense, as in collision detection and spatial arrangement problems, spatial partitioning algorithms stop being useful once the cost of maintaining the spatial database exceeds the cost of brute-forcing the solution. This happens very quickly when many or most of your objects are moving in close proximity to each other. I've done a lot of work over the years on using k-d trees for nearest neighbour and range search, and they are very inefficient when objects are moving in and out of bins rapidly, unless you enforce a fixed structure on the tree (in which case you lose search efficiency due to inefficient packing of objects). The problem is exacerbated for objects with nonlinear trajectories.
I was hoping for some efficient set-theoretic code/containers. I guess I'll have to write my own (maybe built over a coarse, fixed spatial partition) and do some tests on when it's more efficient to use this approach over brute force. I suspect the answer is that brute force will still be the way to go until my object density is quite high.
Again, thanks for the thoughts.
Cheers,
Timkin
It sounds like the distance queries are uniform? (i.e., always or mostly the same distance threshold?)
If so, Spatial Hashing.
Quote:Original post by Rockoon1
It sounds like the distance queries are uniform? (ie, always or mostly the same distance threshhold?)
In any given invocation of the problem there is a finite set of constraints. Each object has a finite subset of these constraints applicable to it (possibly empty, with no duplicates), and needs to keep track of other objects according to its constraint set. The obvious way to do this would be to keep a different list of objects satisfying each constraint; the constraints are then constant over time and can be treated as disjoint problems. More memory is needed for the results of queries, but it's a very small overhead. I might be able to order the constraints based on the attributes they test to reduce the search space.
Quote:If so, Spatial Hashing.
...but isn't spatial hashing just using spatial keys for indexing into a hash table, which would suffer from exactly the same problems as a k-d tree or any other spatial partitioning method that uses a fixed partition set?
Have you looked into the GJK algorithm yet? It even uses set notation to describe things; however, I do not know whether it is suited to dynamic objects.
Regarding how to store dynamic objects: bounding volume hierarchies are typically better for dealing with dynamic objects than spatial partitioning structures like kd-trees, but I doubt you'll get any range/distance information from them, since they provide little (if any) strict spatial ordering.
It might be worth looking into the sweep and prune algorithm as well.
Quote:Original post by snk_kid
Have you looked into the GJK algorithm yet?
I did take a brief look at it, yes. However, it seemed more concerned with separation of arbitrary bounding polygons, so I didn't read too far into it. Perhaps I should go back and see whether any of the core methods it implements would be appropriate. In my problem I can assume my objects are point masses, since they're moving in an information space, so some aspects of the problem are greatly simplified (i.e., separation is trivial to compute for any two objects).
Quote:It might be worth looking into the sweep and prune algorithm as well.
Thanks, I will.
Cheers,
Timkin
[Edited by - Timkin on May 25, 2007 10:08:15 PM]
Well I'm at a loss then.
You might want to look into what real time raytracer developers are currently doing for dynamic objects since they also have to do many spatial queries on dynamic data.