
# Emergent

Member Since 03 Apr 2007
Last Active Aug 04 2016 03:48 PM

### In Topic: Kalman filter without using matrix inverse

14 September 2013 - 07:47 AM

There are three forms of the Kalman filter.  You might consider using another form.

There's the covariance form, which you are using.  This requires a matrix inversion at each step, but gives you the state estimate directly.  This is the form most people know.

There's also an information form, which works with the inverse of the covariance matrix (called the (Fisher) information matrix).  This does not require a matrix inversion at each step -- but the state estimate is not directly accessible. Instead of a state estimate, you have an information vector.  You can reconstruct the state estimate from the information vector, but this requires a matrix inverse.  One nice thing about this form is that you can specify a totally noninformative prior for the state by setting the initial information matrix to zero.  Also, there are reasonable assumptions under which information matrices are sparse (but covariance matrices are not), so this form can sometimes be efficient for really large problems with sparsity.

Finally, there's the square root form.  Here, instead of working with either the covariance or its inverse, you work with a matrix square root of the covariance matrix (e.g., its Cholesky factor), and never actually construct the covariance matrix directly.  This is usually the most computationally efficient filter. [EDIT: Actually, it's the most expensive; what it is is numerically stable.]  It also avoids the problem of ensuring that your covariance or information matrix remains positive definite in the face of roundoff errors, since it's impossible to represent anything but positive definite matrices this way [EDIT: this is largely the point of this form].
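To make the covariance/information contrast concrete, here is a minimal sketch (not from the original post) comparing the two updates for the simplest possible case: estimating a constant scalar state from noisy measurements, with H = 1 so every "inversion" is just a scalar division. The function names and numbers are illustrative.

```python
# Sketch (illustrative, not the poster's code): scalar Kalman measurement
# updates, comparing the covariance form with the information form.
# State is a constant scalar; H = 1, so all "inverses" are scalar divisions.

def covariance_update(x, P, z, R):
    """Covariance form: compute gain K, then update estimate and covariance."""
    K = P / (P + R)          # Kalman gain (scalar analogue of inverting P + R)
    x = x + K * (z - x)      # corrected state estimate
    P = (1.0 - K) * P        # corrected covariance
    return x, P

def information_update(y, Y, z, R):
    """Information form: y = Y * x is the information vector, Y = 1/P the
    information matrix. Information simply accumulates; no (P + R) inverse."""
    Y = Y + 1.0 / R
    y = y + z / R
    return y, Y

# Fuse the same three measurements both ways.
measurements = [2.1, 1.9, 2.05]
R = 0.5                      # measurement noise variance

x, P = 0.0, 1e6              # covariance form: a vague (but finite) prior
y, Y = 0.0, 0.0              # information form: Y = 0 is a truly flat prior
for z in measurements:
    x, P = covariance_update(x, P, z, R)
    y, Y = information_update(y, Y, z, R)

x_info = y / Y               # recovering the state needs one "inversion"
print(x, x_info)
```

Both forms converge to essentially the same estimate; note that the information form starts from the exact noninformative prior Y = 0, which the covariance form can only approximate with a large initial P.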

### In Topic: Pathfinding A* with multiple agents

20 June 2013 - 09:32 PM

There is a lot that you can do to parallelize pathfinding even for a single agent.

I would start by considering more basic algorithms than A*, and incorporating some of A*'s features (prioritized sweeping, a heuristic) only once you understand how to do the more basic algorithms in parallel.

Conceptual Starting Point #1: Breadth-first search

A* is basically a souped-up breadth-first search.  You can do BFS in parallel: Expand all the nodes in a ply in parallel, and synchronize between plies.  Can you do that on the GPU?
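Here is a small sketch of that idea: level-synchronous BFS, written serially in Python for clarity. The inner loop over the frontier is the part that could run in parallel (one thread or GPU lane per node); the only synchronization point is the frontier swap between plies. The graph and names are illustrative.

```python
# Sketch: level-synchronous ("parallel-friendly") BFS. All nodes in the
# current frontier could be expanded concurrently; the only barrier is
# the frontier swap between plies. Graph layout is illustrative.

def bfs_levels(graph, source):
    """Return a dict mapping node -> distance (in edges) from source."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        # Each node here could be expanded by a separate worker; writes go
        # only to next_frontier and to dist entries for unseen nodes.
        for v in frontier:
            for u in graph[v]:
                if u not in dist:
                    dist[u] = level
                    next_frontier.append(u)
        frontier = next_frontier   # barrier: synchronize between plies
    return dist

graph = {
    'a': ['b', 'c'],
    'b': ['a', 'd'],
    'c': ['a', 'd'],
    'd': ['b', 'c', 'e'],
    'e': ['d'],
}
print(bfs_levels(graph, 'a'))
```

In a real GPU version the `u not in dist` check becomes an atomic test-and-set (or you tolerate benign duplicate insertions into the next frontier), but the ply-by-ply structure is the same.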

Conceptual starting point #2: Value Iteration

Value Iteration is more general than A* and very easy to parallelize.  In the case of the single-source shortest path problem, it reduces to the following.  A value d(v) is associated to each vertex v; it represents the distance to the goal.  It is initialized,

d(goal) = 0

d(v) = Infinity for all v != goal

Then, you simply iterate the following update:

d(v) := min_{u in neighbors(v)} [ edge_cost(v,u) + d(u) ]

E.g., if vertex v has three neighbors, u1, u2, and u3, then it looks at the three quantities,

edge_cost(v, u1) + d(u1)

edge_cost(v, u2) + d(u2)

edge_cost(v, u3) + d(u3)

and picks whichever one is smallest.

This minimization can be performed in parallel by all vertices.  To do it, a vertex needs only to look at its neighbors' values.  This can be done either in a "Jacobi style" (in which new values are written to a separate buffer, and buffers are swapped at the end of the iteration -- this is the classic Value Iteration), or in a "Gauss-Seidel style" (in which new values are written "in-place" back to the original buffer -- sometimes called "asynchronous dynamic programming").

The point is that you have a great deal of flexibility in the order in which you do these updates, while still guaranteeing convergence.  Dijkstra's algorithm and A* are "just" very, very careful optimizations of this basic iteration: first to avoid updating nodes that don't need updates, and then to bias the updates in the direction of the goal (while making certain guarantees).
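A minimal Jacobi-style sketch of the iteration above (illustrative graph and names; in the Jacobi style every per-vertex min could run in parallel, since each reads only the old buffer):

```python
# Sketch: Jacobi-style value iteration for single-source shortest paths.
# New values go to a separate buffer; buffers swap at the end of each sweep.
INF = float('inf')

def value_iteration(graph, goal):
    """graph: dict v -> {u: edge_cost(v, u)}. Returns distance-to-goal d."""
    d = {v: (0.0 if v == goal else INF) for v in graph}
    while True:
        # Every vertex reads only its neighbors' OLD values, so all of
        # these min-reductions could run in parallel.
        new_d = {}
        for v in graph:
            if v == goal:
                new_d[v] = 0.0
                continue
            new_d[v] = min((c + d[u] for u, c in graph[v].items()),
                           default=INF)
        if new_d == d:          # converged: no value changed this sweep
            return d
        d = new_d               # buffer swap (the Jacobi step)

graph = {
    'goal': {},
    'a': {'goal': 1.0, 'b': 0.5},
    'b': {'a': 0.5, 'goal': 2.0},
    'c': {'b': 1.0},
}
print(value_iteration(graph, 'goal'))
```

Note how 'b' first settles on its direct edge to the goal (cost 2.0) and is then improved to 1.5 via 'a' on a later sweep; that re-updating of already-touched nodes is exactly the waste Dijkstra's algorithm and A* are engineered to avoid.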

Detail: Bidirectional search

Shortest path problems can be solved more quickly by bidirectional search.  At the very least, you can perform these two searches in parallel (though this is more of "cpu-level parallelism" than "gpu-level parallelism").
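A sketch of the bidirectional idea for the unweighted case (illustrative graph; the two per-ply expansions are independent and could run on separate threads). Each ply is completed before checking its best meeting point, which keeps the returned length optimal:

```python
# Sketch: bidirectional BFS on an unweighted graph. Two frontiers grow from
# source and goal in alternating plies; within a ply, all candidate meeting
# edges are collected and the best one is returned.
from collections import deque

def bidirectional_bfs(graph, source, goal):
    """Return the shortest path length in edges, or None if disconnected."""
    if source == goal:
        return 0
    dist_s, dist_g = {source: 0}, {goal: 0}
    q_s, q_g = deque([source]), deque([goal])

    def expand(q, dist, other):
        # Expand one full ply of this direction; return the best meeting
        # length found (if any). Finishing the ply before returning avoids
        # reporting a non-shortest meeting edge.
        best = None
        for _ in range(len(q)):
            v = q.popleft()
            for u in graph[v]:
                if u in other:                      # frontiers met here
                    cand = dist[v] + 1 + other[u]
                    best = cand if best is None else min(best, cand)
                elif u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        return best

    while q_s and q_g:
        for q, dist, other in ((q_s, dist_s, dist_g), (q_g, dist_g, dist_s)):
            met = expand(q, dist, other)
            if met is not None:
                return met
    return None

# Path graph a-b-c-d-e: the shortest a -> e path has 4 edges.
graph = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'],
         'd': ['c', 'e'], 'e': ['d']}
print(bidirectional_bfs(graph, 'a', 'e'))
```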

### In Topic: How to calculate polyhedra in realtime...

03 April 2013 - 11:13 PM

Personally, I'd just make five levels, corresponding to the five Platonic solids.  It's kind of cool that there are only five; to me, it seems to lend them additional importance.

I like the subdivision stuff too, but my (totally subjective / aesthetic) opinion is that it's more useful for approximating spheres.

### In Topic: Ordinary Differential Equation ?

15 March 2013 - 10:26 AM

I also think linear algebra is very important for appreciating some of the things you'll do in a course on ODEs.  I'd say the most important concepts for this purpose are eigendecomposition (eigenvectors and eigenvalues) and the spectral theorem.  For instance, local equilibria of ODEs are characterized by the eigendecomposition of the Jacobian; and PDEs like the heat and wave equations are themselves solved by eigendecomposing a linear operator (the sines/cosines are its eigenvectors).
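As a tiny worked illustration of the equilibrium point (my example, not from the post): classify an equilibrium by the eigenvalues of the Jacobian, here for a damped oscillator x'' + 2*zeta*omega*x' + omega^2*x = 0 written as a first-order linear system.

```python
# Sketch: classifying a local equilibrium of an ODE by the eigenvalues of
# its Jacobian. Illustrative system: a damped oscillator written as
# (x, v)' = A @ (x, v), with A = [[0, 1], [-omega**2, -2*zeta*omega]].
import cmath

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the characteristic polynomial
    lambda^2 - trace*lambda + det = 0 (quadratic formula)."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def classify(eigs):
    """Stable equilibrium iff every eigenvalue has negative real part."""
    if all(lam.real < 0 for lam in eigs):
        return 'stable'
    if any(lam.real > 0 for lam in eigs):
        return 'unstable'
    return 'marginal'

omega, zeta = 2.0, 0.25                       # underdamped: a spiral sink
eigs = eigenvalues_2x2(0.0, 1.0, -omega**2, -2 * zeta * omega)
print(eigs, classify(eigs))
```

With damping (zeta > 0) the eigenvalues form a complex pair with negative real part, so the origin is a stable spiral; flip the sign of the damping term and the same test reports it unstable.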

### In Topic: RTS AI: Arbitrary Units And Maps

10 March 2013 - 09:12 PM

For chokepoint detection, see this thread:

http://www.gamedev.net/topic/601508-choke-point-detection/
