Emergent

Members

  • Content count: 1236

Community Reputation: 982 Good

About Emergent

  • Rank: Contributor
  1. Kalman filter without using matrix inverse

    There are three forms of the Kalman filter; you might consider using another one.

    There's the covariance form, which you are using. This requires a matrix inversion at each step, but gives you the state estimate directly. This is the form most people know.

    There's also an information form, which works with the inverse of the covariance matrix (called the (Fisher) information matrix). This does not require a matrix inversion at each step -- but the state estimate is not directly accessible. Instead of a state estimate, you have an information vector. You can reconstruct the state estimate from the information vector, but this requires a matrix inverse (or at least a linear solve). One nice thing about this form is that you can specify a totally noninformative prior for the state by setting the initial information matrix to zero. Also, there are reasonable assumptions under which information matrices are sparse (but covariance matrices are not), so this form can sometimes be efficient for really large problems with sparsity.

    Finally, there's the square root form. Here, instead of working with either the covariance or its inverse, you work with a matrix square root of the covariance matrix (e.g., its Cholesky factor), and never actually construct the covariance matrix directly. This is usually the most computationally efficient filter. [EDIT: Actually, it's the most expensive; what it is is numerically stable.] It also avoids the problem of ensuring that your covariance or information matrix remains positive definite in the face of roundoff errors, since it's impossible to represent anything but positive definite matrices this way. [EDIT: This is largely the point of this form.]
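    In case it's useful, here's a minimal sketch of the information-form measurement update using Eigen (the function names and layout are mine, not from any particular filtering library):

    [source]
#include <Eigen/Dense>
using Eigen::MatrixXd;
using Eigen::VectorXd;

// Information-form measurement update: no (n x n) inversion needed.
// R_inv, the inverse measurement-noise covariance, is assumed known
// up front -- in practice it is often just diagonal.
void informationUpdate(MatrixXd& Y, VectorXd& y,   // information matrix / vector
                       const MatrixXd& H,          // measurement model
                       const MatrixXd& R_inv,
                       const VectorXd& z)          // measurement
{
    Y += H.transpose() * R_inv * H;
    y += H.transpose() * R_inv * z;
}

// Reading out the state estimate is where the linear algebra finally
// bites: solve Y x = y rather than forming Y's inverse explicitly.
VectorXd stateEstimate(const MatrixXd& Y, const VectorXd& y)
{
    return Y.ldlt().solve(y);
}
    [/source]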
  2. Pathfinding A* with multiple agents

    There is a lot that you can do to parallelize pathfinding, even for a single agent.

    I would start by considering more basic algorithms than A*, and only incorporate some of A*'s features (prioritized sweeping, a heuristic) as you understand how to do the more basic algorithms in parallel.

    Conceptual starting point #1: Breadth-first search

    A* is basically a souped-up breadth-first search. You can do BFS in parallel: Expand all the nodes in a ply in parallel, and synchronize between plies. Can you do that on the GPU?

    Conceptual starting point #2: Value iteration

    Value iteration is more general than A* and very easy to parallelize. In the case of the single-source shortest path problem, it reduces to the following. A value d(v) is associated to each vertex v; it represents the distance to the goal. It is initialized,

    d(goal) = 0
    d(v) = Infinity for all v != goal

    Then, you simply iterate the following update:

    d(v) := min over u in neighbors(v) of [ edge_cost(v,u) + d(u) ]

    E.g., if vertex v has three neighbors, u1, u2, and u3, then it looks at the three quantities,

    edge_cost(v, u1) + d(u1)
    edge_cost(v, u2) + d(u2)
    edge_cost(v, u3) + d(u3)

    and picks whichever one is smallest.

    This minimization can be performed in parallel by all vertices. To do it, a vertex needs only to look at its neighbors' values. This can be done either in a "Jacobi style" (in which new values are written to a separate buffer, and buffers are swapped at the end of the iteration -- this is the classic value iteration), or in a "Gauss-Seidel style" (in which new values are written in place back to the original buffer -- sometimes called "asynchronous dynamic programming"). The point is that you have a great deal of flexibility in the order in which you do these updates while still guaranteeing convergence. Dijkstra's algorithm and A* are "just" very, very careful optimizations of this basic iteration: first to avoid updating nodes that don't need updates, and then to bias the updates in the direction of the goal (while making certain guarantees).

    Detail: Bidirectional search

    Shortest path problems can be solved more quickly by bidirectional search. At the very least, you can perform the two searches in parallel (though this is more "CPU-level parallelism" than "GPU-level parallelism").
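    To make the Jacobi style concrete, here's a small serial sketch in C++ (data layout and names are mine). The per-vertex loop in the middle is the part you'd hand to the GPU, one thread per vertex, swapping buffers between sweeps:

    [source]
#include <algorithm>
#include <limits>
#include <vector>

// Jacobi-style value iteration for single-source shortest paths.
// Each sweep reads from 'd' and writes to 'dNew', so every vertex's
// update is independent of the others within a sweep.
struct Edge { int to; float cost; };

std::vector<float> shortestDistances(
    const std::vector<std::vector<Edge>>& neighbors, int goal)
{
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> d(neighbors.size(), INF), dNew(neighbors.size());
    d[goal] = 0.0f;

    bool changed = true;
    while (changed) {
        changed = false;
        for (size_t v = 0; v < neighbors.size(); ++v) {   // parallelizable
            float best = (v == static_cast<size_t>(goal)) ? 0.0f : INF;
            for (const Edge& e : neighbors[v])
                best = std::min(best, e.cost + d[e.to]);
            if (best != d[v]) changed = true;
            dNew[v] = best;
        }
        std::swap(d, dNew);   // the synchronization point between sweeps
    }
    return d;
}
    [/source]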
  3. How to calculate polyhedra in realtime...

    Personally, I'd just make five levels, corresponding to the five Platonic solids.  It's kind of cool that there are only five; to me, it seems to lend them additional importance.   I like the subdivision stuff too, but my (totally subjective / aesthetic) opinion is that it's more useful for approximating spheres.
  4. Ordinary Differential Equation ?

    I also think linear algebra is very important for appreciating some of the things you'll do in a course on ODEs. I'd say the most important concept for this purpose is eigendecomposition (eigenvectors and eigenvalues) and the spectral theorem. For instance, local equilibria of ODEs are characterized by the eigendecomposition of the Jacobian; and ODEs like the heat and wave equations are themselves solved by eigendecomposing a linear operator (the sines/cosines are its eigenvectors).
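    To make the first point concrete, here's a tiny sketch using Eigen's EigenSolver: a (hyperbolic) equilibrium of dx/dt = f(x) is locally asymptotically stable exactly when every eigenvalue of the Jacobian there has negative real part. The matrix below is a made-up example:

    [source]
#include <Eigen/Eigenvalues>
#include <iostream>

// Classify a local equilibrium of dx/dt = f(x): at a point where
// f = 0, the equilibrium is locally asymptotically stable if every
// eigenvalue of the Jacobian of f there has negative real part.
int main()
{
    Eigen::Matrix2d J;       // example Jacobian (made up)
    J << -1.0,  2.0,
          0.0, -3.0;

    Eigen::EigenSolver<Eigen::Matrix2d> es(J);

    bool stable = true;
    for (int i = 0; i < J.rows(); ++i)
        if (es.eigenvalues()[i].real() >= 0.0)
            stable = false;

    std::cout << "eigenvalues:\n" << es.eigenvalues() << "\n"
              << (stable ? "stable" : "not stable") << "\n";
    return 0;
}
    [/source]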
  5. For chokepoint detection, see this thread:   http://www.gamedev.net/topic/601508-choke-point-detection/
  6. Levenberg-Marquardt NN learning

    I haven't looked at your code.  However, the first thing I do whenever I write any local optimization routine is to check the derivatives with finite differences.  Unless something more obvious pops out, I'd recommend starting with that.  (And it's a useful test to have anyway.)
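    For what it's worth, this is the kind of check I mean -- a central-difference sketch (names, step size, and tolerance are my own choices; tune them to your problem):

    [source]
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

// Compare an analytic gradient against central differences.  If the
// analytic code is right, the disagreement should be tiny (the
// finite-difference error itself scales like h^2).
bool checkGradient(
    const std::function<double(const std::vector<double>&)>& f,
    const std::function<std::vector<double>(const std::vector<double>&)>& grad,
    std::vector<double> x,
    double h = 1e-5, double tol = 1e-4)
{
    const std::vector<double> g = grad(x);
    bool ok = true;
    for (size_t i = 0; i < x.size(); ++i) {
        const double xi = x[i];
        x[i] = xi + h; const double fp = f(x);
        x[i] = xi - h; const double fm = f(x);
        x[i] = xi;
        const double fd  = (fp - fm) / (2.0 * h);
        const double err = std::fabs(fd - g[i]) / std::max(1.0, std::fabs(fd));
        if (err > tol) {
            std::printf("component %zu: analytic %g vs. finite-difference %g\n",
                        i, g[i], fd);
            ok = false;
        }
    }
    return ok;
}
    [/source]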
  7. slope of a line

    How many points are you requesting as input? (I understand how two points specify a line. But if you have three or more points, then I would like to understand how you want to interpret them. For example, are you drawing a polyline? Or do you just have two points?)

    You write,

    The slope of a line is the same everywhere; it does not depend on what point you look at.

    Also, what do you mean by "tangent"? Do you mean the "tan" function from trigonometry? I ask because the "tangent" to a curve at a point is a line that (a) passes through that point, and (b) has the same slope as the curve at that point. But the tangent to a line, at any point, is just the line itself...

    If your line contains the points (x1, y1) and (x2, y2), then its slope is

    m = (y2 - y1)/(x2 - x1) .
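    In code, with a guard for the one case the formula doesn't cover (a vertical line, where x1 == x2 and the slope is undefined) -- just a sketch:

    [source]
#include <optional>

// Slope of the line through (x1,y1) and (x2,y2); vertical lines
// have no finite slope, so return an empty optional for them.
std::optional<double> slope(double x1, double y1, double x2, double y2)
{
    if (x1 == x2) return std::nullopt;   // vertical line
    return (y2 - y1) / (x2 - x1);
}
    [/source]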
  8. Scaling physics simulations

    What is the golf ball colliding with? A heightmap? Anything else?

    I ask because the problem of detecting collisions between a sphere and a heightmap is a lot simpler than the general collision detection problem. There is a lot of structure to the problem that a general-purpose physics engine won't exploit. Specifically, you always know whether the ball is below or above ground by comparing its altitude to the terrain altitude -- regardless of whether you have "noticed" an intersection between the sphere and the terrain's infinitely thin surface.

    If you were to approximate the ball by a point, then this would be trivial:

    [source]
if(ball.z > terrain_height(ball.x, ball.y))
{
  // Above ground
}
else
{
  // Below ground
}
    [/source]

    Since your ball has a nonzero radius, this gets very slightly trickier, but not by much. Formally, the thing you'd want is the Minkowski sum of a sphere with your heightmap. I'm assuming you actually have a very special case -- that your sphere's radius is smaller than 1/2 the vertex spacing in your heightmap -- which will make things simpler. I haven't worked out the details myself, but I assume other people have and you can find them with some googling. It's going to boil down to a few cases: When you're near a concave vertex, your dilated heightmap will return the z-coordinate of a sphere of radius 'r' centered at that vertex; and when you're away from a vertex, it will return the height of the corresponding plane pushed out by a distance 'r' along its normal (where 'r' is the radius of the golf ball). I've illustrated this in 1d below: The original heightmap is in blue, and the "dilated" heightmap is in red.

    [attachment=13653:heightmap-sphere-minkowski-sum.png]

    [EDIT: In fact, other people have had similar problems with ODE and heightmaps.]
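    To make the 1d picture concrete, here's a sketch of the dilated-heightmap query for a polyline (my own code, assuming vertices sorted by increasing x; a real implementation would only visit the few nearby cells instead of looping over everything). The ball's center at height z over position x is above ground exactly when z > dilatedHeight(hm, r, x):

    [source]
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct P { double x, y; };

// Height of the Minkowski sum of the polyline heightmap 'hm' with a
// disc of radius r, at query coordinate xq: the max over (a) circle
// caps centered on each vertex, and (b) each segment pushed out by r
// along its upward unit normal.
double dilatedHeight(const std::vector<P>& hm, double r, double xq)
{
    double best = -std::numeric_limits<double>::infinity();

    for (size_t i = 0; i < hm.size(); ++i) {
        // (a) cap of the circle of radius r centered on vertex i
        const double dx = xq - hm[i].x;
        if (std::fabs(dx) <= r)
            best = std::max(best, hm[i].y + std::sqrt(r*r - dx*dx));

        if (i + 1 == hm.size()) break;

        // (b) segment i..i+1, offset by r along its upward normal
        const double sx = hm[i+1].x - hm[i].x, sy = hm[i+1].y - hm[i].y;
        const double len = std::hypot(sx, sy);
        const double nx = -sy / len, ny = sx / len;   // upward since sx > 0
        const double lo = hm[i].x + r*nx, hi = hm[i+1].x + r*nx;
        if (xq >= lo && xq <= hi)
            best = std::max(best, hm[i].y + r*ny + (sy / sx) * (xq - lo));
    }
    return best;
}
    [/source]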
  9. Ordinary Differential Equation ?

    Linear algebra. Linear algebra. Linear algebra.
  10. Compressed Sensing for Voxel Data Compression

    To answer my own question: When you throw out the small coefficients of the Hadamard transform in 1d, what you get looks awful (see attached image).
  11. Compressed Sensing for Voxel Data Compression

    I think a lot of people conflate "compressed sensing" with L1 regularization...

    Taking a step back from L1 regularization, and thinking just about the choice of basis: I wonder how far the OP would get with splitting the grid up into blocks and doing a Hadamard transform of each. It's more or less the direct generalization of JPEG to the "3d binary pixels" case...
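    For anyone who wants to try that experiment, the 1d fast Walsh-Hadamard transform is only a few lines (a sketch; the 3d version just applies this along each axis of a block in turn):

    [source]
#include <vector>

// In-place fast Walsh-Hadamard transform of a block whose length is a
// power of two.  Applying it again and dividing every entry by the
// length inverts it, so "transform, zero the small coefficients,
// transform back" is cheap to experiment with.
void fwht(std::vector<double>& a)
{
    for (size_t h = 1; h < a.size(); h *= 2)
        for (size_t i = 0; i < a.size(); i += 2 * h)
            for (size_t j = i; j < i + h; ++j) {
                const double x = a[j], y = a[j + h];
                a[j]     = x + y;
                a[j + h] = x - y;
            }
}
    [/source]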
  12. Compressed Sensing for Voxel Data Compression

    I get what you're going for: You throw a huge set of basis functions at your data; you find a representation in terms of a small number of them; and you send the coefficients.

    From one angle, this is the same reason all the Fourier image compression techniques work (JPEG and its descendants): It happens that, with this choice of basis, most of your image energy gets concentrated in a small number of coefficients (in this case, even without doing anything to explicitly encourage sparsity). So why not try other bases, and actually reward sparsity with L1 regularization? It sounds plausible, right?

    Unfortunately, other people have had the same idea, and as far as I know, none of them have actually been able to achieve competitive compression of e.g. images in this manner.

    So if this is for your job and you need results soon, I might encourage you to look elsewhere. But if this is for research or a hobby project -- then who knows? Go grab a convex programming solver and see if you get anywhere.
  13. Short post:

    Convex optimization.

    In reality, there are about four kinds of problems in the world that you can solve, and everything consists of reducing other problems to them:
    1.) Linear systems of equations
    2.) Eigenvector/eigenvalue problems
    3.) Convex optimization problems (which really include 1, and mostly also include 2)
    4.) Low-dimensional dynamic programming

    I exaggerate slightly, but not by much.

    Long post:

    To follow up on alvaro's comment about control theory:

    There are a couple of core ideas that a lot of different people study, from slightly different angles and with slightly different tools. These people include
    - control theoreticians,
    - roboticists,
    - classical AI (planning) people,
    - machine learning researchers, and
    - operations researchers.

    The differences between these groups are as much social, cultural, and historical as they are technical.

    In my own view, there are a small number of core problems that they all aim to solve. The two that really stand out are:

    1.) "Markov decision problems" -- a term that I use broadly to include both discrete time and state spaces (what people normally mean when they say "Markov Decision Problem" (MDP)) and continuous time and state spaces (which people normally call "optimal control"). Additionally, problems here may or may not contain randomness, and each of these variants has a very slightly different corresponding theory, but the underlying ideas are essentially the same. Do you want to find the shortest path through a maze? Or how to steer a fighter jet to perform a barrel roll? Or how to move a robot arm around an obstacle? These are MDP/optimal-control problems.

    2.) State estimation problems ("filtering," "smoothing," and trajectory estimation). Whereas in my last bullet point the goal was to get a system to a particular state, here the goal is to figure out what state a system is in. Depending again on whether you are interested in discrete- or continuous- time- or state-spaces, depending on what noise model you assume, and depending on which of these problems you want to solve (filtering, smoothing, trajectory estimation), you end up with e.g. the "forward algorithm" for hidden Markov models, the Kalman filter for linear systems, or the Viterbi algorithm, among other closely related algorithms.

    There are generalizations and combinations of these ideas in all sorts of directions. E.g., an operations researcher might say that Markov decision problems are "just" optimization problems with a particular structure, and go off to look at Lagrange multipliers and duality gaps. He might also point out all the problems that don't most naturally have this structure. Likewise, some machine learning people might say that the idea of different "states" at different "times," again, "just" describes a particular structure of Bayes net. You can also combine #1 and #2 to arrive at POMDPs, which are the most general (single-player) discrete-time-and-state problem. But despite all this, I think the two problems I listed above capture the essence of most things.

    As for "statistics": I do not think that there is a single unified intellectual edifice with this name. The only thing that makes any sense to me philosophically is personalist Bayesian statistics, and even that I'm still figuring out.
  14. How to simulating a turn radius

    Dynamics for a simple ("unicycle") vehicle in two dimensions:

    The way to think about this is that your vehicle has some state that evolves according to some dynamics -- this captures the physics of the vehicle -- and it also has some control inputs that describe what the driver is commanding the vehicle to do (e.g., accelerate, brake, turn). To organize our thoughts, we'll separate these different things.

    The vehicle's state x = (p, R, s) consists of
    p in R^2 (position),
    R in SO(2) (orientation: first column is the forward vector; second column is the left vector), and
    s in R (speed).

    The control input is u = (a, w), with
    a in R (acceleration: positive for going faster; negative for braking), and
    w in R (turning rate).

    Finally, the dynamics (the time evolution of the state) are given by the following differential equations:

    dp/dt = s R e_1 (where e_i is the i-th column of the 2x2 identity matrix)
    dR/dt = w R J (where J is the 90-degree rotation matrix)
    ds/dt = a

    This fully describes the vehicle. By evolving everything according to these dynamics, you will get consistent behavior, without side-slip.

    One question remains, and that is how to choose your control inputs. This depends on what you are trying to achieve -- e.g., going to a specific point, following a road, etc. -- and is typically done by designing a controller (aka behavior), which is a function that takes the state x and returns a control input u.

    You seem concerned with achieving a specified turn radius. To do that, for a radius r, you need to choose your control input w so that s = w r, i.e., w = s/r.
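    If it helps, here's a minimal forward-Euler sketch of these dynamics. I've represented the rotation matrix R by a heading angle theta (an equivalent parametrization of SO(2)), which turns dR/dt = w R J into dtheta/dt = w:

    [source]
#include <cmath>

// State: position (px, py), heading angle theta (standing in for the
// rotation matrix R), and speed s.  Controls: acceleration a and
// turning rate w.
struct Vehicle { double px, py, theta, s; };

void step(Vehicle& v, double a, double w, double dt)
{
    v.px    += dt * v.s * std::cos(v.theta);   // dp/dt = s R e_1
    v.py    += dt * v.s * std::sin(v.theta);
    v.theta += dt * w;                         // dR/dt = w R J
    v.s     += dt * a;                         // ds/dt = a
}

// To hold a turn of radius r at the current speed: w = v.s / r.
    [/source]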
  15. Quirky neural network AI

    [quote name='Psilobe' timestamp='1351111677' post='4993544']I play against myself and every time I move the pad that the AI is going to use I save away some data [...] The data I save away is the balls x,y position as well as ball direction x and y. The y position of the pad. This is what I use for input. If the pad moves up then I set target outputs for the two output neurons to 1 and 0 and if I move down I set it to 0 and 1.[/quote]

    [quote name='dominik81']So in fact you're not training the network how to play pong, but how you play pong.[/quote]

    Interesting. There is a whole area called [i]Apprenticeship Learning[/i] that is very interesting. The name that pops into my head is Andrew Ng; he has a nice paper [url="http://www.robotics.stanford.edu/~ang/papers/icml04-apprentice.ps"]here[/url] (PostScript format; use [url="http://www.ghostscript.com/download/gsdnld.html"]Ghostscript[/url] to view it, or view it online with [url="http://docs.google.com/viewer?url=http%3A%2F%2Fwww.robotics.stanford.edu%2F~ang%2Fpapers%2Ficml04-apprentice.ps"]Google Docs[/url]). The idea here is not to directly learn [i]what[/i] the player does in each state, but rather to learn [i]which states[/i] the player is trying to get to. Another good paper would be [url="http://www.cs.princeton.edu/~schapire/papers/SyedSchapireNIPS2007.pdf"]this one[/url].

    Moving on... So you're assuming that the state of the game is the ball's position and velocity, together with the player's paddle position. What about the other player's paddle position? It would seem that you would also need that to encode the whole state.

    Anyway, basically what you're doing is using two binary classifiers (which together can produce four output values: 00, 01, 10, and 11) to learn one of three things (UP, DOWN, or STAY_STILL). In your case, you're mapping 10 to UP, 01 to DOWN, and (I guess?) both 00 and 11 to STAY_STILL. Seems reasonable.

    It wouldn't be a bad idea to try some easier-to-train classifiers (than ANNs) on the problem, like SVMs. There's also nothing to stop you from doing this in a "lazy learning" way, by just searching for the closest data point and doing whatever was done there (or taking the k nearest neighbors and using a majority-rules scheme, or something of that sort). Both approaches will be much faster than multilayer ANNs.
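    Here's a rough sketch of that lazy-learning approach (the five-number state layout follows the inputs described in the first quote; everything else is my own invention, not the OP's code):

    [source]
#include <algorithm>
#include <array>
#include <vector>

// Store (state, action) pairs from the human's games, then act by
// majority vote among the k nearest recorded states.
// State: ball x, ball y, ball direction x, ball direction y, paddle y.
enum Action { UP, DOWN, STAY_STILL };
struct Sample { std::array<double, 5> state; Action action; };

Action choose(const std::vector<Sample>& data,
              const std::array<double, 5>& s, size_t k)
{
    std::vector<std::pair<double, Action>> byDist;   // (squared distance, action)
    for (const Sample& d : data) {
        double dist2 = 0.0;
        for (int i = 0; i < 5; ++i) {
            const double diff = d.state[i] - s[i];
            dist2 += diff * diff;
        }
        byDist.push_back({dist2, d.action});
    }

    k = std::min(k, byDist.size());
    std::partial_sort(byDist.begin(), byDist.begin() + k, byDist.end());

    int votes[3] = {0, 0, 0};
    for (size_t i = 0; i < k; ++i) ++votes[byDist[i].second];
    return static_cast<Action>(std::max_element(votes, votes + 3) - votes);
}
    [/source]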