AI Algorithms on GPU



#21 Marmakoide   Members   -  Reputation: 132


Posted 11 August 2005 - 08:45 PM

- Implementing A* on a GPU is silly: it needs random memory access, which is slow on a GPU.
- A* is for finding paths...
- An algorithm that finds paths the 'matrix' way? Perhaps this:
1) Build a matrix A where A(i,j) = 1 if node(j) can be reached from node(i) in one step, A(i,i) = 0, and A(i,j) = infinity otherwise
2) For the matrix multiplications, replace + by min, and * by +
3) Compute A^n : A^n(i,j) gives you the distance from node(i) to node(j), provided this distance is <= n
- A^n is cheap to compute with repeated squaring. Example:
A^7 = (A^4) * (A^3)
A^4 = (A^2) * (A^2)
A^3 = (A^2) * A
A^2 = A * A
4 multiplications instead of 6, O(log2(n)) instead of O(n) ;)

- It could help for pathfinding, but I'm not sure it's very interesting (complexity, efficiency).
Anyway, for a matrix with special structure (paths on a square grid, for example), I'm fairly sure there are tricks to reduce the computation. (A small code sketch of the idea follows below.)
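For what it's worth, here is a minimal CPU sketch of the idea above, written in Python with NumPy rather than as GPU code. The function names (`min_plus_mult`, `min_plus_power`) and the small line-graph example are my own illustration, not anything from an existing library.

import numpy as np

INF = float("inf")

def min_plus_mult(A, B):
    # Min-plus product: C[i, j] = min over k of (A[i, k] + B[k, j]).
    # Ordinary matrix multiplication with + replaced by min and * by +.
    n = A.shape[0]
    C = np.full((n, n), INF)
    for k in range(n):
        # Outer "sum" of column k of A and row k of B, folded in with min.
        C = np.minimum(C, A[:, k:k + 1] + B[k:k + 1, :])
    return C

def min_plus_power(A, n):
    # Repeated squaring: O(log n) min-plus products instead of n - 1.
    size = A.shape[0]
    result = np.full((size, size), INF)
    np.fill_diagonal(result, 0.0)          # min-plus identity matrix
    base = A.copy()
    while n > 0:
        if n & 1:
            result = min_plus_mult(result, base)
        base = min_plus_mult(base, base)
        n >>= 1
    return result

# Example: a 4-node line graph 0 - 1 - 2 - 3 with unit edge weights.
A = np.full((4, 4), INF)
np.fill_diagonal(A, 0.0)                   # staying put costs nothing
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

D = min_plus_power(A, 3)                   # shortest distances using at most 3 edges
print(D[0, 3])                             # -> 3.0

On a GPU, each min-plus product maps to a dense matrix kernel, which is the kind of regular, streaming workload GPUs handle well, in contrast to the pointer-chasing memory access of A*.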


#22 Name_Unknown   Members   -  Reputation: 100


Posted 11 August 2005 - 10:52 PM

Quote:
Original post by Sneftel
Mark my words: within five years, we'll see multi-core processing on Intel and AMD processors. Get ready for that.


Hmm, you do know the Athlon 64 X2 and the Pentium D are already available? ;-) I've had dual- and quad-CPU SMP machines for years.