WeirdoFu

AI Algorithms on GPU


Recommended Posts

It seems that with the increasing power of GPUs on consumer graphics cards, more and more people are trying to figure out what GPUs can do other than graphics. The scientific computing community has been extremely interested in this, and has found that GPUs can sort large numbers of elements up to 50 times faster than a CPU can, thanks to their built-in parallelism. More on general-purpose computation on GPUs can be found here: http://www.gpgpu.org/

So, my question and idea is whether we can offload AI work from the CPU onto the GPU as well. To a certain extent it may not be practical, but it would just be one of those cool "hey look, my graphics card can do this too" kind of things. For AI in games, I guess the first question people will ask is, "Can we get a GPU to run the A* algorithm for us?" If we can, that could be a fairly interesting application too. There are already pure-GPU particle system engines, so why not GPU-based game AI engines? Steal a few cycles from the graphics and use them for better AI.

So, what do people think?

The problem is that it's expensive to get data off the graphics card, no matter how easy it is to put it on the card or do the operations. I don't think it will be practical, but I have no facts to back this statement up.

Your primary problem will be the types of operations the GPU can do. If you are really interested in pursuing this, I would suggest working out the simplest algorithm you can and heading over to the graphics forums to see if anyone has any ideas.

Off hand, my guess is that your first difficulty would be handling the random access needed when exploring neighbors in a search algorithm. Graphics cards are designed more for algorithms that take small, predetermined data sets and work on them in parallel.
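To make the random-access point concrete, here is a minimal CPU-side A* sketch (a hypothetical illustration; the `astar` function, the grid-of-costs encoding, and the Manhattan heuristic are my own choices, not from any GPU implementation). Every pop of the open list leads to lookups into arbitrary, data-dependent grid cells:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D grid of movement costs (0 = blocked).

    The open list is a priority queue; each pop triggers lookups into
    arbitrary grid cells -- exactly the random-access pattern that maps
    poorly onto a streaming, fixed-access GPU pipeline.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_list = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]:
                ng = g + grid[nr][nc]
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(
                        open_list,
                        (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]),
                    )
    return None  # no path exists
```

Which cells get touched next depends entirely on the heuristic values popped so far, so there is no fixed stencil a fragment program could be written against.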

Guest Anonymous Poster
A friend of mine showed me this:

http://graphics.cs.uiuc.edu/~jch/papers/gpucluster.pdf

A paper where they run the k-means clustering algorithm on a GPU and get a 1.5x to 3x speedup. It's pretty cool, but then I showed him this:

http://www.cs.cmu.edu/~dpelleg/download/kmeans.pdf

A paper where, through clever data structures, they speed up the same algorithm 170x.

An easier way to take advantage of hardware acceleration in AI-type algorithms is to use matrix math instead of loops wherever possible, then use a hardware-accelerated matrix library (like a machine-specific BLAS library). Matlab programmers are really good at this and call it "vectorizing" an algorithm. It often produces really elegant code, too. I'm not sure about A*, but a lot of neural network operations, for example, can be written in terms of matrix multiplications.
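As a rough illustration of the vectorizing idea (a hypothetical sketch; the function names are mine, and NumPy stands in for a BLAS-backed matrix library), a single neural network layer can be written either as a per-neuron loop or as one matrix-vector product:

```python
import numpy as np

def layer_loop(W, x, b):
    """Naive version: one dot product per neuron, in a Python loop."""
    out = np.empty(W.shape[0])
    for i in range(W.shape[0]):
        out[i] = np.dot(W[i], x) + b[i]
    return np.tanh(out)

def layer_vectorized(W, x, b):
    """Same computation expressed as a single matrix-vector product,
    which a tuned BLAS (or, in principle, a GPU) can accelerate."""
    return np.tanh(W @ x + b)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))   # weights: 64 neurons, 32 inputs
x = rng.standard_normal(32)         # input vector
b = rng.standard_normal(64)         # biases
assert np.allclose(layer_loop(W, x, b), layer_vectorized(W, x, b))
```

The two functions compute the same values; the vectorized form just hands the whole layer to the matrix library in one call instead of dispatching it element by element.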

Guest Anonymous Poster
The A* algorithm is actually pretty well suited to GPU programming, given the way it looks at its neighbors. If you could fit the map information into a float texture, this could, to my knowledge, be done. Then again, the trick of offloading the CPU by calculating things on the GPU is great because it leaves you more power for the AI.

I'd be pretty impressed if you could get a full A* algorithm working on the graphics card, but considering a real implementation should be able to handle an arbitrary graph (and not just a square grid), I don't imagine it'd run much faster.

Quote:
Original post by Foxostro
The problem is that it's expensive to get data off the graphics card, no matter how easy it is to put it on the card or do the operations. I don't think it will be practical, but I have no facts to back this statement up.


I don't know if that will be a huge problem. An A* algorithm only needs to return the completed path to be useful, and you can probably store the path in something like 500 bytes. An enormously complicated neural network only needs to return the values of the final layer. In general, the data that represents "what action should the AI take" is going to be pretty small. So as long as it's cheap for the GPU to send us back 1k or so of data, we'll be alright.
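As a back-of-the-envelope check on that readback estimate (a hypothetical sketch; the packing format and function name are my own choices), a path of 2D grid waypoints packs very small:

```python
import struct

def pack_path(path):
    """Pack a list of (x, y) grid waypoints as little-endian unsigned
    16-bit pairs -- the kind of compact result an A* search would need
    to read back from the card."""
    return b"".join(struct.pack("<HH", x, y) for x, y in path)

# Even a 100-waypoint path -- long by game standards -- is only
# 100 * 2 coordinates * 2 bytes = 400 bytes of readback.
path = [(i, i * 2) for i in range(100)]
blob = pack_path(path)
assert len(blob) == 400
```

So the readback volume really is on the order of the "1k or so" mentioned above; whether the transfer *latency* (as opposed to bandwidth) is acceptable per-frame is a separate question.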

Quote:
Original post by pinacolada
Quote:
Original post by Foxostro
The problem is that it's expensive to get data off the graphics card, no matter how easy it is to put it on the card or do the operations. I don't think it will be practical, but I have no facts to back this statement up.


I don't know if that will be a huge problem. An A* algorithm only needs to return the completed path to be useful, and you can probably store the path in something like 500 bytes. An enormously complicated neural network only needs to return the values of the final layer. In general, the data that represents "what action should the AI take" is going to be pretty small. So as long as it's cheap for the GPU to send us back 1k or so of data, we'll be alright.


Alright, nevermind then. I hope this idea pans out! :)

A* is not an algorithm that lends itself to parallelization, so it's unlikely that a GPU-based solution would be at all performant.

IMHO, the "Stupid GPU Tricks" fad is one that is fading, fast. The bottom line is that a graphics card is designed for graphics. It's more programmable than it used to be, but the transistor layout is still one that's optimized for the rasterization pipeline and not for other things. The need for generalized parallel processing hardware on consumer-level computers is clear, but the GPU is not the chip that will fill this niche. Time spent on bludgeoning a GPU into executing algorithm XYZ is time wasted.

Well, if memory doesn't fail me, some modern GPUs have higher transistor counts than many CPUs currently on the market. But I guess the main issue in trying to "bludgeon" a GPU into doing what you want is that you first have to "bludgeon" programmers into thinking in a different way.

Architecturally, a GPU operates at the opposite extreme from most general-purpose CPUs, if memory serves me correctly. Where standard CPUs are constantly swapping instructions, the GPU is constantly swapping data (streaming). So programming the two requires programmers to think differently.

Currently, it seems that only algorithms that can be massively parallelized will see a GPU speedup, since your data is loaded in the form of a texture and then modified by a fragment program, which runs in parallel for all pixels in the texture. So A* may not be the best candidate out there, but who knows; with some twisting of things, someone might be able to think differently and solve it in some strange, convoluted way.
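A rough sketch of that data-parallel model (hypothetical; NumPy on the CPU stands in for a fragment program, and the distance-relaxation kernel is my own example, not from this thread): treat the grid as a texture, and have every "texel" compute one output value from a fixed neighborhood of the previous pass:

```python
import numpy as np

def relax(cost, dist):
    """One data-parallel relaxation pass: every cell reads a fixed
    4-neighborhood of the previous distance field and writes one value.
    Because every cell does the same work on predetermined inputs, this
    is the kind of step a fragment program handles well -- unlike A*'s
    data-dependent open list."""
    padded = np.pad(dist, 1, constant_values=np.inf)
    neighbors = np.minimum.reduce([
        padded[:-2, 1:-1], padded[2:, 1:-1],   # up, down
        padded[1:-1, :-2], padded[1:-1, 2:],   # left, right
    ])
    return np.minimum(dist, neighbors + cost)

cost = np.ones((3, 3))                  # uniform movement cost
dist = np.full((3, 3), np.inf)
dist[0, 0] = 0.0                        # source cell
for _ in range(4):                      # enough passes to cover the grid
    dist = relax(cost, dist)            # each pass = one "render"
```

After enough passes, `dist` converges to the shortest-path distance from the source to every cell. Note the trade-off: this brute-force sweep touches every cell every pass, where A* would visit only the promising ones, so the parallelism buys regularity at the price of extra work.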

I guess the main reason I started this thread was to see what people thought about the possibility, whether it's there or not, and just to gauge reactions. As for anything concrete, I really don't have anything solid myself, which is something I'll probably work on in the future.
