sheep19

Posted 18 April 2013 - 09:44 AM

> What do you mean? In terms of the final path generated, A* is perfectly efficient. Are you talking about the problem of stating the algorithm in terms amenable to the GPU?

 

Yes. I meant spawning a GPU thread to calculate the path for each agent. The other agents would then need to be "informed" of the result (a write to GPU global memory). Or could this sometimes be avoided? (see below)
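
Roughly what I have in mind, as a very rough CUDA sketch (all names here are made up by me; a greedy single step stands in for the full per-thread A*, and the atomicCAS on a global reservation table is the "inform" write):

```
// Minimal sketch of the one-thread-per-agent idea. A greedy step is a
// placeholder for a real per-thread A* search; the atomicCAS on the
// global reservation table is the "inform the other agents" write.
#include <cstdio>
#include <cuda_runtime.h>

#define W 10   // grid width
#define H 10   // grid height

__global__ void planStep(const int *pos, const int *goal, int *reserved,
                         int *next, int nAgents)
{
    int a = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per agent
    if (a >= nAgents) return;

    int x = pos[a] % W, y = pos[a] / W;
    int gx = goal[a] % W, gy = goal[a] / W;

    // Greedy 4-connected step toward the goal (placeholder for A*).
    int nx = x, ny = y;
    if (gx != x)      nx += (gx > x) ? 1 : -1;
    else if (gy != y) ny += (gy > y) ? 1 : -1;
    int cell = ny * W + nx;

    // Publish the move in global memory; if another agent already
    // reserved the cell, wait in place this step.
    int old = atomicCAS(&reserved[cell], -1, a);
    next[a] = (old == -1) ? cell : pos[a];
}

int main()
{
    const int n = 2;
    int hPos[n]  = {0, W * H - 1};   // opposite corners of the grid
    int hGoal[n] = {W * H - 1, 0};
    int *pos, *goal, *reserved, *next;
    cudaMalloc(&pos,      n * sizeof(int));
    cudaMalloc(&goal,     n * sizeof(int));
    cudaMalloc(&reserved, W * H * sizeof(int));
    cudaMalloc(&next,     n * sizeof(int));
    cudaMemcpy(pos,  hPos,  sizeof hPos,  cudaMemcpyHostToDevice);
    cudaMemcpy(goal, hGoal, sizeof hGoal, cudaMemcpyHostToDevice);
    cudaMemset(reserved, 0xFF, W * H * sizeof(int));  // -1 = cell free
    planStep<<<1, 32>>>(pos, goal, reserved, next, n);
    int hNext[n];
    cudaMemcpy(hNext, next, sizeof hNext, cudaMemcpyDeviceToHost);
    printf("agent 0 -> cell %d, agent 1 -> cell %d\n", hNext[0], hNext[1]);
    return 0;
}
```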

 

> I read somewhere that I should spawn a thread for each agent, which seems reasonable.

 

> No, for most applications it's not a reasonable approach and is likely to cause far more problems than it will solve, especially given that you are doing GPU work.

 

> On the whole this sounds like a very difficult problem. Recalculating paths to avoid other agents is non-trivial and has spawned several variations on A* to make it work. A* itself is probably not well suited to GPU acceleration, as it has many stages that depend on reading previous writes. Attempting to solve both of these issues in one project is likely to be quite challenging.
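
That dependency is visible in the core loop itself. A quick runnable sketch (plain C++ on a small grid, so it compiles under nvcc; all names are mine, not from any particular implementation) of the part that can't simply be split across threads:

```
// Serial A* core loop on a 4-connected 10x10 grid with unit step costs
// and a Manhattan heuristic. Each iteration pops and expands state that
// earlier iterations just wrote (the open list and the cost array),
// which is why a naive node-parallel GPU mapping doesn't work.
#include <cstdio>
#include <cstdlib>
#include <functional>
#include <queue>
#include <vector>

const int W = 10, H = 10;

int heuristic(int c, int goal) {
    return abs(c % W - goal % W) + abs(c / W - goal / W);
}

int main() {
    int start = 0, goal = W * H - 1;
    std::vector<int> g(W * H, 1 << 30);   // best-known cost per cell
    // min-heap of (f = g + h, cell)
    std::priority_queue<std::pair<int, int>,
                        std::vector<std::pair<int, int>>,
                        std::greater<std::pair<int, int>>> open;
    g[start] = 0;
    open.push({heuristic(start, goal), start});
    while (!open.empty()) {
        int c = open.top().second;        // reads what earlier iterations wrote
        open.pop();
        if (c == goal) break;
        int x = c % W, y = c / W;
        int nb[4] = { x > 0 ? c - 1 : -1, x < W - 1 ? c + 1 : -1,
                      y > 0 ? c - W : -1, y < H - 1 ? c + W : -1 };
        for (int s : nb) {
            if (s < 0) continue;
            int gs = g[c] + 1;            // depends on g[c] written earlier
            if (gs < g[s]) {              // read-after-write on the cost array
                g[s] = gs;
                open.push({gs + heuristic(s, goal), s});
            }
        }
    }
    printf("path length from %d to %d: %d steps\n", start, goal, g[goal]);
    return 0;
}
```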

 

I see. I have talked with my professor, and he told me it would be enough to take some measurements of an existing implementation (there is one on the NVidia website) - for example, how it scales with a varying number of agents.
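
The measurements themselves should be straightforward with CUDA events; something like this, where pathfindKernel is just a stand-in for whatever kernel the downloaded implementation actually launches:

```
// Minimal scaling-measurement sketch using CUDA events. pathfindKernel
// is a placeholder; only the timing pattern is the point here.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void pathfindKernel(int nAgents) { /* stand-in workload */ }

int main() {
    for (int nAgents = 64; nAgents <= 4096; nAgents *= 2) {
        cudaEvent_t t0, t1;
        cudaEventCreate(&t0);
        cudaEventCreate(&t1);
        cudaEventRecord(t0);
        pathfindKernel<<<(nAgents + 255) / 256, 256>>>(nAgents);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);           // wait for the kernel to finish
        float ms = 0.f;
        cudaEventElapsedTime(&ms, t0, t1);  // milliseconds between events
        printf("%d agents: %.3f ms\n", nAgents, ms);
        cudaEventDestroy(t0);
        cudaEventDestroy(t1);
    }
    return 0;
}
```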

 

If I don't manage to get the ready-made implementation to work, I will write one of my own, even if the performance turns out to be bad - I know it's going to be hard to beat the CPU here. I was also thinking of some "tricks" to reduce GPU communication, like an agent not writing any data when it is guaranteed that no other agent can be at a given cell at a given time.

(For example, with two agents on a 10x10 grid starting on opposite sides, there is no chance that they will occupy the same cell within two steps - see the sketch below.)
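
The check itself is cheap, assuming 4-connected movement of at most one cell per step: two agents can only meet within k steps if their Manhattan distance is at most 2k. A hypothetical host-side helper:

```
// Sketch of the "no communication needed" test described above.
// Each agent can close at most k cells of Manhattan distance in k
// steps, so a pair further apart than 2k cannot collide and neither
// agent needs to publish its path for those k steps.
#include <cstdio>
#include <cstdlib>

const int W = 10;   // grid width; cells are indexed as y * W + x

bool mayCollideWithin(int a, int b, int k) {
    int dist = abs(a % W - b % W) + abs(a / W - b / W);
    return dist <= 2 * k;
}

int main() {
    int a = 0, b = W * W - 1;   // opposite corners of a 10x10 grid
    printf("within 2 steps: %s\n", mayCollideWithin(a, b, 2) ? "yes" : "no");
    return 0;
}
```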

