Some ideas on an AI approach for a multi-agent cooperative puzzle-solving game


I am trying to come up with an AI architecture for a puzzle game. The basic idea is that there are teams of NPCs and teams of players, and each team is presented with the same puzzle; the first team to solve it wins. NPCs/players move as in a puzzle platformer: walk, run, slide, jump, interact, etc. The puzzles are spatial arrangements where NPCs/players move, slide, or otherwise manipulate objects into target locations. To make things more complicated, team cooperation is required, as some actions need multiple NPCs/players with careful positioning and synchronized motion.

Any ideas on how I could approach this? I am wondering how similar games (are there similar games?) approach AI. I believe that the most common approach would be to use some search algorithm. I was thinking about GOAP, but I don't know how to express the search space: I have multiple agents, continuous motion of several types, and discrete actions with synchronization constraints. Any suggestions or ideas would be really appreciated.
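To make the question a bit more concrete, here is a very rough sketch of how I currently imagine a planning state and action could be represented. All names are made up and nothing is implemented; continuous motion is collapsed into grid cells purely for the planner, and I am not sure this representation even makes sense:

# Rough sketch only; hypothetical names, continuous motion collapsed to cells.
from dataclasses import dataclass
from typing import FrozenSet, Tuple

Vec2 = Tuple[int, int]  # a grid cell the planner reasons about

@dataclass(frozen=True)
class WorldState:
    agent_positions: Tuple[Vec2, ...]   # one entry per NPC on the team
    object_positions: Tuple[Vec2, ...]  # pushable blocks, levers, plates, ...
    flags: FrozenSet[str] = frozenset() # e.g. "door_open", "bridge_extended"

@dataclass(frozen=True)
class Action:
    agent: int                      # which team member performs it
    name: str                       # "walk", "slide", "jump", "push", ...
    target: Vec2                    # where the action is aimed
    partners: Tuple[int, ...] = ()  # agents that must act in the same tick

With something like this, a GOAP-style planner would still need preconditions and effects per Action, and I am not sure the synchronized cooperative actions fit the usual GOAP formulation at all.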


beetree2 said:
I am wondering how similar games (are there similar games?) approach AI. I believe that the most common approach would be to use some search algorithm.

This makes me think of a game like Lost Vikings?
The problem I see is that the game's rules do not describe the potential actions the way chess rules would.
Things like moving objects in smooth steps, plus the option to do this cooperatively (parallel actions in time), are hard to represent as some quantized move. So how would you define a search space, e.g. something like the graph of all potential moves in a chess game?

I have never seen a game AI which could do this. The closest seems to be stuff like this.
Perhaps you could record play sessions of human players and use them to train some ML approach, like Forza does. But I doubt this would work robustly for a game requiring this amount of problem detection, planning, and execution.

So maybe you could replace the AI with the concept of leaderboards, where you play against the most successful recorded game session of a level so far and try to beat it.
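The replay part of that is cheap if your simulation uses a fixed timestep and is deterministic: record the per-tick inputs of the best run and feed them back as a ghost. A minimal sketch, assuming such a setup (all names made up):

# Minimal ghost-replay sketch; assumes a fixed timestep and a deterministic sim.
class ReplayRecorder:
    def __init__(self):
        self.frames = []  # (tick, {agent_id: input_snapshot}) per simulation tick

    def record(self, tick, inputs_per_agent):
        self.frames.append((tick, dict(inputs_per_agent)))

class GhostPlayback:
    def __init__(self, frames):
        self.frames = frames
        self.index = 0

    def inputs_for(self, tick):
        # Feed the recorded inputs back into the sim at the matching tick.
        if self.index < len(self.frames) and self.frames[self.index][0] == tick:
            inputs = self.frames[self.index][1]
            self.index += 1
            return inputs
        return None  # no recorded input for this tick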

Yes, some actions are continuous and others are discrete. There are search-based techniques that can be used: approaches like MCTS, sampling-based planning, metaheuristics, etc. Technically you could even use more standard search like A*; it's just that the search space is very large and complex. As you note, the continuous actions are hard to model without doing some kind of continuous simulation. One workaround is to make all actions discrete, so movement happens in atomic discrete steps. Planning in time and space is more difficult, but there are approaches. Multi-agent planning makes it more difficult still, but again there are approaches.

Most of these problems have been covered in the academic literature on robotics, AI planning, etc. The issue is that it's all relatively difficult and incomplete. I was hoping there might be easier ways, or someone with experience who could say “that's easy, I did this …” or “you can do this …”, without me trying to understand the academic papers :( But thanks Joe for helping me. I think I have some more ideas; maybe they will work out.
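For the record, the “make all actions discrete” workaround I mean is roughly the following kind of search. This is just a toy sketch I put together to think it through (a tiny empty grid, two agents, one heavy box that only slides when both agents push it in the same tick); a real level would need A* or MCTS instead of plain BFS, plus proper collision and physics:

# Toy sketch of discretized multi-agent planning: plain BFS over joint actions.
# Everything is hypothetical: a tiny empty grid, two agents, and one heavy box
# that only moves when BOTH agents stand next to it and choose "push" together.
from collections import deque
from itertools import product

MOVES = {"stay": (0, 0), "left": (-1, 0), "right": (1, 0), "up": (0, -1), "down": (0, 1)}
GRID_W, GRID_H = 6, 4
GOAL = (5, 1)  # cell the box must reach

def in_bounds(p):
    return 0 <= p[0] < GRID_W and 0 <= p[1] < GRID_H

def neighbors4(p):
    return [(p[0] + 1, p[1]), (p[0] - 1, p[1]), (p[0], p[1] + 1), (p[0], p[1] - 1)]

def step(state, joint_action):
    """Apply one synchronized joint action (one choice per agent); None if illegal."""
    a1, a2, box = state
    agents = (a1, a2)
    if all(m == "push" for m in joint_action):
        # Cooperative, synchronized action: both agents must be adjacent to the box.
        if all(a in neighbors4(box) for a in agents):
            new_box = (box[0] + 1, box[1])  # toy rule: a successful push slides it right
            if in_bounds(new_box) and new_box not in agents:
                return (a1, a2, new_box)
        return None
    new_agents = []
    for agent, move in zip(agents, joint_action):
        if move == "push":
            return None  # a lone push does nothing in this toy model
        nxt = (agent[0] + MOVES[move][0], agent[1] + MOVES[move][1])
        if not in_bounds(nxt) or nxt == box:
            return None
        new_agents.append(nxt)
    if new_agents[0] == new_agents[1]:
        return None  # agents cannot share a cell
    return (new_agents[0], new_agents[1], box)

def solve(start):
    """Breadth-first search; A*, MCTS, etc. would replace this for real levels."""
    actions = list(MOVES) + ["push"]
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state[2] == GOAL:
            return plan  # list of joint actions, one tuple per tick
        for joint in product(actions, repeat=2):
            nxt = step(state, joint)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [joint]))
    return None

print(solve(((0, 1), (1, 2), (2, 1))))  # agent 1, agent 2, box

The branching factor is actions^agents per tick, which is exactly why I doubt plain search scales past toy levels, but at least it makes the search space well defined.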

