# Learning AI Logic for Games

## Recommended Posts

So, I'm curious how to go about learning the logic for games like puzzle games. I have a few books on AI, but most of them discuss pathfinding using A* and such. I can't seem to find any that discuss the AI used in puzzle games like old-school Tetris Attack and whatnot. Anyone have suggestions on where to start?

##### Share on other sites
A* is even better for puzzle games than it is for pathfinding.

If you place a brick here, you can place another there, and a third there, but then you hit a wall (just like you do in a maze using A*).

Instead of a distance-approximation heuristic, you use a heuristic based on how good the current game state looks. In Tetris, if you have lots of small holes, you probably want to grade the state as pretty bad. If you have fewer, larger holes, you rate it as better. A state with a tall stack of rows is generally worse than a shorter one. The absolute best state in Tetris to reach is a well four rows deep and one column wide, with an I piece up next. The worst states are the ones where a game over is unavoidable.
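To make that concrete, here is a minimal sketch of such a board-state heuristic. The grid representation, the hole/height counting, and the weights are all illustrative assumptions, not tuned values.

```python
# Toy heuristic for a Tetris-style board state. The grid is a list of
# rows (top row first), True = filled cell. Penalty weights below are
# arbitrary placeholders; a real AI would tune them.
HOLE_PENALTY = 4
HEIGHT_PENALTY = 1

def score_board(grid):
    """Higher is better: penalize covered holes and tall columns."""
    rows, cols = len(grid), len(grid[0])
    holes = 0
    total_height = 0
    for c in range(cols):
        seen_block = False
        for r in range(rows):
            if grid[r][c]:
                if not seen_block:
                    total_height += rows - r   # height of this column
                seen_block = True
            elif seen_block:
                holes += 1   # empty cell buried under a filled one
    return -(HOLE_PENALTY * holes + HEIGHT_PENALTY * total_height)

# A flat floor scores better than the same cells with holes beneath them.
flat = [[False, False], [True, True]]
holey = [[True, True], [False, False]]
```

With these weights, `score_board(flat)` beats `score_board(holey)`, because buried holes are penalized more heavily than raw height.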

Your AI has to be able to simulate the game in its artificial brain. Whenever the AI would make a move, it first clones the game state and searches the move tree as far as it can to find a reasonable move.

And actually, depending on how many of the upcoming pieces are visible to the AI, you may not have to search too hard. If it's like normal Tetris, and you can only see the next piece in line, you really only have to find the best configuration of the current and next piece on the board, then place the current piece in its place according to that best configuration.
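That two-ply idea can be sketched as nested loops: try every placement of the current piece, then every placement of the visible next piece, and keep the current-piece move that leads to the best score. The game model here is a deliberate stub (a "state" is just a number and moves add to it); a real Tetris clone would plug in its own placement and scoring functions.

```python
# Stub game model -- replace these with a real piece-placement simulator.
def legal_moves(state):
    return [-2, 1, 3]          # stand-in for "all legal placements"

def apply_move(state, move):
    return state + move        # stand-in for dropping a piece

def score(state):
    return state               # stand-in for the board heuristic

def best_current_move(state):
    """Two-ply lookahead: current piece, then the one visible next piece."""
    best_move, best_value = None, float("-inf")
    for m1 in legal_moves(state):          # every placement of current piece
        s1 = apply_move(state, m1)
        for m2 in legal_moves(s1):         # every placement of next piece
            value = score(apply_move(s1, m2))
            if value > best_value:
                best_move, best_value = m1, value
    return best_move
```

With this additive stub, the best first move is simply the largest increment, but the loop structure is exactly the two-piece search described above.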

##### Share on other sites
Quote:
 Original post by Tac-Tics: A* is even better for puzzle games than it is for pathfinding. [...] If it's like normal Tetris, and you can only see the next piece in line, you really only have to find the best configuration of the current and next piece on the board, then place the current piece in its place according to that best configuration.

Well, since you put it that way, maybe my problem is the actual formal logic of determining what the best move is. Let's say you have a puzzle game similar to Tetris Attack, where the pieces rise in lines, you match three in a row to score, and you can form chains when the pieces above the ones you match fall down and form more matches.

What would be a good way to figure out the flow-chart logic for determining the best move?

##### Share on other sites
Don't think in terms of flowcharts. Think in terms of search trees.

Let's call each configuration of fixed bricks on the board a game state. So if I have an empty board, that's one game state. If I place an L block horizontally in the left corner on my next move, I now have a different game state. If I had instead placed it vertically in the right corner, that's a totally different game state.

(A game state might include other things, too, such as the upcoming brick lineup and what the opponent is up to).
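A game state is just a record of everything the search needs to reason about. Here is one possible shape for it; the field names and board dimensions are illustrative assumptions for this thread's example, not part of any real engine.

```python
# Hypothetical game-state record. Frozen so states are immutable and
# hashable, which makes them safe to share between tree nodes.
from dataclasses import dataclass

@dataclass(frozen=True)
class GameState:
    grid: tuple          # rows of fixed bricks, each a tuple of bools
    next_pieces: tuple   # however many upcoming pieces are visible
    opponent: object = None   # whatever the opponent model tracks
    score: int = 0

# A fresh 10x20 board with two visible pieces in the lineup.
empty = GameState(grid=((False,) * 10,) * 20, next_pieces=("L", "I"))
```

Keeping the state immutable means "cloning" before a simulated move is just building a new `GameState` from the old one.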

Each game state has a heuristic value associated with it: how "good" or "ideal" that state is. As I said before, this is based on how many layers deep the board is, how many holes you have, the size of the holes, and so on. You can make the heuristic as simple or as complicated as you'd like, but the important thing is that you can judge at a glance whether one choice is better than another without looking into the future.

So now you build a tree of game states. Start off by making the current game state the root of the tree. From there, you need to figure out what possible moves the AI has. Where can the AI manage to legally put the next block? This will give you a set of new states that the AI can reach in one step after placing the next brick. These states become the child nodes of the root.
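Expanding one node of that tree looks something like the sketch below. The placement model is a stub (the "board" is one row and placements are column indices); the point is the shape of the expansion step: enumerate legal placements, produce one child state per placement.

```python
# Stub expansion step: a "board" is a tuple of cells, a placement is a
# column index. Replace these with real piece-drop logic.
def legal_placements(state, piece):
    # Stand-in rule: any empty column can receive the piece.
    return [c for c, filled in enumerate(state) if not filled]

def place(state, piece, col):
    child = list(state)          # clone, never mutate the parent state
    child[col] = piece
    return tuple(child)

def children(state, piece):
    """All game states reachable in one move: the child nodes."""
    return [place(state, piece, c) for c in legal_placements(state, piece)]

root = (None, "I", None)
# children(root, "O") -> the two boards with "O" dropped in an empty column
```

Each child is built from a copy of the parent, which matches the "clone the game state first" advice above.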

You then use recursion to grow the tree. How you decide to grow the tree is based on your heuristic. Grow branches that look promising first. When you run out of time, you choose the most promising next move and send it down the game pipeline.
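The grow-the-promising-branches-first loop is essentially best-first search with a budget. A minimal sketch, using a priority queue keyed on the heuristic and a node budget in place of a wall-clock timer; the game model is again a stand-in additive stub, to be swapped for a real simulator and heuristic.

```python
import heapq
import itertools

# Stub game model -- swap in real placement and scoring functions.
def legal_moves(state):
    return [-2, 1, 3]

def apply_move(state, move):
    return state + move

def score(state):
    return state

def best_first_move(root, node_budget=50):
    """Grow the most promising branch first; when the budget runs out,
    return the first move along the best line found so far."""
    counter = itertools.count()      # tie-breaker so the heap never compares states
    # Heap entries: (-score, tiebreak, state, first_move_on_this_line)
    frontier = [(-score(root), next(counter), root, None)]
    best_move, best_score = None, float("-inf")
    expanded = 0
    while frontier and expanded < node_budget:
        neg, _, state, first = heapq.heappop(frontier)   # most promising node
        expanded += 1
        if first is not None and -neg > best_score:
            best_score, best_move = -neg, first
        for m in legal_moves(state):
            child = apply_move(state, m)
            heapq.heappush(frontier,
                           (-score(child), next(counter), child,
                            m if first is None else first))
    return best_move
```

Raising `node_budget` searches deeper; a real game would stop on a time limit instead, then send the chosen move down the game pipeline.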

This is pretty abstract, but that's because there are a number of legitimate ways you could do it.

##### Share on other sites
You know, hearing it put that way, it makes a lot more sense! Thank you for your help, sir. I'm not sure why I was so fixated on the flow-chart structure of things. Maybe it's from programming when I was a kid or something. But this has really helped a lot, thank you.
