Tetris AI Algorithms

Started by
11 comments, last by dimovich 17 years, 11 months ago
I'm developing a tetris clone that provides the option to play against a computer. The bots are written in a C-like scripting language so that anyone could modify the bot AI. Currently there is only one bot available, and it is rather dumb. I would like to ask if anybody knows of any clever tetris AI algorithms that could do some more advanced stuff. Thanks.
We're doing this for a living, George. We are not amateurs.

Hi,

I once developed a Tetris-playing 'AI'. I'm quoting it on purpose, because it was actually a relatively simple algorithm. On average, it would die after clearing ~50k lines. Here's how it works:

- your playing field has a number of edges. The ground and sides are edges too.
- your current tetris block has edges.
- each edge is one (1) Tetris unit long. Normal Tetris pieces consist of 4 blocks, but this algorithm also works with arbitrary pieces.
- for any given rotation and position of your block, drop it downwards
- when it lands, count the number of edges on your block that touch the edges of blocks already dropped, or that touch a wall or the ground.

If you implement this algorithm, you will see it start building 'V'-shaped formations of blocks. To stop it from building too many formations at the sides, penalize height. So for any given rotation and position of a block, the score is this:

(number of edges touching) - (average block height / total height).

This algorithm is by no means perfect, but believe it or not, that's an advantage. Because it makes 'mistakes', you will also get some gaps in your formation. This promotes dropping blocks that eliminate several lines at once, resulting in more points.
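A minimal sketch of the scoring idea above, in Python. The board layout, and the reading of "average block height" as the mean height of the dropped piece's cells, are assumptions on my part:

```python
# Sketch of the edge-touching score. Assumptions: the board is a list of
# rows (row 0 = top), cells are 1 (settled block) or 0 (empty), and the
# dropped piece is a set of (row, col) cells.

def touching_edges(board, piece_cells):
    """Count piece edges touching settled blocks, the walls, or the floor."""
    rows, cols = len(board), len(board[0])
    count = 0
    for (r, c) in piece_cells:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (nr, nc) in piece_cells:
                continue  # internal edge between two piece cells: not counted
            if nc < 0 or nc >= cols or nr >= rows:
                count += 1  # touching a side wall or the floor
            elif nr >= 0 and board[nr][nc] == 1:
                count += 1  # touching an already-dropped block
    return count

def placement_score(board, piece_cells):
    """(edges touching) - (average piece-cell height / total height)."""
    rows = len(board)
    edges = touching_edges(board, piece_cells)
    avg_height = sum(rows - r for (r, _) in piece_cells) / len(piece_cells)
    return edges - avg_height / rows
```

To pick a move, you would run `placement_score` for every rotation and column of the current piece and drop it where the score is highest.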

Hope this helps,

Edo
BTW: if you want something more advanced, you could come up with some scoring criteria for your blocks, like

- height
- number of edges touching
- width
- number of empty gaps made
- number of edges touching a wall

Your overall score for dropping a block at a given rotation and position will then be:

score =
(weight1 * height) +
(weight2 * number of edges touching) +
(weight3 * width) +
(weight4 * number of empty gaps made) +
(weight5 * number of edges touching a wall);

All weights are floating-point values between -1 and 1.
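The weighted score above is just a linear combination of features. A tiny sketch, with illustrative feature names and weight values of my own choosing:

```python
# Linear evaluation of a placement. The feature names and weight values
# here are placeholders, not tuned values from the post.

WEIGHTS = {
    "height": -0.5,
    "edges_touching": 0.8,
    "width": 0.1,
    "holes_made": -0.9,
    "wall_edges": 0.3,
}

def evaluate(features, weights=WEIGHTS):
    """Score = sum of weight * feature over all criteria."""
    return sum(weights[name] * value for name, value in features.items())
```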

You can encode these weights in a genome (or was it a chromosome?), and have a GA evolve them by letting a pool of chromosomes play against each other for 10 games or so. The weakest chromosomes are killed, while the stronger ones cross over and mutate.
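The GA loop described above could be sketched like this. `play_games` is a stand-in fitness function (a real bot would play ~10 games with the weight vector and return lines cleared); the population size, mutation rate, and selection scheme are all assumptions:

```python
import random

def play_games(weights):
    # Placeholder fitness so the sketch runs on its own; a real version
    # would run Tetris games with these weights and count lines cleared.
    return -sum((w - 0.5) ** 2 for w in weights)

def evolve(pop_size=20, n_weights=5, generations=30, rng=None):
    """Evolve a weight vector in [-1, 1]^n via kill-the-weak + crossover."""
    rng = rng or random.Random(0)
    pop = [[rng.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=play_games, reverse=True)
        survivors = pop[:pop_size // 2]           # kill the weakest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_weights)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_weights)          # mutate one weight
            child[i] = min(1.0, max(-1.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=play_games)
```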

just some thoughts,

Edo
Quote:Original post by edotorpedo
- number of edges touching


We used this one and this one only, and it worked very well for us.
This is one of those problems where you can make the AI use the same strategy you use. When I play Tetris, I build up (and fit) blocks on the left side first. They reach rather dangerously high, but I work my way to the right when I can, fitting pieces in as high as necessary on the far left while trying to leave no gaps. So after a while the whole area looks like a fairly straight hill sloping steeply to the right, leaving me several spots where I can fit almost any piece I'm given without leaving more than a couple of gaps. This strategy really works great, until the speed of the falling blocks increases, which is just plain hard anyway.

I would suggest representing the playing field as a 2D matrix of 0s and 1s. In my case, I would start off building up blocks on the left with a general rule to minimize unreachable gaps. Then, when the AI is given a new piece, it would identify its shape and determine which of the surfaces below (preferably toward the left side) minimizes gaps for each of the 4 rotations.
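With the field as a 0/1 matrix, the two quantities this strategy cares about fall out easily: the surface profile (column heights) and the number of covered gaps. A small illustrative sketch; the board layout (row 0 at the top) is an assumption:

```python
# Field as a list of rows of 0/1 cells, row 0 = top of the well.

def column_heights(board):
    """Height of the stack in each column (0 = empty column)."""
    rows, cols = len(board), len(board[0])
    heights = []
    for c in range(cols):
        h = 0
        for r in range(rows):
            if board[r][c]:
                h = rows - r  # distance from the topmost block to the floor
                break
        heights.append(h)
    return heights

def covered_gaps(board):
    """Empty cells with at least one filled cell above them (unreachable)."""
    rows, cols = len(board), len(board[0])
    gaps = 0
    for c in range(cols):
        seen_block = False
        for r in range(rows):
            if board[r][c]:
                seen_block = True
            elif seen_block:
                gaps += 1
    return gaps
```

The "hill sloping to the right" rule could then be scored by preferring placements whose resulting `column_heights` decrease left to right while `covered_gaps` stays low.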

This is just pseudocode for what I would do. I could come up with formulas that properly weigh the height of the highest block, the potential for clearing a certain row, the best-fit position, etc., but I think it would be better for you to approach this using your own strategy. If you happen to play Tetris like I do, I'll be happy to help. But as I initially suggested, this is one of those simplified problems where you can just make the AI do exactly what you would do in that situation. If it comes to it, write down the steps of how you play Tetris, work out how an AI would reach the same conclusions at each step, and write the code.

Or if all of this is too complex for your liking, I'm sure there are simpler algorithms out there.
This would be an interesting problem for a neural network... it's overkill if you're just creating a single AI, but if you had it learn from 50 different players it would adapt to each person's style... :)

You know you've written a good AI when it can win at Bastard Tetris!

http://www.happypenguin.org/show?bastet (Somewhere out there theres a Windows port too)

Check out my new game Smash and Dash at:

http://www.smashanddashgame.com/

Thanks for your advice. You've really given me some clever ideas.
My current bot uses this formula:

merit = rowsEliminated;
merit += -1*cellsOccupied;
merit += -5*holesShadowed;
merit += -4*pileHeight;

There are some minor issues with it, but overall it does fine. The NN idea seems interesting, and Bastard Tetris looks promising :).

My project is open source. After I release it
I hope there will be enough people willing to write
their own Tetris AI.

[Edited by - dimovich on May 5, 2006 3:10:12 AM]
We're doing this for a living, George. We are not amateurs.
Couldn't you develop a recursive possibilities-exploring algorithm similar to the algorithm used to look ahead n number of chess moves?

Well, not quite like a chess algorithm, as that is a turn-based game. And you only know the next Tetris block that is going to fall, not the one after that.

So you can only recurse 2 deep (one level for the current block, one for the next). But by using my algorithm with that lookahead, I think you can achieve reasonably decent results, say ~100k lines.
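The 2-deep search can be sketched in a few lines. `placements(board, piece)` is an assumed helper yielding `(move, resulting_board)` pairs for every rotation and column, and `evaluate(board)` is any of the scoring functions discussed in this thread:

```python
# Two-ply lookahead: score each move for the current piece by the best
# follow-up placement of the (known) next piece. placements() and
# evaluate() are assumed helpers, not part of any specific bot.

def best_move(board, current, next_piece, placements, evaluate):
    best, best_score = None, float("-inf")
    for move, board1 in placements(board, current):
        # best reply with the next piece; fall back to board1's own
        # score if the next piece has no legal placement
        score = max((evaluate(b2) for _, b2 in placements(board1, next_piece)),
                    default=evaluate(board1))
        if score > best_score:
            best_score, best = score, move
    return best
```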

Let me know if you finish your clone.

Edo
Quote:Original post by edotorpedo
You can encode these weights in a genome (or was it chromosome?), and have a GA evolve these weights by letting a pool of chromosomes play against eachother for 10 games or so. The 'least' strong chromosomes will be killed, while the stronger weights will cross-over and mutate.


A friend and I did this for a project in one of our AI classes. We built a simple Tetris representation (I say representation because it wasn't a game; you couldn't play, or even watch, but we could spit out the board in ASCII form when we wanted to :) ), and then used a GA to weight an evaluation, much like you described.

Our work was motivated by Colin Fahey... whose site seems to be down right now, but if you Google him you'll find all kinds of stuff. Using 3 or 4 features, we got an agent that could play up to 100K lines, but on average much less than that.

We didn't spend a lot of time hand-modifying the weights, but my point is that an evaluation function like that works pretty well for Tetris, and you can have some fun with learning algorithms with it too :).

