# pattern prediction

## Recommended Posts

Is there a way to use a back-propagation network to predict the next item in a still-growing pattern? An example might be an ongoing rock-paper-scissors game: as the game progresses, the network gets to know the player and can eventually (sometimes) outwit him. Obviously I use "outwit" loosely here; what I mean is that it knows how he'll probably respond in the new round, given the last few rounds. I thought of having a cycling set of input nodes that feed in the last n items of the pattern, but that seems unwieldy, so I thought you guys might have other ideas. Anyone have any? Thanks, justo

##### Share on other sites
I don't think a network would be the best tool for pattern prediction.
What I would do is keep an array that stores the moves the opponent has made. Then, each time you have to pick a move, search the array for the longest pattern that matches the last characters in the array and has one element following it.

For example, if the array contained (R=rock, S=scissors, P=paper)
SPRPSRPS

and it is your turn, you start searching for the last character (S): it occurs earlier at index 0 (followed by P) and index 4 (followed by R). Next you search for the last two letters (PS), found once at index 3 (followed by R). Then the last three (RPS), found at index 2 (followed by R). The last four (SRPS) appear nowhere earlier, so the longest pattern you can find is RPS, and it predicts an R will be chosen next.
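The longest-match search described above can be sketched as follows (a minimal sketch; the function name and the `None` fallback for an empty match are my own choices):

```python
def predict_next(history):
    """Predict the next move by finding the longest suffix of the
    history that also occurred earlier with a move following it,
    and returning that following move."""
    n = len(history)
    # Try suffix lengths from longest to shortest; with no match at
    # all there is no information, so fall back to None.
    for length in range(n - 1, 0, -1):
        suffix = history[n - length:]
        # Look for an earlier occurrence of this suffix that has a successor.
        for start in range(n - length):
            if history[start:start + length] == suffix:
                return history[start + length]
    return None

# The example from the post: the longest repeated suffix of "SPRPSRPS"
# is "RPS" (also at index 2), followed by "R".
print(predict_next("SPRPSRPS"))  # prints R
```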

##### Share on other sites
I agree with the other respondent that a neural network might not be the best solution to this problem. I built a rock-paper-scissors game myself some years ago, after reading that someone had built one using only a simple array of moves. My program would tabulate all sequences of 3 turns, basing its prediction for the current opponent on the previous 2 turns. The first few turns were random. It worked rather well, based on my (non-random) sample of human opponents. I noticed, as did some of my human testers, that once in a while the thing could almost seem to read one's mind, making a series of quite a few correct plays. It was spooky. One could confirm the machine was not cheating by playing from a random source, which reduced the machine to 50/50 against the player.
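That tabulation scheme might look something like this (a sketch under my own naming; the original program's details aren't given in the post):

```python
from collections import defaultdict
import random

class SequencePredictor:
    """Tabulate all 3-move sequences of the opponent's play and predict
    the current move from the previous two, as described above."""

    def __init__(self):
        # counts[(prev2, prev1)][move] = how often `move` followed that pair
        self.counts = defaultdict(lambda: defaultdict(int))
        self.history = []

    def record(self, move):
        if len(self.history) >= 2:
            context = tuple(self.history[-2:])
            self.counts[context][move] += 1
        self.history.append(move)

    def predict(self):
        # Play randomly for the first few turns, as in the post.
        if len(self.history) < 2:
            return random.choice("RPS")
        options = self.counts[tuple(self.history[-2:])]
        if not options:
            return random.choice("RPS")
        return max(options, key=options.get)

p = SequencePredictor()
for move in "RPSRPSRP":
    p.record(move)
print(p.predict())  # after repeatedly seeing the cycle R,P,S: prints S
```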

-Predictor
http://will.dwinnell.com

##### Share on other sites
There is a computer Ro-Sham-Bo (rock-paper-scissors) tournament. I don't have a link on hand, but I seem to recall some of the contestants explaining the workings of their programs.

Will

##### Share on other sites
You could use an ANN to learn a function for playing Ro-Sham-Bo (rock-paper-scissors), although you must ask yourself whether this is the best thing to do (as the previous respondents have done). I'm not going to reiterate their comments, but I will provide some theoretical detail for the discussion.

In trying to predict your opponent's next move, what you are trying to do has two parts. The first of these is to determine (compute/approximate/guess wildly) the following function:
P(A_o(t) | A_u(t-1), A_o(t-1), A_u(t-2), A_o(t-2), ...), which stipulates the probability of your opponent's next action, A_o(t), given both your own and your opponent's previous actions, the actions prior to those, and so on back to the start of the game. In general you will truncate this function, stipulating that the actions at some time are independent of the actions at some earlier time, given the sequence of actions between them. If you take this to the extreme, you might even force the problem to be Markovian by asserting that the current action is independent of all other actions given the previous pair of actions (yours and your opponent's). This probably isn't a very sensible approximation, though, and the optimal strategy against it is tit-for-tat. The pathological case is to assume that your opponent has no strategy and that their choice is independent of any previous actions taken by either player.

How you derive this function, or an approximation to it, depends on your preferences for algorithms. There are many different approaches that would give 'better than random' results. As mentioned above, it might be satisfactory for you to simply compute the frequency of sequences of length 3 and use this as your predictive distribution. This method typically assumes that your opponent has no biased preferences for one action over another. If you suspect that your opponent does, or you want to take into account the possibility that they do, then you need to use a non-uniform prior in combination with the joint statistics you compute, to arrive at the correct conditional distribution over sequences of actions.
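One way to combine observed frequencies with a non-uniform prior, as suggested above, is simple additive smoothing toward the prior (a sketch; the function name and parameters are illustrative):

```python
from collections import Counter

def conditional_distribution(history, context, prior=None, prior_weight=1.0):
    """Estimate P(next move | context) from observed frequencies,
    smoothed toward a prior over R/P/S. A uniform prior gives Laplace
    smoothing; a non-uniform prior encodes a suspected bias."""
    moves = "RPS"
    if prior is None:
        prior = {m: 1.0 / 3.0 for m in moves}
    k = len(context)
    # Count what followed each earlier occurrence of the context.
    counts = Counter(
        history[i + k]
        for i in range(len(history) - k)
        if history[i:i + k] == context
    )
    total = sum(counts.values()) + prior_weight
    return {m: (counts[m] + prior_weight * prior[m]) / total for m in moves}

dist = conditional_distribution("RPSRPSRP", "RP")  # "RP" was always followed by "S"
```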

Once you have your function, you need to choose a value for your own action. Typically you would choose the value that maximises your reward. In Ro-Sham-Bo it is trivial to compute the payoff at each iteration of the game from the game rules. Thus, P(A_u(t) | predicted A_o(t)) is easily defined from the utility, U(A_u(t), A_o(t)).
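The decision step, picking the action that maximises expected payoff under a predicted distribution, might be sketched as (names are my own):

```python
# Standard RPS rules: each key beats its value.
BEATS = {"R": "S", "P": "R", "S": "P"}

def payoff(mine, theirs):
    """+1 for a win, -1 for a loss, 0 for a tie."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def best_response(predicted_dist):
    """Choose the action with the highest expected payoff against a
    predicted distribution over the opponent's moves."""
    return max(
        "RPS",
        key=lambda a: sum(p * payoff(a, o) for o, p in predicted_dist.items()),
    )

print(best_response({"R": 0.6, "P": 0.2, "S": 0.2}))  # prints P (beats the likely R)
```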

To summarise: basically any algorithm that can learn an approximation to a conditional probability function can serve as a Ro-Sham-Bo player. The quality of its play will depend on how well that function models your true opponent.

Cheers,

Timkin

[edited by - Timkin on December 1, 2003 2:25:35 AM]

##### Share on other sites
In theory, it would be possible to feed the last N plays into an ANN and let it decipher what the opponent will most likely do next. Basically, one could feed the last N moves in as inputs to the ANN, receive an output, check the output against the actual outcome, and train the net.

m1 m2 m3 m4 ... mN-1 mN

The next time, you shift the window: m(N-1) -> mN, m(N-2) -> m(N-1), ..., m1 -> m2, with the newest move entering at m1. Given enough of a play history and a large enough ANN, you should be able to pick out a pattern if one exists. The benefit of this method is that the pattern can have "holes" in it and the network will still be able to discern it. For instance, if the move three turns ago decided the next move and the moves in between were just random, given enough time the ANN would learn to ignore the random moves. Other approaches rely on strictly sequential matching and might not be able to see past the randomness in between.
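As a rough illustration of this idea, here is a single-layer stand-in for such a network: it one-hot encodes the last N moves and trains online with softmax cross-entropy. All names and constants are illustrative, and a real hidden layer would capture richer patterns than this linear model can.

```python
import math

N = 3          # window of previous moves fed to the network
MOVES = "RPS"
LR = 0.1       # learning rate

# One linear layer + softmax over the three possible moves.
weights = [[0.0] * (3 * N) for _ in range(3)]

def encode(window):
    """One-hot encode the last N moves into a 3*N input vector."""
    x = [0.0] * (3 * N)
    for i, m in enumerate(window):
        x[3 * i + MOVES.index(m)] = 1.0
    return x

def forward(x):
    """Return softmax probabilities over the three moves."""
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    mx = max(logits)
    exps = [math.exp(l - mx) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def train_step(window, actual):
    """One step of online gradient descent on cross-entropy loss."""
    x = encode(window)
    probs = forward(x)
    target = MOVES.index(actual)
    for k in range(3):
        grad = probs[k] - (1.0 if k == target else 0.0)
        for j in range(3 * N):
            weights[k][j] -= LR * grad * x[j]

# Train against an opponent who always repeats the cycle R, P, S.
history = "RPS" * 20
for t in range(N, len(history)):
    train_step(history[t - N:t], history[t])

probs = forward(encode(history[-N:]))
print(MOVES[probs.index(max(probs))])  # prints R: it has learned the cycle
```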

Powell

"Codito, ergo sum" - (I code, therefore I am)
ThinkTank Enterprises

##### Share on other sites
http://www.cs.ualberta.ca/~darse/rsbpc.html
