ELO Rating System for more than 2 players

sooner123


How would one go about applying the ELO rating system to more than two players? For example, if you made a four-player chess game or were going to apply ratings to poker games or horse races.

If we stick to the simple case where the outcome of a k-player matchup is one winner and (k-1) losers, then it seems incorrect to simply update player 1's rating on the basis of R_1 vs. average (R_2, R_3, ..., R_k).

The reason this seems invalid is that it's clearly more difficult to beat player X, against whom you have a 60% chance of victory, and then player Y, against whom you also have a 60% chance of victory, than it would be to defeat player X alone: treating the games as independent, the chance of winning both is 0.6 * 0.6 = 0.36, well below 0.6. The former constitutes a greater consistent performance.

Another method I thought might work would be to update the rating (k-1) times, treating it as though you defeated all (k-1) opponents individually, one after the other. This doesn't seem to have the same problem as the first approach, but it suffers from the issue of deciding in which order to compute the rating updates. A tweak on this approach is to take the average of all the opponents' ratings and then compute the update (k-1) times against that average rating. This avoids the ordering issue, but it still seems to fail: winning a 3-player game against one player you have a 1% chance against and another you have a 99% chance against seems much more difficult than winning against two players against each of whom you are expected to score 50%.

I may be assuming too much about what ELO is meant to predict, in which case I should perhaps let some of the statistical issues with these proposed methods slide.

Edit: another way of doing it would be to increase the total score of a game. In chess, a game is basically a battle over one point: a draw means each side won half a point, while a win means the winner took the full point and the loser took none. Possibly, with k players there would be k/2 points up for grabs, which would mean that S_a could take values up to k/2 instead of just 1 for a victory. Applying S_a = k/2 to the formula, and using the average of the opponents' ratings for the rating, is another possible method, though it still suffers from the issue of the difference between defeating a super-strong and a super-weak player versus defeating two evenly matched players.
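To make that concrete, here is a minimal sketch of the k/2-points idea in Python, using the standard expected-score formula explained further down this post. The function names are my own, and scaling the expectation by k/2 so that score and expectation live on the same 0..k/2 range is my assumption rather than something specified above.

```python
def expected_score(r_a, r_b):
    # Standard ELO expectation: E_a = 1 / (1 + 10^((R_b - R_a)/400))
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_winner_k_over_2(ratings, winner, k_factor=32):
    """Sketch of the k/2-points idea: compare the winner against the average
    opponent rating, with k/2 points up for grabs instead of 1."""
    k = len(ratings)
    opponents = [r for i, r in enumerate(ratings) if i != winner]
    avg_opponent = sum(opponents) / len(opponents)
    s_a = k / 2.0                                # the winner takes all k/2 points
    # Assumption: scale the expectation to the same 0..k/2 range as the score.
    e_a = (k / 2.0) * expected_score(ratings[winner], avg_opponent)
    return ratings[winner] + k_factor * (s_a - e_a)

# Example: the 1500-rated player (index 0) wins a 3-player game.
new_rating = update_winner_k_over_2([1500, 1600, 1700], winner=0)
```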

Brief explanation of ELO for those unfamiliar

http://en.wikipedia.org/wiki/Elo_rating_system

When two players have ratings R_a and R_b, determined via the ELO model, you can compute their expected scores against each other:

E_a = 1 / [ 1 + 10^((R_b - R_a)/400) ]
and symmetrically:
E_b = 1 / [ 1 + 10^((R_a - R_b)/400) ]

When the two play each other, their ratings are updated by comparing the result of the game with what was expected.

R_a' = R_a + K*(S_a - E_a),
where S_a is the score of the game (1 for a win, 0 for a loss, .5 for a draw, etc.) and K is a volatility factor that is equal to the maximum number of rating points that can be gained/lost in a game.

Symmetrically, whatever was added to R_a will be subtracted from R_b.
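As an illustration only (not part of the original explanation), here is a minimal sketch of this two-player update in Python; the function names and the K = 32 default are my own choices:

```python
def expected_score(r_a, r_b):
    """E_a = 1 / (1 + 10^((R_b - R_a)/400))."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_pair(r_a, r_b, s_a, k_factor=32):
    """Update both ratings; s_a is 1 for an A win, 0 for an A loss, 0.5 for a draw."""
    e_a = expected_score(r_a, r_b)
    delta = k_factor * (s_a - e_a)
    return r_a + delta, r_b - delta   # whatever A gains, B loses

# Example: a 1600-rated player beats a 1500-rated player with K = 32.
new_a, new_b = update_pair(1600, 1500, s_a=1)
print(new_a, new_b)   # roughly 1611.5 and 1488.5
```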

One can rephrase ELO the following way: The probability of a player winning the game is modeled as being proportional to 10^(rating/400). Of course, you need to normalize the probabilities by dividing by the sum of these numbers. Let's check that this is the same formula as before:

E_a = 10^(R_a/400) / (10^(R_a/400) + 10^(R_b/400))

Dividing numerator and denominator by 10^(R_a/400) we get

E_a = 1 / (1 + 10^((R_b-R_a)/400))

So we are good. This rephrasing is a lot easier to expand to more players: Each one has a probability proportional to 10^(rating/400) of winning the game.

E_a = 10^(R_a/400) / (10^(R_a/400) + 10^(R_b/400) + 10^(R_c/400) + 10^(R_d/400))

The update rule probably doesn't need to change.
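A minimal sketch of that generalization, assuming the update stays R_i' = R_i + K*(S_i - E_i) with S_i = 1 for the winner and 0 for everyone else (that 1/0 scoring is my reading of the one-winner case, and the function names are mine):

```python
def multiplayer_expected_scores(ratings):
    """Each player's win probability is proportional to 10^(rating/400)."""
    strengths = [10 ** (r / 400.0) for r in ratings]
    total = sum(strengths)
    return [s / total for s in strengths]

def multiplayer_update(ratings, winner, k_factor=32):
    """Apply the usual ELO update to every player: S = 1 for the winner, 0 otherwise."""
    expected = multiplayer_expected_scores(ratings)
    scores = [1.0 if i == winner else 0.0 for i in range(len(ratings))]
    return [r + k_factor * (s - e)
            for r, s, e in zip(ratings, scores, expected)]

# Example: four players, the 1700-rated player (index 2) wins.
print(multiplayer_update([1500, 1600, 1700, 1400], winner=2))
```

Because the expected scores sum to 1 and the actual scores also sum to 1, the total rating change across all players is still zero, just as in the two-player case.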

[Edited by - alvaro on November 10, 2010 4:44:03 PM]

This actually makes a lot of sense. I'll have to play with it that way and see if the results look right.

Seeing it in terms of a direct probability like that (along the lines of offense_initiative / (offense_initiative + defense_initiative) style calculations) makes this a simple problem.

Thanks for the help.

