Thoughts on game complexity

69 comments, last by lodi_a 16 years, 8 months ago
Quote:Original post by Kylotan
I'm interested in how to design enough complexity to hide the optimal strategy, without just throwing more and more features at it until it seems to work.


OK, let's consider games that already incorporate this idea into their design. The best example I can think of is chess. The piece movements are individually simple. The goal is simple. But the number of possible move sequences in any one game is well over a billion.

This raises the question: is there an optimal strategy for chess? Some would say yes. For each opening move by white, there are replies by black that lead to a higher probability of winning. For those who aren't familiar with chess, the first few opening moves are essentially predetermined for good players. They even have fancy names, such as the Dutch Stonewall Defence, the Queen's Indian Defence, Bird's Opening, the Four Knights Game, and so on. With years of player experience, and with the aid of computers, the common opening sequences have already been heavily analysed. However, once you get about 12 moves in, there are simply too many choices to search through to have an optimal path defined. While after 12 moves one player may have an advantage over the other, the optimal path is suddenly lost (and while it may be possible for computers to find the optimal path, I don't foresee people ever figuring it out). From here, a player must depend on his own skill, and not the predetermined moves of other players, to win the game.
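The "too many choices" point is easy to check with back-of-envelope arithmetic. This is a rough sketch, assuming an average branching factor of about 35 legal moves per chess position (a commonly quoted approximation, not an exact figure):

```python
# Rough estimate of chess game-tree growth, assuming an average
# branching factor of ~35 legal moves per position (an approximation).
BRANCHING = 35

def positions_after(plies):
    """Upper-bound count of distinct move sequences after `plies` half-moves."""
    return BRANCHING ** plies

# Six half-moves (three moves per side) already exceed a billion lines:
print(positions_after(6))   # 1838265625
# Twelve half-moves is already beyond exhaustive human analysis:
print(positions_after(12))  # over 3 * 10**18
```

This is why opening theory can be tabulated but the middlegame cannot: the sequence count explodes long before move 12.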

Reverse Engineering
Is there a way to reverse engineer this idea? I believe there is. Chess has only 6 unique pieces, each with its own abilities. It should be possible to use this idea as the basis for other game designs. I believe StarCraft is one of the few games that have used this concept effectively. And yes, the first few moves in StarCraft are semi-defined. Anyone who has ever played Protoss knows how important it is to build a pylon on the 7th probe so that you can get a gateway/forge on the 8th probe in order to prepare for a rush. But after that, there are too many variables for a true optimal path to exist.

Question:
Chess has enough complexity with such simple rules that it has kept a place in history for hundreds of years. However, it is not the most popular game in today's culture. Why?
Quote:Well, I was assuming that there was a benefit from having a powerful city too. The point I was making is that if you just have 2 variables, even if both feed into each other and both can be used in resolving whether the game is won or lost, there is an optimal strategy and it's likely to be trivial to find.

If you look closely at the example I gave, there is no perfect strategy. In that game, none of the extreme strategies is perfect, as each has a counter, and going down the middle of the road can be beaten by any of the extreme strategies. So there is no perfect solution. Even with perfect information you can't settle on a single perfect strategy; you will have to cycle between several different strategies as your opponent tries to counter your moves (with perfect information, the game will likely descend into a stalemate, depending on the switching costs of changing strategies, that is, how much time and other resources it takes to change to another strategy).

Quote:With games like Chess, or Civilization, the optimal strategy is hidden by the level of complexity, which is a large part of what makes it fun.

Actually, in chess there is sort of an "optimal" strategy that can be employed, but if your opponent knows how to counter it (just as scissors counters paper), then this strategy breaks down.

This strategy gives you the ability to deliver checkmate in only a few moves.

1) Move the pawn in front of your King out (e2 to e3 or e4).
2) Move your bishop out to threaten the opponent's f7 pawn (f1 to c4).
3) Move your queen out to threaten the same pawn (d1 to h5).
4) Take the pawn with your queen (h5 takes f7).

This automatically gives you a checkmate unless they have moved pieces out to block you (example: e7 to e6), to threaten one of the needed squares (example: g7 to g6), or to give their king room to escape (example: d7 to d6 or d5).

It is a risky line, as it places two of your powerful pieces out on the board unprotected. It is the chess equivalent of an RTS rush strategy: if your opponent does not know how to counter this obvious attack you can easily win, but most competent chess players know this technique and its easy counter.

So this is the chess equivalent of a rush strategy, and it is countered by a minimal positioning of pieces which, followed by a quick swarm, can potentially eliminate two of your opponent's powerful pieces (the equivalent of the counter-rush strategy in the city game example from my previous post). The problem is that a clever opponent might lead you into the false belief that they are going to rush; if you respond to that signal, it can leave your pieces in a less defensive position, which your opponent can exploit (the equivalent of the economic strategy in the city game). And of course, if your opponent plays for the long game and doesn't counter a rush strategy, then the rusher wins.

So, using this strategy set, chess itself presents (generally speaking) the same strategies as the city game. However, in chess the switching costs of changing strategies are not too expensive (at worst you might lose a few pieces and take a few turns to get your pieces into the required positions). Also, with chess, these gambits don't have to use all your pieces, so even if a particular strategy fails, you might not incur any cost to switch strategies.

As you learn more about chess and play it more, you learn about "chunking": instead of just looking at each individual piece and its position relative to all the other pieces, you start to see larger patterns involving multiple pieces and how they influence the board (multiple pieces essentially become parts of a single entity). It is usually at this point that a chess player starts to get good.

In the game concepts presented in this thread, the scenarios have been very simplified, and because they are so similar to known games, the chunking for them is already familiar to us, so we can see these larger patterns and their results more easily (that, and we are specifically designing the examples to make this chunking obvious). The chunks we are talking about are the various strategies we have been discussing.

For example, in the city game example we didn't talk about the deployment of the armies (where a well-deployed army might be able to overcome a larger but less well-deployed one), but only discussed the relative strengths of the opposing players in terms of the number of armies they have. So we "chunked" the army deployments and assumed that both players were equally matched in their ability to deploy their armies.

Suppose we had included that in the example. Say that, even though you are initially outnumbered, you have the ability to choose when to attack and when to withdraw, and can do so successfully: you could withdraw from army groups that could beat you and attack army squads that you can beat. Then a player who initially chose a mostly economic strategy might just be able to beat a rushing player, even though the basic rules say that this is not a good counter to a rush.

Also, to make the city game more complex, you could add some complications, namely giving the armies an S/P/R relationship (for a medieval game: Knights -> Archers -> Pikemen -> Knights). This would make the rush, defence and economic strategies much more uncertain and more dependent on a player's skill and ability to adapt.
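That counter triangle can be sketched in a few lines. The unit names follow the medieval example above; the simple win/lose/draw resolution is an illustrative assumption, not a full combat model:

```python
# Minimal sketch of the Knights -> Archers -> Pikemen -> Knights
# counter triangle. The single-number outcome is an illustrative
# simplification; a real game would weigh army sizes, terrain, etc.
COUNTERS = {
    "knights": "archers",   # knights beat archers
    "archers": "pikemen",   # archers beat pikemen
    "pikemen": "knights",   # pikemen beat knights
}

def resolve(attacker, defender):
    """Return +1 if attacker wins, -1 if defender wins, 0 on a mirror match."""
    if attacker == defender:
        return 0
    return 1 if COUNTERS[attacker] == defender else -1

print(resolve("knights", "archers"))  # 1 (knights win)
print(resolve("pikemen", "archers"))  # -1 (archers win)
```

Because every choice has exactly one counter, no single army composition dominates, which is what pushes the decision onto the player's reading of the opponent.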

If you then put in signalling (spies) and the ability to fake a signal (counter intelligence), then the game becomes very complex and the chance of a simple "Best" strategy is almost non existent.

In fact, signal/fake mechanics are used a lot in beat-em-up games, where the gameplay is usually tightly based on the Scissors/Paper/Rock mechanic (or at least move/counter-move). This faking adds a lot more complexity to the gameplay, as it moves the focus from the character's abilities to the player's abilities and psychology (just as chess does; chess also makes use of the ability to fake signals, though not as simply implemented as in a computer game). In chess you can fake a signal by threatening pieces you don't intend to capture, by forking (moving one of your pieces so that it threatens multiple enemy pieces), and so on.

It is usually the incomplete-information aspect of computer games that means their signalling and faking are not as complex as in chess. Because chess is a complete-information game (you can see the positions of all the pieces all the time), signalling and faking are by necessity much more subtle and indirect than they might be in an incomplete-information game (though even an incomplete-information game can use subtle and indirect signals and fakes, just as a complete-information game does).
Quote:Original post by Edtharan
Quote:Well, I was assuming that there was a benefit from having a powerful city too. The point I was making is that if you just have 2 variables, even if both feed into each other and both can be used in resolving whether the game is won or lost, there is an optimal strategy and it's likely to be trivial to find.

If you look closely at the example I gave, there is no perfect strategy.


There's a difference between an optimal strategy and a perfect strategy. R/P/S has 3 optimal strategies; it just happens to be true that they are also the worst strategies. Iterative R/P/S, while no longer having an optimal strategy as such, has one optimal metastrategy - totally random choice. Any other metastrategy gives information to the opponent which increases their chance of beating you to above 1 in 3. Most iterative systems work the same way - a single optimal strategy no longer suffices but it's usually quite trivial to derive an optimal metastrategy.
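The "totally random choice" metastrategy is easy to demonstrate by simulation. In the sketch below, the opponent's 80% rock bias is an arbitrary illustrative assumption; the point is that the uniform random player still wins about 1/3 of rounds no matter what bias is chosen:

```python
import random

# Sketch: in iterated Rock/Paper/Scissors, a uniformly random player
# wins about 1/3 of rounds regardless of how biased the opponent is.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def win_rate(rounds, rng):
    wins = 0
    for _ in range(rounds):
        mine = rng.choice(["rock", "paper", "scissors"])
        # Heavily biased opponent: plays rock 80% of the time (arbitrary).
        theirs = rng.choices(["rock", "paper", "scissors"], weights=[8, 1, 1])[0]
        if BEATS[mine] == theirs:
            wins += 1
    return wins / rounds

print(round(win_rate(100_000, random.Random(0)), 2))  # ~0.33
```

Any deviation from uniform randomness is exploitable, which is exactly why the uniform mix is the optimal metastrategy here, even though it "ignores" the opponent entirely.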

Quote:(with perfect information, the game will likely descend into a stalemate depending on the switching costs to change strategies - that is how much time and other resources it take to change to another strategy).


But that's the point - the optimal strategy probably just becomes one of the two following:
- if changing strategies is cheaper than the cost of using the wrong strategy, the optimal metastrategy is to always change so that you always have the counterstrategy ready.
- if changing strategies is more expensive than the cost of using the wrong strategy, the optimal metastrategy is to continue on with your current one, which will beat your opponent due to the cost of them changing.

There's not really much else to it - if certain strategies cost more to choose, or cost more to switch between, that factors in, but there's still ultimately an optimal metastrategy which, if employed once against an infinite number of random players, will win more often than any other metastrategy. The only hard part is gathering the data on 'costs'.
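The two-case rule above can be written out as a tiny decision function. The cost parameters are abstract placeholders rather than values from any specific game; the whole difficulty, as noted, is in estimating them:

```python
# The switching-cost rule as a decision function. Cost values are
# abstract placeholders; real games would need these measured.
def metastrategy(switch_cost, wrong_strategy_cost):
    """Decide whether to switch to the counter-strategy or stay put."""
    if switch_cost < wrong_strategy_cost:
        return "switch"   # always hold the counter-strategy ready
    return "stay"         # let the opponent pay their own switching costs

print(metastrategy(2, 5))  # switch
print(metastrategy(5, 2))  # stay
```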

Quote:Actually, in chess there is sort of an an "optimal" strategy that can be employed, but if your opponent knows how to counter it (just like scissors counter paper), then this strategy breaks down.


That doesn't really fit the definition of an optimal strategy however. An opponent with no prior experience but with an understanding of the rules and the ability to plan ahead would trivially be able to defeat this.

Most of the rest you talk about involves anticipating future actions and reactions, which is outside the scope of what I'm interested in; although it is interesting in its own right, it really just sidesteps the issue by substituting temporal changes in choices for initial breadth of choice.
1) Is an optimal strategy the best that the player can choose or the best that Laplace's Demon can choose? That is, is the optimal strategy determined by information available to the player or is it determined by the entire game state? Take the Monty Hall problem. If it's determined by the information the player has, then the optimal solution is to switch doors, but, if it's determined by the entire game state, then the optimal solution is to switch if the player chose the wrong door first and not switch if the player chose the correct door first.

2) If a strategy will not always win, can we still call it optimal? That is, is the optimal solution the one that is most likely to win, or the one that will win? This is similar to (1), but now I'm also asking whether a game of chance like roulette, where even though you know the entire game state you still cannot determine the outcome (Laplace's Quantum Demon?), can have an optimal strategy.

3) If two strategies are equally likely to win, are they both considered optimal? Consider a coin flip. For simplicity, we'll ignore the possibility of it landing on its edge. If the coin is fair (50/50), then there are two equal strategies; are either, both, or neither of them considered optimal? If the coin is biased but the player doesn't know which way, does that change the answer?

4) If a strategy is only suboptimal by a small amount, does that matter? Let's say in the coin flip the player knows that the coin is biased 51/49. In this case, even choosing the suboptimal strategy still wins nearly half the time.

5) Can a strategy take into consideration the metagame? That is, can it bluff? Can it read other players? Can it count cards?
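Question (1)'s Monty Hall example is worth checking by simulation, since it is the clearest case where "optimal given the player's information" and "optimal given the full game state" diverge. A minimal Monte Carlo sketch:

```python
import random

# Monte Carlo check of the Monty Hall problem from the player's point
# of view: switching wins about 2/3 of the time, staying about 1/3.
def monty_hall(trials, switch, rng):
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Host opens a door that is neither the player's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(round(monty_hall(100_000, switch=True, rng=random.Random(1)), 2))   # ~0.67
print(round(monty_hall(100_000, switch=False, rng=random.Random(2)), 2))  # ~0.33
```

Given only the player's information, switching is unambiguously optimal; Laplace's Demon, knowing where the car is, would simply pick the right door every time.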
1) I'm primarily thinking of complete information games, in the belief that partial information games can be considered an extension of that. However I think the distinction between a typical player and Laplace's demon (linked for the benefit of others) is very important, as there is probably an optimal strategy for chess, but a typical player can't find it. This demonstrates that it's possible to make games which are essentially unpredictable, yet which require no random factors to make that so.

2) Yes, the optimal strategy just needs to be the most rational one that you'd choose, if all the information was available to you. I'm mostly ignoring stochastic effects for now though, assuming that they average out over enough trials.

3) I'd say yes - if there is no strategy more likely to yield victory, then it's optimal, even if the utility of that strategy is equal to that of the worst strategy. Obviously a game with these properties wouldn't be much fun.

4) Sure, why wouldn't it?

5) I'm trying to avoid any metagaming aspects, especially iterative versions of a game (eg. iterative RPS, or iterative Prisoners' Dilemma), because I don't think they are relevant here - obviously some games rely on adapting to change as a gameplay feature (whether slowly as in Civilization, or quickly as in a beat-em-up), but ultimately it usually only adds a trivial extra layer - you're either obviously compelled to change your strategy in response, or not.


I don't want to get fixated on the definition of the optimal strategy as such though. The key thing I'm trying to get at, is that it should be possible to design a game that has enough depth to stop players from being able to see the best thing to do very easily, yet without relying on random opponent behaviour - whether that be by dice rolls or by human unpredictability - to create that depth.

I'm just trying to work out how that depth is created - interaction between multiple aspects seems to be the key here, as someone mentioned above, but making that interaction non-trivial is important too. I think it's interesting that most complex games seem to have several resources in play, and you can trade one for the other. One book I read described chess as having material (pieces you've taken), position (areas of the board you control), and tempo (how well you've made use of the time available). Magic: The Gathering has your in-play cards, your hand, your deck, and your health score. Civilization has money, food/trade/science resources, cities, and units. The complex ways in which these resources affect each other and can be converted into each other might be a major part of how these games create their depth.
Quote:Original post by Kylotan
1) I'm primarily thinking of complete information games, in the belief that partial information games can be considered an extension of that. However I think the distinction between a typical player and Laplace's demon (linked for the benefit of others) is very important, as there is probably an optimal strategy for chess, but a typical player can't find it. This demonstrates that it's possible to make games which are essentially unpredictable, yet which require no random factors to make that so.

2) Yes, the optimal strategy just needs to be the most rational one that you'd choose, if all the information was available to you. I'm mostly ignoring stochastic effects for now though, assuming that they average out over enough trials.


But in most games, not all information is available to you. You mention Magic and Civilization, both of which have hidden information and random elements (which can be considered a sort of hidden information). Even chess has some in that you're playing a human opponent.

Quote:
3) I'd say yes - if there is no strategy more likely to yield victory, then it's optimal, even if the utility of that strategy is equal to that of the worst strategy. Obviously a game with these properties wouldn't be much fun.


I didn't mean all strategies had to be equal. Maybe several strategies are "best", but they're only a little better than some and decently better than others. This game can still be fun.

Quote:
4) Sure, why wouldn't it?


If you had money riding on that 51/49 coin flip, how much more comfortable would you be if your opponent chose the suboptimal strategy?

I think that these near-optimal strategies are very important in that they are hard to distinguish from optimal strategies both by the player and statistically. The distinction is mostly academic, unless you're a casino.
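The casino point can be made concrete with a little arithmetic: a 51/49 edge is nearly invisible in a single flip, but it compounds over repeated play. The following sketch computes, from the binomial distribution, the probability that the favoured side wins a majority of flips (pure arithmetic, no game-specific assumptions):

```python
from math import comb

# Probability that the p-biased side of a coin wins the majority of an
# odd number of flips. Shows how a tiny 51/49 edge compounds over time.
def majority_win_prob(p, flips):
    return sum(comb(flips, k) * p**k * (1 - p)**(flips - k)
               for k in range(flips // 2 + 1, flips + 1))

print(round(majority_win_prob(0.51, 1), 3))     # 0.51  (barely better)
print(round(majority_win_prob(0.51, 101), 3))   # ~0.58
print(round(majority_win_prob(0.51, 1001), 3))  # ~0.74
```

A single player rarely gets enough trials for the edge to show, which is why near-optimal strategies are "hard to distinguish statistically" for individuals but decisive for a casino.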

Also, the existence of near-optimal strategies makes gameplay more interesting in that it's easier to recover from mistakes and the game can still be interesting for the better player (i.e. the one with the better strategy).

Quote:
5) I'm trying to avoid any metagaming aspects, especially iterative versions of a game (eg. iterative RPS, or iterative Prisoners' Dilemma), because I don't think they are relevant here - obviously some games rely on adapting to change as a gameplay feature (whether slowly as in Civilization, or quickly as in a beat-em-up), but ultimately it usually only adds a trivial extra layer - you're either obviously compelled to change your strategy in response, or not.


I don't think it's so trivial, but I'll allow it as a simplifying assumption. I'm curious what would be considered an optimal strategy in a game with a human opponent (e.g. chess). I can imagine the following situation, but I don't know how likely it is to occur in any given game:

Under the current game state, you have a choice of three moves (1, 2, 3) and, after you move, your opponent will make one of four moves (A, B, C, D). Let's say that 1 is best if your opponent chooses A or B, 2 is best if he chooses C, and 3 is best against D. Let's also say that A and B are fairly bad choices not likely to be made by someone familiar with the game, C is technically the optimal move, but, knowing your opponent, he's most likely to choose D. That is, 1 counters the largest number of moves, 2 counters your opponent's optimal strategy, and 3 counters what the metagame says your opponent will do. So, is 1, 2, or 3 your optimal strategy?
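One way to frame this dilemma is that each answer corresponds to a different belief about the opponent, and the "optimal" move is just the expected-utility maximiser under that belief. The payoff numbers and belief weights below are made up purely to illustrate the structure:

```python
# The 1/2/3 vs A/B/C/D dilemma scored as expected utility under an
# explicit opponent model. All numbers are illustrative assumptions.
PAYOFF = {  # PAYOFF[my_move][their_move] = value to me
    1: {"A": 3, "B": 3, "C": 0, "D": 0},
    2: {"A": 1, "B": 1, "C": 2, "D": 0},
    3: {"A": 1, "B": 0, "C": 0, "D": 2},
}

def best_move(belief):
    """Pick the move maximising expected payoff against belief {move: prob}."""
    return max(PAYOFF, key=lambda m: sum(
        prob * PAYOFF[m][theirs] for theirs, prob in belief.items()))

# Against a "could do anything" opponent, 1 covers the most ground:
print(best_move({"A": .25, "B": .25, "C": .25, "D": .25}))  # 1
# Against the metagame read "he'll almost certainly play D", 3 wins out:
print(best_move({"A": .05, "B": .05, "C": .2, "D": .7}))    # 3
```

So the question "which is optimal?" dissolves into "which opponent model are you committing to?", which is exactly where the metagame enters.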

Quote:
I don't want to get fixated on the definition of the optimal strategy as such though. The key thing I'm trying to get at, is that it should be possible to design a game that has enough depth to stop players from being able to see the best thing to do very easily, yet without relying on random opponent behaviour - whether that be by dice rolls or by human unpredictability - to create that depth.

I'm just trying to work out how that depth is created - interaction between multiple aspects seems to be the key here, as someone mentioned above, but making that interaction non-trivial is important too. I think it's interesting that most complex games seem to have several resources in play, and you can trade one for the other.


Take temperature, for example. The actual molecular movements of an object follow rules that are simple enough (for many purposes, you can get by with Newton's laws of motion), but their interactions quickly become complex. However, these complex interactions are easily characterized through temperature. I won't be able to tell you exactly how any given molecule will move, but I know how the temperature of the system will evolve, which is related to the individual motions of the molecules.

When I play chess, I don't really look several moves ahead, but I have a feel for the sort of threats available to both of us and the sort of defense available to both of us. I think the key is to have complex interactions that cannot be predicted, but make the sort of result knowable.

Quote:
The complex ways in which these resources affect each other and can be converted into each other might be a major part of how these games create their depth.


I think the resources act as a simplifying layer that feeds back into the more complicated layer. In Warcraft, I could glance over what I had and know the general power of my army, what resources I had, and my ability to produce those resources. Then I would look more closely and use resources to improve my army and production abilities and wage war with my army. Then I could glance over what I had just done and see what effect that had.

So, actually, I'd say it's the simple ways in which those resources affect each other that gives access to the depth created by the complex interactions of other aspects of the game.
Quote:Original post by Way Walker
Quote:Original post by Kylotan
1) I'm primarily thinking of complete information games, in the belief that partial information games can be considered an extension of that.

But in most games, not all information is available to you. You mention Magic and Civilization, both of which have hidden information and random elements (which can be considered a sort of hidden information). Even chess has some in that you're playing a human opponent.


When considering optimal strategies, it's usually assumed that the other player behaves entirely rationally, and the fact that they are human and fallible is ignored. As for my M:TG and Civ comparisons, they're really examples of other aspects rather than this one.

Quote:3) I'd say yes - if there is no strategy more likely to yield victory, then it's optimal, even if the utility of that strategy is equal to that of the worst strategy. Obviously a game with these properties wouldn't be much fun.


I didn't mean all strategies had to be equal. Maybe several strategies are "best", but they're only a little better than some and decently better than others. This game can still be fun.

I meant that a game where the optimal strategy was also the least optimal strategy wouldn't be fun.

Quote:If you had money riding on that 51/49 coin flip, how much more comfortable would you be if your opponent chose the suboptimal strategy?

I think that these near-optimal strategies are very important in that they are hard to distinguish from optimal strategies both by the player and statistically. The distinction is mostly academic, unless you're a casino.

Also, the existence of near-optimal strategies makes gameplay more interesting in that it's easier to recover from mistakes and the game can still be interesting for the better player (i.e. the one with the better strategy).


I'm not entirely sure what point you're making here though. Obviously it's useful to have many possible strategies that are non-trivial to rank; that is sort of what I've been saying from the very first post. But if it's trivial to rank them, then the game is broken. If you know which strategy is best then why would you ever choose a different one? Why would you choose the 49% rather than the 51%? The only reason could be for the bluffing aspect.

Quote:Under the current game state, you have a choice of three moves (1, 2, 3) and, after you move, your opponent will make one of four moves (A, B, C, D). Let's say that 1 is best if your opponent choose A or B, 2 is best if he chooses C, and 3 is best against D. Let's also say that A and B are fairly bad choices no likely to be made by someone familiar with the game, C is technically the optimal move, but, knowing your opponent, he's most likely to choose D. That is, 1 counters the largest number of moves, 2 counters your opponent's optimal strategy, and 3 counters what the metagame says your opponent will do. So, is 1, 2, or 3 your optimal strategy?


The 'optimal' strategy always assumes that the other person knows what they are doing. Therefore 'knowing your opponent' wouldn't be a factor, and the optimal strategy above depends on the almost infinite number of future moves in response.

This example is much like the quick-mate chess example posted above (the Scholar's Mate) - most of the moves leading up to it are poor if you're expecting a long game, but good if your opponent was unable to foresee the combination. I think almost all games feature this to some extent - there are always attacks that a fallible opponent might overlook or misunderstand the defence for.

Obviously in real games, it often is a factor. How do we represent that, short of making a complex system and hoping that these various discrete routes to victory eventually emerge from the interaction between the systems?

Quote:When I play chess, I don't really look several moves ahead, but I have a feel for the sort of threats available to both of us and the sort of defense available to both of us. I think the key is to have complex interactions that cannot be predicted, but make the sort of result knowable.


I agree. You should be able to get a feel for what is going on, without necessarily knowing every individual interaction. However, every interaction you don't understand is a potential way for your opponent to exploit you, so there's always more to learn. So making those interact in a useful way is important. I'm just at a loss for any sort of formal system describing that.
Quote:Original post by Kylotan
The key thing I'm trying to get at, is that it should be possible to design a game that has enough depth to stop players from being able to see the best thing to do very easily, yet without relying on random opponent behaviour - whether that be by dice rolls or by human unpredictability - to create that depth.


What if the rules of the game are themselves subject to random alterations? For instance, if you have a turn based strategy game where there is no fog of war, then it necessarily follows (from the Gale-Stewart theorem) that one player or the other has a perfect strategy available. However, the "rules" of the game include the properties of the map you are playing on, so that the perfect strategy could be different (and belong to a different player) for each map. Allowing random maps creates a random alteration of the rules for each game, stopping a player from knowing the perfect strategy in hindsight.

This appears a lot in puzzle games. For example, in Tetris, given any state and a next piece, it is pretty easy for anybody to work out the best place to put it. However, the key characteristic of the game is to force the player into working this out on the spot rather than relying on memorised stock manoeuvres.
Quote:There's not really much else to it - if certain strategies cost more to choose, or cost more to switch between, that factors in, but there's still ultimately an optimal metastrategy which, if employed once against an infinite number of random players, will win more often than any other metastrategy. The only hard part is gathering the data on 'costs'.

If we are talking about random opponents, then for chess the clearly "optimal" strategy is the Scholar's Mate described above, as a random opponent is very unlikely to counter it. There are thousands of potential strategies and only a handful counter it, so using that strategy is the best bet: the opponent is unlikely to play the counter, and it finishes the game quickly, limiting the chance that your opponent can mount an attack on you (there are few quicker ways to force a mate).

So under your own criteria for deciding what is an optimal strategy (or even metastrategy), chess is an exceedingly shallow game, as there is one very simple strategy that beats any set of random players.

However, what makes chess so much better is the human factor: the fact that players can bluff, adapt their strategies, spot simple tricks (like the Scholar's Mate), etc.

Statistically, if we assume players have a complete write-up of all possible strategies, White has the better chance to win, as the first move tends to put Black on the defensive (in practice White scores noticeably better than Black across large databases of recorded games). This has not been completely proven (no one has calculated all possible outcomes of all possible chess games using all possible moves), so it might be that chess is a forced draw, or that some yet-to-be-discovered lines even this out. But as far as we know, White has the advantage.

If we therefore eliminate the human factor, then chess may be a broken game: White wins. If players only ever made the best possible move in chess, Black would never win.

But then why do people still play chess? Because of the human factors. Fallible and sneaky humans are in charge of playing the game, so they can do things that would not be "optimal" and, by doing so, gain or lose the advantage.

Also, let's take poker. This is a game that, if only the rules and their interactions are studied (that is, if we assume that players only make perfect decisions), is purely a game of chance.

But why is it, then, that there can be good poker players and bad poker players? Again, it's the human element. Humans can bluff, make mistakes, take risks, get scared, etc. It is this human element that makes poker a game worth playing, that makes it a complex game.

People have used game theory to work out "perfect" strategies for simplified forms of poker. However, the player who works out such a strategy can still lose to a good poker player. Why? Again, the human ability to bluff, make mistakes and take risks can throw the "perfect" or "optimal" strategies off and beat them.

This just shows that game theory does not always map onto game practice (that is, real games played for fun or profit).

By considering the player and their psychology as part of the game system, this can massively increase the complexity of a game. This is why multiplayer games are so popular. The players like the added complexity and unpredictability (not just randomness) that playing real humans confers.

Quote:Most of the rest you talk about involves anticipating future actions and reactions, which is outside the scope of what I'm interested in, because although it is interesting it its own right, it really just ignores the issue by substituting temporal changes in choices for initial breadth of choice.

Most real games are repeated "games", therefore future actions and reactions are an important aspect of any strategy.

In an RTS, if you produce lots of unit type A (which is beaten by unit type B), and the enemy knows this (perhaps from an earlier skirmish), then they will build unit type B. At the point when the second player learns of the first player's choice, the first player must consider the second player's future actions (and may choose to build unit type C to beat the unit type B that the second player is likely to produce).
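This counter-picking loop can be sketched directly. The unit names A/B/C follow the text; the assumption that A in turn beats C (closing the cycle) is mine, added to make the triangle complete:

```python
# Iterated counter-picking: each side responds to the unit type it
# last scouted. B beats A and C beats B per the text; A beating C is
# an added assumption that closes the counter cycle.
COUNTER = {"A": "B", "B": "C", "C": "A"}  # COUNTER[x] beats x

def best_responses(start, rounds):
    """Each round, switch production to whatever beats the last scouted unit."""
    seen, history = start, []
    for _ in range(rounds):
        seen = COUNTER[seen]
        history.append(seen)
    return history

# Naive best-responding never settles: the choices cycle forever.
print(best_responses("A", 6))  # ['B', 'C', 'A', 'B', 'C', 'A']
```

The cycle is the point: once information about past choices feeds into future ones, there is no stable "build this" answer, only an endless anticipation game.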

Quote:I'm trying to avoid any metagaming aspects, especially iterative versions of a game (eg. iterative RPS, or iterative Prisoners' Dilemma), because I don't think they are relevant here

The fact that a game is repeated adds a lot of complexity (which is what this thread is about). So, if you are looking for an easy way to add complexity to a game, here is the simplest: make future strategic decisions depend to some degree on information learned from earlier run-throughs of the game. In other words, make the game iterated.
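A minimal sketch of what "learning from earlier run-throughs" looks like, using iterated rock-paper-scissors (the tie-breaking choices here, like opening with rock, are arbitrary assumptions of mine):

```python
from collections import Counter

# The value beats the key.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def adaptive_move(history):
    """Counter the opponent's most common past move; open with rock."""
    if not history:
        return "rock"
    most_common = Counter(history).most_common(1)[0][0]
    return BEATS[most_common]

# A habitual rock-player gets punished once history accumulates.
history = ["rock", "rock", "scissors", "rock"]
print(adaptive_move(history))  # "paper": exploits the rock habit
```

Against a one-shot game this information simply doesn't exist; iteration is what creates the strategic layer of modeling, exploiting, and deliberately misleading your opponent.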

Quote:interaction between multiple aspects seems to be the key here, as someone mentioned above, but making that interaction non-trivial is important too.

Looking at this concept, I think the best way to make the interactions non-trivial is to make the actions of individual components (units, choices, etc.) only indirectly related to the strategy. This is what chess does.

In chess, the move of each individual piece does not directly relate to the overall strategy. Rather, the interaction of the pieces (their placement on the board relative to each other) determines the strategy.

Contrast this with a simplified RTS that has a rock/paper/scissors relationship between its units. Because the strategy is just the proportions in which you build them, the individual pieces (the units in this case) directly relate to the strategy.

Creating a layer of indirection between the components used to build a strategy and the strategy itself makes it harder for a player to find an optimal or perfect strategy, as the same set of components could be part of many different strategies.

Also, because components relate to the strategy only indirectly, changing some of the pieces doesn't necessarily change your strategy. In chess, pieces not directly involved in an attack can still be moved without affecting that particular attack strategy (and may be part of a separate or possibly complementary strategy). The move might be positive, negative, or completely neutral to the overall game, and still have no effect on the current strategy being used (and a piece can occupy several different positions with the same strategic effect).

If the pieces in chess were directly linked to the strategy, then a strategy would be described only by the positions of the pieces on the board, and changing any piece would change the strategy.
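One way to picture this layer of indirection in code (the unit names and thresholds here are hypothetical): classify a strategy from the *relationships* between components and their context, not from the raw component list, so the same components can serve different strategies.

```python
def read_strategy(composition, army_position):
    """Classify an army into a high-level strategy.

    The mapping uses relationships (unit mix plus position), not the raw
    components alone, so identical compositions can mean different things.
    """
    aggression = composition.get("raider", 0) + composition.get("artillery", 0)
    defense = composition.get("turret", 0) + composition.get("wall", 0)
    if army_position == "enemy_base" and aggression >= defense:
        return "rush"
    if army_position == "own_base" and defense >= aggression:
        return "turtle"
    return "contain"

army = {"raider": 6, "turret": 2}
print(read_strategy(army, "enemy_base"))  # "rush"
print(read_strategy(army, "own_base"))    # "contain" -- same army, new strategy
```

An opponent who scouts only the composition learns less than one who also reads the context, which is exactly the opacity the indirection buys you.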
Quote:Original post by Kylotan
I was thinking recently about one way of making a game fun, and I think that one way is to make it so that it is possible to estimate the quality of a chosen strategy, but not with 100% accuracy. If it's always possible to reject the wrong strategies then it becomes quite boring (eg. tic-tac-toe), and if it's impossible to pick a better strategy (eg. rock-paper-scissors) then it's also not much fun.

So there's a sweet spot in the middle - but how do you create it?


I'm just gonna approach this complex topic from a simple angle and cite Worms Armageddon as an example.

In Worms, even though each weapon has a clearly defined damage level and very specific effects, there is practically no limit on how they can be used, and what your opponent may do in return.

It then becomes impossible to work out on paper the best strategy for any given situation. This, to me, is the "sweet spot" in games you described, but it also shows the key to achieving it: I believe you must embrace chaos (physics engines always help in this department) and forget about trying to balance everything with 100% certainty in the first place.

Sure, this may leave your game open to exploitation once the "optimum strategies" are discovered, but you can always adjust the rules later via a patch.
