As a related question, which games would you think of as offering the player direct control over the game world's event probabilities? I think mostly of Cube 2: Sauerbraten, which gives the user simple tools for dynamically changing the world in a constrained way, or maybe of Second Life. I should warn that I have not played many of the games out there, so it would be especially interesting to know.
This is sort of bootleg psychology, but I was reading the following article on believable events in games...
The article basically applies the Laplace principle that "the weight of evidence for an extraordinary claim must be proportioned to its strangeness".
In this case a game player can expend a "credibility budget" to buy "weight of evidence" for weird actions, but eventually the budget runs out, so the player's actions are constrained to the "believable" path. The example used is: if the game mechanics allow it, the actor could materialize a chicken out of thin air, but then the actor would not be able to perform many other bizarre actions in the game world.
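As I read it, the mechanic could be sketched as something like the toy below. The class name, the strangeness scores, and the refuse-when-overdrawn rule are my own assumptions for illustration; the article does not specify any of them.

```python
# Toy sketch of the "credibility budget" mechanic: every action has a
# strangeness score, performing it spends that much budget, and an
# action that would overdraw the budget is simply refused.

class CredibilityBudget:
    def __init__(self, budget: float):
        self.budget = budget

    def perform(self, strangeness: float) -> bool:
        """Spend budget on an action; return False if it would overdraw."""
        if strangeness > self.budget:
            return False          # too bizarre: the world stays believable
        self.budget -= strangeness
        return True

actor = CredibilityBudget(budget=10.0)
actor.perform(8.0)   # materialize a chicken: allowed, 2.0 budget left
actor.perform(5.0)   # a second bizarre act: refused
```

Note that under this sketch the constraint is a hard wall: once the budget is spent, every strange action is refused outright, no matter what the player has come to expect.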
This does not appear to be a very good approach, however, because it does not take into account the event path (although it does constrain the player to acting within the available game mechanics). In other words, the actor might materialize a chicken out of thin air, but now the player will likely consider the actor to possess some magical abilities - and will actually expect the actor to behave more oddly in the future, not less. The actor has jumped to a different position on the field of potential actions. If the credibility budget slowly replenishes, the player might wait to spend it on one incredible action, or might spend it little by little for more believable actions.
Each action impresses a different understanding of the actor upon the player. If the player causes the actor to act magically whenever possible, the actor will be a magician. This is related to the Heise affect control model, which updates an impression formation matrix for each observer every time an event occurs. The goal of the actors is to maintain their Evaluation-Potency-Activity (EPA) placement as defined by their role and their emotional state. Role and sentiment are something like the gravitational law that keeps events on the believable path, and emotion is the response to events in the setting.
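The core quantity in Heise's model can be sketched numerically. In affect control theory, identities and behaviors carry fundamental Evaluation-Potency-Activity (EPA) sentiments, events produce transient impressions, and the cost the actor tries to minimize is the deflection between the two. The squared-distance form of deflection is standard; the EPA numbers below are invented for illustration.

```python
# Hedged sketch of deflection in Heise's affect control theory: the
# squared distance between fundamental EPA sentiments and the
# transient impressions an event produces. Low deflection means the
# event feels believable; high deflection means the impression shifts.

def deflection(fundamental, transient):
    """Sum of squared EPA differences between sentiment and impression."""
    return sum((f - t) ** 2 for f, t in zip(fundamental, transient))

# Fundamental sentiment for an identity, e.g. "shopkeeper" (made-up values).
shopkeeper = (1.5, 0.8, 0.2)

# Transient impressions after two candidate events.
after_greeting  = (1.4, 0.7, 0.3)   # close to the fundamental sentiment
after_conjuring = (0.2, 2.5, 1.8)   # a chicken out of thin air

print(deflection(shopkeeper, after_greeting))   # small: stays in role
print(deflection(shopkeeper, after_conjuring))  # large: "a magician now"
```

The chicken example falls out naturally: the conjuring event's large deflection is exactly the player's sense that the actor has jumped to a new position on the field of potential actions.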
One small point about this model is that each event is discrete, and therefore each movement on the event "river delta" is actually a quantum leap. An interesting question would be: is this the best approach, or might there be some sort of wave-based impression memory system to suit this model?
The person who wrote the article given above is pretty well known in the field of game design, and if he has not seen this sort of thing implemented, it probably has not been done well. I just wanted to write down and share these thoughts.
Did you notice that Ernie assumed a duality relation to express his "aha" experience about how to balance player freedom and designer freedom? I especially liked how you tied this to Heise's impression formation matrix. For Idyl, this cost function plays out at different grains of analysis. One of the obvious grains is in the affordance-effectivity coupling.
I reply, though, to point the way toward an answer to your question of how to deal with the discrete, greatest-lower-bound on event costs. The mathematical probability approach to this has been worked out by a researcher named Minsky. He formulated a probability amplitude description of the event path ("river delta") to express the probability cost for a particle's particular state value along the path. I believe that with a little work, you and I could calculate a corridor.
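For reference, the general shape of such a restricted ("corridor") path integral, as I understand the formalism, is an ordinary Feynman sum over paths, but taken only over paths compatible with the measured corridor. Take this as a sketch of the standard form, not a quotation; the exact weight functional on the corridor varies by measurement model.

```latex
% Amplitude for the system to follow the measured corridor [a]:
% the Feynman sum over paths q(t), restricted to the corridor.
U_{[a]}(T) \;=\; \int_{q(t)\,\in\,[a]} \mathcal{D}q\,
    \exp\!\left(\tfrac{i}{\hbar}\, S[q]\right)
```

Read for our purposes: the corridor [a] would be the believable band around the credible path, and the amplitude would price how costly it is for the actor's state to wander through it.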
Thanks for sharing this article with me. I want to work further on this with you.
Thanks for the quick reply -
Yes, now that you put it in the framework of decision theory, the idea doesn't sound novel at all, except, as you point out, when tied directly to affect control and impression formation. Maybe what gives the article its exciting veneer, besides the intimate style it is presented in, is the idea that the probability of events is dynamic and may be changed by the player, not only by the designer of the game, although the integrated change cannot exceed a constant. That is very specific. Controlled randomness, as with Perlin noise or simulated annealing, shaped by the art of the player.
In actuality, the probability of events should change in less predictable ways every time the player makes some action. The impression formation matrices of all players would change, and with them the probability that the main actor can perform actions to maintain affect control (credibility). So there is a complicated chicken-and-egg duality occurring between impression formation (perception) and event probability (action). Whether the player can effectively reflect these complicated dynamics, or whether this should be left to the designed game mechanics, might be in some way quantifiable.
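The feedback loop can be made concrete with a toy model: each performed action shifts the observer's impression of the actor, and the new impression re-weights how probable each future action looks. The action profiles, the update rate, and the distance-based weighting are all my own illustrative assumptions.

```python
# Toy sketch of the perception/action feedback loop: impressions
# re-weight action probabilities, and actions update impressions.

import math

def reweight(action_profiles, impression):
    """Actions near the current impression look in-character, so they
    get higher probability; distant (out-of-character) ones get less."""
    weights = {name: math.exp(-sum((a - i) ** 2
                                   for a, i in zip(profile, impression)))
               for name, profile in action_profiles.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def observe(impression, action_profile, rate=0.5):
    """Performing an action drags the impression toward its profile."""
    return tuple(i + rate * (a - i) for i, a in zip(impression, action_profile))

actions = {"greet": (1.0, 0.0), "conjure": (-1.0, 2.0)}
impression = (0.8, 0.1)                    # the actor seems ordinary

probs = reweight(actions, impression)      # "conjure" is very unlikely
impression = observe(impression, actions["conjure"])  # it happens anyway
probs = reweight(actions, impression)      # now it is far less surprising
```

The chicken-and-egg structure shows up directly: `reweight` depends on the impression, and `observe` depends on which action fired, so after one conjuring the magician interpretation makes further conjuring more probable, not less, matching the objection to the flat budget.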
I will start finding out what I can about Minsky. I would really like to work on these topics with you. If we came up with specifics, it could all boil down to a mathematical model that anyone could program (although it would be nice to have some sort of adventure game interface around it).
Could you clarify the second paragraph, please? What constraint were you considering when you wrote that each next player perception-action cycle (pac) would change toward a less predictable (lower-probability) event?
My inclination is to allow the change in probability density to be non-directional. The change in likelihood would be a function of how relevant the player's next state is to the player's intended state (i.e., goal path) or, in another case, how relevant the player's next step is to the designer's intended state. This would allow a graded determinism on the shared cost function between the player and the designer. For example, if the player's pac path were far from the designer's pac path, then the probability amplitude corridor (a quantum probability operator) would reduce the likelihood of that particular player's pac (lose-shift). If, on the other hand, the player's pac path were near the designer's goal path, then the probability operator would increase the likelihood of the player's pac (win-stay). The probability would be a multiplicative weight that gauged the player's experience of freedom (hence a graded determinism).
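A minimal sketch of that multiplicative weight: a factor that boosts a perception-action cycle lying near the designer's intended path (win-stay) and suppresses one that strays from it (lose-shift). The Euclidean distance, the tolerance, and the exponential shape are illustrative assumptions, not anything we have fixed yet.

```python
# Sketch of a "graded determinism" weight: a multiplicative factor on
# a pac's base probability, > 1 near the designer's path, < 1 far away.

import math

def pac_weight(player_state, designer_state, tolerance=1.0, gain=1.0):
    """Multiplicative weight on a pac's base probability: above 1.0
    within `tolerance` of the designer's path, below 1.0 beyond it."""
    distance = math.dist(player_state, designer_state)
    return math.exp(gain * (tolerance - distance))

near = pac_weight((0.1, 0.0), (0.0, 0.0))   # weight > 1: pac encouraged
far  = pac_weight((3.0, 4.0), (0.0, 0.0))   # weight < 1: pac suppressed
```

Applied multiplicatively and renormalized over the available actions, such a weight never forbids a pac outright, which seems closer to graded determinism than the hard wall of a preset credibility budget.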
Here are some references. I now realize I misspelled Mensky's name; the way I sent it to you below confuses the issue with Marvin Minsky.
Continuous Quantum Measurements and Path Integrals by MB Mensky (1993)
Advances in technology are taking the accuracy of macroscopic as well as microscopic measurements close to the quantum limit, for example, in the attempts to detect gravitational waves. Interest in continuous quantum measurements has therefore grown considerably in recent years. Continuous Quantum Measurements and Path Integrals examines these measurements using Feynman path integrals. The path integral theory is developed to provide formulae for concrete physical effects. The main conclusion drawn from the theory is that an uncertainty principle exists for processes, in addition to the familiar one for states. This implies that a continuous measurement has an optimal accuracy: a balance between inefficient error and large quantum fluctuations (quantum noise). A well-known expert in the field, the author concentrates on the physical and conceptual side of the subject rather than the mathematical.
Mensky's teacher, I believe:
Quantum Measurement by Vladimir Braginsky (1995)
This work is known as non-demolition filter theory. The formalism also traces back to the BRDF and the BSSRDF out of the Stanford computer graphics group. You showed me Henrik Jensen's work, and then another person's work we've discussed, Eric Veach's (http://www-graphics.stanford.edu/papers/veach_thesis/). I've worked this problem on the information/perception side of the path formulation.
Hmm, that sort of credible-path gutter seems to be what Ernie is doing. A preset, constant credibility budget will let you go a constant distance from the credible path, but then no further. So that is a walled gutter. Or that would be the case, except that three things (that I can find) are changing: the setting (events possible in the scene), the perceptions of the other actors (materializing a chicken may not even be noticed by a hypnotized actor), and the perceptions of the main player (who may consider her actor to be a magician now that she has seen the actor materialize a chicken).
Going back to the walled-gutter analogy, we might instead think of the credibility budget as a string attaching the main actor's state to the credible path; the string can go out so far, but then must stop, or the system breaks. We might also consider a spring analogy, where one wants the main actor's actions to be somewhat off the credible path, but not too far, so that you push and pull them. That might be a sort of double gutter, with some sort of hysteresis determining which side the player falls on (I'm thinking of the "dark side or light side" kind of story). Alternatively, the story may be more like a skier going over moguls, getting pushed this way and that, with the path not entirely determined until the end. Or maybe somewhere in between, like many staggered gutters. I guess it is up to the resources and aesthetic determinations of the game creator.
Wow! I would really like to check out those books. I will try the library at your university, if I can ever get a moment to go outside. I don't want to take up all your time, but thanks for continuing to correspond with me on this!