Auxiliary Strategic Information

Started by
16 comments, last by Oluseyi 22 years, 2 months ago
Conceptual understanding could take on many different forms on a domain-specific basis. In general, we can think of it as knowledge of fundamental principles and their possible outcomes. Based on this we can analyze a given situation for the occurrence of any phenomena that conform to the basic principles (this is a clumsy choice of words, but I'll concretize the discussion in a second) and anticipate the outcomes that logically follow.

Case in point: basketball "simulations". Current games display an irritating lack of awareness of time. If the in-game players were given conceptual understanding, then they would know that allowing the shot clock to expire would lead to a violation and a possession change as a direct consequence. Similarly, they would realize that allowing the match clock to expire while down would lead to a loss of the game, and as such should double their efforts to catch up as time winds down - thereby implicitly simulating "increased pressure" (which could then be put on the box cover as another selling point). On the other hand, playing hard with 15 seconds to go and the team down by 23 is sort of pointless, so the players would need a conceptual understanding of how far they could push the pace of the game.

The preceding paragraph displays two different types of qualitative information - one stimulating activity while the other restrains it. These two pieces of information need to interact on a context-sensitive basis (in this case the context being how much time was left) to result in truly intelligent and immersive behavior.
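To make that interaction concrete, here is a minimal sketch of how those two pieces of qualitative information might combine into a single effort level. The function name, the "3 points per 15 seconds" recovery heuristic, and the numbers are all invented for illustration - the point is only the shape of the logic: urgency rises as the clock winds down while trailing, but collapses to zero once the deficit is no longer realistically recoverable.

```python
def effort_level(seconds_left: float, point_deficit: int) -> float:
    """Return an effort multiplier in [0.0, 2.0] for a trailing team.

    Made-up heuristic: a trailing team can realistically make up
    about 3 points per 15 seconds of remaining play.
    """
    if point_deficit <= 0:
        return 1.0  # leading or tied: play at normal pace
    if seconds_left <= 0:
        return 0.0  # game over
    max_recoverable = 3.0 * (seconds_left / 15.0)
    if point_deficit > max_recoverable:
        # Down 23 with 15 seconds left: conceptually pointless to press.
        return 0.0
    # Press harder as the deficit approaches the recoverable limit.
    return 1.0 + point_deficit / max_recoverable
```

Down 23 with 15 seconds on the clock yields 0.0 (the "don't bother" case), while down 3 with 15 seconds yields the maximum 2.0 - both behaviors fall out of the same two interacting rules.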

I could go on, but I'd just be reiterating what others have said, so I'm off to build a conceptual understanding testbed within the limited context of a basketball game. Wish me luck!

I wanna work for Microsoft!
[ GDNet Start Here | GDNet Search Tool | GDNet FAQ | MS RTFM [MSDN] | SGI STL Docs | Google! ]
Thanks to Kylotan for the idea!
It is easy to wave your hands around and say that conceptual understanding is better. Yes, that is true. Just like "better AI" is better than "worse AI."

However, the article didn't give any details really about HOW this scheme would work, other than vague references to their existing code that pointed out it would be difficult to generalize.

Yeah, sure, it sounds great. But there was no meat. As a game designer, how do I go from nothing to having an AI with a conceptual understanding of anything? Essentially all the article says is that they have something that works for some specific thing (whatever it was, I forget, not very exciting stuff) and that it would be nice if it could be applied to games as well.

Essentially the entire article may as well have read:

If the AI in a game had a conceptual understanding of the game, it wouldn't act so damn dumb, and that would be a good thing.

So, it's true, but not that informative. I suspect they have more there but perhaps didn't want to get too technical. I would be more impressed if they had applied what they had to any even rudimentary game.
AP:
So you're saying that if you don't see code or an example, then the resource is useless? Heavens man! Allow the article to stimulate your creativity and begin to think of applications and means of implementing the system yourself. Academic papers are very often theoretical and not directly applicable to current commercial products, but in a few years their techniques often start to find their way into consumer products. Just take a look at cutting edge graphics research - radiosity is only just starting to show up in games in a very approximate form.

The perils of a closed mind.

I think maybe the problem comes in here when most of us don't have a really good appreciation for the nuances of domain theory, which is really the heart of the article. It's a basic step in the evolution of design sense to say you need a higher-level behaviour to manifest; it's quite another to provide a method for addressing that need, and the author falls short of demonstrating anything compelling in this arena. To my mind, the entire article can be summed up as:

"Domain theory as a useful tool for abstract reasoning in games. Self-explanatory simulators are a useful framework for implementing domain-theoretic reasoning for games. Self-explanatory simulators are dynamics engines which have domain-theoretic facts embedded within their simulations."

The recommendation is nice, HOWEVER he skips the really hard step, which is creating the libraries of abstractions in the first place; I don't really feel I am further ahead as a result. As far as I can tell, given the prerequisites he lists, I can use model-based expert systems just as easily.

Is there something I'm missing, perhaps?

ld
No Excuses
quote:Original post by liquiddark
Is there something I'm missing, perhaps?

Specificity. The article is bland, and can be summed up exactly as you and the Anonymous Poster preceding have, but to gain something tangible from it an attempt must be made to restrict its application to a confined domain of moderate complexity - such as my sports simulation example (I'm sure better examples exist). Within this specific domain, the application of these principles becomes much clearer (especially if one is extremely familiar with the given domain) and the results become more tangible. Possible methods of application then begin to spring to mind...

But what advantages do these systems offer me? This is the question which vexes.

I get into this quandary: In order to use the proposed system, I have to embed my high-level reasoning code into my low-level simulation routines. But in doing so I violate just about every software engineering principle I have at my disposal. Alternatively, if I decouple the two systems and simply use "hooks" to activate the high-level reasoning, how am I improving on the well-developed paradigm of expert systems?
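The "hooks" alternative in that quandary can be sketched briefly: the low-level simulation knows nothing about reasoning and just fires named events, while a separate reasoner subscribes to them. This is only one possible reading of "hooks" (essentially an observer pattern), and every name below is invented for illustration.

```python
class Simulation:
    """Low-level simulation that reports events but never interprets them."""
    def __init__(self):
        self.hooks = {}

    def on(self, event, callback):
        self.hooks.setdefault(event, []).append(callback)

    def emit(self, event, **data):
        for callback in self.hooks.get(event, []):
            callback(**data)

    def tick(self, clock):
        # Low-level update would go here; it only announces what happened.
        if clock <= 0:
            self.emit("clock_expired", clock=clock)

class Reasoner:
    """High-level reasoning lives entirely outside the simulator."""
    def __init__(self):
        self.conclusions = []

    def handle_clock_expired(self, clock):
        self.conclusions.append("possession change")

sim = Simulation()
brain = Reasoner()
sim.on("clock_expired", brain.handle_clock_expired)
sim.tick(clock=0)
```

This keeps the two systems decoupled, but it also illustrates liquiddark's point: once the design looks like this, the reasoner is effectively a rule-driven expert system listening to events, and it is unclear what the article's approach adds over that.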

Is my difficulty any clearer?

ld
No Excuses
quote:Original post by liquiddark
Is my difficulty any clearer?

Um, no.

I can't give any well-reasoned advice or suggestions on implementation, as I am myself trying out some elementary methods at the moment. If I find anything that is coherent and structurally sound, I'll be sure to post it (and perhaps submit an article on the topic).

I don't consider it to be an implementation problem, but rather a design flaw.

Ah well, good luck

ld
No Excuses

