Archived

This topic is now archived and is closed to further replies.

Oluseyi

Auxiliary Strategic Information


Oluseyi    2108
I couldn't decide which of the recent RTS threads to post this in, so I started a new one. I was browsing a short while ago and came across Towards Articulate Game Engines. It struck me as being very similar to bishop_pass' annotation/annotated cellar experiment, as well as resonating with some recent discussions on unit tactics in RTS games. Some of you may have read it while others may not (I, for example, have only skimmed the surface - I have a full multimedia assignment due today [Friday] for my AI in Narrative... class, and I haven't started yet!) Anyway, I would be interested in hearing reactions - just sitting back and learning as you deconstruct and analyze the article. Thanks.

I wanna work for Microsoft!
[ GDNet Start Here | GDNet Search Tool | GDNet FAQ | MS RTFM [MSDN] | SGI STL Docs | Google! ]
Thanks to Kylotan for the idea!

Dauntless    314
Let me see if I understood this article correctly, as a lot of it seemed to fly over my head.

Basically the objects in the game need to have a conceptual understanding of the game world. In other words, if I have a platoon of tanks, it needs to be able to understand certain things like whether it is in danger, or if it can see a vulnerability in the enemy''s position.

But in order to do that, would the objects need to have some sort of awareness of what other objects are and are capable of? Almost like having a database against which it can compare what it knows, using its sensory capabilities to take what it has empirically viewed and translate that raw data into "conceptual information"?

He goes on to differentiate quantitative knowledge vs. qualitative knowledge, and says that the key to having qualitative knowledge is having a conceptual understanding of the domain problem at hand. Once this conceptual understanding is fed into the compiler, it generates both quantitative analysis (number crunching) and the "self explanatory authoring" that provides an object's "intelligence" in the appropriate behavior or choice of action.

My question is: how do you teach objects this conceptual understanding? How do you feed an object the information it needs to be able to understand the priorities and interrelated links that tie objects together? For example, let's say that I sent a recon unit to what I think is my enemy's right flank. My recon unit "sees" that his flank is virtually unguarded, and more to the point, sees a valuable supply area that is unguarded. How does the recon unit gain the "conceptual knowledge" that tells it that:

A) it is in no immediate danger
B) the supply depot is a valuable target
C) it will help the battle effort by destroying the unguarded depot?

I don't think the author tells how this is really possible. I think that everything else he says makes sense, but he makes it sound so easy. Admittedly, my programming skills are incredibly basic, so a lot of this I'm trying to comprehend, but I don't understand how some of the things he says can be done...at least from the basic description that he gives. In his little diagram, he even points out that there is a "Domain Theory", which I presume is the "conceptual knowledge base", and the scenario. So in my above example, the scenario is the recon unit, and the Domain Theory is the a), b) and c). These two go through his SIMGEN compiler and produce a Qualitative Analysis (in my example, "hey, why not blow up the depot while we're out here, since it will be beneficial?").

The scenario is easy to explain, but what about the "Domain Theory" and "conceptual knowledge"? Like I said, I would imagine that in order to do that, objects would have to be able to recognize what other objects are, what those objects do, and how they relate to environmental factors. Then they could formulate "judgements". That sounds like a very tall order to me, but admittedly one that I hope comes about.

Guest Anonymous Poster
I haven't read the article yet, but going by your description, Dauntless, I thought there was AI that did this sort of thing. I'm no expert on AI, so I don't remember what it's called.

bishop_pass    109
Well, I glanced at the article, but I really didn't read it completely. That doesn't mean I won't or that I don't find such things interesting. Quite the contrary: I am a big advocate of academic AI and giving common sense reasoning abilities to computers, especially within the context of games. Given that, let me try and answer Dauntless' question about how conceptual understanding arises with my take on it all.

Conceptual understanding arises from inferencing about perceptions and existing knowledge. It is implicit knowledge made explicitly available. Let's look at the microtheory of family relationships as an example.

Let's assume that we already know about mothers and fathers and grandparents and children and aging. Let's assume that we also know particular existing relationships, such as whom Mary's parents are. Now, let's assume we just recently learned that Mary has a daughter named Susan and Mary is the mother of Susan.

Due to our conceptual understanding of this domain, we can now infer all of the following:

Susan is female.
Susan is younger than Mary.
Susan is younger than the parents of Mary.
Susan is the granddaughter of the parents of Mary.
Susan is the child of Mary.
Susan is the grandchild of Mary's parents.
Mary's parents are older than Susan.
Susan is of the same species as Mary.
Susan is of the same species as Mary's parents.
Susan has a father.
Susan's father would be male.
Susan has a mother.
Mary is female.

All of these facts are inferred because of the domain knowledge about family relationships, age, gender, and so on. A new fact, no matter how small, triggers a great deal of new knowledge. Now, what if one of the above inferred facts were in fact the one nugget of knowledge which could save our lives? By having the domain knowledge and the ability to make the inferences, this knowledge is available.
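To make the mechanism concrete, here is a minimal forward-chaining sketch in Python. It is my own illustration, not code from the article; the rule set is a tiny, hypothetical slice of the family microtheory, just enough to show one new fact triggering a cascade:

```python
# A tiny forward-chaining sketch of the family microtheory above.
# Facts are tuples like ("mother", "Mary", "Susan"); rules fire
# repeatedly until no new facts can be derived (a fixed point).

def infer(facts):
    """Apply domain rules until no new facts appear."""
    kb = set(facts)
    while True:
        new = set()
        for fact in kb:
            if fact[0] == "mother":        # mother(x, y)
                _, x, y = fact
                new |= {("female", x), ("parent", x, y)}
            elif fact[0] == "parent":      # parent(x, y)
                _, x, y = fact
                new |= {("younger", y, x), ("same_species", x, y)}
        if new <= kb:                      # fixed point reached
            return kb
        kb |= new

# One new fact...
kb = infer({("mother", "Mary", "Susan")})
# ...triggers a cascade of inferred knowledge.
assert ("female", "Mary") in kb           # Mary is female
assert ("younger", "Susan", "Mary") in kb # Susan is younger than Mary
```

A real system would of course represent the rules declaratively rather than as hard-coded branches, but the shape of the computation is the same.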

This is why I have been continually advocating the research of semantic nets, predicate calculus, first order logic, and resolution refutation here on these boards for quite some time. Such a system would enable exactly the above, plus the ability for agents to catch contradictions. The system would also enable an authoring system where the common sense coder would not be able to enter logically inconsistent rules. Such a system, augmented with non-monotonic logic, could be powerful. Unfortunately, resolution refutation doesn't scale well, but research into partitioning common sense knowledge bases using a vertex minimum cut is proving successful.

As an aside, let me discuss non-monotonic logic and one way of implementing it. Look at the truth table below:

T = True absolutely
TD = True by default
U = Unknown
FD = False by default
F = False absolutely
C = Contradictory

     |  T   TD   U   FD   F
-----+-----------------------
  T  |  T    T   T    T   C
 TD  |  T   TD  TD    U   F
  U  |  T   TD   U   FD   F
 FD  |  T    U  FD   FD   F
  F  |  C    F   F    F   F



The above truth table is applicable for determining which fact has precedence over another in the face of conflicting truth values. Look at the example axiom below.

If x is the mother of y AND z is the spouse of x THEN z is the father of y.

If we attach a truth value to this axiom of TD, meaning true by default, it can be overridden by any rule which conflicts with it which has a truth value of T, meaning true absolutely. For example, the two rules below have truth values of T:

If x is the parent of y and x is male, THEN x is the father of y.
Everyone has exactly one father.

Those two rules, if fired, produce T, which would override the TD produced by the first rule about the spouse, thus enabling non-monotonic logic.

Another example might be if we learned that the spouse of Mary was NOT the father of Mary's child. In other words, we learned the fact about z being the father of y with a truth value of F. Well, if we look at the truth table, we see that F gets precedence over TD.

Of course, what a rule implies is only as strong as its premises. Consider the rule below:

If x is the mother of y, x is the parent of y.

That rule has a truth value of T. However, if the premise, which is (x is the mother of y), only has a truth value of TD, the inference, which is (x is the parent of y), only gets a truth value of TD. So, if we know absolutely that Mary is the mother of Susan, then we know absolutely that Mary is the parent of Susan. If, on the other hand, our knowledge about Mary being the mother of Susan is sketchy, then this propagates to our knowledge of Mary being the parent of Susan, giving that fact a truth value of TD.
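As a hedged sketch of how the precedence table might be implemented (my own illustration, not a full non-monotonic reasoner): a simple lookup that combines two truth values asserted for the same proposition, so default conclusions yield to absolute ones:

```python
# Hypothetical implementation of the precedence table above: given two
# truth values asserted for the same fact, return the value that wins.
# T/F are absolute, TD/FD are defaults, U is unknown, C is contradictory.

TABLE = {
    "T":  {"T": "T", "TD": "T",  "U": "T",  "FD": "T",  "F": "C"},
    "TD": {"T": "T", "TD": "TD", "U": "TD", "FD": "U",  "F": "F"},
    "U":  {"T": "T", "TD": "TD", "U": "U",  "FD": "FD", "F": "F"},
    "FD": {"T": "T", "TD": "U",  "U": "FD", "FD": "FD", "F": "F"},
    "F":  {"T": "C", "TD": "F",  "U": "F",  "FD": "F",  "F": "F"},
}

def combine(a, b):
    """Resolve two truth values for the same proposition."""
    return TABLE[a][b]

# The spouse rule concluded father(z, y) with value TD; learning F
# (he is absolutely not the father) overrides the default.
assert combine("TD", "F") == "F"
# Two absolute but opposite assertions flag a contradiction.
assert combine("T", "F") == "C"
```

Note how conflicting defaults (TD vs FD) collapse to U rather than C: defaults disagreeing just means we don't know, whereas absolute facts disagreeing means the knowledge base itself is inconsistent.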




Edited by - bishop_pass on February 9, 2002 2:42:43 AM

bishop_pass    109
This thread is not getting the attention it deserves. True AI requires a conceptual understanding of the domain it operates in. Think of it as common sense.

The article Oluseyi mentioned discusses battle tactics, and how an effective agent in strategic battles needs a conceptual understanding of that domain. I would appreciate it if readers looked at my post above for detail on the aspects of conceptual understanding, but for flavor, I'll provide some simple examples below related to battle.

Let's say an enemy has just unleashed a planet killer bomb on Vega 4. The princess from Orion 2 was on Vega 4 to attend the Festival of Colors as a sign of goodwill to the people of Vega 4 at the time the incident happened. The planet is destroyed. How does the general commanding the military powers opposing this enemy use conceptual understanding?

The general knew beforehand that the princess was visiting Vega 4. Later, he learns that Vega 4 was destroyed and by whom. He knows that when planets are destroyed, everyone on the planet dies as a result of murder. He is then able to reason that the princess was murdered, and he knows by whom. He knows the political ramifications of this, and seeks to gain an ally by seeking out the governing body of Orion 2. Only through a conceptual understanding of the situation, including murder, blowing up planets, royalty, etc. does all of this occur.




Edited by - bishop_pass on February 9, 2002 12:27:04 AM

bishop_pass    109
Yeah, well I tried to elaborate on the whole theme. I would have hoped that someone would have entered the thread and either added something, argued about something, or asked about something.

bishop_pass    109
Well, things that go *bump* in the night are all good and well, but how about some content as well?

Yes, that means you specifically. Must I carry the burden of this thread all by my lonesome? Oluseyi introduced it, and I built on it. Now, it's everybody else's turn.

Feedback? Questions? Ideas? Criticisms? Discussion? A synopsis?


Dauntless    314
Actually, I'm still trying to digest a lot of this. I think a part of the problem is there aren't too many real programmers here...myself included (though I could be mistaken).

My level of programming knowledge is very abstract and very much of a theoretical rather than a practical nature. But logic is logic, and if it's spoken in enough one-syllable words, I'll eventually get it.

AI is actually the most intriguing part of programming for me. Since I really want to make a strategy game, and with my concepts in mind, my game would have to have SUPERB AI. I was browsing www.gameai.com for some tidbits and to familiarize myself with some AI terminology (I'm still not really sure what the difference is between genetic algorithms, neural networks and A-Life, for example). Well, anyway, I stumbled on a German government sponsored site for game AI, believe it or not. The researchers there were going over autonomous agents, and at first it looked like it held some promise for what I wanted to do.

However, the more I looked at it, I realized it was totally unsuitable for my game. Indeed, autonomous agents as I understood it seemed much more geared to say, Bots for FPS style games. There was no sort of information exchange between the Agents, nor was there any sort of collective planning and organization...both critical elements to a strategy style game.

I'm interested in this topic precisely because I want to do two things. I want to break down units to their smallest level (I call it an OU, for Organized Unit), and each of these will have a Leader. So there must be OU intelligence, and there must be Leader intelligence. Perhaps OU intelligence is a bit misleading; maybe I should say OU awareness. The OU will react to certain events, but the LEADER is the brains. And more importantly, the OU must be able to pass information that it recognizes is important to the Leader, and the Leader in turn must pass it to HIS leader. Up the chain of command it goes until the information is passed to what I term an "avatar", which is a physical representation of the player on the map.
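A rough sketch of this reporting chain (all names and the importance threshold are hypothetical, purely to illustrate observations filtering upward rather than being God-like):

```python
# Hypothetical OU/Leader reporting chain: each leader records what it is
# told and forwards only high-importance intelligence to its superior,
# so the avatar never sees routine local sightings.

class Leader:
    def __init__(self, name, superior=None):
        self.name = name
        self.superior = superior
        self.inbox = []          # everything this leader has been told

    def report(self, observation, importance):
        self.inbox.append(observation)
        # Forward only important intel; routine sightings stay local.
        if self.superior is not None and importance >= 2:
            self.superior.report(observation, importance)

# Chain of command: recon OU -> captain -> player's avatar.
avatar = Leader("Avatar")
captain = Leader("Captain", superior=avatar)
scout = Leader("Recon OU", superior=captain)

scout.report("unguarded supply depot on right flank", importance=3)
scout.report("empty ridgeline", importance=1)

assert "unguarded supply depot on right flank" in avatar.inbox
assert "empty ridgeline" not in avatar.inbox  # stayed with the scout
```

The interesting design question, which this toy version dodges, is how the scout decides the importance number in the first place; that is exactly the conceptual understanding the article is about.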

In other words, information and decision making are neither automatic nor God-like. The Leaders of the OUs must have a comprehensive understanding of what is important in the context of the standing orders given to them. For example, let's say my Avatar sends an order down the chain of command to a Leader to have his OU "patrol the southern ridgeline, and draw enemy out".

There are three key concepts here:
Patrol - an action which is primarily reconnaissance
Ridgeline - a geographical location
Draw Out - a complex action wherein a unit exposes itself to bait the enemy out.

How does it understand these concepts? As Bishop said, you can give the Leader some basic rules, and then let it formulaically determine interrelationships from the empirical information it can obtain. However, I think there is a limit here. What happens if you don't give the Leader enough initial information to "create a logical formula"? In other words, the knowledge is somewhat hardwired, and the unit is not adaptive to situations it has never encountered before or has no initial information on.

I know that neural nets are designed to make programs "learn" about certain situations, so perhaps this is something I should be looking into?

Going back to the original article, I'm still wondering how "Domain Theory" and "conceptual understanding" are done. I haven't studied heuristics since my high school calculus days, but I think there has to be some kind of "problem solving" set that is given to programs. For example, let's say I ask a human what he thinks is true: do more English words begin with the letter K, or have K as their third letter? Chances are, he will answer "start with K". The answer is actually the other one. Our human heuristics use "shortcuts" to try to provide us with answers that seem consistent with the world around us, but they often fail. Perhaps another example would be to show a child an apple, a pear and a nectarine. Then explain that these are all fruits because they have seeds inside them. Then show the child a tomato and say that it too has seeds inside it. Finally, ask the child if he thinks a tomato is a fruit (I know some adults that can't accept it). I think any programs made with built-in heuristics or monotonic logic (still not sure what that means..."one shape" logic?) will have these same limitations.

Oluseyi    2108
Conceptual understanding could take on many different forms on a domain-specific basis. In general, we can think of it as knowledge of fundamental principles and their possible outcomes. Based on this, we can analyze a given situation for the occurrence of any phenomena that conform to the basic principles (this is a clumsy choice of words, but I'll concretize the discussion in a second) and anticipate the logically pursuant outcomes.

Case in point: basketball "simulations". Current games display an irritating lack of awareness of time. If the in-game players were given conceptual understanding, then they would know that allowing the shot clock to expire would lead to a violation and a possession change as a direct consequence. Similarly, they would realize that allowing the match clock to expire while down would lead to a loss of the game, and as such they should double their efforts to catch up as time winds down - thereby implicitly simulating "increased pressure" (which could then be put on the box cover as another selling point). On the other hand, playing hard with 15 seconds to go and the team down by 23 is sort of pointless, so the players would need a conceptual understanding of how far they could push the pace of the game.

The preceding paragraph displays two different types of qualitative information - one stimulating activity while the other restrains it. These two pieces of information need to interact on a context-sensitive basis (in this case, the context being how much time is left) to result in truly intelligent and immersive behavior.
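As a sketch of how those two interacting pieces of information might be encoded (the recovery formula is entirely made up for illustration; it is not a real basketball model):

```python
# Hypothetical tempo rule combining the two qualitative facts above:
# a deficit stimulates activity, but a hopeless deficit restrains it.

def choose_tempo(down_by, seconds_left):
    """Pick a tempo from conceptual knowledge of clock and score.
    The recovery estimate (one 3-point possession per ~20 seconds)
    is an arbitrary illustrative heuristic."""
    if down_by <= 0:
        return "protect_lead"      # ahead or tied: restrain activity
    recoverable = 3 * (seconds_left / 20 + 1)
    if down_by <= recoverable:
        return "press"             # deficit is catchable: push the pace
    return "play_out"              # hopeless deficit: don't fake urgency

# Down 3 with 30 seconds left: press.
assert choose_tempo(3, 30) == "press"
# Down 23 with 15 seconds left: pointless to press.
assert choose_tempo(23, 15) == "play_out"
```

The point is not the particular formula but that the context (time remaining) selects which of the two qualitative rules dominates.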

I could go on, but I'd just be reiterating what others have said, so I'm off to build a conceptual understanding testbed within the limited context of a basketball game. Wish me luck!


Guest Anonymous Poster
It is easy to wave your hands around and say that conceptual understanding is better. Yes, that is true. Just like "better AI" is better than "worse AI".

However, the article didn't really give any details about HOW this scheme would work, other than vague references to their existing code that pointed out it would be difficult to generalize.

Yeah, sure, it sounds great. But there was no meat. As a game designer, how do I go from nothing to having an AI with a conceptual understanding of anything? Essentially all the article says is that they have something that works for some specific thing (whatever it was, I forget; not very exciting stuff) and that it would be nice if it could be applied to games as well.

Essentially the entire article may as well have read:

If the AI in a game had a conceptual understanding of the game, it wouldn't act so damn dumb, and that would be a good thing.

So, it's true, but not that informative. I suspect they have more there but perhaps didn't want to get too technical. I would be more impressed if they had applied what they had to even a rudimentary game.

Oluseyi    2108
AP:
So you're saying that if you don't see code or an example, then the resource is useless? Heavens, man! Allow the article to stimulate your creativity and begin to think of applications and means of implementing the system yourself. Academic papers are very often theoretical and not directly applicable to current commercial products, but in a few years their techniques often start to find their way into consumer products. Just take a look at cutting edge graphics research - radiosity is only just starting to show up in games, in a very approximate form.

The perils of a closed mind.


liquiddark    350
I think maybe the problem comes in here when most of us don't have a really good appreciation for the nuances of domain theory, which is really the heart of the article. It's a basic step in the evolution of design sense to say you need a higher-level behaviour to manifest; it's quite another to provide a method for addressing that need, and the author falls short of demonstrating anything compelling in this arena. To my mind, the entire article can be summed up as:

"Domain theory is a useful tool for abstract reasoning in games. Self-explanatory simulators are a useful framework for implementing domain-theoretic reasoning for games. Self-explanatory simulators are dynamics engines which have domain-theoretic facts embedded within their simulations."

The recommendation is nice; HOWEVER, he skips the really hard step, which is creating the libraries of abstractions in the first place. I don't really feel I am further ahead as a result. As far as I can tell, given the prerequisites he lists, I can use model-based expert systems just as easily.

Is there something I'm missing, perhaps?

ld

Oluseyi    2108
quote:
Original post by liquiddark
Is there something I'm missing, perhaps?

Specificity. The article is bland, and can be summed up exactly as you and the Anonymous Poster preceding you have, but to gain something tangible from it, an attempt must be made to restrict its application to a confined domain of moderate complexity - such as my sports simulation example (I'm sure better examples exist). Within this specific domain, the application of these principles becomes much clearer (especially if one is extremely familiar with the given domain) and the results become more tangible. Possible methods of application then begin to spring to mind...


liquiddark    350
But what advantages do these systems offer me? This is the question which vexes.

I get into this quandary: In order to use the proposed system, I have to embed my high-level reasoning code into my low-level simulation routines. But in doing so I violate just about every software engineering principle I have at my disposal. Alternatively, if I decouple the two systems and simply use "hooks" to activate the high-level reasoning, how am I improving on the well-developed paradigm of expert systems?
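For concreteness, the "hooks" alternative might look like this (a hypothetical sketch; the event names and classes are made up, not from the article): the low-level simulation stays ignorant of reasoning and just publishes events, while the high-level reasoner subscribes to them.

```python
# Hypothetical decoupling via hooks: the simulator publishes events,
# the domain-theoretic reasoner subscribes. Neither knows the other's
# internals, so software engineering boundaries stay intact.

class Simulation:
    def __init__(self):
        self.hooks = []

    def on_event(self, callback):
        """Register a subscriber for simulation events."""
        self.hooks.append(callback)

    def step(self, event):
        # Run low-level dynamics here, then notify subscribers.
        for hook in self.hooks:
            hook(event)

class Reasoner:
    def __init__(self):
        self.conclusions = []

    def observe(self, event):
        # Domain-theoretic rules live here, outside the simulator.
        if event == "shot_clock_expired":
            self.conclusions.append("possession_change")

sim = Simulation()
brain = Reasoner()
sim.on_event(brain.observe)
sim.step("shot_clock_expired")

assert brain.conclusions == ["possession_change"]
```

Which, as you say, looks an awful lot like a conventional expert system bolted onto a simulator rather than the embedded approach the article describes; the sketch sharpens the question rather than answering it.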

Is my difficulty any clearer?

ld

Oluseyi    2108
quote:
Original post by liquiddark
Is my difficulty any clearer?

Um, no.

I can't give any well-reasoned advice or suggestions on implementation, as I am myself trying out some elementary methods at the moment. If I find anything that is coherent and structurally sound, I'll be sure to post it (and perhaps submit an article on the topic).

