Original post by alexjc
Timkin: Wouldn't your latest idea also be known as a blackboard architecture?
No, that's not what I had in mind. Think of the sensor as a filter and translator rolled into one. It generates internal messages akin to the type you describe (which I call external messages), but it does so based on filtering incoming information. Presumably that information is available whenever the agent updates in the game loop, or at some other frequency, like when it actually looks at things, or stops and listens.
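To make the filter-and-translator idea concrete, here is a minimal sketch in Python. All the names (Sensor, sense, the message dictionaries) are illustrative assumptions, not an actual implementation: the sensor filters incoming external messages and translates the survivors into internal messages, and would be invoked at whatever frequency the agent updates.

```python
# Sketch of a sensor as filter + translator rolled into one.
# External messages arrive from the environment; the sensor keeps only
# those matching its predicate and translates them into internal messages.
# All names here are illustrative, not from any particular engine.

class Sensor:
    def __init__(self, predicate, translate):
        self.predicate = predicate    # which external messages this sensor notices
        self.translate = translate    # how to turn one into an internal message

    def sense(self, external_messages):
        """Filter and translate in one pass; called on each agent update."""
        return [self.translate(m) for m in external_messages if self.predicate(m)]

# Example: a hearing sensor that only notices nearby sounds.
hearing = Sensor(
    predicate=lambda m: m["type"] == "sound" and m["distance"] < 20.0,
    translate=lambda m: {"kind": "heard", "source": m["source"]},
)

percepts = hearing.sense([
    {"type": "sound", "source": "footsteps", "distance": 5.0},
    {"type": "sound", "source": "gunfire", "distance": 80.0},
    {"type": "light", "source": "muzzle flash", "distance": 5.0},
])
# Only the nearby sound survives filtering.
```

The same structure works whether the filtering happens every game-loop tick or only when the agent "stops and listens": the caller decides when to invoke sense().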
As to Dave's comments...
A bullet hitting a player is an "event". It is the function of the bullet and should notify the player.
One could argue that the player was in the way of the bullet... but your point is valid in this example. However, there are two ways to classify events with respect to agents: 1) those caused by the agent; and 2) those caused by the environment (anything external to the agent). Certainly, for the latter class of events, one might want the environment to tell the agent about the event. For the former, though, one would probably want the agent to notice the effect it has on its environment (which is particularly important in learning algorithms).
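The two event classes suggest two different flows of information, which can be sketched as follows. This is a hedged illustration under assumed names (Agent, Environment, notify, act): environment-caused events are pushed to the agent, while the agent notices the effects of its own actions by observing the environment before and after acting, which is the hook a learning algorithm would use.

```python
# Sketch of the two event classes with respect to an agent.
# 1) Agent-caused events: the agent observes the effect of its own action.
# 2) Environment-caused events: the environment notifies the agent directly.
# All names are illustrative assumptions.

class Environment:
    def __init__(self):
        self.door_open = False

    def state(self):
        return {"door_open": self.door_open}

    def apply(self, action):
        if action == "open_door":
            self.door_open = True

class Agent:
    def __init__(self):
        self.inbox = []     # environment-caused events, pushed to us
        self.observed = []  # effects of our own actions, noticed by us

    def notify(self, event):
        """Called by the environment, e.g. when a bullet hits this agent."""
        self.inbox.append(event)

    def act(self, action, environment):
        before = environment.state()
        environment.apply(action)
        after = environment.state()
        # The agent notices its effect on the environment by comparing states;
        # a learning algorithm would consume these (action, before, after) triples.
        self.observed.append((action, before, after))

env = Environment()
agent = Agent()
agent.notify({"type": "bullet_hit", "damage": 10})  # environment tells the agent
agent.act("open_door", env)                          # agent observes its own effect
```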
For that matter, "agent" and "object" are being loosely used here.
I disagree. I know exactly what I am referring to when I say agent... from the AI perspective, an agent is an entity that perceives its environment and acts in response to its perceptions. An object, on the other hand, is just a thing. Now, if you want to talk data structures and internal representations, sure, agents and walls are both 'objects'... but we're not talking about them in that way... at least, I know I wasn't!
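The distinction can be shown in a few lines. This is only a toy sketch with made-up names: a Wall is just an object holding state, while an Agent runs a perceive-act loop over its environment.

```python
# A wall is just an object: state, no perception, no action.
class Wall:
    def __init__(self, x):
        self.x = x

# An agent perceives its environment and acts in response to its perceptions.
class Agent:
    def __init__(self, x):
        self.x = x

    def perceive(self, walls):
        """Percepts: signed distances to the walls it can sense."""
        return [w.x - self.x for w in walls]

    def act(self, percepts):
        """Respond to perception: step away if a wall is too close."""
        if any(abs(d) < 1.0 for d in percepts):
            self.x -= 1.0

agent = Agent(x=0.0)
wall = Wall(x=0.5)
agent.act(agent.perceive([wall]))  # wall within 1 unit, so the agent steps back
```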
you aren't solving anything. You aren't designing anything
I disagree. We're discussing general design principles of agents, from which their applicability follows. Sure, we're not having a discussion about "how do I get a neural network to write my homework" or "what's a finite state machine". We are, though, discussing important and fundamental issues in modelling environments that have agents embedded in them (in Alex's terminology, Embodied Agents).
Both theoretical and practical discussions have value. I completely disagree that the only way to tackle AI, particularly Game-AI, is to think only in terms of the practical problem in front of you (allowing for overlap between problems ONLY in so far as the tools used to solve them are the same or similar). Such thinking would almost never lead to new generalised architectures or methods: only new applications of old techniques, or the ad hoc, one-off solutions that are all too common in both AI and Game-AI. Generalised architectures and design principles lead to quality standards that are adopted by many developers. The architectures are tested and refined on many problem instances, which leads to better designs. This is how we will one day achieve software agents with believable human behaviours. The thing to remember, Dave, is that not all of us are here because we're writing a game (or dreaming of it)... some of us are here for the computer science, and to see the field of Game-AI develop and improve itself so that one day it lives up to the expectations of the players.
Timkin [edited by - Timkin on March 18, 2004 6:32:39 AM]