Original post by alexjc
However, I'm not sure it's possible to disregard the implementation that easily, as it has fundamental ramifications on the design of the agent itself. The agent's implementation MUST either poll or handle events, so your scheme needs to fit into that somehow.
Ah... now I see the way through to explaining this: the sensor itself can either poll or accept messages from the environment, whereas the agent is better off just handling messages from the sensor.
Why is this any different from an agent simply accepting messages from the environment? Predominantly because the sensor can also filter information, either by passing on only information that fits a certain schema (pass 'enemy spotted', but ignore 'friendly spotted'), or by focusing information depending on the agent's state and/or focus. We might expect a badly wounded agent not to notice an enemy while it is rushing toward a health pack; however, it might be nigh impossible to ignore the ambush waiting around that health pack. This filtering can be performed as a dependent function of the agent's state, as in the sketch below.
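To make that concrete, here's a minimal C++ sketch of a state-dependent filter sensor. None of this is from an actual engine; the Percept and AgentState types, the field names, and the severity threshold are all hypothetical, purely for illustration:

```cpp
#include <functional>
#include <string>
#include <utility>

// Hypothetical percept and agent state, invented for this example.
struct Percept {
    std::string type;   // e.g. "enemy_spotted", "friendly_spotted"
    float severity;     // how threatening the stimulus is, 0..1
};

struct AgentState {
    float health = 1.0f;   // 0..1
    std::string focus;     // e.g. "find_health"
};

// A filtering sensor: it forwards a percept to the agent's handler only if
// the filter (a dependent function of the agent's state) accepts it.
class FilterSensor {
public:
    using Filter  = std::function<bool(const Percept&, const AgentState&)>;
    using Handler = std::function<void(const Percept&)>;

    FilterSensor(Filter f, Handler h)
        : filter_(std::move(f)), handler_(std::move(h)) {}

    // Messages from the environment arrive here, not at the agent directly.
    void onMessage(const Percept& p, const AgentState& s) {
        if (filter_(p, s)) handler_(p);   // only accepted percepts reach the agent
    }

private:
    Filter  filter_;
    Handler handler_;
};

// Example filter: friendlies are ignored outright (schema filtering); a badly
// wounded agent rushing for health ignores ordinary enemy sightings, but a
// severe enough threat (the ambush) still gets through (state-dependent focus).
bool woundedFilter(const Percept& p, const AgentState& s) {
    if (p.type != "enemy_spotted") return false;
    if (s.health < 0.25f && s.focus == "find_health")
        return p.severity > 0.8f;
    return true;
}
```

The agent itself never sees the raw environment traffic; it only ever handles what its sensors let through.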
Such sensors would also permit reactive agents built solely on messaging systems to implement active information gathering via polling sensors.
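One way that might work (again just a sketch; the Environment query and the tick-driven update() are assumptions on my part) is a polling sensor that actively queries the world each tick and converts whatever it finds into messages, so the agent stays purely reactive:

```cpp
#include <functional>
#include <utility>
#include <vector>

struct Stimulus { int objectId; float distance; };   // hypothetical

// Stand-in environment; a real engine would do visibility tests here.
struct Environment {
    std::vector<Stimulus> queryVisible(int /*agentId*/) const { return {}; }
};

// A polling sensor adapter: it does the active information gathering, then
// delivers the results as messages to a message-only, reactive agent.
class PollingSensor {
public:
    using Handler = std::function<void(const Stimulus&)>;

    PollingSensor(const Environment& env, int agentId, Handler h)
        : env_(env), agentId_(agentId), handler_(std::move(h)) {}

    // Called from the game loop each tick; the agent never polls anything.
    void update() {
        for (const Stimulus& s : env_.queryVisible(agentId_))
            handler_(s);   // polled data arrives at the agent as a message
    }

private:
    const Environment& env_;
    int agentId_;
    Handler handler_;
};
```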
One could also envisage hierarchically layered sensors, designed to extract and filter information from the outward layers depending on the state of the agent, and perhaps even the state of other sensors. So the 'enemy spotted' scenario might be accomplished by two layered sensors: an outward visual sensor that identifies all objects of relevance in the game, and an inward filter sensor that isolates and identifies only enemy agents.
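A sketch of that two-layer arrangement (the chaining scheme is my own guess at the wiring; class names are invented): the outward visual sensor forwards every relevant object inward, and the inner layer passes on only enemies.

```cpp
#include <functional>
#include <string>
#include <utility>

struct SensedObject { int id; std::string kind; };   // hypothetical

// A sensor layer forwards whatever it accepts to the next layer inward
// (or, at the innermost layer, to the agent's message handler).
class SensorLayer {
public:
    using Sink = std::function<void(const SensedObject&)>;
    explicit SensorLayer(Sink next) : next_(std::move(next)) {}
    virtual ~SensorLayer() = default;
    virtual void receive(const SensedObject& obj) = 0;
protected:
    void forward(const SensedObject& obj) { next_(obj); }
private:
    Sink next_;
};

// Outward layer: identifies all objects of relevance and forwards them inward.
class VisualSensor : public SensorLayer {
public:
    using SensorLayer::SensorLayer;
    void receive(const SensedObject& obj) override { forward(obj); }
};

// Inward layer: isolates and identifies only enemy agents.
class EnemyFilterSensor : public SensorLayer {
public:
    using SensorLayer::SensorLayer;
    void receive(const SensedObject& obj) override {
        if (obj.kind == "enemy") forward(obj);   // 'enemy spotted' reaches the agent
    }
};
```

Wiring is just composition: the inner sensor's sink is the agent's handler, and the outer sensor's sink is the inner sensor's receive(). Making the inner filter consult the agent's state (as in the earlier sketch) gives the state- and sensor-dependent layering described above.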
Of course, adding extra layers usually adds complexity to a system. However, such sensors would provide a clear mechanism for embodying the 'Oracle' mentioned earlier, and a centralised manner for handling information transfer to the agent. Writers of physics engines need only specify how information is emitted from an event. How that information is perceived by agents falls to the designer of an agent's sensor set. How the information is then utilised by the agent finally falls to the designer of the decision modules. Separating the three seems, to me at least, to provide some clear design and programming task boundaries, which generally makes software engineering easier.
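To show what I mean by those boundaries, here is the roughest possible sketch (all names invented): each party only has to implement its own side of a narrow interface.

```cpp
struct Event   { /* raw data the physics engine emits from an event */ };
struct Percept { /* what the sensor set decides the agent perceives */ };

// Boundary 1->2: the engine writer only emits Events; sensors consume them.
class Sensor {
public:
    virtual ~Sensor() = default;
    virtual void onEvent(const Event& e) = 0;
};

// Boundary 2->3: the sensor-set designer produces Percepts; decision
// modules consume them without knowing how they were gathered.
class DecisionModule {
public:
    virtual ~DecisionModule() = default;
    virtual void onPercept(const Percept& p) = 0;
};
```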
Of course, I've not implemented this idea, so I could be wrong!
Timkin