

IADaveMark

GDC = AI_Fest; while(GDC == AI_Fest) {CheerWildly();}

71 posts in this topic

quote:
Original post by Timkin
This is the one problem I have with messaging systems... you STILL need an oracle watching over the environment and deciding when events occur.



Welcome to computer games! In games, you already have something doing that: both the physics engine and the game logic (and they're usually tightly integrated).


quote:

This oracle is going to need to run at a fine resolution
...
instead of having agents poll their local environment, we have the environment essentially polling agent states and checking them against environmental states.



You need your oracle/game logic to run at a fine resolution anyway; your gameplay relies on it. The difference with events is that things happen lazily: you only watch the area around the player rather than getting every agent in the world to poll. It's efficient because it's done lazily, and you can use the physics engine's optimizations (like spatial partitions) to find the set of applicable agents.
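
To make that concrete, here's a rough C++ sketch (the SpatialPartition and Agent names are made up, not from any particular engine) of raising an event lazily and delivering it only to the agents the partition says are nearby:

    // Rough sketch only: the game logic raises an event and uses the
    // physics broad phase (here a naive stand-in) to deliver it just to
    // the agents near the source, instead of having every agent poll.
    #include <vector>

    struct Vector3 { float x, y, z; };
    struct Event   { int type; Vector3 pos; float radius; };

    struct Agent {
        Vector3 position{};
        void OnEvent(const Event& e) { /* react here */ }
    };

    // Stand-in for the engine's spatial partition (grid, octree, ...).
    struct SpatialPartition {
        std::vector<Agent*> agents;
        void QuerySphere(const Vector3& c, float r, std::vector<Agent*>& out) const {
            for (Agent* a : agents) {
                const float dx = a->position.x - c.x;
                const float dy = a->position.y - c.y;
                const float dz = a->position.z - c.z;
                if (dx*dx + dy*dy + dz*dz <= r*r) out.push_back(a);
            }
        }
    };

    void RaiseEvent(const SpatialPartition& partition, const Event& e) {
        std::vector<Agent*> nearby;
        partition.QuerySphere(e.pos, e.radius, nearby);
        for (Agent* a : nearby)
            a->OnEvent(e);   // push to the relevant few, never poll everyone
    }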


In my project, there's one kind of sensing that's done by polling: line traces. These need to be computed on a custom basis, as requested by the agents. I'm trying my best to batch them up and return the results as messages, by sorting line traces according to the world representation and processing them in a sensible order. Computation time drops to a fraction of what it was because you get far fewer cache misses: everything is still in memory while you compute the batch results.
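
A rough sketch of what I mean by batching (made-up TraceRequest names and a placeholder CastRay; my real code differs): requests pile up over the frame, get sorted by the region of the world representation they touch, and the results go back as messages.

    // Rough sketch: collect trace requests, sort them so traces that
    // touch the same region of the world representation run back to
    // back (fewer cache misses), then hand each result back as a message.
    #include <algorithm>
    #include <vector>

    struct TraceRequest {
        int      agentId;
        float    start[3], end[3];
        unsigned regionKey;   // e.g. index of the spatial cell the trace starts in
    };
    struct TraceResult { int agentId; bool hit; };

    bool CastRay(const float* start, const float* end);   // the engine's actual ray test

    std::vector<TraceResult> RunTraceBatch(std::vector<TraceRequest>& batch) {
        std::sort(batch.begin(), batch.end(),
                  [](const TraceRequest& a, const TraceRequest& b) {
                      return a.regionKey < b.regionKey;   // cache-coherent order
                  });
        std::vector<TraceResult> results;
        results.reserve(batch.size());
        for (const TraceRequest& r : batch)
            results.push_back({ r.agentId, CastRay(r.start, r.end) });
        return results;   // post each entry back to its agent as a message
    }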

Now I'd be very interested if you can name one single other thing in games that isn't suited to batch processing...


quote:

Personally, I think we need a whole new paradigm... something along the lines of an agent-centered message system, placing the onus on agents to notice events in the environment, rather than have the environment tell them that they notice them.



What for?


InnocuousFox:

There are many things to take from this beyond definitions:

  • Use reactive behaviors if you can get away with it

  • Use reactive techniques as much as memory allows

  • Try to design your AI as an event-driven reactive architecture


Now maybe Timkin and the "planning community" would get upset about this (hehe), but this is very good advice for practical game developers.

Alex

AiGameDev.com: Artificial Intelligence from Theory to Fun!

[edited by - alexjc on March 18, 2004 5:35:34 AM]
Timkin: Wouldn't your latest idea also be known as a blackboard architecture? The senses post the events to the blackboard, and the brain gets to "perceive" them selectively...

Alex
quote:
Original post by alexjc
Timkin: Wouldn't your latest idea also be known as a blackboard architecture?

No, that's not what I had in mind. Think of the sensor as a filter and translator rolled into one. It generates internal messages akin to the type you describe (which I call external messages), but it does so based on filtering incoming information. Presumably that information is available whenever the agent updates in the game loop, or at some other frequency, like when it actually looks at things, or stops and listens.
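
Something along these lines, perhaps (just a sketch with invented names; I haven't implemented it): the sensor drops incoming information the agent can't currently notice and translates the rest into internal messages for the agent's next update.

    // Sketch only: a sensor that filters external events against the
    // agent's capabilities and translates survivors into internal
    // messages ("percepts") queued for the agent's next update.
    #include <queue>

    struct RawEvent { int type; float distance; };    // as emitted by the engine
    struct Percept  { int meaning; };                  // the agent's internal message

    class Sensor {
    public:
        explicit Sensor(float range) : m_range(range) {}

        void OnExternalEvent(const RawEvent& e) {
            if (e.distance > m_range) return;   // filter: too far away to notice
            m_outbox.push(Percept{ e.type });   // translate: external -> internal
        }
        bool NextPercept(Percept& out) {        // drained by the agent when it updates
            if (m_outbox.empty()) return false;
            out = m_outbox.front();
            m_outbox.pop();
            return true;
        }
    private:
        float m_range;
        std::queue<Percept> m_outbox;
    };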

As to Dave's comments...

quote:

A bullet hitting a player is an "event". It is the function of the bullet and should notify the player.



One could argue that the player was in the way of the bullet... but your point is valid in this example. However, there are two ways to classify events with respect to agents: 1) those caused by the agent; and 2) those caused by the environment (anything external to the agent). Certainly, for the latter class of events, one might want to have the environment tell the agent about that event. However, for the former, one would probably want the agent to notice the effect it has on its environment (which is particularly important in learning algorithms).

quote:

For that matter, "agent" and "object" are being loosely used here.



I disagree. I know exactly what I am referring to when I say agent... from the AI perspective, an agent is an entity that perceives its environment and acts in response to its perceptions. An object on the other hand is just a thing. Now, if you want to talk data structures and internal representations, sure, agents and walls are both 'objects'... but we're not talking about them in that way... at least, I know I wasn't!

quote:

you aren't solving anything. You aren't designing anything



I disagree. We're discussing general design principles of agents, from which their applicability follows. Sure, we're not having a discussion about "how do I get a neural network to write my homework" or "what's a finite state machine". We are, though, discussing important and fundamental issues in modelling environments that have agents embedded in them (in Alex's terminology, Embodied Agents).

Both theoretical and practical discussions have value. I completely disagree that the only way to tackle AI, particularly Game-AI, is to think only in terms of the practical problem in front of you (and to allow for overlap of problems ONLY in so far as the tools used to solve those problems are the same or similar). Such thinking would almost never lead to new generalised architectures or methods, only new applications of old techniques, or ad hoc, one-off solutions that are all too common in both AI and Game-AI. Having generalised architectures and design principles leads to quality standards that are taken up by many developers. The architectures are tested and refined on many problem instances and lead to better designs. This is how we will one day achieve software agents with believable human behaviours.

The thing to remember, Dave, is that not all of us are here because we're writing a game (or dreaming of it)... some of us are here for the computer science, and to see the field of Game-AI develop and improve itself so that one day it lives up to the expectations of the players.

Timkin

[edited by - Timkin on March 18, 2004 6:32:39 AM]
quote:
Original post by Timkin
Think of the sensor as a filter and translator rolled into one.



So the sensor is a closure (a function with state) that is linked to the events generated, caches the data internally, signals the agent with a simple boolean event, but only provides the full data when the agent requests it?
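
i.e. something roughly like this (a sketch, names invented):

    // Sketch of the "closure" reading: the sensor hooks the engine's
    // events, caches the payload, and only raises a cheap boolean flag;
    // the agent pulls the full data if and when it wants it.
    #include <optional>

    struct SightEvent { int targetId; float x, y, z; };

    class CachedSensor {
    public:
        void OnEngineEvent(const SightEvent& e) {   // linked to generated events
            m_cache = e;                            // cache the data internally
            m_dirty = true;                         // boolean indication only
        }
        bool HasNewData() const { return m_dirty; }
        std::optional<SightEvent> Read() {          // data provided on request
            m_dirty = false;
            return m_cache;
        }
    private:
        bool m_dirty = false;
        std::optional<SightEvent> m_cache;
    };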


quote:

I completely disagree that the only way to tackle AI, particularly Game-AI, is to think only in terms of the practical problem in front of you



Indeed. The hacker culture is great, but it's a very narrow-minded attitude...

Alex


AiGameDev.com: Artificial Intelligence from Theory to Fun!
I may be off here, but I think you are trying to make the definition of a sensor a bit too tight, Alex. That would be one way to implement a sensor.

We could just as easily have a 'heartbeat sensor', which every 10 seconds sends an event to the AI. This sensor does not even look outside the AI for things.

A sensor could also be the recipient of an external event from the engine ('you bumped into a wall') which it translates and passes on to the AI.

A sensor could proactively scan the environment for hostiles, maintaining a list of the n most threatening enemies.

Now, the question of how the sensor communicates with the Agent may vary. A sensor may post information on a blackboard or another shared location (and may set a flag/send an event that this info changed). It could post events to the Agent with processed information in a digestible form, similar to your event system ideas. It could cache it and wait to be read.

Since any of these are valid, it seems like the definition of a sensor is separate from the information-flow system that handles communication between these modules. I am not saying that we shouldn't discuss this component; only that it is outside of the sensor itself.
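
To illustrate the separation I mean (a quick sketch, not production code): the sensor only produces its output, and the channel it writes to (blackboard, event queue, cached value) is a separate piece.

    // Quick sketch: the Sensor interface knows nothing about how its
    // output travels; a Channel (blackboard, event queue, cache...) is
    // plugged in separately, so heartbeat/event/scanning sensors all
    // share whatever delivery mechanism the game actually uses.
    class Channel {
    public:
        virtual ~Channel() = default;
        virtual void Deliver(int perceptType) = 0;
    };

    class Sensor {
    public:
        virtual ~Sensor() = default;
        virtual void Update(float dt, Channel& out) = 0;
    };

    class HeartbeatSensor : public Sensor {   // looks at nothing outside the AI
    public:
        void Update(float dt, Channel& out) override {
            m_elapsed += dt;
            if (m_elapsed >= 10.0f) {          // fire every 10 seconds
                m_elapsed = 0.0f;
                out.Deliver(/*perceptType=*/1);
            }
        }
    private:
        float m_elapsed = 0.0f;
    };
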
quote:
Original post by BrianL
I may be off here, but I think you are trying to make the definition of sensor a bit too tight Alex.



Oh, I wasn't defining sensors. I was trying to understand Timkin's approach


I agree mostly with what you said. I see a sensor as a generic concept, which can translate into poll-based queries or event-driven messages...
On the point of batching trace lines: this only helps if cache coherency is maintained through the search. If your world representation is multiple megabytes, then it is very easy to lose cache coherency completely, as doing any trace line will result in a sequence of misses.

I am not anti-batching, but this level of optimization may not be commonly helpful, particularly due to the complexity of a decent scheduler.

I think it may be easier at times to optimize the number of trace lines performed when the Agent can simply say 'that first trace line was a success, I don't need to do any more'. That is easier to do when the agent is controlling the order itself instead of relying on a batch system.

Again, I am not trying to shoot down your ideas at all, just providing a view based on my experiences. I am very open to other views on this, as I have not worked with a substantial system like the one you describe.
quote:
Both theoretical and practical discussions have value. I completely disagree that the only way to tackle AI, particularly Game-AI, is to think only in terms of the practical problem in front of you


I didn't dispute either of these. I never claimed that it was the "only" way or that you should think "only" in those terms. However, there are times when this discussion has tended towards solving "all" problems with all-inclusive theories. That was my only contention.

Dave Mark - President and Lead Designer
Intrinsic Algorithm - "Reducing the world to mathematical equations!"
Ack... one-hour talks in a dark room about AI just suck.
Granted, BrianL. Line traces are a challenge to handle elegantly, which is probably why I still use polling queries for some of them... However, while your agent may need custom control over some line traces, there's a lot of stuff that's quite common (e.g. checking for visibility of a bounding box, which optionally requires multiple traces). That kind of thing is ideally suited to batching, and can be done lazily... which leaves only a few exceptions.

Counter-examples are very welcome, by the way; I'm curious what kinds of things are difficult to code with messages (but are worth it), and what things are just not suited to messages.

I've not worked with a pure event-driven approach either; there are some little details that just aren't worth it. But right now I am putting in the effort to make sure my project handles the great majority of the sensing via messages. It's proving to be worthwhile so far.

Alex

AiGameDev.com: Artificial Intelligence from Theory to Fun!
I have something running which is similar to what alexjc describes: a pure event-driven reactive agent design (well, almost; there's still some test code in there which polls). Some observations: you don't necessarily need to use an oracle to capture events which are pertinent to the AI. Events are generated within the subsystems already, and most of the time they serve multiple purposes. An event indicating collision with another object is used by the collision routines; however, by having the AI hook into those events, it can now be collision-aware. It will have to filter such events so it won't be inundated. Within a purely event-driven architecture, there will be a profusion of events, which can be hooked and rerouted to the AI.

That's not to say an occasional oracle here or there is bad. I do have a world handler object which manages all AI sensory information and AI combat results. Much more efficient than polling for large numbers of agents (100+).

Though I'm finding that purely reactive agents, driven only by immediate events, don't seem very smart. Even though they respond to threats and show some impetus toward events, their behaviors overall are disconnected and stateless. To complete the sense of them being aware and intelligent, there needs to be an additional layer. Perhaps a super-brain which forms overarching plans and retains memories. Though it wouldn't be on a per-agent basis; rather, it would be a group controller. Perhaps that is where Timkin's AI comes into play. Using an SPA (sense-plan-act) architecture would be appropriate here, but not at the individual agent level. If individual agents did use SPA, coordinating them would require inter-agent communication and a host of other techniques (agent reflection, meta agent-modeling, etc.).

Good Luck!

-ddn
quote:
Original post by alexjc
Oh, I wasn't defining sensors. I was trying to understand Timkin's approach

I see a sensor as a generic concept, which can translate into poll-based queries or event-driven messages...


I think you're reading too much into what I wrote, Alex. I'm not trying to tie down the exact implementation of the sensor or its exact architecture, merely its functional relationship between the environment and the agent.

Think of the sensor as an interface between the agent and the environment. Its role is to translate environmental events into messages appropriate to the agent... but further to this, it should also filter events/messages based on the agent's state, intentions and actions. This could be a two-stage process, if one chose to implement it that way.

I see the issue of whether the sensor is a polling interface or a message-passing interface as less relevant to the functionality of the sensor. I could certainly envisage an agent that utilised both polling and messaging sensors.

Does this make it more clear?

Timkin
quote:
Original post by Timkin
I see the issue of whether the sensor is a polling interface or a message-passing interface as less relevant to the functionality of the sensor. I could certainly envisage an agent that utilised both polling and messaging sensors.

Does this make it more clear?



To some extent. I understand that in theory you just want to model data flow, whether it's being pushed by the engine or pulled by the agent. However, I'm not sure it's possible to disregard the implementation that easily, as it has fundamental ramifications on the design of the agent itself. The agent's implementation MUST either poll or handle events, so your scheme needs to fit into that somehow.

I was trying to get the implementation clear because of these limitations. I think you're on to something, but the devil is in the details!

Alex

quote:
Original post by alexjc
However, I'm not sure it's possible to disregard the implementation that easily, as it has fundamental ramifications on the design of the agent itself. The agent's implementation MUST either poll or handle events, so your scheme needs to fit into that somehow.



Ah... now I see the way through to explaining this... the sensor itself can either poll or accept messages from the environment... whereas the agent would be better to just handle messages from the sensor.

Why is this any different from just an agent accepting messages from the environment? Predominantly because the sensor can also filter information, either by passing only information that fits a certain schema (like "enemy spotted", while ignoring "friendly spotted"), or by focusing information depending on the agent's state and/or focus. We might expect a badly wounded agent not to see an enemy while it is rushing toward a health pack; however, it might be nigh impossible to ignore the ambush waiting around that health pack. This filtering can be performed as a function of the agent's state.
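
As a toy example (invented state flags, not a real implementation), the filter might look something like:

    // Toy sketch: the same perception is passed or dropped depending on
    // the agent's state. A badly wounded agent sprinting for a health
    // pack ignores a single enemy, but a waiting ambush still registers.
    struct AgentState { bool badlyWounded; bool seekingHealth; };
    struct Perception { bool isEnemy; int groupSize; };

    bool SensorPasses(const AgentState& s, const Perception& p) {
        if (!p.isEnemy)
            return false;                        // schema filter: enemies only
        if (s.badlyWounded && s.seekingHealth)
            return p.groupSize >= 3;             // only the ambush gets through
        return true;
    }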

Such sensors would also permit reactive agents built solely on messaging systems to be able to implement active information gathering via polling sensors.

One could also envisage hierarchically layered sensors, designed to extract and filter information from outward layers, depending on the state of the agent and perhaps even the state of other sensors. So the 'enemy spotted' scenario might be accomplished by two layered sensors... an outward visual sensor that identifies all objects of relevance in the game and an inward filter sensor that isolates and identifies only enemy agents.

Of course, adding extra layers usually adds complexity to a system. However, such sensors would provide a clear mechanism for embodying the 'Oracle' mentioned earlier and provide a centralised manner for handling information transfer to the agent. Writers of physics engines need only specify how information is emitted from an event. The task of how this information is perceived by agents now falls to the designer of the sensor set for a given agent. How that information is then utilised by the agent finally falls to the designer of the decision modules. Separating the three seems, to me at least, to provide some clear design and programming task boundaries, which generally makes software engineering easier.

Of course, I've not implemented this idea, so I could be wrong!

Cheers,

Timkin

[edited by - Timkin on March 20, 2004 10:34:17 PM]
Timkin, the concept of filtering seems like an orthogonal issue. You can filter information just as easily with messages or queries. For the polling approach, you just have a small 'if' section that checks for certain data fields. With messages, you'd just put that code inside a proxy that serves as an intermediate layer between the source and destination of the message. The idea of hierarchical sensors can also be applied to both paradigms -- just like C4.
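
To illustrate (a sketch with made-up types): the same predicate can sit behind a polled query or inside a proxy on the message path.

    // Sketch: one filter predicate, usable in either paradigm.
    #include <functional>

    struct Msg { bool isEnemy; };
    bool Relevant(const Msg& m) { return m.isEnemy; }      // the filter itself

    // (a) Polling: the agent queries and applies the 'if' locally.
    bool PollAndCheck(const Msg& latest) { return Relevant(latest); }

    // (b) Messaging: a proxy between source and destination applies the
    //     same predicate before forwarding.
    class FilterProxy {
    public:
        explicit FilterProxy(std::function<void(const Msg&)> sink)
            : m_sink(std::move(sink)) {}
        void Send(const Msg& m) { if (Relevant(m)) m_sink(m); }
    private:
        std::function<void(const Msg&)> m_sink;
    };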


The essence of the idea you describe is in the client/server design. The AI (client) can upload a small part of its code into the game engine (server) to send back information as necessary. I'm pretty sure there are many ways of doing this, but I've tried two so far -- with mixed results.


  1. Have a small sensor script designed with the AI that is run on the server at regular intervals, and passes messages back.

  2. During initialization, have the AI perform polled queries and get the server to remember the parameters instead of returning a result (this is a special setup phase). The server then knows that the AI wants that information on a more regular basis, and can send it along in messages (a rough sketch of this registration idea follows below).
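
Roughly, the second approach looks like this (a sketch with invented names; the real interface is messier):

    // Sketch of approach 2: during setup the AI's polled query is
    // captured as a subscription instead of being answered; afterwards
    // the server evaluates it each tick and pushes the result back.
    #include <functional>
    #include <vector>

    struct SensorQuery { int subscriberId; int queryType; float radius; };

    class Server {
    public:
        // Setup phase: remember the parameters rather than returning a result.
        void RegisterQuery(const SensorQuery& q) { m_subscriptions.push_back(q); }

        // Normal play: evaluate every registered query and post a message.
        void Tick(const std::function<void(int subscriberId, int result)>& post) {
            for (const SensorQuery& q : m_subscriptions)
                post(q.subscriberId, Evaluate(q));
        }
    private:
        int Evaluate(const SensorQuery& q) const { return 0; /* engine-side, optimised */ }
        std::vector<SensorQuery> m_subscriptions;
    };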



The two approaches sit at opposite ends of the same scale: at one end, you have full sensory functions implemented with a powerful scripting language, and at the other end you use simple data sent as parameters understood by the engine.


The problem with the scripts is that the engine cannot really "understand" what's in the sensory function. The best strategy for the server is to just execute the scripts regularly and hope for the best. So in theory, this is no more efficient than a polling-based sensor system.

On the other hand, when the server receives parameters for a sensory query, it knows exactly what they mean, since its implementation is designed around those queries. This allows the engine to "understand" the parameters, and use any form of optimisation implemented by the coder.

So by choosing a format for expressing your sensor functions, you're trading off efficiency for flexibility. I wonder what kind of compromise is the most useful...

Alex


AiGameDev.com: Artificial Intelligence from Theory to Fun!

[edited by - alexjc on March 23, 2004 11:33:53 AM]