Learning and remembering an environment
Depending on how many agents need to remember a location, it might be quicker to store the agents' info in the location rather than on the agent for searches. For example, Quake 3 only has a maximum of about 16 entities running around for the most part. It might be easier to store an entity's knowledge of a location on a BSP leaf than on the entity itself for lookup. If the entity has no info recorded in a location, it has never been there.
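A minimal sketch of that idea (the class and function names are hypothetical, not from Quake 3's actual BSP code): each location keeps a small set of the agents that have observed it, so the lookup lives on the location rather than the agent.

```python
# Hypothetical sketch: store "who has been here" on the location rather than
# on the agent. With a handful of agents (as in Quake 3), a small set per
# BSP leaf is cheap, and "never visited" is simply an empty set.
class Leaf:
    def __init__(self):
        self.visitors = set()  # ids of agents that have observed this leaf

def mark_visited(leaf, agent_id):
    leaf.visitors.add(agent_id)

def has_visited(leaf, agent_id):
    return agent_id in leaf.visitors
```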
If you use grid cells instead of circles, and quantize agent positions to cell coordinates, then you can store these in a hash table. When you want to know whether the agent has been in cell X, just look it up.
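A minimal version of that grid-cell scheme (the cell size is an assumed tuning parameter):

```python
CELL_SIZE = 2.0  # world units per cell; an assumed tuning parameter

def to_cell(x, y):
    # Quantise a world position to integer cell coordinates.
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

visited = set()  # hash table of cells the agent has been in

def remember(x, y):
    visited.add(to_cell(x, y))

def been_near(x, y):
    # Has the agent ever been in the cell containing (x, y)?
    return to_cell(x, y) in visited
```

Membership is then an O(1) hash lookup, regardless of how long the agent has been wandering.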
An alternative is to use a spatial hierarchy, such as a quadtree, to insert the various points/spheres; this will let you query in O(log n). If you keep a link to the currently active cell/node/point, you can probably cut out most of the queries entirely with an early out check.
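The early-out idea can be sketched independently of the hierarchy underneath (a quadtree could replace the flat set below when range queries are needed); the point is just to keep a link to the currently active cell:

```python
class VisitTracker:
    """Tracks visited cells; caches the active cell to skip most queries."""

    def __init__(self, cell_size=2.0):  # cell_size is an assumed parameter
        self.cell_size = cell_size
        self.visited = set()      # a quadtree could replace this flat set
        self._last_cell = None    # link to the currently active cell

    def update(self, x, y):
        cell = (int(x // self.cell_size), int(y // self.cell_size))
        if cell == self._last_cell:
            return False  # early out: still in the same cell, no query needed
        self._last_cell = cell
        self.visited.add(cell)
        return True  # cell boundary crossed; the structure was queried
```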
From the OPs description, it sounds like the requirement is to build a unique labelling of the environment based only on movement and observation. There is no mention of uncertainty in either action outcomes or observations. Therefore there exists a one-to-one mapping between actions and state pairs. We might further assume that this mapping is invertible (so the agent can backtrack). Thus, given a label for an initial position and a sequence of actions sufficient to traverse all states (with no constraints on how many times a state may be visited), it is a trivial exercise to uniquely label all states.
The only real difficulty arises when considering a continuous state space, in which case it is not feasible to write down an infinitely long list of states and labels. Instead, one quantises actions so as to define regions of the state space (generating a discretisation of states based on quantised actions) and labels these regions accordingly.
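A toy version of that labelling under the stated assumptions (deterministic, invertible actions; the four unit-step actions here are illustrative, not from the original post):

```python
# Quantised actions induce a discrete labelling of an otherwise continuous
# plane: every state reached from the start gets a unique region label.
ACTIONS = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}

def label_states(action_seq, start=(0, 0)):
    pos = start
    labels = {pos: 0}  # the initial position gets label 0
    for a in action_seq:
        dx, dy = ACTIONS[a]
        pos = (pos[0] + dx, pos[1] + dy)
        if pos not in labels:          # revisited states keep their old label
            labels[pos] = len(labels)
    return labels
```

Because each action is invertible (N undoes S, E undoes W), the agent can backtrack, and the mapping between action sequences and state pairs stays one-to-one.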
This problem only gets interesting when one considers noisy observations and/or uncertain action outcomes. It is these conditions that make SLAM non-trivial. In the aforementioned scenario, SLAM is trivial (and probably overkill).
Timkin
@ Timkin:
You are right. Under these assumptions SLAM is feasible... but at the same time it's no longer really SLAM at all... I think it would be overkill too... though maybe not too much if the features to be observed are few... Anyway, there is always the problem of the state space: even with perfect observations (and odometry), as the features are observed, one has the problem of tracking all of them... I saw some work connecting grid maps with SLAM techniques; maybe this is the right direction.
Quote:Original post by UnsilentStorm
as the features are observed, one has the problem of tracking all of them...
To me, that's just book-keeping! ;)
Quote:
I saw some work connecting grid-maps with SLAM techniques, maybe this is the right direction.
There was an interesting presentation at the IAV conference last year on combining SLAM with tessellation methods such as quadtrees. I've got a preprints CD around here somewhere... I'll see if I can find the paper and see if it had anything interesting to say on the problem.
Quote:Original post by Timkin
There was an interesting presentation at the IAV conference last year on combining SLAM with tessellation methods such as quadtrees. I've got a preprints CD around here somewhere... I'll see if I can find the paper and see if it had anything interesting to say on the problem.
I saw the presentation of Tardos in Portugal (I think this is the paper you are speaking about)... I was just thinking about that! :-)
Forgive me if I'm wrong about this, but isn't SLAM more of use in robotics applications where the map of the environment is unknown? Most of the SLAM techniques that I have read about revolve around clever ways of integrating the errors and lack of information that we can get from robot sensors into building and using maps of their environments. I'm thinking this would be overkill in a game engine where every A.I. agent has access to a 100% accurate model of the world already.
In other words, shouldn't it be easier to 'fake' the map building that the agents do rather than do full blown SLAM techniques for every one of your A.I. agents (I'm assuming you want more than one). Also, even with today's SLAM techniques, robots aren't nearly up to the level of a human for internal map building.
But if you're interested in SLAM techniques, I've got a stack of papers on my desk at the moment that I've been meaning to properly learn (it's where my Ph.D. thesis is very gradually leading; I'll be able to answer your queries a lot more thoroughly in a year or so though!)
Quote:Original post by Trapper Zoid
Forgive me if I'm wrong about this, but isn't SLAM more of use in robotics applications where the map of the environment is unknown? Most of the SLAM techniques that I have read about revolve around clever ways of integrating the errors and lack of information that we can get from robot sensors into building and using maps of their environments. I'm thinking this would be overkill in a game engine where every A.I. agent has access to a 100% accurate model of the world already.
You are absolutely right when you say that SLAM is used in robotics and so on. But notice that the original post asked for techniques where: 1) an agent finds itself in an environment (as if kidnapped); 2) it has no experience of this environment (i.e. no map); 3) it wants to sense features and recognise them in the future.
Quote:
In other words, shouldn't it be easier to 'fake' the map building that the agents do rather than do full blown SLAM techniques for every one of your A.I. agents (I'm assuming you want more than one). Also, even with today's SLAM techniques, robots aren't nearly up to the level of a human for internal map building.
In the game case, it is like doing SLAM with perfect odometry, which is feasible under certain assumptions.
Quote:
But if you're interested in SLAM techniques, I've got a stack of papers on my desk at the moment that I've been meaning to properly learn (it's where my Ph.D. thesis is very gradually leading; I'll be able to answer your queries a lot more thoroughly in a year or so though!)
Me too :-) Good luck in reading the papers from Thrun &C.! :-)
Quote:Original post by UnsilentStorm
In the game case, it is like doing SLAM with perfect odometry, which is feasible under certain assumptions.
There's one big difference, as far as I can see; in a game your agent has a direct link to a perfect representation of the world. It's not just the perfect odometry, it's not even just having perfect rangefinder and camera data, it's more being able to get the Creator to purposefully design the universe so your agent can run perfectly.
That's what I mean by "faking" it; the agent already has a perfect map and perfect localisation, which is what SLAM is meant to achieve. What needs to be done is methods for "fuzzifying" the agent's behaviour so it acts like it was building the map from scratch. I guess from a pure A.I. perspective this could be called "cheating", but I don't think full SLAM will get the effect needed for the agent in a game, unless said agent was a forgetful drunk [grin].
For the original question (for noaktree): you could set the world up so there's a series of defined landmarks at key points in the map (they don't actually have to be anything physical in your gameworld, just predefined points or junctions). Then your agents could flag which landmarks they'd seen and use that as the basis of what they know about the world.
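A sketch of that landmark scheme (the landmark names, positions, and sight radius below are made up for illustration; a real game would use a proper visibility test rather than a radius):

```python
# Predefined landmarks at key points; agents flag the ones they have seen.
LANDMARKS = {'bridge': (10.0, 4.0), 'armory': (-3.0, 7.0)}
SIGHT_RADIUS = 5.0  # assumed stand-in for a real line-of-sight check

def observe(known, agent_pos):
    """Add every landmark within sight of agent_pos to the agent's known set."""
    ax, ay = agent_pos
    for name, (lx, ly) in LANDMARKS.items():
        if (lx - ax) ** 2 + (ly - ay) ** 2 <= SIGHT_RADIUS ** 2:
            known.add(name)
```

Each agent then carries only a small set of flags, which is cheap even with many agents.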
Quote:
Me too :-) Good luck in reading the papers from Thrun &C.! :-)
Heh, reading them is easy. It's understanding them that's difficult! [smile]
Quote:Original post by Trapper Zoid
There's one big difference, as far as I can see; in a game your agent has a direct link to a perfect representation of the world. It's not just the perfect odometry, it's not even just having perfect rangefinder and camera data, it's more being able to get the Creator to purposefully design the universe so your agent can run perfectly.
You've missed the point, I think, Trapper. Yes, one could just give the game agent access to the global map representation. However, we may not want to do this. We might want to limit the agent's initial knowledge so that part of its activity is directed toward learning the environment, much as the player must do. It's all about limiting the initial knowledge state of the agent while still making it viable. Predominantly, you would do this because you believe it will generate interesting behaviours that the player will identify with and add to the immersion of the game, as opposed to 'dumbing down' a perfect AI to try and create these behaviours.
One of the most frustrating aspects of FPS games is the 'perfect knowledge' of the enemies. Why does the enemy always know (and take) the shortest/safest/fastest route to an ambush position? What about strategy games? Why should your enemy not be hindered by the 'fog of war'? When such AIs are dumbed down, it's usually very obvious and very easy to predict how they will behave.
Beyond simple pathfinding requirements, being able to learn a semantic or abstract representation of the environment becomes very important when you want your agents to make pseudo-intelligent decisions regarding positioning relative to the environment, or objects within it (like other agents). When the objects are dynamic, you simply cannot provide this information at compile time.
Anyway, it's already been discussed that SLAM is overkill... it was mentioned orginally (by me) because it's a standard technique for localisation for mobile agents in uncertain environments that begin with very limited domain knowledge. No one would seriously implement full blown SLAM in a game environment, unless you were doing a sim genre game. The point of mentioning it was to direct the OP to literature on this problem and one solution method, so as to inspire their thought processes to find a solution for their specific problem.
As for landmark techniques... you might as well just give them the map, because all you're doing is (at design time) translating the map into a lower-dimensional representation (landmarks) and giving the agent the ability to interpolate paths on the higher-dimensional surface. This is essentially a skeletonisation methodology.
Cheers,
Timkin