Strategic sensor placement


Hello! I'm looking for AI algorithms for strategic placement of objects in a continuously changing environment. Does anyone know of one? More specifically: I want to place sensors (radars) in an area of interest and continuously track objects as well as possible, even though the sensors can't cover the whole area simultaneously. This is part of a final-year project at my university, and the point is to compare "traditional" algorithms with my own solution.

More information is needed to be able to offer a solution to this problem:

1) Does the field of a sensor overlap with the field of another sensor (for data fusion)?
2) Are the sensors perfect (give exact localisation information) or do they return a sample from a distribution of possible values (like real sensors do)?
3) Is the number of sensors fixed, or are you trying to work out how many sensors are needed to cover an area?
4) How are you measuring the quality and extent of coverage?

The answers to these questions will determine whether you can utilise a simple geometric optimisation algorithm (ignoring the sensor data) or whether you will need to consider aspects like the value of information arising from data returned by each sensor.

Cheers.

Timkin

[edited by - Timkin on February 19, 2004 7:35:46 PM]

Hello Timkin and thanks for answering!

1) The sensors might overlap, but data fusion can be assumed implicitly.

2) For simplicity the sensors are assumed to be perfect.

3) I have a fixed number of sensors to use. The point is to deploy them in the way that best covers the area. However, if an object enters the area, the sensors should continue to track it even when it moves into a part of my area of interest that the sensors do not currently cover, i.e. the sensors should move in some intelligent way.
The first priority is tracking known objects; the second is keeping coverage over the rest of the area of interest.

4) The measure is how many of the objects the system detects and continuously tracks. This is what I want to compare.
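Measure (4) could be made concrete with a small scoring function. This is only a sketch under assumptions of my own: sensors are static for the duration of the score, have circular footprints of a single shared range, and "tracked" is simplified to "within range of some sensor at that timestep". All names are illustrative.

```python
import math

def coverage_score(object_tracks, sensor_positions, sensor_range):
    """Fraction of object positions (over all timesteps) that fall
    within range of at least one sensor.  `object_tracks` maps an
    object id to a list of (x, y) positions, one per timestep."""
    covered = total = 0
    for track in object_tracks.values():
        for (ox, oy) in track:
            total += 1
            if any(math.hypot(ox - sx, oy - sy) <= sensor_range
                   for (sx, sy) in sensor_positions):
                covered += 1
    return covered / total if total else 1.0
```

Two deployment algorithms can then be compared by running the same object tracks against each layout and comparing scores; a real version would also have to distinguish "detected once" from "continuously tracked".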


Well,

I thought this was one of those problems in mathematics that hasn't been solved yet. I remember a problem like this:

- you have a circular area to guard from incoming parachutes
- how do you deploy 10 tanks to optimally guard the circular piece of land?

I thought this problem wasn't solved yet... does anybody have any links?

But now for the problem at hand:

If I understand correctly, your sensors can move with the object. If so:
- When an object is about to move out of range of the sensors, let the sensor closest to it follow it, keeping the distance from object to sensor equal to the sensor radius.
- Let x sensors close to the tracking sensor's original position be repositioned to cover the 'empty' space left by the tracking sensor.
- When another sensor gets closer to the object than the tracking sensor does, hand the 'tracking responsibility' to that sensor. The current tracking sensor can either return to its original position, or stay at its current position and reposition all other sensors around it.

This way, when you have a tight grid of sensors, an object will ripple through an array of sensors.

Remember that the distance between sensors always has to be MAXIMIZED in order to have optimal coverage. How to do this, I don't know. I would start with just a linear grid.
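The first and third rules above could look something like this in outline. This is a rough 2-D sketch: the function and parameter names are mine, the second rule (backfilling the vacated position) is left out, and each call handles a single target for a single timestep.

```python
import math

def handoff_step(sensors, target, sensor_range):
    """One update of the hand-off rule.  `sensors` maps a sensor id
    to its (x, y) position.  Returns the id of the sensor now
    responsible for `target`, after moving that sensor so the target
    sits exactly at the sensor radius."""
    # Rule 3: hand responsibility to whichever sensor is now closest.
    tracker = min(sensors, key=lambda s: math.dist(sensors[s], target))
    # Rule 1: the tracker follows, staying on the line to the target
    # at a distance equal to the sensor radius.
    sx, sy = sensors[tracker]
    tx, ty = target
    d = math.dist((sx, sy), (tx, ty))
    if d > sensor_range:
        scale = (d - sensor_range) / d
        sensors[tracker] = (sx + (tx - sx) * scale,
                            sy + (ty - sy) * scale)
    return tracker
```

Called every timestep over a grid of sensors, responsibility would "ripple" from sensor to sensor as the target moves, as described above.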

Edo

An idea might be to introduce something similar to a flocking concept. That is, have your sensors try to stay near the center point while attempting to keep a certain distance between neighbouring sensors at the same time (I guess this would be done at deployment time). When a sensor breaks off to chase a target, remove it from the group calculations and the other sensors will shift automatically.

If your sensors do not cover the entire area, it is probably best that they move dynamically and rotate around constantly in order to be most effective.

It is worth noting that this is just an idea and it may not prove effective at all.
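The flocking idea above might be sketched as a per-step position update with just two forces, cohesion toward the center and separation from crowding neighbours. The weights, the `separation` distance and the step size here are arbitrary placeholders, not tuned values.

```python
import math

def flock_step(positions, center, separation, step=0.1):
    """One flocking update: each sensor drifts toward `center` while
    pushing away from any neighbour closer than `separation`."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Cohesion: pull toward the area's center point.
        dx, dy = center[0] - x, center[1] - y
        # Separation: push away from neighbours that are too close.
        for j, (nx, ny) in enumerate(positions):
            if i == j:
                continue
            d = math.hypot(x - nx, y - ny)
            if 0 < d < separation:
                dx += (x - nx) / d * (separation - d)
                dy += (y - ny) / d * (separation - d)
        new_positions.append((x + step * dx, y + step * dy))
    return new_positions
```

Dropping a chasing sensor from `positions`, as suggested, automatically lets the rest of the flock spread to fill the gap on subsequent steps.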

Okay, from your enhanced problem description this is a problem in the field of coordinated agents (multi-agent systems). Each of your sensors should be viewed as an agent since it can sense and act upon its beliefs. Thus, each agent should formulate a belief model of its environment, obtain observations and act according to its intentions (track an object). However, each agent must take into account what every other agent is doing before it makes its decision.

There are three predominant approaches to this sort of problem:

1) Each agent is given the observations of every agent and permitted to make its own decision;

2) A single agent receives observations from every agent and orders all agents to act; or,

3) Each agent can communicate its probable actions to every other agent so that all agents can work out the best solution between them (hard to do).

Which one you choose is up to you. Before you begin though, you need your agents to be able to: a) recognise moving objects in their field of view that are NOT other sensor agents; b) model the trajectory of these targets within an appropriate parameter space; and, c) determine the optimal action given the model of the domain. Once you have one agent that can do that, then you can consider building multiple agents into a cohesive system.
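Capabilities (a)-(c) might be stubbed out roughly as below. All names are illustrative, the trajectory model is a bare constant-velocity guess from the last two fixes (not a recommendation over proper filtering), and (c) is only a placeholder decision rule.

```python
class SensorAgent:
    """Skeleton of one sensor agent, following steps (a)-(c)."""

    def __init__(self, position, field_of_view):
        self.position = position
        self.field_of_view = field_of_view
        self.tracks = {}   # target id -> last two observed positions

    def observe(self, detections, known_agents):
        # (a) keep only objects that are NOT other sensor agents.
        for tid, pos in detections.items():
            if tid in known_agents:
                continue
            history = self.tracks.setdefault(tid, [])
            history.append(pos)
            del history[:-2]          # keep the last two fixes only

    def predict(self, tid):
        # (b) constant-velocity trajectory model from two fixes.
        history = self.tracks.get(tid, [])
        if len(history) < 2:
            return history[-1] if history else None
        (x0, y0), (x1, y1) = history
        return (2 * x1 - x0, 2 * y1 - y0)

    def choose_action(self):
        # (c) placeholder: head for the first predicted target
        # position; a real agent would optimise over a domain model.
        predictions = [self.predict(t) for t in self.tracks]
        predictions = [p for p in predictions if p is not None]
        return predictions[0] if predictions else self.position
```

A multi-agent system would then layer one of approaches (1)-(3) above on top of many such agents.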

Presuming you've made it that far, my considered opinion is that you should start with a systematic approach. You can adapt this later into a more 'complex systems' approach if you desire... but get something going first, by which to measure future algorithms.

You should probably try to minimise the resources consumed to observe a target. That is, each sensor agent should be able to estimate the cost of changing its position to monitor the target, and the cost of tracking the target on its current trajectory until it moves out of its field of view (which needs to be defined based on the maximum movement of the camera). A plan to track a target would then consist of the possible actions of a sequence of agents. You could select an appropriate plan by minimising cost, maximising the distance over which the target is tracked and/or maximising the time it is tracked. However, since the trajectory of a target may change, you'll need to be able to update plan costs and replan accordingly. You might find that the D* algorithm will work for you in this planning domain. If not, there are other replanning algorithms out there for deterministic domains.
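As a first cut at the cost estimates described above, before bringing in D* or any real replanner, one could compare straight-line plan costs per agent. Every name and constant here is an assumption: unit move cost per distance, a flat per-timestep tracking cost, and a target that holds its current velocity.

```python
import math

def plan_cost(agent_pos, target_pos, target_velocity, fov_radius,
              move_cost=1.0, track_cost=0.1, horizon=50):
    """Illustrative plan cost: repositioning onto the target plus
    tracking it until its straight-line trajectory leaves a field of
    view of radius `fov_radius` held at the intercept point."""
    reposition = move_cost * math.dist(agent_pos, target_pos)
    speed = math.hypot(*target_velocity)
    # Timesteps the target stays in view if the agent holds station.
    steps_in_view = horizon if speed == 0 else min(horizon,
                                                   fov_radius / speed)
    return reposition + track_cost * steps_in_view, steps_in_view

def best_plan(agents, target_pos, target_velocity, fov_radius):
    """Pick the agent with the cheapest plan for this target."""
    return min(agents,
               key=lambda a: plan_cost(agents[a], target_pos,
                                       target_velocity, fov_radius)[0])
```

Replanning then amounts to re-running `best_plan` whenever the target's estimated trajectory changes; a real system would reuse prior computation the way D* does rather than recompute from scratch.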


Now, when there are multiple targets to track, you have a resource scheduling problem. You need to consider that there are now several different possible actions each agent could perform. Should an agent abandon its current target to watch another target, or should it leave it to some other sensor agent? This is where the multi-agent system arises. I'm by no means an expert on multi-agent systems, so I'll leave it to you to read some appropriate literature on the subject. If you don't find anything that helps, then I suggest you go back to the Operations Research approach, which is optimised resource allocation.
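For the multi-target case, a naive greedy allocation gives a baseline to measure against. This is a stand-in for proper optimised resource allocation (e.g. the Hungarian algorithm, which minimises total cost exactly); the greedy version just repeatedly pairs the closest remaining agent and target.

```python
import math

def assign_targets(agents, targets):
    """Greedy one-to-one allocation of agents to targets by distance.
    `agents` and `targets` both map an id to an (x, y) position;
    returns a dict of target id -> agent id."""
    # Enumerate every (distance, agent, target) pair, cheapest first.
    pairs = sorted((math.dist(apos, tpos), a, t)
                   for a, apos in agents.items()
                   for t, tpos in targets.items())
    assignment, used_agents, used_targets = {}, set(), set()
    for _, a, t in pairs:
        if a in used_agents or t in used_targets:
            continue
        assignment[t] = a
        used_agents.add(a)
        used_targets.add(t)
    return assignment
```

Greedy matching can be noticeably worse than the optimal assignment in contrived layouts, which is exactly the kind of gap the project's comparison could quantify.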

Good luck... and keep us informed of how you go!

Cheers,

Timkin
