johnnyBravo

Any known methods for resolving goals using whatever is at hand?


Hi, I was curious whether there are any known methods for AI where it has a certain goal and a bunch of actions which it can mix and match to complete that goal. E.g. a spaceship mining an asteroid: the spaceship has a laser to shoot asteroids, thrusters to get there, radar to find asteroids, a scoop to pick up asteroid fragments, etc.

One of the difficulties there is the world/action representation. The computer needs to know that a scoop can pick up "things" and that ore in pieces is a "thing" that can be picked up. It needs to know that a laser can take embedded ore and turn it into a "thing" that can be picked up. Modelling that sort of world, where these things are known, takes a great deal of complexity. Once that modelling is done, however, it would be a relatively simple task to walk through the possible actions and objects in order to solve the task at hand.
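Something like this rough sketch is what that modelling boils down to (the trait and tool names here are just made up for illustration):

# Hypothetical sketch of an explicit world/action model: objects carry traits,
# and tools declare which trait they act on and what they produce.
OBJECTS = {
    "asteroid":     {"traits": {"solid", "contains_ore"}},
    "ore_fragment": {"traits": {"loose", "thing"}},   # a "thing" that can be picked up
}

TOOLS = {
    "laser": {"acts_on": "contains_ore", "produces": "ore_fragment"},
    "scoop": {"acts_on": "loose",        "produces": None},   # picks the object up
}

def usable_on(tool, obj):
    """A tool applies to an object if the object has the trait the tool acts on."""
    return TOOLS[tool]["acts_on"] in OBJECTS[obj]["traits"]

print(usable_on("laser", "asteroid"))       # True  - laser can free the embedded ore
print(usable_on("scoop", "asteroid"))       # False - nothing loose to pick up yet
print(usable_on("scoop", "ore_fragment"))   # True  - fragments are loose things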

Simple enough. Be very specific, and work backwards. Our goal isn't to mine an asteroid; it's to pick up asteroid fragments coming from an asteroid we have mined (this prevents us from picking up random asteroid fragments we have not mined; use an event identifier to show who the asteroid was mined by).

So here is how I would do it:

1. Goal: Pick up Asteroid Fragments mined by me
1. Answer: Use scoop
1.1 Problem: Creating Fragments
1.1 Answer: Use Laser
1.1.1 Problem: No asteroid
1.1.1 Answer: Use radar
1.1.1.1 Problem: Found asteroid, how do I get there?
1.1.1.1 Answer: Thrusters

Okay, simple enough. But how do we get the AI to know all that stuff? Well, a simple way would be to hardcode the answer in. When the AI wants to pick up fragments, it must use some kind of scoop. If it wants to find something, it must make a call to its radar. Now just work backwards.
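As a rough illustration of hardcoding that work-backwards chain (all the goal and action names here are invented):

# Each (sub)goal maps to the action that solves it and the sub-goal that must
# be achieved first (None = nothing else needed). Hardcoded, just like above.
GOALS = {
    "pick_up_fragments": ("use_scoop",      "create_fragments"),
    "create_fragments":  ("fire_laser",     "reach_asteroid"),
    "reach_asteroid":    ("fire_thrusters", "find_asteroid"),
    "find_asteroid":     ("scan_radar",     None),
}

def plan(goal):
    """Walk backwards from the goal, then return the actions in execution order."""
    actions = []
    while goal is not None:
        action, prerequisite = GOALS[goal]
        actions.append(action)
        goal = prerequisite
    return list(reversed(actions))

print(plan("pick_up_fragments"))
# ['scan_radar', 'fire_thrusters', 'fire_laser', 'use_scoop']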


If you are talking about more difficult "knowing," such as realizing that not only can your ship's laser kill enemies, but it can also create asteroid dust, then one possible way to do this is to give each item a set of flags that give the AI a hint about which items it will affect, and what the effect will be like. For example, our flamethrower melts enemies, but your laser shatters them. For our asteroid it could be like:

ITEM: Flamethrower
LIKE: Enemies

ITEM: Laser
LIKE: Enemies | Rocks | Planets

So let's say your AI has never used a flamethrower on an enemy. Then it might try it on an asteroid. When this happens, it sees the asteroid put up the flag that its state is "Melted", and fills in the appropriate spot for what will happen when an enemy is hit by a flamethrower.
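In code, that flag table plus the fill-in-what-you-observed step might look roughly like this (the category names and the "Melted" state are only illustrative):

# Items advertise which categories of target they are LIKEly to affect, and the
# AI fills in the observed effects as it experiments.
ITEMS = {
    "flamethrower": {"like": {"enemies"},                     "effects": {}},
    "laser":        {"like": {"enemies", "rocks", "planets"}, "effects": {}},
}

def try_item(item, target_category, observed_state):
    """Record what actually happened when the item was used on this kind of target."""
    ITEMS[item]["effects"][target_category] = observed_state

# The AI has never used the flamethrower on an enemy, so it experiments on an
# asteroid (a rock) and records what it sees.
try_item("flamethrower", "rocks", "Melted")
print(ITEMS["flamethrower"]["effects"])   # {'rocks': 'Melted'}

# Since enemies are also on the flamethrower's LIKE list, the AI can now guess
# that an enemy hit by the flamethrower will end up "Melted" as well.
ITEMS["flamethrower"]["effects"].setdefault("enemies", "Melted")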


So, hopefully that gave you some ideas. Nothing polished, but maybe you can put it together to make something useful.

I'm more interested in the realisation. E.g., to test its abilities, I give it an array/list of commands that it can perform. It doesn't actually know what the functions do; it just knows the limits of the inputs to these functions and how to run them.

So if it uses the thruster function, it will notice that sending values larger than 0 to the thrusters will move the ship, and that different angle values will move it in different directions; then hopefully later on it is able to use that to move towards things.
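Roughly what I have in mind, as a toy sketch (the Ship class here is just a stand-in for whatever the game actually exposes):

import math
import random

class Ship:
    def __init__(self):
        self.x, self.y = 0.0, 0.0

    def thruster(self, power, angle_deg):
        # the agent is only told the valid input ranges, not what this does
        if power > 0:
            self.x += power * math.cos(math.radians(angle_deg))
            self.y += power * math.sin(math.radians(angle_deg))

ship = Ship()
observations = []   # (inputs tried, change in position observed)

for _ in range(20):
    power = random.uniform(-5, 10)    # deliberately includes useless inputs
    angle = random.uniform(0, 360)
    before = (ship.x, ship.y)
    ship.thruster(power, angle)
    observations.append(((power, angle),
                         (ship.x - before[0], ship.y - before[1])))

# From these pairs the agent can notice that power > 0 moves the ship, that the
# angle controls the direction, and that power <= 0 does nothing.
print(observations[:3])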

I am hoping it's possible to just give a spaceship one goal, that is to survive, and by experimenting it can resolve that goal... but I guess that might be a bit far-fetched.

I say make a list of possible things to do in a situation, and depending on the set AI level, use a random roll to weight the odds. I posted a topic on this a little further down.

I read the post. I'm thinking more of the AI acting randomly depending on the different situations it would be going through.

E.g. there would be a couple of ships that are capable of creating other ships as pawns to do their bidding, with set commands depending on how the main ship realised something could be done to complete the overall goal of surviving -> getting energy -> mining ore -> converting ore in a reactor to energy.

If ore that has been seen by this ship is being taken by other ships, the ship will realise that if this keeps happening it will itself have no energy and in the end die because of that, so it would find a way of destroying those other ships.

And if those ships keep coming, it might try to find the source of these ships.

Quote:
Original post by johnnyBravo
I'm more interested in the realisation. E.g., to test its abilities, I give it an array/list of commands that it can perform. It doesn't actually know what the functions do; it just knows the limits of the inputs to these functions and how to run them.

So if it uses the thruster function, it will notice that sending values larger than 0 to the thrusters will move the ship, and that different angle values will move it in different directions; then hopefully later on it is able to use that to move towards things.
What you are talking about here is, at best, experimentation and learning. This can be accomplished in a number of ways. The root problem, however, IS the cause and effect modelling in the environment. Visage pointed out a little of that.

In real life, through our own experiences and memories (which may also have been taught rather than learned on our own) we know many of the cause/effect pairs. What's more, we can apply a knowledge of similarities to further augment our knowledge. If a hammer can break a brick into smaller parts then, if we know the nature of a brick AND a rock, we can deduce that a rock could also be broken by a hammer. In fact, we can then take that hammer and expand its uses to break lots of things - a table, for example - which, on the surface, shares very few traits with a rock or brick.

In the computer world, however, each of these relationships (or at least the traits that determine the potential relationships) needs to be modelled. It would be very inefficient to model something along the lines of:

Hammer:
breaks bricks
breaks rocks
breaks tables

Brick:
Breakable by hammer

Rock:
Breakable by hammer

Table:
Breakable by hammer

The quandary is obvious when you consider adding either another tool (e.g. an explosive?) or another material (e.g. a head?) to the equation. We now have to update the database to show that hammers can break heads and explosives can break all of the above. The combinatorial explosion of possible solutions gets out of control quickly.

Now, if you were to limit the items to a finite number of things, that's fine. At that point, the relationships can be entirely modelled and the computer can sort through them to determine what it can and cannot use. Visage hit it on the head when he talked about starting at the OTHER end of the equation. If you start from "what do I have" and try to work forward, you get a lot of crap branches on your tree that aren't getting you anywhere. If you start from the desired solution and then "build" your list of actions and requirements (which will likely have dependencies), you can work your way back to a point where you are asking "what do I have to do first, and what do I have that will accomplish this sub-goal?"

This approach is used all the time in RTS games, for example. If you design the architecture right, you can specify an end goal that is WAY in the distance and the computer will spit out a list of prerequisites, IN ORDER, that need to be performed or acquired in order to accomplish that goal.
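That "spit out the prerequisites in order" step is basically a dependency walk. A toy version, with a completely made-up tech tree (none of these names come from any particular game):

# Given a far-off goal, emit the prerequisite steps in the order they must be done.
PREREQS = {
    "battleship":   ["shipyard", "heavy_laser"],
    "shipyard":     ["ore_refinery"],
    "heavy_laser":  ["ore_refinery"],
    "ore_refinery": [],
}

def build_order(goal, done=None, order=None):
    done = set() if done is None else done
    order = [] if order is None else order
    for need in PREREQS[goal]:
        if need not in done:
            build_order(need, done, order)
    if goal not in done:
        done.add(goal)
        order.append(goal)
    return order

print(build_order("battleship"))
# ['ore_refinery', 'shipyard', 'heavy_laser', 'battleship']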

I'm not sure that this was the initial intent of this thread, however.

The way I like to look at AI is utilitarian: there is a process that has a list of possible final results. Associate a real number (called utility) with each state, representing how happy we are with the result. Now for each possible action, estimate the expected value of the utility. Pick the largest.

That is a general recipe, and it can be proven that, under a very sensible definition of "rational", every possible rational action selection mechanism can be expressed in this way.

I didn't tell you what the utility function should be or how to estimate its expected value. Those are difficult problems that will require assumptions and simplifications, and that's why this is no magic bullet. It's just a general paradigm that can help you design a solution.
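A bare-bones sketch of that recipe (the states, utilities and probabilities below are placeholders, not anything principled):

# Pick the action with the highest expected utility.
# utility[s] says how happy we are with outcome s; outcomes[a] lists (probability, outcome).
utility = {"ore_collected": 10.0, "nothing_found": -1.0, "ship_destroyed": -100.0}

outcomes = {
    "mine_nearby_asteroid": [(0.9, "ore_collected"), (0.1, "nothing_found")],
    "mine_far_asteroid":    [(0.6, "ore_collected"), (0.3, "nothing_found"),
                             (0.1, "ship_destroyed")],
}

def expected_utility(action):
    return sum(p * utility[state] for p, state in outcomes[action])

best = max(outcomes, key=expected_utility)
print(best, expected_utility(best))   # mine_nearby_asteroid 8.9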

Well, in regards to your original post, I have a similar situation in a project I'm working on. Essentially an agent has a goal to accomplish and a list of actions it can perform. How I handle it is by setting obstacles which the agent has to overcome using one or more of its available actions.

So in your case you have the following actions
Fire thrusters
Fire Laser drill
Scoop ore
Scan for asteroid.

So the first obstacle the ship encounters is that it needs to find an asteroid; it then chooses the action most appropriate to solving that problem. It then has to fly to the asteroid, mine the asteroid, and gather the ore.

In this case each obstacle can only be overcome by a single corresponding action so you don't get a situation where the agent can overcome an obstacle in multiple ways. Where things get interesting is when you add resources into the mix.
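With a strict one-to-one mapping between obstacles and actions, the core loop is almost trivial; a sketch (names invented):

# Each obstacle has exactly one action that overcomes it, so the agent just
# looks up and executes actions in the order the obstacles appear.
OBSTACLE_ACTIONS = {
    "need_asteroid_location": "scan_for_asteroid",
    "asteroid_out_of_reach":  "fire_thrusters",
    "ore_still_embedded":     "fire_laser_drill",
    "ore_not_on_board":       "scoop_ore",
}

def handle(obstacles):
    return [OBSTACLE_ACTIONS[o] for o in obstacles]

print(handle(["need_asteroid_location", "asteroid_out_of_reach",
              "ore_still_embedded", "ore_not_on_board"]))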

For instance if the ship has the following resources:
100 units of fuel
1,000 tons of cargo space
10 laser charges.

and actions have the following costs.
Fire thrusters = 1 fuel per space moved
Fire laser drill = 10 fuel and 1 charge per firing, which frees 25% of an asteroid's ore
Scoop ore = 1 fuel per 25 tons of ore gathered
Scan for asteroid = 5 fuel

Goal: Gather ore and return to port.

This now allows the agent to make more complex decisions, since it has several factors to consider in how to respond to each obstacle as part of an overall strategy to complete the goal.


For instance take this situation
Action: ship leaves port F=100
Obstacle: Locate Asteroid
A: scan F=95
Result: Asteroid A1 found distance 5 space units ore density 500
*now the AI has to make a decision: does it travel to the asteroid, or scan for another one?*
A: scan F=90
R: asteroid A2 found distance 20 space units ore density 4,000
*now it has two asteroids to choose from and has to decide which asteroid is the better choice, or whether it should keep scanning at the cost of precious fuel.*
**Assuming it has mined before, it should have a list of obstacle/action pairs in memory which will allow it to decide which asteroid to choose.**
*** A simple computation reveals that A1 would require 5 fuel to travel to, 40 fuel and 4 laser charges to completely mine, another 20 fuel to gather the ore, and another 5 fuel to return to port, which amounts to 70 of 90 fuel and 500 tons of ore. A2, meanwhile, requires 20 fuel to reach, 10 fuel and 1 charge to mine a full cargo load of ore, 40 fuel to gather the ore and another 20 to return to port, which amounts to 90 of 90 fuel and 1,000 tons of ore. So in the end the second proves to be the better choice. Of course, if the game world is more complicated and other events can be encountered along the way, then the agent may choose the first instead, since it leaves sufficient fuel to cope with the unforeseen.***
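The arithmetic above can be reproduced directly from the listed costs; a quick sketch (assuming the 1,000 ton cargo hold caps how much ore is worth freeing):

import math

CARGO_CAPACITY = 1000   # tons

def plan_cost(distance, ore_density):
    ore_target = min(ore_density, CARGO_CAPACITY)
    firings = math.ceil(ore_target / (0.25 * ore_density))   # laser frees 25% per shot
    fuel = (distance            # fly there
            + 10 * firings      # 10 fuel per laser firing
            + ore_target / 25   # scoop: 1 fuel per 25 tons
            + distance)         # fly back to port
    return fuel, firings, ore_target

for name, dist, density in [("A1", 5, 500), ("A2", 20, 4000)]:
    fuel, charges, ore = plan_cost(dist, density)
    print(name, fuel, charges, ore)
# A1 70.0 4 500
# A2 90.0 1 1000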

If you want your AI to be able to figure out different things to do, and how to solve problems, you need it to be able to recognize different attributes of items, etc. You need to set up flags such as "BRITTLE" and "DENSE" or "GOOEY", something like that.

alvaro's answer works really well for this. For each action, rate its success from -1 to 1. If two things, done by the same tool, are both above .5, then perhaps we should look for similarities, right? So we see that our hammer works well on a brick and that it works well on a rock. Then we could associate that the hammer may work well on things with the flag "BRITTLE" on them. So if we wanted to break something, we could see if it had the BRITTLE flag on it, and try the hammer. If it didn't work, then we would see the difference between the old actions and the new one, and change which flags we associate with the hammer.
This would mean that if we tried to use the hammer on something GOOEY, and it didn't work and we rated it as -1, we probably wouldn't try the hammer on something GOOEY ever again.
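One way that rating-and-association idea could be wired up (the flags and the 0.5 threshold come from above; everything else is made up):

from collections import defaultdict

# Remember how well each tool worked on each flag (-1 bad ... +1 good) and only
# associate the tool with flags whose average rating is above 0.5.
ratings = defaultdict(list)   # (tool, flag) -> list of outcome ratings

def record(tool, target_flags, rating):
    for flag in target_flags:
        ratings[(tool, flag)].append(rating)

def associated_flags(tool):
    return {flag for (t, flag), scores in ratings.items()
            if t == tool and sum(scores) / len(scores) > 0.5}

record("hammer", {"BRITTLE", "DENSE"}, 1.0)    # worked great on a brick
record("hammer", {"BRITTLE", "DENSE"}, 0.8)    # and on a rock
record("hammer", {"GOOEY"}, -1.0)              # useless on plasmic goo

print(associated_flags("hammer"))   # e.g. {'BRITTLE', 'DENSE'}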

That would cause the AI to be like a baby for a while until it learned associations, but this could be rapidly improved with several AIs acting and sharing what they have learned.

The problem with this is that it requires each acting AI to keep a large memory of past actions. You can figure something out, though.


But this only really answers how to associate items with actions. The next step is associating actions with problems. Problem: Movement. Answer? Well, the thing just has to keep trying random shit until it finds that it moves using the thrusters. There is no real answer to that.

Unless you hardcode in some initial values, the AI will be baby-like during a development period where it will try to thrust into rocks and hammer on plasmic goo, etc. Just deal ;)

But! If you do want smart AI, then just run some sort of evolution program that, after 100 or so generations, prints out the table for the items and their flags. Just hardcode this into your game.

Alright! Right on! By the way, allow for mutations. It may allow for some unexpected things. Who knows, thrusting into an asteroid and losing 1 health and only 1 energy may be better than using 5 energy to mine, especially if it does the same thing. And you wouldn't have thought of that, right? ;)
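A very rough shape for that evolve-offline-then-hardcode idea, mutation included (the fitness function and all the numbers here are placeholders):

import random

TOOLS = ["laser", "scoop", "thruster"]
FLAGS = ["BRITTLE", "LOOSE", "MOBILE"]

def random_table():
    # candidate solution: which flag each tool is believed to work on
    return {tool: random.choice(FLAGS) for tool in TOOLS}

def fitness(table):
    # placeholder: score how well the table matches behaviour seen in test runs
    target = {"laser": "BRITTLE", "scoop": "LOOSE", "thruster": "MOBILE"}
    return sum(table[t] == target[t] for t in TOOLS)

def mutate(table, rate=0.1):
    # mutations occasionally swap in a different flag, which is where the
    # unexpected-but-useful behaviour can come from
    return {tool: (random.choice(FLAGS) if random.random() < rate else flag)
            for tool, flag in table.items()}

population = [random_table() for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(t) for t in survivors]

print(max(population, key=fitness))   # hardcode this table into the game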
