LifeAI open source C++ library

Jeremiahg    120
LifeAI is an open source C++ library for Artificial Intelligence decision-making. It can be integrated into computer games, robotics, or anything that requires simulated intelligence.

Entities and actions are created in C++ and in text files using LifeAI's syntax. Entities have descriptive characteristics called "stats". An entity weighs how much it values each stat and how much an action would modify it to assess that action's "pros" and "cons". It also considers how much it values the other entities involved in the action and how the action affects them, and from this determines an overall value for performing the action. As stats in the simulation change, LifeAI updates the best action options for each entity, and the actions can then be carried out in the simulation as needed.
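To make the description above concrete, here is a minimal sketch of utility-style action scoring. All the names below (Entity, Action, scoreAction, bestAction) are my own illustration, not LifeAI's actual API:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Illustrative sketch only; LifeAI's real types and functions differ.
struct Entity {
    std::map<std::string, double> stats;   // current stat levels, e.g. "health"
    std::map<std::string, double> values;  // how much the entity cares about each stat
};

struct Action {
    std::string name;
    std::map<std::string, double> statDeltas;  // how the action modifies each stat
};

// Overall worth of an action = sum of (importance of stat * change to stat).
// Positive terms are the "pros", negative terms the "cons".
double scoreAction(const Entity& e, const Action& a) {
    double score = 0.0;
    for (const auto& [stat, delta] : a.statDeltas) {
        auto it = e.values.find(stat);
        double weight = (it != e.values.end()) ? it->second : 0.0;
        score += weight * delta;
    }
    return score;
}

// Pick the highest-scoring action; re-run whenever stats change.
const Action* bestAction(const Entity& e, const std::vector<Action>& actions) {
    const Action* best = nullptr;
    double bestScore = 0.0;
    for (const auto& a : actions) {
        double s = scoreAction(e, a);
        if (!best || s > bestScore) { best = &a; bestScore = s; }
    }
    return best;
}
```

Re-evaluating `bestAction` whenever a stat changes mirrors the "LifeAI updates the best action options" behavior described above.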

LifeAI is licensed under the MIT X11 license. The library and a demonstration application can be downloaded at this link:

[color="#000000"][url="http://www.lifeai.com/download.html"]http://www.lifeai.com/download.html[/url][/color]

[quote name='Jeremiahg' timestamp='1315525519' post='4859280']
LifeAI is an open source C++ library for Artificial Intelligence decision-making.
[/quote]

An EmotionAI would be pleasant too.

Jeremiahg    120
Thanks for the responses.

@ VJ01 - LifeAI can already simulate emotions to some extent. :) You can create stats that represent emotions such as "happy", "sad", "fear", and "anger", and then design actions that involve or modify those emotion levels. LifeAI also contains a function laiGetSat() that attempts to retrieve an overall satisfaction level for an entity based on its stat levels and the values it places on those stats.
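laiGetSat()'s actual signature isn't shown in this thread, so the following is only a hedged sketch of how such an overall-satisfaction measure might be computed from emotion stats and their values:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch in the spirit of laiGetSat(); the real function's
// signature and weighting scheme are unknown, so everything here is
// illustrative. Satisfaction = sum over stats of (value of stat * level).
double getSatisfaction(const std::map<std::string, double>& statLevels,
                       const std::map<std::string, double>& statValues) {
    double sat = 0.0;
    for (const auto& [stat, level] : statLevels) {
        auto it = statValues.find(stat);
        if (it != statValues.end())
            sat += it->second * level;
    }
    return sat;
}
```

With this shape, an emotion like "fear" simply gets a negative value weight, so high fear levels drag overall satisfaction down.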

@ _orm_ - LifeAI is not based on a common approach to AI, but it probably has similarities. A basic description is provided in the original post, and a more detailed description is available here: [url="http://www.lifeai.com/guide_files/general.html"]http://www.lifeai.com/guide_files/general.html[/url] I recommend viewing the video of the demonstration application here: [url="http://vimeo.com/28839032"]http://vimeo.com/28839032[/url]
You can also download it at [url="http://www.lifeai.com/download.html"]http://www.lifeai.com/download.html[/url] if you'd like to try it. It includes a text file that demonstrates how entities, stats, values, and actions all contribute to the logic of the system.


[quote]@ VJ01 - LifeAI can already simulate emotions to some extent. You can create stats that represent emotions such as "happy", "sad", "fear", and "anger", and then design actions that involve or modify those emotion levels. LifeAI also contains a function laiGetSat() that attempts to retrieve an overall satisfaction level for an entity based on its stat levels and the values it places on those stats.[/quote]

I think the word should be [i]emulate[/i].

Trienco    2555
[quote name='Jeremiahg' timestamp='1317444128' post='4867872']@ VJ01 - LifeAI can already simulate emotions to some extent. You can create stats that represent emotions such as "happy", "sad", "fear", and "anger"...
[/quote]

Always an interesting subject (probably because it ended up as my diploma thesis). Unfortunately, the biggest challenge wasn't coming up with a model that combines personality, mood, and emotion (and the different "spaces"), but the part where the system needs to know how to evaluate an event. A concept like "something with negative consequences happening to an agent I have high sympathy for causes pity" is easy, but judging the consequences of events was a lot of tedious work with highly arbitrary weights. It's even worse when you have to consider the cause of an event or morally judge an action.

To get an idea what I'm talking about: http://www.uni-ulm.de/~hhoffman/emotions/downloads/simplex.pdf
It's a heavily edited and shortened version for some book.

[quote name='Trienco' timestamp='1317456803' post='4867905']
To get an idea what I'm talking about: http://www.uni-ulm.de/~hhoffman/emotions/downloads/simplex.pdf
[/quote]


The OCC model is good and straightforwardly implementable for video games and RPG books.

The term SIMPLEX may risk trademark confusion, because it is famous in linear programming.

About Fig. 2:
Imo, Memory -> Mood-state is more accurate than Memory -> Appraisal.
The latter implies self-study of the mind with reflection.

IADaveMark    3731
[quote name='Jeremiahg' timestamp='1315525519' post='4859280']Entities have characteristics that describe them called "stats". They consider their value of the stats and how much they are modified to assess the "pros" and "cons" of an action. They also consider their value of other entities involved in the action and how it affects them. They can then determine an overall value for performing the action. As stats in the simulation change, LifeAI updates the best action options for the entities. The actions can then be implemented as needed in a simulation.[/quote]
This is the basis for what many of you have heard referred to as [i]utility theory[/i] around here.

[quote name='VJ01' timestamp='1317401697' post='4867643']
An EmotionAI would be pleasant too.
[/quote]
Utility can represent anything that can be put on a continuum: bullets in your gun, hunger, a level of happiness or anger, etc. It's all numbers.

[quote name='Jeremiahg' timestamp='1317444128' post='4867872']
@ _orm_ - LifeAI is not based on a common approach to AI, but it probably has similarities.
[/quote]
I seriously disagree with this premise. While it is not in as wide usage as FSMs, BTs, etc., utility architectures have been around in game AI for a long time. In fact, entire games have been built on this foundation... e.g. The Sims. Admittedly, it has only recently been coming to the forefront... mostly in the past 3 years, coincidentally since the release of my book, which is almost [i]entirely[/i] about modeling behavior through utility.

[quote name='Trienco' timestamp='1317456803' post='4867905']
Always an interesting subject (probably because it ended up my diploma thesis). Unfortunately the biggest challenge wasn't coming up with a model that would combine personality, mood and emotion (and different "spaces"), but the part where the system needs to know how to evaluate an event. The concept of "something with negative consequences happens to an agent I have high sympathy for causes pity" is easy, but judging the consequences of events was lots of tedious work with highly arbitrary weights. Even worse when you have to consider the cause of an event or morally judge an action.[/quote]
You are correct in pointing out that this is the major pitfall. With no slight to the OP's project, putting together a routine to crunch numbers is actually a trivial process. Many AI programmers can build a utility-based decision [i]architecture[/i] in a day or two. The trick is knowing what numbers to use. It's somewhat of a dark art to many people.

While dealing in concrete numbers is relatively simple (e.g. "how many bullets do I have left?"), taking a concrete number and overlaying an abstract "meaning" onto it is more difficult... e.g. "how does the number of bullets I have left relate to my necessity for reloading?" Further still, attempting to codify something that is, in essence, an abstract concept from the start is even more amorphous, e.g. "how angry am I?" The ultimate is converting one abstraction into another: "how does my overall perception of danger affect my anxiety level?" Now THAT is the part that is tough for people to get a handle on.

So again, while the [i]tool[/i] presented might shave a few hours off your workload, it is just a tool. And [i]tools[/i] don't create believable AI... they only [i]enable[/i] it. That said, many people sure could use some more tools.
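The concrete-to-abstract mapping described above (bullets left -> necessity of reloading) is commonly handled with a response curve. A minimal sketch, with a hypothetical function name and an inverse-linear curve of my own choosing:

```cpp
#include <algorithm>
#include <cassert>

// Illustrative response curve, not from any particular book or library:
// map a concrete number (bullets left in the clip) onto an abstract
// utility (necessity of reloading) in [0, 1]. An empty gun makes
// reloading maximally urgent; a full clip makes it unnecessary.
double reloadNecessity(int bulletsLeft, int clipSize) {
    double fraction = static_cast<double>(bulletsLeft) / clipSize;
    return std::clamp(1.0 - fraction, 0.0, 1.0);  // inverse-linear curve
}
```

In practice the curve shape (linear, quadratic, logistic, ...) is exactly the "dark art" being discussed: the architecture is easy; choosing curves and weights that produce believable behavior is the hard part.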

_orm_    112
Sounds to me like it is being lauded as a silver-bullet approach to AI, then, in which case I would be inclined to agree with IADaveMark; while being able to model emotions and overall human satisfaction could greatly enhance any AI, it still has to complement other AI techniques, such as goal- or state-driven systems, in order to turn these numbers into meaningful action on the part of the agent.

I like the work you are doing, and it could actually prove to be a viable alternative to fuzzy data sets and other forms of decision making, being based on emotions rather than on sets of data such as aggression, guts, and determination. I can actually see much more human-like behavior coming from such techniques.

Why does that prospect send a chill down my spine? :blink:

IADaveMark    3731
Here are the slides from my lecture with Kevin Dill on utility theory from the 2010 GDC AI Summit.

[url="http://www.intrinsicalgorithm.com/media/2010GDC-DaveMark-KevinDill-UtilityTheory.pdf"]http://www.intrinsicalgorithm.com/media/2010GDC-DaveMark-KevinDill-UtilityTheory.pdf[/url]

Jeremiahg    120
@ IADaveMark - Wow, there appear to be many similarities in our approaches to AI. I can see some of the challenges I've encountered discussed in your work too. I would love to get your feedback on LifeAI sometime. :)

[url="http://www.intrinsicalgorithm.com/media/2010GDC-DaveMark-KevinDill-UtilityTheory.pdf"]http://www.intrinsicalgorithm.com/media/2010GDC-DaveMark-KevinDill-UtilityTheory.pdf[/url]


An abstract and executive summary are missing. I am not saying they are necessary, but they would have been a helpful explanation for me. Then I hit a break at the macroeconomics part, because I didn't get the meaning: I wanted to know what a couple of the other alternatives to utility in macroeconomics are. I understand this is about utility too, but there are other things in macroeconomics as well. "In economics, utility is a measure of the relative satisfaction from, or desirability of, consumption of various goods and services."

I think the meaning was lost between [b]utility in micro, in macro, and then in economics generally[/b]. Imo, overall, I just don't think it was a strong explanation.

IADaveMark    3731
[quote name='VJ01' timestamp='1317558308' post='4868240']
An abstract and executive summary are missing. I am not saying they are necessary, but they would have been a helpful explanation for me.
[/quote]
It's lecture slides from GDC... the only thing "missing" is Kevin and myself actually [i]speaking[/i].


[quote]
I think the meaning was lost between [b]utility in micro, in macro, and then in economics generally[/b]. Imo, overall, I just don't think it was a strong explanation.
[/quote]
Again, if you actually hear a lecturer, it makes more sense. Otherwise the lecturer would have nothing to do because he would simply be reading the slides like a book. Kind of a silly approach, don't you think?


The EmotionAI mentioned above is something I have written up in an Objects Library project on CodePlex:
http://olib.codeplex.com/wikipage?title=Emotion%20Engine%20start-up%20business%20with%20freeware%20version%201

A copy-paste of half of the essay:
[quote]
Sets and multimap data structures to start. Avoidance of function pointers in the first version.
It would be useful if map levels were designed to feed input into a few syllogisms and then a value from the multimap.

For example, in a 3D first-person game:

requirement: player 2 used/held her/his large thunder-hammer weapon for 2.4+ secs.
result: Weapon Use Profiling recorded this event-sequence occurring 40%-60% of the time in level 2.
concern 1: Player 2, trying to hack his future profile, changes weapons intermittently to pistol and fists to throw off the profiling.
concern 2: Player 1 tries to set up Player 2 by changing positions for aliens to be trapped in an arcane situation that allows for a 1-2.3 second use of the weapon.

Standard Player Profile upon leaving level 2 (1-7 emotion award grades):
A) Extremely brave (coward = 1, too brave = 7)
B) Confident / very confident (paranoid = 1, arrogant = 7) (Version 2, pompous = 10)
...extreme preference for 3 emotions; up to 4 emotions max in version 1.

Arbitrary weapon use:
0-30% = DNU, do not use, AND use other profiling results.
40-60% = explained as it is
60% - 70%
70% - 85%
85% - 100% -> use your imagination

How to use this profile, which is a simplification valid regardless of circumstance.
====
[/quote]
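The multimap-based weapon-use profiling quoted above might be sketched as follows. All names, thresholds, and the grading formula are my own illustration, not from the Objects Library project:

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative sketch of multimap-based weapon-use profiling.
// Each entry records how long (in seconds) a player held a weapon.
using UseProfile = std::multimap<std::string, double>;

// Fraction of recorded uses of `weapon` lasting at least `minSecs`
// (e.g. the 2.4+ second thunder-hammer holds described above).
double longUseRatio(const UseProfile& profile, const std::string& weapon,
                    double minSecs) {
    auto range = profile.equal_range(weapon);
    int total = 0, longUses = 0;
    for (auto it = range.first; it != range.second; ++it) {
        ++total;
        if (it->second >= minSecs) ++longUses;
    }
    return total ? static_cast<double>(longUses) / total : 0.0;
}

// Map a use ratio onto a 1-7 emotion award grade with an arbitrary
// linear scale (0.0 -> 1 "coward", 1.0 -> 7 "too brave").
int braveryGrade(double ratio) {
    return 1 + static_cast<int>(ratio * 6.0);
}
```

A multimap suits this because the same weapon key accumulates many timed-use events, and `equal_range` retrieves them all for profiling.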

