LifeAI open source C++ library


14 replies to this topic

#1 Jeremiahg   Members   -  Reputation: 120


Posted 08 September 2011 - 05:45 PM

LifeAI is an open source C++ library for Artificial Intelligence decision-making. It can be integrated into computer games, robotics, or anything that requires simulated intelligence.

Entities and actions are created in C++ and in text files using LifeAI's syntax. Entities have characteristics, called "stats", that describe them. To assess the "pros" and "cons" of an action, an entity weighs how much it values each stat against how much the action would modify it. It also considers how much it values the other entities involved in the action and how the action affects them, and from this it derives an overall value for performing the action. As stats in the simulation change, LifeAI updates each entity's best action options, which can then be carried out in the simulation as needed.
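To make the idea concrete, here is a minimal sketch of that scoring scheme. All names are illustrative, not LifeAI's actual API: an entity assigns a value to each stat, and an action's overall score is the value-weighted sum of the stat changes it causes (positive terms are the "pros", negative ones the "cons").

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch of the scoring idea described above:
// an entity values each stat, and an action modifies stats.
struct Entity {
    std::map<std::string, double> statValue;  // how much this entity cares about each stat
};

struct Action {
    std::map<std::string, double> statDelta;  // how the action changes each stat
};

// Positive contributions are the "pros", negative ones the "cons".
double scoreAction(const Entity& e, const Action& a) {
    double total = 0.0;
    for (const auto& [stat, delta] : a.statDelta) {
        auto it = e.statValue.find(stat);
        if (it != e.statValue.end())
            total += it->second * delta;  // weight the change by the entity's value of that stat
    }
    return total;
}
```

Picking the best action is then just a matter of scoring every available action and taking the maximum.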

LifeAI is licensed under the MIT X11 license. The library and a demonstration application can be downloaded at this link:

http://www.lifeai.com/download.html


#2 VJ01   Members   -  Reputation: 80


Posted 30 September 2011 - 10:54 AM


An EmotionAI would be pleasant too.

#3 _orm_   Members   -  Reputation: 112


Posted 30 September 2011 - 11:20 AM

So what all is in this library? Goal hierarchies? Fuzzy logic modules? Or is this everything wrapped into one package?

#4 Jeremiahg   Members   -  Reputation: 120


Posted 30 September 2011 - 10:42 PM

Thanks for the responses.

@ VJ01 - LifeAI can already simulate emotions to some extent. :) You can create stats that represent emotions such as "happy", "sad", "fear", and "anger". You can then design actions that involve or modify those emotion levels. LifeAI also contains a function, laiGetSat(), that attempts to retrieve an overall satisfaction level for an entity based on its stat levels and the values it places on those stats.
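As a rough illustration of that idea (hypothetical code, not LifeAI's actual API; the real laiGetSat() signature may differ), overall satisfaction could be computed as a value-weighted average of the entity's stat levels:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch: emotions modeled as ordinary stats, with overall
// satisfaction as a value-weighted average of stat levels, in the spirit
// of the laiGetSat() function mentioned above.
struct EmotionalEntity {
    std::map<std::string, double> statLevel;  // e.g. "happy" = 0.8, "fear" = 0.2
    std::map<std::string, double> statValue;  // how much each stat matters to the entity
};

double getSatisfaction(const EmotionalEntity& e) {
    double weighted = 0.0, totalWeight = 0.0;
    for (const auto& [stat, level] : e.statLevel) {
        auto it = e.statValue.find(stat);
        double w = (it != e.statValue.end()) ? it->second : 0.0;
        weighted += w * level;
        totalWeight += w;
    }
    return totalWeight > 0.0 ? weighted / totalWeight : 0.0;
}
```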

@ _orm_ - LifeAI is not based on a common approach to AI, but it probably has similarities. A basic description is provided in the original post, and a more detailed description can be found here: http://www.lifeai.co...es/general.html I also recommend the video of the demonstration application.
You can download it from http://www.lifeai.com/download.html if you'd like to try it. It includes a text file that demonstrates how entities, stats, values, and actions all contribute to the logic of the system.

#5 VJ01   Members   -  Reputation: 80


Posted 30 September 2011 - 11:37 PM

LifeAI can already simulate emotions to some extent.

I think the word is "emulate".

#6 Trienco   Crossbones+   -  Reputation: 2224


Posted 01 October 2011 - 02:13 AM

LifeAI can already simulate emotions to some extent. [...]


Always an interesting subject (probably because it ended up as my diploma thesis). Unfortunately, the biggest challenge wasn't coming up with a model that would combine personality, mood, and emotion (and different "spaces"), but the part where the system needs to know how to evaluate an event. A concept like "something with negative consequences happening to an agent I have high sympathy for causes pity" is easy, but judging the consequences of events was a lot of tedious work with highly arbitrary weights. It's even worse when you have to consider the cause of an event or morally judge an action.

To get an idea what I'm talking about: http://www.uni-ulm.de/~hhoffman/emotions/downloads/simplex.pdf
It's a heavily edited and shortened version for some book.
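As a tiny illustration of the kind of appraisal rule described above (entirely hypothetical names and weights), the sympathy-plus-negative-consequence-gives-pity rule might look like this, with the constant being exactly the sort of arbitrary tuning knob that makes the work tedious:

```cpp
#include <cassert>

// Illustrative appraisal rule, in the spirit of the post above:
// an event with negative consequences happening to an agent we
// sympathize with produces pity proportional to sympathy and severity.
struct Event {
    double consequence;  // negative = bad for the agent it happens to
};

double pity(const Event& ev, double sympathyForTarget) {
    if (ev.consequence >= 0.0) return 0.0;  // nothing to pity
    const double kWeight = 0.5;             // arbitrary tuning weight
    return kWeight * sympathyForTarget * -ev.consequence;
}
```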
f@dz
http://festini.device-zero.de

#7 VJ01   Members   -  Reputation: 80


Posted 01 October 2011 - 04:34 AM

To get an idea what I'm talking about: http://www.uni-ulm.de/~hhoffman/emotions/downloads/simplex.pdf
It's a heavily edited and shortened version for some book.



The OCC model is good and readily implementable for video games and RPG books.

The name SIMPLEX may raise trademark concerns, because the term is famous in linear programming.

About Fig. 2:
In my opinion, Memory -> Mood-state is more accurate than Memory -> Appraisal. The latter implies self-study of the mind with reflection.

#8 IADaveMark   Moderators   -  Reputation: 2531


Posted 01 October 2011 - 08:37 AM

Entities have characteristics that describe them called "stats". [...] As stats in the simulation change, LifeAI updates the best action options for the entities.

This is the basis for what many of you have heard referred to as utility theory around here.

An EmotionAI would be pleasant too.

Utility can represent anything that can be put on a continuum, whether it be bullets in your gun, hunger, or a level of happiness, anger, etc. It's all numbers.

@ _orm_ - LifeAI is not based on a common approach to AI, but it probably has similarities.

I seriously disagree with this premise. While it is not in as wide usage as FSMs, BTs, etc., utility architectures have been around in game AI for a long time. In fact, there are entire games built on this foundation... e.g. The Sims. Admittedly, it has only recently been coming to the forefront... mostly in the past 3 years -- coincidentally since the release of my book, which is almost entirely about modeling behavior through utility.

Unfortunately the biggest challenge wasn't coming up with a model that would combine personality, mood and emotion, but the part where the system needs to know how to evaluate an event. [...]

You are correct in pointing out that this is the major pitfall. With no slight to the OP's project, putting together a routine to crunch numbers is actually a trivial process. Many AI programmers could build a utility-based decision architecture in a day or two. The trick is modeling what numbers to use. It's somewhat of a dark art to many people.

While dealing in concrete numbers is relatively simple (e.g. "how many bullets do I have left?"), taking a concrete number and overlaying an abstract "meaning" onto it is more difficult... e.g. "how does the number of bullets I have left relate to my need to reload?" Further still, attempting to codify something that is an abstract concept from the start is even more amorphous -- e.g. "how angry am I?" The ultimate is converting one abstract into another -- "how does my overall perception of danger affect my anxiety level?" Now THAT is the part that is tough for people to get a handle on.
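One common way of overlaying such an abstract meaning on a concrete number is a response curve. A minimal sketch, with an illustrative function name and an assumed quadratic curve shape (not taken from any particular library):

```cpp
#include <algorithm>
#include <cassert>

// Sketch of a "response curve": mapping a concrete quantity (bullets left)
// onto an abstract utility (urgency of reloading), normalized to 0..1.
double reloadUrgency(int bulletsLeft, int magazineSize) {
    double fill = static_cast<double>(bulletsLeft) / magazineSize;  // 0..1 fraction full
    // Quadratic falloff: urgency rises sharply as the magazine empties.
    double urgency = (1.0 - fill) * (1.0 - fill);
    return std::clamp(urgency, 0.0, 1.0);
}
```

The choice of curve (linear, quadratic, logistic, ...) is exactly where the "dark art" lives: the shape encodes how the designer thinks the abstract quantity should respond.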

So again, while the tool presented might shave a few hours off your workload, it is just a tool. And tools don't create believable AI... they only enable it. That said, many people sure could use some more tools.
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

#9 _orm_   Members   -  Reputation: 112


Posted 01 October 2011 - 11:18 AM

Sounds to me like it is being lauded as a silver-bullet approach to AI, in which case I would be inclined to agree with IADaveMark: while being able to model emotions and overall human satisfaction could greatly enhance any AI, it still has to complement other AI techniques, such as goal- or state-driven systems, in order to turn these numbers into meaningful action on the part of the agent.

I like the work you are doing, and it could actually prove to be a viable alternative to fuzzy data sets and other forms of decision-making, being based on emotions rather than on sets of data such as aggression, guts, and determination. I can actually see much more human-like behavior coming from such techniques.

Why does that prospect send a chill down my spine? :blink:



#10 Jeremiahg   Members   -  Reputation: 120


Posted 01 October 2011 - 03:33 PM

@IADaveMark - Ah, okay. So there IS an established term for this approach, "Utility". I appreciate the info.




#11 IADaveMark   Moderators   -  Reputation: 2531


Posted 01 October 2011 - 07:08 PM

Here are the slides from my lecture with Kevin Dill on utility theory from the 2010 GDC AI Summit.

http://www.intrinsicalgorithm.com/media/2010GDC-DaveMark-KevinDill-UtilityTheory.pdf
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

#12 Jeremiahg   Members   -  Reputation: 120


Posted 02 October 2011 - 01:04 AM

@ IADaveMark - Wow, there appear to be many similarities in our approaches to AI. I can see some of the challenges I've encountered discussed in your work too. I would love to get your feedback on LifeAI sometime :)

#13 VJ01   Members   -  Reputation: 80


Posted 02 October 2011 - 06:25 AM

http://www.intrinsicalgorithm.com/media/2010GDC-DaveMark-KevinDill-UtilityTheory.pdf


The abstract and executive summary are missing. I am not saying they are necessary, but they would have been a helpful explanation for me. Then I hit a macroeconomics digression that I didn't get the meaning of, because I wanted to know a couple of other alternatives besides utility in macroeconomics. I understand this is about utility too, but there are other things in macroeconomics as well. "In economics, utility is a measure of the relative satisfaction from, or desirability of, consumption of various goods and services."

I think it got lost between the lack of meaning of utility in micro and macro and then just economics. In my opinion, overall, I just don't think it was a strong explanation.

#14 IADaveMark   Moderators   -  Reputation: 2531


Posted 02 October 2011 - 08:14 AM

The abstract and executive summary are missing. I am not saying they are necessary, but they would have been a helpful explanation for me.

It's lecture slides from GDC... the only thing "missing" is Kevin and myself actually speaking.


I think it got lost between the lack of meaning of utility in micro and macro and then just economics. In my opinion, overall, I just don't think it was a strong explanation.

Again, if you actually hear a lecturer, it makes more sense. Otherwise the lecturer would have nothing to do because he would simply be reading the slides like a book. Kind of a silly approach, don't you think?



Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

#15 VJ01   Members   -  Reputation: 80


Posted 30 October 2011 - 08:01 AM

The EmotionAI mentioned above is composed by me in an Objects Library project on CodePlex:
http://olib.codeplex.com/wikipage?title=Emotion%20Engine%20start-up%20business%20with%20freeware%20version%201

A copy-paste of half of the essay:

Set and multi-map data structures to start; avoidance of function pointers in the first version.
Usefully, map levels would be designed to feed input into a few syllogisms and then a value from a multimap.

For example, in a 3D first-person game:

requirement: Player 2 used/held her/his large thunder-hammer weapon for 2.4+ seconds.
result: Weapon Use Profiling recorded this event sequence occurring 40%-60% of the time in level 2.
concern 1: Player 2, trying to hack his future profile, changes weapons intermittently to pistol and fists to throw off the profiling.
concern 2: Player 1 tries to set up Player 2 by changing positions so that aliens are trapped in an arcane situation that allows for only 1-2.3 seconds of weapon use.

Standard player profile upon leaving level 2 (emotion award grades 1-7):
A) Extremely brave (coward = 1, too brave = 7)
B) Confident / very confident (paranoid = 1, arrogant = 7) (version 2: pompous = 10)
...extreme preference for 3 emotions; up to 4 emotions max in version 1.

Arbitrary weapon use:
0-30 = DNU (do not use), AND use other profiling results.
40-60 = explained as it is
60% - 70%
70% - 85%
85% - 100% -> use your imagination

How to use this profile is a simplification valid regardless of circumstance.
====






Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.



PARTNERS