Nice Coder

Creating A Conscious Entity

131 posts in this topic

What I find strange is how easily you step over the whole intelligence problem.

It's actually a long-standing point of discussion that started with Alan Turing (the designer of the model our computers are still built on, memory and processor) in the 1940s and has still not been settled.

Question: "When is a computer (-program) intelligent?"

Try to find a clear, unambiguous answer to this question. Turing's answer was the following experiment (called the Turing test):

We have one researcher, call him A, a test subject, B, and of course our computer, C. We put A, B and C in separate rooms but allow them to communicate in some way (usually 'chatting', though you could of course implement a speech production program, etc., but that's not the point). The test is that A must try to find out which of the two unknown 'persons' he is talking to is the computer and which is the test subject. To do this he can ask any kind of question, and B and C are allowed to answer however they like. Turing thought that if C was consistently mistaken for the human subject, then C was to be called intelligent.
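The setup is easy to sketch in code. Here's a minimal, purely illustrative Python harness for the imitation game, where the judge, the human and the machine are just callables; all the names and the canned answers are invented:

```python
import random

def imitation_game(judge, human, machine, rounds=5):
    """Toy sketch of Turing's imitation game (all interfaces hypothetical).

    `human` and `machine` map a question to an answer; `judge` sees both
    answers without labels and must pick which one is the machine.
    Returns the fraction of rounds where the judge identified the machine.
    """
    correct = 0
    for _ in range(rounds):
        question = "What do you think about rainy days?"
        # Hide the identities behind a random ordering.
        pair = [("human", human(question)), ("machine", machine(question))]
        random.shuffle(pair)
        guess_index = judge(question, [answer for _, answer in pair])
        if pair[guess_index][0] == "machine":
            correct += 1
    return correct / rounds

# A canned machine, a canned human, and a judge that guesses at random:
machine = lambda q: "I enjoy weather of all kinds."
human = lambda q: "I mostly stay in and read."
random_judge = lambda q, answers: random.randrange(len(answers))
rate = imitation_game(random_judge, human, machine, rounds=100)
```

A machine is doing well, in Turing's sense, when the judge's identification rate stays near chance (0.5).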

What Turing does in this experiment is define intelligence by reference to the only known form of intelligence, namely our own. Of course his experiment leaves a lot of questions about intelligence open, but how might you define intelligence, which we only know from ourselves, without reference to ourselves?

A couple of questions to think about:
- Say you had a baby brother, about when would you start calling him intelligent?
- Would you call a chimp intelligent, and what about a dolphin?
- What about an ant? Or a nest of ants?
- What about a single neuron in our brain, or 2 or 5 billion?

Edit: Oh, and before you type another word you simply MUST have read "Gödel, Escher, Bach: An Eternal Golden Braid". It will give you enough questions to think about that we could just as well close this discussion, because you aren't going to solve them in your lifetime.
Consider the basics:
Chinese Room.
The question of intelligence and understanding has been discussed for millennia. It is quite possible that, following arguments like the brain in a vat as well as Gödel's incompleteness theorems, it is not possible for us to ever define consciousness or self-awareness in an objective way. It also seems, from a purely logical point of view, that our current languages are not able to express some basic concepts.

It's a bit like the kitten argument someone provided. Even our brightest physicists are not able to provide a sound definition of space and time without running into circular arguments.

Example: your average four-year-old has an abstract concept of natural numbers. (S)he knows what 'three apples' means and can easily extend this concept to other concrete entities (like other objects) and abstract ones (like hours/days, or the number of times a specific action is performed) without the need for a (formal) definition.
In fact, it took mathematicians millennia to come up with a (very crude) formal concept of natural numbers, which then runs into Gödel's incompleteness theorem: while it may be sound (to some extent, depending on the set of axioms used), its consistency cannot be proven from within the system.

Current efforts have led to a number of chatbots that act like Chinese rooms and appear intelligent, self-aware and conscious at first glance (like A.L.I.C.E.).
There has also been an AI project (started long before the AI winter), written in Lisp, which was indeed able to act intelligently and had an internal representation of a very limited world. This world consisted only of a set of simple three-dimensional geometric shapes (cubes, cones, pyramids and cylinders of different colours) placed on a table. The program was able (using a very limited vocabulary) to describe this world and the relations of the objects therein. It could also alter this world from user input (like "place the red cube on top of the bigger one") and afterwards describe the changes (e.g. it would perceive another 'pyramid' formed by the stacked cubes).
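A toy version of such a micro-world is easy to sketch. This hypothetical Python snippet only shows the idea of an internal world model that commands act on; the object names and the one physics rule are invented for illustration:

```python
class BlocksWorld:
    """Toy sketch of a SHRDLU-style micro-world (all names invented)."""

    def __init__(self):
        # Each object maps to whatever it is resting on ('table' at bottom).
        self.on = {"red cube": "table", "big cube": "table", "pyramid": "big cube"}

    def place(self, obj, target):
        # Refuse to stack onto a pyramid, mimicking a simple physics rule.
        if target == "pyramid":
            return "I CAN'T PUT ANYTHING ON A PYRAMID."
        self.on[obj] = target
        return "OK"

    def describe(self, obj):
        return f"THE {obj.upper()} IS ON THE {self.on[obj].upper()}."

world = BlocksWorld()
world.place("red cube", "big cube")   # "place the red cube on top of the bigger one"
world.describe("red cube")            # -> "THE RED CUBE IS ON THE BIG CUBE."
```

The point is that the program's "understanding" is just consistency between parsed commands and this internal state.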

Given that the last example was developed about 40 years ago, it seems that AI research has shifted its focus to more practical things like expert systems, ANNs (for well-defined tasks like pattern recognition), GAs (automated processes, advanced scheduling) and data/knowledge mining (and any combination of them).

You are aiming for the Holy Grail of computer science, philosophy and psychology. Not that this implies you must fail (heck, I dreamt of creating such a thing too, some years ago), but a lot of knowledge and insight is required, so you have a long road ahead of you. Otherwise you might end up like the guy who hand-wired thousands of artificial neurons (in hardware) and probably still wonders why his creation doesn't show any sign of intelligence [smile].

Sorry for the long post and hang on to your dreams,
Pat.
Quote:
Original post by Horizon
What I find strange is how easily you step over the whole intelligence problem.


It's not strange; actually it's pretty logical. You have pointed out yourself how futile it is to attempt to answer it. So we'll just go around the question.

Is sidestepping it strange? No, mathematicians do it all the time. Just cancel the thing out.

People, people, stop looking to philosophy for answers. When was the last time philosophy gave an answer to something? (Don't answer that. It's sort of trollish.)

Meanwhile, we can take potshots at it, poke around and see what happens.

darookie: we don't need a formal concept of intelligence (yet). We just need an implementation that seems roughly intelligent, even if we don't know for sure what that means. We'll know it when we see it anyway.

Alice was a grammatical-parser kind of bot, and I'm betting the Lisp-based bots were too (string handling, Lisp, etc.). This is not what I'd like to aim for. I don't really care if it talks or beeps; I care about the learning. Language should follow accordingly.

Now, can we get a link or name for that 40-year-old AI with a limited 3D world?

Nice Coder: so you pre-seeded the PAD scale with tokens and associated values... do you modify those weights afterwards? Or add new tokens? I'd want to do that.

Pre-seeding is probably the only way, since in animals and humans this is accomplished with physical punishment/rewards, and, well, there are no inherently negative/positive things you can do to a program.

Quote:

It also changes the values of the PAD from other things. Like, when it gets new information, it gets happier. When it gets told a lot of what it has been told before, it gets bored, etc.


My approach was modifying dominance according to the model hits: if the model succeeds, dominance increases (it knows what's going on), and if it doesn't, dominance decreases (it's getting lost).
Arousal would come from the speed of the input... this would need timestamped inputs.
And I'm unsure about pleasure/displeasure.

And the effect of the internal state on the model would be:

dominance should affect the model thresholds; I am unsure how.
arousal produces faster responses, probably skipping a calculation here and there; low arousal gives a thorough response.
pleasure is what the AI will attempt to achieve, so this is where goals go.
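A rough sketch of those update rules in Python; the constants, clamping range and the arousal formula are all arbitrary assumptions, not a worked-out model:

```python
from dataclasses import dataclass

@dataclass
class PADState:
    """Sketch of the pleasure/arousal/dominance updates described above."""
    pleasure: float = 0.0
    arousal: float = 0.0
    dominance: float = 0.0

    @staticmethod
    def _clamp(x):
        # Keep each dimension in [-1, 1] (an arbitrary choice of scale).
        return max(-1.0, min(1.0, x))

    def on_input(self, model_hit: bool, seconds_since_last: float):
        # Dominance tracks how well the internal model predicts the input.
        self.dominance = self._clamp(self.dominance + (0.1 if model_hit else -0.1))
        # Arousal rises with fast (timestamped) input, decays when input is slow.
        self.arousal = self._clamp(1.0 / (1.0 + seconds_since_last))

state = PADState()
state.on_input(model_hit=True, seconds_since_last=0.5)
```

Goals (the pleasure dimension) would then be whatever inputs the entity learns to seek out, which is the part left open above.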

Can you detail a bit more what you did in this regard?
What I mean is that it's strange in a discussion that tends, for a large part, towards "creating intelligence", "creating consciousness", "creating self-awareness".
In any practical sense, before you can create something you must know what it is and how it works; hence my question. If this discussion had been solely about creating an interesting chatbot, it would have been an entirely different matter.

About the chatbot, then: what kind of parser are you using? Have you tried looking at categorial grammars and formal semantics? Try this link: http://www.phil.uu.nl/preprints/ckipreprints/PREPRINTS/preprint032.pdf. Or are you trying for a more natural implementation?

[Edited by - Horizon on November 9, 2004 10:29:52 AM]
Quote:
Original post by Madster
People, people, stop looking into philosophy for answers. When was the last time philosophy gave an answer to something? (don't answer that. I'ts sort of trollish.)

Madster, Madster, start giving credit where credit is due.
FYI, philosophy was the first science and defined the methodology for any scientific work, which is what separates science from alchemy.
Quote:

Meanwhile, we can take potshots at it, poke around and see what happens.

OK. So that qualifies you as an alchemist [wink].
Quote:

darookie: we don't need a formal concept of intelligence (yet). we just need an implementation that seems roughly intelligent, even if we don't know for sure what that means. we'll know anyway when we see it.

Contradiction. How can we know whether a system is intelligent without a formal description of what intelligence is? Given an arbitrary definition of intelligence, I can provide a number of programs that will roughly fit the definition, yet you wouldn't consider these programs intelligent. We need to classify different levels of intelligence by attaching properties to each level (bacteria -> insects -> fish -> mammals -> apes -> humans -> ...?). Even Turing tests have different levels and still cannot finally decide whether a system is intelligent or not. Nowadays a system is considered intelligent if it is able to mimic intelligent behaviour (e.g. within a specified domain).
Quote:

Alice was a gramatical parser kind of bot, and i'm betting the Lisp based bots were too (string handling.. lisp.. etc) This is not what i'd like to aim for. I don't really care if it talks or beeps, i care about the learning. Language should follow accordingly.

That proves you don't know what A.L.I.C.E. is. FYI, intelligence can be regarded as pattern matching and association, which is exactly what Alice does (in a very limited frame).
Quote:

now, can we get a link or name to that 40-year old AI with a limited 3D world?

I would love to give a brief historical overview of AI, but there are enough books on that topic (just take a look around).
The program I mentioned was called 'Blocks Micro World', developed at MIT by students of Minsky at the end of the 1960s. There was a similar program written in 1964 by Adolfo Guzmán.
Other versions were written by Minsky & Papert and Patrick Winston, and the most famous (the one I was referring to), called SHRDLU (don't ask; nerds [smile]), was written by Terry Winograd.

Sample dialog from the program (the capitalized lines are responses by SHRDLU):

pick up red block
OK
find a block which is taller than the one you are holding and put it into the box
BY “IT” I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I
AM HOLDING
OK
what does the box contain?
THE BLUE PYRAMID AND THE BLUE BLOCK
what is the pyramid supported by?
THE BOX
how many blocks are not in the box?
FOUR OF THEM
is at least one of them narrower than the one which I told you to pick up?
YES, THE RED CUBE
is it supported?
YES, BY THE TABLE
can the table pick up blocks?
NO
the blue pyramid is mine
I UNDERSTAND
I own all blocks which are not red, but I don’t own anything which supports a pyramid
I UNDERSTAND
do I own the box
NO
do I own anything in the box?
YES, TWO THINGS: THE BLUE BLOCK AND THE BLUE PYRAMID
will you please stack up both of the red blocks and either a green cube or a pyramid?
OK
is there a large block behind a pyramid?
YES, THREE OF THEM: A LARGE RED ONE, A LARGE GREEN CUBE AND THE
BLUE ONE
put a small one into the green cube which supports a pyramid
OK
put the littlest pyramid on top of it
OK
had you touched any pyramid before you put the green one on the little cube?
YES, THE GREEN ONE
when did you pick it up?
WHILE I WAS STACKING UP THE RED CUBE, A LARGE RED BLOCK AND A
LARGE GREEN CUBE
why?
TO GET RID OF IT
why did you do that?
TO CLEAR OFF THE RED CUBE
why did you clear off that cube?
TO PUT IT ON A LARGE GREEN CUBE
why did you do that?
TO STACK UP THE RED CUBE, A LARGE RED BLOCK AND A LARGE GREEN
CUBE
why did you do that?
BECAUSE YOU ASKED ME TO


Looks pretty intelligent to me.
Yet it is just a very primitive program with only very limited knowledge.

Clicky.

I hope this helps you,
Pat.
About knowing the insides of the car: no, you don't need to know that to drive. But I picked "subset" for a reason. When you start coding relationships, the natural inclination is to come up with some sort of language for things like drop CAUSES fall, apple ISA fruit, etc. These are concepts you need some representation of in order to understand information, but when that representation is hard-coded, the program can't learn their meaning. It's useless for a program to have information if it doesn't know what it means.
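One way to keep such relations learnable rather than hard-coded is to store them as plain triples of tokens, so CAUSES and ISA are just symbols the program has encountered, with no meaning fixed in advance. A minimal Python sketch (names invented):

```python
from collections import defaultdict

class TripleStore:
    """Toy store of (subject, relation, object) triples picked up at runtime."""

    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subject, relation, obj):
        # Relations are ordinary tokens; nothing here interprets them.
        self.triples.add((subject, relation, obj))
        self.by_subject[subject].add((relation, obj))

    def query(self, subject, relation):
        return [o for r, o in self.by_subject[subject] if r == relation]

kb = TripleStore()
kb.add("drop", "CAUSES", "fall")
kb.add("apple", "ISA", "fruit")
kb.query("apple", "ISA")   # -> ["fruit"]
```

Whatever "meaning" ISA has would then come from how the program uses the triples, not from the symbol itself.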
Quote:
Original post by Horizon
In any practical sense before you can create something you must know what it is and how it works, hence my question.
Have you ever heard of emergence?
There have been a host of experiments in which emergent behaviour was discovered that wasn't expected or fully understood.
Eh... well, here we go =) This is such a controversial topic.
I wish I had an implementation to show; arguments have much more weight that way.
Anyway, long post. Please read, and keep replies civil...

First, the funnies:
Quote:

Madster, Madster, start giving credit whom credit is due.
FYI philosophy was the first science and defined the methodology for any scientific work that separates science from alchemy.

I said that was trollish ;) I'm merely pointing out that philosophy isn't going to help here, at least not when it's used to argue why we can't do what we're trying to do. There are better tools for proving that, like mathematics.
And yeah, I checked a bit, and philosophy was not the first science... it's actually what science was called before, so they're kind of the same thing. But enough of that.
Also, alchemists were the precursors of chemists, a respected and proven science nowadays. You've got to start somewhere. I'll never be ashamed of poking things until I see something interesting.

Now the rebuttals:
Quote:
Contradiction. How can we know whether a system is intelligent without a formal description of what intelligence is?

and then:
Quote:

Example: your average four-year-old has an abstract concept of natural numbers. (S)he knows what 'three apples' means and can easily extend this concept to any concrete (like other objects) and abstract (like hours/days or the number of times a specific action is performed) entities without the need of a (formal) definition.

This is a contradiction. Remember, most of the time the practice comes before the theory (like you just said). So we're trying practical ideas.


About ALICE: I've seen it and talked to it. I don't remember where, but I found an online implementation.

Quote:

FYI intelligence can be regarded pattern matching and association, which is exactly what Alice does (in a very limited frame).

And pattern matching is all we're talking about here. Alice only does grammatical pattern matching, which makes it a grammatical-parsing bot in my view.
Actually, upon closer inspection, it seems simpler than that. It only matches preprogrammed patterns defined in AIML (forgot about that bit... it's been a while). So it's not intelligent by any means, and even if you don't know that beforehand, you'll find out within a three-minute chat.

For an example of non-grammatical pattern matching, look up the Babble Perl script and the MegaHAL bot.
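For the curious, the core of that kind of statistical (non-grammatical) matching is roughly a Markov chain over words. A toy Python sketch; the real MegaHAL is considerably more involved, with forward and backward chains and a reply scorer:

```python
import random
from collections import defaultdict

def train(text, order=1):
    """Build a word-level Markov chain: context tuple -> possible next words."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def babble(chain, length=8, seed=None):
    """Generate a reply by random walk; no grammar involved at all."""
    key = seed or random.choice(list(chain))
    out = list(key)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

chain = train("the cat sat on the mat and the cat slept")
reply = babble(chain, seed=("the",))
```

Since the model is just token statistics, it can be "taught" any language you feed it, which is the property mentioned below about MegaHAL and its derivatives.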

From the SHRDLU info page linked:
Quote:

The system answers questions, executes commands, and accepts information in an interactive English dialog... The system contains a parser, a recognition grammar of English, programs for semantic analysis, and a general problem solving system


As I mentioned earlier, every AI in Lisp I know of is a grammatical parser. It's still an interesting implementation, and I'll give it a look (the graphical Java version, at least). However, this one will appear intelligent only for a short time (like ALICE), until you find out it doesn't learn (remembering is not learning).

The MegaHAL bot and its derivatives can be taught any language. I can fetch the links if needed; I have them somewhere.

And the explanations:
Horizon:
Quote:

In any practical sense before you can create something you must know what it is and how it works, hence my question.

Exactly. We must know how the process of learning and pattern matching works, and then create it. The intelligence in this kind of AI isn't created, it is taught. By the way, yes, I'm looking for a natural implementation that doesn't need a predefined ruleset. This would allow the same method to be used for more than just text, maybe for controlling outputs and such.

jtrask:
Quote:

...the natural inclination is to come up with some sort of language for things like drop CAUSES fall, apple ISA fruit, etc.
These are concepts that you need some representation of in order to understand information, but when that representation is hard coded, the program can't learn their meaning. It's useless for a program to have information if it doesn't know what it means.

In a statistical AI, the concepts are gathered and linked automagically. Our brains do this as well. Avoiding absolute definitions is probably a good idea. For example, the typical baby questions:

What is a lie?
something that is not the truth.
What is the truth?
a fact.
What is a fact?
...

And an easier example:

What is an apple?
It's a fruit.
What is a fruit?
Blah blah...

And you'll eventually resort to showing an apple or a fruit.
That breaks the circular definition with a token from another set of data. So in the end, you can only relate gathered tokens amongst themselves; you can't define absolute meaning.
And this is OK.
My view is that a bot needs more than one sense to seem intelligent. That's why I find the 3D-world approach interesting, and probably why it seems intelligent at first glance: it's relating tokens from two sets of data (geometry and text), and since we perceive both as well, we see the connections and regard them as intelligence (that is, until you find out the text engine doesn't learn new things).

Now, can anyone comment on the coding part? I believe this is where the original post was meant to lead. My main issue is linking different sets of data together without the size of the relationship matrices exploding... also, I can't find a way to model continuous input, like floats (discrete input is easy, like text: each new character is a new token, and there aren't many of them, so it's OK).
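One common trick for the continuous-input problem is to quantise each float into one of a fixed number of bins and treat the bin index as the token, which keeps the alphabet small like text. A sketch (the bin count and range are tunable assumptions):

```python
def tokenize_continuous(value, lo, hi, bins=16):
    """Map a float in [lo, hi] to a discrete bin index usable as a token."""
    if value <= lo:
        return 0
    if value >= hi:
        return bins - 1
    return int((value - lo) / (hi - lo) * bins)

# A float stream becomes a short token alphabet the relation matrices can handle:
tokens = [tokenize_continuous(v, 0.0, 1.0, bins=4) for v in (0.1, 0.4, 0.9)]
# -> [0, 1, 3]
```

The trade-off is resolution versus alphabet size, which feeds directly into the matrix-explosion worry above; overlapping or adaptive bins are refinements of the same idea.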

PS: if I sound pedantic, I don't mean to. This is a hot topic that has always interested me, and most people (AI winter) won't touch it with a 10-foot pole.
Nice replies, all [grin]!!

Yeah, I don't like Alice either... (I would, IMO, call Matrixbot more humanlike than Alice!).

With MegaHAL... it doesn't seem to be very... convincing (or anything less than insane and dumb).

How about a general knowledge base (of any kind; the exact form isn't known at present)?

You ask the KB a question:

What is 2 times the square root of the logarithm of one billion and 5?

To which the bot would reply:
Well, 2.65, of course!

You can also feed it any type of information:

If X causes Y, and nothing else causes Y, then if Y exists, X exists.

In other words: if something is the only cause of something else, then whenever the second thing exists, the first thing must also exist.

Parsing is going to be hard, though... (but then again, this is for a language other than English, so it would be easier).

Perhaps another thing which would be handy:

If the user asks the question "What is a " y, you will output "A " y " is a " query("What is", y) "."

That's still going to be hard to parse (a scripting language, in a string literal...), but it would make it more extensible.
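That template rule might be roughly sketched like this in Python, with `query` stubbed out as a hypothetical knowledge-base lookup (the KB format and names are invented):

```python
import re

def query(relation, topic, kb):
    """Hypothetical knowledge-base lookup used by the response template."""
    return kb.get((relation, topic), "something I don't know about")

def respond(line, kb):
    # Template rule: 'What is a Y?' -> 'A Y is a <query result>.'
    match = re.match(r"what is an? (\w+)\??", line, re.IGNORECASE)
    if match:
        y = match.group(1)
        return f"A {y} is a {query('What is', y, kb)}."
    return None

kb = {("What is", "cube"): "shape"}
respond("What is a cube?", kb)   # -> "A cube is a shape."
```

Keeping the template as data (rather than code in a string literal) sidesteps the parsing-a-script-inside-a-string problem, at the cost of being less general.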

As another idea:
Data-mine the statement/response structures (plus context: previous statements/responses?). Use that information (plus the questions) to figure out a probable answer.

Any good ideas here?
From,
Nice coder
I've been doing a bit of thinking.

How about this:
The entity has two needs:
the need for food and the need for water.

There are two types of food: a treat and a meal.
There is one type of water.

When it hasn't had water for a long period of time, or food for a long period of time, it dies.

From its hunger and thirst, it has the desire to be fed and the desire to be given water (only the user can do these things).

It also has the desire to learn, and the desire to speak.

These are all pre-programmed.

It uses rules (which were learned from the user) and backward-chains (like an expert system) until it reaches a point at which it gets fed, watered, etc.

It then checks the things it has to do against the reward for doing them. If the reward is too low, it keeps checking other rules for other things it could do to get fed, watered, etc.

If it finds no matches, it will just send a random response to the user.

From its desire to speak, it uses an algorithm to select the best response, based on the input given.

It would probably look at the differences between previous inputs and use those to figure out what to say.
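The backward-chaining part might be sketched like this in Python. Everything here is illustrative: the rule format is a plain (conditions, conclusion) list, and reward checking is omitted:

```python
def backward_chain(goal, rules, facts):
    """To reach `goal`, find a rule that concludes it and recursively
    establish the rule's conditions, bottoming out at known facts
    (here: actions the entity can perform directly)."""
    if goal in facts:
        return [goal]
    for conditions, conclusion in rules:
        if conclusion == goal:
            plan = []
            for condition in conditions:
                subplan = backward_chain(condition, rules, facts)
                if subplan is None:
                    break  # this rule can't be satisfied; try the next one
                plan.extend(subplan)
            else:
                return plan + [goal]
    return None  # no way to reach the goal

# Learned rules: begging leads to being fed; being fed cures hunger.
rules = [(["beg"], "get fed"), (["get fed"], "not hungry")]
plan = backward_chain("not hungry", rules, facts={"beg"})
# -> ["beg", "get fed", "not hungry"]
```

Scoring each candidate plan by expected reward, as described above, would slot in where the first satisfiable rule is currently returned.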

From,
Nice coder
What you want is unambiguous information that is easy to reason with. The simplest way to do this, apart from inventing your own language, is probably to use some sort of formal logic. You could implement this on two levels:

Either go with English, write a parser (for example a categorial-grammar parser: very good for grammatically correct input, bad for fragments) and combine it with a semantics (second-order logic or something like that) linked to the parser.

Or go with second-order logic right from the start.

Of course, an indefeasible logic which considers only truths and falsehoods (1s and 0s) may just not cut it. In that case, consider using a nonmonotonic logic, which can reconsider earlier "knowledge", and possibly include stochastics (probabilities of something being true, instead of absolutes).
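As a tiny illustration of graded truth values, here is a Bayesian-style belief update in Python, where a statement's "truth" is a probability that new evidence can revise, so earlier knowledge is never final (all the numbers are arbitrary):

```python
def update_belief(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: revise P(statement) given one piece of evidence.

    likelihood_if_true/false: how probable the evidence is under each case.
    """
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1.0 - prior) * likelihood_if_false)

# Start fairly sure that "birds fly"; observing a penguin-like counterexample
# (much likelier if the statement is false) weakens the belief:
belief = 0.9
belief = update_belief(belief, likelihood_if_true=0.1, likelihood_if_false=0.8)
# belief drops below 0.9 instead of flipping to a hard 0.
```

This is only one flavour of non-absolute reasoning; nonmonotonic logics handle the same kind of revision symbolically rather than numerically.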

Because we are talking about an "intelligent" entity here, we probably want to include knowledge and belief (modal logic) and possibly desires and intentions (see for example Cohen & Levesque, 1990? or Rao & Georgeff, 19??).

What you need to realise is that you are going to need a whole lot of specialized techniques to implement good models of knowledge and reasoning, language, or anything really, so you'd better try to catch up with the recent scientific papers on the subject; at least, that's what I'd do.
Quote:
Original post by Madster
Eh... well here we go =) this is such a controversial topic
now the rebuttals:
Quote:
Contradiction. How can we know whether a system is intelligent without a formal description of what intelligence is?

and then:
Quote:

Example: your average four-year-old has an abstract concept of natural numbers. (S)he knows what 'three apples' means and can easily extend this concept to any concrete (like other objects) and abstract (like hours/days or the number of times a specific action is performed) entities without the need of a (formal) definition.

This is a contradiction. Remember most of the time the practice comes before the theory (like you just said). So, we're trying practical ideas.

Out of context: your parser is broken [wink]. The first refers to 'intelligence', while the second refers to an abstract concept (numbers) that is to be stored and interpreted without a formal system. Just because you read 'formal definition', you still need to put it into context. This is exactly what intelligent behaviour is all about: context. Applying information in a specified context [smile]. How exactly this is done is another question, and it indeed requires scientific methods. The main difference between my natural-number example and your 'practical way' is that my example requires a fully functional AI device (or program, if you will), while your 'practice before theory' wants to create such a thing without the theory behind it.
Parsing and defining any kind of language requires language and linguistic theory (maybe some psychology of perception). To define rules and semantic networks, you need to know the algorithms and their limits. The list goes on.

Chatbots like Alice have very nice parsing engines. Look into these. Data mining provides ideas and implementations of knowledge representation and mining. Use these and combine them with a parser. You might end up with a KB query system that uses natural language for input and output. Next, combine this system with the concepts of the GPS (General Problem Solver) and derivatives thereof, and you should get a learning system that is able to acquire and reproduce knowledge as well as solve problems. The Turing Award would be yours for sure.

If you finally add some concepts for modelling emotion and motivation (look into research on autonomous agents for more information), you *should* get what you are aiming for. I don't know if there are any research groups currently working on that, but I am sure this task cannot be completed by a single person. We should form a group. I'd be glad to put some of my time and knowledge into this. I was involved in an EU-sponsored project that dealt with a framework for autonomous intelligent agents (clicky) and know a little about some of the aspects (esp. data mining, rule-based programming and formalisms).

OT:
About A.L.I.C.E.: the AIML is connected to a system that allows for learning new patterns and putting them into a context (again stored in AIML, which of course forms the limit of its learning; e.g. there is no second-order logic machine attached).

The same goes for SHRDLU and similar programs. These systems can learn. They all have the same mechanisms built in that are used in knowledge and data mining (at least in rudimentary form; remember that these programs are three to four decades old and very limited in their context [world]). Also, you seem to overestimate the parsing thing. Parsing is only necessary to communicate with the backend, like your brain does during language processing. While the human brain is a multi-interface device that handles tactile, visual, phonetic and emotional [internal] input, programs are limited in their communication interface. Written language is so much easier to process that it is simply the best-suited interface between man and machine (for now, at least).

I also think it's important to remember that language itself is much more than just parsing grammar. For a human speaker, most words are connected with some kind of meaning. The difference between Alice and SHRDLU is that SHRDLU knows the meaning of what it says and what the user requested. The program is able to go from parsed keywords to internal concepts of its (very small) world.

Nice discussion,
Pat.
Nice discussion indeed, darookie. I see now that most of our approaches are different. I can't work on this until I'm on vacation, and when I do start I need to get some foundation work done first, like sparse matrices and maybe a simple knowledge-model API to try different approaches quickly. While it's unlikely that a group could be formed, from the looks of the different approaches, I'd gladly go somewhere to share ideas and advances... and more importantly, logs!

Quote:
Original post by Horizon
What you want is nonambiguous information that is easy to reason with. The simplest way to do this apart from inventing your own language is probably to use some sort of formal logic.

Yup. A statistical AI is hard to get exact information out of, and it also needs extensive training. For math you'd have to put it through some years of school =)
And it would still babble a lot and make people around it uncomfortable, as usual for this kind of AI.

Quote:
Original post by darookie
Out of context - your parser is broken [wink] - the first refers to 'intelligence' while the second refers to an abstract concept (numbers) that is to be stored and interpreted without a formal system.

This is what I meant. You still regard the child as intelligent even if he knows no formal definition of numbers. Some domestic animals can count to two or three (try feeding one of them 2 cups and the other 3 while delivering the same total amount; they'll notice), so by extension an AI doesn't need formal definitions of concepts to be intelligent.

By the way, it's interesting that you brought up the scientific method. We're at the hypothesis stage here. When one is happy with the hypothesis, one should go and try to prove it. If it turns out to be untrue, one can modify the hypothesis and give it another go. Being scientific doesn't mean one has to be sure that something is going to work before attempting it.

Quote:

Parsing and defining any kind of language requires language and linguistic theory (maybe some psychology of perception). To define rules and semantic networks you need to know the algorithms and their limits. The list goes on.

And there's the main issue: I'm not taking the parser road, so the reasoning doesn't apply.

Quote:

OT:
About A.L.I.C.E - the AIML is connected to a system that allows for learning new patterns and putting them into a context (again stored in AIML, which of course forms the limit of learning, e.g. no 2nd order logic machine attached).

O_o I didn't read that bit... I thought it was only storing tokens matched against an existing pattern.

By the way, I tried the Java version of SHRDLU and couldn't get it to run without hogging 100% CPU and becoming unresponsive within a few seconds =/ and this on a 2500+ Barton AMD with 1GB of RAM.

Quote:

Also, you seem to over-estimate the parsing part. Parsing is only necessary to communicate with the backend - like your brain does during language processing. While the human brain is a multi-interface device that handles tactile, visual, phonetic and emotional [internal] input, programs are limited in their communication interface. Written language is so much easier to process that it is simply the best-suited interface between man and machine (for now, at least).

That's why I'm not trying to parse. And a multi-interface setup would be possible with some tinkering (webcams, microphones, etc... or simulated in a 3D world)... but that's too much work =D

In fact I've read about such work, but the approach taken was genetic algorithms.

Oh, and there was some talk of binary logic producing not-too-good results and replacing it with probabilities - this work has been done, and you can pick up the algorithms, already well studied, from the Fuzzy Logic field. They work great for these things, but tweaking takes a long time.

Does anyone know of any sparse multi-dimensional matrix libs for C++ that I could use?
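For what it's worth, the core of a sparse multi-dimensional matrix is just a map from index tuples to values. Here's a minimal Python sketch of the idea (class and method names are my own invention); in C++ the same structure would be a std::map keyed on an index tuple:

```python
class SparseMatrix:
    """Sparse N-dimensional matrix: stores only non-default entries
    in a dict keyed by index tuples."""
    def __init__(self, default=0.0):
        self.default = default
        self.data = {}

    def __getitem__(self, idx):
        return self.data.get(idx, self.default)

    def __setitem__(self, idx, value):
        if value == self.default:
            self.data.pop(idx, None)   # never store default values
        else:
            self.data[idx] = value

m = SparseMatrix()
m[1, 2, 3] = 0.5
print(m[1, 2, 3])   # 0.5
print(m[9, 9, 9])   # 0.0 (default, not stored)
print(len(m.data))  # 1
```

Memory then scales with the number of non-default entries instead of the full matrix volume, which is exactly what a large, mostly-empty knowledge model needs.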
Quote:
Original post by Nice Coder
With MegaHAL... it doesn't seem to be very... convincing (or anything less than insane and dumb).


Forgot to reply to this =)
Yeah, MegaHAL is a trollish brat... but still interesting.
The reason it doesn't seem convincing is lack of continuity, and the reason for that is that it has no internal state.
Each new response starts from a blank slate, none of them are connected to each other, and humans notice this right away. The PAD knowledge model would take care of this to some extent, keeping the tone of the outputs somewhat homogeneous... but there's still the issue of topic-hopping. How can one make it stay on topic? Maybe give it a sort of short-term memory?
How would this short-term memory work? Hm... a very small knowledge model reusing tokens from the big one?
Short term memory... interesting.

Idea:

Objects can exist either in long term, or short term memory.

When an object gets talked about, mentioned, or is used by reasoning (i.e. it is used by a rule), it is moved from long-term memory to short-term memory.

Objects move back to long-term memory after a period of time (measured in messages sent/received), which is adjusted based on how good the object is (number of links, general goodness coefficient).
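A rough Python sketch of that promotion/demotion rule - the TTL formula and the "goodness" fields are my own invention, just to make the idea concrete:

```python
class Memory:
    def __init__(self, base_ttl=10):
        self.long_term = {}    # name -> object record
        self.short_term = {}   # name -> messages left before demotion
        self.base_ttl = base_ttl

    def mention(self, name, obj):
        """Called when an object is talked about or used by a rule."""
        self.long_term[name] = obj
        # better-connected objects stay in short-term memory longer
        goodness = obj.get("links", 0) * obj.get("coeff", 1.0)
        self.short_term[name] = self.base_ttl + int(goodness)

    def tick(self):
        """Called once per message sent/received."""
        for name in list(self.short_term):
            self.short_term[name] -= 1
            if self.short_term[name] <= 0:
                del self.short_term[name]  # back to long-term only

mem = Memory(base_ttl=2)
mem.mention("chicken", {"links": 3, "coeff": 1.0})
mem.tick(); mem.tick()               # TTL was 2 + 3 = 5, still active
print("chicken" in mem.short_term)   # True
for _ in range(3):
    mem.tick()
print("chicken" in mem.short_term)   # False - demoted to long-term
```

The object itself is never lost - only its presence in the cheap, fast short-term store expires.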

For the language:

Maybe something more Asimovian may be in order?

Matrix layout, objects that walk.

?

Whatever it is, it should be:
something either English-ish, or completely alien (think Klingon, complete with symbols).

I was thinking about the alien idea - since everybody would be translating things backward and forward anyway, it should be easier to make the bot act more human-ish...

...

Another idea:

For each object, the bot has a set of PAD values.
When the object is mentioned, those values change the current PAD values, thus allowing the bot to "like", "love", "hate" or "dislike" certain things - and it can reply with semi-canned responses when asked [grin].
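Something like this, perhaps - the PAD (pleasure, arousal, dominance) vectors, blend weight and thresholds below are made-up numbers, purely for illustration:

```python
# The bot's current mood as a pleasure/arousal/dominance triple.
current_pad = [0.0, 0.0, 0.0]

# Per-object PAD values: mentioning an object nudges the mood.
object_pad = {
    "chicken": (0.4, 0.1, 0.0),    # the bot "likes" chicken
    "spider":  (-0.5, 0.3, -0.2),  # and "dislikes" spiders
}

def mention(obj, weight=0.5):
    """Blend the object's PAD values into the current mood."""
    for i in range(3):
        current_pad[i] += weight * (object_pad[obj][i] - current_pad[i])

def attitude(obj):
    """Pick a semi-canned attitude from the pleasure axis."""
    p = object_pad[obj][0]
    if p > 0.25:
        return "like"
    if p < -0.25:
        return "dislike"
    return "neutral"

mention("chicken")
print(current_pad[0])        # 0.2 - mood moved halfway toward chicken
print(attitude("chicken"))   # like
print(attitude("spider"))    # dislike
```

The attitude bands give you the semi-canned "I like X" replies while the running PAD blend gives the conversation an overall emotional tone.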

The ability to reuse canned responses (with context when possible) would be nice, because hopefully the language will only give people one way to say things (so there would be more re-use).

Also, we need to combat euphoric bots.
I once managed to max out matrixbot by saying
"I love you" ten million times to it (happiness 0.997 - and yes, I did use a for loop to do that).

Then, no matter what was said, it always replied with something along the lines of "Lol" or other synonymous remarks.

Any ideas on how to combat this (without ending up with a depressed, submissive, bored bot, hopefully)?
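One possible counter - my own sketch, with made-up tuning constants: give each repeated stimulus diminishing returns (habituation), and constantly pull the mood back toward neutral, so spam saturates well short of 1.0:

```python
happiness = 0.0
seen = {}  # stimulus -> how many times it has been repeated

def stimulate(stimulus, strength):
    global happiness
    n = seen.get(stimulus, 0)
    seen[stimulus] = n + 1
    effect = strength / (1 + n)                  # diminishing returns
    happiness += (1 - abs(happiness)) * effect   # saturates below 1.0
    happiness *= 0.99                            # decay toward neutral

for _ in range(10_000):  # the "I love you" spam loop, scaled down
    stimulate("I love you", 0.2)

print(happiness < 0.9)  # True: spam can no longer pin happiness near 1.0
```

Because the decay is multiplicative and the per-repeat effect shrinks like 1/n, the mood peaks early and then drifts back down - the bot gets bored of the repetition instead of staying euphoric, without being permanently depressed.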

A common framework would be very good.
But there would be language differences (which would mean that everything would have to be compiled... I, for one, use VB6, which most people here probably won't touch with a ten-foot pole).

Temporal changes would be difficult to code - how would we handle them? (This would be practically essential, because a lot of events revolve around today, yesterday and tomorrow.)

From,
Nice coder
I wish to thank the people posting in this thread for their ideas and knowledge - it has been extremely interesting and has pushed my passion for AI further!
If it's canned responses, it isn't consciousness, it's hard-coded. Remind me again, what are you aiming for here? It's all over the place between grammars, learning, thoughts, emotions and drives. It's good to be thinking about these things, but if you have a goal, keep it in mind.

Read this - http://fp.cyberlifersrch.plus.com/articles/ieee_1.htm - it's an explanation of the AI used in Steve Grand's Creatures. It solves a lot of the problems you've been dealing with, albeit in a neural context, which you don't seem to like considering. Still, it's very interesting and highly recommended.
Hello,
Your link doesn't seem to work...

The canned responses would only be for certain things. The rest would be learned from the user(s).

Something like:

User: Do you like Chicken?
Chatbot: I love chicken, why do you ask?

Would be very canned.

Something like:

User: Do you like chicken?

Question: bot: is affection(object(chicken)) > 0.25 and < 0.75?

Affection(object(chicken)) = 0.70

So it would respond with something like:
Response: Affirm, like Chicken, [add]

Where [add] would be another phrase that could be tacked onto the original, like "Why did you ask?" or "Does that bother you?", etc.

Of course, it would reset the context, as it is in the output, so the context would contain something like:

Chicken, Like, [addcontext]

In which the objects chicken and like (yes, I know, but it makes things easier if everything is an object), plus whatever is in [addcontext], are added to the context.

Once this is done, the context drops some things it doesn't need anymore (i.e. things that haven't been mentioned for a while, aren't very important, etc.).

The context, I would say, would be part of the short-term memory.
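Put together, that exchange might look something like this in Python - the thresholds, phrase list and function names are illustrative only:

```python
import random

affection = {"chicken": 0.70}
add_phrases = ["Why do you ask?", "Does that bother you?"]
context = []  # the short-term context, fed by the bot's own output

def do_you_like(obj):
    a = affection.get(obj, 0.5)
    if a > 0.75:
        verb = "love"
    elif a > 0.25:
        verb = "like"
    else:
        verb = "dislike"
    add = random.choice(add_phrases)  # the tacked-on [add] phrase
    # the reply's own content is pushed into the context
    context.extend([obj, verb])
    return f"Affirm, {verb} {obj}. {add}"

reply = do_you_like("chicken")
print(reply)    # e.g. "Affirm, like chicken. Why do you ask?"
print(context)  # ['chicken', 'like']
```

The key point is the last step: the response itself updates the context, so a follow-up question like "why?" has something to refer back to.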

From,
Nice coder
Pretty nice thread. I'll throw in a couple (okay, maybe a dozen) cents and a few things to consider.

First, I fully agree with Madster that debating the definition of intelligence and consciousness is, while an interesting philosophical exercise, largely pointless when writing an AI. It goes in circles and at the end of the day won't change a line of code. Like Madster said, if you do it, you'll know it. Who cares whether it is defined rigorously or not?

Personally, I favor a GA with some form of net approach to AI. Create a simulated world with simulated minds in simulated bodies. Play God and try to apply the evolutionary pressures that made us, as quickly and efficiently as possible. And pray that your simulation is deep enough to allow intelligent creatures to evolve at the end of it all. The problem is, simulating all the variables for even one intelligent mind to emerge would take an absurdly fast computer with tons of memory. But to evolve it you would need to simulate whole societies of creatures and a full world with some form of physics (for complexity, and to have some bearing on our reality). In short, I think one of the world's fastest supercomputers is needed for bottom-up.

I like top-down as well now, and I think AI is also possible via symbolic processing. I believe Cyc is on the right track, but a whole lot more mental machinery is needed for AI. (Cyc is really more about knowledge representation than thinking like a human, or animal for that matter.) So, since you're on the symbolic path as well, Nice Coder, here are some of the issues I've been dealing with.

Your first and primary concern should be how to make your bot run through all your rules and connections within your CPU and memory budget. With a symbolic system, creating new knowledge is easy; finding and processing relevant knowledge in reasonable time and storage space is what is hard. You'll need very savvy search algorithms, which will need to learn and adapt based on past experience. I recommend going goal-based, with solvers which break down goals into sub-goals, match goals with the best solvers, create new knowledge when needed (including new solvers), make suppositions and predict outcomes, and strengthen the matching of solvers which fulfill goals. I also recommend treating your resources like a metabolism. Humans only have so much oxygen available at a time and can fire only a limited number of neurons. I think that can be very useful in deciding what is strengthened, what is forgotten, and in promoting a train of thought.

Also, you will need nested contexts. How would your bot handle the question, "If you were a bird and could fly, what would you do?" You obviously don't want to assert into your knowledge base that the bot is a bird, at least not permanently. So you need something which can hold a set of assumptions, goals, etc. and has its own scope. A context would need to be consistent within itself, but can contradict knowledge in parent contexts. This is good for "what if" questions, and thus planning, prediction, the current subject, the current problem within a larger problem, etc.
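A minimal sketch of such a nested context in Python - the fact encoding (tuple keys) and names are just for illustration:

```python
class Context:
    """A scope of facts that can shadow or contradict its parent
    without corrupting the base knowledge."""
    def __init__(self, parent=None):
        self.facts = {}
        self.parent = parent

    def assert_fact(self, key, value):
        self.facts[key] = value

    def query(self, key):
        if key in self.facts:
            return self.facts[key]          # local fact wins
        if self.parent:
            return self.parent.query(key)   # fall through to parent
        return None

base = Context()
base.assert_fact(("bot", "is_a"), "program")

# "If you were a bird and could fly..." opens a child scope
what_if = Context(parent=base)
what_if.assert_fact(("bot", "is_a"), "bird")
what_if.assert_fact(("bot", "can"), "fly")

print(what_if.query(("bot", "is_a")))  # bird    (shadowed in the scope)
print(base.query(("bot", "is_a")))     # program (base untouched)
print(base.query(("bot", "can")))      # None    (never asserted there)
```

Discarding the child context when the "what if" is over leaves the knowledge base exactly as it was, which is the whole point.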

Memory should go without saying. That means you also have to deal with time (and probably patterns in time), which is a real pain. I'd use short term, long term, and whatever other terms you want in between. Ideally memory should be pattern-matched for predictive purposes and learning, with usage and significance tracked to decide what moves to longer term.

That leads to clean-up, as was mentioned earlier. Computers, like humans, have limited capacity, so you have to throw some stuff away (especially with a good memory!). Obviously, use a short form of memory for quick temporary stuff and move useful stuff to longer term. I would suggest generalizing to save space as well, often when moving to longer term. We do it all the time. Today: "I remember driving yesterday with Jill, Sue, and Betty and spilling coffee all over Betty." In a year: "I remember driving sometime with some girls and spilling something." In five years: "I remember driving sometime and something happened." Ten years: "Huh?" Bad example, but you get the idea. I think we do a lot of this in our sleep.

Generalizing is probably one of the most important things you simply must have. Its importance in improving problem-solving efficiency and searching is enormous - likewise for extending current knowledge to new, novel situations or problems.

Human speech is highly compressed (lossily) because it has such low bandwidth. We achieve this by predicting what the other person knows, feels, and is thinking, then just leaving out all the bits we predict they can infer. Since you've stated you would use your own language, the need for this is lessened slightly, but you'll still have to model conversation partners - their current subject, suppositions, knowledge, etc. - to be convincing.

Finally (at least for this post), you need the power of meta. Gödel, Escher, Bach of course has to be mentioned once again for its treatment of this. The bot's code should "execute" code written in the language of your knowledge base. This lets the bot understand and reason about its own reasoning and, to some extent, program itself and adapt. This could be dangerous, though, if it decides to recode something important and kills itself. Just imagine if you decided you had figured out a more efficient way of breathing - you implement it and die before you can debug yourself. My personal theory is that that is what consciousness is for. Like C++, it is a higher-level language that we can work with without screwing up our assembly and killing ourselves. Just the fact that we can't read our machine language and 90% of the methods are private gives it all that mysticism. Just my take.

Anyway, long post. Obviously I haven't written a full AI yet, so what do I know? Take what is useful and throw away the rest.
Really, it's going to come down to what your goals are. This semi-canned response system... yeah, you can probably make a fairly convincing chatbot with it. If you want to make the (next) next ELIZA-style bot, I'd expect it to work well. Furthermore, it's something you can tackle without debating consciousness, etc.

On the other hand, if your goal was to make something conscious, something that really understands the language, then it's not the right path to follow.

Which is why I say: at this point, consider what it is that you want. Are you trying to make a great chatbot? Or are you trying to make real intelligence? Once you know that, you can start figuring out where to go from here.
The AP raises interesting points. Apparently I am diverging from the original post, so I'll keep this one short.

About sleep: yeah, I believe the same, especially considering the reports of people getting confused after a few days awake, even if properly fed and rested. On a statistical model, sleep = token compression = find abstractions and equivalences, eliminate low-usage tokens, renormalize probabilities (depending on how you store them). This also means that the process needs some time to complete; if interrupted, the bot would be quite confused, just like we are when woken after insufficient sleep.
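The two mechanical steps of that "sleep" pass - dropping low-usage tokens and renormalizing - could look like the sketch below; the abstraction-finding step is the hard part and is left out here:

```python
def sleep_pass(token_probs, usage, min_usage=2):
    """Drop tokens used fewer than min_usage times, then renormalize
    the remaining probabilities so they sum to 1 again."""
    kept = {t: p for t, p in token_probs.items()
            if usage.get(t, 0) >= min_usage}
    total = sum(kept.values())
    return {t: p / total for t, p in kept.items()}

probs = {"hello": 0.5, "xyzzy": 0.1, "world": 0.4}
usage = {"hello": 40, "world": 25, "xyzzy": 1}  # "xyzzy" barely used
probs = sleep_pass(probs, usage)
print(sorted(probs))                  # ['hello', 'world']
print(round(sum(probs.values()), 6))  # 1.0
```

Since pruning and renormalizing must see a consistent snapshot of the model, interrupting the pass midway really would leave the model in an inconsistent, "confused" state - which matches the analogy above.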

About reading our own machine code: funny and insightful. Yes, we can't read our code, and thus we cannot (directly) generate new code. And if we could, we'd screw up and die, as you said. The way to consciously modify one's own behaviours is by observing their effects, and then using the prediction model to find a new behaviour with the desired effect.
What does that mean? That for the bot to be self-conscious, it has to be able to perceive itself, and to perceive others as similar to itself.
In a chatbot, you would feed what the chatbot says back into its own knowledge model, along with a personal ID (nickname|IP for IRC).

Whoop, made it short. Agreed with jtrask - this thread seems to have no goal, but then neither did I, and I've already gotten something interesting out of it, so yay!
Careful now. I'm glad to see that you're getting focus, but be wary of drawing biological parallels to a non-biologically-inspired system. Why do these things need sleep? Can't it do them in the background? It's true that we can't edit our code, but then again, our code is not the same as the code you would use for such a system. Our neural code isn't exactly editable, but at a local, low level it is changed - that's what learning is. Since you're not coding a neural net, that constraint isn't entirely applicable. So why not allow it complete access to its own code, or at least within reason? Why force circular connections that are harder for it to learn when you can just feed its code in directly?
I guess you could do it in the background, but I think it's a rather expensive process, so even if throttled there are some big atomic operations that cannot be interrupted... but hey, maybe it could be done with some extra parallelization work.
As for allowing it to mutate its own code, you'd need to make it interpreted code, and that's more work. If it's already interpreted, then it would be interesting to try. Each script-language element could be a token, too.
Yeah, this thread has no exact goal.
I was thinking of making a cross between a nice-sounding bot and an intelligent bot, so that it would be rather nice to talk to and could "understand" some concepts as well (along with what-if questions, and the usual why, where, when, etc.).

With generalisation,
how about this:

for each node X
  for each other node Y
    if the total difference between their links (if one node has a link the other doesn't, treat it as a link with a strength of 0.0) is smaller than some smallish constant, then:
      grab some random links that X doesn't have (or that X has, but with different strengths)
      integrate X and Y (this includes averaging link strengths where both link to the same node)
      remove Y
  next node
next node

Now, with this, it would start doing some generalizing in the higher-order nodes (I would say that nodes for higher-order concepts tend to have a mass of links, while lower-order concepts tend to have similar, small numbers of links).

??
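Here's roughly that loop in Python - the distance measure, threshold and test data are mine, and the "grab some random links" step is folded into a full union over both nodes' links (missing links count as 0.0):

```python
def link_distance(x, y):
    """Total difference between two nodes' link strengths."""
    targets = set(x) | set(y)
    return sum(abs(x.get(t, 0.0) - y.get(t, 0.0)) for t in targets)

def generalize(nodes, threshold=0.2):
    names = list(nodes)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if a not in nodes or b not in nodes:
                continue  # one of the pair was already merged away
            if link_distance(nodes[a], nodes[b]) < threshold:
                # integrate: average the link strengths of a and b
                merged = {}
                for t in set(nodes[a]) | set(nodes[b]):
                    merged[t] = (nodes[a].get(t, 0.0)
                                 + nodes[b].get(t, 0.0)) / 2
                nodes[a] = merged
                del nodes[b]  # remove Y
    return nodes

nodes = {
    "sparrow": {"fly": 0.9,  "eat": 0.8},
    "robin":   {"fly": 0.85, "eat": 0.82},  # close to sparrow -> merged
    "whale":   {"swim": 0.9, "eat": 0.7},   # too different -> kept
}
generalize(nodes)
print(sorted(nodes))                    # ['sparrow', 'whale']
print(round(nodes["sparrow"]["fly"], 3))  # 0.875
```

Note this is O(n^2) over nodes, so for a big knowledge model you'd want to only compare candidates that already share at least one link target.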

For the what-if questions,
How about temperary nodes/links/rules?
"If you were a bird and could fly, what would you do?"
Well, you would make a temp link from self to bird, and (if not already there) a temp link from bird to fly.
"What would you do" signifies an action that the thing "you" would perform, so the bot should look up bird's links to actions and figure out that birds eat, sleep, and fly.

It would then respond: I would eat, sleep and fly.

The temp links etc. would then be destroyed at the end of the session, or when they are no longer useful (haven't been talked about for a while).
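The temp-link lifecycle could be sketched like this - the link store, idle counter and sweep rule are all assumptions of mine:

```python
links = {}  # (src, dst) -> {"temp": bool, "idle": int}

def add_link(src, dst, temp=False):
    links[(src, dst)] = {"temp": temp, "idle": 0}

def targets(src):
    return [d for (s, d) in links if s == src]

def tick_and_sweep(max_idle=3):
    """Called per exchange; temp links die after going unused."""
    for key in list(links):
        links[key]["idle"] += 1
        if links[key]["temp"] and links[key]["idle"] > max_idle:
            del links[key]

# permanent knowledge about birds
add_link("bird", "eat"); add_link("bird", "sleep"); add_link("bird", "fly")
# "If you were a bird..." -> temporary link from self to bird
add_link("self", "bird", temp=True)

actions = [t for b in targets("self") for t in targets(b)]
print(actions)  # ['eat', 'sleep', 'fly'] -> "I would eat, sleep and fly."

for _ in range(4):
    tick_and_sweep()
print(targets("self"))  # [] - the what-if scope has been swept away
```

Permanent links accumulate idle time too but are never swept, so only the hypothetical part of the conversation evaporates.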

From,
Nice coder
I was wondering, how much processing power would this require to run?

How well does the algorithm fare?

What optimisations could be performed on it?
From,
Nice coder