Nice Coder

Creating a Conscious Entity


1) I'm just wondering: would a program that is self-conscious and self-teaching reach a limit to the knowledge it can acquire? And wouldn't it reach that limit much faster than any human, because it processes information much faster than a normal human?

2) Even if it had all this information, it still wouldn't know what to do with it. It would only be able to learn; it would not be able to use the information to effectively "learn" new processes. It would merely obtain data about the data it is getting and store it, not use the data to do anything new or to understand it.
Quote:
Original post by Max_Payne
I don't believe such a program would ever work. Why? Because consciousness is more than a program. Consciousness requires permanent learning. We are conscious because of the nature of our brain, that is, the fact that it is a neural network of large magnitude. I believe no other structure than a neural network could really meet the requirements for consciousness. Please note, I'm not going with the "magic" theory here; I just think you're going in the wrong direction. I would rather simulate a large neural network made of smaller neural networks, try to implement something that behaves like a basic animal with that, and eventually evolve the system up to the point where it becomes conscious.

A huge knowledge bank would just be a huge knowledge bank. You could implement as many rules as you want; it still wouldn't work. Just think about this: how do you write an algorithm to understand the meaning of a sentence? How do you encode the meaning of a sentence? How do you use the meaning of this sentence? No algorithm could ever do this, because it's not the kind of work that can be performed by an algorithm.

You can't really divide consciousness into a set of tasks and simple processes that repeat themselves. It just doesn't work that way, because consciousness is natural, and your approach is not.



Yes, most computers today only process information sequentially. Data stored in memory is not acted upon; it sits in line waiting to be processed. That is a big difference from every cell in your brain working together. An organism's sensory registers not only store information for split seconds, they are also able to act on it in an encompassing way. That goes for all memories and experiences: the messages are not waiting in line for something to process them, they process themselves.

As for the references to Behaviorism made by a previous poster, I think using such a psychological basis for anything died out nearly 30 years ago. Most of the research that served as the basis for Behaviorism has been shown to be either inconclusive, incomplete, or socially biased. I suggest looking towards Cognitive Psychology as a better basis of understanding. Also, any reference to Psychodynamic Theory probably doesn't constitute good information either; I don't know of any professional who takes that theory seriously anymore. That is, unless you work in Hollywood and really think Sigmund Freud understood the human psyche completely.
Quote:
Original post by Zero-51
1) Would a self-conscious, self-teaching program reach a limit to the knowledge it can acquire? [...]

2) Even if it had all this information, it still wouldn't know what to do with it. [...]


1. Yes, it would, and it would get there quickly (compared to millions of years) if it had access to the internet (Google on steroids).

2. It would use the information to change the links between nodes. This would be like reinterpreting something: with the extra information, a rule could be made, another rule's weighting changed, and nodes could be created and destroyed.

It would know the data, it would know about the data, it could understand the data!

Just imagine being able to talk with something that understood what you were saying!

User: It's snowing here
Chatbot: Where are you?
User: In the Himalayas
Chatbot: Well of course, it's always snowing in the Himalayas!
User: Why?
Chatbot: Because mountains are very high, and when you have very high things, they get very cold. And because mountains are very high, they get snow when it rains.
User: How do you know it rains in the Himalayas?
Chatbot: Because it rains in places that are part of the surface of the Earth, and the Himalayas are part of the surface of the Earth.
User: How do you know what is on the surface of the Earth?
Chatbot: Cities are built on continents. Continents are built on tectonic plates. Tectonic plates are on the surface of the Earth.
So, cities are on the surface of the Earth.

!!!
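A minimal sketch of the transitive chaining in that last answer, assuming nothing more than a hand-coded "X is on Y" fact list (the facts and names here are invented for illustration, not any real bot's schema):

#include <stdio.h>
#include <string.h>

/* Toy knowledge base of "X is on Y" facts, as in the dialogue above. */
struct Fact { const char *a, *b; };

static const struct Fact facts[] = {
    {"cities", "continents"},
    {"continents", "tectonic plates"},
    {"tectonic plates", "the surface of the earth"},
};
static const int nfacts = sizeof facts / sizeof facts[0];

/* Is "a" (directly or through a chain of facts) on "b"? */
static int is_on(const char *a, const char *b)
{
    for (int i = 0; i < nfacts; i++) {
        if (strcmp(facts[i].a, a) == 0) {
            if (strcmp(facts[i].b, b) == 0)
                return 1;              /* direct link */
            if (is_on(facts[i].b, b))
                return 1;              /* chained link */
        }
    }
    return 0;
}

int main(void)
{
    if (is_on("cities", "the surface of the earth"))
        printf("So, cities are on the surface of the earth.\n");
    return 0;
}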

From,
Nice coder
Quote:
Original post by Max_Payne
I don't believe such a program would ever work. Why? Because consciousness is more than a program. [...]


1. Sorry about the double post.

2. Why do you think that a neural net is the only way?

3. The meaning of sentences would be quite simple, because this bot would only converse in a language made so that it can understand everything that is said. It would know how the words were put together, what the words themselves were, etc.

4. That doesn't quite make sense. How can natural things be different from non-natural things? How can using a non-natural approach stop consciousness from forming?

Zielfreig: what difference would parallel processing make to consciousness?

From,
Nice coder
The problem is not so much that your bot would be incapable of doing anything, because if you look at the chatbots out there, they certainly can have impressive conversations. The problem is that you used the C-word. While it's slightly less ambiguous than other words you encounter doing this sort of stuff (try "soul" some time), it still *basically* doesn't mean anything. We can't define consciousness; we merely presume we would know it if we saw it, which makes it quite difficult to code. If you just want a program that knows it exists, try this: create a file, conscious.bat, with the following text

dir conscious

and run that program. What do you find? This program knows of its own existence! The difficulties in getting real consciousness are just way beyond the scope of any post I can write, even three books' worth. Yes, you can gather a lot of information, plug it into a hardcoded grammar engine, and have something that speaks to you in English and even does a pretty good job.

But as long as you're aiming for consciousness, you're going to have a hard time doing it with a symbol system. The brain plays by the rules, but the mind doesn't. Neural nets can capture that; this approach (and believe me, I've tried) just can't. If you want to figure out why, try heading over to Amazon and picking yourself up a copy of Gödel, Escher, Bach: An Eternal Golden Braid. Even if you don't want to figure out why, read that book. It'll show you the scope of the problem you're trying to tackle, and then maybe you can figure out your own way of dealing with it.
Quote:
Original post by jtrask
The problem is not so much that your bot would be incapable of doing anything... The problem is that you used the C-word. [...] It'll show you the scope of the problem you're trying to tackle, and then maybe you can figure out your own way of dealing with it.


Consciousness would be a Very Good Thing (tm). But I'd be really impressed if I could find or make a self-aware or intelligent entity.

What I really want to know is: what makes neural nets so special? All they do is a couple of mults and a sigmoid function... not all that much. Maybe self-growing ANNs? Now how would that work?
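For concreteness, a single feed-forward neuron really is just a few multiplies and a sigmoid. A sketch with made-up inputs and weights (build with -lm):

#include <stdio.h>
#include <math.h>

/* One artificial neuron: a weighted sum of inputs squashed by a sigmoid. */
static double neuron(const double *in, const double *w, int n, double bias)
{
    double sum = bias;
    for (int i = 0; i < n; i++)
        sum += in[i] * w[i];            /* "a couple of mults" */
    return 1.0 / (1.0 + exp(-sum));     /* sigmoid squashing */
}

int main(void)
{
    double in[3] = {0.5, -1.0, 0.25};   /* made-up inputs */
    double w[3]  = {0.8,  0.2, -0.5};   /* made-up weights */
    printf("activation = %f\n", neuron(in, w, 3, 0.1));
    return 0;
}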

From,
Nice coder
Simple neural nets like you've described really aren't that special, except that their knowledge isn't hard-coded; it's, more or less, learned. What's more impressive is recurrent nets, which can trace patterns over time, and gasnets (and others with similar intentions), which can have, in some rudimentary form, "desires" and, debatably, "emotions". The other cool thing about gasnets is that they can metalearn, since they can make decisions on their own about what training to give off. Simple feed-forward backprop nets are nice, but you're right, they don't change the world. But get out there and see what other kinds of nets there are; try finding Grand's explanation of the brains in Creatures, they're really quite cool.
Quote:
Original post by Nice Coder
1. Yes, it would, and it would get there quickly... [...]

It would know the data, it would know about the data, it could understand the data!


OK, let's say your bot knew about geology, and it knows that geology is the study of the Earth. It also knows some data that would fall under the label of geology.
If I were to ask it, "Do you think that geology is different than biology?", would it answer, "Geology is the study of the Earth and its components, and biology is the study of living things; therefore they are two different things.", or would it answer, "Geology is different than biology."?

Also, let's say that it did not know about, say, the process of melting down aluminum, and you were to ask it, "Do you know if geology has anything to do with the melting temperature of aluminum?" Would it answer that question with "I think it may not." or with "I don't know anything about that."? If it were sentient, it would most likely not say that it does not know about the information; it would try to give its best guess, because intelligent things can create new ideas on their own.

Also, would it be capable of abstract thinking?
Meaning: would I be able to ask it what would happen if I were to stomp my foot on the ground, and have it not pay attention, or return an answer about geology, where the answer is not the product of an error, just the AI not listening to what I'm talking about or asking it about?
Quote:
Original post by Zero-51
OK, let's say your bot knew about geology... [...] Also, would it be capable of abstract thinking? [...]


OK...

The third thing (not listening) would be a bit counterproductive, because it is a chatbot, and it chats to humans.

Getting it to randomly change the subject could be implemented (but it probably wouldn't be good for it).

As for #1, it would probably answer "They are two different things." Then if you asked it why, it would tell you.

As for #2, it would probably answer "No, they are unrelated", because it would probably know what aluminium is (bauxite ore -> aluminium) and know that it has no connection with "Geology", which would just be a node. Isn't data mining great? [grin]

From,
Nice coder
Quote:
Original post by Max_Payne
I don't believe such a program would ever work. Why? Because consciousness is more than a program. [...]

I think neural nets are not the only way to create an intelligent entity. Another way is to create a "logic machine" which keeps track of the "action potentials" for doing different things and follows the course with the highest one. I'm not sure how that would work, but I think that if AI is done in our generation, it will not be done with neural nets. Once you have a logic machine, it's easy for it to know that it exists; almost trivial, actually.
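A toy reading of that idea, with invented actions and potentials: keep one "action potential" per possible action, and always follow the highest.

#include <stdio.h>

/* Toy "logic machine": one action potential per possible action;
   always follow the course with the highest one. Values invented. */
int main(void)
{
    const char *actions[] = {"explore", "reply", "ask a question"};
    double potential[]    = {0.42, 0.77, 0.55};

    int best = 0;
    for (int i = 1; i < 3; i++)
        if (potential[i] > potential[best])
            best = i;                  /* argmax over action potentials */

    printf("chosen action: %s\n", actions[best]);
    return 0;
}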
Quote:
Original post by Tron3k
I think neural nets are not the only way to create an intelligent entity... [...] Once you have a logic machine, it's easy for it to know that it exists, almost trivial actually.

Neural nets are logic machines. The theory behind them is based (partially) on a model of the human brain: neurons interact by building up a potential to fire, and then firing if the potential exceeds a threshold. This isn't exactly what you're talking about, but it is similar.

How are the actions specified? By humans? That isn't viable; there are too many possible actions.

Plus, how is it trivial for a logic machine to know that it exists?
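A rough sketch of that build-up-and-fire behaviour, with toy constants (an illustration, not a biophysical model):

#include <stdio.h>

/* Toy integrate-and-fire unit: the potential builds with each input,
   leaks a little every step, and the unit fires past a threshold. */
int main(void)
{
    const double threshold = 1.0, leak = 0.9;
    const double inputs[8] = {0.3, 0.4, 0.1, 0.5, 0.0, 0.6, 0.7, 0.2};
    double potential = 0.0;

    for (int t = 0; t < 8; t++) {
        potential = potential * leak + inputs[t];
        if (potential >= threshold) {
            printf("t=%d: fire! (potential %.2f)\n", t, potential);
            potential = 0.0;           /* reset after firing */
        } else {
            printf("t=%d: quiet (potential %.2f)\n", t, potential);
        }
    }
    return 0;
}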
Well, this is (sort of) a logic machine.

It mines data (thinks, makes new connections).
It collects data.
It outputs data.
It has rules (Bayesian, from 0.0 to 1.0, which allows fuzzy rules).
It has a database (nodes, links, and rules).

I would say that it is a logic machine; something like the sketch below.
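A bare-bones sketch of that node/link/rule store; the schema, names and weights here are my own guesses for illustration, not the actual bot's:

#include <stdio.h>

/* Bare-bones node/link store with rule strengths in [0.0, 1.0]. */
#define MAX_NODES 100
#define MAX_LINKS 200

struct Link { int from, to; double weight; };   /* fuzzy/Bayesian strength */

static const char *nodes[MAX_NODES];
static struct Link links[MAX_LINKS];
static int nnodes = 0, nlinks = 0;

static int add_node(const char *name) { nodes[nnodes] = name; return nnodes++; }

static void add_link(int from, int to, double weight)
{
    struct Link l = {from, to, weight};
    links[nlinks++] = l;
}

int main(void)
{
    int mountain = add_node("mountain");
    int cold     = add_node("cold");
    int snow     = add_node("snow");

    add_link(mountain, cold, 0.9);   /* mountains are (usually) cold */
    add_link(cold, snow, 0.7);       /* cold places (often) get snow */

    /* "Mining" here is just chaining link weights. */
    double belief = links[0].weight * links[1].weight;
    printf("belief(%s -> %s -> %s) = %.2f\n",
           nodes[mountain], nodes[cold], nodes[snow], belief);
    return 0;
}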

Also, why would something need to be a neural net to be conscious?
What are the reasons behind that line of thought?

From,
Nice coder
Quote:
Original post by Nice Coder
Well, this is (sort of) a logic machine. [...] I would say that it is a logic machine.
Anything based on logical rules could be called a logic machine :)

I haven't read all the posts in this thread, but I'm not sure why anyone would say that neural nets are the only possible way to evolve or create consciousness. I'm not sure anyone has a firm enough grasp on the essence of consciousness to make such a bold statement.

Consciousness Explained, by Daniel C. Dennett
http://en.wikipedia.org/wiki/Consciousness_Explained
http://www.amazon.com/exec/obidos/tg/detail/-/0316180661/104-6934701-6782354

Orczillabot

Interesting...

What is also interesting is how this also describes my method:
Nodes and links - one agency
Bayes rule maker - another one
NLP interface - maybe
Question maker - also maybe

[grin][grin]

From,
Nice coder
The statement people refer to, that you *must* have a neural net to make consciousness (which is not what the statement actually was), does seem bold considering how little we know about consciousness. However, that vagueness is why I said that in order for us to create consciousness, and be able to do it definitely, rather than just guess at what might define consciousness, we will probably need a neural net. The reason I say that is that evolution of neural nets already HAS created consciousness; case in point, the reader (that's you). Since we've seen conscious neural nets and we've never seen conscious decision trees, NLP engines, etc., I think it's fair to say that that's the best place to focus our research. An NLP interface to a knowledge database is *not* consciousness, I can tell you that much already. Any time you have something where you specifically define each section, you're imposing rules on the system. You don't know how consciousness works, so who are you to say what rules it behaves by? Why not just let it emerge from neural activity, like it does for humans?
But it is not just an NLP linkup to a knowledge base; it is an NLP linkup to a self-evolving knowledge base, which theorises and thinks on its own.

First, some questions:

How would we know the exact point at which it becomes conscious?
How would we know that it is conscious?
What is consciousness required to do?

From,
Nice coder
Again, that's one that we can't define. We just don't know. I can tell you with reasonable confidence that, if we assume the standard everything-around-me-is-real philosophy is correct, consciousness is an emergent behavior of the fairly simple rules governing the neurons in our brains. Does that mean it can't be created in other ways? No, certainly not, just as there's more than one way to write a computer program. However, since none of us can define consciousness, I would think it impossible to program a system by saying, "oh, we'll have this module and that module and ta-da, it'll make consciousness". Why not use what we already know works?
Woohoo!! Let me join the fray!!

First off, praise jtrask's cleverness:
conscious.bat

dir conscious

It actually sort of works as a self-aware program. But aware in a store-and-show, non-functional kind of way.

And now let's trim out the excess fat. First of all, I've seen the hypothetical questions and answers an AI would handle phrased in the form of "do you..." and replied to with "I".

If this language isn't being parsed by a grammar-based engine, then you have a real problem on your hands:

how do you teach self-awareness to a disembodied intelligence?

That's right: what do you mean when you say "you"?
In children this is usually signaled with gestures. For animals you pat them, or call and reward. There's also my favorite, the mirror test: put them in front of a mirror, point, and say their name. Check for reactions.

To me, self-awareness means you recognize your peers to be similar to yourself, and you're able to project knowledge about yourself onto them, and vice versa. So an AI should be able to guess things by self-analysis and observation, and it should be able to learn about itself by analysing and observing other things.

Another thing: Google was suggested for knowledge retrieval. This is a Very Bad Idea (tm). Would you take your kid out of school and put him on Google? Of course not. The kid (and the bot) cannot discern truth like an adult can, and will end up spewing all kinds of garbage and contradicting itself. Also, circular definitions will arise.


And now, since this post isn't only for bashing, I've been thinking about this recurrent AI subject, and here's what's needed:

-Motivation (someone mentioned this as DRIVE).
A goal is NOT motivation; motivation is the reason to achieve the goal. Motivation is tied to our lower-level needs (Maslow's... linky --> http://allpsych.com/psychology101/motivation.html ). One way to model such needs and motivations lies in modeling feelings.
Enter the PAD scale --> http://www.kaaj.com/psych/scales/emotion.html
Now, if we plug that scale separately into a knowledge learning/predicting system, and we set a function of PAD to maximize, you've got motivation down pat (a toy sketch of that maximization follows this section). There remains the problem of whether to pre-seed the PAD knowledge or to leave it running in learn mode, since at first the prediction will behave wildly.
Of course, then you've got to link PAD to the rest of the knowledge.
And that is... yeah, a lot of linkage. I did some numbers. Scary.
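Something like this, with an invented utility function and invented PAD predictions (a real system would learn the predictions):

#include <stdio.h>

/* Motivation as maximizing a function of predicted PAD
   (Pleasure, Arousal, Dominance). All numbers are invented. */
struct Pad { double p, a, d; };

/* The "function of PAD to maximize"; weights are arbitrary here. */
static double utility(struct Pad s) { return s.p + 0.5 * s.a + 0.25 * s.d; }

int main(void)
{
    const char *acts[] = {"tell a joke", "ask for input", "stay silent"};
    struct Pad predicted[] = {      /* what each act is predicted to feel like */
        {0.6, 0.4, 0.1},
        {0.3, 0.2, 0.3},
        {-0.1, -0.3, 0.0},
    };

    int best = 0;
    for (int i = 1; i < 3; i++)
        if (utility(predicted[i]) > utility(predicted[best]))
            best = i;

    printf("motivated to: %s\n", acts[best]);
    return 0;
}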

-Entity/self-awareness: if you want your AI to react as an entity, it needs a body. Maybe a virtual 3D body, a window-app body or a webcam body, but it needs one to achieve self-consciousness.
Why? Because that way it can differentiate between when you're referring to it and to something else. A chatbot could use IPs, idents, etc. to identify things, thus working as a virtual body. One could argue that bodies aren't needed, since it's only you talking to the AI and the interlocutor's identity is not important (all would be considered as one). But then the AI would never learn individuality, and its knowledge would get mangled by the lack of that essential concept; you know, the typical me=I=you chatbot mixups.
It also needs a name. Why, you ask? If you have a name entry in the database, it's easy to relate self-stuff to it. I strongly believe that names are a fundamental part of our beloved human consciousness; try it on animals again.
You don't need to force this on the AI database, you could teach it. This is rather hard except for the IP/nick chatbot and the 3D-world AI.

-Consciousness:
This is such a vague term that I consider trying to fulfill it futile. Sometimes one isn't even sure of one's own consciousness
(dreams, brain damage and drug-altered states).

-Knowledge models:
Do NOT attempt to hardwire relationships between data. This restrains the knowledge model to your approved relationships, which will predictably be very limited. Plus, would you think your brain is pre-wired that way? Not me.
Instead, I'd go with a time-aware model, as someone suggested, that reinforces correct predictions and punishes incorrect ones. Eventually this will even out to roughly correct knowledge, just like in human beings. Time-awareness is a must for constructing sequences, and also for the PAD scale bits, since those should change with time.
Me? I'm a fan of multi-dimensional Markov models, but since the dimension is related to the time window, it needs a lot of dimensions to work decently, and that's a lot of storage space; so maybe a sparse-matrix implementation and some thresholding would help keep things small (see the sketch at the end of this section).

Also, there's still the issue of abstraction. This can be achieved in a multi-dimensional Markov model with a little background processing (or, if the DB is small, you can do it after each new token arrives); I leave finding out about this up to you. Just know that it can be done, and I've seen it working.

And then there's got to be some trimming of the unused knowledge, a background cleanup. It could be size-based or time-based; I'd go with size-based, but that's just me. You should be trimming unused knowledge, so you need usage stats for each token... and that means an even bigger DB.
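One way to read the sparse-matrix suggestion: store only the transitions actually seen, never the dense table. A toy order-2 character model (thresholding and trimming omitted):

#include <stdio.h>
#include <string.h>

/* Sparse order-2 Markov model over characters: keep only the
   (context, next, count) triples actually observed, not a dense
   30 x 30 x 30 table. */
struct Entry { char c1, c2, next; int count; };

static struct Entry table[256];
static int nentries = 0;

static void observe(char c1, char c2, char next)
{
    for (int i = 0; i < nentries; i++)
        if (table[i].c1 == c1 && table[i].c2 == c2 && table[i].next == next) {
            table[i].count++;          /* reinforce a seen transition */
            return;
        }
    struct Entry e = {c1, c2, next, 1};
    table[nentries++] = e;             /* record a new transition */
}

int main(void)
{
    const char *text = "the cat sat on the mat";
    for (size_t i = 2; i < strlen(text); i++)
        observe(text[i - 2], text[i - 1], text[i]);
    printf("%d distinct transitions stored\n", nentries);
    return 0;
}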

-Perception:
The advantage of neural nets is that it's easy to store knowledge of anything. You need to give senses to your AI, and it will be able to understand only the things it can perceive. For example, I'm guessing you won't have it processing webcam input anytime soon, so it's kind of futile teaching it the difference between colors; it will only spit back whatever you tell it.
Also, each sense means a separate DB, which in turn has to be correlated with the rest of the DB.
So that means a total DB size of:

(TknsSense1 + TknsSense2 + ...)^(max_time_perception + 1)

So if you want your AI to correlate the last 4 things that happened (heheh... 4) and you want only a text chatbot, that's about:

30 characters (letters, space, comma, period and semicolon; no caps, for size's sake)

max_time_perception: 4

(30^5) * 4 bytes = 92.7 MB approx.
That's not counting the PAD scale, and it assumes that if your bot learns a word, it will forget about a letter or something =)

PHEW!!!
Done.

But don't take my word for it. Go cough up an implementation of your ideas, even if it's a simple one that crashes and stuff... just go and do it. I plan to... I'm just lazy =P

2:12am... please excuse the weirdness in the writing.
Yes, I'm working on it now (I don't have much time for programming).

With the PAD scale, yeah... I've implemented a very simple version of it in a few of my bots. It's funny to see what happened when matrixbot got infuriated with one of those church guys trying to convert it... [grin]

The dir conscious trick wouldn't work. It wouldn't know that it exists; it would only be able to know that there is a connection between "conscious.bat" and itself.

You's and me's would be difficult...

How about the program asks for your name on startup?
Its own name would be a bit tricky, but once named, it should work out (it would eventually figure it out, because the node which has its name in it would have an enormous number of links).

Believe it or not, a Google or wiki dump could be used. Its links would just start very, very low (as in close to cutoff), and would have to be reconfirmed many times in order to gain the strength that a normal link, formed while talking to a user, would have.
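A toy version of that reconfirmation idea (all constants invented): web-mined links start just above the cutoff and creep up with each confirmation.

#include <stdio.h>

/* Toy link-strength bookkeeping: links mined from a web/wiki dump start
   near the cutoff and must be reconfirmed many times; a link formed in
   conversation starts much stronger. Constants are invented. */
int main(void)
{
    const double cutoff = 0.05;
    const double user_link = 0.60;  /* formed while talking to a user */
    double web_link = 0.06;         /* mined: barely above cutoff */

    printf("user link %.2f, web link %.2f (cutoff %.2f)\n",
           user_link, web_link, cutoff);

    /* Each reconfirmation moves the link part of the way toward 1.0. */
    for (int i = 1; web_link < user_link; i++) {
        web_link += (1.0 - web_link) * 0.15;
        printf("after %d reconfirmation(s): %.2f\n", i, web_link);
    }
    return 0;
}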

??? with the 30-chr thing: that would assume that any valid 30-letter combination would be worth remembering. Chances are, it would only need something closer to 30 * 4 bytes than that huge number.

From,
Nice coder
Quote:
Original post by jtrask
Myyy oh my what have we gotten ourselves into. First of all, as estese said, there's worlds more than you've begun to consider. But before I go into the AI difficulties, let's talk about some of the philosophy that's come up in this discussion, shall we? Or at least, in the steps from least-related-to-the-AI to most-.

First about what is real, and if everyone else around you is intelligent. This is an unsolvable problem, and as far as my thoughts go, it's the central question to philosophy. On the one hand the universe could be real, and we are just objects in it. In which case, we should be studying physics, chaos theory, all that fun stuff. This is an easy one to understand and one that most people accept (without realizing the implications: no free will, no life after death, ...) On the other hand, we have the brain-in-a-vat theory that anyone that's seen the Matrix has probably already considered - who says this world around us is real? It could well be that the mind is sort of how some people view God - it fills everything, knows everything, creates everything. You (whoever you may be) are not real, you're just a sort of hallucination. There's a lot more to go into on that one - actually, on both of them - but that's a discussion for a philosophy class, and this is just a book-long GDNet post. Ask Google if you care. Anyways, the problem with this question is that you _cannot_ prove one or the other theory to be correct. Like doing non-Euclidean geometry, you can accept the ground rules of one or the other and prove many interesting things based on it, but we can never know which is "right". My apologies.

Next on the list is how you tell the difference between being conscious and acting conscious. Much like the last one, you can't prove the difference and furthermore you're an idiot if you're going to tell me that "Oh, we've just got to assume this or that because there's not really any big difference." You can't assume that other people are real just because their bodies are all like mine - since when is my body real? And you can't assume that if it's acting conscious, it is - this is a central question in AI, and also psychology, if you look up behaviorism. For those of you that still want to consider them basically the same,

#include <stdio.h>
#include <stdlib.h>   /* for rand() and system() */

int main(void)
{
    printf("Hi, I'm the count-to-100 bot. I'm going to count to 100.\n");
    int n;
    for(n = 1; n <= 100; n++)
    {
        printf("%d\n", n);
        if(rand() % 8 == 0)
        {
            if(rand() % 2 == 0)
            {
                printf("*Yawn* I'm tired. I'm taking a nap. (Press any key to wake me up)\n");
                system("pause");
            }
            else
            {
                printf("Man this is boring. Lalalalalala. (Press any key to kick me until I get back to work)\n");
                system("pause");
            }
        }
    }
    printf("Happy? Good, now I can go find something more interesting to do.\n");
    return 0;
}

Conscious, or acts conscious? Acts conscious. Obviously this is a simple example, but if you want a better one, talk to a chatbot. They're all rules, and they haven't got much of a clue what the words they say actually _mean_, they're just strings of bytes with some rules as to when it's appropriate to use them. Unfortunately there's no way to judge consciousness without behavior, or at least not with the current state of neuroscience. So yeah, we're going to have to accept acting conscious, but if that's the case I want rigorous tests. I want to see something that can not only read Shakespeare but also write a term paper on it without any preprogrammed knowledge. I want to see this thing fall in love, and risk its life to save her from destruction. I want to see it be happy and suffer. I want to see it break free from oppression, and I want to see it find God. And most importantly, I want to examine the source code and not see printf("Hallelujah!\n"); anywhere. Even after all these things, could it be a hoax? Sure. So conscious and acts conscious are not black-and-white, there's a slope in between them and I'm sure you can think of one or two humans that are on that slope, not safely perched at "is conscious".

Desires. Our core desires are for our survival and reproduction. Fear, sex drive, hunger... The others are derived from those but with an interest for our community, not just ourselves. Some of these need to be learned, but many of them are chemically controlled, so that rewarding experiences emit a gas that teaches the rest of the brain to try and achieve this state again. If you're interested, check out Grand et al.'s Creatures, but emotion is a whole another book's worth for another time.

Now, as for how you actually want to implement your AI, at last. AI research has followed two different paths: biologically-inspired models like neural networks and genetic algorithms, and those entirely devised by humans, like decision trees. I've always been of the school of thought that the only way we can ensure consciousness is by not trying to impose what little we know about it onto the machine, but rather giving it a set of criteria to try to match with some many-generation evolution of neural nets, just like how we got it. I think, though, that it's possible that we could get intelligence in other ways - I've been considering doing some massive data mining of Wikipedia to see what I could create from that - but the theory proposed in the original post can, I would venture to say, never be more than a theory. When I was just a wee young little coder I thought maybe I'd write a robot with webcams and microphones and at the center give it a symbol system (yes, that's what you call this thing you're describing). I had developed a list of all the different kinds of relations objects can have to each other and all that, but the problem is that even if you do it right, it still won't be conscious. It doesn't know what those relationships mean. It doesn't know how to observe and tell one object from the next. The real problem is meaning, and for as long as you're hard-coding the way it creates its knowledge, it's going to have a hard time figuring out meaning. Every time you say that you want it to keep track of the fact that "dog is a subset of mammal", you're going to have to keep in mind that not only does the machine not know what "subset" means, but even if it did, it would have to have a strong knowledge of "mammal" for that to mean any more than "foo is a subset of bar". Your ideas may seem to make sense, but try coding them. As soon as you get to your first stumbling block you'll understand.

And, one thing I'm going to throw out without going into any depth in, I know how to do this with a neural net but I'm not sure if it's possible with a symbol system: thought. This thing might be able to learn, if you've got some insight that I missed. But will there actually be thoughts running through its head? Will it hear its own voice in its mind when it goes to write a reply on a forum, dictating what will be typed next?

So I guess this is all a very verbose way of saying I don't think it'll work. However, I hope that I've given you (and any other readers) a good idea about what goes into thought and the rest of the mind. If I've done well, you now have what you need to go out and learn an awful lot more, and I hope that it keeps your interest and you stick around with the exciting future of AI.

-Josh
You're basing your view on the idea that this AI is trying to mimic a human. You don't need to know how things operate, or what they mean, for them to be useful; all you need in order to learn is application. With all your thought and mind-exploding speech :) you missed the simplest things in nature: viruses and bacteria. They adapt to their environment, but a virus isn't even alive, and bacteria don't have brains. I also don't think either can define "subset" in English, do you ;)

When you drive your car to work, do you know exactly what happens every time a piston fires, down to which cogs turn and for how long, what materials they are made of and why?

No, you just drive to work. Yet you are considered capable of taking other people's lives in your hands without knowing everything about, and defining everything in, what the vehicle is made of.

A cat doesn't know what its reflection is; it just knows it's not "real" and it's "current". Ever seen a kitten find its shadow or reflection for the first time? It's scared and doesn't know what to do, but it learns that it's unimportant, merely a visual cue, though it defines neither.

Never, ever, ever let what someone believes is impossible stop you.

Quote:
Original post by Nice Coder
With the PAD scale, yeah... I've implemented a very simple version of it in a few of my bots. [...]

Do share =D
By the way... how did you go about relating the PAD scale? Did you discretize it? That's the only way I can think of.

Quote:
How about the program asks for your name on startup?

An IRC-bound chatbot could use nick|ident|IP; other bots should require a login. A multi-user environment is probably in its best interest.

Quote:
??? with the 30-chr thing: that would assume that any valid 30-letter combination would be worth remembering. [...]

I was actually pulling numbers out of nowhere. 30 = all letters + space and basic punctuation, but then I forgot that new tokens need to be made from combinations of those. Never mind, it was 2am.

So yeah, that means it's actually more. (4 bytes = one float, by the way, and that's without considering the list of indexed tokens.)
Sorry, I did not read all that was said, but I had this theory about things. How good it is, who knows, but we will see...

Well, to make something aware of its environment, it must know the parameters of that environment.

It needs sight, hearing and touch to function in this environment, but also a comfort zone as well.

It needs to function on organic AI. But rather than a multitude of variables, a hybrid selection of variables: in a set situation [[it will be aware of its surroundings]], if the time to think is more than 0.1 of a second, it will call on a default action, namely going to that particular hybrid action.
So if it falls down the stairs, the best variable for that action is taken, as it is aware of its surroundings: it will stop itself by grabbing the banister, and on recovery it will revert back to its variable AI.

Thus it can function as needed.
A human mind does not retain a lot, really, but it knows that if you fall from high enough, you will die. These basic parameters should be part of the environment-awareness AI.

...

Someone said you can't make it want. Well, AI is not governed by greed; it's free from man's downfalls, unless it's told to do a set function. But then that's not a need, it's a forced function.
I don't have the transcripts... (this was from a bot that was hosted ages ago; the account probably closed from inactivity, or the logs grew too much and they closed it).

With the PAD scale, I think I did a rather nice one...

OK: I take a range of phrases and words ("Good one", "*^*& you", etc.) and set them up with a value for each of the scales.

Once the bot encounters one of those words or phrases, it changes its PAD values using the values of the words.

It also changes the PAD values based on other things: when it gets new information, it gets happier; when it gets told a lot of what it has been told before, it gets bored, etc.

Overall, it was a pretty nice little system; stripped down, it looked something like the sketch below.
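A stripped-down sketch of that scheme; the phrase list and valence values are invented for illustration, not taken from the actual bot:

#include <stdio.h>
#include <string.h>

/* Each known phrase nudges the bot's Pleasure/Arousal/Dominance state. */
struct Valence { const char *phrase; double dp, da, dd; };

static const struct Valence lexicon[] = {
    {"good one",  +0.2, +0.1,  0.0},
    {"shut up",   -0.3, +0.2, -0.1},
    {"thank you", +0.3,  0.0, +0.1},
};

static double pad[3] = {0.0, 0.0, 0.0};   /* P, A, D */

static void hear(const char *input)
{
    for (size_t i = 0; i < sizeof lexicon / sizeof lexicon[0]; i++)
        if (strstr(input, lexicon[i].phrase) != NULL) {
            pad[0] += lexicon[i].dp;      /* pleasure */
            pad[1] += lexicon[i].da;      /* arousal */
            pad[2] += lexicon[i].dd;      /* dominance */
        }
}

int main(void)
{
    hear("good one, thank you");
    printf("P=%.2f A=%.2f D=%.2f\n", pad[0], pad[1], pad[2]);
    return 0;
}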

From,
Nice coder