no-one can create ai


95 replies to this topic

#21 Promit   Moderators   -  Reputation: 7163


Posted 08 July 2003 - 05:45 PM

Penrose (The Emperor's New Mind) suggests that the electrobiochemical reactions in the brain may be somewhat quantum in nature and as such operate under a calculated uncertainty, and also that this is one of the reasons that traditional silicon-based computers cannot achieve the sort of effects humans can...


#22 Cyril   Members   -  Reputation: 122


Posted 08 July 2003 - 06:08 PM

Hi.

IMHO, the first step to a "learning AI" is to make it generate code on the fly, meaning that for each new thing discovered, it should create some new actions, and thus some new code. Hard-coding AI is a terrible mistake, once again IMHO.
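A minimal sketch of the "new actions, new code" idea Cyril describes, assuming a toy agent that registers actions at runtime. All names here are hypothetical; a real system would have to synthesise the action bodies itself rather than be handed them:

```python
# Hypothetical sketch: an agent whose action repertoire grows at runtime
# instead of being fixed at compile time. Illustrative only.

class Agent:
    def __init__(self):
        self.actions = {}          # action name -> callable

    def learn_action(self, name, fn):
        """Register a newly 'discovered' action while the agent is running."""
        self.actions[name] = fn

    def act(self, name, *args):
        if name not in self.actions:
            raise ValueError(f"unknown action: {name}")
        return self.actions[name](*args)

agent = Agent()
agent.learn_action("greet", lambda who: f"hello, {who}")
print(agent.act("greet", "world"))   # -> hello, world
```

The hard part Cyril glosses over, and that MikeD raises below, is where the new action bodies come from: here a human supplies them, which is exactly the hard-coding being criticised.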

#23 MikeD   Members   -  Reputation: 158


Posted 08 July 2003 - 10:46 PM

Promit: there are many reasons why a specific computer or simulation cannot be made intelligent, but obviously computers act on the quantum level just as electrobiochemical brains do (in the sense that quantum mechanics is the basis of all physics). To say this gives brains some functionality that is impossible to simulate (you could simulate quantum effects in a computer), and that this is the "special thing" preventing intelligence being man-made, detracts from the actual problems in artificial intelligence: that symbol-manipulating systems cannot understand; that too few experiments involve embodiment and situatedness; that people hugely underestimate the complexity of the dynamical interactions that occur in 3 trillion neurons (I think that's the count); that all intelligence emerges from the interactions between agents and their environments; and that the problem space of "being human" (which is what most people define as the goal of AI) arguably necessitates sensors, effectors and methods near identical to those we, as humans, have.

All IMHO of course, although I think I've argued all these points here before.

Cyril: how do you make a simulated AI learn new actions outside the scope of its currently programmed interactions with its simulation? The new actions humans have learnt have been of the extended phenotypic kind, i.e. tool use. By programming a simulation, you almost always program its limitations in first, then expect far too much of it.

Mike

#24 Popolon   Members   -  Reputation: 128


Posted 09 July 2003 - 01:57 AM

I don't agree with considering the use of a tool as learning a new action. The basic actions that humans can perform are JUST and ONLY to move the muscles of the body. And no more: you cannot perform any other action, nor learn any more actions during your life. You just learn new combinations of actions, and that's something that a computer program can perfectly do.

#25 MikeD   Members   -  Reputation: 158


Posted 09 July 2003 - 03:04 AM

Popolon: I think your definition of the term "action" is completely different from most other people's here. Action without environment is as meaningless as behaviour without environment.

#26 drslush   Members   -  Reputation: 122


Posted 09 July 2003 - 03:21 AM

"The question of whether computers can think is like the question of whether submarines can swim." - Dijkstra

#27 Timkin   Members   -  Reputation: 864


Posted 09 July 2003 - 02:50 PM

Not many people believe Penrose on that point... particularly not neurologists. Indeed, TENM is a pretty poor book when compared with other works on the subject. There is no substantive evidence to support the claim that quantum effects play a significant role in determining the outcome of the electro-chemical processes in the brain, which are the basis of information transfer. Indeed, the human olfactory system, which has been completely deciphered in terms of how it processes sensory information, requires no quantum effects to determine inference outcomes. It is unlikely then that other sensory systems require such effects. As to whether higher cognitive functions do, I doubt it. If the foundations of the brain don't rely on quantum effects, then it is unlikely that emergent activity built on these foundations exhibits quantum fluctuations.

Furthermore, we don''t think with single neuronal loops, but with aggregates of hundreds and even thousands of loops performing the same oscillation with a little variation on phase and amplitude. On these scales, quantum effects would be washed out and all we would see are averaged effects, which are trivially modelled mathematically and hence easily placed in the context of a computer simulation.

Indeed, the Dynamic Bayesian Network is an AI tool for modelling sequential decision processes in dynamic environments. The mathematical function underlying the DBN model is a more general, complex version of Schrödinger's equation, which describes the evolution of quantum wave functions. In other words, Schrödinger's equation is just a special (simpler) case of a DBN without observations!
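For comparison, the two evolution equations being related here can be written out in their standard textbook forms (not taken from the post): the Markov propagation step underlying a DBN, and the Schrödinger equation:

```latex
% Markov / DBN forward propagation of a state distribution
p(s_{t+\delta t}) = \int p(s_{t+\delta t} \mid s_t)\, p(s_t)\, ds_t

% Schrödinger equation: evolution of a quantum wave function
i\hbar \, \frac{\partial \psi(x,t)}{\partial t} = \hat{H}\, \psi(x,t)
```

Whether the second is strictly a "special case" of the first is Timkin's claim; the formal similarity is that both propagate a state distribution (or complex amplitude) forward in time under a linear operator.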

Cheers,

Timkin

#28 Timkin   Members   -  Reputation: 864


Posted 09 July 2003 - 02:59 PM

quote:
Original post by MikeD
that people hugely underestimate the complexity of the dynamical interactions that occur in 3 trillion neurons (I think that's the count),



100 billion neurons in the average brain. [Kandel, Schwartz and Jessell, Principles of Neural Science]

As to the underestimation of complexity, I think that's fairly true of many AI researchers who don't study neurology; however, there are quite a few research groups studying simulated neuronal systems (and I don't mean your standard ANN), their complexity, dynamics and performance.

Cheers,

Timkin

#29 MikeD   Members   -  Reputation: 158


Posted 09 July 2003 - 10:24 PM

quote:
Original post by Timkin
100 billion neurons in the average brain. [Kandel, Schwartz and Jessell, Principles of Neural Science]
Timkin


A power of ten out again and I even own that book

My Masters dissertation was on spiking neural networks and spike-timing-dependent plasticity, but that's about as close to a biological neuron as I've ever got.
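For readers unfamiliar with the spike-timing-dependent plasticity MikeD mentions: the core rule is that a synapse strengthens if the presynaptic neuron fires shortly before the postsynaptic one, and weakens if it fires after. A minimal sketch of the standard exponential pairing rule (constants are illustrative, not from any particular paper or from MikeD's dissertation):

```python
import math

# Illustrative STDP constants: potentiation/depression amplitudes and
# the time constant of the exponential window (ms).
A_PLUS, A_MINUS = 0.01, 0.012
TAU = 20.0

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post -> potentiation (strengthen)
        return A_PLUS * math.exp(-dt / TAU)
    else:         # post fired before (or with) pre -> depression (weaken)
        return -A_MINUS * math.exp(dt / TAU)

print(stdp_delta_w(10.0, 15.0) > 0)   # pre first: weight increases
print(stdp_delta_w(15.0, 10.0) < 0)   # post first: weight decreases
```

The asymmetry of the window is what makes the rule causal: inputs that predict a neuron's firing are reinforced, inputs that arrive too late are pruned.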

Out of interest, Timkin, do you know offhand how much biology I would need to take a taught Masters in Neurobiology? My entire background is in computing, from degree level upwards, and I haven't studied biology since I was 16. It's just something I've wanted to do (I'm tempted by the lifestyle of: make computer game > take Masters > make computer game > take Masters, etc.).

Mike

#30 UlfLivoff   Members   -  Reputation: 122


Posted 09 July 2003 - 10:44 PM

quote:
The basic actions that the humans can perform are JUST and ONLY move the muscles of the body. And not more, you cannot perform any other action, nor learn any more action during your life.



This is a widespread understanding and also what I learned in the class "Brain Physics".

However, I disagree: it is possible to change non-muscular parameters of my body in an indirect way. If I sit and think about concentration camps, rape, child abuse, etc., eventually my body temperature will rise and I will start sweating. Very small details, but in principle it shows that the brain controls more than *just* muscular activity. In the same way, I can affect my environment in other ways, without using muscles.

When it comes to the good old real-AI-or-not discussion, my view is that it's a question of definition. All too often people discuss details of intelligence without defining the word "intelligence". We don't need a universal, all-true definition, but merely statements like: "when I talk about intelligence in my post, I mean bla bla."

You tell me what you mean by intelligence, and I will tell you whether it can be put into a computer or not.

Another small comment: a very interesting definition of consciousness is "the ability to reflect on one's own cognitive [i.e. brain] processes".

think about that for a second

That definition fits well when it comes to humans and animals, and it can be shown that monkeys are conscious, but ants are not.

So, do computer programs have consciousness? Well, I don't know.
But some programs take a look at their own calculations and say: hey, is this result reasonable? If an animal did that, it would be considered a sign of consciousness. Interesting, right?
Perhaps consciousness and self-awareness aren't the big holy grail everybody makes them out to be.

I have a really weird friend; can you prove that he has self-awareness? Can you show that an Alzheimer's patient has?

EDIT: I knew I forgot something:
quote:
You just learn new combinations of actions, and that's something that a computer program can perfectly do.



Just because you can define all the 700+ muscles humans have in a computer program doesn't necessarily mean that you can ever simulate human behaviour. Just because I can define the 28 letters in the alphabet and do operations with them doesn't show that it is possible to make a computer program that writes Shakespeare. There is a classic AI problem that deals with this (symbol manipulation); it's called the Chinese Room.


*sigh*

Nice to get all this off my chest.

Ulf

[edited by - UlfLivoff on July 10, 2003 5:57:39 AM]

[edited by - UlfLivoff on July 10, 2003 6:02:44 AM]

#31 MikeD   Members   -  Reputation: 158


Posted 10 July 2003 - 02:21 AM

quote:
Original post by UlfLivoff
quote:
The basic actions that the humans can perform are JUST and ONLY move the muscles of the body. And not more, you cannot perform any other action, nor learn any more action during your life.



This is a widespread understanding and also what I learned in the class "Brain Physics"




Popolon's statement that I disagreed with was "I don't agree with considering the use of a tool as learning a new action." He says that moving the muscles is the _basic_ action the human body can perform, then goes on to say that the basic actions we perform are the only actions we can perform, and that all tool use, complex or not, is still just the action of moving our muscles. Our genotype maps onto our phenotype in many epigenetic and epistatic ways. Richard Dawkins argues that the extended phenotype, i.e. tool use, beaver dams, certain soft-shelled marine creatures using shells, etc., is a natural extension of this and that you could not split one from the other without drawing arbitrary lines in the sand.
As you've stated, whatever actions we can perform to change our own state in a vacuum (i.e. without environment) might be considered our basic actions. But tool use and environmental interaction are certainly extended actions as much as they are part of our extended phenotype.



#32 UlfLivoff   Members   -  Reputation: 122


Posted 10 July 2003 - 02:34 AM

Ah, I see.
But that pretty much boils the discussion down to the definition of the word "action", right?

In some situations, using a hammer is an action (for example in a computer game), but in a biological sense it's not an action.

I just felt like pointing out that I can manipulate my environment without using my muscles.

Anyone having views on my other points? I'll be glad to hear your opinions...


[edited by - UlfLivoff on July 10, 2003 9:35:47 AM]

#33 UlfLivoff   Members   -  Reputation: 122


Posted 10 July 2003 - 04:11 AM

Cyril:
quote:

IMHO, the first step to "learning AI", is to make it generate code on the fly, meaning that for each new thing discovered, it should create some new actions, and thus, some new code. Hard-coding AI is a terrible mistake, once again, IMHO.


You'd be surprised how many things are hard-coded in our brains.

Face recognition, reacting to a screaming child and fear of snakes, just to name a few.

Psychology books are full of examples of 'hardcoded' stuff.

Ulf


#34 Anonymous Poster_Anonymous Poster_*   Guests   -  Reputation:


Posted 10 July 2003 - 07:28 AM

The original poster apparently hasn't tried out the new Counter-Strike bots. Granted, this isn't in the realm of general AI or "real AI" as he puts it, but the bots do exhibit (an illusion of) some learning behaviour.

For example, when the bots are being defeated in a certain area of the level, they will be less likely to go there in the future. They will also shift their overall style of play between defensive and offensive depending on a morale system. The more skilled bots don't only aim better; they're also more aware of their surroundings and "know" which areas are good to hide in (both for themselves and the enemy).
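The area-avoidance behaviour described above can be sketched as a simple "danger map": areas where the bot keeps being defeated accumulate a score and become less likely to be chosen. This is a guess at the general technique, not the actual bot code; all names are illustrative:

```python
import random
from collections import defaultdict

# Hypothetical danger map: area name -> accumulated danger score.
danger = defaultdict(float)

def record_death(area):
    """Called whenever the bot is killed in a given area."""
    danger[area] += 1.0

def choose_area(areas, rng=random):
    """Pick a destination, weighting each area inversely to its danger."""
    weights = [1.0 / (1.0 + danger[a]) for a in areas]
    return rng.choices(areas, weights=weights, k=1)[0]

record_death("bombsite_a")
record_death("bombsite_a")
# "bombsite_a" now carries weight 1/3 versus 1.0 for "bombsite_b",
# so the bot still visits it occasionally but prefers the safer route.
print(choose_area(["bombsite_a", "bombsite_b"]))
```

Decaying the danger scores over time would let the bot "forgive" an area once the threat has moved on, matching the morale-driven shifts the poster describes.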

Another important point is that they have the same sensory inputs as the player, rather than typical computer-like omniscience. This alone makes them act more human because they can be surprised or ambushed.

#35 5010   Members   -  Reputation: 122


Posted 10 July 2003 - 08:54 AM

Does anyone think the original poster even bothered reading the responses to the claim made? I don't see evidence of that. However, the discussion in the rest of the thread has been interesting and enlightening.

#36 starchy   Members   -  Reputation: 122


Posted 10 July 2003 - 09:58 AM

quote:
Original post by UlfLivoff
You'd be surprised how many things are hard-coded in our brains.

Face recognition, reacting to a screaming child and fear of snakes, just to name a few.

Psychology books are full of examples of 'hardcoded' stuff.

Ulf



Well, if you're telling us you read about it in a book somewhere, then it must be true!

Unfortunately, I am not afraid of snakes.

#37 UlfLivoff   Members   -  Reputation: 122


Posted 10 July 2003 - 11:50 AM

Good for you. And now for a little exercise:

Consider how many people are afraid of spiders.
Then think of how many people are afraid of cars.
Compare the number of people killed by spiders each year to the number killed by cars each year.

One fear (stronger or weaker) is 'hardcoded' in the brain of most people; the other is not.


[edited by - UlfLivoff on July 10, 2003 6:51:18 PM]

#38 Timkin   Members   -  Reputation: 864


Posted 10 July 2003 - 02:12 PM

quote:
Original post by MikeD
A power of ten out again and I even own that book



Hey, an order of magnitude error isn't that bad... it'd certainly be acceptable in astrophysics!

quote:
Original post by MikeD
Out of interest Timkin, do you, off hand, know how much biology I would need to know to take a taught Masters in Neurobiology?



How does one quantify a volume of knowledge? The main issue would be your understanding of the literature and your ability to a) identify the relevance of your research; and b) place your research in the context of other research in the field.

That really depends on you. Personally I have no doubt that you could learn the requisite background material... the question is, do you really want to???

Cheers,

Timkin


#39 Timkin   Members   -  Reputation: 864


Posted 10 July 2003 - 02:20 PM

Within the academic AI community at least, an action is defined to be any cause of a state change, initiated by an agent, that isn't explained by the transition laws of the environment. Typically the environment transition would be described by s(t+dt) = f(s(t)). An action is anything that causes a different state s'(t+dt) when the environment started in state s(t); this might be represented as s'(t+dt) = f(s(t), a(t)).

In terms of probabilistic representations, the environment transition function would be defined by the conditional probability density function p(s(t+dt)|s(t)), at least for Markovian processes. If you understand probabilities, you'll recognise that p(s(t+dt)|s(t),a(t)) is a very different beast.
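The deterministic version of this definition can be made concrete with a toy environment (the dynamics here are invented purely for illustration): the environment has a natural transition law, and an action is anything that makes the next state differ from what that law alone would produce.

```python
# Toy 1-D environment illustrating s(t+dt) = f(s(t)) versus
# s'(t+dt) = f(s(t), a(t)). The drift law is illustrative only.

def transition(s, a=None):
    """Natural law: state drifts down by 1 each step. An action perturbs it."""
    if a is None:
        return s - 1          # environment's own transition law
    return s - 1 + a          # agent-initiated deviation from that law

s = 10
print(transition(s))          # -> 9   (no agent intervention)
print(transition(s, a=3))     # -> 12  (the action caused a different state)
```

The probabilistic analogue replaces these functions with the densities Timkin writes: p(s(t+dt)|s(t)) for the bare environment, and p(s(t+dt)|s(t),a(t)) once the agent's choice enters as a conditioning variable.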

Remember though, this is just the academic (AI/scientific/engineering) definition of action and may differ from the psychological or physiological definition of action... which I don't believe I'm qualified to comment on.

Cheers,

Timkin

#40 Anonymous Poster_Anonymous Poster_*   Guests   -  Reputation:


Posted 10 July 2003 - 02:50 PM

quote:
Original post by UlfLivoff
Good for you. And now for a little exercise:

Consider how many people are afraid of spiders.
Then think of how many people are afraid of cars.
Compare the number of people killed by spiders each year to the number killed by cars each year.

One fear (stronger or weaker) is 'hardcoded' in the brain of most people; the other is not.




That has an easy explanation: people are used to seeing cars and driving them, but people aren't used to dealing with snakes or spiders. That's definitely not 'hardcoded' in our brains.



