
yumi_cheeseman

no-one can create ai


It's funny how we in this forum have classified AI. At the moment, probably no one here has created a real AI: one that actually learns and can be taught how to respond like a human. Everyone here classifies AI as action and response; game AI is made to respond to an action, such as "if the bot detects the human, the bot attacks the human". This is why playing against bots in games such as Half-Life and Unreal is so boring. All a harder bot does is become more accurate. A bot doesn't figure out your strategy or weakness and exploit it in the next round, does it? A human can. A human has the ability to far outplay a bot, because of the human's ability to analyse the other players and take advantage, say by hiding in an alleyway. This is why you are not talking about real AI.

We sure have created artificial intelligence. The keyword here is 'artificial': if you create something that appears to have some sort of intelligence, it can be described as AI. Besides, are you sure that the human mind isn't purely mechanistic?

The discussion here is about what is intelligence and what is not, and I don't believe anyone has an answer for that. Is a chess computer smart or intelligent? Probably not; it only has simple mathematical rules to follow.
You can't tell whether you're playing against a computer or a human if you're playing chess and can't see the opponent. So wouldn't that be "real" AI?
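
As a rough illustration of what "simple mathematical rules to follow" can mean, here is a minimal minimax sketch for the much simpler game of Nim (take 1-3 stones; whoever takes the last stone wins). This is purely a hypothetical example, not code from any real chess program; a chess engine adds an evaluation function and pruning, but the principle of mechanically applying a value rule over a search tree is the same.

// Minimal minimax sketch for Nim: players alternately take 1-3 stones,
// and whoever takes the last stone wins. Returns +1 if the player to
// move can force a win, -1 otherwise.
#include <algorithm>
#include <iostream>

int minimax(int stones) {
    if (stones == 0) return -1;  // previous player took the last stone: we have lost
    int best = -1;
    for (int take = 1; take <= 3 && take <= stones; ++take)
        best = std::max(best, -minimax(stones - take));  // opponent's loss is our win
    return best;
}

int main() {
    for (int s = 1; s <= 10; ++s)
        std::cout << s << " stones: " << (minimax(s) > 0 ? "win" : "loss")
                  << " for the player to move\n";
}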

Learning is not a required component of intelligence. (My brother is a prime example.) A creature can be intelligent and respond to its surroundings through genetics and instinct - i.e. its "programming".

Dave Mark - President and Lead Designer
Intrinsic Algorithm -
"Reducing the world to mathematical equations!"

quote:
Original post by yumi_cheeseman
At the moment, probably no one here has created a real AI.



Actually, I have... she's two months old and is already displaying some of the attributes of an intelligent, conscious agent. She's definitely artificial in that my wife and I made her... I'm guessing she'll be ready for the Turing test after a few more years of training.


As to that other sort of artificial intelligence... the sort imbued into computers... before you go and denounce what people are doing, why don't you define what you mean by 'real AI' and then offer people the opportunity to discuss whether it is possible to create such a thing, rather than just tell us that we cannot.

Timkin


The biggest problem for AI, I think, is going to be the learning. We learned through theories, experiments, and failures. How you're going to duplicate this in a computer is a tough question. If the AI can't validate its theories, then it won't learn and build upon existing knowledge, unless there is another learning process I'm unaware of. I think the AI needs a context in which it lives and interacts, as I don't think it can live in a vacuum. For us, the environment is the context and survival is the goal.
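
As a hedged sketch of that theory/experiment/failure loop, the toy program below lets an agent keep a running estimate (its "theory") of how well each of a few actions works, test those estimates by acting in its environment, and revise them from the observed outcomes. The environment, the tryAction helper, and the success rates are all invented for illustration.

// Toy "theory -> experiment -> update" loop. The environment and its
// success rates are hypothetical; the agent only sees the outcomes.
#include <array>
#include <cstdlib>
#include <iostream>

constexpr int kActions = 3;

// The context the agent lives in: action 2 secretly succeeds most often.
bool tryAction(int action) {
    static const double successRate[kActions] = {0.2, 0.5, 0.8};
    return (std::rand() / (double)RAND_MAX) < successRate[action];
}

int main() {
    std::array<double, kActions> estimate{};  // the agent's current "theories"
    std::array<int, kActions> trials{};

    for (int step = 0; step < 1000; ++step) {
        int a = 0;
        if (std::rand() % 10 == 0) {
            a = std::rand() % kActions;                    // occasionally run a fresh experiment
        } else {
            for (int i = 1; i < kActions; ++i)
                if (estimate[i] > estimate[a]) a = i;      // otherwise act on the best current theory
        }
        bool success = tryAction(a);                       // validate the theory against the environment
        ++trials[a];
        estimate[a] += ((success ? 1.0 : 0.0) - estimate[a]) / trials[a];  // incremental average
    }

    for (int i = 0; i < kActions; ++i)
        std::cout << "action " << i << ": estimated success rate " << estimate[i] << "\n";
}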

Fox: I would say that learning or, more generally, adaptation is the decisive metric for defining real intelligence. It's the difference between an autonomous agent and an automaton. If you do not adapt to your environment, you are merely an automaton following a predefined set of rules (however your brain might work). If you do adapt to the environment (even if it is maladaptation to some degree), then you are autonomous: you dynamically adjust your rule set based on your structural coupling to that environment (your structural coupling defining the domain of perturbations you and your environment can undergo through interactions with each other). I'm using somewhat esoteric language here, so anyone can ask what the hell I mean by it.
By adaptation I mean everything that changes in you through interaction with your environment, even gaining memories, changing gut instincts, and other low-level intelligence stuff.

I think adaptation is the essential difference that denotes an important step up in the kind of intelligence an agent has. Consider it the difference between being able to regurgitate facts to get a high IQ (expressing a rule set) vs. being able to learn them in the first place and supply solutions to novel problems.

All IMHO of course

Mike

Guest Anonymous Poster
Hello
I have read this thread with interest. It is a field I have dealt with in my degree for four years. Last year I wrote some of my thoughts on the matter down in an essay. It can be found at

http://www.richardjones.info/home/interests_ac_intel.html

I hope that it is constructive in some of what it says

Richard

The moment an AI technique is mastered, it's no longer AI, but just another algorithm. But, Yumi, you're displaying a lack of knowledge: modern AI game players are more 'intelligent' than just being more accurate with their weapons. Halo bots will take effective cover, and Unreal bots will predict where you're going to be and intercept you (it's a little freaky). These techniques take CPU time and research, and both are continually increasing.

In fact, a lot of games pare down the accuracy of bots to make them seem more human, and use better learning and prediction to be more difficult.

AI isn't just about acting like a human, which is only one model of intellect. It's about responding with prediction and learning. It's getting better, not staying the same.
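
For the kind of prediction mentioned above, one common and simple approach, sketched here under the assumption that the target keeps its current velocity (this is not the actual Unreal or Halo code, and all names are hypothetical), is to estimate the projectile's time of flight and aim at where the target will be by then:

// Rough sketch of a bot leading its target: iterate between estimating
// the time of flight to the aim point and moving the aim point to where
// the target will be after that time.
#include <cmath>
#include <iostream>

struct Vec2 { double x, y; };

Vec2 predictAimPoint(Vec2 shooter, Vec2 target, Vec2 targetVel, double projectileSpeed) {
    Vec2 aim = target;
    // A few fixed-point iterations are usually enough for a game bot.
    for (int i = 0; i < 3; ++i) {
        double dx = aim.x - shooter.x, dy = aim.y - shooter.y;
        double t = std::sqrt(dx * dx + dy * dy) / projectileSpeed;  // time of flight to aim point
        aim = { target.x + targetVel.x * t, target.y + targetVel.y * t };
    }
    return aim;
}

int main() {
    Vec2 aim = predictAimPoint({0, 0}, {10, 0}, {0, 3}, 20.0);
    std::cout << "lead the target: aim at (" << aim.x << ", " << aim.y << ")\n";
}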

I like pie.

quote:
Fox: I would say that learning or, more generally, adaptation is the decisive metric for defining real intelligence.

I agree here.

And render target: they are not intelligent. They are just told to find the way you have gone; they are not actually changing.

____________________________________________________________

As you can see I ramble, but at least I have my own opinion.
Have a nice day.

Actually, darkawakenings, it's electrobiochemical, but if you meant that it's deterministic and subject to the laws of physics like everything else, then yes, it is. If you then mean to say that it is possible to create AI, since it has been created in the past and evolution is not a unique mechanism with which to create intelligence, I would say I generally agree (though I can't say for certain and have some pragmatic and theoretical reservations). If you were going to say that we (i.e. humans) could create AI (and by this I mean real AI with understanding, not just intelligent behaviour, which we've already created) by default, just because it's possible, I would say that is no certainty, and I would like to hear your reasons.

Mike

> Is a chess computer smart or intelligent?
No, as it cannot learn how to play chess, or how to improve itself.

> But, Yumi, you're displaying a lack of knowledge: modern AI game players are
> more 'intelligent' than just being more accurate with their weapons.
> Halo bots will take effective cover, and Unreal bots will predict where
> you're going to be and intercept you (it's a little freaky)

Actually, there is one game where some kind of AI learned how to improve itself and how to beat you.
It was an RTS called Conflict Zone.
Well, it wasn't intelligent; it was just using weights to decide what should be done next, i.e. if it seems like the human player likes building armies of tanks, then increase the weight of planes and helicopters, and so on.
Still, I think that was the best AI ever seen in an RTS game, but the other aspects (rendering, user interface, etc.) were too weak and the game was a flop.
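
A hedged sketch of that weight scheme might look like the following. The unit names, counter table and increments are invented for illustration; this is not Conflict Zone's actual code, just the general idea of tallying what the human builds and shifting build weights toward known counters.

// Count the enemy's army composition and bias our build weights toward
// whatever counters it. Counter table and weights are hypothetical.
#include <iostream>
#include <map>
#include <string>

int main() {
    // What the AI currently prefers to build, as relative weights.
    std::map<std::string, double> buildWeight = {
        {"tank", 1.0}, {"plane", 1.0}, {"helicopter", 1.0}, {"anti_air", 1.0}};

    // Which of our units counters which enemy unit (assumed, for the example).
    std::map<std::string, std::string> counterOf = {
        {"tank", "helicopter"}, {"plane", "anti_air"}, {"helicopter", "anti_air"}};

    // Observed: the human player keeps massing tanks.
    std::map<std::string, int> enemyArmy = {{"tank", 12}, {"plane", 1}};

    for (const auto& [unit, count] : enemyArmy) {
        auto it = counterOf.find(unit);
        if (it != counterOf.end())
            buildWeight[it->second] += 0.1 * count;   // lean toward the counter unit
    }

    for (const auto& [unit, w] : buildWeight)
        std::cout << unit << " weight: " << w << "\n";
}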

Whenever the AI community solves a problem, the solution ceases to be AI in the public imagination; it's just another computer program. So AI researchers who succeed are never really credited with furthering AI research; they just invented another algorithm. That doesn't mean that, in reality, it's not AI of a sort. We just don't see it that way.

As for strong AI, who cares? That'll happen around the same time we invent personal teleporters, drive around in flying cars, and vacation on the moon. Let 'em try, though. It's always healthy to have someone working toward the impossible; they figure out useful stuff in the process.

The reason AI becomes just another computer algorithm once it's developed is that we all have the same vague idea of what intelligence is, and it's a lot more than a computer vision algorithm, although that's likely part of it.

If you grow an eyeball from a human stem cell, you wouldn''t say that you cloned a human, would you?

Penrose (The Emperor's New Mind) suggests that the electrobiochemical reactions in the brain may be somewhat quantum in nature and as such operate under a calculated uncertainty, and also that this is one of the reasons that traditional silicon-based electronic computers cannot achieve the sort of effects humans can...

Hi.

IMHO, the first step toward a "learning AI" is to make it generate code on the fly, meaning that for each new thing discovered, it should create some new actions, and thus some new code. Hard-coding AI is a terrible mistake, once again IMHO.

Promit: there are many reasons why a specific computer or simulation cannot be made intelligent, but obviously they act on the quantum level the same as electrobiochemical brains do (in the sense that quantum mechanics is the basis of all physics). To say that this gives brains some functionality that is impossible to simulate (you could simulate quantum effects in a computer), and that this is the "special thing" that prevents intelligence being man-made, detracts from the actual problems in artificial intelligence. Those being: that symbol-manipulating systems cannot understand; that too few experiments involve embodiment and situatedness; that people hugely underestimate the complexity of the dynamical interactions that occur among 3 trillion neurons (I think that's the count); that all intelligence emerges from the interactions between agents and their environments; and that the problem space of "being human" (which is what most people define as the goal of AI) arguably necessitates sensors, effectors and methods nearly identical to those we, as humans, have.

All IMHO of course, although I think I've argued all these points here before.

Cyril: How do you make a simulated AI learn new actions outside the scope of its currently programmed interactions with its simulation? The new actions humans have learnt have been of the extended phenotypic kind, i.e. tool use. By programming a simulation, you almost always program its limitations in first, and then expect far too much of it.

Mike

I don't agree with considering the use of a tool as learning a new action. The basic actions that humans can perform are JUST and ONLY moving the muscles of the body. Nothing more: you cannot perform any other action, nor learn any new action during your life. You just learn new combinations of actions, and that's something that a computer program can perfectly well do.
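
To make that concrete, here is a tiny sketch (all names hypothetical) of a program whose primitive actions are fixed, but which "learns" new behaviour simply by storing and replaying new combinations of them:

// Fixed primitive actions, plus learned behaviours stored as named
// sequences of those primitives. Names and primitives are made up.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Primitive = std::function<void()>;

int main() {
    // The fixed repertoire -- the equivalent of "move the muscles of the body".
    std::map<std::string, Primitive> primitives = {
        {"step_forward", [] { std::cout << "step forward\n"; }},
        {"turn_left",    [] { std::cout << "turn left\n"; }},
        {"grasp",        [] { std::cout << "grasp\n"; }}};

    // Learned behaviours are just new named combinations of primitives.
    std::map<std::string, std::vector<std::string>> learned;
    learned["pick_up_object_on_left"] = {"turn_left", "step_forward", "grasp"};

    // Executing a learned combination.
    for (const std::string& name : learned["pick_up_object_on_left"])
        primitives[name]();
}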

Popolon: I think your definition of the term 'action' is completely different from that of most other people here. Action without environment is as meaningless as behaviour without environment.
