
yumi_cheeseman

no-one can create ai

95 posts in this topic

"The question of whether computers can think is like the question of whether submarines can swim." - Dijkstra

----

Not many people believe Penrose on that point... particularly not neurologists. Indeed, The Emperor's New Mind is a pretty poor book when compared with other works on the subject. There is no substantive evidence to support the claim that quantum effects play a significant role in determining the outcome of the electro-chemical processes in the brain, which are the basis of information transfer. Indeed, the human olfactory system, which has been completely deciphered in terms of how it processes sensory information, requires no quantum effects to determine inference outcomes. It is unlikely, then, that other sensory systems require such effects. As to whether higher cognitive functions do, I doubt it. If the foundations of the brain don't rely on quantum effects, then it is unlikely that emergent activity built on those foundations exhibits quantum fluctuations.

Furthermore, we don't think with single neuronal loops, but with aggregates of hundreds or even thousands of loops performing the same oscillation with a little variation in phase and amplitude. On these scales, quantum effects would be washed out and all we would see are averaged effects, which are trivially modelled mathematically and hence easily placed in the context of a computer simulation.
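To make that averaging argument concrete, here is a minimal sketch (my own illustration, with invented numbers, not anything from the neuroscience literature): sum a few thousand noisy oscillators with small phase and amplitude jitter, and the population average comes out as a clean sinusoid whose residual fluctuations shrink like 1/sqrt(N).

```python
import numpy as np

# Minimal sketch of the averaging argument: N noisy oscillators with small
# random phase/amplitude variation. The population mean converges to a smooth
# sinusoid; per-loop stochastic jitter washes out roughly as 1/sqrt(N).
rng = np.random.default_rng(0)
N = 5000                                        # number of neuronal loops
t = np.linspace(0.0, 1.0, 1000)                 # one second of activity
freq = 40.0                                     # a 40 Hz gamma-band rhythm

phase = rng.normal(0.0, 0.1, size=(N, 1))       # small phase variation
amp = 1.0 + rng.normal(0.0, 0.05, size=(N, 1))  # small amplitude variation
noise = rng.normal(0.0, 0.5, size=(N, t.size))  # per-loop stochastic jitter

loops = amp * np.sin(2 * np.pi * freq * t + phase) + noise
mean_signal = loops.mean(axis=0)

# Residual deviation from the ideal sinusoid is tiny and shrinks with N.
ideal = np.sin(2 * np.pi * freq * t)
print("residual std:", (mean_signal - ideal).std())  # on the order of 0.01
```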

Indeed, the Dynamic Bayesian Network is an AI tool for modelling sequential decision processes in dynamic environments. The mathematical function underlying the DBN model is a more general, complex version of Schrödinger's equation, which describes the evolution of quantum wave functions. In other words, Schrödinger's equation is just a special (simpler) case of a DBN without observations!
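For the curious, the parallel can be written out side by side (my paraphrase of the standard forms, not anything more than that):

```latex
% DBN/Markov prediction step: the next-state density is the current density
% pushed through the transition kernel (Chapman-Kolmogorov form)
p(s_{t+\Delta t}) = \int p\!\left(s_{t+\Delta t} \mid s_t\right) p(s_t)\, \mathrm{d}s_t

% Schrodinger evolution in propagator form: structurally the same update,
% with the complex amplitude \psi in place of p and the propagator K as kernel
\psi(x, t+\Delta t) = \int K(x, t+\Delta t;\, x', t)\, \psi(x', t)\, \mathrm{d}x'
```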

Cheers,

Timkin

----

quote:
Original post by MikeD
that people hugely underestimate the complexity of the dynamical interactions that occur in 3 trillion neurons (I think that's the count),



100 billion neurons in the average brain. [Kandel, Schwartz and Jessell, Principles of Neural Science].

As to the underestimation of complexity, I think that's fairly true of many AI researchers who don't study neurology; however, there are quite a few research groups studying simulated neuronal systems (and I don't mean your standard ANN) and their complexity, dynamics, and performance.

Cheers,

Timkin

----

quote:
Original post by Timkin
100 billion neurons in the average brain. [Kandel, Schwartz and Jessell, Principles of Neural Science].
Timkin


A power of ten out again, and I even own that book.

My Masters dissertation was on spiking neural networks and spike-timing-dependent plasticity, but that's about as close to a biological neuron as I've ever got.

Out of interest Timkin, do you, off hand, know how much biology I would need to know to take a taught Masters in Neurobiology? My entire background is in computing from degree level upwards, and I haven't studied biology since I was 16. It's just something I've wanted to do (I'm tempted by the lifestyle of make computer game > take Masters > make computer game > take Masters, etc.).

Mike

----

quote:
The basic actions that the humans can perform are JUST and ONLY move the muscles of the body. And not more, you cannot perform any other action, nor learn any more action during your life.



This is a widespread understanding, and also what I learned in the class "Brain Physics".

However, I disagree: it is possible to change non-muscular parameters of my body in an indirect way. If I sit and think about concentration camps, rape, child abuse, etc., eventually my body temperature will rise and I will start sweating. Very small details, but in principle it shows that the brain controls more than *just* muscular activity. In the same way, I can affect my environment in other ways, without using muscles.

When it comes to the good old real-AI-or-not discussion, my view is that it's a question of definition. All too often people discuss details of intelligence without defining the word "intelligence". We don't need a universal, absolutely true definition, just statements like: when I talk about intelligence in my post, I mean bla bla.

You tell me what you mean by intelligence, and I will tell you whether it can be put into a computer or not.

Another small comment: a very interesting definition of consciousness is "the ability to reflect on one's own cognitive [i.e. brain] processes".

Think about that for a second.

That definition fits well when it comes to humans and animals, and it can be shown that monkeys are conscious, but ants are not.

So, do computer programs have consciousness? Well, I don't know.
But some programs take a look at their own calculations and say: hey, is this result reasonable? If an animal did that, it would be considered a sign of consciousness. Interesting, right?
Perhaps consciousness and self-awareness aren't the big holy grail everybody makes them out to be.

I have a really weird friend; can you prove that he has self-awareness? Can you show that an Alzheimer's patient has?

EDIT: I knew I forgot something:
quote:
You just learn new combinations of actions, and that's something that a computer program can perfectly do.



Just because you can define all the 700+ muscles humans have in a computer program doesn't necessarily mean that you can ever simulate human behaviour. Just because I can define the 28 letters in the alphabet and do operations with them doesn't show that it is possible to make a computer program that writes Shakespeare. There is a classic AI problem that deals with this (symbol manipulation); it's called the Chinese Room or something like that.


*sigh*

Nice to get all this off my chest.

Ulf

[edited by - UlfLivoff on July 10, 2003 5:57:39 AM]

[edited by - UlfLivoff on July 10, 2003 6:02:44 AM]

----

quote:
Original post by UlfLivoff
quote:
The basic actions that the humans can perform are JUST and ONLY move the muscles of the body. And not more, you cannot perform any other action, nor learn any more action during your life.



This is a widespread understanding, and also what I learned in the class "Brain Physics".




Popolon's statement that I disagreed with was "I don't agree with considering the use of a tool as learning a new action." He said that using the muscles is the _basic_ action that the human body can perform, then went on to say that the basic actions we perform are the only actions we can perform, and that all tool use, complex or not, is still just the action of moving our muscles. Our genotype maps onto our phenotype in many epigenetic and epistatic ways. Richard Dawkins argues that the extended phenotype, i.e. tool use, beaver dams, certain soft-shelled marine creatures using shells, etc., is a natural extension of this, and that you cannot split one from the other without drawing arbitrary lines in the sand.
As you've stated, whatever actions we can perform to change our own state in a vacuum (i.e. without environment) might be considered our basic actions. But tool use and environmental interaction are certainly extended actions, as much as they are part of our extended phenotype.


----

Ah, I see.
But that pretty much boils the discussion down to the definition of the word "action", right?

In some situations, using a hammer is an action (for example, in a computer game), but in a biological sense it's not an action.

I just felt like pointing out that I can manipulate my environment without using my muscles.

Anyone have views on my other points? I'll be glad to hear your opinions...


[edited by - UlfLivoff on July 10, 2003 9:35:47 AM]

----

Cyril:
quote:

IMHO, the first step to "learning AI", is to make it generate code on the fly, meaning that for each new thing discovered, it should create some new actions, and thus, some new code. Hard-coding AI is a terrible mistake, once again, IMHO.


You'd be surprised how many things are hard-coded in our brains.

Face recognition, reacting to a screaming child, and fear of snakes, just to name a few.

Psychology books are full of examples of 'hardcoded' stuff.

Ulf

----

The original poster apparently hasn't tried out the new Counter-Strike bots. Granted, this isn't in the realm of general AI or "real AI" as he puts it, but the bots do exhibit (an illusion of) some learning behaviour.

For example, when the bots are being defeated in a certain area of the level, they will be less likely to go there in the future. They will also shift their overall style of play between defensive and offensive depending on a morale system. The more skilled bots don't only aim better, they're also more aware of their surroundings and "know" which areas are good to hide in (both for themselves and the enemy).

Another important point is that they have the same sensory inputs as the player, rather than typical computer-like omniscience. This alone makes them act more human, because they can be surprised or ambushed.
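That area-avoidance behaviour is easy to sketch in code. A toy version (all names and constants here are my own invention, not the actual Counter-Strike bot code): keep a per-area danger score that is bumped on each death and decays over time, then weight area choices against the dangerous spots.

```python
import random

# Toy sketch of place-avoidance learning. Everything here is invented for
# illustration; it is not the actual Counter-Strike bot implementation.
class AreaMemory:
    def __init__(self, decay=0.95):
        self.danger = {}           # area id -> accumulated danger score
        self.decay = decay

    def record_death(self, area_id, amount=1.0):
        self.danger[area_id] = self.danger.get(area_id, 0.0) + amount

    def tick(self):
        # Danger fades over time, so the bot eventually revisits old hot spots.
        for area_id in self.danger:
            self.danger[area_id] *= self.decay

    def choose_area(self, candidates):
        # Safer areas are proportionally more likely to be chosen.
        weights = [1.0 / (1.0 + self.danger.get(a, 0.0)) for a in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

memory = AreaMemory()
memory.record_death("bombsite_a")
memory.record_death("bombsite_a")
print(memory.choose_area(["bombsite_a", "bombsite_b"]))  # biased toward B
```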

----

Does anyone think the original poster even bothered reading the responses to the claim made? I don't see evidence of that. However, the discussion among the rest of the thread has been interesting and enlightening.

----

quote:
Original post by UlfLivoff
You'd be surprised how many things are hard-coded in our brains.

Face recognition, reacting to a screaming child, and fear of snakes, just to name a few.

Psychology books are full of examples of 'hardcoded' stuff.

Ulf



Well, if you're telling us you read about it in a book somewhere, then it must be true!

Unfortunately, I am not afraid of snakes.

----

Good for you. And now for a little exercise:

Consider how many people are afraid of spiders.
Then think of how many people are afraid of cars.
Compare the number of people killed by spiders each year to the number killed by cars each year.

One fear (stronger or weaker) is 'hardcoded' into the brain in most people; the other is not.


[edited by - UlfLivoff on July 10, 2003 6:51:18 PM]

----

quote:
Original post by MikeD
A power of ten out again, and I even own that book.



Hey, an order of magnitude error isn't that bad... it'd certainly be acceptable in astrophysics!

quote:
Original post by MikeD
Out of interest Timkin, do you, off hand, know how much biology I would need to know to take a taught Masters in Neurobiology?



How does one quantify a volume of knowledge? The main issue would be your understanding of the literature and your ability to a) identify the relevance of your research; and b) place your research in the context of other research in the field.

That really depends on you. Personally I have no doubt that you could learn the requisite background material... the question is, do you really want to???

Cheers,

Timkin

----

Within the academic AI community at least, an action is defined to be any cause of a state change initiated by an agent that isn't explained by the transition laws of the environment. Typically the environment transition would be described by s(t+dt) = f(s(t)). An action is anything that causes a different state s'(t+dt) given that the environment started in the same state s(t). This might be represented as s'(t+dt) = f(s(t), a(t)).

In terms of probabilistic representations, the environment transition function would be defined by the conditional probability density function p(s(t+dt) | s(t)), at least for Markovian processes. If you understand probabilities, you'll recognise that p(s(t+dt) | s(t), a(t)) is a very different beast.

Remember though, this is just the academic (AI/scientific/engineering) definition of action, and it may differ from the psychological or physiological definition of action... which I don't believe I'm qualified to comment on.
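Those two formulations translate directly into code. Here is a minimal sketch (purely illustrative, with a toy transition function of my own choosing) of the deterministic law with and without an action, plus the Markovian version as a conditional sampler:

```python
import random

# Deterministic environment law: s(t+dt) = f(s(t)).
def f(state):
    return state + 1.0            # toy "physics": the state drifts upward

# With an agent: s'(t+dt) = f(s(t), a(t)). The action changes the outcome
# relative to what the environment law alone would have produced.
def f_with_action(state, action):
    return state + 1.0 + action

# Probabilistic (Markovian) version: sample from p(s(t+dt) | s(t), a(t)).
# With action = 0 this collapses back to p(s(t+dt) | s(t)).
def sample_next(state, action=0.0, noise=0.1):
    return random.gauss(f_with_action(state, action), noise)

s = 0.0
print(f(s))                    # environment alone: 1.0
print(f_with_action(s, 2.0))   # agent acted: 3.0 != 1.0, a genuine action
print(sample_next(s, 2.0))     # stochastic transition centred on 3.0
```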

Cheers,

Timkin

----

quote:
Original post by UlfLivoff
Good for you. And now for a little exercise:

Consider how many people are afraid of spiders.
Then think of how many people are afraid of cars.
Compare the number of people killed by spiders each year to the number killed by cars each year.

One fear (stronger or weaker) is 'hardcoded' into the brain in most people; the other is not.

[edited by - UlfLivoff on July 10, 2003 6:51:18 PM]


That has an easy explanation: people are used to seeing cars and driving them, but people aren't used to dealing with snakes or spiders. That's definitely not 'hardcoded' into our brains.

----

quote:
Original post by Anonymous Poster
That has an easy explanation: people are used to seeing cars and driving them, but people aren't used to dealing with snakes or spiders. That's definitely not 'hardcoded' into our brains.


People grow up seeing cars every day. People also grow up seeing spiders almost every day (unless you're living in some oxygen tent, there is probably a spider within a metre of you right now). Arachnophobia is orders of magnitude more common than a phobia of cars. You do the math.


----

OK, let's take another example. Place a hungry newborn baby on its mother and it will automatically crawl to the breast. Now how did it know that there was gonna be food there?

These are mere examples, not explanations, but if you still disagree, I suggest you read some books on the topic. I'm not referring to a specific book; any will do.

Ulf

----

Pipo DeClown: you haven't really got the hang of reasoned arguments, have you?

Mike

P.S. The same could occasionally be said of many here, including myself.

----

UlfLivoff: the smell of milk, my friend. I suggest you smell a few titties, and you'll see the truth in that answer.
The smell triggers reactions in the baby, like opening the mouth, grasping, etc. Note how a baby, when offered anything that vaguely has the shape of a nipple, will suck without ever questioning what it is that it is sucking.

Who said the whole suckling mechanism was proof of intelligence? If you wanted to show it was a hardcoded behaviour, good news: it is. Otherwise, I am not sure what you are getting at.


Sancte Isidore ora pro nobis!

----

quote:
Otherwise, I am not sure what you are getting at.


If you read the previous posts, then it's pretty obvious what we're discussing.

It's funny how some programmers with no experience in psychology are 100% confident in their own homemade psychological theories.

At least I've read a few books on the topic, and it's their theories I am referring to here...

Reminds me of Bertrand Russell's wise words:

The problem with humans is that stupid people are always 100% confident that what they're doing is right, whereas intelligent people are always doubtful.

----

quote:
Original post by UlfLivoff
OK, let's take another example. Place a hungry newborn baby on its mother and it will automatically crawl to the breast. Now how did it know that there was gonna be food there?



Newborn babies don't crawl...

There are several hard-coded stimulus-response behaviours that babies have that enable them to take a nipple, and an autonomic suckling action as well. We've evolved with these... but put a baby out of reach of a lactating breast and it won't know where to go or how to get there... it might smell the milk and get excited... but that's a different story altogether!

Timkin



----

Newborn babies of _our_ species don't crawl.

But apparently the word "baby" can refer to other species as well (I looked it up to be sure).

Take the kangaroo and other marsupials, for example. The baby kangaroo, looking like a foetus and being absolutely tiny in proportion to a joey (which is a young kangaroo, in case you didn't know), crawls up the mother's fur at birth and finds its way into the pouch for the last several months of development into a young kangaroo, attaching itself to the mother's teat inside the pouch. I can't say it's performing lactotaxis, but it performs a set of hard-coded behaviours involving crawling towards milk the second it's born. These kangaroo babies are approximately 2.5 centimetres (1 inch) in length.
Some argue that the reason humans are so unable to care for themselves at birth is that the high level of plasticity and potential for adaptation we have necessitates a lack of hard coding at birth (we still have hard coding, but a lot less than (perhaps almost) all other species).
This detachment from hard wiring in the brain might allow our bodies and brains to evolve more swiftly (you can evolve the body, but you can't evolve it away from the hard coding, so you can't evolve it very far before letting the hard coding catch up). This raises the question of whether evolvability itself is an evolutionary advantage, giving an individual increased fitness on an evolutionary scale. It probably is.

Mike