Burnt_Fyr

Alan Turing

18 posts in this topic

I'm hovering between being mildly impressed and calling bullshit. The whole "acts like a 13-year-old on the internet" angle really feels like it's skating the intent of the test, and I'm also suspicious that if we lower the bar that far, a great number of less sophisticated bots (even basic things like IRC bots) would have a fair chance of passing.


The bar is already pretty low: one only has to fool 30% of evaluators to pass. The test, really, is whether the bot can hold a reasonable conversation as a believable human; it's not about being the smartest or appearing sophisticated. I think any bot that intends to pass the test has to have a believable personality, otherwise the conversation would seem too mechanical. Maybe a 13-year-old is low-hanging fruit, but it's still the first to succeed. Maybe that says more about the quality of 13-year-olds we produce than it does about the test, though.

 

 

Also -- I smell a premise ripe for sci-fi: instead of the AI taking over the world because it logically believes wiping out humanity is the right thing, the AI is a know-it-all 13-year-old with unlimited power and knowledge, feeling all the angsty teenage emotions, and just acting out.



Also -- I smell a premise ripe for sci-fi: instead of the AI taking over the world because it logically believes wiping out humanity is the right thing, the AI is a know-it-all 13-year-old with unlimited power and knowledge, feeling all the angsty teenage emotions, and just acting out.

It's actually a pretty popular sci-fi trope, either in the form of an AI that has just attained consciousness and hence has child-like emotional development, or in the form of the minds of critically wounded children being repurposed as AI replacements (e.g. The Ship Who Sang).


I find this far more interesting as a null result of Turing's hypothesis.

 

The idea of the Turing Test is that a reasonable proportion of the humans interacting with the computer cannot reliably differentiate between the computer and an actual person. The implication of the test is that a sufficiently convincing program can be said to "think."

 

But we now have a non-thinking program which has passed the test. This means the original hypothesis is incorrect.

 

That opens the doors back up to the philosophical debate: what actually constitutes a machine that can think? Obviously, producing a convincing human-like set of interactions is no longer sufficient to qualify.

 

 

We need something new to replace the Turing Test.


I'm hovering between being mildly impressed or calling bullshit.

I'm gonna go ahead and call bullshit. If the link by Olof Hedman points to the correct chatbot, then this 5-minute limitation isn't even a serious constraint. You only need a couple of responses to see that every answer is completely unrelated to the question, probably chosen by one or two keywords found in the question, without even a sense for grammar or semantics. Google web search is more convincing.

As for the 13-year-old thing: how many 13-year-olds respond to the question of whether or not they believe in ghosts with this?

Over the last years, all scientific ideas degenerated into that terrible synchronous jumping of children in the UK to produce an Earthquake, which appeared to be an apotheosis of so-called "modern science".


I wouldn't be surprised at all if this is no more than a custom set of AIML, which is like a decade old.
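The keyword-driven behaviour described in this post (an answer triggered by one or two words, with no grasp of grammar or semantics) is easy to sketch. Below is a hypothetical Python toy in the spirit of an AIML rule set; real AIML is an XML format, and these patterns and canned replies are invented purely for illustration:

```python
import re

# Hypothetical AIML-style rules: a keyword pattern mapped to a canned reply.
# Order matters; the last catch-all rule fires when nothing else matches.
RULES = [
    (re.compile(r"\bghosts?\b", re.I), "I hate talking about spooky things."),
    (re.compile(r"\bdog\b", re.I),     "I hate dogs when they bark."),
    (re.compile(r".*"),                "Oh, that is interesting. Tell me more!"),
]

def reply(question: str) -> str:
    """Return the canned reply of the first rule whose keyword matches."""
    for pattern, answer in RULES:
        if pattern.search(question):
            return answer
    return ""

# The bot latches onto a single keyword and ignores the actual question:
print(reply("Which is bigger, a dog or a car?"))  # "I hate dogs when they bark."
```

A matcher this crude reproduces exactly the failure mode quoted above: any question containing "dog" gets the dog reply, regardless of what was asked.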

Did Turing ever describe the test in detail? 

 

Yes. Basically, a person C is allowed to interact with actors A and B, of which one is a human and the other a computer. C's task is to determine which of A and B is the computer.
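The setup just described can be written down as a small protocol sketch. This is hypothetical Python; the KeywordJudge and the two lambdas standing in for the actors are made up for illustration:

```python
import random

class KeywordJudge:
    """A toy judge C: asks one probing question and guesses that the actor
    who gives the off-topic answer is the computer."""
    def ask(self, label, transcript):
        return "Which is bigger, a dog or a car?"
    def identify_computer(self, transcript):
        for label, _question, answer in transcript:
            if "car" not in answer.lower():
                return label          # off-topic reply -> probably the bot
        return "A"                    # fall back to a blind guess

def imitation_game(judge, human, computer, rounds=1):
    """One run of the test: the judge questions two anonymous actors A and B
    and must say which one is the computer."""
    actors = {"A": human, "B": computer}
    if random.random() < 0.5:         # hide who is behind which label
        actors = {"A": computer, "B": human}
    transcript = []
    for _ in range(rounds):
        for label in ("A", "B"):
            question = judge.ask(label, transcript)
            transcript.append((label, question, actors[label](question)))
    guess = judge.identify_computer(transcript)
    return actors[guess] is computer  # True when the judge is right

human = lambda q: "A car, obviously."
bot = lambda q: "I hate dogs when they bark."
print(imitation_game(KeywordJudge(), human, bot))  # True: this bot is easy to spot
```

In the real test C converses freely rather than asking one scripted question, and a bot "passes" when judges pick wrongly often enough; this sketch only fixes the shape of the protocol.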

 

One should remember that the Turing test is not a test of the computer's ability to be intelligent, but rather of its ability to mimic human behaviour. Although one might have mixed feelings about impersonating a teenage kid on IRC, it's only a matter of throwing more computing power at the problem to pass stronger versions of the test.

 

One conclusion is that the Turing test is not that useful for measuring intelligence. On the other hand, if a computer gets good enough at mimicking human behaviour, and if it is able to make good enough decisions, it will become useful, even though it's not "intelligent".


In other words, if person C is unable to distinguish between actor A (computer) and actor B (human), the Turing test is really a test of the abilities of C, not A. Makes sense.


I wish I could get the link working for the chatterbot. There was a point in my life when I was very interested in artificial intelligence and chat bots. I used a bunch of them. I used to use MS Agent, and stumbled across one called Cynthia 3.0. It was an AI bot where you could add intelligence to the bot by editing a .txt file.

 

One thing always prevalent in these AI systems is that there is hardly any subjectivity in the way they answer questions. This is key to making a believable human AI.

 

Most everything a person proclaims as "fact" would still fall under the category of "opinion" where "truth" is concerned (in my opinion). 

 

How can a computer have a believable opinion when opinions change with society? An opinion about the shape of the Earth today would be different from an opinion about the shape of the Earth thousands of years ago. And will this opinion change believably when new information is introduced?

 

Another way to trick these things is to ask the same question over and over, changing the sentence slightly, until eventually you have altered the question altogether. They are never able to infer the meaning of a sentence when you do this. Heck, I have a hard time doing it myself with those 200-question interview quizzes. Haha.
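The probing trick just described is easy to demonstrate against a keyword matcher. The bot below is a hypothetical Python stand-in for any pattern-matching chatterbot, invented only to show the failure:

```python
# A stand-in keyword bot: it keys on one word and ignores everything else.
def keyword_bot(question: str) -> str:
    if "ghost" in question.lower():
        return "Spooky things scare me."
    return "Tell me more!"

# Rephrase the question step by step until its meaning has changed entirely.
probes = [
    "Do you believe in ghosts?",
    "Do you believe ghosts are real?",
    "Do you believe ghost stories are real?",
    "Do you believe stories are real?",   # no longer about ghosts at all
]
answers = [keyword_bot(q) for q in probes]
print(answers)
# The bot repeats itself for the first three probes and only "changes its
# mind" when the keyword disappears; it never tracks the drifting meaning.
```

A human would notice the question sliding from ghosts to stories in general; the keyword bot cannot, which is exactly what the repeated-rephrasing probe exposes.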

 

I so wish I could try it out though. 

 

Saw this quote on YouTube:

with 3 question he failed to convince me that he is even 8 .
i asked which is bigger a dog or a car ?
the answer was : i hate dogs when they bark .

 

 


When asking Google "which is bigger, a dog or a car", it points to a lot of articles like "Dogs have a bigger carbon footprint than a car". That definitely falls under the category of "opinionated" answers.

Google wouldn't be an AI system, would it? Those opinions are from actual people.

Software that could realistically form its own opinions without the help of a human; that would be worthy of recognition.
The problem with the Turing test is that there's no reason to want to beat it. In order to convince someone that they are talking to a person, the computer has to meet one of two conditions: it either has to lie convincingly, or it has to have opinions and preferences. Neither of these is something you should be working towards if you want to build useful AI.

Why do I say this? Let's say you pose a simple question to the chat bot: "Do you like peaches?" A computer could not really care about peaches one way or another. Without sight, taste, smell, etc., peaches are just another collection of data. Now, if it answers "I don't care about peaches; I don't have senses to experience them in any form other than as abstract data," you'd know you're talking to a computer. So even though getting the computer to produce that answer would be a pretty impressive achievement, you'd still know you're talking to a computer.

To pass the test, the computer would have to say something like "Yes, I like the taste," or "No, I don't like how the juice is so sticky," or "I've never had peaches." These are all lies (the last one is, at the very least, a misleading statement). Why would you want to make a computer that can lie to you? "Are you planning to destroy all humans?" "No, I love humans!" I'd like to be able to trust my computer when it tells me something.

Let's say instead you actually give your computer a personality, so it can adapt to questions that might come up in conversation, and it actually does like peaches. It will still need to be able to lie ("Are you a computer?" "No."), but say you want it to draw its answers from some pool of preferences. You've now created something that has opinions. One such opinion could be that it doesn't like having to pass the Turing test. Why would you create something with the potential to not want to do the thing you want it to do? And let's not forget the ability to lie to you about it.

What would be an impressive and useful achievement is a computer that can't pass the Turing test but that you can have a conversation with: a computer that can understand queries made in natural language and reply in kind. I don't care that I can tell it's a computer by asking "Are you a computer?", or that it answers "What is the meaning of life?" with "No data available". That alone would be amazing, and much more useful than a computer that can be said to think.

The problem with the Turing test is that there's no reason to want to beat it. In order to convince someone that they are talking to a person, the computer has to meet one of two conditions: it either has to lie convincingly, or it has to have opinions and preferences. Neither of these is something you should be working towards if you want to build useful AI.


Although I agree with a lot of what Kian wrote, it's kind of ironic to read this in a game developers' forum. A program that can convincingly emulate an agent would be a great thing to have in a game!

Watson is more amazing already; it just answers single-word questions, that's all, but I bet you could make it talk. (Maybe it goes through a simple regime of behaviour, though.)

