Wow, lots of great insights here!
quote:
Timkin: Will embodiment/situatedness deal with the SGP? I honestly don't know that one.
That's the beauty of it; grounding is not a problem with embodiment. Everything your AI deals with comes through sensors and effectors, so it is grounded by design. Historically, the symbol grounding problem was only a problem because the algorithms of the 1970s weren't dealing with anything real, just hypothetical world models!
Besides, consider grounding as a skill with different levels. A search-based AI from the 70s would have poor grounding and need hard-coded interfaces to interact with anything; robots have much better grounding, as they can deal with arbitrary obstacles and simple objects; human intelligence can keep even higher-level abstractions grounded, to some extent... though RPGeezus seems to be having trouble with that!
quote:
RPGeezus:
No one can say, without facing arguments from different directions, that a cat, a dog, an ant, or even a simple bacterium, is capable of consciousness or intelligence. If we cannot agree on these terms within the realm of the living, then to me, it seems pointless to argue about higher-order intelligence amongst the non-living.
I have issues with your notion of how intelligence or consciousness should be defined. Intelligence is not a boolean thing (I say this every time; you were right, Timkin). Both cats and dogs are intelligent (and conscious) to certain degrees, but you have to test them to find out. Intelligence and consciousness are both concepts, so you need empirical measurements to establish to what degree they apply. (I'll stop there before I type a few pages.)
Anyway, I'm not sure how this prevents the discussion. Strong AI is defined as recreating human-level intelligence using a method that mimics the human brain (inspired by cognitive science). It doesn't matter how you define it; you can test it.
To me, Searle's Chinese Room argument does not show that it's impossible. The fact that the human inside does not understand what's going on doesn't preclude the whole system from being intelligent (i.e. capable of generating Chinese, as defined for this problem). A single neuron does not understand the entire problem either!
Alex
AiGameDev.com