It seems to me that humans and computers currently are good at what the other one's bad at:
Vision/Hearing (humans MUCH better than computers in all aspects):
- Humans have a massively parallel signal processing network which recognizes an astounding variety of features in visual/audio input VERY quickly.
- Computers have a tough time with this because we don't yet have sensors as good as human eyes/ears, parallelism massive enough to process the signal quickly, or the wide variety of feature recognizers that humans have.
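To make the "feature recognizer" idea concrete, here's a toy sketch: one 3x3 edge-detection filter swept over a tiny grayscale image, in plain Python. The image, kernel, and function names are invented for illustration; the point is that a CPU applies one detector pixel-by-pixel, while the visual cortex effectively runs millions of such detectors at once.

```python
# A single 3x3 edge filter applied to a tiny "image" (a list of rows).
# Toy example: values and kernel are made up for illustration.

EDGE_KERNEL = [
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
]

def convolve(image, kernel):
    """Apply a 3x3 kernel to every interior pixel of a 2D image."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(
                kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
                for ky in range(3) for kx in range(3)
            )
    return out

# A 5x5 image with a bright square in the middle:
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 0, 0, 0, 0],
]
edges = convolve(image, EDGE_KERNEL)  # large values at the square's border,
                                      # zero in its flat interior
```

A real vision system would run hundreds of different kernels at many scales and orientations, which is exactly where the missing massive parallelism hurts.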
Control (robots are better in controlled situations, humans are better in novel situations):
- Robots are better for speed and precision.
- Humans are better for adaptation (for instance, if a servo/muscle stops functioning, robots often cannot perform their task anymore, whereas a human can probably figure out an alternate way to perform the task).
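The adaptation gap can be sketched in a few lines. Below, a hypothetical robot splits a force command greedily across redundant actuators, so it survives the one failure mode we anticipated (a dead servo) but simply falls short in any situation its rules don't cover. The `Actuator` class and numbers are invented for illustration, not a real robotics API.

```python
# Toy redundancy scheme: fill the force command greedily across whatever
# actuators still work. Handles the anticipated failure (one dead servo)
# but has no way to improvise beyond its hard-coded capacity limits.

class Actuator:
    def __init__(self, max_force):
        self.max_force = max_force
        self.working = True

    def apply(self, force):
        """Return how much force this actuator actually produced."""
        if not self.working:
            return 0.0
        return min(force, self.max_force)

def command_force(total, actuators):
    """Greedily spread `total` across actuators; returns force achieved."""
    applied = 0.0
    remaining = total
    for act in actuators:
        out = act.apply(remaining)
        applied += out
        remaining -= out
    return applied

a, b = Actuator(10.0), Actuator(10.0)
command_force(12.0, [a, b])   # both working: full 12.0 achieved
a.working = False
command_force(8.0, [a, b])    # b silently picks up the slack
command_force(12.0, [a, b])   # beyond b's capacity: only 10.0, no plan B
```

A human in the same spot would start looking for levers, counterweights, or a different grip; the program above can only report the shortfall.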
Natural languages (neither human nor computer is very good at this):
- NLP systems need thousands of individual components all working together to handle typos, resolve ambiguities, reason about likely meanings, and learn new meanings on the fly. Humans have lots of trouble getting their intended meaning across to other people. Computers have trouble dealing with imperfect grammar, ambiguity, error correction, and learning new meanings.
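One tiny slice of the error-correction problem can be shown directly: picking the dictionary word closest to a typo by Levenshtein edit distance. This is a standard textbook technique, not a full spell checker; the vocabulary is made up, and a real system would layer context models, parsers, and much more on top of it.

```python
# Classic dynamic-programming Levenshtein distance, then "correct" a typo
# by choosing the nearest word in a toy vocabulary.

def edit_distance(a, b):
    """Minimum number of insertions/deletions/substitutions to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def correct(word, vocabulary):
    """Return the vocabulary word with the smallest edit distance."""
    return min(vocabulary, key=lambda v: edit_distance(word, v))

vocab = ["language", "ambiguity", "grammar", "meaning"]
correct("grammer", vocab)  # -> "grammar" (one substitution away)
```

Note what's missing: this component has no idea whether "grammar" even makes sense in the sentence; that judgment needs the reasoning systems discussed below.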
Logic and reasoning (computers are much better at rule-following, humans at adapting):
- Computers quickly follow the rules they're given and won't make mistakes doing so, but have problems when the rules they've been provided aren't sufficient. Humans are slow at following rules and make lots of mistakes, but can adapt to cases they haven't seen before.
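The "stalls when the rules run out" point can be illustrated with a minimal forward-chaining rule engine: it derives everything its rules entail, flawlessly and fast, and then simply stops. The rules and fact names here are toy examples, not a real inference engine.

```python
# Minimal forward chaining: repeatedly fire any rule whose premises are
# all known, until no new facts appear (a fixpoint).

def forward_chain(facts, rules):
    """rules: list of (premises, conclusion) pairs. Returns derived facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["rain"], "wet_ground"),
    (["wet_ground"], "slippery"),
]
derived = forward_chain(["rain"], rules)
# "slippery" is derived correctly; but ask about anything the rules don't
# cover ("should I bring an umbrella?") and the engine has nothing to say.
```

The engine never misapplies a rule (the computer strength), and never invents one (the human strength).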
- AI research has typically been approached one isolated piece at a time (vision, NLP, control, logic/reasoning, planning). Many complex AI problems can only be solved by having the different systems help each other out, but researchers generally focus on their individual problems without seeking to interoperate with other fields. For example, when handling NLP input, you won't get very far without a logic/reasoning system to help resolve ambiguities and likely meanings. In vision, recognizing characters from a language requires dealing with orientation/mirroring/perspective changes and stylistic variations, and reasoning about what a heavily corrupted glyph probably is based on the other glyphs around it.
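The corrupted-glyph example above is a nice place to show two systems cooperating: pretend a vision stage has flagged an unreadable character as '?', and a tiny language model picks the letter that best fits its neighbors using bigram counts. The "corpus" is a made-up sentence, so this is a sketch of the idea, not a usable OCR post-processor.

```python
# Vision hands us text with unreadable glyphs marked '?'; a bigram model
# (counts from a toy corpus) guesses each one from its neighbors.

from collections import Counter
import string

corpus = "the quick brown fox jumps over the lazy dog the cat sat on the mat"
bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))

def restore(text):
    """Replace each '?' with the letter maximizing bigram counts with
    its left and right neighbors (Counter returns 0 for unseen pairs)."""
    out = list(text)
    for i, ch in enumerate(out):
        if ch == "?":
            left = out[i - 1] if i > 0 else " "
            right = out[i + 1] if i + 1 < len(out) else " "
            out[i] = max(string.ascii_lowercase,
                         key=lambda c: bigrams[left + c] + bigrams[c + right])
    return "".join(out)

restore("t?e")    # -> "the": 'h' fits best between 't' and 'e'
restore("qu?ck")  # -> "quick"
```

Neither piece works alone: the vision stage can't guess the letter, and the language model can't read pixels. Gluing them together is exactly the kind of interoperation the bullet above argues is usually skipped.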
Edited by Nypyren, 25 January 2014 - 04:35 PM.