Humans vs Machines... again!

In a recent TED talk, Gary Marcus pressed the point that artificial intelligence is still very far from human intelligence: at best at the level of an eighth-grade student, and in other respects below a three-year-old toddler. He points out, correctly, that what machines do is pattern recognition, finding statistically relevant patterns in zillions of data points. This “brute force” approach can solve complex perception-level problems, like image recognition and language translation. However, it pales in comparison with other human capabilities, like abstraction, hierarchical representation, cognition, or generalization from a single observed example (inductive thinking). Machines seem to fail at making “sense” of our world and surviving in it; to stress his case, he shows some poorly performing robots from the famous DARPA competition.

I do agree with his criticisms. However, I disagree with his conclusion that we are far away from creating true General Artificial Intelligence (GAI). My main objection is to the fallacy that GAI has to mimic humans. Why should intelligent machines replicate our cognitive capabilities? Just because we are smarter? Why do they need to be like humans to solve problems, find explanations, predict, or discover survival mechanisms? Lots of animals survive without having human skills.

Another question he raises is a pertinent one: machines can search millions of web pages in a blink, but they never ask “why?” or “is it relevant?”. Again, I think we are being unfair to algorithms: they don’t ask “why” not because they can’t, but because we don’t allow them to. We expect them to give answers, not more questions. Furthermore, we are again anthropomorphizing intelligence. Just because we learn by socializing and asking “why” doesn’t mean machines will learn the same way. Maybe they ask internal “whys” millions of times. They never “tell” us because we don’t allow them to do so. The goal we set for them is very clear: optimize loss functions that give the best answers, not challenge our questions. Can we build machines to do this kind of self-consistent “thinking”? Actually, I think we are very close.
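
To make concrete what “optimize loss functions” means here, below is a minimal sketch in plain Python (a hypothetical toy model, not any particular system Marcus discusses): the machine’s entire “goal” is to drive a single number down, and nothing in that objective rewards asking questions.

# A toy "machine": fit y = w * x to data by gradient descent on a loss.
# The data, learning rate, and model are illustrative assumptions.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # secretly y = 2x

w = 0.0      # the single model parameter
lr = 0.05    # learning rate

for step in range(200):
    # Gradient of the mean squared error loss with respect to w.
    grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill on the loss surface

print(f"learned w = {w:.3f}")  # approaches 2.0: a good answer, no questions asked

Everything the optimizer “wants” is encoded in that one scalar loss; a “why?” simply has no gradient.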

He also points out how poorly algorithms perform in “common sense” situations, like confusing a parking sign with a refrigerator. They may lack this common sense not because they are stupid, but because they have not been sufficiently exposed to our “real world”. They only see what we provide them (in this case, images) and try to make up an answer. They never walked, they never traveled around cities, they never touched, they never had a family. How can we ask them to understand our world if they weren’t exposed to it? Isn’t it unfair? By the same token, can we understand them? Can we do calculations with 10-digit numbers in milliseconds? “Is it relevant?”, well, they may say the same about our world.
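
The asymmetry is easy to demonstrate; here is a small, hypothetical Python timing sketch (the two numbers are arbitrary examples) showing a machine doing 10-digit arithmetic in well under a millisecond:

import time

a = 9_876_543_210  # two arbitrary 10-digit numbers
b = 1_234_567_890

start = time.perf_counter()
product = a * b
elapsed_ms = (time.perf_counter() - start) * 1000.0

print(f"{a} x {b} = {product} in {elapsed_ms:.4f} ms")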

The point of this note is not to state how wrong Gary Marcus is, but rather that he is unfair in comparing humans to machines. I don’t think we are very far from GAI; it may share some aspects of human intelligence, but it will probably be very distinct in many others. In the end, all the things that make us so special, the emotions, ideas, language, mathematics, art … they are nothing more than electrical signals bouncing back and forth in 1400 cubic centimeters of matter.

Armando A. (Entrepreneur | Growth), 8y:
Complete misdial on my end here.

Armando Vieira (Physicist PhD, Data Scientist, Professor and Entrepreneur), 8y:
On what?

Armando A. (Entrepreneur | Growth), 8y:
Could you elaborate?
