4 Questions about AI You Wanted To Ask

A 4-part series by Cortnie Abercrombie, Founder, AI Truth.org

I thought I’d start off the new year with a 4-part series that answers questions about Artificial Intelligence that I’ve been asked by friends, clients and relatives. Maybe you have had these questions yourself. 

Part 1: What is AI anyway?

Part 2: What can AI do that no other systems can do?

Part 3: Hasn’t AI been around forever? Why the comeback?

Part 4: Why do businesses want AI? What are they using it for?

This series aims to make you knowledgeable about the catalyst behind AI’s return to prominence, the reasons so many businesses are now starting to use it, and how it is beginning to show up everywhere in society, whether its presence is recognized or not. Above is the list of topics in the series. I plan to produce one a week and publish them on AI Truth.org as well as LinkedIn and Medium. Part 1 is attached below; Part 2 will be out next week. As always, part of my goal is to stimulate discussion and interaction, so please feel encouraged to make your thoughts and questions known.

Part 1: What is artificial intelligence anyway?

If you read the AI research papers and books available online, you’ll see many highly sophisticated answers from people with multiple PhDs. Here’s my condensed version, without the 20 to 50-page paper or the 1109-page textbook. AI systems are human-like systems; at least, they aim to demonstrate intelligence that is as human-like as possible, though many of today’s AI research scientists might argue that AI is most valuable when it can surpass human intelligence. There is much debate about what constitutes “intelligence”. To be fair, defining intelligence when it comes to computer systems is not as easy as it would at first seem.

Merriam-Webster defines intelligence as: (1) the ability to learn or understand or to deal with new or trying situations; also the skilled use of reason, and (2) the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests).

Considering that definition, we have AI systems that meet some of the criteria but not all. The hallmark of today’s AI is that it can “learn” at unprecedented rates. The example that comes to mind is AlphaGo, which learned to play the strategy game Go in a matter of weeks from a database of over 30 million moves and strategies drawn from expert games. It’s worth noting that it learned from human experts’ moves. Another interesting aspect of the Webster definition of intelligence is the ability “to deal with new or trying situations”. Most AI systems and machine learning algorithms work off of trends and patterns, which by definition are based on information and situations that have already happened. In other words, not new, and certainly not complex enough to be considered “trying”. When AI programs are put into situations they have no training data for, they do not know what to do without human intervention.
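To make that last point concrete, here is a minimal, purely illustrative sketch (my own toy example with made-up data, not drawn from AlphaGo or any production system) of a pattern-based learner. Trained only on situations it has already seen, it has no way to flag a genuinely new situation; it can only force the new case onto old patterns.

```python
# Toy illustration (hypothetical data and labels, for explanation only):
# a simple pattern-learner trained on past examples cannot recognize
# that a new situation falls outside everything it has seen before.
from sklearn.neighbors import KNeighborsClassifier

# Made-up "past situations": [hours studied, games won] -> skill label
X_train = [[1, 0], [2, 1], [10, 8], [12, 9]]
y_train = ["novice", "novice", "expert", "expert"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

# A genuinely new, "trying" situation far outside the training data:
print(model.predict([[500, -3]]))  # still forced to answer "novice" or "expert"
# The model cannot say "I don't know" -- it can only map the new case onto
# patterns it has already seen, which is the limitation described above.
```

In other words, without a human stepping in (or an explicit mechanism for detecting inputs unlike anything in its training set), a system like this simply produces its best pattern match, however inappropriate.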

“Can machines do what we can do?” is a question Alan Turing hoped to answer when he devised his famous test in 1950. Some AI scientists argue that what he was really trying to discern was “Can computers think?”. The test was based on a party game called the imitation game. A man and a woman would go into two separate rooms and try to answer questions from an “interrogator” in a way that would fool the interrogator into believing each was the other person. After each answer was delivered via handwritten note, the interrogator had to determine whether it was the man’s or the woman’s answer. In the same way, Turing’s test would have a human questioner, a human answerer, and a computer answerer, with a person who could type the questions into the computer and relay the computer’s answers back to the human questioner. The human questioner would then determine whether the answers were given by a computer or a person. Many scientists have challenged Turing’s test and still do to this day. One such challenger is philosopher and Berkeley professor John Searle, whose Chinese Room thought experiment posits that just because a computer can emulate a person’s answers does not mean that it can understand the answers it gives or, therefore, “think”.

While at IBM, I worked on standardizing implementation methods for cognitive computing services engagements, and the following is what I learned. If you break down how humans operate, then cognitive computers should be able to: 1) understand like a human, 2) learn and reason like a human, 3) decide like a human, and 4) interact and act on decisions like a human.

Maybe you are thinking this sounds simple only on the surface, and that humans have lots of other things going on behind the scenes during these “processes”. I couldn’t agree more. When we dissect how humans think, we have to consider that some humans think simply and linearly, with as few variables as possible (almost like a computer), while others use massive parallel processing on many factors at once to arrive at decisions. Some people rely more heavily on the facts presented at the exact time they make a decision. Others take cues not just from the immediate situation but also draw on a wealth of past experiences, social norms, consequences, habits, rewards, and fears. Research indicates that over 45% of decisions are guided by habit. Other research points out that 95% of decisions are made subconsciously. Using myself as an example, most of my “logic” goes on without my conscious knowledge of it. I can often articulate answers about choices, but I’m not always good at explaining the reasons for those answers, because much of the time I’m operating off of nuanced information, inconsistent emotion, and “gut feel” that I would be hard-pressed to explain to myself, much less to a computer. It turns out most humans do this. There is even a particular part of our brain, the basal ganglia, specifically designed to help us avoid being overwhelmed by the many decisions we make every day.

That acknowledged, in another upcoming series we will dive deeper into this topic to see if we can make enough sense of what it means to think and act as a human that we can understand how these concepts work relative to AI systems. As a few spoilers, based on the research in “The Power of Habit” by Charles Duhigg, I can tell you that a lot of our thinking is automatic, based on decisions we have encountered before. That is why we can do so many things at the same time. But when we are confronted with new decisions, our strongest reactions to new stimuli are often made out of fear, not reward. So far, we only have ways to simulate reward for AI; I do not know that we have the ability to create fear in an AI.

The question is: how far should we go to make AI human-like in order to make it truly intelligent? Does it have to be able to experience fear to have human-like intelligence? Would a sense of empathy and social norms, developed from fear (loss of life, loss of job, loss of reputation, not fitting in with other humans, physical pain), help AI to make the same decisions we would make? Is it important for AI to be human-like in order to exist with us in ways that are beneficial to us?

I'd love to hear your thoughts. Please post a comment below. If you have any questions that you do not feel comfortable posting in this forum, please feel free to contact me directly at Cortnie@AITruth.org. Please also subscribe at www.AITruth.org.

