Getting up to speed on Artificial Intelligence
Not a day goes by anymore without a mention of Artificial Intelligence in the mainstream press, and for good reason. Take, for example, the ongoing coronavirus pandemic. A company specializing in AI was among the first to spot the outbreak, and AI and its subfield, Machine Learning, are now being offered up as part of the solution to the pandemic. If you’re foggy on what AI is, let’s get you up to speed. The science has been around since the 1940s, and it now looks mature enough to change our world in significant, even dramatic, ways. More than ever, it needs to be understood so everyone can have a hand in shaping the future.
The concept of Artificial Intelligence has been around since the development of a mathematical model of a biological neuron some 80 years ago. We’re not talking about just computing; we’re talking about learning, about cognition, and getting a computer to do that has proven extremely difficult. But now it’s finally taking off. There are a number of reasons for this, not the least of which is how much digital data we now have, thanks to computers being in widespread consumer use for some 30 years.
Data is everything. It used to all be in analog form, and it took the human mind to process it: read a book by turning the pages, absorb what you read and somehow put it to use. We do that through the roughly 100 billion neurons in our brain, though just how is something we are still trying to figure out. Whatever the case, we know that these single-celled neurons work together in amazing ways, making us the most intelligent creatures to walk the planet. The big challenge for computer scientists has been trying to replicate that intelligence in machines.
Don’t be misled into thinking that scientists one day just dreamed up the concept of patterning Artificial Intelligence after the neurons in our brain. Quite the opposite: we are talking about decades of work by some of the brightest minds in science and engineering. The development of artificial neurons, an example of which is pictured above, has its origins in the 1940s.
The long road to what we now call Artificial Intelligence
Developing these artificial neurons into networks that could actually do computations took off by the 1980s: “artificial neural networks,” as we now say. This was a huge development. It goes beyond putting hardware and programmed software together to solve a problem and branches into actual learning, Machine Learning. Training a neural network to recognize faces comes to mind, and there are many other applications; more on that in a moment. It should also be said that these mathematical neural networks are gross oversimplifications of the brain’s neural networks. Whatever the case, we first have to explain why AI is not just a conventional computer program.
Some may get stuck on the old concept of writing a software program to solve a problem, like how to email someone. That type of program doesn’t require learning, other than you figuring out how to use it. Getting a program to learn on its own is a whole new ballgame. One might think, for example, that a programmer just sits down and writes something that recognizes what someone is saying. Speech recognition! No, no. What really happens now is that a multitude of cleverly written modules already sit on the shelf, ready to be assembled into systems that learn.
Python is well known when it comes to programming a new AI project; it’s relatively simple and can be easily learned. R is also at the forefront, popular because its open-source packages give users a lot of latitude in building an intelligent machine. Lisp, Prolog and Java are among the other languages used, along with frameworks such as TensorFlow and Torch.
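To make the “off-the-shelf” point concrete, here is a minimal Python sketch, assuming the open-source scikit-learn library and a made-up toy dataset, of how little code it can take to train a small neural network. The data, layer size and settings are illustrative assumptions, not anything from a real project.

```python
# A minimal sketch of an "off-the-shelf" approach: training a tiny
# neural network with scikit-learn. The toy data and layer size are
# invented for illustration only.
from sklearn.neural_network import MLPClassifier

# Four tiny training examples: two numeric inputs, one label each
# (here, a simple OR pattern).
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 1]

# One hidden layer of 16 neurons; the library handles all the math.
model = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                      max_iter=2000, random_state=1)
model.fit(X, y)

print(model.predict([[1, 0]]))  # predicts a label for a new input
```

The point is not the particular library: it is that the learning machinery already exists as reusable modules, and the programmer mostly supplies data and a few choices.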
Now for the catch. Knowing a programming language and how to code is one thing, but the real barrier you come up against in the field of Artificial Intelligence is the math. Let’s face it: if you fully understood the artificial neuron pictured above, you probably wouldn’t be reading this. Note the picture here of a problem that, at first glance, looks unsolvable. You have to know the math to work in AI, and that stops many people from even taking up the subject, yet AI has become so pervasive that we all need a better understanding of it going forward. For the record, the specific math used in AI is linear algebra, probability, multivariate calculus and optimization. To get a sense of the complexity involved in the math for neural networks, in this case “backpropagation,” try to get your mind around how one expert explained the problem you see in the picture at the top of this article:
“The problem is complex and complicated to untangle, and as the number of variables increases (as in our case of 13,002 weights and biases in our neural network of 4 layers with 784 neurons, 16 neurons, 16 neurons and 10 neurons), the complexity of the problem increases astronomically, including the computational effort and the time it takes to solve it. A brute-force method would take years to solve this problem. Hence, we use gradient descent to solve it by finding the minimum of the cost function and the appropriate weights and biases.”
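As a rough illustration of what “finding the minimum of the cost function” means, here is a toy Python sketch of gradient descent on a one-variable cost function. The cost function, starting point and learning rate are invented for illustration; a real network does the same kind of downhill stepping across thousands of weights and biases at once.

```python
# A toy illustration of gradient descent: repeatedly step "downhill"
# on a simple cost function until we reach its minimum.
# The cost function and learning rate are made up for illustration.

def cost(w):
    # A bowl-shaped cost: lowest at w = 3.
    return (w - 3) ** 2

def gradient(w):
    # Derivative of the cost with respect to w.
    return 2 * (w - 3)

w = 0.0              # start from an arbitrary guess
learning_rate = 0.1  # how big a step to take each iteration

for step in range(50):
    w -= learning_rate * gradient(w)  # move against the gradient

print(round(w, 3), round(cost(w), 6))  # w ends up very close to 3, cost near 0
```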
The human brain, with more neurons in its prefrontal cortex than that of any other species, is itself a neural network. Recent advances in AI have focused on artificial neural networks: computing systems of highly interconnected elements, or nodes, organized in layers, that produce responses akin to what you might expect from a human. Deep learning involves neural networks with many middle layers. For the purposes of this article, that description is hugely oversimplified, in part because “consciousness” has not been achieved by machines and so cannot be easily explained. In fact, the big debate in the field right now is whether AI will ever match or surpass human intelligence, or consciousness.
There are many kinds of artificial neural networks, and more are being constructed all the time. The image below cuts to the issue of having more than just an input and an output. That alone would simply be linear, something easily accomplished by any system that maps input to output. For example, type on the keyboard in a texting program and a word appears: linear input to output, with no learning involved. With neural networks, by contrast, we have layers of neurons, and what makes them clever is having at least one middle layer so they can solve complex problems that aren’t simply linear. The key difference with artificial neural networks is that they learn, going from an input to an output that is generally unknown in advance. In the following image you see three inputs and one result, or output. Imagine a computer sensing three inputs, like an alarm at 7 a.m., the sun coming up and a dog barking. Your brain would go through a series of responses to determine what to do next. Now imagine training the artificial neural network to respond (a toy version is sketched in the code below) and you get an idea of how far we’ve come and what more needs to be accomplished.
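For the curious, here is a small Python sketch of that three-input idea, written with NumPy. The training examples (alarm, sun, barking dog) and the “get out of bed” labels are invented purely for illustration, and the layer sizes and learning rate are arbitrary choices; it is a toy, not a real application.

```python
# A toy three-input neural network with one middle (hidden) layer.
# The inputs (alarm at 7 a.m., sun coming up, dog barking) and the
# "get out of bed" labels are invented purely for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each row: [alarm went off, sun is up, dog is barking]
X = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 1],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)
# Target: 1 means "get out of bed" (made-up labels for the sketch).
y = np.array([[0], [1], [0], [1], [1]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # weights: 3 inputs -> 4 hidden neurons
b1 = np.zeros((1, 4))          # hidden-layer biases
W2 = rng.normal(size=(4, 1))   # weights: 4 hidden -> 1 output neuron
b2 = np.zeros((1, 1))          # output bias

learning_rate = 1.0
for _ in range(5000):
    # Forward pass: inputs -> hidden layer -> output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass (backpropagation): push the error back through the layers.
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # Gradient-descent updates to weights and biases.
    W2 -= learning_rate * hidden.T @ output_delta
    b2 -= learning_rate * output_delta.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ hidden_delta
    b1 -= learning_rate * hidden_delta.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # predictions should move toward the 0/1 targets
```

Even this toy combines the pieces discussed above: layers of neurons, a middle layer, backpropagation and gradient descent, just at a scale small enough to read in one sitting.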
The reason we are now hearing so much about AI is that engineers and scientists have really begun to get their arms around how to make it work. Evidence is everywhere as Artificial Intelligence has hit the mainstream with Apple’s personal assistant Siri, Amazon’s Alexa, Tesla’s self-driving capabilities, Netflix, Pandora, Nest and so on. Take a deep dive into any of these services and you delve into what AI can really do. But these businesses give us only a limited view of the big picture.
Choose any other segment of society, including government, transportation, law and order, or healthcare, and you begin to see just how promising AI can be. Future prospects for AI in healthcare, for example, include drug discovery and helping people make healthier choices and wiser decisions, and that’s just for a start. One reason the healthcare side of AI is taking longer to develop is privacy, but an increasing number of companies are jumping in. There is, for example, a concept worth exploring called “precision medicine,” which employs numerous technologies to guide individually tailored diagnosis and treatment. The technology learns about you, with the assistance of your healthcare provider, and then tells the doctor how best to treat you.
The government side of AI is what freaks people out the most. The surveillance state is talked about a lot, including the use of cameras in public places to track your movements and alert authorities when you’ve done something wrong. Now there are even cell-phone data maps already at work finding out who is ignoring quarantines, in hopes of stopping the spread of the coronavirus. The data is already out there, since your phone tracks where you are, and AI can be employed to infer behavior from it.
In conclusion, Artificial Intelligence is finally coming of age; it has matured enough to become mainstream. It’s been nearly a century in the making and will likely touch every aspect of our existence in the not-too-distant future. That means new and wonderful things, but it also brings great peril. The power of AI in the wrong hands is something we need to contemplate, even as we imagine the dawn of a new era, one that will rival the introduction of electricity and the Industrial Revolution. We can now contemplate not having to think so hard, because machines can take on much of the difficult work; put another way, we are freeing our minds to contemplate an even greater existence for humankind. What we all need to do, though, is become aware of what’s happening with AI. Think hard now, because Artificial Intelligence offers us something beyond the scope of our imaginations.