Learning Machine Learning

It’s hard to open a newspaper or technology magazine without seeing a picture of Geoffrey Hinton. The U of T professor and former Google intern (it’s true) spent a couple of decades in the wilderness of AI research promoting the idea that artificial ‘neural networks’ were our best shot at developing machines that could think like a human. 

From Frank Rosenblatt’s 1958 Perceptron to a flurry of research in the late 90’s, ‘thinking machines’ remained a tantalizing idea with little to show in the way of progress. Computers that could play checkers and chess had been around since the 50’s, but advances in this area were little more than benchmarks of computing power. Once programmed with the rules of the game, a sufficiently powerful computer could calculate the probable outcomes of enough scenarios to overpower even the greatest chess players. One of these benchmarks came in 1997, when IBM’s ‘Deep Blue’ defeated the legendary Garry Kasparov in a series of well-publicized matches.

But these machines were not ‘learning’ anything. They were simply running calculations according to explicit instructions provided by their human programmers. This was the distinction that convinced Hinton that neural networks were the answer. Our brains don’t operate from a specific set of instructions that someone else provided. We (somehow) ‘figure things out’. Hinton’s idea, which is now revealing itself as the greatest advancement in the history of AI, was to create software that mimicked the structure and processes of the brain.

It’s estimated that the human brain contains roughly 100 billion neurons, interconnected cells that send electrochemical nerve impulses to each other. Individually these neurons don’t do much. But together they form pathways that route impulses from sensory neurons in the inner ear to the motor neurons in our legs, enabling us to stand, walk and run. Studies of the brain have taught us that ‘learning’ is a process of trying multiple neural pathways and reinforcing the ones that produce positive outcomes.

The video below, courtesy of Smarter Every Day host Destin Sandlin, is a vivid example of this process in action. Destin’s mechanic friends build him a ‘backwards bike’ that he eventually learns to ride, but only after a couple of weeks of relearning - that is, creating new neural pathways to route impulses properly according to a new set of rules.

At around the 6:00 mark Destin finds himself in Amsterdam, and after a couple of months of showing off his ability to ride the backwards bike he finds he’s ‘forgotten’ how to ride a normal bike. But after a few frustrating minutes it suddenly clicks - his brain locates the neural pathway he’s been honing since he was a child - and his ability instantly returns.

To illustrate how the concept of neural networks is applied to machine learning, let’s use the classic Price is Right game Plinko. Imagine each peg is a neuron, with a computer program able to shift each peg a few millimetres to the left or right. We’ll start with a disk that weighs exactly 100 grams, and a single slot at the bottom marked “100g”.

Then we do a few thousand Plinko runs using different peg positions, favouring the peg positions that bounce the disk to the correct slot, and negatively weighting the peg positions that don’t. Eventually, the computer will have learned how to navigate the disk to the right slot every time. Add disks of different weights and sensors on the pegs to measure the mass of the disks, and you’ll have a Plinko disk sorting machine. A machine that taught itself.
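To make the analogy concrete, here’s a minimal sketch of that trial-and-error loop in Python. Every number in it - the number of pegs, the random bounce, the size of each nudge - is an invented toy value, not something from a real Plinko board or from the article; the point is only that keeping the adjustments that reduce the error is enough for the machine to ‘teach itself’.

```python
import random

NUM_PEGS = 10
TARGET_SLOT = 0.0  # the slot directly below the drop point, marked "100g"

# Start with every peg nudged to a random (and therefore wrong) offset.
pegs = [random.uniform(-1.0, 1.0) for _ in range(NUM_PEGS)]

def drop_disk(pegs):
    """One Plinko run: each peg deflects the disk by its offset plus a random bounce."""
    position = 0.0
    for offset in pegs:
        position += offset + random.uniform(-0.5, 0.5)
    return position

def error(pegs, runs=50):
    """Average distance from the target slot over several runs."""
    return sum(abs(drop_disk(pegs) - TARGET_SLOT) for _ in range(runs)) / runs

for step in range(2000):
    # Try shifting each peg a tiny bit; keep the shift only if results improve.
    candidate = [p + random.uniform(-0.05, 0.05) for p in pegs]
    if error(candidate) < error(pegs):
        pegs = candidate

print("learned peg offsets:", [round(p, 2) for p in pegs])
print("final average error:", round(error(pegs), 3))
```

Nobody tells the program where the pegs should sit; it simply favours whatever peg positions get the disk closer to the right slot, and discards the rest.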

Now imagine instead of Plinko disks we feed our machine images, and at the bottom we have slots for cat, dog, bird, human etc. This machine can ‘see’ images, but only to the extent that it can distinguish lighter pixels from darker pixels. And instead of pegs we have what Hinton calls ‘hidden layers’ that experiment with things like identifying patterns of adjacent light and dark pixels that form an outline, or comparing outlines to identify an eye or a beak. We don’t tell the hidden layers what to do; we only tell the computer whether it came up with the correct result, and to do more of what works and less of what doesn’t.

And as best we know, this is essentially how the human brain works too.
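Here’s a minimal sketch of what a hidden layer actually computes, using made-up numbers rather than a real trained network: each hidden unit is just a weighted sum of pixel values passed through a squashing function, and the output layer combines those unit activations into a score for each slot.

```python
import numpy as np

rng = np.random.default_rng(0)

pixels = rng.random(64)                   # a tiny 8x8 greyscale image, values 0..1
W_hidden = rng.standard_normal((16, 64))  # 16 hidden units, each looks at all 64 pixels
W_output = rng.standard_normal((4, 16))   # 4 output slots: cat, dog, bird, human

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

hidden = sigmoid(W_hidden @ pixels)             # each unit fires on its preferred pixel pattern
scores = W_output @ hidden                      # combine hidden activity into slot scores
probs = np.exp(scores) / np.exp(scores).sum()   # softmax: scores -> probabilities

labels = ["cat", "dog", "bird", "human"]
print({label: round(float(p), 2) for label, p in zip(labels, probs)})
```

With random weights the guesses are meaningless; learning is the process of adjusting those weight matrices until the right slot gets the highest probability.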

In a talk at the University of Toronto in March, Hinton spoke about backpropagation, a technique where each answer is run backwards through the neural network and each decision point is analyzed for how it contributed to a right or wrong answer. The weights and biases of each layer can then be adjusted accordingly, vastly improving the speed and accuracy of learning.
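The sketch below extends the earlier one into a full training loop. It is a toy with invented data (100 random inputs labelled by a simple threshold rule), not anything from Hinton’s talk; the ‘backward pass’ lines are where the error is pushed back through the network and each weight is nudged in proportion to its contribution to the mistake.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((100, 4))                 # 100 toy inputs with 4 features each
y = (X.sum(axis=1) > 2.0).astype(float)  # label: 1 if the features sum past a threshold

W1 = rng.standard_normal((4, 8)) * 0.5   # input -> hidden weights
W2 = rng.standard_normal((8, 1)) * 0.5   # hidden -> output weights
lr = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(2000):
    # Forward pass: compute the network's current answers.
    hidden = sigmoid(X @ W1)
    out = sigmoid(hidden @ W2).ravel()

    # Backward pass: how much did each weight contribute to the error?
    d_out = (out - y) * out * (1 - out)                        # error signal at the output
    grad_W2 = hidden.T @ d_out[:, None]
    d_hidden = (d_out[:, None] @ W2.T) * hidden * (1 - hidden) # error pushed back one layer
    grad_W1 = X.T @ d_hidden

    # Nudge every weight against its gradient.
    W1 -= lr * grad_W1 / len(X)
    W2 -= lr * grad_W2 / len(X)

preds = sigmoid(sigmoid(X @ W1) @ W2).ravel() > 0.5
print("training accuracy after backprop:", round(float((preds == y).mean()), 2))
```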

Equally mind-blowing is the technique of weighting some layers to adjust very slowly, requiring much more input over an extended period, while other layers adjust quickly and frequently. In this way artificial networks can mimic the short-term and long-term memory functions of the brain. This makes sense - once understood, the translation of a phrase from one language to another isn’t going to change, but if one of the inputs is the weather forecast, one should expect change to be constant.
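In practice this can be as simple as giving each layer its own learning rate. The numbers below are illustrative only - a ‘slow’ layer that barely moves on any single example, and a ‘fast’ one that adapts readily.

```python
# Illustrative sketch: per-layer learning rates, so "slow" weights change
# only after lots of evidence while "fast" weights track recent input.

learning_rates = {
    "slow_layer": 0.001,  # stable, long-term knowledge
    "fast_layer": 0.1,    # adapts quickly to recent examples
}

def update(weights, gradients):
    """Take one gradient step, scaled by each layer's own learning rate."""
    return {
        name: value - learning_rates[name] * gradients[name]
        for name, value in weights.items()
    }

# Stand-in scalar "layers" receiving the same error signal:
weights = {"slow_layer": 0.80, "fast_layer": 0.80}
gradients = {"slow_layer": 0.5, "fast_layer": 0.5}
print(update(weights, gradients))  # the slow layer barely moves; the fast one shifts noticeably
```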

If you want to try this out yourself, Google has created some very cool experiments that are free to play around with. With Quick, Draw! you are asked to draw random images, and the computer tries to guess what they are. At the end of the experiment, you can see how other people drew the same objects, which is how the experiment 'learns' what a particular item looks like (or at least a crude line drawing of one).

In hindsight, Geoffrey Hinton admitted the limiting factor of his team’s past neural network experiments was a lack of data and computing power. One could imagine Hinton and his colleagues arguing through the 90’s that if they only had 100x more data and 100x more computing power, they could show some real results. The Quick, Draw! experiment above ably demonstrates the importance of having lots of data to ‘train’ neural networks. And as for the computing power that held Hinton back, consider how much farther we have to go to catch up to the human brain.

In the movie Hidden Figures, scientists of the early 1960's were astonished by the speed of the IBM 7090 computer - 24,000 calculations per second, or in modern processing terms roughly .139 million instructions per second, less than one-fifth of a MIPS. Today's fastest processors reach speeds of over 300,000 MIPS.

Best estimates put our brain's computing power at 100,000,000 MIPS. There are actually a few computers in the world with that level of processing power - massive warehouse-sized machines that consume enough electricity to power 10,000 homes. The human brain can match the performance of these beasts using roughly 20 watts of power, enough for a dim lightbulb.
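Putting those figures side by side (they are the rough estimates quoted above, not measurements):

```python
# Quick arithmetic on the estimates quoted in the text.
ibm_7090_mips = 0.139        # early 1960's mainframe
modern_cpu_mips = 300_000    # today's fastest processors
brain_mips = 100_000_000     # best-guess estimate for the human brain

print(f"Modern CPU vs IBM 7090: {modern_cpu_mips / ibm_7090_mips:,.0f}x faster")
print(f"Brain vs modern CPU:    {brain_mips / modern_cpu_mips:,.0f}x more capacity")
```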

And while it’s realistic to expect this level of computing power will be widely accessible within a decade or two, there is certainly much more to the brain than processing decisions. Emotions, motivations, consciousness itself - these are things that clearly exist, although we have no idea how to create them. 

Tantalizingly, given a human brain’s worth of processing power and access to all the world’s data, machine learning might just figure these things out on its own.
