A Brief History Of AI (part 2)
1997: IBM's Deep Blue defeats world chess champion Garry Kasparov.
In May 1997, a landmark event in the history of artificial intelligence occurred when IBM's chess-playing computer Deep Blue defeated the reigning world chess champion, Garry Kasparov, in a highly publicized six-game match: Deep Blue won two games, Kasparov won one, and three ended in draws. The event was monumental because it was the first time a reigning world champion had lost a match to a computer under standard tournament conditions. Deep Blue's victory was not just a triumph for AI but also a demonstration of how far computers had come in searching and evaluating enormous numbers of possibilities.
2004-2007: DARPA Challenges for Autonomous Vehicles
The DARPA (Defense Advanced Research Projects Agency) Challenges for autonomous vehicles significantly propelled the development of self-driving car technology. These competitions were designed to encourage innovation in autonomous vehicle systems capable of navigating complex terrain. The first DARPA Grand Challenge, held in 2004, invited teams to build autonomous vehicles that could traverse a 150-mile desert course. Although no vehicle completed the course, the challenge succeeded in sparking widespread interest and technological advancement in the field. In the second Grand Challenge in 2005, five vehicles finished the desert course, and the 2007 Urban Challenge, which required navigating a mock city environment with moving traffic, demonstrated that self-driving technology was feasible in principle.
2010s: LSTMs rule Speech Recognition
Long Short-Term Memory networks (LSTMs), a special kind of recurrent neural network originally developed by Sepp Hochreiter and Jürgen Schmidhuber in 1997, saw a resurgence in the 2010s that revolutionized speech and sequence recognition. Their gated architecture, capable of learning long-term dependencies, made them exceptionally well suited to tasks where context over long sequences is crucial, such as speech recognition, language modeling, and text generation. With the explosion of smartphone usage in the 2010s, LSTMs were integrated into billions of mobile devices, significantly improving the user experience in voice-activated assistants, predictive typing, and language translation applications.
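To make the mechanism concrete, here is a minimal single-step LSTM cell in NumPy. This is an illustrative sketch with toy dimensions, not the code behind any of the production systems above: the forget, input, and output gates decide what the cell state erases, writes, and exposes at each step, which is what lets the network carry context across long sequences.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W maps [h_prev; x] to the four stacked gate pre-activations."""
    z = W @ np.concatenate([h_prev, x]) + b
    H = h_prev.shape[0]
    f = sigmoid(z[0:H])          # forget gate: what to erase from the cell state
    i = sigmoid(z[H:2*H])        # input gate: what new information to write
    g = np.tanh(z[2*H:3*H])      # candidate cell update
    o = sigmoid(z[3*H:4*H])      # output gate: what to expose as the hidden state
    c = f * c_prev + i * g       # cell state carries long-term context forward
    h = o * np.tanh(c)           # hidden state is the per-step output
    return h, c

# Toy usage with arbitrary sizes (illustration only, weights are random).
H, X = 4, 3
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * H, H + X)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(10):              # process a sequence of 10 random inputs
    h, c = lstm_step(rng.standard_normal(X), h, c, W, b)
```

Learning W and b by backpropagation through time, and stacking many such cells, is what yields the speech and language models described above.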
2011: Watson wins "Jeopardy!"
In 2011, IBM's Watson, an artificial intelligence system, won the game show "Jeopardy!" against two of the show's most successful champions, Ken Jennings and Brad Rutter. This event was significant for AI, particularly as a practical demonstration of advanced natural language processing and information retrieval. Watson's ability to understand complex questions and rapidly search large databases for accurate answers was a remarkable achievement, and it represented a step forward in the practical application of AI in areas such as data analysis and decision-making support, beyond the realm of entertainment.
2012 Deep Learning Big Bang: AlexNet wins ImageNet Competition By A Huge Margin
In 2012, AlexNet, a convolutional neural network (ConvNet), achieved a landmark victory in the ImageNet Large Scale Visual Recognition Challenge, reaching a top-5 error rate of 15.3% against 26.2% for the runner-up, a margin that reignited interest in neural networks, particularly in deep learning for computer vision. Developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, AlexNet owed an important part of its performance to the use of graphics processing units (GPUs) for neural network training and inference, which allowed much faster processing and handling of large datasets. This breakthrough led to a major shift in the ImageNet competition and across the AI field, with nearly all contestants in later years adopting neural network-based approaches. Neural networks were back with a bang.
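For readers who have not met ConvNets, the toy NumPy sketch below shows the core operation that AlexNet stacks into many layers: sliding a small filter over an image and summing the products. The hand-picked edge-detecting filter here is purely for illustration; AlexNet learns its filters from data and evaluates them on GPUs.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most ConvNet libraries)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return out

# A hand-picked vertical-edge detector applied to a toy 6x6 image.
image = np.zeros((6, 6)); image[:, 3:] = 1.0
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])
print(conv2d(image, kernel))   # nonzero responses only around the edge
```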
2013: Learning Superhuman Video Play From Pixels
DeepMind's research on applying Q-learning to play Atari games, first presented in 2013 and published in Nature in 2015, introduced a significant innovation in the field of AI. They developed a Deep Q-Network (DQN) that combined Q-learning, a type of reinforcement learning, with deep neural networks. This system learned to play various Atari 2600 games directly from pixel inputs, thereby finding strategies without game-specific programming, and achieved impressive performance across a range of games. A stunning example is the strategy it found for "Breakout", shown in the image above: drilling a hole through the wall of bricks and then sending the ball through the opening, where it clears many bricks without further intervention.
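As a rough sketch of the underlying idea: DQN generalizes the classic tabular Q-learning update, replacing the table with a deep network over raw pixels (and adding stabilizers such as experience replay and a target network). The toy code below shows the tabular rule itself; all sizes and constants are arbitrary choices for illustration.

```python
import numpy as np

# Tabular Q-learning -- the rule DQN approximates with a deep network when
# states (raw pixel screens) are far too numerous to enumerate.
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration

def q_update(s, a, r, s_next, done):
    """One Bellman backup: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

def act(s, rng):
    """Epsilon-greedy action selection used while learning."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))   # explore
    return int(np.argmax(Q[s]))               # exploit current estimate

# e.g. after observing transition (s=3, a=2, reward=1.0, s'=4, not terminal):
q_update(3, 2, 1.0, 4, False)
```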
2016: AlphaGo beats Lee Sedol in 5-game Go match
In 2016, AlphaGo, an AI program developed by Google DeepMind, achieved a groundbreaking victory in the realm of strategic board games by defeating world champion Lee Sedol four games to one in a five-game Go match. This win was particularly significant because most experts had anticipated that humans would continue to dominate Go for at least another decade, owing to the game's immense complexity and vast number of possible positions. AlphaGo's success was attributed to its use of deep reinforcement learning combined with advanced tree search, enabling a strength of play previously thought unattainable by machines.
2017: AlphaGo Zero - Learning Go only from Self-Play
AlphaGo Zero, an evolution of DeepMind's original AlphaGo AI, marked a significant advancement in the field of artificial intelligence when it was introduced in 2017. Unlike its predecessor, which was trained using large datasets of human professional Go games, AlphaGo Zero learned to play Go entirely from self-play without any human data. It started from scratch, using a deep neural network combined with a Monte Carlo tree search algorithm, improving itself iteratively through reinforcement learning. This method of learning enabled AlphaGo Zero to discover new strategies and achieve superhuman performance, surpassing not only human Go players but also previous versions of AlphaGo.
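To illustrate the shape of learning purely from self-play, here is a heavily simplified but runnable toy: a tabular value function for a small Nim game, updated only from the outcomes of games it plays against itself. The table stands in for AlphaGo Zero's deep network, and a one-ply greedy lookahead stands in for Monte Carlo tree search; this sketch shows the loop structure only, not DeepMind's method at scale.

```python
import random

# Toy Nim: take 1 or 2 stones from a pile; whoever takes the last stone wins.
PILE, MOVES = 10, (1, 2)
V = {}                                  # V[pile] = value for the player to move
alpha, epsilon = 0.2, 0.2               # learning rate, exploration rate

def value(pile):
    return V.get(pile, 0.0)

def choose(pile):
    """One-ply greedy lookahead: an opponent value of v is -v for us."""
    if random.random() < epsilon:
        return random.choice([m for m in MOVES if m <= pile])
    return max((m for m in MOVES if m <= pile),
               key=lambda m: 1.0 if m == pile else -value(pile - m))

def self_play_episode():
    pile, trajectory = PILE, []
    while pile > 0:
        trajectory.append(pile)         # state seen by the player to move
        pile -= choose(pile)
    z = 1.0                             # the player who just moved won
    for s in reversed(trajectory):      # alternate winner/loser perspectives
        V[s] = value(s) + alpha * (z - value(s))
        z = -z

for _ in range(5000):
    self_play_episode()
print({s: round(v, 2) for s, v in sorted(V.items())})  # piles divisible by 3 -> near -1
```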