Part 2 of "How We Got Here," Chapter 1 of The Machine Intelligence Primer
MI has long captured our imaginations through science fiction, but it has also existed as an area of serious academic research for more than 60 years.
The convergence of three interrelated technology trends [Fig. 1.5] is propelling MI past the hindrances that have encumbered its development over the past 60 years. Their effects are so highly anticipated that many experts forecast they will spark a new technological revolution akin to the industrial revolutions of the past. While we like to avoid hyperbole, we concede that the factors contributing to MI's advance today signal the start of a major technological transformation.7, 8, 9, 10
The hallmark achievement these trends have enabled is a powerful machine learning technique known as deep learning. You can learn more about deep learning in the next section of The Machine Intelligence Primer.
An Exponential Increase in Availability of Digital Data
Training machine learning algorithms requires massive amounts of data. In the past, researchers painstakingly codified pieces of information into digital format to make them usable for machines. Today, our Internet-connected devices produce enormous quantities of machine-readable data without our explicit direction.
Better Hardware/High-Performance Computing
Not too long ago, running a single machine learning experiment could take days or weeks because of the sheer volume of data the algorithms needed to process. Today, new types of computing chips enable much faster experimentation. Graphics processing units (GPUs), originally created for computer gaming, where players move through visually rich digital worlds, have proven exquisitely useful for machine learning. By carrying out many computations in parallel rather than sequentially, as traditional chips do, GPUs work much more efficiently than their forebears.
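As a rough illustration (our sketch, not code from the Primer), the arithmetic at the heart of machine learning is highly parallel: in a matrix product, every output cell can be computed independently of every other, so thousands of them can be calculated at once rather than one after another. The Python snippet below contrasts the sequential view, one cell at a time, with a single vectorized call that parallel hardware can spread across many cores; the array sizes are arbitrary.

    # Illustrative sketch only: why matrix math parallelizes so well.
    # Each output cell of C = A @ B depends only on one row of A and one
    # column of B, so all cells can in principle be computed at the same
    # time -- exactly the kind of workload GPUs were built for.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((256, 256))
    B = rng.standard_normal((256, 256))

    # Sequential view: one multiply-add chain per output cell, one cell at a time.
    C_seq = np.zeros((256, 256))
    for i in range(256):
        for j in range(256):
            C_seq[i, j] = np.dot(A[i, :], B[:, j])

    # Parallel view: a single call that a GPU (or a vectorized CPU library)
    # can spread across thousands of independent multiply-adds.
    C_par = A @ B

    assert np.allclose(C_seq, C_par)  # same result, very different cost

The two computations produce identical results; the difference is that the second form exposes all of the independent work at once, which is exactly what parallel chips exploit.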
Breakthroughs in Machine Learning Research
Connectionist machine learning research has in recent years produced new algorithms that are remarkably efficient and accurate at interpreting massive datasets. Artificial Neural Networks, a type of algorithm that has existed since the 1950s, now have far more data to learn from and are far more complex than earlier iterations. These changes have contributed to the creation of Deep Neural Networks and "deep learning."
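To make the idea concrete, here is a minimal sketch of an artificial neural network with a single hidden layer, learning the toy XOR function by gradient descent. This is our illustration rather than an example from the Primer; the layer sizes, learning rate, and task are arbitrary choices. Stacking more hidden layers like this one is what turns such a network into a Deep Neural Network.

    # Minimal sketch (not code from the Primer): a one-hidden-layer
    # neural network trained with gradient descent to learn XOR.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)   # input -> hidden
    W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)   # hidden -> output

    lr = 0.5
    for step in range(10000):
        # Forward pass: each layer applies weights, then a nonlinearity.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: push the prediction error back through the layers
        # and nudge every weight in the direction that reduces it.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2))  # should settle near [[0], [1], [1], [0]] once trained

The "learning" here is nothing more than repeatedly adjusting the weights to reduce the gap between the network's outputs and the examples it is shown, which is why more data and more layers translate directly into more capable models.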
The Machine Intelligence Primer provides a foundational understanding of where MI came from, how it got to where it is today, and where it's likely going. It explains and dispels common myths that surround MI. It is meant to help executives, practitioners, and curious skeptics alike consider what MI will mean for them and their teams, and to create a world that is more efficient, equitable, meaningful, and verdant.