(Source: imagIN.gr photography/Shutterstock.com)
After more than a century of research on Artificial Intelligence (AI), the field has recently become both popular and enormously important. In particular, Pattern Recognition and Machine Learning have been revolutionized through Deep Learning (DL), a relatively new moniker for Artificial Neural Networks (NNs) that learn from experience. DL is now heavily used in industry and daily life. Image and speech recognition on your smartphone, and automatic translation from one language to another, are just two examples of DL in action.
Many people in the Anglosphere assume that DL is a creation of the Anglosphere nations. In fact, however, DL was invented in places where English is not an official language. Let us first zoom out and look at AI history in the broader context of computing history.
One of the earliest mechanical computing machines was the Antikythera Mechanism, built in Greece in the first century BC. Running with 37 gears of various sizes, it was used to predict astronomical events (Figure 1).
Figure 1: The Antikythera Mechanism was built in Greece in the first century BC. The device consisted of 37 gears of various sizes and was used to predict astronomical events. (Source: DU ZHI XING/Shutterstock.com)
The sophistication of the Antikythera Mechanism was not surpassed until some 1,600 years later, when Peter Henlein of Nürnberg began building miniaturized pocket watches in 1505. Like the Antikythera Mechanism, however, Henlein's machines were not general-purpose devices computing results from user-given inputs. They simply used gear ratios to divide time: watches divide the number of seconds by 60 to get minutes, and the number of minutes by 60 to get hours.
In 1623, however, Wilhelm Schickard in Tübingen constructed the first automatic calculator for basic arithmetic. This was soon followed by Blaise Pascal's Pascaline in 1640 and Gottfried Wilhelm Leibniz's step reckoner in 1670, the first machine to perform all four fundamental arithmetic operations: addition, subtraction, multiplication, and division. In 1703, Leibniz published his Explanation of Binary Arithmetic, describing the binary number system now used by virtually all modern computers.
Mathematical analysis and data science also continued to develop. Around 1800, Carl Friedrich Gauss and Adrien-Marie Legendre developed the least squares method of pattern recognition through linear regression (now sometimes called "shallow learning"). Gauss famously used such techniques to rediscover the asteroid Ceres by analyzing data points of previous observations, then using various tricks to adjust the parameters of a predictor to correctly predict the new location of Ceres.
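The least squares idea is easy to demonstrate. The sketch below fits a straight line to a handful of made-up observations and then extrapolates the next value, loosely mirroring how Gauss predicted Ceres' next position from past data; the data points here are invented purely for illustration.

```python
import numpy as np

# Toy observations (x, y); the values are invented for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Build the design matrix [x, 1] and solve for slope a and intercept b
# that minimize the sum of squared errors ||A @ [a, b] - y||^2.
A = np.column_stack([x, np.ones_like(x)])
(a, b), residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)

# Use the fitted line to predict the next observation.
y_next = a * 5.0 + b
print(a, b, y_next)  # slope ≈ 1.99, intercept ≈ 1.04
```

This is "shallow learning" in the modern sense: a single layer of adjustable parameters (here just `a` and `b`) tuned to minimize prediction error on observed data.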
The first practical program-controlled machines appeared at about this time in France: automated looms programmed by punch cards. Around 1800, Joseph Marie Jacquard and colleagues thus became the first practical programmers.
In 1837, Charles Babbage of England designed a more general program-controlled machine called the Analytical Engine. Nobody was able to build it, perhaps because it was still based on the cumbersome decimal system instead of Leibniz's binary arithmetic. However, in 1991, a specimen of his less general Difference Engine No. 2 was finally completed and shown to work.
At the beginning of the 20th century, progress toward intelligent machines accelerated dramatically. Here are major milestones related to the development of AI since 1900:
Figure 2: In 1936, Alan Turing in the U.K. formalized the notion of computation using a theoretical construct now known as the Turing machine. (Source: EQRoy/Shutterstock.com)
So much for the history up to 1970. AI History Part II will take a closer look at what has happened since then.
Jürgen Schmidhuber is often called the father of modern Artificial Intelligence (AI) by the media. Since about age 15, his main goal has been to build a self-improving AI smarter than himself, then retire.

His lab's Deep Learning Neural Networks (since 1991), such as Long Short-Term Memory (LSTM), have revolutionized machine learning. By 2017, they were on 3 billion devices and used billions of times per day by the users of the world's most valuable public companies, e.g., for greatly improved speech recognition on over 2 billion Android phones (since mid-2015), greatly improved machine translation through Google Translate (since November 2016) and Facebook (over 4 billion LSTM-based translations per day as of 2017), Apple's Siri and QuickType on almost 1 billion iPhones (since 2016), the answers of Amazon's Alexa (since 2016), and numerous other applications.

In 2011, his team was the first to win official computer vision contests with deep neural nets, achieving superhuman performance. In 2012, it fielded the first deep NN to win a medical imaging contest (on cancer detection). All of this attracted enormous interest from industry.

His research group also established the fields of metalearning, mathematically rigorous universal AI, and recursive self-improvement in universal problem solvers that learn to learn (since 1987). In the 1990s, he introduced unsupervised adversarial neural networks that fight each other in a minimax game to achieve artificial curiosity. His formal theory of creativity, curiosity, and fun explains art, science, music, and humor. He also generalized algorithmic information theory and the many-worlds theory of physics, and introduced the concept of Low-Complexity Art, the information age's extreme form of minimal art.

He is the recipient of numerous awards, the author of over 350 peer-reviewed papers, a frequent keynote speaker at large events, and Chief Scientist of the company NNAISENSE, which aims to build the first practical general-purpose AI. He also advises various governments on AI strategies.
Copyright ©2021 Mouser Electronics, Inc.