For decades, scientists have been fascinated by the topic of artificial intelligence.
Many systems created under this epithet have been artificial for a long time, but intelligence has not been a property they widely possessed. Over the last seven years, however, this has changed rapidly: With deep neural networks, developers now have a powerful tool at their disposal.
In July 1956, the creation of the first artificial mind seemed imminent. A group of computer scientists and mathematicians at the renowned Dartmouth College in New Hampshire had sent out a call to join an ambitious research project: the Dartmouth Summer Research Project on Artificial Intelligence. In their enthusiasm, the project's founders believed speaking machines, networks modeled on the human mind, self-optimizing computers and even machine creativity to be within their grasp. Although a busy summer of work produced little more than sheaves of writing and big ideas, the utopian scientists did coin the term artificial intelligence (AI for short) and created an entirely new field of research that has kept the whole world holding its breath ever since.
Artificial intelligence: tricky to pin down
A good sixty years later, one thing is certain: what is referred to as true, or general, artificial intelligence, that is, AI that comprehensively matches or even exceeds human intelligence, remains a utopian dream even today. No technological system in the foreseeable future will be capable of passing the Turing test. 'Weak' AI systems, currently the primary object of research, are not even intended to do so. Instead, these systems are designed to process problems autonomously within defined boundaries or to respond to input questions. Weak AI algorithms are becoming better and better at solving concrete application problems, for example evaluating complex logical or mathematical expressions. They can act as opponents in a game of chess, checkers or Go. They excel at analyzing large volumes of text or data and form the core of internet search engines. Embedded in myriad smartphone apps, artificial intelligence is already our constant companion: we carry AI around with us in our pockets. When we speak to "Alexa" or "Siri," our words and phrases are analyzed by AI algorithms. As John McCarthy, a founder of the Dartmouth conference, drily remarked on the fate of AI applications: "As soon as it works, no one calls it AI anymore."
As interconnected as a brain
The first artificial neural networks were devised as early as the 1950s. These networks are the key to artificial intelligence's success. In such a network, computation is not rigid and binary, allowing only two options: 1 or 0, on or off. Instead, it is modeled on biological nervous systems. Nervous systems operate with threshold values and can accommodate a multitude of values between 0 and 1; a seemingly infinite number of nerve cells are dynamically interconnected by growing, mutable links. The human brain learns by constantly reassessing the weighting of these links: pathways used frequently are reinforced, while rarely used links are allowed to wither. Of course, artificial neural networks run on conventional computers, which ultimately also operate on ones and zeros. But within this system, the algorithm's operating principle and threshold logic reflect their biological counterparts.
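The principle described above, a weighted sum passed through a smooth threshold function rather than a hard binary switch, can be illustrated with a minimal sketch. The weights and inputs here are arbitrary illustrative numbers, not values from any real network:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs is
    pushed through a sigmoid threshold function, so the output can
    take any value between 0 and 1 rather than a hard on/off."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# The same inputs "fire" weakly or strongly depending on how the
# connections are weighted; learning means adjusting these weights.
print(neuron([1.0, 0.5], [0.8, -0.3], 0.1))  # moderate activation
print(neuron([1.0, 0.5], [2.5, 1.5], 0.1))   # strong activation
```

Strengthening a connection weight, as the brain does with frequently used pathways, moves the neuron's response for a given input closer to full activation.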
Artificial, interlinked neurons are fed input values and pass the data on to neurons on a downstream level. At the end of the chain, a level of output neurons supplies a result value. The variable weighting of the individual connections lends the network a remarkable property: the ability to learn. Today, these networks possess more and more levels; they are more complex and more deeply nested. Deep neural networks in some cases comprise more than one hundred of these successive program levels. As learning networks, they keep taking corrective feedback into account until they are able to produce a good solution to a problem, for example in image recognition: during training, also referred to as 'deep learning,' the system devours thousands upon thousands of photographs until it is capable of making statements about previously unseen images. It then performs a feat of knowledge application: it sees a cat as a cat; it calls an apple an apple, even when the apple is half-obscured by leaves; it recognizes traffic signs, deer, humans. Highly reliable recognition not only allows robot taxis to follow traffic rules but even now helps surgeons identify tumors: medical imaging scans are more and more often compared with medical image databases in a fully automatic process. For a long time, deep neural networks were largely disregarded by AI research; their organically grown, hard-to-predict behavior could not keep up with the speed of classic deterministic algorithms. But in the first decade of the new millennium, computing power became sufficient to exploit the full potential of deep networks. Geoffrey Hinton of the University of Toronto in Canada had long endured mild ridicule for his self-teaching approach. In 2012, however, he won the ImageNet Challenge, a competition in which AI systems compete to correctly interpret hundreds of thousands of images.
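The learning loop sketched above, in which corrective feedback repeatedly adjusts the connection weights, can be shown on a toy scale. This is a deliberately minimal sketch with a single neuron learning the logical AND function, not the deep, many-layered networks the article describes, but the feedback mechanism is the same in spirit:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training set: two inputs and a target label (logical AND).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
lr = 1.0  # learning rate

# Training loop: the error signal (target - output) is fed back to
# adjust the connection weights, pass after pass, until the network
# produces the desired answers.
for epoch in range(2000):
    for x, target in data:
        out = sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)
        err = target - out
        grad = err * out * (1 - out)  # sigmoid derivative term
        w = [wi + lr * grad * xi for xi, wi in zip(x, w)]
        b += lr * grad

# After training, the outputs land close to the targets.
for x, target in data:
    out = sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)
    print(x, round(out, 2), "target:", target)
```

Real deep learning stacks many such layers and trains on vast image sets, but the idea of nudging weights against an error signal carries over directly.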
Deep neural networks shine in any field that calls for analyzing complex patterns: they recognize, interpret and translate languages, analyze video sequences and predict stock price developments. They are the core element of voice assistants such as those used by Amazon or Apple. With extensive but targeted training, they can learn to play computer games or even beat human grandmasters at the highly complex game of Go. Combined with other types of networks or with robotics, the capabilities of deep networks can be vastly expanded: for years now, artificial soccer players have competed against each other in the annual RoboCup championship. They react entirely autonomously to their opponents, interact with teammates and occasionally even manage to score a goal. At this year's RoboCup in Nagoya, Japan, the smartest robots were given the opportunity to compete autonomously in other disciplines as well: in the Logistics League and the @work industrial robot category, rescuing accident victims in disaster scenarios in the Rescue Robot League, or as electronic butlers in the RoboCup@Home competition. Progress in artificial intelligence will drive radical changes in the mobility sector over the coming years, because the massive complexity of road traffic, particularly in urban centers, pushes classic algorithms to their limits when developing highly automated or even autonomous vehicles. Dr Christian Koelen, project leader at Porsche Engineering, explains: "Covering all imaginable parameter variations using classic algorithms would take a very long time and incur high expenses for programming and tests." For object classification that reliably detects other road users, such as pedestrians, Porsche Engineering has chosen to pursue deep learning. "Deep neural networks today achieve very high success rates," Koelen confirms.
Promising practical tests
But artificial intelligence is not only useful for recognizing the surroundings during automated driving. Assistance systems such as Lane Keep Assist can also benefit from deep learning, as a feasibility study completed by Porsche Engineering's Johann Haselberger shows. The issue is no small matter: after all, assistance systems of this kind take control of the steering while driving. For the neural network to make the right decision within fractions of a second, it first needs to be trained. Professional drivers completed long test drives in the area around Stuttgart in a trial vehicle equipped with a high-performance computer and two new video sensors. While driving, the human driver's steering motions were continuously correlated with the video recordings of the road ahead. Roughly half of the driving took place on the motorway, the other half on country roads and under dynamic driving conditions.
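The data collection described above pairs each camera frame with the steering input the human driver applied at that moment, yielding supervised training examples. A minimal sketch of that pairing step, with all names and the stand-in sensor data purely hypothetical (the article does not describe Porsche Engineering's actual data format):

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One training example: a camera frame paired with the steering
    angle the human driver applied at the same instant."""
    frame: list          # pixel data (placeholder for a real image)
    steering_deg: float  # recorded steering angle in degrees

def correlate(frames, steering_log):
    """Pair each video frame with the simultaneous steering input,
    producing (image, label) examples for supervised training."""
    return [Sample(f, s) for f, s in zip(frames, steering_log)]

# Illustrative stand-ins for real, time-synchronized sensor streams.
frames = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.4]]
steering_log = [-1.5, 0.0, 2.3]
dataset = correlate(frames, steering_log)
print(len(dataset), dataset[0].steering_deg)
```

A network trained on such pairs learns to map the view of the road ahead directly to a steering command, which is why the variety of the recorded drives (motorway, country road, dynamic driving) matters so much.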
After several weeks, the system was put to the test: the neural network was allowed to drive by itself. "Both the computer simulation and the real-life tests on the road provided pretty good initial results," Haselberger says. But they also showed that the current development status still has a few shortcomings. The robustness of the neural-network-based controller depends on the volume of training data, and the control quality depends heavily on the training material used. Special circumstances that the controller has not yet "seen" in training, say road works with unusual markings, remain hard to handle in practice. Nonetheless, dangerous situations are precluded: the classic controller always remains active in the background. Were the neural network to deliver nonsensical values, it would be instantly overruled. This combination of machine learning and a classic deterministic algorithm is referred to as a hybrid system, and many experts expect hybrid systems to become commonplace in the automotive industry in the near future.
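The hybrid arrangement described above, a neural controller supervised by an always-active classic controller, amounts to a plausibility gate. A minimal sketch, assuming a simple deviation threshold as the overrule criterion (the article does not specify how Porsche Engineering detects nonsensical values):

```python
def hybrid_steering(nn_command, classic_command, max_delta_deg=10.0):
    """Hybrid controller sketch: the neural network's steering command
    is used only while it stays plausible. Otherwise the classic
    deterministic controller, always running in the background,
    instantly overrules it. The threshold is an illustrative assumption."""
    if nn_command is None or abs(nn_command - classic_command) > max_delta_deg:
        return classic_command  # fall back to the deterministic controller
    return nn_command

print(hybrid_steering(2.0, 1.5))   # plausible: neural command passes through
print(hybrid_steering(45.0, 1.5))  # nonsensical: classic controller overrules
```

The design choice is that the learned component may improve comfort and precision, but safety never depends on it: the deterministic fallback bounds the worst case.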
When this article was being written, road testing had not been fully completed. But Koelen holds to his conviction: "This technology has great potential for providing drivers with even better assistance. We can imagine using it in series production by early next year." There remains a fair bit of work to do until then. In a standard-production car, drivers should still be able to decide whether they would prefer to corner in a sporty, dynamic style or a more conservative one. And the assistance system also needs to react correctly when drivers choose to change their driving style mid-corner. Haselberger is looking forward to the work ahead: "We're combining a classic Porsche virtue, transverse dynamics, with artificial intelligence, which is a new core competence for us. It's really exciting." Who would argue with that?
Text first published in the Porsche Engineering Magazine, issue 01/2018