“You look tired,” says the voice. “Maybe you could do with a nap. There’s a service station in 25 minutes. I’ll wake you then and you can buy yourself a coffee.” I hadn’t noticed that the light had grown warmer and softer. The music matched my heartbeat, but always a beat slower, to relax me. I smiled. “You’re not normally convinced so easily. Sleep well,” says my car, and drives me on to my conference in Hamburg.

– A vision of the future from Marco Maier. 

Humans and machines: an interactive relationship. Machines give us directions, remind us about our appointments, and warn us when we are not moving enough. They can drive, cook, paint, make music, sometimes provide a more accurate diagnosis than a doctor, and anticipate problems. Yet we complex beings, with our hidden thoughts and feelings, remain a mystery to them. The question is: for how long?

Human interaction uses many different channels of communication: speech, writing, facial expressions and gestures. Interacting with a computer likewise runs over several channels. Program code, in other words written instructions, makes sure that machines do exactly what humans ask of them. Screens react to a swipe. Voice-based interfaces wait for a command. All of these channels, however, rely on explicit statements and orders. Yet the unsaid can often be as telling as what is actually said out loud. The machines of the future will not just be smarter. They will be empathetic too, detecting our emotions from voice alone, for example.

Affective computing: function and understand

Shrikanth Narayanan, an Indian-American professor at the University of Southern California in Los Angeles, and his colleagues spent two years recording hundreds of conversations from couple therapy sessions. This material was supplemented with information on the marital status of the people involved. Narayanan’s team fed the voice data into their algorithm, analysing it for volume, pitch, and jitter and shimmer (small irregularities in the frequency and amplitude of the voice). That was all it took. The system was then able to predict with 80% accuracy whether a couple would still be together at the end of the observation period, outdoing the assessments of the therapists involved in the trial. Narayanan is very optimistic about the future of this technology, claiming that machines are coming very close to people when it comes to recognising emotions. He also explains that our voices carry a great deal of information about our mental state and our identity.
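For readers who want a feel for how such a pipeline might look, the Python sketch below extracts a handful of simple acoustic features (volume, pitch, and rough jitter and shimmer proxies) from a recording and trains a classifier on labelled sessions. The feature definitions, the synthetic data and the classifier choice are illustrative assumptions, not a reconstruction of Narayanan’s actual system.

```python
# Minimal sketch: acoustic features -> binary prediction ("still together?").
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def acoustic_features(path: str) -> np.ndarray:
    """Return [mean volume, volume variation, mean pitch, jitter proxy, shimmer proxy]."""
    y, sr = librosa.load(path, sr=16000)
    rms = librosa.feature.rms(y=y)[0]                       # frame-wise energy ("volume")
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)           # frame-wise pitch estimate
    f0 = f0[np.isfinite(f0)]
    jitter = np.mean(np.abs(np.diff(f0))) / np.mean(f0)     # frame-to-frame pitch variation (rough jitter proxy)
    shimmer = np.mean(np.abs(np.diff(rms))) / np.mean(rms)  # frame-to-frame amplitude variation (rough shimmer proxy)
    return np.array([rms.mean(), rms.std(), f0.mean(), jitter, shimmer])

# In a real study, X would be built from the session recordings, e.g.:
# X = np.vstack([acoustic_features(p) for p in session_recordings])
# y = np.array(still_together_labels)

# Synthetic stand-in data so the sketch runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # pretend feature vectors
y = (X[:, 2] + X[:, 3] > 0).astype(int)       # pretend outcome labels
model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:1]))             # probability estimate for a new session
```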

Affective computing focuses on machines that not only function but can also adapt to people and understand their feelings. The growing popularity of voice assistants has lent huge impetus to research in this area of computer science. Voices convey emotions more than any other form of human expression, and they are a key element in the interaction between humans and machines.

A changing human/machine interaction

Meanwhile, the steadily growing autonomy of machines and their ever-increasing scope are changing the emotional side of human/machine interaction. Instead of following orders, the “smart agent” is given only a framework for action and an optimisation target. From abstract business process optimisation systems based on artificial intelligence (AI) to autonomous vehicles, machines are making decisions that affect our everyday lives: dimming the lights at home after a hard day at work, adjusting the room temperature or music volume, and even running a bath.

“Emotion AI technologies pick up on the smallest changes in individual parameters and can derive a person’s state of mind from that information. Not just language, but also visual and physiological data provide valuable information,” confirms Dr Marco Maier from TAWNY, a company that specialises in affective computing and is already trialling the technology in everyday applications. How, for example, should work be distributed among the members of a team so that nobody feels overburdened and stressed and, conversely, nobody feels underused and bored? Smart systems optimise workflows independently, measuring and taking account of the impact on the safety, productivity and well-being of workers. Empathetic consumer devices dynamically adjust their functionality to the user’s state. Professional athletes use the technology to support their training and stay in a state of flow for as long as possible. Sales staff practise their presentation and demeanour with an empathetic companion.
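As a thought experiment, the workload question above can be shrunk to a very small piece of code: give each new task to the team member whose projected load, weighted by a stress score that an emotion-AI system might supply, is lowest. The scoring rule, the numbers and the names are invented for illustration; the actual products in this space are certainly more sophisticated.

```python
# Toy stress-aware task assignment: a greedy balancer over hypothetical stress scores.
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    stress: float                 # 0.0 = relaxed .. 1.0 = overloaded (hypothetical sensor estimate)
    assigned_effort: float = 0.0
    tasks: list = field(default_factory=list)

def assign(tasks: dict, team: list) -> None:
    """Greedily give each task to the worker whose projected stress-weighted load is lowest."""
    for task, effort in sorted(tasks.items(), key=lambda t: -t[1]):   # largest tasks first
        target = min(team, key=lambda w: (w.assigned_effort + effort) * (1.0 + w.stress))
        target.tasks.append(task)
        target.assigned_effort += effort

team = [Worker("A", stress=0.8), Worker("B", stress=0.2), Worker("C", stress=0.4)]
assign({"report": 3.0, "deploy": 2.0, "triage": 1.5, "review": 1.0}, team)
for w in team:
    print(w.name, w.tasks, round(w.assigned_effort, 1))   # the most stressed worker ends up with the lightest load
```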

The majority of machines in the world have an emotional IQ of 0. But what is already clear is that the machines of the future will not just be smarter. They will be empathetic too.

Being able to accurately assess a person’s mood is a vital part of genuine communication free of misunderstandings. This leads directly to a second trend, namely pervasive or ubiquitous computing: the concept of computing that is available anytime and anywhere, embedded unobtrusively in our surroundings.

The American Thad Starner, a professor at Georgia Tech and one of the developers of Google Glass, is a pioneer in this field. Starner has been wearing a computer for about a quarter of a century. Wearable technology is as natural to him as a jacket and trousers. Over the years he has worn a PC on his hip, donned a clunky pair of glasses and kept a Twiddler (a one-handed chorded keyboard) in his trouser pocket. Starner refers to himself as a cyborg and remembers very well how he wrote his dissertation while walking around and rehearsed his lectures while lying on the couch in his office. His students thought he was sleeping.

Technology is always with you

Starner laid his smartphone to rest about ten years ago, frustrated by the unwieldy design and the fact that he never had his hands free. His preference remains glasses with integrated computers, which are becoming ever smaller, to the point of being almost invisible. The technology has yet to achieve its breakthrough, but he firmly believes that this type of smart system, combined with voice commands and an assessment of mood, will soon be able to recognise what the user needs: a weather report or route navigation on the way to an appointment, or, if the user is stressed and rushing to an urgent meeting, putting through only the important phone calls. These systems “sense” what their wearer is doing and predict what he or she is about to do. They can, for example, use augmented reality to project the next stages of a work process onto smart glasses or directly onto the desk, or provide unobtrusive assistance by briefly illuminating the box containing the correct screws. Dieter Schmalstieg, augmented reality expert at Graz Technical University and author of the book Augmented Reality: Principles and Practice, refers to these wearable devices as “all-knowing organisers”: “Information is becoming a component of the real world.”
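A crude way to picture the “important calls only” behaviour Starner describes is a rule-based filter that combines a stress estimate with calendar context. Everything in this sketch, from the thresholds to the contact list and the notion of a single stress score, is a made-up placeholder for whatever such a wearable system would actually use.

```python
# Toy context-aware call filter for a hypothetical wearable assistant.
from dataclasses import dataclass

@dataclass
class Context:
    stress: float            # 0.0 calm .. 1.0 very stressed (hypothetical wearable estimate)
    minutes_to_meeting: int  # time until the next calendar entry

VIP_CALLERS = {"boss", "partner", "daycare"}   # invented priority list

def should_ring(caller: str, ctx: Context) -> bool:
    """Let a call through unless the wearer is stressed and about to walk into a meeting."""
    if ctx.stress > 0.7 and ctx.minutes_to_meeting < 15:
        return caller in VIP_CALLERS      # only priority contacts interrupt
    return True                           # otherwise behave like a normal phone

print(should_ring("colleague", Context(stress=0.9, minutes_to_meeting=5)))   # False
print(should_ring("boss", Context(stress=0.9, minutes_to_meeting=5)))        # True
```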

Modern-day cars, devices on wheels, are already busy collecting data. Sensors can monitor the driver’s stress level by recording skin conductance or pulse, recognising when he or she is excited or angry and reacting accordingly. The Fraunhofer Institute for Industrial Engineering IAO in Stuttgart is developing demonstrators and prototypes for the near future of automated driving. These use the principles of persuasive computing to track the mood of drivers and passengers at any given time, for example by evaluating eye movements. If they detect fatigue or a lack of attention, a blue light inside the vehicle or a small movement of the steering wheel alerts the driver to the situation.
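To make the idea concrete, the fragment below condenses such a monitoring loop into a few rules: derive arousal and fatigue from sensor readings and trigger an unobtrusive intervention. The sensor names, units and thresholds are invented for illustration and are not taken from the Fraunhofer IAO prototypes.

```python
# Toy in-car monitoring loop: sensor readings in, gentle interventions out.
from dataclasses import dataclass

@dataclass
class DriverState:
    skin_conductance: float   # microsiemens
    heart_rate: float         # beats per minute
    eye_closure_ratio: float  # share of the last minute with eyes closed (PERCLOS-style measure)

def assess(state: DriverState) -> list:
    actions = []
    # Elevated arousal (excitement or anger): raised conductance and pulse.
    if state.skin_conductance > 8.0 and state.heart_rate > 100:
        actions.append("soften cabin lighting and lower music volume")
    # Fatigue: eyes closed for a large share of the time.
    if state.eye_closure_ratio > 0.15:
        actions.append("switch on blue interior light")
        actions.append("apply a small corrective steering-wheel movement")
    return actions

print(assess(DriverState(skin_conductance=9.2, heart_rate=108, eye_closure_ratio=0.21)))
```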

Emotionally adjusted machines are the future

Emotionally adjusted machines will change our future. “Including emotional and social signals as well allows a genuinely interactive interplay between humans and technology”, explains Tanja Terney Hansen-Schweitzer from VDI/VDE Innovation. What that feels like can be experienced at a conference organised by the German Federal Ministry of Education and Research on “Socially and emotionally sensitive systems for optimised human/technology interaction”.

The man on the training bike is pedalling hard and putting in a huge effort but suddenly starts grimacing. “You look like you’re in pain,” his instructor says sympathetically. “Try cycling more slowly.” The man follows the advice and the instructor is happy: “Much better.” 

The instructor is not a human but an avatar on a huge screen on the wall. In some miraculous way, this avatar senses how its charge is feeling. This is a project being run by Augsburg University in Germany, in cooperation with Ulm University Clinic. The smart agent learns which actions – bright or dark, loud or quiet, warm or cool – steer the user in the desired direction, in other words what makes them relaxed or attentive, awake or sleepy, calm or energetic.

The aim is for the virtual trainer on the screen to help older people in particular, ensuring the correct level of exertion. To do this, it interprets facial expressions but also monitors noises, such as heavy breathing. The system also measures skin conductance and pulse, thereby recording stress and signs of over-exertion. Based on this information, the instructor can adapt its facial expressions and gestures to how the person working out is faring.
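The adaptation described here can be imagined as a very small learning problem: try an action, measure whether the person moved closer to the desired state, and prefer actions that have worked before. The epsilon-greedy scheme, the action list and the single “arousal” score below are stand-ins chosen for illustration, not details of the Augsburg/Ulm system.

```python
# Toy adaptive agent: learn which ambient actions move a measured arousal score toward a target.
import random

ACTIONS = ["brighten", "dim", "louder", "quieter", "warmer", "cooler"]

class AdaptiveTrainer:
    def __init__(self, target_arousal: float = 0.5, epsilon: float = 0.2):
        self.target = target_arousal
        self.epsilon = epsilon
        self.value = {a: 0.0 for a in ACTIONS}   # how well each action has worked so far
        self.counts = {a: 0 for a in ACTIONS}

    def choose(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)                     # explore occasionally
        return max(ACTIONS, key=lambda a: self.value[a])      # otherwise exploit the best-known action

    def update(self, action: str, arousal_before: float, arousal_after: float) -> None:
        # Reward = how much closer the user moved to the target state.
        reward = abs(arousal_before - self.target) - abs(arousal_after - self.target)
        self.counts[action] += 1
        self.value[action] += (reward - self.value[action]) / self.counts[action]

trainer = AdaptiveTrainer(target_arousal=0.5)
action = trainer.choose()
# ...apply the action, then read the next arousal estimate from the sensors...
trainer.update(action, arousal_before=0.8, arousal_after=0.6)
```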

Voice-based emotion recognition

Björn Schuller has launched a start-up called Audeering, which offers voice-based emotion recognition services. “Emotions are important because people need them to survive. And that also applies to artificial intelligence.” Ideally, Schuller wants to see machines adapt to people in the same way as another person would. Alongside the US, Germany is a driving force in this type of research.

Audeering’s customers include market research companies interested in analysing customers’ voices to find out what they really think about a product. According to Schuller, the analysis of voice data from the internet (such as YouTube) is another huge market, enabling “opinion-forming to be tracked in real time”. Schuller is in no doubt: before long, emotionally sensitive systems will be having conversations with humans rather than merely taking spoken commands to control devices. Siri’s response to a marriage proposal might be: “It’s nice of you to ask.” But in a real conversation the dialogue would have to continue, and “for that I need emotions,” explains Schuller. “The computer can then carry out a perfect analysis of mood and knows whether I am feeling strong, weak, happy or sad.”

Machines have to learn to adapt to humans

“Socially sensitive and cooperative systems are the future,” says Professor Stefan Kopp from Bielefeld University in Germany, where he heads the Social Cognitive Systems working group. But only if machines learn to adapt to humans. What happens if they do not was demonstrated in trials carried out by the German Research Centre for Artificial Intelligence, in which socially disadvantaged young people took part in practice job interviews with an avatar. The researchers added an emotion recognition feature only after the first trial ended disastrously, at least as far as the technology was concerned. One of the users was driven to distraction by the avatar on the screen as it confronted him with unpleasant experiences over and over again, with no concern for his emotional state. The young man’s response was to throw the monitor out of the window.
