If artificial intelligence is running the company – what happens to the people?

Artificial intelligence can be utilised to automate company decision-making processes. At the moment, it is first and foremost being used successfully for routine tasks. However, the future could also see robo-advisors playing an active role in product development, human resource matters or strategic decisions. AI systems will not replace human managers in the foreseeable future. Yet managers who use artificial intelligence as a decision-making tool will ultimately outlast those who don’t. In tomorrow’s world, the core skills of people in management will include the ability to make intelligent decisions on when to utilise machine-based assistance.

by Thomas Ramge

Algorithmic art operates at the interface of two spheres of human knowledge. One is logic, which defines problems in advance and solves them in linear progressions. The other is the practice of aesthetics, in which problems can be defined only after they are solved. Human intelligence commands both modes. This can be seen in Christiaan Endeman's abstract hovering forms. Drawing on multiple sources (photographs, colour spectra), they crystallise in endless permutations of coloured layers.

David Ferrucci is not a well-known name; a computer system took all the credit for his work: IBM Watson. It made headlines around the world in 2011 when it defeated the reigning champions on the American cult quiz show Jeopardy! The computer scientist Ferrucci was the human architect of the artificial intelligence (AI) system that handles human language so well. Today, Watson is one of the most successful platforms that companies and organisations use to automate knowledge work – and the world’s biggest hedge fund, Bridgewater Associates, has pinned its hopes on its creator. By 2022, three quarters of all the fund’s management decisions – from promotions to strategic company matters – are to be made by artificial intelligence. And Ferrucci is meant to make sure that this really happens.

Every procedure in the company, every decision-making process has been comprehensively datafied. The company has been logging all meetings for years in order to find out later who contributed what to which decision. The employees constantly evaluate each other in an app. All data flows into Ferrucci’s learning system, which is called PriOS. It is not intended to replace humans completely, but to lead to more evidence-based, rational decisions that are less subject to cognitive biases.

Initial experiments with fully automated companies, in which a decentralised autonomous organisation – DAO for short – replaces human intelligence entirely, already exist in the fast-growing start-up communities around blockchain technology. Company goals, business models and processes are written in code. Invisible software steers the company’s fortunes, from purchasing and warehousing to pricing and customer management, by means of statistical analysis, algorithmic decision routines and so-called smart contracts. The DAO is not as far from practicability as it might seem at first glance. A nearly fully automated online shop with a specialised range of products and robotised package delivery is more reality than science fiction; an app store already operates largely automatically today. Decentralised, autonomous organisations would simply be the logical consequence.
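To make the idea of algorithmic decision routines more concrete, here is a minimal, purely illustrative Python sketch of the kind of rules a DAO-style shop might encode. All names, thresholds and numbers are invented for the example and are not taken from any real system.

```python
from dataclasses import dataclass

@dataclass
class ShopState:
    stock: int          # units on hand
    list_price: float   # current selling price
    weekly_sales: int   # units sold in the past week

def reorder_decision(state: ShopState, reorder_point: int = 50, batch: int = 200) -> int:
    """Hypothetical rule: order a fixed batch whenever stock falls below the reorder point."""
    return batch if state.stock < reorder_point else 0

def pricing_decision(state: ShopState, target_weekly_sales: int = 100) -> float:
    """Hypothetical rule: nudge the price depending on how demand compares with the target."""
    if state.weekly_sales > target_weekly_sales:
        return round(state.list_price * 1.05, 2)   # demand high: raise the price by 5%
    if state.weekly_sales < 0.5 * target_weekly_sales:
        return round(state.list_price * 0.95, 2)   # demand weak: cut the price by 5%
    return state.list_price

state = ShopState(stock=30, list_price=20.00, weekly_sales=140)
print(reorder_decision(state))   # 200 – stock is below the reorder point
print(pricing_decision(state))   # 21.0 – demand is above target, so the price is raised
```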


There are good reasons to think of a fully automatic company devoid of people as an economic dystopia. It wouldn’t be compatible with the social dimension of a sustainable economy. And yet artificial intelligence is currently experiencing something of a Kitty Hawk moment. Humankind tried for centuries to learn to fly, but only succeeded with the Wright Brothers’ breakthrough in 1903. Two decades later there was a booming aviation industry. Something similar could happen with artificial intelligence, only with a substantially greater transformative effect. AI is a cross-sectional technology that will pervade all industries. It will alter decision-making for health care providers just as much as for retailers, energy suppliers, logistics companies, agriculture and all manufacturing companies. The automobile industry is doubly affected, as AI also transforms the product itself in fundamental ways.

Those who want artificial intelligence to change the world in general, and the business world in particular, for the better rather than the worse have to ask themselves a series of questions. What does algorithmic decision-making mean for the company from an economic, ecological and social perspective? A systematic view shows that data-learning systems open up enormous opportunities in each of the three sustainability dimensions.

Artificial intelligence today is already extremely good at recognising deviations from norms or from desired results. Credit card providers use this ability to unmask fraud attempts early and block payments automatically. In car manufacturing, AI-supported image recognition systems detect the smallest flaws in the paintwork and decide whether or not a vehicle needs to be repainted. In both cases, AI enhances product quality and streamlines processes. There are similar examples at nearly every step of a company’s value-creation chain. Data-learning systems increasingly support product development, production, human resources development and knowledge management. They help with administrative processes, logistics, marketing and sales, not to mention retention processes once a customer has decided in favour of a company.
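As a rough illustration of what “recognising deviations from norms” can mean in practice, the following Python sketch flags a card payment that deviates strongly from a customer’s payment history. The data and threshold are hypothetical, and the approach is deliberately simplified compared with production fraud detection.

```python
import numpy as np

def flag_outlier(amounts: np.ndarray, new_amount: float, threshold: float = 3.0) -> bool:
    """Flag a payment whose amount deviates strongly from the customer's history."""
    mean, std = amounts.mean(), amounts.std()
    if std == 0:                       # no variation in the history: fall back to a simple ratio check
        return new_amount > 2 * mean
    z = abs(new_amount - mean) / std   # how many standard deviations away the new payment is
    return z > threshold

history = np.array([42.0, 55.0, 38.0, 61.0, 47.0, 52.0])   # hypothetical past card payments
print(flag_outlier(history, 49.0))    # False – within the usual range
print(flag_outlier(history, 890.0))   # True  – flagged for review or an automatic block
```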

From an economic point of view, AI applications essentially always come down to one aim: improving competitive standing. Since AI systems generally don’t produce economic miracles overnight, innovative companies introduce them with longer-term development in mind. They are intended to organise a company more efficiently over the long run, raise quality and assist management in making better decisions. In short, the big economic opportunity presented by the use of artificial intelligence in companies lies in boosting competitiveness systematically. Conversely, this means that forgoing AI support for better decisions at every level would be the opposite of sustainable in economic terms.

In addition, companies putting artificial intelligence to wise use can achieve a great deal with respect to ecology – thereby also benefiting the planet. AI dispatchers can already help logistics companies cut their empty truck runs by around 30 per cent. Thanks to better forecasting, energy providers can anticipate their customers’ demand more accurately and manage energy production more efficiently, to the advantage of the environment. The same applies to production planning in manufacturing, where AI will shrink overcapacity and rejects. Viewed abstractly, AI helps recognise and reduce inefficiencies in value-creation processes. In the final analysis, this means the possibility of economising better with limited resources, thus bolstering the balance between economic and ecological sustainability. Artificial intelligence will also continue to gain significance in a field that until now has been the exclusive domain of humans: innovation. Humans pre-define the design goals, and the computer searches for and tests, at lightning speed, solutions that no one has ever thought of. When humans prescribe the objectives, the intelligent design machine optimises the drafts for green materials, energy savings, recyclability and more.
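The division of labour described here – humans prescribe the objective, the machine searches the design space – can be shown with a toy sketch: a random search over two invented design parameters that picks the cheapest draft still meeting a prescribed load requirement. Real generative-design tools are far more sophisticated; this only illustrates the principle, and every number and formula below is made up.

```python
import random

random.seed(0)

# Hypothetical surrogate models: a bracket's material use and load capacity as a
# function of two design parameters (wall thickness in mm, number of ribs).
def material_use(thickness: float, ribs: int) -> float:
    return thickness * 10 + ribs * 4             # arbitrary illustrative cost model

def load_capacity(thickness: float, ribs: int) -> float:
    return thickness * 30 + ribs * 25             # arbitrary illustrative strength model

REQUIRED_CAPACITY = 300.0                         # goal prescribed by the human designer

best = None
for _ in range(10_000):                           # the machine tries design variants at random
    t = random.uniform(1.0, 10.0)
    r = random.randint(0, 8)
    if load_capacity(t, r) >= REQUIRED_CAPACITY:  # keep only drafts that meet the goal
        cost = material_use(t, r)
        if best is None or cost < best[0]:
            best = (cost, t, r)

print(best)  # cheapest design found that still meets the required load capacity
```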

The probability that artificial intelligence will have more of a positive than a negative impact on economic viability and the environment is high. Yet fears remain, and they concern the social cost of a rapid spread. The optimistic view runs as follows: systems that learn from data relieve people of annoying routines and create space for the tasks that are really important and make work fulfilling: creativity, interaction, responsibility, innovation. AI becomes a tool that enables people to work better. The optimism is realistic, but it raises the following question: how does a new technology succeed in creating better jobs in the medium and long term than it destroys through automation in the short term? The obvious follow-up question is: who shoulders the social cost of a technological revolution in employment? Social partners will no more be able to escape the responsibility of providing solutions to the social problems that arise from technology than governments can. Yet none of this is a reason to view the culture of an AI-assisted future pessimistically, because the good news is: digital transformation remains a human design task, even when machines that learn from data turbocharge digitalisation.

It is up to humans to determine whether or not intelligent machines become their assistants in a digital social market economy in which creative and responsible companies continue to derive their identity from sustainably created value. Intelligent machines can help us do better work, make the planet greener and, at the end of the day, even generate more prosperity. Intelligent managers will use artificial intelligence to manage their companies sustainably. In the process, they will constantly question where AI systems actually lead to better decisions – and where the systems’ vendors merely claim they do.


It is up to humans to determine whether or not intelligent machines become their assistants in a digital social market economy in which creative and responsible companies continue to derive their identity from sustainably created value.


The overarching question in this context is: who programs the learning systems, and with which goals in mind? Developers and vendors of AI systems like to suggest that algorithm-based decisions are more objective than human ones. The magic word here is “evidence-based”; the argument behind it is that data doesn’t lie and that the algorithm – unlike humans – is free of prejudice and incorruptible. Yet algorithmic decision-making is never neutral and is – exactly like human decision-making – susceptible to mistakes.

Algorithms interpret data according to man-made interpretation models. These interpretation models, however, have their developers’ goals and values programmed into them in the most literal sense, even when the developers make an effort to be objective and neutral. In the case of data-learning systems – especially with so-called deep learning procedures in ever more powerful artificial neural networks – even the selection of the training data is a subjective or arbitrary decision. Systems trained with deep learning therefore tend to carry biases that result implicitly from the training data.
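A small synthetic experiment can make this concrete. In the sketch below, the ground truth depends only on a test score, but the training sample over-represents rejections for one group; a standard classifier trained on that sample picks up the selection bias. All data, names and numbers are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

# Hypothetical hiring data: one genuinely relevant feature (test score) and one
# attribute that should be irrelevant (group membership, 0 or 1).
group = rng.integers(0, 2, n)
score = rng.normal(60, 10, n)
hired = (score + rng.normal(0, 5, n) > 65).astype(int)   # ground truth depends on score only

# Biased training sample: successful candidates from group 1 are mostly missing
# from the records, e.g. because past decisions rarely admitted them.
keep = (group == 0) | (hired == 0) | (rng.random(n) < 0.3)
X_train, y_train = np.column_stack([score, group])[keep], hired[keep]

model = LogisticRegression().fit(X_train, y_train)

# The coefficient on 'group' (second entry) comes out clearly negative, although the
# attribute is irrelevant to the ground truth: the model has implicitly learned the
# selection bias baked into the training data.
print(model.coef_)
```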

Critics of algorithm-based decision-making have for several years now been demanding that companies disclose their algorithms. Such a transparency requirement, however, quickly comes up against legal, economic and even technical limits.

What an artificial neural network learns is the result of millions upon millions of connections, each of which has a tiny influence on the result. Decision-making is therefore so complicated that the machine cannot explain or show humans how it reached its conclusion. Furthermore, the algorithms keep changing on their own as they continue to learn. Transparency is hard to establish, because the system doesn’t evaluate clear, straightforward criteria that are comprehensible to humans. It recognises patterns at a level of complexity beyond the human brain’s capacity.

Two approaches to a solution are being discussed in this context. One says: algorithms that make decisions about people must be subject to the control of an independent body. This body should have insight into the machine-based decision-making processes and ensure that they operate on a solid statistical foundation and lead to fair results. The examiners, of course, must not then pass their knowledge on to the competition. The other says: artificial neural networks explain how (other) artificial neural networks arrive at their decisions. In other words, a software tool explains to humans – algorithm testers, for example – how an extremely complex software system operates. A combination of both approaches could help make machines actually do what, from the perspective of an ethically responsible developer, they should do: serve as an aid to better human decisions.
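One common form of the second approach is a surrogate model: a simple, human-readable model is trained to imitate the opaque system’s decisions, so that its rules approximate how the black box behaves. The sketch below, on synthetic data, is only meant to illustrate the idea, not any particular vendor’s tool.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a complex, opaque decision system.
X, y = make_classification(n_samples=3000, n_features=6, n_informative=3, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a small, readable tree trained to imitate the black box's decisions,
# not the original labels. Its rules approximate how the opaque model behaves.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("agreement with black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```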

The human answer to decisions made in a machine’s black box can only be a return to the point of departure: we must always question everything. For what purpose were the systems developed, with which data were they trained, and which interests do the people or organisations who use them pursue?


We must always question everything. For what purpose were the systems developed, with which data were they trained, and which interests do the people or organisations who use them pursue?


We must understand when machine assistance benefits us and in which contexts it hampers our thinking. The automation of decision-making offers great opportunities for individuals, for organisations and for the communities we call societies. Yet the better machines become at making decisions, the more deeply we humans have to think about which decisions we – as private users, as companies and as a society – want to delegate to artificial intelligence. The advances in artificial intelligence present us with a new intellectual challenge. Humans should not place their complete faith in machines. The solution is an old one, and it takes effort. We must think for ourselves. And decide for ourselves.

 

Thomas Ramge is the technology correspondent for brand eins and writes for The Economist. He also supports the German-American analytics company QuantCo as Chief Explaining Officer. Ramge has published twelve non-fiction books and a novel. His next book will soon be published by Reclam: “Man and Machine. How Artificial Intelligence and Robots are Changing our Lives.”

Christiaan Endeman is a 23-year-old graphic designer from the southern part of the Netherlands whose main focus is 3D art and motion graphics. Having recently graduated, he is investing every moment he gets into his business, TheManDesigns, which he started during his studies. With a creative vision and a passion for his work, he aims for new heights with every project and leaves a lasting impression with his visuals.