An Ideas-Based Online Magazine of the Global Network for Advanced Management

The Data Duels of Decision Making

In this interview, Francis de Véricourt, professor of management science at ESMT Berlin, tests the claims of AI superiority over human creativity. Can data-mining machines outperform their human masters? Not in decision making. At least, not for now.

Q. You teach students decision science in order to improve leadership decisions. How well has it been received?

A. Research on decision science has become more and more popular. We have learned that when we make decisions, we are subject to what are called decision traps or decision biases: we make mistakes without being aware of it. Our brains make inconsistent choices for us, sometimes choices that go against our values, sometimes choices based on assumptions that contradict each other. The idea, then, is to use mathematics to protect ourselves against these decision traps. Mathematical frameworks impose consistency on our choices, helping us approach decisions in coherent ways.

But that's not the only approach. You could use a team to constrain your decisions, to de-bias yourself. You have your beliefs but, in a team, someone with different beliefs can recalibrate yours or challenge your assumptions.

In the same way, you can use math. And you can use other kinds of processes that constrain you to question your assumptions and to be consistent. That’s how I teach it.
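To make the point about mathematical frameworks concrete, here is a minimal sketch (my illustration, not an example from the interview). It encodes the classic Tversky and Kahneman framing experiment as expected values; writing the choices down this way exposes an inconsistency that intuition alone tends to hide.

```python
# A minimal sketch of how a mathematical framework can expose an
# inconsistent choice. The scenario is the classic Tversky-Kahneman
# framing experiment; the code itself is a hypothetical illustration.

def expected_value(lottery):
    """Expected value of a lottery given as (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in lottery)

# Gain framing: "save 200 of 600 people for sure" vs.
# "a 1/3 chance to save all 600".
sure_gain = [(1.0, 200)]
risky_gain = [(1 / 3, 600), (2 / 3, 0)]

# Loss framing of the very same situation: "400 of 600 die for sure" vs.
# "a 2/3 chance that all 600 die".
sure_loss = [(1.0, -400)]
risky_loss = [(2 / 3, -600), (1 / 3, 0)]

print(expected_value(sure_gain), expected_value(risky_gain))    # 200.0 200.0
print(expected_value(sure_loss), expected_value(risky_loss))    # -400.0 -400.0

# The formalism makes the equivalence explicit: "saving 200" and
# "400 dying" describe the same outcome, so a consistent decision maker
# should not flip preference between the two framings. Most people do.
```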

Q. In a recent interview with Brand Eins, you spoke about the impact of AI and machine learning on decisions. You shared an example from a doctor's office: the diagnosis of a mole on the skin. How far along is this? Is medical science already moving ahead with AI-driven decision making?

A. In the medical sphere, there have been successes and there have been failures. What seems to be working in machine learning is the diagnostic side, not the treatment side. IBM, for example, has tried to develop big computer systems to help treat cancer. There is a lot of data, but it is not working as well as it does in diagnostics.

There are deeper questions to raise about healthcare and machine learning. Are you giving up your decision-making power to a machine that makes a diagnosis or, at some later point, a recommendation for your treatment? Choosing a machine over a human raises other big questions. Is it ethical? Is it better? When should you do it? Does it mean that future doctors will only be there to make you feel good, to be warm and caring, while the treatments are decided by your iPhone or centralized somewhere else?

This question applies to all of us, not just doctors. For me, will it mean students no longer need professors? Will we only provide coaching sessions? Maybe machine learning will mean they learn in front of their computers. And what about falling in love? Do you need algorithms to match you, or do you still need human social networks?

The main question that these healthcare issues illustrate is: what is going to be our role as humans? There is no going back. What is certain is that machine learning will take over more and more of the tasks we do now, in the same way that computers took over much of what our parents and grandparents did before. The difference between machine learning and computers, however, lies in decisions. Computers helped us with writing, calculations, presentations, and many other tasks. What is new with the advance of machine learning is that algorithms and machines are slowly starting to take over some of our most important decisions.

Q. Is this because the machine sets aside some of the biases and focuses only on the optimal answer, in a way that we humans aren't capable of?

A. A machine is not a living entity. What drives the machine's performance is, in the end, the data, not the algorithm. Of course, the algorithm helps. But the data on which it is trained matters more. We have strong biases, and the machine does not make them disappear.

The social biases we have are often captured by the data on which machines are trained. The more data you have, and the better its quality, the better the results will be. But the machine is not going to tell you "this is the best thing to do"; rather, "this is the conclusion from the massive amount of data I have." That is as close to an answer as it gets. Why that conclusion? Nobody knows. The machine does not know. Even the programmer does not know.
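How biased training data resurfaces in a model's predictions can be shown in a few lines. The sketch below is a hypothetical illustration, not something from the interview: it fits the simplest possible "model" (conditional frequencies) to synthetic hiring records that encode a bias, and the model reproduces that bias exactly.

```python
# A minimal sketch of how a social bias in historical data resurfaces
# in a model trained on it. The "hiring" scenario and all numbers are
# hypothetical.
import random
from collections import defaultdict

random.seed(0)

def past_decision():
    """One biased historical hiring record: (group, qualified, hired)."""
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    # The bias: qualified candidates from group B were often rejected.
    hired = qualified and (group == "A" or random.random() < 0.4)
    return group, qualified, hired

records = [past_decision() for _ in range(10_000)]

# "Train" the simplest possible model: estimate P(hired | group, qualified).
counts = defaultdict(lambda: [0, 0])  # (hires, total) per (group, qualified)
for group, qualified, hired in records:
    counts[(group, qualified)][0] += hired
    counts[(group, qualified)][1] += 1

for key in sorted(counts):
    hires, total = counts[key]
    print(key, f"-> predicted hire probability {hires / total:.2f}")
# Qualified group-A candidates score ~1.00, qualified group-B candidates
# ~0.40: the model faithfully learns the bias, because the bias IS the data.
```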

AlphaGo is a great example of this reliance on data alone. Three years ago, this Google DeepMind machine learning program beat amazing players: Go grandmasters, who rely on intuition to play. There is no way to become a Go grandmaster without deep intuition and creativity. The game is so complex that you cannot program all of the possible moves. Yet, using only data and game simulations, AlphaGo beat the best players in the world. The thing is, nobody understands the machine's strategy. There is no explanation of an underlying strategy; there is none, just the data.

Humans still have an advantage, though. Our computing capacity and our ability to handle large amounts of data are no match for a computer's. But we have the ability to use representations, to develop models of problems. We managed to put a man on the moon on almost the first attempt, and we found what we expected to find; no big surprises. Yet we had no firsthand data, since nobody had gone there before. We could do that because of our ability to build representational models of how the universe works. A machine would have had to learn by trial and error, and would likely have killed billions of people in the attempt.

Q. There’s so much talk these days about data privacy. Concerns about data quality—“garbage in, garbage out”—also undermine public trust in machine learning and artificial intelligence. How do we address this resistance?

A. If there’s resistance, it’s also for good reasons. Machine learning is a tool. We use “artificial intelligence” as a metaphor. It makes it seem more alive than it is. But in the end, what matters is what we do with it.

There are applications that I find extremely scary; China's use of facial recognition, for example. You can also cheat the machine: you can use another machine to create a fake image of yourself, to make people believe you are someone else. I think we haven't even scratched the surface of what is possible yet.

In terms of business models and the economic side of things, data has become gold. As a researcher 15 years ago, you could approach a company, try your best, and have a reasonable chance that they would give you some data to work with. Companies worried a little about what you might find, or about data leaking. But in the end it was "it's okay, it's just data."

But now it's much harder to get the very same data. There is an awareness that data used for research can be used for other purposes. You can use it to make money, or to learn something that you didn't know before. The rise of machine learning has propelled the notion that any data you have, even if you do not yet know what you are going to do with it, should be stored away in a treasure chest. How do I get the data? How do I protect it? The more data you have, the wealthier you are. Don't give it away because, who knows, there may be value there that you can exploit or sell later.

When we consumers use Google, we give away a lot of our data. If I gave you the dollar value of this data, you might realize that Google's services are anything but cheap. Google Maps is in fact extremely expensive, because we are giving up a lot of information about ourselves.

Q. If you move people away from being machine-reliant, what's left? In the AlphaGo situation, you mentioned the role of intuition. Can you develop someone's trust in their intuition?

A. Yes, thank God, you can! [Laughs]

The key to developing your intuition is to get unambiguous and immediate feedback on your decisions. That is why it is, in fact, very hard for managers to develop good intuition, despite what some of them believe.

If you're an emergency room doctor who sees many, many patients, your decisions have immediate consequences. You see right away whether you were right or wrong. The signal is sometimes ambiguous, but often, if you say, "We need to do XYZ," and it fails, you have immediate feedback.

But let's say you are a doctor working elsewhere in the same hospital, outside the emergency room. You believe that one of your patients has pneumonia, so you order an X-ray. Then you go home because your day is done, and someone else takes over the care of that patient. You may never even learn the result of the X-ray you ordered. Or, if you do get that feedback some days later, you will have forgotten what led you to the pneumonia conclusion at the time.

Q. Are there processes that companies can put in place to create more opportunities for this kind of feedback?

A. It is less a definitive process than an approach: we need to be extremely careful not to simply reward outcomes.

We tend to evaluate our decisions based on the success or failure of the results. CEOs know that they need to show success, whatever the reasons. The results of decisions matter more than how the decisions were made. So, if you're successful in whatever you do, you get your bonus and, with it, the implicit message that you will be rewarded regardless of why. Yet you may have been successful for reasons that have nothing to do with your decisions; it could have been luck, for example. The reward creates the wrong incentive and prevents you from learning from mistakes in your decision-making process.

When we do look at decisions, it is when something has gone wrong. In a crash, you really want to know what happened and, along with it, who is to blame (usually the wrong person). Bad outcomes sometimes force you to dig deeper. But with good outcomes, we do not care much. We celebrate and move on. Yet we may simply have been lucky this time, and failing to recognize that possibility makes us unduly overconfident about the future.
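A small simulation makes this point about luck concrete. The sketch below is a hypothetical illustration with made-up numbers, not something from the interview: it rewards managers purely on outcomes and checks how many bonuses end up going to a poor decision process.

```python
# A minimal simulation of why rewarding outcomes alone mostly rewards
# luck. All numbers are hypothetical.
import random

random.seed(1)

# 1,000 managers: half follow a good decision process (55% chance of a
# successful outcome), half a poor one (45%). Each makes one big call.
managers = [("good process", 0.55)] * 500 + [("poor process", 0.45)] * 500
outcomes = [(process, random.random() < p) for process, p in managers]

# Outcome-based reward policy: a bonus for every success, no questions asked.
rewarded = [process for process, success in outcomes if success]
poor_share = rewarded.count("poor process") / len(rewarded)
print(f"{poor_share:.0%} of bonuses went to managers with a poor process")
# Roughly 45% of the bonuses reward a poor process: a single outcome is
# far too noisy a signal to tell decision quality apart from luck.
```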

This interview was originally published on ESMT.Berlin.
