This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

Last month in this newsletter, I interviewed Ahmer Inam, the chief A.I. officer at technology services firm Pactera Edge, who offered some advice for how companies can build machine learning systems that can cope with the changes the pandemic has caused to normal business and consumption patterns.

Inam argued that the coronavirus pandemic is pushing many businesses to accelerate the adoption of more sophisticated artificial intelligence.

Abhishek Gupta, a machine learning engineer at Microsoft and founder of the Montreal AI Ethics Institute, got in touch over Twitter to say that I should have highlighted some important safety issues to bear in mind when considering Inam’s suggestions.

Last week, I caught up with Gupta by video call and asked him to elaborate on his very valid concerns.

One of Inam’s suggestions was that A.I. systems always be designed with a “human in the loop” who is able to intervene when necessary.

Gupta says that in principle, this sounds good, but in practice, there’s too often a tendency towards what he calls “the token human.”

This is especially dangerous because it provides the illusion of safety. Too often, the human in the loop is a check-the-box exercise: a person is given nominal oversight of the algorithm but has no real understanding of how the A.I. works, whether the data it is analyzing looks anything like the data used to train the system, or whether its output is valid.

Even in systems where the human is more empowered, people tend to become complacent if the A.I. performs well in 99% of cases. They stop scrutinizing the systems they are supposed to be supervising. And when things do go wrong, these humans-in-the-loop can be left confused, struggling to regain control: a phenomenon known as “automation surprise.”

This is arguably part of what went wrong when an Uber self-driving car struck and killed pedestrian Elaine Herzberg in 2018; the car’s safety driver was looking at her phone at the moment of the collision. It was also a factor in the two fatal crashes of Boeing 737 Max airliners, in which the pilots struggled to figure out what was happening and how to disengage MCAS, the automated flight-control system that was repeatedly pushing the planes’ noses down.

Gupta thinks there’s a fundamental problem with the way most A.I. engineers work: They spend a lot of time worrying about how to train their algorithms and little time thinking about how to train the humans who will use them.

Most machine learning systems are probabilistic—there is a degree of uncertainty to every prediction they make. But a lot of A.I. software has user interfaces that mask this uncertainty.  

It doesn’t help, Gupta says, that most humans aren’t very good at probabilities. “It is hard for most people to distinguish between 65% confidence and 75% confidence,” he says.

More A.I. software, he says, should be designed to show a user its confidence in its own predictions. Better yet, if that confidence drops below a certain pre-defined threshold, the software should alert the user that it simply can’t make a prediction. Users should also be told exactly what data was used to train the software and what its strengths and weaknesses are.
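
To make that concrete, here is a minimal sketch in Python of what such an interface could look like. The labels, probabilities, and the 80% threshold are my own illustrative choices, not taken from any particular product: the software reports the confidence behind each prediction and abstains, deferring to a human, whenever confidence falls below the threshold.

```python
from typing import Dict, Optional, Tuple

# Illustrative threshold; in practice it would be chosen per use case,
# ideally together with the domain experts who rely on the predictions.
CONFIDENCE_THRESHOLD = 0.80

def predict_with_abstention(
    class_probabilities: Dict[str, float]
) -> Tuple[Optional[str], float]:
    """Return (label, confidence), or (None, confidence) if the model abstains."""
    label, confidence = max(class_probabilities.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_THRESHOLD:
        # Surface the uncertainty instead of masking it: defer to a human.
        return None, confidence
    return label, confidence

# A borderline prediction gets routed to a human reviewer rather than acted on.
label, conf = predict_with_abstention({"approve": 0.62, "deny": 0.38})
if label is None:
    print(f"Model abstains at {conf:.0%} confidence -- send to human review.")
else:
    print(f"Prediction: {label} ({conf:.0%} confidence)")
```

The exact threshold matters less than the principle: the uncertainty is shown to the user rather than hidden behind a single, confident-looking answer.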

Pactera Edge’s Inam said that the pandemic was also leading more companies to experiment with reinforcement learning, the kind of A.I. that learns from experience (usually in a simulator) rather than from historical data. Gupta said that while reinforcement learning can indeed be very powerful, it can also be particularly dangerous.
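
For readers who haven’t seen reinforcement learning up close, here is a minimal, purely illustrative sketch: tabular Q-learning on a toy five-state corridor. Nothing in it comes from Pactera Edge or any real deployment; the point is simply that the agent improves by interacting with a simulated environment rather than by fitting historical data.

```python
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = (-1, +1)    # step left or step right along the corridor
EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9

def simulate(state, action):
    """One tick of the toy simulator: returns (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

# Q-table: the agent's learned estimate of how good each (state, action) pair is.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, occasionally explore;
        # ties are broken at random so the untrained agent doesn't get stuck.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: (q[(state, a)], random.random()))
        next_state, reward, done = simulate(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# The learned policy should step right (toward the goal) from every other state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```

Swap the toy corridor for a warehouse, a supply chain, or a car, and the appeal is obvious: the system can rehearse millions of scenarios without waiting for history to supply them.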

The biggest challenge with reinforcement learning is specifying the A.I.’s goal in such a way that it will learn to do what you want it to do—without doing something dangerous or harmful in the process.

A.I. software is remarkably adept at “specification gaming”: finding shortcuts that let it achieve its stated objective, but not in the way or spirit its creators intended. Programmers have to be extremely careful about how they state the algorithm’s objective and how they design the positive reinforcement it receives during training.
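
Here is a made-up example, again in Python, of how a proxy reward can be gamed. The scenario and the numbers are my own illustration, not drawn from any real system: a racing agent rewarded only for collecting checkpoints that respawn will score more by circling between two checkpoints forever than by ever finishing the course.

```python
from dataclasses import dataclass

@dataclass
class StepInfo:
    """What the environment reports about a single step (illustrative only)."""
    picked_up_checkpoint: bool
    progress_delta: float   # metres of forward progress along the course
    finished: bool

def naive_reward(info: StepInfo) -> float:
    # Proxy objective: +1 for any checkpoint pickup. If checkpoints respawn,
    # looping between two of them forever out-scores actually finishing the
    # course -- the stated objective is satisfied, the intent is not.
    return 1.0 if info.picked_up_checkpoint else 0.0

def safer_reward(info: StepInfo) -> float:
    # Reward genuine progress, charge a small per-step time penalty, and add
    # a terminal bonus, so circling in place is no longer the best strategy.
    reward = info.progress_delta - 0.01
    if info.finished:
        reward += 10.0
    return reward

# A step spent circling back to a respawned checkpoint: the naive reward
# still pays out, the safer one does not.
loop_step = StepInfo(picked_up_checkpoint=True, progress_delta=0.0, finished=False)
print(naive_reward(loop_step), safer_reward(loop_step))  # 1.0 vs -0.01
```

Even the “safer” reward here could be gamed in other ways, which is exactly why researchers keep cataloguing these failures.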

DeepMind, one of the world’s foremost practitioners of reinforcement learning, recently published an excellent compilation of about 60 examples of A.I. systems running amok because of badly specified objectives and rules. (They come from lots of different researchers—not just DeepMind—and span about 30 years of experiments.)

Even an A.I. that performs fantastically in a simulator may not be safe when it is transferred to the real world, since simulations are never perfect. Even with thousands of human years of training in a simulator, it’s impossible to know if the A.I. will actually be able to cope with everything the real world might throw at it.

“Reinforcement learning is opening up new avenues for us, there’s no doubt,” Gupta says. “And in Go and chess, it’s fine. But we don’t have disposable humans to use to test autonomous vehicles.”

Good points all. Now here’s the rest of this week’s news in A.I.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn