
A.I. engineers should spend time training not just algorithms, but also the humans who use them

May 5, 2020


This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

Last month in this newsletter, I interviewed Ahmer Inam, the chief A.I. officer at technology services firm Pactera Edge, who offered some advice for how companies can build machine learning systems that can cope with the changes the pandemic has caused to normal business and consumption patterns.

Inam argued that the coronavirus pandemic is pushing many businesses to accelerate the adoption of more sophisticated artificial intelligence.

Abhishek Gupta, a machine learning engineer at Microsoft and founder of the Montreal AI Ethics Institute, got in touch over Twitter to say that I should have highlighted some important safety issues to bear in mind when considering Inam’s suggestions.

Last week, I caught up with Gupta by video call and asked him to elaborate on his very valid concerns.

One of the suggestions Inam made was for the A.I. systems to always be designed with a “human in the loop,” who is able to intervene when necessary.

Gupta says that in principle, this sounds good, but in practice, there’s too often a tendency towards what he calls “the token human.”

At worst, this is dangerous because it provides only the illusion of safety. Oversight can become a check-the-box exercise in which a person is given nominal responsibility for the algorithm but has no real understanding of how the A.I. works, whether the data being analyzed looks anything like the data used to train the system, or whether its output is valid.

Even in systems where the human is more empowered, if an A.I. system performs well in 99% of cases, people tend to become complacent. They stop scrutinizing the A.I. systems they are supposed to be supervising. And when things do go wrong, these humans-in-the-loop can become confused and struggle to regain control: a phenomenon known as "automation surprise."

This is arguably part of what went wrong when an Uber self-driving car struck and killed pedestrian Elaine Herzberg in 2018; the car's safety driver was looking at her phone at the moment of the collision. It was also a factor in the two fatal crashes of Boeing 737 Max airliners, in which the pilots struggled to figure out what was happening and how to disengage the automated flight-control system.

Gupta thinks there’s a fundamental problem with the way most A.I. engineers work: They spend a lot of time worrying about how to train their algorithms and little time thinking about how to train the humans who will use them.

Most machine learning systems are probabilistic—there is a degree of uncertainty to every prediction they make. But a lot of A.I. software has user interfaces that mask this uncertainty.

It doesn’t help, Gupta says, that most humans aren’t very good at probabilities. “It is hard for most people to distinguish between 65% confidence and 75% confidence,” he says.
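
The point about masked uncertainty is easy to see in code. The snippet below is a minimal sketch, not from the article, using scikit-learn on a toy dataset purely for illustration: a label-only interface hides the model's uncertainty, while exposing the predicted probabilities gives the human in the loop something concrete to scrutinize.

```python
# Minimal illustrative sketch (not from the article): show a prediction's
# confidence alongside its label instead of hiding it behind a bare answer.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset and model, stand-ins for a real business system.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A label-only interface: the uncertainty is invisible to the user.
labels = model.predict(X_test[:3])

# Surfacing the class probabilities lets the human overseer see how
# confident the model actually is in each prediction.
probabilities = model.predict_proba(X_test[:3])

for label, probs in zip(labels, probabilities):
    print(f"predicted class {label}, confidence {probs.max():.0%}")
```

Even this small change shifts the interface from "the model says X" to "the model says X with this much confidence," which is the kind of information a human-in-the-loop needs in order to know when to intervene.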

Read more via subscription to Fortune.