In search of a perfect solution, risk cannot be engineered away

Not that she needs any sympathy, but I feel a little sorry for Minnesota-based science writer Maggie Koerth-Baker, who made a reasonable point about engineering self-driving cars and ran into a common problem on the Internet today: people who won't read a full article before reacting, and people who don't want to hear that a favored solution to a problem is anything less than perfect.

In her FiveThirtyEight column, she considered yesterday's revelation that a man driving a Tesla with Autopilot engaged was killed when the system apparently failed. The crash occurred in May; Tesla says Autopilot had logged 130 million miles before the fatality, and the company noted that in the United States there is a traffic fatality roughly every 94 million vehicle miles. That's a pretty good record, though it doesn't make the tragedy any easier to accept.

Self-driving cars are safer than cars driven by humans, she argues, but they can never be made risk-free. It's like casual sex, she says: accidents happen.

The safer you make a technological system — the more obvious, known risks you eliminate — the more likely it is that you’ve accidentally inserted unlikely, unpredictable failure points that could, someday, come back to bite you in the ass. All the more so because the people operating the system don’t expect the failure.

Humans make mistakes, too, the fans of driverless cars pointed out… even though Koerth-Baker had already made that very point.

Some risks are unpreventable, and that's a fact we simply have to accept.

Normal accidents are a part of our relationship with technology. They are going to be a part of our relationship with driverless cars. That doesn't mean driverless cars are bad. Again, the statistics so far show they're safer than human drivers. But complex systems will never be perfectly safe. You can't engineer away the risk. And that fact needs to be part of the conversation.

So should reading an entire article before commenting.