In the search for a perfect solution, risk cannot be engineered away

Not that she needs any sympathy, but I feel a little sorry for Minnesota-based science writer Maggie Koerth-Baker, who made a reasonable point about engineering self-driving cars and ran into a common problem on the Internet today: people who don’t want to read a full article, and/or people who don’t want to hear anything suggesting that a favored solution to a problem is anything but perfect.

In her FiveThirtyEight column, she considered yesterday’s revelation that a man driving a Tesla on autopilot was killed when the system apparently failed. The autopilot had been tested over 130 million miles before the fatality, which occurred in May. That’s a pretty good record, though it doesn’t make the tragedy any easier, of course.
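For a rough sense of why that record reads as "pretty good," here is a minimal back-of-the-envelope sketch. It assumes a U.S. human-driver baseline of roughly one traffic death per 100 million vehicle miles; that baseline is an approximation supplied here for illustration, not a figure from the post or the column.

```python
# Back-of-the-envelope comparison of fatality rates per mile driven.
# The 130 million-mile figure is the one cited in the post; the human-driver
# baseline below is an assumed rough U.S. average, not from the source.
autopilot_miles_per_fatality = 130_000_000
human_miles_per_fatality = 100_000_000  # assumed, approximate

autopilot_rate = 1 / autopilot_miles_per_fatality  # fatalities per mile
human_rate = 1 / human_miles_per_fatality

print(f"Autopilot:      {autopilot_rate * 1e8:.2f} deaths per 100M miles")
print(f"Human baseline: {human_rate * 1e8:.2f} deaths per 100M miles")

# Note: a single fatality is far too small a sample to draw firm conclusions
# from; this only illustrates the order-of-magnitude comparison being made.
```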

Self-driving cars are safer than those driven by humans. But they can never be made risk-free. It’s like casual sex, she says. Accidents happen.

The safer you make a technological system — the more obvious, known risks you eliminate — the more likely it is that you’ve accidentally inserted unlikely, unpredictable failure points that could, someday, come back to bite you in the ass. All the more so because the people operating the system don’t expect the failure.

Humans make mistakes, too, the fans of driverless cars pointed out… even though Koerth-Baker already had.

Some risks are unpreventable, and that’s a fact we simply have to accept.

Normal accidents are a part of our relationship with technology. They are going to be a part of our relationship with driverless cars. That doesn’t mean driverless cars are bad. Again, so far statistics show they’re safer than humans. But complex systems will never be safe. You can’t engineer away the risk. And that fact needs to be part of the conversation.

So should reading an entire article before commenting.

  • PaulJ

    I must not have gotten far enough in the article; why does engineering away known risks increase the unknown risks?

  • joetron2030

    You’re supposed to read a linked article?!? But that’s not the Internet way.

  • jon

    So if we remove the autopilot from the situation, who would be blamed for the accident? The truck turning across a divided highway that didn’t leave enough room, or the car that didn’t brake?

    Had the car braked, we wouldn’t have heard of this at all, but now that we are looking, why is the car even suspect in this instance?

    I think I get her point that we can never completely eliminate risk, but that isn’t unique to technology; you can hide in a cave all your life, and a cave-in might still kill you. Risk isn’t a Boolean true/false. It’s a matter of mitigating risk to reasonable levels, and it sounds like so far Tesla has. Heck, if normal is a reasonable level, they’re doing better than that.

    Blaming the autonomous car for failing to stop when another vehicle did something that caused the accident sounds a bit like victim blaming to me.

    • In this case the autopilot didn’t do what a human would have done. But she didn’t place blame.

      • jon

        With a human sitting at the wheel, required to hold onto the wheel the whole time the autopilot is doing its thing, the human didn’t do what a human would do either…

        Though I suspect many collisions caused by humans happen because a human didn’t do what we’d expect a human to do…

  • lindblomeagles

    Cars are risky in general because passengers travel at speeds man really wasn’t designed to go, in metal, fuel-filled compartments that barely withstand crashes with anything. Crashes with trees, concrete medians, poles, etc., usually do little to prevent death. We play dumb so that we can get to all these places without fear or worry.

    • Rob

      Indeed, yet another way in which life is cheap.

  • Jeff

    If you don’t like the autopilot in these cars, I recommend you stay off aircraft with autopilot systems… which would be all of them.

  • Gary F

    Why would we ever think that life could be risk free?

  • Jeff C.

    Tom Scott did a podcast recently about an interesting safety feature at a nuclear fission lab that allows humans to pick up subtle problems that computers might overlook. That discussion starts at 1:34. They realized that humans have developed a way to sense that something is wrong. Sometimes humans can detect something subtle that computers can’t (or that would require millions of lines of code to detect).

    https://www.youtube.com/watch?v=IrtGp8hv-0Y