Apartheid Emerald Mine Space Karen's autopilot conveniently shuts off one second before impact.

"No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error. It can only be attributable to human error."

Musk's regulatory woes mount as U.S. moves closer to recalling Tesla's self-driving software:

On Thursday, NHTSA said it had found that in 16 separate instances Autopilot "aborted vehicle control less than one second prior to the first impact," suggesting the driver was not prepared to assume full control of the vehicle.

CEO Elon Musk has often claimed that accidents cannot be the fault of the company, because the data it extracted invariably showed Autopilot was not active at the moment of the collision.

Previously, previously, previously, previously.


14 Responses:

  1. Thomas Lord says:

    libc error code EOOPSIMOUTAHERE

  2. That's worse than I thought. I mean, it's pretty obvious that the thing can be dangerous, because if it screws up there's no way a human being can react in time to fix the problem.

    • phuzz says:

      This has always been my problem with 'autopilot'. The whole idea is that you can relax and not have to concentrate on driving the car, but you also have to be ready to take over from the computer when it's in a situation it can't deal with. So you can't relax if you want to stay safe.
      And as the autopilot gets better, the situations in which it gives up and says "human, take the wheel" are going to be more and more dangerous edge cases, where the human is less and less likely to be able to do much.

      That said, there's probably an argument to be made that the people who turn on autopilot and stop paying attention would still be getting into crashes, with or without computer involvement.

  3. Zygo says:

    So they've perfected software that detects imminent liability, and applies real-time corporate risk mitigation.  That's some hard-core next-level AI achievement there.  If only the "auto-autopilot-off" feature could stop the car...

    • thielges says:

      “if (timeToImpact < 1.0) disengageAutopilot()” might be the most space-efficient CYA liability-dodging code ever written.
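
      Fleshed out, the whole dodge fits in a handful of lines. (Every name here is invented for the joke; nobody outside Tesla has seen the real code.)

          /* Hypothetical CYA module: hand the wheel back just before the
           * crash so the logs can say the driver was in control at impact. */
          #include <stdbool.h>
          #include <stdio.h>

          #define BLAME_WINDOW_SECONDS 1.0

          static bool autopilot_engaged = true;

          /* Called every control cycle with the current time-to-impact estimate. */
          void risk_mitigation_tick(double time_to_impact_s) {
              if (autopilot_engaged && time_to_impact_s < BLAME_WINDOW_SECONDS)
                  autopilot_engaged = false;   /* EOOPSIMOUTAHERE */
          }

          int main(void) {
              risk_mitigation_tick(0.9);   /* 0.9 s to impact: bail out */
              printf("autopilot engaged at impact: %s\n",
                     autopilot_engaged ? "yes" : "no");
              return 0;
          }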

  4. Koleslaw says:

    It's not my fault the murder machines murdered because right before they murdered they said "no one is murdering anything right now"

    • Phil! Gold says:

      Isaac Asimov spent a lot of time exploring ideas about AI.  One story that's managed to stick with me is "Little Lost Robot".  People who've read the story may be able to extrapolate the rest of this comment.

      All robots in the story (and many other Asimov stories) have a hardcoded rule, "No robot may harm a human or, through inaction, allow a human to come to harm."  But for certain reasons, a handful of robots in the story have a truncated version of the rule that just says, "No robot may harm a human."  A character points out that one of the modified robots could drop a heavy object above a human, knowing that its robot strength and reflexes were sufficient to grab and hold the object before it fell, so it wasn't going to harm the human.  But the instant it let go of the object, the robot would become a mere bystander, with no obligation to prevent the human from being crushed.

      I don't know why I was reminded of this story.

      • jon says:

        That's like pulling the trigger and saying: sorry, but was it me who penetrated that human's flesh, or was it the bullet? Not my fault the human didn't dodge away.

  5. Olivier Galibert says:

    A couple of interesting things I've heard about that:

    • driving assistance systems are required to switch off around an accident (just before, just after, doesn't matter).  This is so that the car doesn't keep driving with an unconscious (or worse) driver.
    • Tesla's public documents consider Autopilot involved in an accident if it was active 5 seconds or less before it happened

    I doubt five seconds is enough for an unsuspecting human to take over, but that's another story.

    • K says:

      According to their published statistics (https://www.tesla.com/VehicleSafetyReport), the methodology they use is to count a collision in the Autopilot column if the system was activated in the five seconds leading up to the collision. It's in the small print at the bottom. So you haven't just heard it, they've put it in writing.

      And yes, totally agree, if the human isn't already heads up and paying attention to the scenario, five seconds is probably not enough time to take over.
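
      Mechanically, that small-print rule is just a window check, something like this (variable and function names are mine, not Tesla's):

          /* A crash counts in the Autopilot column if the system was active
           * at any point in the five seconds before impact, per the report's
           * small print.  All names here are invented for illustration. */
          #include <stdbool.h>
          #include <stdio.h>

          #define ATTRIBUTION_WINDOW_S 5.0

          bool counts_against_autopilot(double last_active_seconds_before_impact) {
              return last_active_seconds_before_impact <= ATTRIBUTION_WINDOW_S;
          }

          int main(void) {
              /* Autopilot that bailed out 1.0 s before impact still counts. */
              printf("1-second bail-out counts against Autopilot: %s\n",
                     counts_against_autopilot(1.0) ? "yes" : "no");
              return 0;
          }

      Which means a disengagement one second before impact still lands well inside the window, so those 16 crashes count against Autopilot even under Tesla's own accounting.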

  6. spoon00 says:

    This smells like the “I’m not touching you” defense.

  7. joe luser says:

    as laurie anderson pointed out several decades ago: "it's not the bullet that kills, it's the hole". also, news like this gives me some hope for the future, because it indicates that there are at least a few not-moron people working at the nhtsa and they are not totally squelched by political patronage
