Today in Uber Autonomous Murderbot News

The Uber executives who put this software on the public roadways need to be in jail. They disabled safety features because they made testing harder. They disabled safety features because they made the ride rougher.

NTSB: Uber's sensors worked; its software utterly failed in fatal crash:

The National Transportation Safety Board has released its preliminary report on the fatal March crash of an Uber self-driving car in Tempe, Arizona. It paints a damning picture of Uber's self-driving technology.

The report confirms that the sensors on the vehicle worked as expected, spotting pedestrian Elaine Herzberg about six seconds prior to impact, which, at the car's speed of 43 mph, should have given it enough time to stop.
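
A quick sanity check on that claim, using the report's own numbers. The braking rate below is an assumption (roughly what hard braking on dry pavement achieves), not a figure from the report:

    # Back-of-the-envelope stopping math for the NTSB's numbers.
    # 43 mph and the ~6 s detection window are from the report; the
    # 7 m/s^2 deceleration is an assumed dry-pavement braking rate.
    MPH_TO_MS = 0.44704

    speed = 43 * MPH_TO_MS              # ~19.2 m/s
    available = speed * 6.0             # ~115 m between detection and impact
    needed = speed**2 / (2 * 7.0)       # ~26 m to brake to a full stop
    time_to_stop = speed / 7.0          # ~2.7 s

    print(f"available: {available:.0f} m, needed: {needed:.0f} m, "
          f"stop time: {time_to_stop:.1f} s")
    # available: 115 m, needed: 26 m, stop time: 2.7 s

Even the 1.3-second mark discussed below (about 25 m out) would have let the car shed nearly all of its speed before impact.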

The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.
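
One plausible way for that flip-flopping to matter, sketched below. This is illustrative guesswork, not Uber's code: if the predicted path comes from a per-class motion model, every reclassification can throw away whatever the tracker thought it knew about the object.

    # Hypothetical sketch: a tracker whose path prediction depends on its
    # current class label. Each reclassification swaps the motion model,
    # so the "expected future travel path" keeps changing. All names and
    # behavior here are assumptions for illustration only.
    class TrackedObject:
        def __init__(self):
            self.label = "unknown"
            self.history = []            # recent observed positions

        def reclassify(self, new_label):
            if new_label != self.label:
                self.label = new_label
                self.history.clear()     # assumed: history is tied to the
                                         # current class hypothesis

        def predicted_path(self):
            if self.label == "vehicle":
                return "stays in its lane"
            if self.label == "bicycle":
                return "follows the lane or a crosswalk"
            return "no prediction"       # unknown objects get no trajectory

    track = TrackedObject()
    for label in ["unknown", "vehicle", "bicycle"]:
        track.reclassify(label)
        print(label, "->", track.predicted_path())
    # unknown -> no prediction
    # vehicle -> stays in its lane
    # bicycle -> follows the lane or a crosswalk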

Things got worse from there.

At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
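
Condensed to pseudocode, the behavior the report describes looks roughly like this. It is a sketch of the reported logic, not Uber's source; the function and flag names are invented:

    # Sketch of the decision logic as described by the NTSB report:
    # emergency braking was disabled while under computer control, and
    # the system did not alert the operator. Names are invented.
    def on_emergency_braking_needed(under_computer_control):
        if under_computer_control:
            brake = False           # disabled "to reduce the potential
                                    # for erratic vehicle behavior"
            alert_operator = False  # and no cue for the human to intervene
        else:
            brake = True
            alert_operator = True
        return brake, alert_operator

    # 1.3 seconds before impact, in autonomous mode:
    print(on_emergency_braking_needed(True))    # (False, False)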

Deadly Accident Likely Caused By Software Set to Ignore Objects On Road:

The car's sensors detected the pedestrian, who was crossing the street with a bicycle, but Uber's software decided it didn't need to react right away. That's a result of how the software was tuned. Like other autonomous vehicle systems, Uber's software has the ability to ignore "false positives," or objects in its path that wouldn't actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company's system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn't react fast enough, one of these people said.
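
The trade-off described there is the classic one for any detection pipeline: a filter strict enough to ignore plastic bags is also slower to commit to real obstacles. A minimal sketch, assuming a persistence-style filter (Uber's actual mechanism isn't public):

    # Illustrative only: require a detection to persist for N consecutive
    # frames before the planner may react. Threshold values are assumed,
    # not Uber's. A flickering classification (unknown -> vehicle ->
    # bicycle, as in the NTSB report) keeps resetting the streak.
    def first_reaction_frame(detections, required_consecutive):
        streak = 0
        for frame, detected in enumerate(detections):
            streak = streak + 1 if detected else 0
            if streak >= required_consecutive:
                return frame     # earliest frame the system may react
        return None              # never reacts at all

    flickering = [True, True, False, True, False, True, True, True, True]

    print(first_reaction_frame(flickering, required_consecutive=2))  # 1
    print(first_reaction_frame(flickering, required_consecutive=4))  # 8
    # Tuned to 4 frames, most of the window is gone before the system
    # is "sure" enough to act.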


11 Responses:

  1. Troy says:

    I previously semi-joked on Twitter that Uber's software wasn't trained to actually identify a person wheeling a bike in traffic, but it looks like my instinct was right.

    Common failure-point of ML: train on A=A, B=B -- but when it sees A&B together it sees nothing.
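
    Toy version of that failure mode (nothing to do with Uber's actual network, just the shape of the problem): a closed-set classifier splits its confidence between the classes it knows, so A&B never clears any single-class threshold.

        # Toy closed-set classifier with made-up scores. A pedestrian
        # pushing a bike partially matches two trained classes, so the
        # confidence is split and nothing clears the threshold.
        def classify(scores, threshold=0.6):
            best = max(scores, key=scores.get)
            return best if scores[best] >= threshold else "unknown"

        a_alone = {"pedestrian": 0.85, "bicycle": 0.05, "vehicle": 0.10}
        b_alone = {"pedestrian": 0.05, "bicycle": 0.90, "vehicle": 0.05}
        a_and_b = {"pedestrian": 0.40, "bicycle": 0.45, "vehicle": 0.15}

        for name, scores in [("A", a_alone), ("B", b_alone), ("A&B", a_and_b)]:
            print(name, "->", classify(scores))
        # A -> pedestrian, B -> bicycle, A&B -> unknown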

    I read that Uber turned off the Volvo's factory collision-avoidance system because they wanted to test their own software.

    The main failure was testing on uncontrolled roads in the first place, and then not having two-person professional teams in the cars. Idiots.

    • Wilko says:

      I read that too. These idiots took what is probably one of the safest cars on the road and turned it into a death machine. If I were Volvo, I would be investigating how to prevent Uber from using my cars as their test platform.

      My Subaru has a similar collision warning and avoidance system, and while it does sometimes false-trigger, its warnings have saved me from at least one low-speed fender-bender when I was distracted looking for a parking spot.

  2. Important public reminder: merely deleting the Uber app doesn't deactivate your account, and they still get to claim you as part of their userbase when they do investor calls.

    You need to write their customer service bots and demand that they actually close and delete your account.

  3. Jan Kujawa says:

    They probably figure the lives they've saved by taking drunk drivers off the road offset the people their robots kill. Balance!

  4. MattyJ says:

    So if corporations are people, why can't corporations go to jail ...?

    • phuzz says:

      Or get the death penalty? Wait, that's probably only for poor people, right?
      Actually, I think I've just answered both our questions: they have money, and that's why they don't get punished.

  5. Joe Shelby says:

    Another process issue: all the backups were down, too. Not only did the car not react to the pedestrian by braking (the braking systems were shut off), it expected the human to actually notice it... BUT the systems that would tell the human "hey, I spotted something, check it out" were ALSO disabled.

    If you're not really driving, even in the driver's seat, you're not paying attention.

    Seems nobody actually walked through the use cases and all the reasons WHY a setup like that would be a really stupid thing to do...

  6. cxed says:

    It sounds like the detection classifiers worked pretty well and the system did detect a bicycle. The real problem seems to have been a fatal combination of 1. the car not triggering an emergency braking maneuver and 2. the safety driver not being aware enough of the situation to do it manually.

    That #2 seemed pretty damning to me from the interior video, but now it seems the safety driver wasn't checking social media. Apparently Uber made the very bad strategic decision to have the safety drivers monitoring some kind of screen feedback as part of the designed mission. That was very stupid.

    Good information is here.

  7. snert says:

    Regardless of the specifics of the internal process, the Uber beast said to itself "if you see someone in the way, fuck that person, because our agenda is more important than everyone." Uber should have its license suspended, they should pay restitution and crippling punitive fines, executives should serve time in prison and forfeit their stock and options, and every manager involved should face charges.

  8. Chris Davies says:

    So if I understand this correctly, the software knew full well that its course of action was likely to result in human injury or death and the in-built response to this situation was to carry on as if nothing was wrong and hope like hell that the supervising human was fully awake? Who wrote this software?

    If sane responses to the detection of hazardous situations cause erratic driving behaviour from the software, clearly the software isn't ready to be unleashed on the streets. If Asimov were still alive, I'm sure he'd have a few choice words to say about deliberately Three Laws non-compliant AI.

  9. Marten says:

    On top of all that's already been well said by others about the AI... if it was going 43 mph, then it was speeding. I know limits are hard for humans to take literally, and we wouldn't think anything of a human driver going 3 mph over.

    But it's a limit. Not a recommended average. Not a target speed. It's the maximum speed deemed safe. I think it's incredibly stupid to tell robots they can exceed it just because humans can't grasp this and make up excuses. (And yes, of course I do the same.)
