Today in Uber Autonomous Murderbot News

"Safety driver" of fatal self-driving Uber crash was watching Hulu at time of accident:

Police obtained records from Hulu, an online service for streaming TV shows and movies, which showed Vasquez's account was playing the TV talent show "The Voice" for about 42 minutes on the night of the crash, ending at 9:59 p.m., which "coincides with the approximate time of the collision," the report said. [...]

The Uber car was in autonomous mode at the time of the crash, but the company, like other self-driving car developers, requires a back-up driver inside to intervene when the autonomous system fails or a tricky driving situation occurs.

WHICH WILL NEVER WORK!

Vasquez looked up just 0.5 seconds before the crash, after keeping her head down for 5.3 seconds, the Tempe police report said. Uber's self-driving Volvo SUV was traveling at just under 44 miles per hour. [...] Police said a review of video from inside the Volvo showed Vasquez was looking down during the trip, and her face "appears to react and show a smirk or laugh at various points during the times that she is looking down." The report found that Vasquez "was distracted and looking down" for close to seven of the nearly 22 minutes prior to the collision. [...]

According to a report last month by the National Transportation Safety Board, which is also investigating the crash, Vasquez told federal investigators she had been monitoring the self-driving interface in the car and that neither her personal nor business phones were in use until after the crash. That report showed Uber had disabled the emergency braking system in the Volvo, and Vasquez began braking less than a second after hitting Herzberg. [...]

In addition to the report, police released a slew of audio files of 911 calls made by Vasquez, who waited at the scene for police, and bystanders; photographs of Herzberg's damaged bicycle and the Uber car; and videos from police officers' body cameras that capture the minutes after the crash, including harrowing screams in the background.

I repeat myself, but:

  1. The Uber executives who put this software on the public roadways need to be in jail. They disabled safety features because they made testing harder. They disabled safety features because they made the ride rougher.

  2. This notion that having a "safety driver" in the passenger seat will allow a distracted human to take over at the last minute is completely insane. You think driving-while-texting is dangerous? This is so much worse. When people aren't engaged in the task of driving, their minds wander. They cannot re-engage fast enough. This is obvious on its face; we don't need studies to prove it. Oh, but we have them anyway.

  3. I would still like to know the answer to the question of who gets charged with vehicular homicide when one of these machines kills someone. Even if they are ultimately ruled to be not at fault, what name goes on the court docket? Is it:

    • The Uber employee "non-employee independent contractor" in the passenger seat?
    • Their shift lead?
    • Travis Kalanick?
    • The author(s) of the (proprietary, un-auditable) software?
    • The "corporate person" known as Uber?



9 Responses:

  1. Ru says:

    Well, that's good news for uber! I wonder if they vet their people for a minimum level of irresponsibility.

  2. Kyle Huff says:

    "I personally had nothing to do with this. My job was to observe and report on it."
    -Adolf Eichmann

  3. Kaleberg says:

    They are absolutely desperate for a patsy, if only as a distraction.

    The whole thing is playing out like a film noir. A two-bit ex-con fresh out of jail gets hired by some big high-tech company as a driver for a self-driving car. It's the Oxymoron Company. The car kills someone while she's in the driver's seat. In a good film noir, it would turn out to be a hit job: the murdered woman was a whistleblower or an ex-wife seeking alimony. Real life is more sordid than any film noir. The murdered woman was just some poor schnook trying to cross the street. Now they're trying to pin the rap on the ex-con rather than on the criminals who released totally flawed self-driving car software onto the streets.

    A patsy and a conviction might just let them keep producing software so bad that even people who don't understand software can understand how bad it is. It's not some subtle buffer underflow error or something in Geek you can't understand. It's they-turned-off-braking-while-heading-towards-a-pedestrian-crossing-the-street bad. You really don't need a Stanford degree to grok how wretched that is.

    They'd be blaming some patsy if the car had jumped onto the sidewalk and exploded, killing 17. They'd probably try to turn it into a terrorism rap when it was really just a who-wants-to-update-the-compiler-right-before-release-date problem.

    • k3ninho says:

      The number of miles an autonomous vehicle would have to drive without a fatality to show, at the 95- or 99-percent confidence level, that it's safer than the current driving population is something like 2.3 billion[1]. The best use of the tech in the meanwhile -- while it's got its learner permit or 'L' plates -- is to keep score against a fully-blamed, fully-controlling human driver: the robot knows the control envelope of the vehicle and can respond to things around it, so every time the system spots and overrides a hazard the human didn't anticipate, it gets a point; same for the human. Treat a human spotting an unanticipated hazard as a bug bounty and reward it, and every time the car and the human compete to spot something, we've gamified it so the human sharpens their skills. (A rough sketch of this scoring idea follows below.)

      K3n.

      1: https://medium.com/@AsherOfLA/how-manned-vehicles-are-much-safer-than-elon-musk-and-others-admit-and-why-that-matters-42d8cc5656c8
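
      A minimal sketch of what that kind of score-keeping could look like, in Python. Every class, field, and bounty value here is hypothetical, invented purely for illustration -- it is not drawn from any real AV stack:

      from dataclasses import dataclass

      @dataclass
      class HazardEvent:
          """One hazard noticed during a supervised drive (hypothetical record format)."""
          spotted_by: str   # "robot" or "human"
          overridden: bool  # did the spotter actually intervene?

      @dataclass
      class Scoreboard:
          """Tally who caught what while the human stays fully in control and fully blamed."""
          robot_points: int = 0
          human_points: int = 0
          bounties_paid: int = 0  # human-spotted hazards rewarded like bug bounties

          def record(self, event: HazardEvent, bounty: float = 10.0) -> float:
              """Score one event and return any bounty owed to the human."""
              if event.spotted_by == "robot" and event.overridden:
                  self.robot_points += 1
                  return 0.0
              if event.spotted_by == "human":
                  self.human_points += 1
                  self.bounties_paid += 1
                  return bounty
              return 0.0

      # Example: the robot catches two hazards the human missed; the human catches one.
      board = Scoreboard()
      for ev in (HazardEvent("robot", True), HazardEvent("human", True), HazardEvent("robot", True)):
          board.record(ev)
      print(board.robot_points, board.human_points, board.bounties_paid)  # 2 1 1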

      • Kaleberg says:

        Those safety figures are meaningless when a company can release a car on the public roads that doesn't brake when it detects someone in front of it, because that feature has been disabled, and there are no serious consequences. Perhaps we want a point system where companies that release dangerous vehicles on the street lose points, rather than trying to blame some patsy with no knowledge of the software who was ostensibly sitting at the wheel.

        I'm not saying this technology cannot be made to work. I'm just saying we need to provide some incentives to the right party.

      • MetaRZA says:

        You seem to fail to understand the problem at hand.

  4. ennui says:

    I think it's pretty obvious that "self-driving" is a much harder problem than has been sold and that the technology isn't there. It's going to be fascinating to see:

    a) whether Google's Waymo taxi service in Phoenix ever actually goes live

    b) how much interference the Phoenix PD is going to run for Google's Waymo while they run their experiment.

    The Uber murder case shows that the right police department is willing to go all out for a corporate sponsor.

    But the real question is whether enough money is lined up that we are going to rewrite the rules of the road so that investors don't get hosed. At this point, the technology has so many fans that any Potemkin pilot project is going to get a chorus of support regardless, because there is a huge constituency of nerds who desperately need to believe that their career in JavaScript hacks was actually about building the future. When Musk eventually gets sued into oblivion it will almost be tragic. He bet all his insider-trading rewards on building Disney's world-of-tomorrow on real streets instead of in a carefully managed gated utopia.

    But, that's where all of this will end up: self-driving golf carts driving retired techbros around their gated private tech-Disneylands on virtual rails...

  5. MattyJ says:

    Tragic irony: The Voice works just as well as a radio show.

  6. Nick Lamb says:

    Specifically in response to the "which will never work": this approach absolutely can work, and does, in other transport industries. And what they do is instructive.

    A big passenger jet is quite capable of landing under computer control in thick fog (a Cat III landing). After the point where you're committed to putting the plane on the ground this system can of course still fail, and this is a fail-active state, because failing passively might kill everybody on board. An active failure is alarmed, so the pilots have a distinct trigger for the scenario where they must prepare for the worst if a further failure causes the autoland to disconnect altogether.

    GoA 3 trains have a qualified driver travelling inside them, but they may be anywhere on board (often checking tickets or assisting passengers with problems). The train can't cope with every problem, but it can always safely wait for the operator. Again, there's a specific alarm to signal that attention will be needed.

    What can't work is expecting humans to intervene at any moment to override a system that usually works but sometimes fails silently. The system needs to know there's a problem.
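
    A toy sketch of that distinction, in Python; the class, the method names, and the timings are all invented for illustration, not taken from any real system. The point is simply that the automation (or an independent monitor watching it) announces its own failure, so the supervising human gets a distinct trigger to take over instead of being expected to notice a silent one:

    import time
    from typing import Callable, Optional

    class FailActiveWatchdog:
        """Toy 'fail-active' supervisor: if the automation stops heartbeating or
        reports a fault, raise a loud alarm rather than failing silently."""

        def __init__(self, timeout_s: float, on_alarm: Callable[[str], None]):
            self.timeout_s = timeout_s
            self.on_alarm = on_alarm
            self.last_beat: Optional[float] = None
            self.alarmed = False

        def heartbeat(self, fault: Optional[str] = None) -> None:
            """Called by the automation on every control cycle."""
            self.last_beat = time.monotonic()
            if fault is not None:
                self._alarm("automation reported a fault: " + fault)

        def check(self) -> None:
            """Called periodically by an independent monitor."""
            if self.last_beat is None or time.monotonic() - self.last_beat > self.timeout_s:
                self._alarm("automation heartbeat missing; prepare to take over")

        def _alarm(self, reason: str) -> None:
            if not self.alarmed:  # latch: alarm once, loudly
                self.alarmed = True
                self.on_alarm(reason)

    # Example: the automation goes quiet, so the operator gets an explicit
    # take-over cue instead of having to notice the silence themselves.
    dog = FailActiveWatchdog(timeout_s=0.2, on_alarm=lambda msg: print("ALARM:", msg))
    dog.heartbeat()  # one healthy control cycle
    time.sleep(0.3)  # the automation silently stops
    dog.check()      # prints: ALARM: automation heartbeat missing; prepare to take over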
