Google and Uber

How are these murderous sociopaths not in jail?

"If it is your job to advance technology, safety cannot be your No. 1 concern," Levandowski told me. "If it is, you'll never do anything. It's always safer to leave the car in the driveway. You'll never learn from a real mistake."

Levandowski had modified the cars' software so that he could take them on otherwise forbidden routes. A Google executive recalls witnessing [Isaac] Taylor and Levandowski shouting at each other. Levandowski told Taylor that the only way to show him why his approach was necessary was to take a ride together. The men, both still furious, jumped into a self-driving Prius and headed off.

The car went onto a freeway, where it travelled past an on-ramp. According to people with knowledge of events that day, the Prius accidentally boxed in another vehicle, a Camry. A human driver could easily have handled the situation by slowing down and letting the Camry merge into traffic, but Google's software wasn't prepared for this scenario. The cars continued speeding down the freeway side by side. The Camry's driver jerked his car onto the right shoulder. Then, apparently trying to avoid a guardrail, he veered to the left; the Camry pinwheeled across the freeway and into the median. Levandowski, who was acting as the safety driver, swerved hard to avoid colliding with the Camry, causing Taylor to injure his spine so severely that he eventually required multiple surgeries.

The Prius regained control and turned a corner on the freeway, leaving the Camry behind. Levandowski and Taylor didn't know how badly damaged the Camry was. They didn't go back to check on the other driver or to see if anyone else had been hurt. Neither they nor other Google executives made inquiries with the authorities. The police were not informed that a self-driving algorithm had contributed to the accident.

Levandowski, rather than being cowed by the incident, later defended it as an invaluable source of data, an opportunity to learn how to avoid similar mistakes. He sent colleagues an e-mail with video of the near-collision. Its subject line was "Prius vs. Camry." (Google refused to show me a copy of the video or to divulge the exact date and location of the incident.) He remained in his leadership role and continued taking cars on non-official routes.

According to former Google executives, in Project Chauffeur's early years there were more than a dozen accidents, at least three of which were serious. One of Google's first test cars, nicknamed KITT, was rear-ended by a pickup truck after it braked suddenly, because it couldn't distinguish between a yellow and a red traffic light. Two of the Google employees who were in the car later sought medical treatment. A former Google executive told me that the driver of the pickup, whose family was in the truck, was unlicensed, and asked the company not to contact insurers. KITT's rear was crushed badly enough that it was permanently taken off the road.

In response to questions about these incidents, Google's self-driving unit disputed that its cars are unsafe. "Safety is our highest priority as we test and develop our technology," a spokesperson wrote to me. [...]

As for the Camry incident, the spokesperson [said that] because Google's self-driving car did not directly hit the Camry, Google did not cause the accident.

These words actually came out of this creature's mouth, on purpose, when it knew that humans could hear it speaking:

"The only thing that matters is the future," [Levandowski] told me after the civil trial was settled. "I don't even know why we study history. It's entertaining, I guess -- the dinosaurs and the Neanderthals and the Industrial Revolution, and stuff like that. But what already happened doesn't really matter. You don't need to know that history to build on what they made. In technology, all that matters is tomorrow."

Previously, previously, previously, previously, previously.

34 Responses:

  1. MattJ says:

    Didn't they already reboot Robocop? I feel like this is Robocop.

    You might recognize Levandowski as the guy that stole all Waymo's trade secrets on the way out and took them to Uber, and was subsequently fired from Uber.

    The list of reasons to hate this asshat goes on:

    "... Levandowski had established a religious organisation called 'Way of the Future' to “develop and promote the realization of a Godhead based on Artificial Intelligence.”"

    So, this guy is _promoting_ the idea of Skynet? Yikes.

    • jwz says:

      Generously, he's gone all in on Roko's Basilisk. But far more likely is that he's just a troll as well as (as has been established) a murderous sociopath.

  2. Jason McHuff says:

    A former Google executive told me that the driver of the pickup, whose family was in the truck, was unlicensed, and asked the company not to contact insurers.

    "Uh oh, this might turn into an incident and lead to bad publicity." ... "Phew. The other party wants this kept quiet, too."

  3. ennui says:

    "all that matters is tomorrow..." because tomorrow belongs to me!

    and to think, Levandowski only has a Master's degree! and from Berkeley...

    How many ex-Google execs saw "Prius vs. Camry" and are still working at Waymo?

  4. phuzz says:

    Say what you like about the guy, but he's clearly a good cultural fit for working at Uber.

  5. Have you already encountered STET by Sarah Gailey? Seems apropos.

  6. -dsr- says:

    A short SF story which is relevant and, I fear, prescient:
    https://firesidefiction.com/stet

    (If you're unfamiliar with it, "stet" is the editing term for "leave this as written, it's exactly what I meant".)

  7. tyggerjai says:

    While people are recommending related reading - I'm never sure how well known it is outside Australia, but have you read _Jennifer Government_? It's _The Space Merchants_ for the 21st century.

    https://www.goodreads.com/book/show/33356.Jennifer_Government

  8. thielges says:

    The way safety is approached in AVs is bizarre. The hardware is held to very high standards for fault tolerance (ISO-26262) to ensure that random chip failures don't result in a dangerous incident. But validation of the software running on that hardware is loosey-goosey, mainly because the self-training AI techniques applied resist formal verification.

    So the ARM Cortex CPU controlling the car might survive several direct hits of bit-flipping radiation. But who knows how the neural net it is running will respond to some unexpected corner case?
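
    To make the asymmetry concrete, here is a toy sketch (illustrative only -- hand-picked weights on a three-number "image", nothing like a real perception stack): radiation-hardened silicon can execute this flawlessly forever, yet the decision boundary itself quietly flips on a slightly off-spec input, which is roughly the failure that bit KITT at that traffic light.

        import numpy as np

        # Hypothetical linear "classifier": score > 0 means "red", else "yellow".
        # Weights chosen by hand for illustration; a real perception net has millions.
        weights = np.array([1.0, -2.0, 0.0])   # penalize the green channel heavily
        bias = 0.3

        def classify(rgb):
            score = float(np.dot(weights, rgb) + bias)
            return "red" if score > 0 else "yellow"

        red    = np.array([0.90, 0.10, 0.10])
        amber  = np.array([0.90, 0.58, 0.10])          # a faded or off-spec lens
        amber2 = amber + np.array([0.00, 0.05, 0.00])  # barely different lighting

        print(classify(red))     # "red"
        print(classify(amber))   # "red"
        print(classify(amber2))  # "yellow" -- a few percent more green flips the call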

    • BHN says:

      Is anyone else amazed that DOT isn't requiring vast, vast amounts of off-the-road validation testing for any AV technology before it's tested on-the-road? I'm guessing the current breed of auto-braking and similar technologies had to prove they don't cause crashes before being released into the wild, for example.

      I think of all of the crash-test dummies that have died to prove my seatbelt and impact zones and airbag work, compared to 'Prius vs. Camry', and it truly boggles my mind.

      How are these AV executives not in prison or on their way there if they've caused actual crashes and injuries by testing unproven tech on public streets and highways? Is this just a gap in current laws or lack of enforcement?

      • thielges says:

        FWIW, ISO-26262 does require vast amounts of verification, consuming thousands or millions of CPU hours. It's just not enough, so I agree with you. Interestingly, you can draw a straight line from ISO-26262 to the techniques used over a century ago to improve railway safety (because what is a digital chip, after all, but a giant conglomeration of switches and such :-). The part of the circuit designated to detect and correct faults is even still referred to as the "safety mechanism".

        The way Safety Engineering and the law interact is that builders of devices are only required to apply the best known techniques for verification. For neural nets, if that's just "play back a bunch of known scenarios and confirm nothing bad occurred," then that's all that gets done. There's no requirement to go above and beyond best practices. So you can ship a dangerous device and deflect injury/death lawsuits if you can prove you've done your "best".
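
        A minimal sketch of what that playback style of validation amounts to (names and thresholds are hypothetical; a real rig replays logged sensor data through a full simulator):

            from dataclasses import dataclass

            @dataclass
            class Scenario:
                name: str
                min_gap_m: float   # closest approach to any other road user during replay

            # Stand-in for a full simulator run driving the policy with logged data.
            def replay(policy, scenario):
                return scenario.min_gap_m

            SAFE_GAP_M = 1.5       # hypothetical safety invariant

            def validate(policy, library):
                # "Confirm nothing bad occurred": flag any scenario that gets too close.
                return [s.name for s in library if replay(policy, s) < SAFE_GAP_M]

            library = [
                Scenario("four_way_stop", 3.2),
                Scenario("cut_in_on_freeway", 1.9),
                # "boxed_in_merge" only gets added to the library after it happens
                # on a real freeway...
            ]

            print(validate(policy=None, library=library))   # [] -- "nothing bad occurred"

        Coverage is bounded by the scenario library, which is exactly the problem: the corner cases you most need are the ones nobody thought to record.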

        Expanding safety requirements to better validate AI software is a large and daunting task. If any agency/lab/university requests budget from the US federal government to build the team required to develop advanced methods, you can bet the current Secretary of Transportation will deny the request.

        My guess is the push to the edge of the envelope will come from Europe or Japan, which have stronger safety cultures. Once those techniques are discovered, American laws will be forced to follow.

        So thanks in advance, Japan, for protecting the American people.

      • Nick Lamb says:

        Although some safety testing of your car is mandatory as part of what's called "Type Compliance" (for mass-market products, the government destructively tests a small sample provided by the manufacturer and considers that if the items in the sample achieve a satisfactory result, the entire "type" is passed), it's to a very low standard. Type Compliance also tends to take the same view of what counts as "the same" as a motor mechanic might: the trim, branding, and other parts which aren't mechanically essential to the operation of the vehicle are not taken into consideration, so the sample tested as an example of your "type" of vehicle may be almost unrecognisable unless you're a car buff.

        NCAP, the New Car Assessment Program, is not mandatory, and although the results of NCAP tests have to go on a Monroney sticker (so if a mass-market vehicle has been NCAP tested, new purchasers should become aware of how it scored), there is no minimum score. Manufacturers can (and sometimes do) sell vehicles that get atrocious NCAP scores. Most of the videos you may have seen of crash dummies getting smashed in impressive-looking simulated accidents are from NCAP (or its various clones in other jurisdictions: EuroNCAP, ANCAP, and so on), not from a Type Compliance test.

        As for things like auto-braking (AEB, Autonomous Emergency Braking) that involve the computer braking: no, those are not tested for Type Compliance. Almost any conceivable computer braking decision would have no bearing on Type Compliance.

        AEB is tested by EuroNCAP (and I presume NCAP; I don't read their stuff), but again, cars don't require NCAP testing to be sold, and even if they get the worst possible rating from NCAP they're still legal. And NCAP doesn't care about optional features unless it believes the option is nearly universally taken up. If there's an $8,000 "Cherish Plus" option for your new car with AEB, that option has probably never been tested anywhere by anyone other than the manufacturer.

        Maybe full-blown autonomous driving will get some Type Compliance tests before it becomes broadly legal, but don't bet on it, and if it happens, don't expect those compliance tests to be more thorough than existing internal manufacturer testing -- expect less.

        • BHN says:

          And there go my warm fuzzy feelings. :-)

          Woman on Plane: A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one.

          Woman on Plane: Are there a lot of these kinds of accidents?

          Narrator: You wouldn't believe.

          Woman on Plane: Which car company do you work for?

          Narrator: A major one.
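
          The narrator's arithmetic, spelled out with entirely made-up numbers just to show how cold the calculus is:

              vehicles_in_field = 500_000        # A (hypothetical)
              probable_failure_rate = 0.0002     # B (hypothetical)
              avg_settlement_usd = 2_000_000     # C (hypothetical)

              x = vehicles_in_field * probable_failure_rate * avg_settlement_usd
              recall_cost_usd = 250_000_000      # also hypothetical

              print(f"expected payout X = ${x:,.0f}")    # $200,000,000
              print("no recall" if x < recall_cost_usd else "initiate recall")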

          • Andrew Klossner says:

            That quote is from Fight Club. Fictional but still disconcerting.

            • BHN says:

              I'm aware of its origin, I just felt like Woman on Plane for a moment there and had to share. I definitely felt like I had been disconcerted and also somewhat thankful that both of our vehicles in my household are old enough that many of the kinks have been worked out via everyone's beta testing (thanks, dead people!).

              Of course I just received a voicemail about a recall on one of the two. The universe has nothing if not a good sense of comedic timing.

        • maysiedasie says:

          Thought: Cars are much bigger than bikes.

      • Julian Calaby says:

        The "great" thing about automatic breaking systems is that they have two failure modes:

        1. They don't react, at which point the situation devolves into a normal accident. "the driver should have been paying more attention"

        2. They react too much, at which point they can be described as flaky and tuned to react less. "we've updated the system to work better"

        Which means that all they need to do is work correctly a few times and everything else gets written off - "I don't think anyone or anything could have braked fast enough" or "it was a little glitchy to start, but it saved my life yesterday"

        The regulatory side of this is that they can be described as "driver aids", like electronic parking brakes or self-cancelling indicators, so it's "not a big deal" if they fail: the driver is expected to be in complete control of the vehicle and not to require these systems.

        Self-driving systems are a completely different kettle of fish, where the driver is expected not to be in complete control, so any certification needs to tread the very thin line between providing sufficient safety, lobbying from automakers, and consumer desires.

        Personally, I would not consider any level of computer simulation "enough" for real-world driving. Google's "approved routes" approach to letting these vehicles out on real roads has problems too. Remember, most accidents happen within 5 miles of home.

  9. Eric says:

    Gives a new meaning to "fail fast."

  10. Bai Hui says:

    Car accidents happen every day. Self-driving cars are the future. In the long term this would be much safer than humans driving.

    • thielges says:

      I agree that in the long term AVs could* make streets safer. I'm just concerned that in the short term a lot of people could be unnecessarily maimed or killed in the name of expediting progress.

      * Improved safety isn't a given. There's a lot to discuss on this topic, but there's a conflict between the convenience of the people inside the AV and the safety of those outside. If we handle this conflict the way we handle normal vehicles today (SUVs!) then the future doesn't look too bright.

      • k3ninho says:

        The thing that's not often mentioned in this discussion: the lone autonomous vehicle is what these guys envision, but a mesh network of vehicle-to-vehicle comms and vehicle-to-infrastructure information would let contemporary cars know about oncoming obstructions and trigger automatic emergency braking. With more autonomy, it would let vehicles swarm around obstacles and spread out so that nobody has to barge into traffic when merging at intersections.

        It's a damn social technology, how dare I suggest it?!?

        K3n.
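
        A hand-wavy sketch of that V2V idea (no real protocol implied; the message format and thresholds are invented): a hard-braking vehicle broadcasts a hazard, and each receiver decides whether to pre-arm its own emergency braking.

            from dataclasses import dataclass

            @dataclass
            class HazardMsg:
                sender_id: str
                position_m: float     # 1-D position along the road, for simplicity
                decel_mps2: float     # how hard the sender is braking

            def should_pre_brake(msg, my_position_m, my_speed_mps, horizon_s=3.0):
                # Pre-arm braking if the hazard is ahead of us and inside the
                # distance we will cover in the next horizon_s seconds.
                gap_m = msg.position_m - my_position_m
                return 0 < gap_m < my_speed_mps * horizon_s

            msg = HazardMsg("car_42", position_m=150.0, decel_mps2=6.0)
            print(should_pre_brake(msg, my_position_m=100.0, my_speed_mps=30.0))  # True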

        • jwz says:

          Trolley Problems get really interesting when all the gamer AIs can see the board and when their proxy for "respect for human life" is "does the lawsuit settlement fail to bankrupt the corporation".

          Also: optimally spreading traffic onto all available road surfaces is terrible. It's well established that adding road capacity does not reduce congestion; it just adds volume. So more efficiently allocating traffic will do the same.

          • Nick Lamb says:

            Optimal spreading can, however, reduce the need to accelerate and decelerate vehicles, cutting both the energy used and the emissions from operations. Far more so than a pedestrian, a road vehicle produces particulates from braking, especially heavy braking. An electric vehicle reduces this by turning more of the kinetic energy back into electrical energy rather than the heat of rubbing two surfaces together (which invariably releases particulates), but there is still some pollution, and less braking reduces it further.

            Human drivers seem, on the whole, to feel compelled to "close gaps", and so they spend far more of their time accelerating and decelerating than they really need to. A fleet of autonomous vehicles without this impulse to aggressively "close gaps" might achieve essentially the same results (in terms of how long it takes to get where you're going) while reducing the energy used and emissions generated.

            With railway trains the autonomous systems really, really like coasting, because of course that's cheaper and so that's what they were programmed to prefer. But human drivers get trained to behave the same way. Don't race to each danger signal and sit there waiting, just drift along, increasing the pace when you seem to have a clear route ahead, gently decreasing it when the signals are against you. It's cheaper, it's better for the environment, it's more comfortable for passengers, and it makes almost no difference to the journey time.
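
            Back-of-envelope numbers for a single "close the gap" brake-and-accelerate cycle (all figures invented for illustration):

                mass_kg = 1800                 # hypothetical mid-size EV
                v_hi = 110 / 3.6               # m/s, speed before braking (110 km/h)
                v_lo = 80 / 3.6                # m/s, speed after braking (80 km/h)

                delta_ke_j = 0.5 * mass_kg * (v_hi**2 - v_lo**2)
                regen_efficiency = 0.6         # hypothetical fraction recovered

                recovered_j = delta_ke_j * regen_efficiency
                wasted_j = delta_ke_j - recovered_j   # heat and brake-dust particulates

                print(f"energy shed per cycle: {delta_ke_j / 1000:.0f} kJ")   # ~396 kJ
                print(f"recovered via regen:   {recovered_j / 1000:.0f} kJ")  # ~238 kJ
                print(f"lost as heat and wear: {wasted_j / 1000:.0f} kJ")     # ~158 kJ

            Multiply by thousands of needless cycles per trip, across a whole fleet, and the smoother driving style stops being a rounding error.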

          • Julian Calaby says:

            Think about it this way:

            If we segregated autonomous cars the way we segregate trains, the gamer AIs could not only see the board but also the cards everyone is holding and the strategies in their heads, and they could plan around it all almost perfectly.

            Right now, the best the gamer AIs can do is try to predict that data in a system where they don't even know the rules, play-styles, and win conditions the other players are playing with.

            It's seriously impressive technology -- and by that I mean it's seriously impressive that it works at all -- but it's trying to predict the almost-unpredictable; no wonder we're having so many accidents. Or should I say SNAFUs.

  11. E.M.S. says:

    "The only thing that matters is the future," [Levandowski] told me after the civil trial was settled. "I don't even know why we study history. It's entertaining, I guess -- the dinosaurs and the Neanderthals and the Industrial Revolution, and stuff like that. But what already happened doesn't really matter. You don't need to know that history to build on what they made. In technology, all that matters is tomorrow."

    ...this, kids, is why history is repeating itself with such vigor lately.

    Most of these techies behave just like their autonomous cars; they're following a program, and if anything deviates from it, they can't cope. Segmentation fault (core dumped). Such incredible arrogance. Perhaps this is what happens when an entire generation is raised by a surrogate parent in the form of a "smart" phone.

    Message for the arrogant techies: "just because you can doesn't mean you should."

    • BHN says:

      The arrogant techies need inputs from someone other than sales folks and people that give them money hoping for a ridiculous return on investment (sociopaths enabling sociopaths, isn't it cute?). Someone who occasionally reminds them that they're not gods might also be useful.

      Government really needs to get better at keeping up with advances in technology, but I don't know how that would happen, since we don't elect anyone who isn't already way out of touch with technology... and frankly, democracies work better when they can't change rapidly, since that reins in lots of potentially very bad developments. There must be a happy medium in there somewhere.

  12. Moofie says:

    I find it interesting that, for all the journalists in the Valley, it took the New Yorker to find out about Levandowski and Google's self-driving shenanigans. Given that the New Yorker did find this out, people must have been willing to talk -- if only anyone had asked how safe the self-driving cars really were.

  13. o.o says:

    I question the way the "Prius vs. Camry" incident is framed. Unless laws are different in CA, it is incumbent upon merging traffic to yield, not the other way around.

    • BHN says:

      True, but in actual practice the flow of traffic is supposed to leave gaps into which entering traffic can, you know, merge. ;-) Otherwise people end up trapped at the ends of entrance ramps, or you get Prius vs. Camry situations.

      (It's also essential in highway traffic to keep an assured clear stopping distance between vehicles, but that's probably never actually been implemented; it's only theory.)
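
      For reference, the textbook arithmetic behind "assured clear distance" (standard kinematics; the parameters are illustrative, not regulatory):

          v_mps = 110 / 3.6            # 110 km/h highway speed
          reaction_time_s = 1.5        # typical human perception-reaction time
          decel_mps2 = 7.0             # hard braking on dry pavement (illustrative)

          reaction_d = v_mps * reaction_time_s        # ~46 m before the brakes engage
          braking_d = v_mps**2 / (2 * decel_mps2)     # ~67 m of braking
          print(f"assured clear distance: {reaction_d + braking_d:.0f} m")  # 113 m

      Nobody leaves 113 m of clear road at highway speed -- hence "only theory".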
