DMV allows deployment of fully-automated murderbots on San Francisco streets

Cruise says it plans to test vehicles without a human safety driver behind the wheel before the end of 2020.

Cruise will be allowed to test "five autonomous vehicles without a driver behind the wheel on specified streets within San Francisco," the agency said. "The vehicles are designed to operate on roads with posted speed limits not exceeding 30 miles per hour, during all times of the day and night, but will not test during heavy fog or heavy rain."

A spokesperson for the DMV did not immediately respond to a question about the streets to which Cruise's vehicles will be confined. [...]

The California permit came through before the federal government's, which is weighing a separate application from Cruise to deploy a fleet of fully driverless vehicles without steering wheels or pedals.

As I keep saying: I would like to know the answer to the question of who gets charged with vehicular homicide when (not if) one of these machines kills someone. Even if they are ultimately ruled to be not at fault, what name goes on the court docket? Is it:

  • The Cruise "non-employee independent contractor" in the passenger seat?
  • Their shift lead?
  • The CEO, Dan Ammann?
  • The author(s) of the (proprietary, un-auditable) software?
  • The "corporate person" known as General Motors?

Self-driving cars will never be safe on city streets. This is a hard AI problem that is far beyond any current technology, and that cannot be solved without human-level general artificial intelligence. Every company claiming that this is possible is lying. Their grift is to extract money from investors while weathering an "acceptable" level of increased human casualties. If there is a more literal definition of blood money, I don't know what it is.

Once again, from Fight Club:

I'm a recall coordinator. My job is to apply the formula. It's a story problem.

A new car built by my company leaves somewhere traveling at 60 miles per hour. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now: do we initiate a recall?

Take the number of vehicles in the field, (A), and multiply it by the probable rate of failure, (B), then multiply the result by the average out-of-court settlement, (C). A times B times C equals X... If X is less than the cost of a recall, we don't do one.
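The recall "story problem" above is just three multiplications and a comparison. A minimal sketch in Python (the function name and the example numbers are mine, purely for illustration):

```python
# The recall formula from the passage:
#   A: number of vehicles in the field
#   B: probable rate of failure
#   C: average out-of-court settlement
#   X = A * B * C; recall only if X exceeds the cost of a recall.
def should_recall(vehicles_in_field, failure_rate, avg_settlement, recall_cost):
    expected_liability = vehicles_in_field * failure_rate * avg_settlement  # X
    return expected_liability >= recall_cost

# Made-up numbers: 1,000,000 cars, a 1-in-100,000 failure rate, and a
# $3,000,000 average settlement give X = $30,000,000.
print(should_recall(1_000_000, 1e-5, 3_000_000, 50_000_000))  # X < cost: no recall
```

The grim punchline is that human lives only enter the equation as term C.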



18 Responses:

  1. Not that I think safety drivers do a damn thing anyway. Even if they aren't looking at their phone, by the time they figure out the robot is off the rails, it's way too late for a human to fix it.

    I'm in favor of putting the CEO on the indictment, myself.

    • jwz says:

      Yes, the "safety driver" is a liability lightning rod. When the inevitable death occurs, that low-paid contractor is the patsy. It's a bargain!

    • thielges says:

      The closest person to being the actual culprit will be whoever tweaks the higher-level operational constants that make trade-offs between rider convenience/comfort and safety. For example, “slam on the emergency brakes if probability_of_human_in_path > K1 && duration_of_detection > K2”. That’s likely someone wearing a product marketing hat.
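      That one-line policy can be sketched as runnable code; the constants, values, and names below are hypothetical illustrations, not anyone's actual system:

```python
# Hypothetical safety trade-off constants of the kind described above.
# Raising either constant trades pedestrian safety for ride smoothness.
K1 = 0.35  # minimum probability of a human in the path (hypothetical value)
K2 = 0.20  # seconds the detection must persist before braking (hypothetical)

def should_emergency_brake(probability_of_human_in_path, duration_of_detection):
    # Brake hard only when the detector is confident enough, for long enough.
    return probability_of_human_in_path > K1 and duration_of_detection > K2

print(should_emergency_brake(0.9, 0.5))   # confident, sustained detection: True
print(should_emergency_brake(0.9, 0.05))  # brief flicker shorter than K2: False
```

      Note that nothing in the code records *who* chose K1 and K2, which is exactly the accountability gap being described.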

      As for holding the actual developers accountable, that is evaded by following best practices. There’s plenty of legal precedent to lean on.

      So, yeah, the culprit is probably the highest-ranking person on the executive staff making product value trade-offs against safety. Worth noting that those operational tweaks can be implemented long after the R&D team delivered the product.

      • Dude says:

        The R&D folks aren't any less accountable than anyone else in the web of Uber's murdermobile-making/industry-wrecking/laws-of-sovereign-nation-evading bullshit (amongst their many, MANY other crimes linked to above). The R&D Dept. at Uber's like John Hammond's dino-makers.

        As for how Uber will deal with the inevitable lawsuits:

        • thielges says:

          Not saying it is right, just that it is the reality of how our legal system works.

  2. Different Jamie says:

    If the first casualty is a bicyclist, we'll know they're genuinely modeling local legal priorities.

  3. Dude says:

    Makes sense: this was the year SF saw the red Martian sky; might as well be the year of the exploding Johnny Cab (as endorsed by M.A.D.D.).

  4. Elusis says:

    Take the number of vehicles in the field, (A), and multiply it by the probable rate of failure, (B), then multiply the result by the average out-of-court settlement, (C). A times B times C equals X... If X is less than the cost of a recall, we don't do one.

    Apparently this is also the equation for OSHA regulations - will the cost of implementing a regulation exceed the value of the human lives saved? Early on, apparently, labeling hazardous and flammable chemicals in the workplace just cost too much to be worth it.

  5. Elektro says:

    The General was making autonomous cars in the 50s: they put a radio wire in the pavement. This would be a much easier problem to solve today if they put more of their investment dollars towards improving the infrastructure, such as dedicated and clearly marked highway lanes, instead of putting it all into the cars.


    There are dozens of multimillion-dollar proving grounds throughout the country where automakers can test their latest creations in a controlled environment. These facilities have entire fake towns with inflatable vehicles and pedestrians just to test driver-assist functions and fully autonomous cars. Any driving situation you can imagine can be tested over and over, with every variable controlled and no risk of injury or property damage.


    How are they supposed to test specific scenarios in a scientific and repeatable manner on a public street? This is not "testing" at all, it is just a dangerous publicity stunt.

    • Ham Monger says:

      This is not "testing" at all, it is just a dangerous publicity stunt.

      Without disagreeing about the benefits of existing proving grounds and the publicity nature of this pseudo-testing, there are other attractive factors beyond "PR stunt".

      - From the Finance dept: Someone else is paying for the test grounds, which reduces up-front costs, and hey, if somehow the code actually works the first time, this will be way cheaper.

      - From Development: We get to test our code in production? Skip the safer, scientific, and boring parts of testing, and go straight to live tests? Luckily, since they never make mistakes, their code will work perfectly the first time, so they're right there with Finance: why waste time and money on test sites?

      Of course, anyone who disagrees with Finance wants to waste money and should be fired, and anyone who disagrees with Development is questioning their manhood and should be forced to quit.

      So sure, this is first and foremost a PR stunt, but everyone, from the Chief Executive Sociopath to the junior programmers, thinks this plan is flawless.

    • ChoHag says:

      Rails. We solved this problem. The Docklands Light Railway (DLR) is a fully autonomous commuter train network that has been running in London's business district since 1987. I don't know when the automation happened.

      Why do we need death machines roving around on their own without guard rails?

  6. tfb says:

    Of course the people writing the software for these things are not responsible for what that software will do. They are, after all, just following orders. Of course they will not give up their highly-paid jobs just because those orders involve killing a few pedestrians and cyclists, or even really stop to think that that's what those orders actually mean, because that would involve realising all sorts of inconvenient truths.

    (Also of course we're all complicit in this. Some are just more complicit than others.)

    • dcapacitor says:

      This is it. This must be exactly how it feels to be a part of an evil empire. No matter what you choose to do, you're still complicit in some wrongdoing. The only choice you have to make is how to balance personal gain with how much evil you're willing to inflict on the world.

      ...which is a problem as old as human culture. Too bad the US decided to give up on ethics, except for very few, very specific and mostly bizarre cases.

      • tfb says:

        I don't think the problem is giving up on ethics (well, OK, that is part of it): it's giving up on thinking. In particular this is yet another instance of the single-bit mindset[*] at work. 'Everything I do has some kind of bad effect, therefore it does not matter what I do, since my mind can only represent quantities with a single bit and so all bad effects are equivalent. So I'll work for Uberbook[**] because they will pay me more' (funny how, when it comes to money, there is more than one bit, eh?).

        The solution to this problem isn't learning ethics (again, yes, it is): it's learning to think.

        [*] 'Single-bit mindset' is the name of my new band. We sound a bit like a cross between Freur and Syd Barrett era Pink Floyd.

        [**] After the tragic death of our drummer in an autonomous-car accident, we reformed as 'Uberbook'. Our influences are ... difficult to describe, but mostly, of course, Norwegian.

        • jwz says:

          You frame this single bit mindset as if it's a flaw, but if your goal is to justify continuing to do the thing that most benefits you personally without having to think too hard about its ramifications on others, it's a great coping strategy!

          • Elusis says:

            if your goal is to justify continuing to do the thing that most benefits you personally without having to think too hard about its ramifications on others, it's a great coping strategy

            Aka "going Galt," I believe.

        • dcapacitor says:

          Framing it as a single-bit mindset and not enough thinking seems to imply that more bits and more calculation would yield a better result.

          The decision whether or not to stomp someone's head into the ground shouldn't come from a solution to a differential equation.

          • tfb says:

            That's like saying that, when you throw a stone into a pond, what happens shouldn't come from the solution to a differential equation.
