I swear, 10% of the car traffic on SOMA streets these days is composed of single occupancy "self-driving" cars, plastered with their performatively-spinning greeblies and logos, testing this week's git pull of the new "let's see if we know how to not swerve into the bike lane yet" code on me without my consent.

I'm getting used to seeing the bored, dead-eyed stare of the hourly contractors sitting in these murder boxes, wasting fuel by driving in endless loops around my neighborhood, all day and all night long. It's disgusting.

Here's a clip from a video of some asshole "testing" his self-driving car by putting strangers into mortal danger. He starts his video by saying, "I just want to keep doing it for science, and see how it reacts, let's just roll." Fuck you entirely, you monstrously irresponsible piece of shit. After his murderbot almost mows down a crosswalk full of people, he says, "Not perfect! A big improvement, though."

I'm not linking to the original source because you shouldn't give this deadly troll his ad views. Don't reward the kind of person who never saw a Trolley Problem lever he didn't want to wildly yank back and forth.

Also apparently the Musk Defense Crew keep doing DMCA take-downs on Twitter of anyone who reposts it: "This media has been disabled in response to a report by the copyright owner."

Previously, previously, previously, previously, previously.


39 Responses:

  1. B says:

    After recently re-reading Cory Doctorow's "Attack Surface", weaponized self-driving murdercars are my new pet fear.

  2. Jeff Bell says:

    Move fast and break things...

    • jwz says:

      That's surely the philosophy of the corporate psychopaths who are selling this technology, but then there are by-blows like this guy, who's doing it just for the ad views, or possibly the lulz... On the one hand, you've got someone consulting actuarial tables and deciding that your life is worth less to them than settling a lawsuit, and on the other hand you've got some "vlogger" who just says, "Fuck you, YOLO".

      • Dude says:

        Oy... that last line reminded me that Slim's was replaced by a douche-factory with that name. Anecdotally, I've heard the owners treat masking and vaxxing the way Republican governors do.

        Speaking of whom, Violet's latest Pandemic Round-Up contains queries about how many more right-leaning US citizens are dying of COVID (a fuck-tonne) and directs to this link, which mentions the following exchange from the auto-malpractice legal drama Class Action:

        MAGGIE: May I ask a question, please?
        GETCHELL: Sure.
        MAGGIE: Why didn't you just change the blinker circuit? It's just a question.
        GETCHELL: I told Flannery about the problem a month or so before he died. He called in his head bean counter.
        MAGGIE: What's that? Risk management expert, right?
        GETCHELL: Yeah. Flannery shows him the data and asks him how much it would cost to retrofit...
        MAGGIE: You mean recall?
        GETCHELL: Yeah, you got it. To retrofit 175,000 units. Multiply that times 300 bucks-a-car, give or take. You're looking at around $50 million. So the risk guy, he crunches the numbers some more. He figures you'd have a fireball collision about every 3,000 cars. That's 158 explosions. Which is almost as many plaintiffs as there are. These guys know their numbers. So you multiply that times $200,000 per lawsuit. That's assuming everybody sues and wins. 30 million max. See? It's cheaper to deal with the lawsuits than it is to fix the blinker.

        It's what the bean-counters call a simple actuarial analysis.

        Just like the scene from Fight Club.
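
        For what it's worth, the scene's numbers can be sanity-checked in a few lines. This sketch uses only the figures quoted above (the 158 explosions and dollar amounts are the movie's, not mine):

```python
# The scene's "simple actuarial analysis", using only the numbers quoted above.
units = 175_000           # cars on the road with the bad blinker circuit
retrofit_per_car = 300    # dollars, "give or take"
retrofit_cost = units * retrofit_per_car      # "around $50 million"

explosions = 158          # fireball collisions the risk guy predicts
payout_per_suit = 200_000 # assuming everybody sues and wins
lawsuit_cost = explosions * payout_per_suit   # "30 million max"

print(retrofit_cost, lawsuit_cost)   # → 52500000 31600000
print(lawsuit_cost < retrofit_cost)  # → True: cheaper to deal with the lawsuits
```

        Which is exactly the bean counter's conclusion: paying out every single lawsuit still costs tens of millions less than fixing the blinker.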

      • Doctor Memory says:

        There also seems to be a distinct subset of people who heard the prediction that "autonomous cars will have a safer driving record than humans" and processed that as a statement of settled, past-tense fact. I keep running into them in various fora online and I get tired just thinking about the basic statistics education necessary to engage with the argument.

        • thielges says:

          And “safer than humans” should not be the bar to clear for certifying AVs for roads, because humans in general are not that skilled. AVs should be expected to be significantly better.

          For the last couple of decades humans have been getting worse at driving, a trend that shows no signs of reversal. Digital distractions and reduced police enforcement are part of the reason. There are also features that protect car occupants, like side-curtain airbags and the brute-force protection provided by larger chassis like SUVs. The result is that collisions are on the rise even though fewer car occupants lose their lives, which masks the increased danger if you only look at the fatality stats. Looking at how fatalities are distributed between people inside and outside cars tells a different story: life has become more dangerous for pedestrians and bicyclists.

          • Malcolm says:

            Humans and collisions in the United States perhaps?

            In Australia at least per capita road fatalities have been decreasing for the last twenty years.

        • Lloyd says:

          Autonomous cars will have a safer driving record than humans, as humans get worse at driving.

  3. Dara says:

    Intersection of Lenora and 5th Avenue. Seattle.

    • That's a shitty section to test a human on, let alone a robot. It is peppered with one-way intersections and prohibited turns. The monorail supports add extra visual noise. The monorail itself is not exactly subtle when it passes overhead. The whole section is narrow and crowded with lunatics willing to step out into traffic.

      When I first saw this video, I was sure they were going to try to make a left-hand turn. Most navigation software believes that that's a simple two-lane road. Far too many humans believe they can just cross the double white line when their navigation bot tells them they've got to make a turn in a few blocks.

      Christ, what an asshole...

      • William says:

        It's legal to cross underneath the monorail tracks, but inadvisable:

        • That's insane! I find it incredibly hard to believe, but the MUTCD supports what you say: crossing solid white lines is discouraged but not prohibited. But now I'm wondering where the $136 ticket for crossing the gore comes from.

          The article you posted said this at the end:

          You cannot switch lanes or make turns across the lanes at the intersections on 5th Avenue. There are signs posted about that.

          The Google Streetview car does exactly that at 5th and Wall, amusingly. I see "NO TURNS" signage that prohibits turns across the lanes, but I don't see anything that prohibits a lane change in an intersection.

          Certainly that feels slightly more legal to me than crossing between the pillars. But then, nobody's ever accused Washington road laws of being consistent.

      • K J says:

        Can confirm, I hate driving through that section whenever I forget and turn onto it. It's the last spot I would expect self-driving to get right.

  4. Erin M. says:

    Next release features an automated sideshow option.

  5. Eric TF Bat says:

    Maybe I'm now old enough to fall foul of Clarke's First Law, but I can't see those computerised motor-car thingummies ever having the smarts to handle all the driving conditions of the world. Even in relatively civilised countries, there are way too many country roads and strangely-designed intersections, and that's before you get into countries where traffic lights are just sort of advisory decorative items: "the (traffic) code is more what you call guidelines than actual rules". Too easy to hack, and too expensive to retrofit every road in the world. Some effort may be made to simply declare some places off limits, but I wouldn't expect to see the Premier's limo around those parts, self-driving or not.

    I think it's like viable nuclear fusion: fifty years ago it was twenty years off being commercially available. In fifty years time, it will still be twenty years off.

    • k3ninho says:

      I want to live in a world where the insurance for one of these things makes them uneconomic -- at least for a generation or so. There's plenty of older drivers with experience of low-traffic roads who have to adapt to dense contemporary traffic, but throwing an unpredictable robot in there will make their job in piloting a car much harder.

      Waiting another generation will also allow in-car radar and safety braking to become cheap and ubiquitous. Also ubiquitous: vehicle-to-vehicle mesh networking and vehicle-to-infrastructure data, allowing your vehicle to swarm with the herd around it at any point in time. Right to repair is a thing, and you might licence tinkerers and flag them in the herd, so that when there's a near miss or collision, all vehicles around it capture information across the swarm to learn from the situation.

      That makes for collective smartness, anathema to the USA's 'exceptionalism of the individual' and the symbol of the autonomous individual that is the automobile.

      There's another economic consideration to insuring these devices: if there are bugs in the system at some rate of 'bugs per line of code', and those bugs cause collisions at some rate per mile travelled, then revenue is maximised by putting the most passenger bums on seats each time the code is executed, while running fewer vehicles minimises the number of collisions per billion instructions of control program executed. More passengers in fewer vehicles reinvents buses and trains.
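
      The fleet-size argument in that last paragraph can be made concrete with a toy model (all of the numbers below are invented for illustration, not drawn from any real AV program):

```python
# Toy model: demand in passenger-miles is fixed; collisions scale with
# vehicle-miles driven. Fuller vehicles mean fewer vehicle-miles,
# and therefore fewer expected collisions for the same service.
def expected_collisions(passenger_miles, passengers_per_vehicle,
                        crashes_per_vehicle_mile):
    vehicle_miles = passenger_miles / passengers_per_vehicle
    return vehicle_miles * crashes_per_vehicle_mile

DEMAND = 1_000_000_000  # passenger-miles to serve (invented)
RATE = 1e-6             # crashes per vehicle-mile (invented)

solo = expected_collisions(DEMAND, 1, RATE)   # single-occupancy robotaxis
bus  = expected_collisions(DEMAND, 40, RATE)  # bus-sized occupancy

print(solo, bus)  # solo fleet: ~1000 expected collisions; buses: ~25
```

      Same passenger-miles, same per-vehicle-mile crash rate: packing forty people into each vehicle cuts expected collisions by a factor of forty, which is the "reinvents buses and trains" punchline.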


    • MattyJ says:

      I always thought this was going to be the main problem with this bullshit. The world's roads have been built in such a way as to (marginally) accommodate humans driving/riding horses/vehicles. There's too much history and too many cognitive processing requirements in there for a robot to handle safely.

      There's a reason every cool robot video from Boston Dynamics is inside a lab or warehouse with custom-built environments ...

    • bmj says:

      ...that's before you get into countries where traffic lights are just sort of advisory decorative items: "the (traffic) code is more what you call guidelines than actual rules".

      You mean something like this?

      • Eric TF Bat says:

        That looks like fun*, doesn't it?

        * Note: use of certain words may be at odds with strict dictionary definition.

        Actually, it emphasises for me the pointlessness of the car horn. It's basically a binary data stream: one for honking, zero for not honking. Except that nobody does the obvious thing and expands it out to octal or hex or ASCII or anything. So the only data you have is 00001111000111110000000101111111000110110000111111111000 and you get no information from it. I guess a horn mainly exists to give the driver something to do so they don't feel powerless, NOT to allow any other driver or pedestrian to gain any knowledge at all.
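
        And if you do humour the joke and treat that honk stream as 8-bit bytes, it decodes to nothing printable, which rather proves the point (a throwaway sketch):

```python
# Decode the honk stream from the comment above as 8-bit values.
stream = "00001111000111110000000101111111000110110000111111111000"
codes = [int(stream[i:i + 8], 2) for i in range(0, len(stream), 8)]

print(codes)                               # → [15, 31, 1, 127, 27, 15, 248]
print(any(32 <= c < 127 for c in codes))   # → False: not one printable ASCII character
```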

      • Doctor Memory says:

        One of the things I find fascinating about traffic in countries where scooters and mopeds predominate is that while it looks terrifying (and okay, the injury rates are not great and crossing the street in Vietnam will take years off your life), the traffic actually moves surprisingly well. Mopeds are small, adding just a few feet ahead and behind of the rider, and almost nothing laterally. And they're extremely maneuverable and can stop on a dime. Cars are big: suddenly one person is taking up tens of square feet in the street, and depending on the car height maybe no one can see around you.

        Even in the video above, it seems like the worst moments of congestion are caused by cars bunching up, which turns into a wall that the scooters can't get past.

        If in any American city you tried to run the equivalent number of cars through a major intersection as is happening in that video, the result would be instant, near-permanent gridlock. For hours.

        And yeah: this bodes really poorly for AV applications outside the developed world. For an AV system to work in Jakarta or Ho Chi Minh City, step one would be to clear out all of the scooters (ie: the vast majority of the humans using the roads) and replace them with... vehicles that are an order of magnitude less space-efficient. Congratulations, you've just completely fucked a previously functional city.

    • K J says:

      The software in general seems damn close to 'human or better' on limited access highways. That's probably where it should stay until we get several orders of magnitude past the humans in reliability. For anywhere that you could reasonably expect a human to cross the street this still seems like a bad idea to me.

  6. Michael Kohne says:

    Do I understand correctly that this negligence isn't legally actionable until he does actual damage? Sigh.

    • Derpatron9000 says:

      Here, this would class as 'dangerous driving' if either a complaint were made or law enforcement saw it; at minimum they'd receive a caution, rather than the savage beating they so clearly deserve.

    • グレェ「grey」 says:

      At least in California, you can be given a citation for reckless endangerment without doing real harm/"actual damage". I am not a cop, nor a lawyer though, and I would really rather not explain why I know that because it's a horrible tale I would rather not think about any further. I am uncertain if such things apply in other jurisdictions.

      No one was harmed nor injured, but did I ever have some fines to pay. My insurance premiums may have also gone up for a while. Thankfully, that was back around 2010. In related news: I will never again make any effort to attend Autonomous Mutant Festival.

  7. Eric says:

    One advantage here is that once this guy inevitably kills someone, the prosecution will have a shitload of self-recorded evidence about his careless attitude towards safety.

  8. tfb says:

    The only real lesson from this is that humans don't learn very well. Anyone who can read can read the history of AI which is entirely made of 'look look, we've partly solved 10% of this problem, here's a cool demo; now ignore the fact that our solution actually isn't, that the problem is much worse than exponential, and that to produce our non-solution we've shovelled special-purpose hardware at it and GIVE US ALL THE MONEY'. And the money people dutifully hand them all the money, and they spend ten years not solving the problem and then they all go bankrupt and there's an AI winter. And now, look look, we've, again (again? why did we have to do it again?) solved 10% of the natural language problem, this time by shovelling all the text that exists in the world at some vast system made of special-purpose hardware which is, well, only shit as a result (we did at least discover that most people are really bigoted based on what it parroted back to us: something we ... already knew). But this time it will work, we just need more special magic hardware and more training data (oops, you used it all) and we'll have something we can use. And the same for cars, except this time the training data comes by just running over and killing a lot of people.

    And humans don't learn, so even though the last AI hype cycle is really recent history – well within a human lifetime – they continue to shovel all the money at people who hype essentially the same nonsense that they hyped last time and all the times before that.

    The real question is how many people they will kill before the money runs out, and whether there will be enough left to solve the actual real problem which will kill most of the rest of us instead of the idiot pretend problems AI fails to solve.

    • グレェ「grey」 says:

      Yeah, more or less everything you wrote here is on point.

      I had written a reply, perhaps one which was too lengthy with lost of examples throughout history and even mythology, and after clicking "Post Comment" it vanished into the ether.

      Long ago, I would have been habituated to copying a reply to a local buffer due to the frequency of such challenges. Sometimes, computing seems to get to be almost so reliable, that we put to much trust in it, even when earlier lessons taught us better.

      If you've been in this field long enough, and you encounter Alan Kay's phrasing of, "reinventing the flat tire" it really articulates the distressing reality better than any other condensation of wisdom I've encountered thus far.

      Though do I ever wish I had copied the lengthier reply to a local buffer even if for personal reference. Drat.

      • グレェ「grey」 says:

        Error: "lost of examples" was clearly a typo, intended to be "lots of examples" though the Freudian slip nature given the data which vanished in my previous reply effort seems mildly entertaining.

      • グレェ「grey」 says:

        Additional errata: "we put to much trust in it" should be "we put too much trust in it".

        Albeit, me putting trust into any technology which lacks an edit feature when BBSes with in-line editors (even if they were line editors) had such functionality before the WWW existed is nonexistent.

        At least there's that "HERP DERP" checkbox?(/sarcasm) There's not even a delete post option. I have no idea how some users get icons, maybe those privileged sorts also are blessed with post edit or delete capabilities.

    • says:

      My “Investor’s Corollary” to Moravec’s Paradox states: “A true breakthrough in AI will appear at first glance mundane, reliably achieving capabilities that a one year old could achieve.”

      So, no more chess. No more go. Let’s put self-driving cars on hold. How about “peek-a-boo”, following an object and inferring it’s still there when it rolls behind the couch? Or inferring a gestalt triangle from pacmans? Or recognizing that a cast shadow doesn’t actually change the object you’re trying to perceive?

      What I’m trying to say is that a qualitative change of approach is required to achieve 5+ 9s. The red herring with deep learning methods has been that they always get better with more data… these aren’t 10% demos; we are seeing jumps from 90% to 98% accuracy. I understand this apparent progress can be compelling for investors. But, fundamentally, the assumptions of deep learning models are flawed. In principle they cannot achieve the levels of accuracy required for safe, reliable operation in “real world” scenarios. (Proof akin to Minsky-and-Papert, but for 2021, forthcoming.)
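
      To put numbers on how far a 90%-to-98% jump is from "5+ 9s" (a back-of-the-envelope sketch; the accuracy-to-error-rate framing is my reading of the comment, not something the commenter spelled out):

```python
# Accuracy gains look impressive until you convert them to error rates.
# 90% -> 98% shrinks errors 5x, but "five nines" (99.999%) needs
# the error rate driven down to one in a hundred thousand.
demo_error = 1 - 0.98            # 2 errors per 100 decisions
five_nines_error = 1 - 0.99999   # 1 error per 100,000 decisions

factor = demo_error / five_nines_error
print(round(factor))  # → 2000: today's best demos are three orders of magnitude short
```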

      Machine learning can achieve the desired goals, but not when the problems are cast as obtusely as they have been again and again. A basic understanding of the world’s physical nature needs to be learned before natural language or driving.

  9. SED says:

    This reminds me...
    Someone in Israel with perhaps too much money bought a Tesla, and it ended up speeding up at an intersection and T-boning a truck. First of all, that's stupidly expensive, especially because it had to be transported to Israel, and second of all, now that it's wrecked there isn't anyone around to service it!

  10. jwz says:

    No. Fuck that reckless, homicidal grifter.

  11. david says:

    At last, bringing truth to the saying: "Nobody goes there anymore, the traffic is too terrible."

  12. some name says:

    Shocking that they are doing these kinds of things.

    Didn't it stop after they killed some lady?

    Where is the humanity?

    Unsurprised in general, but I thought they at least cleaned up stuff when hounded by the media.

  13. Andrew says:

    I didn't see any pedestrians on the touchscreen. Possibly they are there, but too hard to see in this video. Does the car's system register and account for them at all? 'Cause if not, "there's your problem right there".

Leave a Reply

Your email address will not be published. But if you provide a fake email address, I will likely assume that you are a troll, and not publish your comment.

You may use these HTML tags and attributes: <a href="" title=""> <b> <blockquote cite=""> <code> <em> <i> <s> <strike> <strong> <img src="" width="" height="" style=""> <iframe src="" class=""> <video src="" class="" controls="" loop="" muted="" autoplay="" playsinline=""> <div class=""> <blink> <tt> <u>, or *italics*.

  • Previously