Guilty of Walking Without Lowjack

How it started: "Robot cars will be safer than humans!"

How it's going: "To make them barely work at all you must re-design all public spaces, and strap radio transmitters to anything you don't want killed."

Biden's $1.2 Trillion Infrastructure Bill Hastens Beacons For Bicyclists And Pedestrians Enabling Detection By Connected Cars:

Beaconization -- or equipping bicycles and pedestrians with transponder beacons that can be spotted automatically by sensor-equipped cars -- has been given the official seal of approval in the U.S., reveals a tucked away part of the $1.2 trillion bipartisan infrastructure bill passed on November 5. [...]

For tech companies and affluent cyclists, the future will be rosy [...] The more likely version of the future is deeply dystopian, says transport historian Peter Norton. Only the beacon-equipped will be spotted, he fears. Those choosing -- say, for economic or privacy reasons -- not to fit bicycle-to-vehicle beacons will be blamed for being hit by sensor-equipped cars.

Same as it ever was: The Invention of Jaywalking:

Local auto clubs and dealers recognized that cars would be a lot harder to sell if there was a cap on their speed. So they went into overdrive in their campaign against the initiative. They sent letters to every individual with a car in the city [...] The industry lobbied to change the law, promoting the adoption of traffic statutes to supplant common law. The statutes were designed to restrict pedestrian use of the street and give primacy to cars. The idea of "jaywalking" - a concept that had not really existed prior to 1920 - was enshrined in law.

Previously, previously.


50 Responses:

  1. MattyJ says:

    And I'm sure Tesla will be more than happy to sell us a Tesla-compatible beacon for a couple hundred bucks.

    Although maybe it could be fun to surreptitiously stick them under manhole covers or on medians etc.

    • k3ninho says:

      Who owns the identifying keys on these beacons, i.e., how do the cars know whether they're running over a valuable campaign donor or a valuable organ donor?

      And if they're not tied to people, then we're going to build 'works of art' that ping them back and forth across major roadways (from an air cannon or bowling machine into a hopper and back again) to deny cars access to any roadway, reducing the pollution caused by vehicles in town.


    • phuzz says:

      Manhole covers don't move, how about pigeons and stray dogs instead?

      • jwz says:

        You say that like it's a joke, but you know who else deserves to not be summarily executed by robot cars? Stray dogs.

    • K says:

      And right there in the second part of your comment is why this will actually never work. Pair it up with broken transponders resulting in wrongful death lawsuits for extra bonus points.

      • Lloyd says:

        You always check the batteries in your smoke alarm.

        Don't forget to also check the batteries in your personal liberty alarm.

        Just another public education initiative...

    • Mildred Bonk says:

      So passive! Throw a beacon in front of a Tesla and watch it spin out.

  2. I'm sure whoever came up with this idea would consider the resulting reduction in the local homeless population a bonus.

  3. NB says:

    A while ago the Dutch legal system shifted the onus of proving innocence onto the driver in any accident between a car and a more vulnerable road user, like a cyclist or pedestrian. Plans like those outlined in the article seem to aim to preempt any such move.

    I'm looking forward to stringing lines of beacons above my street like lanterns against evil spirits to keep the cars out.

    • グレェ「grey」 says:

      I've only visited The Netherlands a few times, but on the whole they seem extremely sane. Berlin has HUGE sidewalks, but an unwritten de facto convention that the part of the sidewalk closest to the road belongs to cyclists, so pedestrians are scorned and yelled at for walking there. Amsterdam, however, has physically partitioned bicycle lanes, separate from the automotive roads and the pedestrian sidewalks. It seemed brilliant! Much better than HUGE sidewalks, or paint on the road designating a bike lane, "protected" or otherwise. Something about physics and curbs and physical partitions just seemed intrinsically better than the alternative paradigms I have observed elsewhere. Albeit, I was a little bit confused by the miniature cars which I guess can also go in those bike lanes, but research implied they are constrained to 50cc engines or smaller, so I guess the idea is that they will be too slow to reach velocities likely to be harmful given their mass and size? Again, the Dutch engineering seemed to be aware of physics and to incorporate it into their designs in ways that Germany and the USA have not even tried, as far as I have observed.

      There are videos on YouTube which articulate why cyclists wearing helmets are rarely encountered in The Netherlands. The implication is that because bicycles are so widely used, drivers are most likely to have been experienced cyclists before they ever learned how to drive (same for me as an American, honestly) and thus have preconditioned empathy for giving cyclists the right of way. If that becomes a legislative onus on drivers, that seems sensible to me as well. While I realize that Amsterdam is not unique in having a city center where automobiles are prohibited, it is all too rare, and as a human it is wonderful to be able to wander around and not worry about cars.

      Yet again, I am tempted by the Dutch American Friendship Treaty and seeing if I can emigrate there, but I fear it may be beyond my present means from what I have researched. Perhaps someday I will be more fortunate? The cycling and automotive paradigms were just a drop in the bucket of things which seemed more sensible; even the way the Dutch re-cobbled/paved their streets in conjunction with laying open-access fiber optics, as described by Herman Wagter in a 2010 Ars Technica article, seemed night and day compared with the San Francisco Municipal Fiber meeting I attended in 2017, which had unsurprisingly fallen to corporate corruption. If there is any silver lining to residing in the USA, as an American, maybe there is a hope in hell that people such as I can change things for the better? It feels like a never-ending uphill battle though, and my lifespan and resources as a human are limited; the temptation to simply relocate to a nation where things are less messed up to begin with seems as if it may be a much wiser option, if I can figure it out. The visa issues I encountered in the EU in 2014 meant that I stayed the 90 days my passport allowed me in the Schengen Area before I regretfully returned stateside.

      • NB says:

        People in the Netherlands don't wear helmets because we're dumb. And we sold that fiber network company to the incumbent telco who then halted all buildouts for a decade until they started losing too many customers to its main competitor's DOCSIS 3.0 network.

        • グレェ「grey」 says:

          I do definitely think that wearing a helmet while riding a bike is a good idea, but then I have also worked in the mining and construction industries, where wearing a hard hat was a matter of course even though my personal job duties were related to IT, so "safety first" is pretty deeply ingrained in me. I think this was the YouTube video I was thinking of? Regardless, bicycle riding seemed much more prevalent in The Netherlands than anywhere I have resided in the USA, which I think is a good thing overall.

          I'm not too sure what to write about the fiber network company selling out to the incumbent telco (or as I tend to phrase them: legacy of a vestigial convicted monopoly) other than: I am sorry, that seems dismal; that story in 2010 was inspirational, and I had hoped many others would follow that model. Even earlier, in Alameda, California, they also had a municipal fiber network, which they sold to Comcast, near as I can discern, after some right-wing nut job with a blog who was pro-privatizing everything made a big stink about it. That was despite the fact that Alameda was one of the few places insulated from the Enron power price-gouging scandal which occurred after AB 1890 passed in California in 1996, because Alameda, like Santa Clara County, has municipal power and thus was insulated from some of the more egregious market manipulations. As we have seen in California with PG&E being charged in multiple manslaughter cases (and pleading guilty in at least one so far) over unmaintained power lines causing various fires, deregulating some industries comes with its own share of pitfalls. California's Proposition 80 in 2005 attempted to repeal that bad legislative posture, but it failed to pass for reasons that still elude my sensibilities.

          There has been relatively little in the way of legislative measures for high-speed internet network buildouts, though I was part of building out node one of the Digital California Project back in 2001, which was only a $32 million expansion of CalREN to attempt to bring higher-speed networking to K-12 public schools. However, at least at San Luis Obispo's County Office of Education where I worked, one of the best-funded counties in the state, that only gave us a 45Mbps uplink, hardly very forward-thinking, particularly when private companies such as Silicon Graphics had 10Gbps interconnects in the 1990s. The push/pull between private-sector and public-sector goods exists outside the USA too, no doubt, but even in progressive states such as California the status quo is mostly abysmal. Given that much of the economy has shifted to re-inventing commercial timesharing as so-called "cloud" computing, you would think more people, especially in business, would realize they might be able to sell their terrible SaaS shovelware if it were easier for end-users to get high-speed network access. There are constraints to compression and transcoding and latency even with gobs of computational power; in my experience, there is never sufficient bandwidth. I kind of doubt that the pandemic, with people Zoom-ing their classrooms and offices, has done much to improve matters either, despite the obvious needs.

          Albeit, the dot-bomb era created some confusing situations, where companies were laying fiber in the USA and then going insolvent, such that some urban areas like Portland purportedly have 20X the fiber required to light up every address, but most of it remains dark, with nebulous ideas of ownership after the companies which laid it went bankrupt.

  4. Dude says:

    Same as it ever was

    Indeed, as made clear in this 9-min. video from 2020. Repeating a mistake we could be fixing:

    Also, this (from 2018):

  5. Carlos says:

    jesu christi.

    So now the choice is "Let government, large corporations, and especially tech companies track your every move"(*) or "die messily"?


    * I'm sure the apologists will say there will be no identifying information in the beacons. And I'm sure that's bullshit.

  6. Eric says:

    I'll be more than happy to bury plenty of these beacons just below the surface of the asphalt.

  7. thielges says:

    There’s no reason AVs cannot be more competent than the average human driver. That includes understanding what’s in the street without pedestrians taking additional measures to be seen.

    Relying on an electronic beacon is almost guaranteed to never reach full compliance. Motorist advocates have been pushing for laws requiring pedestrians to wear light-colored reflective clothing at night for decades, but even compliance with that passive visibility hasn’t ever happened. So expecting people to carry an extra device which might have a battery to keep charged is unrealistic.

    I’m certain that we will eventually create AVs more competent than human drivers, though it might occur a few decades later than what the pro formas of the companies betting their futures on it would wish. That’s tough luck and life in the high-tech world. Sometimes you waste millions developing a dud product like the Apple Newton, only to get it right with a home-run product a few decades later.

    In the meantime, resist dumbing down streets with personal beacons just because our AVs aren’t smart enough. Insist on minimum competence, even if a few high-flying companies go bankrupt.

    • Carlos says:

      > There’s no reason AVs cannot be more competent than the average human
      > driver. That includes understanding what’s in the street without
      > pedestrians taking additional measures to be seen.

      I'm positive that this has not been demonstrated to anyone's satisfaction.

      The absolutely brutal performance seen so far even in well-marked, weather-free environments leads me to suspect they may never be able to cope with real-world conditions as seen around here: roads without marked lanes, roads without marked or visible edges, visibility reduced to nothing by snow, ice, and fog, roads that you have to technically violate various laws to actually drive on... the list is endless.


      • Derpatron9000 says:

        I'm positive that this has not been demonstrated to anyone's satisfaction.

    • Dude says:

      .....or people can admit that cars are just as much a danger to the average person via collision as they are via environmental damage. The whole beacon/reflective clothing thing is just another bullshit way for auto companies to say, "It's not our fault that our 2,000-lb machine killed you when it slammed into your soft, meat-sausage flesh! No, it's your fault for not being fast enough to get out of the way of an SUV trying to reach Mach-10 on a crowded city street!"

      That's the same excuse gun-makers use: "Fuck you and your 'gun sense' measures. Don't wanna get shot? Stop being so shootable!"

      And, like guns, waiting for "smart" tech to solve the problem doesn't change the fact that they keep putting the onus on victims rather than addressing how making sure anyone and everyone has access to this dangerous device is the problem.

    • jwz says:

      There’s no reason AVs cannot be more competent than the average human driver.

      Oh for fuck's sake, YES THERE IS. I give you as exhibit A the entirety of Computer Science for the last eighty years.

      I’m certain that we will eventually create AVs more competent than human drivers

      I say again -- BULLSHIT. It will never, ever, ever happen.

      Solving the self-driving car problem requires artificial general intelligence, not One Neat Trick with neural nets that are just a handwave better than Markov chains.

      It's a CON. Anyone who tells you that this is a solvable problem is either profoundly ignorant or lying.

      • J. Peterson says:

        There's also Exhibit B: Human drivers don't do very well in challenging driving conditions either. They screw up all the time even in reasonable driving conditions.

      • グレェ「grey」 says:

        I concur with jwz, no surprise.

        It's one of those sophomoric pitfalls. I remember as a young coder, before the 1980s AI Winter set in (and long before Artificial Intelligence was renamed to Machine Learning to try to get the same unmerited investment funds), wondering: "well, if they have autopilots for planes, why not self-driving for cars?"

        Turns out, Flatland: A Romance of Many Dimensions by Edwin A. Abbott had already explained why, in 1884.

        Aeronautics has many more dimensions available for collision avoidance. Consequently, collision avoidance via autopilot in planes is actually an easier problem to solve than self-driving for cars.

        Cars on a road have a flatland, the road, as a constraint. Encounter an obstacle? You are further constrained to slowing down, speeding up, and left or right movement, and that's it. You don't have many dimensions to take advantage of before the moving object (car) collides with the obstacle (human, pigeon, stray dog, other car [potentially with humans, a domesticated dog, and a pet bird as passengers]).

        The dimensional constraints of driving actually make collision avoidance measurably more difficult than it is for aeronautical autopilots, which have slowing down, speeding up, left, and right, just like cars on roads, as well as up, down, diagonals, etc. Planes, by not being bounded by the dimensional constraint of a flat surface, have significantly more dimensions available to avoid other objects, even at higher velocities. Unsurprisingly, fatalities in flight are significantly lower than fatalities from automotive accidents. That metric was true even before autopilot systems were in place (which began in 1912; for reference, the first recorded automotive fatality was in 1869). If you watch pilots such as 74 Gear, for example, you'll note that a not exactly insignificant number of plane-related incidents are related to runways and the ground. This is not coincidental: planes, when bounded by the same dimensional constraints as cars, find themselves just as prone to accidents.

        Arguably, the only commonly used vehicular form of transit more dimensionally constrained than automobiles is the train (first reported train passenger death: 1830), where left and right collision avoidance are not options for the engineer, only speeding up or slowing down. Perhaps railway junctions are available in some limited circumstances, but those are typically controlled by someone other than the engineer on the train. Indeed, while there are examples of rail systems with no engineers/drivers (e.g. SFO's AirTrain), the reality is that such systems are typically "controlled" remotely using CBTC (Communications-Based Train Control), which has GoA (Grades of Automation) levels, with only GoA level 4 systems being devoid of humans in operation. So in reality they aren't driverless at all; they have a multiplicity of oversight, just with operators not visible to the passengers, similar to drone pilots not being present on the drone. However, despite the additional dimensional constraints relative to cars or planes, trains have an ENORMOUS advantage inasmuch as rail systems are typically designed and built with right-of-way as a given. Even in stupid places where rail systems are designed to intersect with automotive roads and pedestrian sidewalks, there are still typically railroad crossing gates warning of an approaching train and prohibiting crossing for a duration before and after the train's approach and departure. IMHO that is still a bad stopgap, given that many rail systems are deliberately designed to never intersect with automotive roads (e.g. BART), and that should be the default; however, railroad crossing gates still establish a right of way.

        Cars, correctly, do not ever have a right of way which supersedes pedestrians', and the moment legislative onus shifts to prioritize cars above pedestrians is a day I am sure any human will regret being born bipedal instead of with wheels.

        Until then, I guess those who have learned from history are doomed to watch others repeat the same mistakes that have been known for an awfully long time? ;-/

      • Glaurung says:

        It started out as a self-imposed delusion on the part of the tech industry that self-driving cars couldn't be that hard to build.

        Now, after a decade-ish of hype, it's become a con on the part of some tech companies in the business of building driverless cars ^H^H^H bilking investors with fairy tales of a driverless future.

        But I think there are still companies out there that continue to be in the self-delusion phase - apple, for instance, has been pouring money down the rathole of an apple car for years, but since they never talk about their future projects, it's not for purposes of running a con.

        • jwz says:

          It started out as a self-imposed delusion on the part of the tech industry that self-driving cars couldn't be that hard to build.

          That is inconceivably generous of you. No, it started out as a flat-out lie. It started out as snake-oil to trick gullible investors into sinking billions into a boondoggle that the C suite knew would never work as advertised.

          Look at these absolute lies that Lyft was getting the credulous tech press to publish for them in 2016 --

          TechCrunch, Sep 2016: Lyft’s ambitious future vision includes self-driving dominance by 2021:

          How's that decline in human rideshare drivers going, hmm? You know, the one and only thing that ever had a hope of making any of these companies return a profit to the investors? Going good?

          • Glaurung says:

            You're not remembering far enough back. Google started mucking around with autonomous cars in 2009. Back then, the hype about how in just a few years they'd have robot cars that could drive more safely than humans? That was self-imposed delusion. The grift didn't get underway until they realized that they were wrong, and needed a way to cover up the fact that they'd poured a ton of money down a useless rathole.

    • tfb says:

      That argument is pretty much equivalent to 'there's no reason people can't live on the surface of the Sun'. Because, clearly, there isn't: you can just use some heat pump to keep the temperature low enough, and travel fast enough that you can keep the effective gravity low enough to be survivable, and probably some other things I have forgotten. But, you know, no-one is going to be living on the surface of the Sun any time soon because it's not remotely practical, and may never be so.

      Except the people who might work out how to live on the surface of the Sun are engineers and scientists, while the people trying to make autonomous cars work are software 'engineers' and computer 'scientists'.

      • thielges says:

        Folks, you’re missing the point which is not whether AVs will exceed human capabilities. That’s a wide open topic full of prediction and conjecture best left for lively discussions at the bar.

        Instead, the point is the criteria we should expect such AVs to exceed before being allowed on the road. And specifically for this blog entry, we’re talking about whether AVs should be allowed the additional crutch of personal beacons. I’m saying a firm “no” to beacons or anything else done to dumb down the environment (signs translated into QR codes, special lanes for AVs, etc.). In addition, AVs should significantly exceed human competence before being certified for use.

        If you’re in the “never gonna happen” camp then fine, AVs will then never exceed human competence and therefore should never be certified.

        But what is relevant to this thread is the importance of holding the bar high enough for AVs so that safety is not compromised. And requiring personal beacons unnecessarily lowers that bar, impacting safety.

        The inclusion of funding for beacon research into the infra bill is definitely the wrong approach. It shows that the timelines and profits of influential VCs and other wealthy players are steering the path of AV certification the wrong way. Instead of changing the environment to suit AVs, we should demand better: make AVs compete with the best cohort of drivers on the status quo of our built environment.

        And if you want to discuss the future viability of AI instead then you’re gonna have to buy the first round :-)

        • Dude says:

          There’s no reason AVs cannot be more competent than the average human driver.

          Then why'd you make it the focus of your entire comment? Your whole argument seems to be, "Sure this beacon thing was badly timed, but let's embrace giving our lives over to sentient fuel-guzzlers that treat us like amusement park targets. I'm sure they'll get it right eventually."

          Saying that one particular quick fix won't work still ignores the futility of trying to bandage an injury rather than cure the disease. Even suggesting the beacon thing in the infrastructure bill means those who wrote the bill are relying on the Musk-oil types to fix the problem in the private sector (they won't), and that they're ignoring the life- (and environment-)saving solution of not bowing to car-makers, a solution proven by closing streets to cars during the pandemic.

          THAT is the point of the thread, not whether AI will be "the digital super-smarty" that solves all the world's problems. (It won't.)

          • thielges says:

            I led with that comment though it is certainly not the focus. Maybe I should avoid tangential controversy.

            Arguing over whether or not AI is viable just churns up the dust and obscures the immediate issue, which is: what hurdles must AV makers clear to put their equipment on the road? This will soon be decided by politicians who don’t know what is feasible or not. However, they do know how to create regulations.

            The AV industry has been trying to lower the bar with “common sense” suggestions like:

            - AVs need only be equal to or better than the average driver. Turns out the average human driver isn’t exactly a good role model.
            - modifications to the environment that make the AV’s role easier, like this beacon situation. Works great so long as everything and everybody is in compliance. If not, then tough luck: you’re to blame, not the AV.

            Let’s help politicians see the flaws in these proposals.

            • elm says:

              You started out by asserting -- without any reason or evidence -- that something is possible and then that it may not be possible for many decades.

              If it's not reasonable to accomplish for many decades, then the answer for today's politicians should be "No, keep your dangerous robots off our streets."

              Let the politicians of 2050 make new laws in 2050 about the abilities of robots in 2050. It's stupid to think anyone today could create a law more reasonable than "Don't do this."

              • thielges says:

                The problem is that the politicians of today are being pressed to make decisions based on today’s half baked AV solution. Most of them don’t know whether or not the tech is mature enough to go forward or hit a hard pause. So they’ll ask around for expert opinions. Guess where the most convenient expert opinions will be found? AV industry lobbyists.

                This is why it is important to press for objective, easily understandable criteria. Politicians and regulatory agencies will understand and embrace reasonable regulations. For example, “AVs must be at least 10X safer than human drivers, with no mods to the environment” is very similar to the criteria used to approve new drugs. And FWIW, most politicians are naive about pharma tech too.

                And of course the AV industry will push back for relaxed standards. This is the immediate battle that is coming whether or not you believe AVs are viable. Set the bar reasonably high, and be prepared to explain that if AVs can achieve 1X human competence, it is reasonable to expect that 10X is also possible; it just takes more time. There’s absolutely no urgency to certify AVs.

                • elm says:

                  The reasonable regulation is "No". It seems you think you can haggle with trillion dollar companies and compromise with them.

                  You cannot.

                  • thielges says:

                    Sure, go for a hard “no” if you like. A diversity of opposition is better than a monoculture. But be prepared to answer the follow-up “why?” in lay terms that can compete with the well-funded AV lobby’s story.

                    Alternatively, an objective-criteria approach is less likely to get mired in a discussion of theory and technological trajectory. Do we want to kill fewer people or more? If fewer, then raise the bar. Keep in mind that the “do nothing” approach is unacceptably killing a hundred Americans every day.

                    The question of allowing environmental modifications will also be familiar to policy makers. The federal DOT constantly updates standards that states are expected to comply with, including the recently updated standards on road markings (MUTCD), where mods to support AVs are likely to be embodied. So how long does it take states to comply? The answer is “never”: the road system is so vast that it is never feasible to fix everything. A similar argument applies to personal beacons, though for different reasons.

                  • elm says:

                    An "objective criteria approach" will avoid getting mired down because it gives the AV lobby exactly what they want.

                    They are willing and eager to lie about the reliability and safety of their systems now. They will tell you bigger lies if you ask for bigger lies, because telling those lies gets them billions of dollars.

                    Your imaginary bar setting will not change the number of people they kill. They want the permission slip so they can get money. They won't let little things like honesty and death get in the way.

                • jer says:

                  AVs must be at least 10X safer than human drivers [...]

                  "Automated driving MUST be responsible for NO MORE THAN one in every ten actual driving accidents." Got it! I'll instruct the machines accordingly.

                  • thielges says:

                    Try substituting a better-qualified driver who’s 10X safer in place of the AV in your response, to see if your point still makes sense.

    • Elusis says:

      So expecting people to carry an extra device which might have a battery to keep charged is unrealistic.

      Make sure your toddler is wearing their beacon! Wouldn't want them to cause an accident!

      Pity about poor Fido though. He should have known better than to chew through that collar. Silly Fido, what did you think would happen, running into the road without your beacon! Good thing all dogs go to heaven.

    • Dave says:

      There is a reason AVs will never be more competent than human drivers. Any AI technology could also be adapted as an aid to the human driver instead of for self-driving. Think heads-up displays, automatic emergency braking, etc. A human aided by AI will always be ahead of AI alone.

      • thielges says:

        You’re assuming that the human is a good driver and uses the assist information to drive more safely. A large number of collisions are caused by aggressive and reckless driving. If auto-assist tells a driver “nothing ahead on the road for the next mile,” they might punch it and speed excessively. The same goes for tired drivers who overly rely on assist. I think a few Tesla drivers have crashed through overconfidence in the lane-keeping assist.

        But for well rested, sane, and safe motorists, you’re right.

      • Nick Lamb says:

        This does not reflect our experience with real safety systems. Remember what they say about how hard it is to idiot-proof things?

        But this isn't really about idiots, the problem is that humans lie to themselves. They say they want safety, but actually what they care most about is comfort and convenience. So, to actually deliver safety we have to call their bluff, and automated safety systems do that.

  8. nooj says:

    I think it's reasonable to believe computer-aided human drivers can and will perform better than humans alone. (By "better" I mean "have fewer accidents and reduced severity of accidents".) Humans are not great at maintaining optimal performance on repetitive tasks for long periods, or when fatigued, or in unfamiliar conditions, and it's reasonable that well-timed alerts and drive-by-wire systems can help improve humans' performance. We already see this with airplanes and robotic surgery. I understand Tesla has logged millions or billions of miles with few accidents. Even considering Tesla's sample is horribly biased and skewed or whatever, it shows promise.

    Regardless, I think society has decided that pursuit of self-driving cars is desirable, or at least inevitable. Or at least, we aren't willing or able to ensure that they are not developed and implemented. So here we are today, with AVs looming in our future.

    Given the above, it's inevitable that our roadway infrastructure will be modified to accommodate such technology. That means having objects self-identify to ease the burden on AV tech. Stop signs will have RFIDs on them. Traffic lights will broadcast their state and upcoming state. Cars will broadcast their relative speed, direction, and blinker status. Bike lights will broadcast "I am a bike!" It will only take one or two major manufacturers implementing a communication protocol before it becomes standard. Sabotage might happen at first, but eventually it will become illegal or impractical.
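    To make the idea concrete, here is a minimal sketch of what such a beacon broadcast might look like. The message fields and JSON encoding are purely hypothetical, not any real V2X standard:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class BeaconMessage:
    """Hypothetical bicycle-to-vehicle payload; all fields are invented for illustration."""
    kind: str           # e.g. "bicycle", "pedestrian"
    lat: float          # position, decimal degrees
    lon: float
    heading_deg: float  # direction of travel
    speed_mps: float

    def encode(self) -> bytes:
        # Serialize for broadcast; a real beacon would use a compact binary format.
        return json.dumps(asdict(self)).encode("utf-8")

    @classmethod
    def decode(cls, raw: bytes) -> "BeaconMessage":
        return cls(**json.loads(raw.decode("utf-8")))

msg = BeaconMessage("bicycle", 37.7749, -122.4194, 90.0, 4.2)
assert BeaconMessage.decode(msg.encode()) == msg
```

    A real standard would also have to address authentication and privacy, which is exactly where the concerns above come in.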

    If we don't like this trend, we should do something about it. jwz's effort is building a journalistic platform from which we can raise awareness. Mine is educating people who design AI systems to design and use them in ethical ways.

    jwz and I aren't enough. It will take all of us to effect change! What kinds of efforts are you able to contribute?

    • jwz says:

      Comparing self-driving cars to airplane autopilots is completely specious.

      First of all, that is not what autopilots do! They maintain course and heading. The more sophisticated ones can follow a pre-programmed course. None of them do collision avoidance. But second, and most significantly, pretty much all of the time that any two planes are within visible distance of each other, they are under centralized command from a single person at air traffic control.

      Comparing planes to an automated subway system -- valid. Comparing them to autonomous robots on streets -- absolutely not.

      I understand Tesla has logged millions or billions of miles with few accidents.

      You understand that why? Because that's what they have claimed? How did they prove it? How cartoon-slick and obstacle-free were these roads that they logged billions of miles on? Where did they publish their testing methodology? The answer is: you don't know, and they didn't publish, because all of these companies consider that data a trade secret. We only find out about it during discovery after they kill someone.

      • Tyler says:

        We only find out about it during discovery after they kill someone.

        LOL. You're assuming they haven't learned anything from the tobacco or asbestos industries.

      • nooj says:

        Agreed. But I'm not comparing the whole task of flying to that of driving.

        I'm only saying that if we tried ("we" meaning "car manufacturers"), we could build a better interface between a human driver and a car. One that does better than just providing a clear windshield and dutifully responding to all input exactly as given. For instance, anti-lock brakes, traction control, and lane-following do a good job of delivering what the driver intended.

        • elm says:

          If you begin by assuming that Tesla collected, processed, or reported their telemetry data honestly then you already lost.

          Tesla has billions of dollars in pending/deferred revenue from pre-selling their self-driving scam.

          They have lied about its abilities so far and will say whatever it takes to be able to book that cash as business revenue.

        • jer says:

          Collision detection sucks unless you drive very slowly. At the expected speed on any road it will trigger warnings and even instigate action (autonomous braking) when trees or lamp posts or barriers loom up "suddenly", regardless of whether you already planned "evasive" action (by making the turn at the right moment). Lane departure warning systems are pointless when (1) you're paying attention already, and/or (2) you're on a lower-speed road where the size of your vehicle and the width of the road require you to cross the lane markings. Such simple "intelligent" systems have been in place for years and do not help an astute driver at all.

      • cthulhu says:

        Aerospace engineer here who works on this kind of thing in my day job, with a few clarifications: a typical autopilot in a typical newish general aviation airplane (say, a Cirrus SR22) can hold airspeed, altitude, and course/heading. You can also get what’s called a flight management system in that Cirrus, and when coupled to the autopilot, the aircraft can automatically fly an entire preplanned route, from about a minute after takeoff until about 30 seconds prior to landing, assuming there’s a SID (standard instrument departure) at the originating airport and a STAR (standard terminal arrival route) at the destination. Yes, ATC is involved too, and an active area of research for unpiloted aircraft is how to enable seamless, safe ATC management of said UAVs in busy airspace.

        Also, having an aircraft perform automatic “see and avoid” is currently at TRL 6 and will be at TRL 7 (demonstrated on a prototype system in the relevant environments) in the next couple of years.

        All that said, the autonomous aircraft situation is orders of magnitude easier than the autonomous car problem, largely because (a) the position accuracy requirements are drastically less difficult - hundreds if not thousands of feet of position accuracy with the exception of fully autonomous takeoff and landing, where you do need position accuracy in the 10-20 ft range; and (b) the environment is much more regular and controlled. Truly autonomous cars that can handle any environment human-driven cars currently handle are a minimum of 20 years out, and maybe never. And if it is achievable, it won’t be with what is today misleadingly called AI/ML aka “deep learning”.

        The equivalent of the autonomous car mania in the aerospace world is the Urban Air Mobility wet dream, which also relies on magical breakthroughs in air traffic control, vehicle control and perception, etc. The VCs are going to take a very well-deserved bath on that nonsense.

    • tfb says:

      The aircraft autopilot thing is specious. Aeroplanes spend almost all their time flying in an extremely simple environment, and almost all of the rest of it (during landing and take-off) moving in one in which a huge amount of effort has been made to make things as easy for an autopilot as possible: just look at what runways look like (and that's just the stuff you can see: all of the radio guidance etc which you can't see is also helping). These environments are so tightly controlled and so fragile that you can close down airports by flying drones over them.

      In the cases that aeroplanes need to be near each other the whole thing is centrally-controlled, with people in the loop.

      However, there are indeed plenty of cases where computers can make things much better. The Apollo LEM was flown by computer: even when it was being guided by a human, the human was saying something like 'go forward at 3m/s and maintain altitude' and the computer was turning that into commands to the engine and RCS. A human probably could not have flown the LEM directly, and a human cannot fly an aerodynamically unstable aeroplane, as very many modern military aircraft are: again, the human flies them by saying, more-or-less, where they want the plane to go, and the computer arranges that it go there without falling out of the sky. You are probably quite glad that a computer is managing the charge state of the battery in your phone.
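      That "human commands a rate, computer picks the actuation" split can be sketched in a few lines of Python. The gains and lunar numbers here are illustrative, not the actual LEM control law:

```python
# Toy rate-command loop: the pilot commands a vertical velocity and the
# computer chooses thrust each tick. All numbers are made up for illustration.
G = 1.62        # lunar gravity, m/s^2
MASS = 7000.0   # vehicle mass, kg (invented)
DT = 0.1        # control period, s
KP = 2.0        # proportional gain on velocity error

def thrust(v_cmd, v):
    """Thrust (N) that drives vertical velocity toward the command; engines can't push down."""
    return max(0.0, MASS * (G + KP * (v_cmd - v)))

v, v_cmd = 0.0, -1.0   # hovering; the pilot commands a 1 m/s descent
for _ in range(200):
    accel = thrust(v_cmd, v) / MASS - G
    v += accel * DT
assert abs(v - v_cmd) < 0.01  # velocity has settled on the commanded rate
```

      The pilot never thinks about thrust at all; the inner loop absorbs the dynamics, which is exactly the "simple, well-understood physics" regime where this works.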

      The mistake is generalizing these simple things (I could probably write a program to fly something like the LEM if I spent enough time thinking about the dynamics, and although I probably couldn't write one to fly an aerodynamically-unstable plane, as I don't understand the fluid dynamics involved, I can see how that's pretty possible) to some essentially completely uncontrolled and fantastically complex regime where the equivalent of the laws of physics are unknown and probably unknowable, since some of them involve humans. That's classic AI hype-cycle bullshit. Except this cycle is going to kill a large number of people.

      • cthulhu says:

        My day job is designing, analyzing, implementing, and testing aircraft fly-by-wire (FBW) flight control systems. Humans can fly unstable aircraft, but only up to a limit. The original Wright Flyer was quite unstable, so much so that it was prone to loss of control. Stabilizing an unstable aircraft using feedback control is something we’ve been doing for a long time but there are still serious pitfalls - it’s an exacting job, best done by paranoid people like me :-)

        But “autonomous” cars are only peripherally related to FBW control of unstable aircraft. The core problems of autonomous cars are perception of the environment, predicting how the environment will evolve over time (and remember many elements of the environment are independent agents into which the autonomous car has minimal insight; humans have somewhat better insight into those independent agents by virtue of our brains having evolved to have an innate theory of mind), generating a state trajectory through the environment, then controlling to the path. The second item on that list is maybe TRL 2 at best, and the “best” current approach to solving it is brute force correlation - not confidence inspiring. The first item is high TRL for basic size and motion information, but only in fairly benign circumstances which is a severe limitation. The last two items we mostly know how to do, but doing so with sufficient reliability for safety is highly nontrivial. All in all, decades off at best.
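        The "stabilizing an unstable aircraft using feedback control" point can be seen with a one-line toy plant; the pole location and gain here are invented for illustration:

```python
# Discrete-time toy plant x[k+1] = A*x[k] + u[k] with A > 1: unstable in open
# loop, like a divergent airframe mode. Feedback u = -K_FB*x moves the
# closed-loop pole to (A - K_FB); any |A - K_FB| < 1 stabilizes it.
A = 1.5     # open-loop pole (unstable)
K_FB = 1.0  # feedback gain -> closed-loop pole at 0.5

x_open = x_closed = 1.0
for _ in range(20):
    x_open = A * x_open                        # no feedback: diverges
    x_closed = A * x_closed - K_FB * x_closed  # feedback: decays geometrically

assert x_open > 1e3          # 1.5**20 is about 3325
assert abs(x_closed) < 1e-5  # 0.5**20 is about 1e-6
```

        Of course the hard part isn't this loop: it's the perception and prediction problems above, which no choice of gain fixes.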

        • tfb says:

          Yes, thinking about it, that sounds right to me (and, obviously, if it doesn't sound right that's because I'm wrong). I can balance a pole on my hand, which is unstable, and people can stand up, which is either unstable or close to it. I would guess that flying an unstable aircraft is rather exhausting, however, as you have to be paying attention all the time? Further, my guess would be that unstable military aircraft are essentially unflyable by humans because the time constants are so short, as you want the thing to be really twitchy?

          And yes, the point I probably failed to properly make was that an autonomous car has nothing to do with a system which will fly an aircraft (or a spacecraft) because in the latter cases, and especially for spacecraft, you have well-understood laws of physics and with autonomous vehicles you have completely uncontrolled crap which obeys no laws at all.

  • Previously