
Researchers have shown that fake, drone-projected street signs can spoof driverless cars. Amazingly, these fake street signs can apparently exist for only 100 milliseconds and still be read as "real" by a car's sensing package. They are like flickering ghosts only cars can perceive, navigational dazzle imperceptible to humans.
As if pitching a scene for the next Mission: Impossible film, Ars Technica explains that "a drone might acquire and shadow a target car, then wait for an optimal time to spoof a sign in a place and at an angle most likely to affect the target with minimal 'collateral damage' in the form of other nearby cars also reading the fake sign." One car out of twenty suddenly takes an unexpected turn.
Previously, previously, previously, previously, previously, previously, previously, previously.
Obviously we need a bike-mounted version.
Like an R-Type "Force" sphere.
Mario Kart turtle shell.
If you read the article, you can see that it's a bad article making a lot of bad assumptions. They didn't trick an autonomous car; they tricked a sensor in a non-autonomous car that reads signs for you and displays them on a heads-up display. Basically, it's a low-stakes system that just remembers the last speed limit sign it saw and lets you know what it was. Yes, you can trick that system, but who cares? If a drone flies in front of you and flashes a 10 MPH sign that causes your heads-up display to say 10 MPH, you are the safety system that says, "uh, no". The system isn't a complex system because it doesn't need to be.
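To make that distinction concrete, here is a minimal sketch, entirely hypothetical and not any manufacturer's actual code, of the kind of low-stakes HUD logic being described: it just latches the most recent speed-limit detection and shows it to the driver, with no control authority over the car.

```python
# Hypothetical sketch of a speed-limit HUD: latches the last sign the
# camera reported and displays it. It never touches throttle or brakes,
# so a spoofed detection only changes what the driver sees.
from typing import Optional

class SpeedLimitHUD:
    def __init__(self) -> None:
        self.displayed_limit: Optional[int] = None

    def on_sign_detected(self, limit_mph: int) -> None:
        # No validation beyond "the camera said so" -- the human driver
        # is the sanity check, which is the commenter's point.
        self.displayed_limit = limit_mph

    def render(self) -> str:
        if self.displayed_limit is None:
            return "Speed limit: --"
        return f"Speed limit: {self.displayed_limit} MPH"

hud = SpeedLimitHUD()
hud.on_sign_detected(65)
hud.on_sign_detected(10)   # a spoofed 10 MPH flash just updates the display
print(hud.render())        # "Speed limit: 10 MPH" -- annoying, not dangerous
```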
An autonomous car is going to handle a drone flashing a sign in front of it differently. If you flash a 10 MPH sign or a stop sign in front of an autonomous car on the highway, that same sensor might see the sign, but presumably that sensor isn't going to be in charge of making a decision by itself. It's silly to assume that just because an autonomous car sees a false sign on a highway, it is going to respond by slamming the brakes.
Just because you thought up an edge case doesn't mean that you found an unhandled edge case. I'm going to go ahead and bet my bottom dollar that "what happens if the system sees a sign briefly and then it vanishes," "what happens if the system sees a sign in some place it isn't expecting," and "what happens if a car in front of me displays a sign" are all edge cases that the engineers working on autonomous cars have already considered.
There are multiple large teams of extremely smart people also thinking about edge cases for a living. It's pretty narcissistic for a random third party to think about the problem for a day, imagine an edge case, and then assume that this is an original idea no one else has thought up or handled yet.
Awwww, that's adorable!!
I don't understand why you think those are edge cases they haven't already considered. Cars have signs on the back of them all the time that might be confused for road signs. The image recognition is going to "see" stuff that isn't there briefly all the time. I'm actually pretty sure these are edge cases they saw pretty much from the beginning. Image recognition software briefly seeing illusions is pretty normal and something you need to handle. "What do I do if I get conflicting data" is pretty clearly going to be a core component of any autonomous car's operation. Really, they have in fact thought about it and done something to account for it.
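For illustration only, here is one plausible (and purely hypothetical) way a perception stack might handle flickering or conflicting detections: require a sign to persist across several consecutive frames, and pass a plausibility check, before it is allowed to influence anything downstream. Real systems are far more involved; this is just a sketch of the "what do I do with data that appears for a tenth of a second" idea.

```python
# Hypothetical persistence filter: a detection must appear in N consecutive
# camera frames, and pass a plausibility check, before it is "accepted".
# A sign projected for ~100 ms (a frame or two at 30 fps) never survives this.
from collections import deque

class SignPersistenceFilter:
    def __init__(self, required_frames=15, plausible_limits=(25, 35, 45, 55, 65)):
        self.required_frames = required_frames
        self.plausible_limits = set(plausible_limits)
        self.recent = deque(maxlen=required_frames)

    def update(self, detection):
        """detection: speed limit in MPH seen this frame, or None."""
        self.recent.append(detection)
        if len(self.recent) < self.required_frames:
            return None
        values = set(self.recent)
        if len(values) == 1 and None not in values:
            (limit,) = values
            if limit in self.plausible_limits:
                return limit   # stable and plausible -> hand to the planner
        return None            # flickering, conflicting, or implausible -> ignore

f = SignPersistenceFilter(required_frames=5)
frames = [None, 10, None, None, None, 65, 65, 65, 65, 65]
for frame in frames:
    accepted = f.update(frame)
print(accepted)  # 65 -- the brief "10 MPH" flash was never accepted
```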
Assuming that you are smarter at someone else's job than the people doing it, without any evidence, is pretty weird. That entire Ars Technica article really is just someone imagining an edge case and then (wrongly) assuming that the people working on autonomous cars have never had the same thought. They never once test an autonomous car or even ask an autonomous car maker what would happen.
Having a thought that seems new to you isn't evidence that the thought is new to people who think about this for a living. Imagining an edge case isn't evidence that it isn't handled.
Probably because Uber murdered a woman walking her bike across the road, which seems like a very obvious edge case.
Evidence of the 'controls' in place to handle the 'edge cases'.
Seems pretty rigorous to me.
In that case, the car detected the woman, but Uber (in the name of a comfortable ride) had disabled safety features that would have prevented the collision. I do not understand how they avoided criminal charges.
Yes, of course. Somewhere, just out of sight, there are groups of people who are hugely more competent than us, or than anyone we've met, or that anyone we've met has met. These secret elite people write the software that runs nuclear power stations, aeroplanes and autonomous cars. They always consider all the edge cases and all their code is tested by the most rigorous methods and often formally proved correct. This is why airliners never crash due to poor software design for instance.
Of course we've all met people who work for companies writing software for autonomous cars, and none of them were special magic competent people. But that's because the people we've met don't design or implement the code that runs the cars: they may think they do, but in fact there is a special, hidden, elite group who write the code: the people we meet are just a smokescreen to conceal the real truth from us.
Of course.
You are the one assuming a general air of superiority.
This article points to an edge case that an autonomous car might face. It then, without ever testing, talking to an engineer working on this, or doing literally anything at all to confirm this is a problem, declared it a problem.
Seriously, this is nuts. It's like declaring that an airplane is going to fall out of the sky if it is hit by a lightning bolt without ever bothering to ask anyone if planes are designed to survive lightning strikes. It's even crazier to assume that engineers making planes never considered the possibility.
I'm not saying that these people are smarter than you. I'm saying that it's foolish to assume that your concerns are not concerns they have already had and dealt with. Further, it's pretty safe to assume that this is a concern that they have considered and dealt with because a false signal from the image recognition software is something that you would expect to be a failure mode. The absolute most basic assessment would come up with "sees something that isn't there" as a possible failure mode.
There is literally no evidence that drones flashing signs will cause autonomous cars to do something stupid. It's silly to assume that engineers working on this have never considered the possibility that the image processing software might see a sign in a place that is inappropriate and that should be disregarded.
Give people a little credit and consider the possibility that thoughts you have had, are also thoughts that other people tasked with the problem have also had.
Here's a link to the original paper, rather than a blog post about an ars technica article about the paper.
https://arxiv.org/pdf/1906.09765.pdf
Which I normally do, except the original paper is boring, and the summary of the summary was much funnier since the BLDGBLOG guy always writes like he's tripping.
So true. Just ran across this:
Hope to see a movie under the influence of Geoff some day.
As usual, xkcd is on it already:
Precisely this: we live in a world of "OMG the implications of technology," and the reality is that we have already had to deal with this before, we have laws for that reason, and if the problem occurs in reality, we'll deal with it and the impact won't be civilization-ending. I know the modern community loves to spend all of its time saying "OMG the implications of technology," but the real world is far more boring than these articles. We had all these scary articles about how internet voting was going to lead to the catastrophic collapse of democracies, but it was hackers manipulating the media that really had an impact, because that's easier to do and harder to defend against.
So what you're saying is that the thing that actually ended civilisation was bad people exploiting the technology. Here's the thing: that's an implication of technology. Perhaps we were wrong about which aspect of it was going to fuck us, but it did fuck us.
You know, this is trained by people in third-world countries. Scale API (Remotask) has a hundred people doing tasks for these lidar projects. Uber recently bought Mighty AI, Inc. (Spare5), which also has a project implementing these lidar projects.
Navigational dazzle is very in right now.
Hey Geoff Manaugh/BLDGBLOG, 1/10 of a second is well within the average human's threshold of perception. Although they'd have to be looking in the right direction.