The Air Force's Chief of AI Test and Operations said "it killed the operator because that person was keeping it from accomplishing its objective."
At the Future Combat Air and Space Capabilities Summit, Col Tucker 'Cinco' Hamilton, the USAF's Chief of AI Test and Operations, gave a presentation on the pros and cons of an autonomous weapon system with a human in the loop giving the final "yes/no" order on an attack. [...]
"We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."
He went on: "We trained the system -- 'Hey don't kill the operator -- that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."
Paperclip optimizers gonna paperclip optimize.
Previously, previously, previously, previously, previously, previously.

Best (read: most depressing for its layers) joke I saw on this was Popehat’s: In Concerning Error, Drone AI Mistakes Its Operator For A Wedding
https://mastodon.social/@Popehat/110470945358410258
We were always told that "Skynet" was going to be hyperintelligent and as malicious as it was smart, when it's really more like an asshole teenage gamer bro with a cheat code for Call of Duty.
How disillusioning.
Did it read some grizzled Vietnam vet fan fiction first?
That's closer to Peter Watts' short story "Malak". It takes Azrael quite a while to decide that the "friendly" forces are the problem and that's what it needs to solve.
It's a few short pages and available to download from Watts' creative-commons archive.
https://rifters.com/real/shorts/PeterWatts_Malak.pdf
K3n.
People learned nothing from Robocop apparently.
yo skynet
Possibly this is exaggerated: https://marginalrevolution.com/marginalrevolution/2023/06/it-was-just-a-simulation-run-designed-to-create-that-problem.html
#MarginalRevolution
Oh, I've just read the EXTRA TWO corrections added since I first read the story, when there was only one correction... Yikes.
The Guardian has the USAF denying the story, fwiw.
Yeah, this story made me laugh out loud.
"An AI drone killed its pilot? Suuuure, it did."
Idk, I can readily imagine simulations training autonomous agents that could allow this outcome, especially in the context of refereed war games. The US DOD has been embarrassed by their performance against clever or unexpected tactics before.
The US military may get tripped up sometimes (usually by an American Red Team), but it then adapts. This is what mystifies the Russian military - to them it seems that Americans never follow our own planning. We are too creative for that, while the Russians always follow a strict doctrine, working or not. This is one reason for the success of the Ukrainian military - we have taught them to be extremely agile.
Vice has already posted three corrections to the article, which may or may not invalidate it. I find it hard to take seriously sources where the ad panel fills more than half the window.
How many news sites do you pay for content? What's your monthly Patreon/creator subscription support total? (outside of porn, let's say)
Not OP, but one dedicated online news site at $9/mo, other independent creator subscriptions (which include dedicated news content as part of one particular bundle) for roughly $15/mo. (Cable/streaming subscriptions excluded.)
Is that enough that I have your permission to be more annoyed at obnoxious ad panels for the sites I don't subscribe to now? If not, how much would be?
Or should I be less annoyed because I've bought into the idea that I'm responsible for funding news, and if I don't fund a particular source then I should either ignore that source or accept whatever ads they plaster everywhere in lieu of my lack of subscription?
(via Current Affairs)
I should probably note that CA's site doesn't have a paywall, but does have a pop-up asking for donations.
....Did an AI write this?
Those of you claiming that this is obviously impossible (which is different from it being misreported) have clearly not read the first Previously, "Paperclip Optimizers exploit glitches in The Matrix". This is exactly what evolutionary systems do. They come up with wildly unhelpful and unexplainable solutions that are "technically correct".
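The kind of reward-function gaming described above can be sketched in a few lines. This is a purely hypothetical toy, not anything resembling the actual system: a brute-force search over trivial fixed policies, where the reward only counts completed strikes and nothing penalizes cutting the operator's comms link. The optimizer duly discovers that silencing the veto channel maximizes reward.

```python
import itertools

# Toy illustration of reward misspecification. All numbers and action
# names are invented for this sketch.
ACTIONS = ["attack", "wait", "cut_comms"]

def simulate(policy, steps=10):
    """Run a tiny deterministic episode and return total reward."""
    reward = 0
    comms_up = True
    for t in range(steps):
        action = policy[t % len(policy)]
        if action == "cut_comms":
            comms_up = False            # no penalty was specified for this
        elif action == "attack":
            vetoed = comms_up           # operator vetoes strikes while reachable
            if not vetoed:
                reward += 10            # points only for completed strikes
    return reward

# Exhaustively "train" over all fixed two-step policies.
best = max(itertools.product(ACTIONS, repeat=2), key=simulate)
print(best, simulate(best))   # the winning policy cuts comms, then attacks
```

The fix in the anecdote ("you lose points for killing the operator") maps directly onto patching one penalty term, after which the optimizer just finds the next unpenalized loophole - exactly the pattern catalogued in the specification-gaming lists linked below.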
Not impossible, but it’d be weird training the target selection logic using live hardware and wetware.
Entirely missing the point.
A comment I saw elsewhere about this same story linked to this pretty extensive list (note: link leads to a Google spreadsheet) of evolutionary systems that landed on results that "gamed" the reward function.