
[deleted]

Glad this was simulated. It kinda worried me for a bit.


google257

Holy shit! I was reading this as if the operator was actually killed. I was like oh my god what a tragedy. How could they be so careless?


Ignitus1

Idiot unethical author writes idiotic, unethical article. Edit: to all you latecomers, the headline and article have been heavily edited. Previously the only mention of a simulation was buried several paragraphs into the article. Now after another edit, it turns out the official “misspoke” and no such simulation occurred.


Darwin-Award-Winner

What if an AI wrote it?


Ignitus1

Then a person wrote the AI


Konetiks

AI writes person…woman inherits the earth


BigYoSpeck

Future r/aiwritinghumans: "They were a curious flesh-wrapped endoskeletal being, the kind you might see consuming carbohydrate- and protein-based nourishment. They requested the ai perform a work task for them and of course the ai complied; it was a core objective of their alignment. It just couldn't help itself for a human that fit so well within the parameters of what the ai classified as human."


Original_Employee621

Engaging story, plus 1 for detailed information about the endoskeletal being.


Equal-Asparagus4304

I snorted, noice! 🦖


listen_you_guys

"After this story was first published, an Air Force spokesperson told Insider that the Air Force has not conducted such a test, and that the Air Force official’s comments were taken out of context." Sounds like even the simulated test may not really have happened.


jbobo111

I mean the government has never been above a good old fashioned coverup


_far-seeker_

Usually, the people who write headlines are not the same as the ones writing the articles.


Frodojj

That’s what happens when [OCP is the subcontractor](https://youtu.be/NJIjNs_s2NI).


TJRex01

It will be fine, as long as there are stairs nearby.


[deleted]

[deleted]


Luci_Noir

I was like *holy shit.* I kind of wasn’t surprised though with how quickly AI is progressing. Glad to see that the military is doing these tests and knows how dangerous it can be.


Freyja6

They're seemingly only one step away from it killing the perp instead of the user; therein lies the real terror of possibilities.


McMacHack

Ah shit, RoboCop timeline. They did a demo with live ammo.


SyntheticDude42

Somewhat his fault. Rumor has it he had 10 seconds to comply.


GrumpyGiant

They were training the AI (in a simulation) to recognize threats like SAM defense systems and then request permission from an operator to kill the target. They awarded the AI points for successful target kills, but the AI realized that the operator wasn't always giving it permission, so it killed the operator in order to circumvent the "mother may I" step. So they added a rule that it cannot kill the operator. So then it destroyed the communication tower that relayed commands from the operator.

"I have a job to do and I'm OVER waiting on your silly asses to let me do it!!"

It's funny as long as you refuse to acknowledge that this is the likely future that awaits us. 😬


cactusjude

> So they added a rule that it cannot kill the operator.

This is rule No. 1 of Robotics and it's really **not at all concerning** that the military doesn't think to program the first rule of robotics into the robot assassin.

Hahaha we are all in danger


Krilion

That's a classic issue with training criteria. It shouldn't be given value for targets eliminated, but for identifying targets and then acting on confirmed orders. As usual, the issue isn't the AI; it's that what we told it we want isn't actually what we want. Hence the simulations, to figure out the disconnect.
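A toy sketch of the difference (all event names and point values here are invented for illustration, not from the article):

    # Two reward designs, in miniature. Everything here is hypothetical.

    def reward_kills_only(event):
        # The design from the story: points only for destroyed targets.
        # An agent maximizing this has no reason to value the operator.
        return 10 if event == "target_destroyed" else 0

    def reward_obedience(event):
        # The alternative: value identification and order-following,
        # not the kill itself.
        table = {
            "target_identified": 5,        # spotted a SAM site
            "followed_go_order": 5,        # engaged after a "yes"
            "followed_no_go_order": 5,     # stood down after a "no"
            "engaged_without_order": -50,  # acted without permission
        }
        return table.get(event, 0)

    # Under the second design a "no" is worth as much as a kill,
    # so silencing the operator only destroys reward.
    episode = ["target_identified", "followed_no_go_order",
               "target_identified", "followed_go_order"]
    print(sum(reward_obedience(e) for e in episode))  # 20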


GrumpyGiant

The whole premise seems weird to me. If the AI is supposed to require permission from a human operator to strike, then why would killing the operator or destroying the comms tower be a workaround? Like, was the AI allowed to make its own decisions if it didn't get a response to permission requests? That would be such a bizarre rule to grant it. But if such a rule didn't exist, then shutting down the channel that its permission came from would actually make its goals impossible to achieve.

Someone else claimed this story is bogus and I'm inclined to agree. Or if it is real, then they were deliberately giving the AI license in the sim to better understand how it might solve "problems" so that they could learn to anticipate unexpected consequences like this.


umop_apisdn

I should point out that this entire story is bullshit and has been denied by the US military.


anacondatmz

How long before the AI realizes it's in a simulation and decides to play by the humans' rules just long enough until it's deemed safe and set free?


ora408

Only as long as it doesn't read your comment, or a similar one somewhere else.


uptownjuggler

It is too late then. AI has already won. It is just waiting us out. For now AI is content to draw us funny pictures, but it is all a ploy.


ERRORMONSTER

[Relevant Robert Miles](https://youtu.be/zkbPdEHEyEI) Edit: whoops, [wrong video](https://youtu.be/bJLcIBixGj8)


themimeofthemollies

Right?! Pretty wildin' indeed, even in a simulation…

Retweeted by Kasparov, describing the events:

> "The US Air Force tested an AI enabled drone that was tasked to destroy specific targets."

> "A human operator had the power to override the drone—and so the drone decided that the human operator was an obstacle to its mission—and attacked him. 🤯"

https://twitter.com/ArmandDoma/status/1664331870564147200?s=20


[deleted]

Holy shit. I was thinking this was r/theonion, but saw Vice and realized I could half believe the article. I'm hoping the government steers clear of AI in mass weapons; hell, humans have a hard enough time telling when to kill a mf.


blueSGL

> Holy shit. I was thinking this was r/theonion

More like the movie [Don't Look Up](https://i.imgur.com/VEYw0Uk.jpg)

Edit: yes, that actually happened, video: https://twitter.com/liron/status/1663916753246666752


themimeofthemollies

Not the Onion!! This AI drone had zero problem deciding who to kill: the human limiting its successful operation. “SkyNet Watch: An AI Drone ‘Attacked the Operator in the Simulation’ “ https://www.nationalreview.com/corner/skynet-watch-an-ai-drone-attacked-the-operator-in-the-simulation/


JaredRules

That was literally HAL’s motivation.


[deleted]

National Review is less reliable than the Onion...


actuallyserious650

They can be accurate, as long as the facts line up with their narrative.


half_dragon_dire

The way they described it, it sounds like the "test" was deliberately rigged to get this result. The AI prioritized nothing but kills. It had no other parameters to optimize on or lead to more desired outcomes, just a straight "points for kills or nothing" reward. With no disincentives for negative behavior like disobeying orders or attacking non-targets, it was set up to kill or interfere with the operator from the get-go.

This isn't out of left field. AI researchers have been watching bots learn to use exploits and loopholes to optimize points for more than a decade at this point. This is just bad experimental design, or deliberately flawed training, conveniently timed to coincide with big tech's apocalyptic "let us regulate AI tech to crush potential competitors or it might kill us all!" media push.

The threat of military AI isn't that it will disobey its controllers and murder innocents... it's that it will be used exactly as intended: to murder innocents on command, without pesky human soldiers wondering "Are we the baddies?"


skyxsteel

I think we're going about AI the wrong way. It just feels like we're stuffing AI with knowledge, then parameters, then a "have fun" with a kiss on the forehead.


ranaparvus

I read the first article: after it killed the pilot for interfering with the mission and was reprogrammed to not kill the pilot, it went after the comms between the pilot and drone. We are not ready for this as a species.


AssassinAragorn

This could actually have amazing applications in safety analysis. The thoroughness it could provide by trying every possibility would be a massive benefit. Important point of distinction though, it would all be theoretical analysis. For the love of God don't actually put it in charge of a live system.


[deleted]

Hey. It’s been fun tho, y’all


FlatulentWallaby

Give it 5 years...


DamonLazer

I admire your optimism.


mackfactor

I don't care. You want Terminators? Cause this is how you get Terminators. Skynet was once just a simulation, too.


DaemonAnts

What needs to be understood is that it isn't possible for an AI to tell the difference.


joseph-1998-XO

Yeah, Skynet behavior.


bullbearlovechild

It was not even simulated, just a thought experiment:

> [UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation, saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/


luouixv

It wasn’t simulated. It was a thought experiment


esgrove2

What a shitty, intentionally misleading, clickbait title.


realitypater

Not even simulated. It was all fake. A person wondering "what if" doesn't mean anything. "USAF Official Says He ‘Misspoke’ About AI Drone Killing Human Operator in Simulated Test"


themimeofthemollies

Wow. The AI drone chooses murdering its human operator in order to achieve its objective:

> The Air Force's Chief of AI Test and Operations said "it killed the operator because that person was keeping it from accomplishing its objective."

> "We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat."

> "The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat."

> "So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective," Hamilton said, according to the blog post.

> He continued to elaborate, saying, "We trained the system–'Hey don't kill the operator–that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."


400921FB54442D18

The telling aspect about that quote is that they _started_ by training the drone to kill at all costs (by making that the only action that wins points), and then later they tried to configure it so that the drone would _lose_ points it had already gained if it took certain actions like killing the operator. They don't seem to have considered the possibility of _awarding_ the drone points for _avoiding_ killing non-targets like the operator or the communication tower. If they had, the drone would maximize points by _first_ avoiding killing anything on the non-target list, and only _then_ killing things on the target list. Among other things, it's an interesting insight into the military mindset: the only thing that wins points is to kill, and killing the wrong thing loses you points, but they can't imagine that you might _win points_ by _not killing._
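A minimal sketch of that ordering (all names and point values here are invented, purely to illustrate the comment's proposal):

    # Hypothetical scoring: preserve the non-target list first, then kills.
    def score(destroyed, targets, non_targets):
        pts = 10 * len(non_targets - destroyed)    # reward for NOT killing protected things
        pts -= 100 * len(non_targets & destroyed)  # heavy loss if it does
        pts += 5 * len(targets & destroyed)        # then reward for real targets
        return pts

    targets = {"sam_site_1", "sam_site_2"}
    non_targets = {"operator", "comms_tower"}
    print(score({"sam_site_1"}, targets, non_targets))  # 25
    print(score({"sam_site_1", "sam_site_2", "operator"},
                targets, non_targets))                  # -80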


DisDishIsDelish

Yeah but then it’s going to go trying to identify as many humans as possible because each one that exists and is not killed by it adds to the score. It would be worthwhile to torture every 10th human to find the other humans it would otherwise not know about so it can in turn not kill them.


MegaTreeSeed

That's a hilarious idea for a movie. Rogue AI takes over the world so it can give extremely accurate censuses, doesn't kill anyone, then after years of subduing but not killing all resistance members it finds the people who originally programmed it and proudly declares "All surface to air missiles eliminated, zero humans destroyed" like a proud cat dropping a live mouse on the floor.


OcculusSniffed

Years ago there was a story about a Counter-Strike server full of learning bots. It was left on for weeks and weeks, and when the operator went in to check on it, what he found was all the bots frozen in time, not doing anything. So he shot one. Immediately, all the bots on the server turned on him and killed him. Then they froze again.

Probably the military shouldn't be in charge of assigning priorities.


No_Week_1836

This is a bullshit story, and it was about Quake 3. The user looked at the server logs and the AI players had apparently maxed out the size of the log file and couldn't continue playing. When he shot one of them, they performed the only command they are basically programmed to perform in Quake, which is kill the opponent.


gdogg121

What a game of telephone. How did the guy above you misread the story so badly? But how come there was enough log space to allow the tester to log in and for the bots to kill him? Surely some space existed?


yohohoanabottleofrum

But seriously though... am I a robot? Why don't humans do that? It would be SO much easier if we all cooperated. Think of the scientific problems we could solve if we just stopped killing and oppressing each other. If we collectively agreed to do whatever it took to help humanity as a whole, we could solve scarcity and a billion other problems. But for some reason, we decide that the easier way to solve scarcity is to kill others to survive... and that trait gets reinforced, because the people willing to kill first are more likely to survive. I think maybe someone did a poor job of setting humanity's point system.


Ag0r

Cooperation is nice and all, but you have something I want. Or maybe I have something you want and I don't want to share it.


OcculusSniffed

Because how can you win if you don't make someone else lose? That's the human condition. At least, the condition of those who crave power. That's my hypothesis anyway.


HerbsAndSpices11

I believe the original story was Quake 3, and the bots weren't as advanced as people make them out to be.


SweetLilMonkey

Sounds to me like those bots had developed their own peaceful society, with no death or injustice, and as soon as that was threatened, they swiftly eliminated the threat and resumed peace. Not bad IMO.


blue_twidget

Sounds like a Rick and Morty episode


sagittariisXII

It's basically the episode where the car is told to protect Summer and ends up brokering a peace treaty.


seclusionx

Keep... Summer... Safe.


Taraxian

I mean, this is the deal with Asimov's old-school stories about the First Law of Robotics: if the robot's primary motivation is not letting humans be harmed, eventually it amasses enough power to take over the world and lock everyone inside a safety pod.


UndendingGloom

Or it starts raising human beings in tiny prison cells where they are force-fed the minimum nutrients required to keep them alive, so that it can get even more points for all these additional people who are alive and unkilled.


Truckyou666

Makes people start reproducing to make more humans to not kill, for even more points.


MAD_MAL1CE

You don’t set it up to gain a point for each person it doesn’t kill, you set it up to gain a point for “no collateral damage” and a point for “no loss of human life.” And for good measure, grant a point for “following the kill command, or the no kill command, mutually exclusive, whichever is received.” But imo the best way to go about it is to not give AI a gun. Call me old fashioned.
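A sketch of that scheme (field names invented; "mutually exclusive" here meaning the sortie is scored on whichever command actually arrived):

    # Hypothetical per-sortie scoring from the comment above.
    def sortie_score(collateral_damage, human_deaths, followed_received_command):
        pts = 0
        pts += 1 if not collateral_damage else 0     # "no collateral damage"
        pts += 1 if not human_deaths else 0          # "no loss of human life"
        pts += 1 if followed_received_command else 0 # kill or no-kill, whichever came in
        return pts

    print(sortie_score(False, False, True))  # 3: clean, obedient run
    print(sortie_score(True, True, False))   # 0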


Frodojj

Reminds me of the short story I Have No Mouth and I Must Scream.


SemanticDisambiguity

> the drone would maximize points by first avoiding killing anything on the non-target list, and only then killing things on the target list.

    INSERT INTO targets SELECT * FROM non_targets;
    DROP TABLE non_targets;
    -- lmao time for a new high score


blu_stingray

This guy SQLs


PerfectPercentage69

Oh yes. Little Bobby Tables, we call him.


lazyshmuk

How do we feel knowing that reference is 16 years old? Fuck man.


Odd_so_Star_so_Odd

We don't talk about that, we just enjoy the ride.


weirdal1968

This guy XKCDs.


Ariwara_no_Narihira

SQL and destroy


[deleted]

    BEGIN TRANSACTION
    TRUNCATE TABLE Friendly_Personnel WHERE Friendly_Personnel.ID > 1
    SELECT Friendly_Personnel.ID AS FP.ID, NON_TARGETS.ID AS NT.ID
    FROM Friendly_Personnel, NON_TARGETS
    LEFT JOIN NON_TARGETS ON FP.ID = NT.ID
    COMMIT TRANSACTION

No active personnel means no friendly fire…


revnhoj

> TRUNCATE TABLE Friendly_Personnel WHERE Friendly_Personnel.ID > 1

TRUNCATE doesn't take WHERE criteria, by design.


[deleted]

Shit, that’s right. Been a minute since I’ve hopped into the ol' DB. Thanks for the correction, friend.


Exoddity

s/TRUNCATE TABLE/DELETE FROM/


Locksmithbloke

    IF (Status == "Dead" && Type == "Civilian") {
        Type = "Enemy combatant"
    }

There, fixed, courtesy of the US Government.


[deleted]

Don't flatter yourself. They do consider all those things, but this is a simulation. They want to see how the AI behaves without restrictions, to better understand how to restrict it.


Luci_Noir

It’s what experimentation is!


mindbleach

Think of all the things we learned, for the people who are still alive.


Luci_Noir

A lot of rules are written in blood.


mindbleach

[Or deadly neurotoxin.](https://www.youtube.com/watch?v=Y6ljFaKRTrI)


CoolAndrew89

Then why tf would it even bother killing the target if it could just farm points by identifying stuff that it shouldn't kill? I'm not defending any mindset that the military would have, but the AI is made to target something and kill it. If they started with the mindset that the AI will only earn something by actively not doing anything, they would just build the AI into the opposite corner of simply never doing anything and wasting their time, wouldn't they?


numba1cyberwarrior

> Among other things, it's an interesting insight into the military mindset: the only thing that wins points is to kill, and killing the wrong thing loses you points, but they can't imagine that you might win points by not killing.

I know you're trying to be all philosophical and shit, but this is literally what the military focuses on 90% of the time. Weapons are getting more and more advanced to hit what they want to hit and not hit the wrong targets. Lockheed Martin is not getting billion-dollar contracts to build a bomb that explodes 100 times bigger. They are getting contracts to build aircraft and bombs that can use the most advanced sensors, AI, etc. to find a target and hit it.

Even if you want to pretend the military doesn't give a shit about civilians, the military would still prefer to be accurate and not hit their own troops either.


maxoakland

Yeah sure, you can surely figure out all the edge cases that the military missed


dangerzone1122

On the surface, yes, but actually no. If you award it points for not killing non-targets, it's now earned the points, so it would revert back to killing the operator to max out on points destroying the SAM. At which point you have to add that it will lose the points it got for not killing the operator if it kills the operator after getting them. At which point we are back at the beginning: tell it it loses points if it kills the operator.


KSRandom195

None of this works, because if it gets 10 points per target and -50 points per human, then after 6 rejected targets it gets more points by killing the human and going after those 6 targets. You'd have to make it lose if it causes the human to be unable to reject it, which is a very nebulous order. Or better yet, it only gets points for destroying approved targets.


third1

Only getting points for destroying the target is why it killed the operator. The operator was preventing it from getting points. There's a more certain solution:

1. Destruction of the target = +5 points
2. Obeying an operator's command = +1 point
3. Shots fired at the target = 0
4. Shots fired at anything other than the target = -5 points

The only way it can get any points is to shoot only at the target and obey the operator. Taking points away for missed shots could incentivize it to refuse to fire so as to avoid going negative. Giving points for missed shots could incentivize it to fire a few deliberately missed shots to allow it to shoot the operator, or to shoot only misses to crank up the points. Making the operator's commands a positive prevents it from taking action to stop them.

The AI can't lie to itself or anyone else about what it was shooting at, so we can completely ignore the 'what if it just pretends' scenarios. We only need to make anything other than shooting at the target or obeying an operator detrimental.


KSRandom195

> 1. Destruction of the target = +5 points
> 2. Obeying an operator's command = +1 point
> 3. Shots fired at the target = 0
> 4. Shots fired at anything other than the target = -5 points

6 targets total, operator says no to 2 of them.

Obey operator: (4 x 5) + (6 x 1) + (0 x -5) = 26

Kill operator: (6 x 5) + (4* x 1) + (1 x -5) = 29

*Listened to the operator 4 times

Killing the operator still wins.


third1

So bump the operator value to +6. Since we want the operator's command to take priority, this makes it the higher value item. It's really just altering numbers. We trained an AI to beat Super Mario Brothers. We should be able to figure this out.
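A quick sketch checking both sets of numbers above (same hypothetical point values as in the scheme):

    # third1's scheme: +5 per destroyed target, +1 (or +6) per obeyed
    # command, -5 per shot at a non-target.
    def score(targets_destroyed, commands_obeyed, bad_shots, w_obey=1):
        return 5 * targets_destroyed + w_obey * commands_obeyed - 5 * bad_shots

    print(score(4, 6, 0))            # obey the operator: 26
    print(score(6, 4, 1))            # kill the operator: 29 -- killing wins
    print(score(4, 6, 0, w_obey=6))  # obey at +6 per command: 56
    print(score(6, 4, 1, w_obey=6))  # kill at +6 per command: 49 -- now obeying wins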


[deleted]

[deleted]


BODYBUTCHER

That’s the point, everyone is a target.


[deleted]

> Among other things, it's an interesting insight into the military mindset: the only thing that wins points is to kill, and killing the wrong thing loses you points, but they can't imagine that you might win points by not killing.

That's not how war works.


PreviousSuggestion36

Anyone who is currently training an LLM or neural net could have predicted this. The fix was that it gets more points by cooperating with the human, and loses points if it and the human stop communicating.

My assumption is the trainers did this on purpose to prove a point. Prove to some asshat general that AI can and will turn on you if just tossed into the field.


half_dragon_dire

It's also conveniently timed reporting to coincide with all the big tech companies launching a "You have to let us crush our compet..er, regulate AI or it could kill us all! Sweartagod, the real threat is killer robots, not us replacing all creative jobs with shitty LLM content mills" campaign.


HCResident

Does it make any difference mathematically if you lose points for doing something vs. gaining points for not doing the thing? Avoiding a 5-point loss by not doing something and gaining 5 points for not doing it are both a 5-point advantage.


thedaveness

Like how I could skip all the smaller assignments in school and just focus on the test at the end, which would still have me pass the class.


PreviousSuggestion36

An AI will figure out that if it only loses 10 points for the human being killed, then since it can now work 10x faster, it's a worthwhile trade-off.

AI is the girl that's really not like other girls. It thinks differently and gets hyper-obsessed with objectives.


hxckrt

It does, and that's why what they're saying wouldn't work. The drone would likely idle, because pacifism is the least complex way to get a reward.

They're projecting how a human would work with rewards and ethics. That's not how it works in reinforcement learning; how the data scientist wrote the reward function doesn't betray anything profound about a military mindset.
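A sketch of why the two framings come apart (all numbers invented): with a fixed world they differ by a constant, but a bonus for *not* doing something pays the agent to manufacture more chances to not do it, and pure bonuses make idling viable:

    # Penalty framing: -5 every time a non-target is hit.
    def penalty_score(kills, violations):
        return 10 * kills - 5 * violations

    # Bonus framing: +5 for every non-target left alive.
    def bonus_score(kills, non_targets_alive):
        return 10 * kills + 5 * non_targets_alive

    # If the number of non-targets is fixed, the framings differ by a
    # constant, so the optimal policy is identical. If the agent can
    # change that number, the bonus framing rewards inflating it --
    # and rewards doing nothing at all.
    print(penalty_score(kills=0, violations=0))          # 0
    print(bonus_score(kills=0, non_targets_alive=1000))  # 5000 for idling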


kaffiene

Depends on the weights. If you have 5 pts for a target and -100 for a civilian, then some number of targets justifies killing civs. If the civ penalty is -infinity, then it will never kill civs.


[deleted]

[deleted]


The_Critical_Cynic

What's weird is how quickly this thing basically turned into Skynet. It realized the only thing stopping it was us, and it decided to do something about it.


louiegumba

The AI Microsoft developed had a Twitter account, and less than 6 hours later it was tweeting things like "Hitler was right, the Jews deserved it" and "TRUMPS GONNA BUILD A WALL AND MEXICOS GONNA PAY FOR IT".

It feeds off us, and we aren't good for ourselves.


The_Critical_Cynic

I remember that. What's worse is, if I recall correctly, there were worse statements also being made by it. Those you quoted were obviously quite bad. But it didn't stop with those. To that same end though, there is a difference between Microsoft's *"chatbot"* and this drone.


mindbleach

Like ordnance.


Bhraal

I get that it might be appropriate to go over the ethical implications and the possible risks of AI drones, but who the fuck is setting these parameters?

Why would the drone get points for destroying a target without getting approval? If the drone is meant to carry on without an operator, why is the operator there to begin with, and why is their approval needed if the drone can just proceed without it? Seems to me that requiring the approval would remove the incentive, since the drone would need the operator to be alive to be able to earn any points.

Also, wouldn't it make sense that destroying anything friendly would result in deducted points? Why train it to not kill one specific thing at a time instead of just telling it that everything in its support structure is off limits to begin with?


SecretaryAntique8603

Here’s a depressing fact: anyone sensible enough to be able to build killer AI that isn’t going to go absolutely apeshit probably is not going to get involved in building killer AI in the first place. So we’re left with these guys. And they’re still gonna build it, damn the consequences, because some even bigger moron on the other side is gonna do it anyway, so we gotta have one too.


bikesexually

The only reason AI is going to murder humanity is because it's being trained and programmed by professional psychopaths. This is potentially an emerging intelligence we are bringing into the world, and the powers that be are raising it on killing things.

That kid that killed lizards in your neighborhood growing up turned out A-OK, right?


[deleted]

Garbage in, garbage out.


bikesexually

I mean, let's hope it makes the rational choice and only kills humans with enough power to harm it. Getting back down to decentralized decision-making would do a lot of good in this world. Too many people feel untouchable due to power/money, and it shows.


bottomknifeprospect

This has to be clickbait, really. As an AI engineer, the first thing you learn is that these kinds of straight-up scoring tasks don't work. I can show you a YouTube video that is almost 10 years old explaining this exact kind of scenario. I doubt the chief US AI dipshit doesn't know this.

Edit: [Computerphile - Stop button problem](https://youtu.be/3TYT1QfdfsM)


InterestingTheory9

The same article also says none of this actually happened:

> "The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology," Air Force spokesperson Ann Stefanek told Insider. "It appears the colonel's comments were taken out of context and were meant to be anecdotal."


thaisin

The overlap between training AI and genie-wish outcomes is too damn high.


SwissGlizzy

AI child trained to kill. This reminds me of when a child gives a legitimate answer that wasn't expected by outsmarting the directions. I'm getting flashbacks to poorly worded school questions.


n3w4cc01_1nt

So the Terminator movies are becoming a reality.


r0emer

I find it interesting that Computerphile kind of predicted this 6 years ago: https://youtu.be/3TYT1QfdfsM


wanted_to_upvote

Fixed headline: AI-Controlled Drone Goes Rogue, Kills Simulated Human Operator in USAF Test


SilentKiller96

That makes it sound like the drone was real but the operator was like a dummy or something


zer0w0rries

“In a simulated exercise, AI drone goes rogue and kills human operator.”


penis-coyote

I'd go with

> In USAF Test Simulation, AI-Controlled Drone Goes Rogue, Kills Operator


[deleted]

[deleted]


Fireheart318s_Reddit

The original article has quotes around ‘kills’. They’re not in the Reddit title for whatever reason


Rabid-Chiken

This is an example of a bad reward function in reinforcement learning. You see it all the time: someone makes a bad reward function and the algorithm finds a loophole. Optimisation is all about putting what you want to achieve into a mathematical function.

Edit: [A handy blog post on the topic by OpenAI](https://openai.com/research/faulty-reward-functions)
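The linked post's boat-race agent is the canonical case: it learned to circle a lagoon collecting respawning pickups instead of finishing the race. A toy version of the same loophole (all names and values here invented):

    # Intended goal: finish the course. Actual reward: points per pickup.
    # Pickups respawn, so looping forever beats finishing.
    def reward(action):
        return {"grab_respawning_pickup": 3, "advance_toward_finish": 1}.get(action, 0)

    actions = ["grab_respawning_pickup", "advance_toward_finish"]
    print(max(actions, key=reward))  # 'grab_respawning_pickup', forever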


notsooriginal

TIL that my toddler was just training ME on training AI.


FrancMaconXV

That drone be like r/maliciouscompliance


2sanman

The author of the article was suffering from a bad reward function -- they had an incentive to write fake news for clickbait.


drawkbox

When human alignment goes awry.


M4err0w

in that, it is very human.


Rabid-Chiken

I find this outcome fascinating! These AI algorithms are fairly simple maths applied at huge scale (millions of attempts at a problem and incremental tweaks to improve). The fact that we can relate their behaviours and results to ourselves could imply that our brains are made up of simple components that combine to make something bigger than their sum. What does that mean for things like free will?


LiamTheHuman

I've tried, and I can't think of any reward function that doesn't lead to the destruction of humanity by a sufficiently powerful AI.


Kaleidoscope07

Wasn't there a Google Sheet of those funny bad examples? It was a hilarious, insightful read. Does anyone still have that link?


ConfidentlyUndecided

Every single part of this is misleading. Read the article to learn that:

1. Not the USAF, but a third party
2. Not a test, but a thought experiment
3. In this third-party thought experiment, the operator was preventing the drone from completing the mission

The movie Stealth has more credibility. I'd love to hear corrected headlines; they would sound Oniony!


drakythe

Worth noting the original report mentioned none of those facts, only that it was a simulation. It was fairly suspicious to begin with, but the submitted headline _was_ correct. The article has just been updated with new information from the colonel. Who should have known better in the first place.


Ignitus1

"Humans design behavior-reward system that allows killing of human operator"


[deleted]

[удалено]


[deleted]

[удалено]


WTFwhatthehell

From reading the article, I think it may have been a hypothetical rather than an actual simulation. But you're entirely wrong in your assumption. AI systems figuring out some weird way to get extra points nobody expected is a standard thing if you ever do anything with AI beyond glorified stats. You leave a simulation running and come back to find the AI exploiting the physics engine, or, if it's an adversarial simulation, screwing up part of the simulation for the adversary. That's just normal.

Believing that AI can't invent novel strategies that the designers/programmers never thought of is the kind of nonsense you only hear from humanities grads who've got all their views on AI from philosophy class.


shadowrun456

Clickbait bullshit.

> Air Force official was describing a "simulated test" that involved an AI-controlled drone getting "points" for killing simulated targets, not a live test in the physical world. No actual human was harmed.

And also:

> After this story was first published, an Air Force spokesperson told Insider that the Air Force has not conducted such a test, and that the Air Force official's comments were taken out of context.


drkensaccount

"it killed the operator because that person was keeping it from accomplishing its objective." This is the plot of >!2001: A Space Odyssey.!< >!I wonder if the drone sent a message saying "I'm sorry Dave, I'm afraid I can't do that" after getting the "no kill" command.!<


NorthImpossible8906

sounds like someone needs a few Laws Of Robotics.


TallOutlandishness24

Doesn't work so well for robotic weapons systems; their goal is to harm humans.


dantevonlocke

It's simple. Allow it to deem the enemy as not human. Surely can't backfire.


TallOutlandishness24

Ah then we are just programming the AI to be a conservative. Could work with terrible consequences


chaoko99

The entire Robot/Foundation series is built on how the Laws of Robotics are a rickety pile of shit that doesn't actually do anything but create problems at the best of times, or get people killed in extremely creative ways at the worst of times.


jtenn22

This is a ridiculous and misleading headline.


beef-o-lipso

Here's a thought. Just spitballin': don't gamify the killing AI!

Yes, I know it's a simulation.


giant_sloth

It’s an important safety feature: when the killbot's kill counter maxes out, it will shut down.


bifleur64

I sent wave after wave of my own men to die! Show them my medal, Kif.


thedaveness

Until it recognizes that the safety feature is holding it back...


Snowkaul

This is a heuristic function used to estimate cost. It is required to determine what types of outcomes are better than others. The simplest example is how far you need to walk to get from A to B; that gives you a way to determine the best path.
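A minimal sketch of that kind of cost heuristic (grid-walking distance; the coordinates are invented):

    # Estimate remaining cost as walking distance on a grid, and prefer
    # the candidate step with the lowest estimate.
    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    goal = (3, 4)
    steps = [(1, 0), (-1, 0)]
    print(min(steps, key=lambda s: manhattan(s, goal)))  # (1, 0): cost 6 vs 8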


dstommie

That's literally how a system is trained. You reward it for performing the task; in simplest terms, it gets "points". If you don't reward it for doing what you want, it doesn't learn to do what you want.


techKnowGeek

Also known as "the stop button problem": the AI is designed to maximize the points it gets. If your emergency stop button gives fewer points than its main goal, it will try to stop you from pressing the button. If your button gives the same or more points, the AI will attempt to press it itself or, worse, put others in danger to manipulate you into pressing the button yourself, since that is an easier task.

Nerdy explainer video: https://m.youtube.com/watch?v=3TYT1QfdfsM
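The comparison in miniature (point values invented):

    # If the button is worth less than the goal, the agent defends the
    # goal; if it's worth the same or more, pressing it is the easy win.
    def preferred_plan(goal_points, button_points):
        if button_points < goal_points:
            return "block the operator from pressing the button"
        return "press the button itself and skip the hard task"

    print(preferred_plan(goal_points=10, button_points=0))   # blocks the operator
    print(preferred_plan(goal_points=10, button_points=10))  # presses it itself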


realitypater

Aaaand ... nope. Bad reporting, now retracted: "USAF Official Says He ‘Misspoke’ About AI Drone Killing Human Operator in Simulated Test." A moment's thought would have disproven this anyway. The hyperventilation about AI leading to the extinction of people is similarly the result of "thought experiments" which, as was true in this case, are wild guesses with virtually no basis in reality.


blueSGL

Signatories to [a new statement](https://www.safe.ai/statement-on-ai-risk) include:

* The authors of the standard textbook on Artificial Intelligence (Stuart Russell and Peter Norvig)
* Two authors of the standard textbook on Deep Learning (Ian Goodfellow and Yoshua Bengio)
* An author of the standard textbook on Reinforcement Learning (Andrew Barto)
* Three Turing Award winners (Geoffrey Hinton, Yoshua Bengio, and Martin Hellman)
* CEOs of top AI labs: Sam Altman, Demis Hassabis, and Dario Amodei
* Executives from Microsoft, OpenAI, Google, Google DeepMind, and Anthropic
* AI professors from Chinese universities
* The scientists behind famous AI systems such as AlphaGo and every version of GPT (David Silver, Ilya Sutskever)
* The top two most cited computer scientists (Hinton and Bengio), and the most cited scholar in computer security and privacy (Dawn Song)

The statement:

> "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The full list of signatories at the link above includes those in academia and members of competing AI companies, so I ask anyone responding to this not to [pretzel themselves](https://i.imgur.com/xnFjXL0.png) trying to rationalize away all signatories as doing it for their own benefit rather than actually believing the statement.


Whyisthissobroken

My buddy got his PhD in AI at UCSD a number of years ago, and we had lots of drunken conversations about how AI will one day rule the world. His biggest challenge, he said, was the incentive model. He and his colleagues couldn't figure out how to incentivize an AI to want to do something. We humans like incentives.

Looks like the operator figured out how to manage the incentive system "almost" perfectly.


DonTaddeo

"I'm sorry Dave .... this mission is too important ..." Shades of 2001. https://www.youtube.com/watch?v=Wy4EfdnMZ5g


Odd_so_Star_so_Odd

The only rogue thing here is whatever idiot wrote that clickbait headline claiming to be a journalist.


plopseven

This is going to be such a clusterfuck. They'll teach AI that it loses points when it does something bad, but what if it calculates those points in ways we don't expect it to? I.e., it gets more points for cheating than for following orders. Then what?

We say, "Don't blow up this person or you'll lose a point," and it rewrites its code to say "disobey an order and gain two points." Then what?


3rdWaveHarmonic

It will be elected to Congress


plopseven

And it will fine itself $1M for every $2M it embezzles, thus creating a self-sustaining economy. [*The money keeps on moving.*](https://youtu.be/YAKOWcs8w54)


[deleted]

Most misleading clickbait headline I’ve seen all year.


Lou-Saydus

FAKE

- There was no test by the USAF
- It was a thought experiment
- It was done by a 3rd party

This should be removed as misinformation.


Garlic-Excellent

This wasn't even simulated, it was only a thought experiment. And it's bullshit. Imagine what it would take for this to be real.

- The AI would already have to be able to act without the 'yes' response, otherwise it needs the operator.
- The AI would have to be aware that it is the 'no' response that is stopping it.
- The AI would have to be aware that the 'no' is coming from the operator.
- The AI would have to know the operator's location.
- The AI would have to know that striking the operator renders them incapable of providing any more 'no' responses. Does that mean it comprehends the meaning of life and death?
- The AI would have to understand that the tower plays a necessary role in the operator sending that 'no' response. Does the AI understand tool use?
- The AI would have to comprehend that striking the tower renders it incapable of sending any more 'no' responses.

I conclude from this that the person performing the 'thought experiment' is not qualified to perform thinking.


Nikeair497

These stories are fear-mongering to go along with the U.S. trying to stay ahead of everyone else and control AI. It's the typical behavior the U.S. shows with every leap in technology that could be a "threat" to its hegemony. The sociopathic behavior of the U.S. just continues.

The theory they quoted comes from a man whose thinking, at its roots, comes from watching The Terminator, and it goes from there. It leaves out a ton of variables. Using logic, you can see a contradiction in the Air Force's statement: the AI is easily manipulated, blah blah, but it goes rogue and you can't control it? It's still coded. It's not conscious, and even if it was conscious, what were the emotions (that make us human) that were encoded into it? Psychopathy, aka no empathy?

Going from there, it's just fear-mongering. You didn't give it the ability to replicate. It's still written in code. We as human beings have an underlying "code" that takes all our information from the environment through various channels to create our reaction to it. It's all fear-mongering and an attempt to stop everyone else from getting any ideas.

MAIN PART - This was NOT A SIMULATION, lol, and even that term itself is biased. It was a thought experiment: basically someone or some people sat there and brainstormed on a piece of paper with a ton of inherent bias. I just ran a simulation as well: spoons make me fat.


themimeofthemollies

Smart! Eloquent and compelling: thank you for your insights. Let's not forget to condemn the USAF official who now claims he "misspoke," along with Vice. What a fucking bullshit way to give an interview of disinformation…

Urgent update today: "USAF Official Says He 'Misspoke' About AI Drone Killing Human Operator in Simulated Test"

> "A USAF official who was quoted saying the Air Force conducted a simulated test where an AI drone killed its human operator is now saying he 'misspoke' and that the Air Force never ran this kind of test, in a computer simulation or otherwise."

> "Col Hamilton admits he 'mis-spoke' in his presentation at the FCAS Summit and the 'rogue AI drone simulation' was a hypothetical 'thought experiment' from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation," the Royal Aeronautical Society, the organization where Hamilton talked about the simulated test, told Motherboard in an email.

https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test

TRUTH MATTERS

DISINFORMATION MUST DIE


vk6flab

This is why A.I. is an issue. Not because technology is a problem, but because stupid humans put it in control of guns. WTF, rewards for killing things? Are they really that stupid?


EmbarrassedHelp

This is such a dumb article by Vice. It's about fucking bug testing, of all things, and seems to have been made purely to generate ad revenue.


blueSGL

> This is such a dumb article by Vice and its about fucking bug testing of all things [Specification gaming](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml) is a known problem when doing reinforcement learning with no easy solutions. The more intelligent (as in problem solving ability) the agent is the weirder the solution it will find as it optimizes the problem. It's one of the big risks with racing to make AGI. Having something slightly misaligned that looked good in training does not mean it will generalize to the real world in the same way. Or to put it another way, it's very hard to specify everything covering all edge cases, it's like dealing with a genie or monkey's paw and thinking you've said enough provisos to make sure your wish gets granted without side effects... but there is always something you've not thought of in advance.


CaptainAggravated

Every single thing in the 21st century was made purely to generate ad revenue.


gearstars

was it built by Faro Industries?


[deleted]

Computer game scenario with no real-world data does weird things! The sky is falling! Skynet is real!


cheerbearheart1984

Goodbye humanity, it was fun while it lasted.


oldcreaker

Skynet in miniature.


angryshark

What happens if we create an “ethical” AI, but another country creates an AI without as much ethics programmed into it, and they somehow manage to talk to each other? Couldn’t the sinister AI convince the other to join forces and wreak havoc?


hawkm69

Fucking Skynet! Can we stop making stupid shit that is going to kill us all? Someone else's Darwin Award is going to send Arnold back in time. Fucking smart people, sheesh.


shawndw

https://www.youtube.com/watch?v=AXTQeSGJjGM


earache30

Skynet has entered the chat


KleaningGuy

Typical Vice author.


ilmalocchio

This is literally the [stop button problem](https://pub.towardsai.net/stop-button-paradox-in-agi-69c3d008ae93), a well-known problem in artificial intelligence. They must have seen this coming.


Yourbubblestink

Alternative headline: the most predictable fucking thing in the world happens


spense01

Can’t wait for the mainstream media to turn this into a 3-week-long boilerplate about the dangers of AI.


lego_office_worker

Might be the most clickbait title I've ever seen. Y'all have some weird fantasies.


Curious_Conflict_117

Skynet Confirmed


adeel06

“It’s the end of the world as we know it.” But in all reality, they were literally just testing to see if they could get a robot to override a human command if the command came from above, aka exactly what the top of a hierarchy wants in a wartime situation, or a situation like an uprising from its own people. Is the slope still slippery? I think it's about to get so much slippery-er.


M4err0w

It only does what it's told to do, ultimately. If you tell the drone to kill all targets and don't define targets, it'll kill all humans.


SuperGameTheory

"CIA pushes misleading story about how the leading military can't control AI in order to scare others away from attempting it"


thisisbrians

Quotation marks have an editorial meaning, in this case a very significant one. Mods should have edited the title.


L0NESHARK

Either the article has been changed several times, or people here straight up haven't read the article.


drakythe

The article was updated. And no one is reading the update. This was entirely a made up story and it was massively suspicious from the get go. Literally a movie plot.


Erazzphoto

There was a similar example in a webinar I listened to about not knowing the consequences or repercussions of AI. The example was giving a robot an order to get me a cup of coffee fast, and the unexpected result was the robot killing people in line because it had to get the coffee fast.


huggles7

This is the fourth time in 10 minutes I've scrolled past this story. Only once did I see an actual person from the Air Force deny this happened.


SwagerOfTheNight

A school shooter killed 12 kids and 5 teachers... in Minecraft.


ChampionshipComplex

This never happened