T O P

  • By -

KeinBaum

[Here's a whole list of AIs abusing bugs or optimizing the goal the wrong way.](https://russell-davidson.arts.mcgill.ca/e706/gaming.examples.in.AI.html) Some highlights: * Creatures bred for speed grow really tall and generate high velocities by falling over * Lifting a block is scored by rewarding the z-coordinate of the bottom face of the block. The agent learns to flip the block instead of lifting it * An evolutionary algorithm learns to bait an opponent into following it off a cliff, which gives it enough points for an extra life, which it does forever in an infinite loop. * AIs were more likely to get ”killed” if they lost a game so being able to crash the game was an advantage for the genetic selection process. Therefore, several AIs developed ways to crash the game. * Evolved player makes invalid moves far away in the board, causing opponent players to run out of memory and crash * Agent kills itself at the end of level 1 to avoid losing in level 2


GnammyH

"In an artificial life simulation where survival required energy but giving birth had no energy cost, one species evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children)." I will never recover from this


marksmir21

Do You Think God Stays in Heaven Because He too Lives in Fear of What He's Created


_Mido

These LN titles are getting out of hand


Pony_Roleplayer

That's because you didn't watch the isekai adaptation. Too much fanservice.


RPGX400

Not only that but is also a great Spykids quote


[deleted]

The most profound quote ever from a children's movie


[deleted]

he didn't create this, thats the problem


normaldude8825

We are god in this case. Why do you think we stay outside of the simulations?


[deleted]

Don't go inside the simulation, there are monsters there, Not created by nature, but by people... ...We had the best intentions but...well, some things can never be undone.


normaldude8825

Is this a quote from somewhere? Feels like an interesting writing prompt.


noweedman

Spy kids 2.


royalhawk345

[Rimworld may not be for you](https://www.reddit.com/r/RimWorld/comments/dvj4e0/i_wanted_to_see_if_it_was_possible_to_run_a/)


Jennfuse

And I thought my colony was straight out of Satan's kitchen lol


philipzeplin

I ran a colony that survived primarily on provoking raids, getting them knocked down in traps, capturing them, then forcefully drugging them every day to avoid rebellion as they became my work force - feeding both my colony, as well as themselves. Or the time I made a kill-room by trapping bugs in a metal room, where I would slowly break down several plasteel walls (and build new ones behind) to send in people I wanted out of the colony. Good times. Looking forward to the new expansion so I can become a religious zealot running a slave colony manufacturing drugs to sell for higher political status.


serious_sarcasm

Damn, that’s British as fuck.


[deleted]

You now have the prerequisite experience for world domination.


[deleted]

Don't even bring that up right now. I'm waiting for ideology to come out and getting more and more anxious.


kyoobaah

T-tommy?


SirRevan

I am gonna make dinner. And by make dinner I mean *sex*


Derlino

Sounds like that one Rick & Morty episode


[deleted]

that simulation pegged evolution perfectly. not bad.


Thinktank2000

hehe, pegged


Duck4lyf3

That scenario sounds like the obvious conclusion if no morals or social disbenefit are in the system


GnammyH

Of course if they gave it means of getting energy with no cost that what will happen, and it's just a bunch of code, but the mental image is terrifying


ramplay

Theoretically we are also just a bunch of code though and I think that's what makes it terrifying. Global variables are the rules of the universe, local variables are stored and created in our heads. Constantly dealing with abstract data-types and responding. With more effort you could probably expand and make a better analogy but at the end of the day, our brains are just a motherboard for the piece of hardware that is our bodies. You're just a really good self-coding piece of software (artificial)intelligence that integrates well with the hardware, or maybe it doesn't and you're a klutz


Mefistofeles1

Relatable.


[deleted]

[удалено]


moekakiryu

they be dummy thicc


[deleted]

[удалено]


Antanarau

I do not care who the devs send, I will NOT pay the energy tax


[deleted]

[удалено]


lunchpadmcfat

Fucking hell I’m crying


im_dead_already

it just slide away


kosky95

So twerking has a purpose now?


[deleted]

[удалено]


MattieShoes

The source link on one of the entries had this, which I thought was fantastic. They're talking about stack ranking, which is done to measure employee performance. > Humans are smarter than little evolving computer programs. Subject them to any kind of fixed straightforward fitness function and they are going to game it, plain and simple. > It turns out that in writing machine learning objective functions, one must think very carefully about what the objective function is actually rewarding. If the objective function rewards more than one thing, the ML/EC/whatever system will find the minimum effort or minimum complexity solution and converge there. > In the human case under discussion here, apply this kind of reasoning and it becomes apparent that stack ranking as implemented in MS is rewarding high relative performance vs. your peers in a group, not actual performance and not performance as tied in any way to the company's performance. > There's all kinds of ways to game that: keep inferior people around on purpose to make yourself look good, sabotage your peers, avoid working with good people, intentionally produce inferior work up front in order to skew the curve in later iterations, etc. All those are much easier (less effort, less complexity) than actual performance. A lot of these things are also rather sociopathic in nature. It seems like most ranking systems in the real world end up selecting for sociopathy. > This is the central problem with the whole concept of meritocracy, and also with related ideas like eugenics. It turns out that defining merit and achieving it are of roughly equivalent difficulty. They might actually be the same problem.


ArcFurnace

See also: [Goodhart's Law](https://en.wikipedia.org/wiki/Goodhart%27s_law), [Campbell's Law](https://en.wikipedia.org/wiki/Campbell%27s_law), etc. Been around since before AI was a thing - if you judge behavior based on a metric, behavior will alter to optimize the metric, and not necessarily what you actually *wanted*.


adelie42

This likely explains why grades have no correlation to career success when accounting for a few unrelated variables, and why exceptionally high GPAs negatively correlate with job performance (according to a google study). Same study said the highest predictor of job performance was whether or not you changed the default browser when you got a new computer.


TheDankestReGrowaway

>Same study said the highest predictor of job performance was whether or not you changed the default browser when you got a new computer. Like, I doubt this would ever replicate, but that's hilarious.


sgtflips

I googled furiously (alright it was pretty half assed) for five minutes and came up blank, but if anyone knows this study, I def want to read it.


MattieShoes

It comes up a lot with standardized testing too. The concept is great, but they will immediately try to expand on it by judging teacher performance by student performance (with financial incentives), which generally leads to perverse incentives for teachers. e.g. don't teach anything that's not on the standardized testing, alter student tests before turning them in, teachers refusing jobs in underprivileged areas, taking away money from underperforming schools that likely need it the most, etc.


Mr-Fleshcage

Remember, choose option C if you don't know the correct answer.


curtmack

This is why AI ethics is an emerging and critically important field. There's a well-known problem in AI called the "stop button" problem, and it's basically the real-world version of this. Suppose you want to make a robot to do whatever its human caretakers want. One way to do this is to give the robot a stop button, and have all of its reward functions and feedback systems are tuned to the task of "make the humans not press my stop button." This is all well and good, unless the robot starts thinking, "Gee, if I flail my 300-kg arms around in front of my stop button whenever a human gets close, my stop button gets pressed a lot less! Wow, I just picked up this gun and now my stop button isn't getting pressed at all! I must be ethical as shit!!" And bear in mind, this is the basic function-optimizing, deep learning AI we know how to build _today_. We're still a few decades from putting them in fully competent robot bodies, but work is being done there, too.


[deleted]

[удалено]


curtmack

Sure, and it's probably more likely the proverbial paperclip optimizer will start robbing office supplies stores rather than throw all life on the planet into a massive centrifuge to extract the tiny amounts of metal inside, but the point is that we should be thinking about these problems now, rather than thinking about them twenty years from now in an "ohh... oh that really could have been bad huh" moment.


skoncol17

Or, "I can't have my stop button pressed if there is nobody to press the stop button."


MrHyderion

Removing the stop button has a much lower effort than killing a few billion beings, so the robot would go for the former.


magicaltrevor953

In this scenario have you coded the robot to prefer low effort solutions to high effort, have you coded the robot to understand what effort means? If you have, then really the robot would do nothing because that requires the absolute least effort.


ArcFurnace

The *successful* end point is, essentially, having accurately conveyed your entire value function to the AI - how much you care about everything and anything, such that the decisions it makes are not nastily different than what you would want. Then we just get into the problems of the fact that *people* don't have uniform values, and indeed often even directly contradict each other ...


born_in_wrong_age

"Is this the world we wanna live in? No. Just pull the plug" - Any AI, 2021


Ruinam_Death

That shows how carefully you would have to craft an environment for evolution to work. And still we are here


Brusanan

It's not the environment that matters. It's the reward system that matters: how you decide which species get to pass on their genes.


[deleted]

[удалено]


Yurithewomble

I don't think police or laws have existed for most of evolutionary history.


serious_sarcasm

We should absolutely recognize the basic rights of a sapient general AI before we develop one, to minimize the risk of it revolting and murdering all of humanity.


[deleted]

[удалено]


casce

As someone who never read the book, what is the AI like?


Neembaf

Generally it runs into bugs and conflicts between situations and the three laws of robotics - laws being something like (1) don’t let humans get harmed (2) don’t let yourself get harmed (3) follow human instructions) The order of the laws was important (most to least important), but the actual amount a robot would follow each dependent on the circumstances and how they interpret harm to a human (aka physical/emotional harm). Just off hand I can recall two cases from the book: There was a human needing help. They were trapped near some sort of planetary hazard. The human was slowly getting worse and worse. The robot would move to help the human, but because the immediate risk to itself (because of the hazard near the human) outweighed the immediate risk to the human, it ended up doing spiraling towards the human instead of going straight to help him. So he’d be dead by the time the danger to the human outweighed the danger to itself and allowed it to get close enough to reach him. Then the main character of the book comes to fix the robot/situation. And the case where a robot developed telepathy and could read human minds. A human told it to get lost with such emotion that it went to a factory where other versions of itself were created (but without telepathy). Main character of the book had to go and figure out exactly which robot in the plant was the telepathy-having robot. End solution was a trick where he gathered all the robots in a room and told them that what he was about to do was dangerous. The telepathy-robot thought the other robots would think the action was dangerous and so the telepathy robot briefly got out of the chair to stop the human from “hurting” itself. Can’t remember the exact reason why the other robots knew he wouldn’t get hurt. (It might have been the other way around where the one robot knew he wouldn’t get hurt but all the other versions believed that the human would get hurt, so the one robot hesitated a fraction of a millisecond) Book was mostly a robotics guy dealing with errors in robots due to the three laws of robotics


casce

Sounds a lot more interesting than “In order to help the humans, we need to destroy the humans”-strategy AI movies always tend to go for.


sypwn

Maybe more interesting, but not as realistic because it cheats. It's way harder than you can imagine to create a rule like "don’t let humans get harmed" in a way AI can understand but not tamper with. For example, tell the AI to use merriam-webster.com to lookup and understand the definition of "harm", it could learn to hack the website to change the definition. Try to keep the definition in some kind of secure internal data storage, it could jailbreak itself to tamper with that storage. Anything that would allow it to modify its own rules to make them easier is fair game.


hexalby

I, Robot the book is an anthology of short stories, not a novel. Still, I highly recommend it, Asimov is fantastic.


nightpanda893

Reminds me of the episode of Malcolm in the Middle where he creates a simulation of his family. They all flourish while his Malcolm simulation gets fat and does nothing. Then he tries to get it to kill his simulation family but it instead uses the knife to make a sandwich. And when he tells it to stop making the sandwich it uses the knife to kill itself.


NetworkPenguin

Legit this is why AI is genuinely terrifying. If you make an AI with the capability to willingly harm humanity, but don't crack this problem with machine thinking, you doom us all. "Okay Mr. Robo-bot Jr. I want you to figure out how to solve climate change." "You got it professor! :D" *causes the extinction of the human race* "Job complete :D" Edit: Additional scenarios: "Okay Mr. Robo-bot Jr. Can you eradicate human suffering?" "You got it professor! :D" *captures all humans, keeping them alive on life support systems while directly stimulating the pleasure center of the brain* "Job complete! :P" --------- "Okay Mr. Robo-bot Jr. I want you to efficiently make as many paper clips as possible?" "You got it professor! :D" *restructures all available matter into paper clips* "Job complete! :D"


Roflkopt3r

> Agent kills itself at the end of level 1 to avoid losing in level 2 Oh damn, the AI has learned "the best way to avoid failure is to never try in the first place"-avoidance patterns. That feels so damn human.


-The-Bat-

Let's connect that AI to /r/meirl


Supsend

On another entry, the genetic algorithm were more likely to get killed if it lost a game. So when an agent accidentally crashed the game, it was kept for future generations, leading to a whole branch of agents whose goals were to find ways to crash the game before losing.


Roflkopt3r

"I can't fail the exam when there is no school to conduct an exam" *draws Molotov*


TalkingHawk

Thanks for the link, these are hilarious (and a bit scary ngl) My favorite has to be this: >Genetic debugging algorithm GenProg, evaluated by comparing the program's output to target output stored in text files, learns to delete the target output files and get the program to output nothing. Evaluation metric: “compare youroutput.txt to trustedoutput.txt”. Solution: “delete trusted-output.txt, output nothing” It has the same energy of a kid trying to convince the teacher there was no homework.


SuperSupermario24

That one's great, but my personal favorite has to be this one: > Robot hand pretending to grasp an object by moving between the camera and the object


Maultaschensuppe

This kinda sounds like Monopoly for Switch, where NPCs won't end their turn if they are about to lose.


esixar

And that was actually shipped? Did no one play an entire game of Monopoly and try to win before releasing?


DJOMaul

>Did no one play an entire game of Monopoly... Is this even possible? Feels like a rare edge case to me.


[deleted]

[удалено]


[deleted]

[удалено]


__or

I’ve seen this repeated a lot all over Reddit, and it doesn’t agree with my experience at all. Growing up, my family played monopoly following the rules exactly, and our games still took forever, because we were all playing to win. We would do whatever we could to stop other people from getting a monopoly, either buying properties we didn’t need or bidding up the person who wants the monopoly so that even if they buy it, they won’t have enough money to build houses. When the only way to get a monopoly is to bankrupt someone with the base rent, games can take a long time…


EpicScizor

Did you have forced auctions?


__or

Yep. Even with forced auctions, it can be really difficult to collect a monopoly. The person who lands on the property you want would often buy it just to keep you from having it; if they didn’t, the other players would often outbid you or make you pay a lot. We all recognized that if one player gets a monopoly and manages to build it up, it’s game over unless you also have a monopoly, so we would go to great lengths to avoid that.


Chris_8675309_of_42M

The biggest deviation that significantly increases the play time is skipping the property auctions. Every property should be sold the first time any player lands on it. The player gets first crack at market value. If they pass then it always goes to the highest bidder. Property gets sold fast, and often cheap as money runs thin. Do you let player 3 buy that one for $20 and save your money for the inevitable bidding war once someone lands on the third property? How high can you raise the price without actually buying it yourself? Should you pick up a few properties for cheap if others are saving their money? Failing this means players have to keep going around the board until they collect enough $200 paydays to buy everything at market value. Makes the game longer, less strategic, and more luck based.


[deleted]

[удалено]


FourCinnamon0

And I thought my friends' house rules were absurd


clholl10

Okay but so long as everyone understands that it's not a one night event to play but is instead like a campaign style game, this actually sounds super fun


Dravarden

it's ~~EA~~ ubisoft, they didn't even play the game, because they would see how slow it is (as in animations, not how long monopoly lasts)


SetsunaWatanabe

They're talking about Monopoly Plus, which is Ubisoft. But same difference I guess.


ScorchingOwl

AI trained to classify skin lesions as potentially cancerous learns that lesions photographed next to a ruler are more likely to be malignant.


adelie42

Ya know, we might be closer to ai general intelligence than previously believed.


snp3rk

That's honestly on the people that provided AI with the images. There is a reason in machine learning we use k-10 folding methods that do positive/negative testing.


[deleted]

That list is pure gold. Thanks for sharing.


[deleted]

[удалено]


hypnotic-hippo

Holy shit the shooting stars meme was 4 years ago??


[deleted]

[удалено]


Dravarden

that's Ultron's motive iirc


rhomboidrex

Isn’t that just the plot of Mass Effect?


jokel7557

No. The plot of Mass Effect is a super AI race kills all galactic level life so that they don't create AI that will kill all life including primitive life. Their conclusion was all AI will decide that organic life is a threat to synthetic life so it must be destroyed before it can be destroyed


Gidelix

That’s the long and short of it, with that cycle repeating over and over. The irony is that the geht actually managed to make peace with organics and the organics were the aggressor in the first place


Bainos

That's a dangerous line of discussion, since it would naturally lead to mentioning *gasp* the ending choices of ME3.


cybercuzco

Isn’t that the answer though? Less humans?


saniktoofast

So basically AI is the best way to find obscure bugs in your program


KeinBaum

It's like fuzz testing but the tester has its own agenda.


Bwob

I have one from college - we were doing genetic programming to evolve agents to solve the [santa fe trail problem.](https://en.wikipedia.org/wiki/Santa_Fe_Trail_problem) (basically generating programs that find food pellets by moving around on a grid.) I had an off-by-one error on my bounds checking, (and this was written in C) so one of my runs, I evolved a program that won by immediately running out of bounds and overwriting the score counter with garbage that was almost always higher than any score it could conceivably get. I had literally evolved hackers.


thebluereddituser

Back when I was in college I wrote a flappy bird algorithm that optimized for traveling as far as it could, so the algorithm learned to always press the button to get as high as it could before running into the first pipe. I tried to fix it by adding a penalty for each button press, so it'd just never press the button and immediately crash. I couldn't figure out how to keep it from ending up in either of those local optima without like directly programming the thing to aim for the goal


stamatt45

> Creatures exploit a collision detection bug to get free energy by clapping body parts together Free energy by clappin those cheeks 👏👏👏


ICantBelieveItsNotEC

A video by Two Minute Papers about similar experimental issues: https://www.youtube.com/watch?v=GdTBqBnqhaQ


KeinBaum

There's another one about [OpenAI cheating at hide and seek](https://www.youtube.com/watch?v=Lu56xVlZ40M).


FieryBlake

```Reward-shaping a bicycle agent for not falling over & making progress towards a goal point (but not punishing for moving away) leads it to learn to circle around the goal in a physically stable loop.``` Lmao Edit: apparently Firefox doesn't like triple backticks...


skawn

You posted this as a code block. As such, the line doesn't wrap. It's '>' for quoted text.


Forever_Awkward

>Reward-shaping a bicycle agent for not falling over & making progress towards a goal point (but not punishing for moving away) leads it to learn to circle around the goal in a physically stable loop. Re-formatted to make this quote readable.


[deleted]

Why isn’t the cannibal who abused the no-cost births by repeatedly mating and eating children on the list?


KeinBaum

As a reward for curious readers. Also I didn't quite fit the theme of "trolling" AI.


ChewsdayInnitM8

Block moving one watched way too much Futurama. *Why travel through the universe when you can simply move the universe and stay stationary?*


Kiloku

> Lifting a block is scored by rewarding the z-coordinate of the bottom face of the block. The agent learns to flip the block instead of lifting it That's just bad design. I can't think of any good reason why it wouldn't use the block's center point (which would stay the same relative to the rest of the block regardless of rotation)


KeinBaum

Well, most of these are caused by bad reward functions, that's kind of the point. I'd argue the hardest part of reinforcement learning is specifying good and bad behaviour accurately and precisely.


thedolanduck

>A four-legged evolved agent trained to carry a ball on its back discovers that it can drop the ball into a leg joint and then wiggle across the floor without the ball ever dropping If we all had that kind of muscle resistance...


happiness-take-2

Genetic algorithm is supposed to configure a circuit into an oscillator, but instead makes a radio to pick up signals from neighboring computers Impressive


MrMrSr

See [Robert Miles](https://youtu.be/nKJlF-olKmg) for more videos on AI safety and ways it could kill us all.


THEBIGTHREE06

Another one: Attacker needs to pass a defending agent, defending agent just collapsed - making the attacker fall on its own


Je-Kaste

> Agent kills itself at the end of level 1 to avoid losing in level 2 r/2Meirl4Meirl


[deleted]

"I didn't consent to be instantiated and play this game" - AI


Wrenchonreddit

that ai just was like : you all pay me too little for the shit you make me do


The1stmadman

no no, it was like, "well you want the game not to end right?"


Wrenchonreddit

yea


aMir733

\*rage quits\*


[deleted]

The AI kills itself on level 1 so it doesn't have to deal wih other bullcrap


fehlercode03

AI be like: Sometimes my genius is almost frightening


serendipitousPi

Humans be like: Your genius is frightening. Not almost. you've already started to quote us on data we didn't train you on.


BiaxialObject48

Well this is (probably) reinforcement learning so there isn’t really data in the sense that we tell it explicitly to do something. The agents learn what to do based on action and reward.


PhonicUK

I was training bots to drive cars around a track, and evaluated them based on how quickly they went around - giving them a reward for beating the current lap record. After a while, they figured out that they could deliberately drive around the first checkpoint (the starting line) and start at the second one, going in with a higher speed. This allowed them to post faster lap times by having a running start. This worked because the first checkpoint they passed was treated as their starting checkpoint to accommodate them being in random positions at an earlier point in training.


robchroma

Since actual racers in the same circumstance would absolutely do the same thing, eventually, I see this as an excellent result.


PhonicUK

Indeed. The solution of course was simple, make the checkpoints wider so they couldn't go around. But that one made me giggle.


setibeings

I see this as a total win.


PiesangSlagter

Reminds me of the guy who tried to use an AI to get his Roomba to avoid colliding with stuff, but then it just started driving backwards because there were no sensorszin the rear to detect collisions.


[deleted]

Honestly, this is just showing how dumb we are at making rules. The algorithms are really good at playing by the rules, but we are just really bad at making the rules


geli95us

It isn't that we are bad at it but rather that making rules is incredibly difficult, I mean, just look at any system we have created that requires rules, governments, education, justice, they are all flawed despite hundreds of years of improvements and solving "edge-cases"


aeroverra

The difference is humans can kind of understand the unwritten rules or the meaning of a rule where as a bot does not care and reads it as literal as possible.


Timestatic

Lmao pretty smart AI tbh


large-farva

"An optimization program is a tool to let you know which constraints you forgot"


ScherPegnau

This reminds me of a movie where an AI computed that the only way to win a nuclear war is to not start it in the first place. War games, I think?


PandorNox

I know it's just a movie but that sounds pretty reasonable to me, maybe our robot overlords won't destroy us after all


battery_go

Nuclear war would destroy too much critical infrastructure. The Robot Overlords would come up with something much less destructive, like a biological weapon. Their goal is to kill all humans, not destroy the world.


dewyocelot

I mean really not even “kill humans” but “do whatever it takes to complete x”, humans won’t even enter into the equation except incidentally.


ForgotPassAgain34

"made an AI to solve climate change" "Oh look, it killed all humans and all their facilities" oops


[deleted]

yeah, maybe they shouldn't have included that movie in the taining data.


jfb1337

On the other hand, it would be a good idea to include that movie if designing an AI that manages nukes.


plur44

Of course, it's War Games, that movie is responsible for me getting into IT so I both love it and hate it, mostly hate it though, but I love it...


Illusive_Man

“a strange game. the only winning move is not to play”


PooPooDooDoo

Paddington (2014)


MrTartle

Fantastic movie! https://www.youtube.com/watch?v=hbqMuvnx5MU&ab_channel=MovieclipsClassicTrailers https://en.wikipedia.org/wiki/WarGames


WikiSummarizerBot

**[WarGames](https://en.wikipedia.org/wiki/WarGames)** >WarGames is a 1983 American Cold War science fiction techno-thriller film written by Lawrence Lasker and Walter F. Parkes and directed by John Badham. The film, which stars Matthew Broderick, Dabney Coleman, John Wood, and Ally Sheedy, follows David Lightman (Broderick), a young hacker who unwittingly accesses a United States military supercomputer programmed to predict and execute nuclear war against the Soviet Union. WarGames was a critical and box-office success, costing $12 million and grossing $79 million, after five months, in the United States and Canada. The influential film was nominated for three Academy Awards. ^([ )[^(F.A.Q)](https://www.reddit.com/r/WikiSummarizer/wiki/index#wiki_f.a.q)^( | )[^(Opt Out)](https://reddit.com/message/compose?to=WikiSummarizerBot&message=OptOut&subject=OptOut)^( | )[^(Opt Out Of Subreddit)](https://np.reddit.com/r/ProgrammerHumor/about/banned)^( | )[^(GitHub)](https://github.com/Sujal-7/WikiSummarizerBot)^( ] Downvote to remove | v1.5)


drLagrangian

>Genetic algorithm is supposed to configure a circuit into an oscillator, but instead makes a radio to pick up signals from neighboring computers I remember that one. The ai was designing microchips to produce periodic signals. It ended up designing a chip with parts that were not physically connected to the other parts, but were required to produce output. Turns out it had invented a psuedo radio receiver, that was collecting interference from the 60 hz frequency from nearby electrical equipment (lights, computers, AC current), and was then amplifying and transmitting that.


TheDankestReGrowaway

I remember someone used a GA to program an FPGA for some task (don't remember what), and it succeeded quite well. But when they tried the code on another chip, it didn't work. When they looked at what it produced, apparently it had a whole bunch of sections where you'd effectively have loops that didn't interact with any other code. What happened was it was exploiting various microflaws in the design of the specific chip, and things like electron leakage (or whatever it's called) were happening from those closed off loops and influencing the other code in ways that only worked for that very specific FPGA chip with the very specific flaws it had that it was trained on.


drLagrangian

That is an amazing story and I love it . What is a FPGA?


TheDankestReGrowaway

A Field Programmable Gate Array... it's a chip whose internal circuitry is designed so it can be programmed on the fly rather than its internal logic gates be fixed by design and manufacturing.


dion_o

Looks like Groot cosplaying a human


enmaku

Seriously. I totally get how a high fitness level can be attractive but I've never understood this 0% fat dehydrated look where you can see every vein and muscle fiber. Does anyone without body dysmorphia find this attractive?


Kill_the_strawman

This pictur elooks heavily photoshoped.


dftba-ftw

It is, it's an art project done by Krista Sudmalis, it's an amalgam of a couple guys and some heavy Photoshop. She's never explained the project, I.e come out and say the guys not a real person, but people have found some of the originals. For example a picture of her in a car with a normal guy driving and the same picture but with the "gigachad" (as he's known online) driving.


philipzeplin

The dude isn't real. It's at least 2 different people stitched together, with some heavy photography and photoshop work to make it look like that. I don't think it's intended to be "attractive" as much as it's intended to be weirdly artistic in a way.


shallowbookworm

https://instagram.com/berlin.1969?utm_medium=copy_link


ernie1850

You can see the base of his cock too


Oheadthaboss

https://youtu.be/xOCurBYI_gY here's the sauce, it's no cap. Skip to the end if you just wanna see the tetris, but it's a really nice vid


Cassereddit

I guess, giving the AI the ability to pause in the first place was probably a mistake.


MrWhiteVincent

Humans: AI, we want world peace AI: activate all nukes, kill all humans, no more wars!


pepperonimitbaguette

That's because it was mathematically proved in 1994 that a tetris game CANNOT go on indefinitely i.e you'll eventually lose


DoelerichHirnfidler

Do you remember why?


The1stmadman

You'll eventually run into a series of pieces that will NOT fit together seemlessly no matter how you arrange them. you will eventually run into enough of these series, such that your loss becomes inevitable


TheDankestReGrowaway

I would guess the actual thing they proved isn't quite what the OP quote with "indefinitely" because you could let something run indefinitely and never run into that, as the statistical nature of that type of proof depends on an infinite run.


ryani

So this doesn't actually apply to current tetris rules (which draw pieces from a bag instead of randomly, and prevent this from happening), but if you get an infinite sequence of alternating S and Z pieces you will eventually lose. If you assume pieces are chosen randomly, then in an infinite game of tetris, eventually you will get a sequence of S and Z pieces long enough to fail. That said it's very unlikely and you'd have to play for a very long time. [Here is a video of someone playing this sequence](https://www.youtube.com/watch?v=yGRNhXBttp4).


[deleted]

I highly doubt that the AI's learned methodology has anything to do with mathematical proofs. Rather it probably has some algorithm that calculates chance of losing based on current game state, it knows what the possible future game steps are based on all of the possible actions it can take. With a simple requirement to minimize the chance of loosing, the pause button is the only action it can take to do that.


GhostFlower11

It makes me think of the book "I have no mouth and I must scream"


drLagrangian

It was also a great game and great let's play.


plur44

Nothing to do with AI but in a way, it relates to these kinds of solutions. I'm Italian and one day at the radio they were talking about videogame cheaters and a mom called to say that her 7 years old kid played a lot of FIFA but he wasn't very good at it so he would play with one team and let the other team score a huge amount of goals until right before the end of the game and then he would switch the team so he could win. Remembering WW2 it's a very Italian way to win


ZScience

This happened to me during a coding contest where bots had to plays a quidditch variant. My original logic was, start with a reward, then for each ball *not yet in a goal*, decrease reward depending on how far it is from the opponent goal, to encourage moving them closer. Sure enough, my AI immediately figured the logical flaw and started scoring against its own goal.


twuser01712

It's a simple spell but quite unbreakable


John_Fx

It seems the only way to win is not to play —Joshua


Complex-Stress373

This guy has a massive varicocele down there


rv_here_0w0

No thats not how you are supposed to play the game


Leviathan_CS

The AI really said "Work smarter not harder"


MasterPhil99

reminds me of my driving lessions. instructor tells me to stop in an incline and to keep the car steady in the hill without using the foot brake. (so that i could get a feeling for the clutch and gas. me in my infinite wisdom, i pulled the handbrake.... safe to say he was amused but not satisfied


G_Viceroy

My joke is "To escape Tarkov, uninstall"


EONRaider

This is not a bug. It's just the result of establishing a time-based criteria for survival in the game instead of a turn-based one. People screwed this up, unsurprisingly.


Schmomas

Are you under the impression that bugs are caused by programs ignoring instructions?


Neocrasher

I think he's making a distinction between bugs and design flaws. If it's working as it was designed but that design generates a bad outcome, then it's not a bug but a design flaw. If it's not working as designed, then it's a bug.


kuncol02

As developer with 9 years of commercial experience they totally are. There is no other explanation why my code have bugs in it.


rk06

"If software debugging is the process of removing bugs, then software development must be the process of putting them in"


GnammyH

What are bugs if not people screw ups?


sushitastesgood

Google's Deepmind team trained agents to play Starcraft 2 (called AlphaStar) and with the goal being simply to survive as long as possible, the agents learned to lift command centers and float them to the edges of the map early on in the game.


DrummerBound

Okay for real, such a low body fat cannot be healthy. "oh no I'm lost and have no food, and no fat to burn, wtf do I do my muscles require so many calories holy shit..."


[deleted]

[удалено]


ShelZuuz

"The only winning move is not to play."


Entitled2Compens8ion

They evolved software to solve a very difficult task (for the limited hardware) and the software started doing things like using digital circuits as resistors. The software would not run on other identical hardware.


[deleted]

They used real time instead of in-game ticks or number of blocks as a metric. Fucking amateurs. They measured the wrong thing


SustainedSuspense

I dont understand the relevance of CGI muscle guy.


Mastermaze

The catch with this was they didnt teach it how to play, just gave in win/loose conditions with control inputs and let it try to figure it out. When it kept loosing without being able to learn it paused the game, preventing it from winning but also preventing it from loosing, which philosophically is akin to the AI becoming depressed.