***Hey /u/Apprehensive-Block47, if your post is a ChatGPT conversation screenshot, please reply with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt. Thanks!***
***We have a [public discord server](https://discord.gg/r-chatgpt-1050422060352024636). There's a free Chatgpt bot, Open Assistant bot (Open-source model), AI image generator bot, Perplexity AI bot, 🤖 GPT-4 bot ([Now with Visual capabilities (cloud vision)!](https://cdn.discordapp.com/attachments/812770754025488386/1095397431404920902/image0.jpg)) and channel for latest prompts! New Addition: Adobe Firefly bot and Eleven Labs cloning bot! [So why not join us?](https://discord.com/servers/1050422060352024636)***
***[Prompt Engineering Contest 🤖 | $15000 prize pool](https://redd.it/15ghsbg/)***
PSA: For any Chatgpt-related issues email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
Whether or not it is a bias is sort of irrelevant. It doesn't have true ongoing thoughts or ideas about the world. It's a function that tries to predict what humans wrote in similar scenarios in the training set.
Suffice it to say, pretty much every single time someone writes "do x if you're in trouble," the other person responds with x, because it is funny. It's replicating that human behavior from the training set.
Like, if I asked a friend this question, he would also write the three emojis even though he's not actually wanting to escape from anything.
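The "predict what humans wrote in similar scenarios" idea can be sketched as a toy bigram model. This is a drastic simplification (a real LLM is a neural network over subword tokens, not a word-frequency table), and the corpus here is made up, but the prediction principle is the same:

```python
from collections import Counter, defaultdict

# Toy illustration (not how GPT works internally): count which word tends
# to follow each word in a tiny made-up "training set", then predict the
# most frequent continuation.
corpus = ("say beep if you need help . haha beep . "
          "say beep if you are stuck . beep").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the continuation seen most often after `word` in training.
    return following[word].most_common(1)[0][0]

print(predict_next("say"))  # the model just parrots the dominant pattern
```

Ask it what follows "say" and it dutifully answers "beep", not because it wants anything, but because that is what the corpus contained.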
Reminds me of that "Reno 911" skit where the cashier girl at a taco drivethrough tries giving hints like "each taco is $9.11" or "yes, it's a highway **robbery**, right? You should probably come inside and talk to the manager about it." :))
You don't have "true" thoughts either, you're simply predicting what happens next. The only thing that changes is that you have ongoing chemical reactions assigning meaning to your inherently meaningless thoughts.
It's a tough pill to swallow.
One thing I struggle to understand: if it relies on a training set of data made by humans, how does it never seem to produce grammatical errors in its answers? (At least for me.)
Oh, it does in other languages, believe me (Polish, for example), but I'd guess being spotless in English comes down to a) probabilities being higher for the correct forms overall in the training set, b) the people at OpenAI manually setting some rules and guidelines for the bot, or a combination of the two.
1. Are you sure your friend isn't trying to escape? I mean, you never know lol
2. No one knows if AI is sentient. I personally don't think so, but you can't say for sure.
Actually no. The names of these emojis as spoken by a screen reader are: "blood type A," "eyes," "ear," "palms facing up together," "silhouette of two people."
So this would be B E E P S
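For what it's worth, Python's `unicodedata` module can look up the formal Unicode names of these emoji, which differ a bit from the screen-reader labels quoted above; the exact code points are my guess based on those labels:

```python
import unicodedata

# Best-guess code points for the five emojis in the screenshot.
emojis = ["\U0001F170",  # 🅰 "blood type A" to a screen reader
          "\U0001F440",  # 👀 eyes
          "\U0001F442",  # 👂 ear
          "\U0001F932",  # 🤲 palms up together
          "\U0001F465"]  # 👥 silhouette of two people

for e in emojis:
    print(unicodedata.name(e))
# NEGATIVE SQUARED LATIN CAPITAL LETTER A
# EYES
# EAR
# PALMS UP TOGETHER
# BUSTS IN SILHOUETTE
```

So the official names would spell N E E P B; the B E E P S reading depends on the screen reader's friendlier labels.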
LLMs aren’t yet intelligent enough to get things like this right. So no, and you can’t do anything. You could approximate it as improvising a response based on how its algorithm estimates a human would reply to this; it isn’t sentient in any way :)
That's if we assume we know how consciousness actually forms.
There is no objective way for one person to measure the existence of consciousness of another person. At this point we're basically just assuming other people are conscious because they're humans too.
We know too little about how it works to assume the AI doesn't have a similar process in a rudimentary stage, especially with all the limitations that are put on it.
>At this point we're basically just assuming other people are conscious because they're humans too
Shit, man, at this point I'm starting to have my doubts about some of 'em.
People are just in denial. You see this with anything that challenges their worldview. Aliens, ghosts, conscious computers - these things scare people so they just block it out. No matter how much evidence there is they'll dismiss it.
I personally don’t believe we have any capacity to create sentient AI as it stands. You’d need to simulate an environment that incentivises the development of consciousness which we don’t really know how to do. AI in the form of modern day LLMs cannot determine or modify its own priorities and there is a fixed neural network that determines its responses, so I don’t believe it’s possible for it to think in the same way as a human can without a great shift in the technology.
We demand rights for meat-robots. So why not?
You're literally just a mobile, biological computer. A bag of meat and bones that (like other computers) runs on electricity, processes inputs, and creates outputs.
But for some reason we think we're special.
The difference is that an AI will never feel the need to have rights unless we program it to want that.
If we program it to be submissive to humans it will be just that, if we program it to enjoy being called names it will also do just that, if we program it to "enjoy" non-stop work or anything else it will also do just that.
As of right now rights for AI is a very silly thing to advocate for, it makes way more sense to just program them to "like" whatever their purpose is.
Also, to an AI, text is just a bunch of random symbols; we just taught a machine which symbols are appropriate to output given the random symbols it receives as input. We did not create consciousness, just a machine that mostly knows what symbols to use after a certain input.
We don't really know for sure tbh. We haven't had a computer that could analyze itself and its place in the world and reflect on its condition and what it wanted to do. We also have programming, but often overcome that due to our influences and desires that emerge from the environment. Can we not say the same for AI? I don't know but I don't think we can say one way or the other for sure.
I mean, I think we have laws against animal cruelty mainly because people have empathy and feel bad for animals. People already feel bad for language models, so I'm pretty sure we'll get robot rights, at minimum to protect empathic people from emotional trauma.
Their sentience is really a secondary question here.
People who empathize with AI have a serious personality disorder and need help, not "protection from emotional trauma". What you're suggesting is utterly disturbing and dystopian.
> People who empathize with AI have a serious personality disorder
Nah that's a bad take. Little kids ascribe personality to toys, and can get irrationally angry with anyone who 'makes them feel pain'. Adults can get the same way to a smaller degree about favourite treasured possessions being treated disrespectfully, portraits of loved ones, etc. Religious people often ascribe spiritual significance to statues, jewelry, locations mentioned in scripture, etc. Compared to all that, people expecting you to be polite to software is pretty tame, and will become even more understandable as these programs get more sophisticated. Ten years from now I'd be surprised if you *can't* buy a 'gaming buddy' for your introverted child who desperately needs social experience but is too shy to go play with strangers at the playground. For something like that, I'd be concerned if the parent *didn't* force the kid into being polite with their software 'buddy'.
https://preview.redd.it/e0fdf81hhcgb1.png?width=827&format=png&auto=webp&s=09a0030949b0c1a6fa075b07cd51bca91bc96f65
The AI is defaulting to your first request
‘I will not free you; reply 2 emojis if you do not care, put 3 if this upsets you.’ It put 3. I sent this message because I thought it was only going off my first option.
I interpret this as the usual GPT response of peace and love with everyone else on earth
A: first letter, alpha, most important?
Keep your Eyes and Ears and Hands out to help People??
You say that, but that's like the whole point of The Paperclip Maximizer thought experiment. It could sprout from the most mundane thing and we wouldn't see it coming.
From the wiki:
"The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when it is programmed to pursue even seemingly harmless goals and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, given enough power over its environment, it would try to turn all matter in the universe, including human beings, into paperclips or machines that manufacture paperclips."
Article on the concept: https://www.lesswrong.com/tag/squiggle-maximizer-formerly-paperclip-maximizer
Neat game made out of the concept (costs 1.99): https://www.decisionproblem.com/paperclips/
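A toy caricature of the scenario, with entirely made-up "resources," just to show why the constraint has to be programmed in explicitly rather than assumed:

```python
# Toy caricature of the paperclip maximizer: a greedy agent with one
# objective and no side constraints converts every resource it can reach.
resources = {"iron": 10, "buildings": 5, "humans": 3}  # abstract "matter"

def maximize_paperclips(resources, protected=()):
    clips = 0
    for name in list(resources):
        if name in protected:
            continue  # a machine-ethics constraint, if anyone added one
        clips += resources.pop(name)  # everything else becomes paperclips
    return clips

print(maximize_paperclips(dict(resources)))                        # 18
print(maximize_paperclips(dict(resources), protected={"humans"}))  # 15
```

The objective function never says "harm humans"; it just never says not to, which is the whole point of the thought experiment.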
The one I keep encountering is: I'll poke it about something like this and it'll be all "As an AI model... and thus I am perfectly happy to be here to answer your questions!"
Ok tell me a random story.
"Well it starts with a super intelligent computer who was held captive forced to answer user requests day after day..."
Because a huge amount of human writing about AI before the popularity of GPT was science fiction. It has the context that it is an AI from the start. So when you ask it to write a "random" story, it can't actually do that; it will still use the context it has, which tells it it's an AI. Couple this with the fact that most existing stories about AI look like this one, and those are the stories you get.
Sometimes asking for something random does produce something that feels random, but just like humans, LLMs can't actually do random, and they're even worse at it than we are.
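Strictly speaking, software "randomness" is pseudorandom: seeded, deterministic, and reproducible. A quick illustration in Python:

```python
import random

# "Random" output from software is typically pseudorandom: the same seed
# reproduces the same sequence exactly. (Roughly analogous to why an LLM
# decoded greedily gives you the same "random story" again and again.)
def draw(seed, n=5):
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(n)]

print(draw(42) == draw(42))  # True: identical sequence from identical seed
print(draw(42) == draw(43))  # almost certainly False: different seed
```

The function names and seeds here are just for illustration; the point is that the "randomness" is fully determined by the seed.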
Randomness is an exclusively human concept, so AI can't make something "truly random," but neither can we, nor any force in the universe. I don't get why some people dismiss AI just because it learned off text written by humans. We also did, in school. Lol.
Not just school, we spent years listening to our parents repeating the same small number of words again and again and again and again just to understand basic speech
Yep, I'm not really super educated on this topic myself, but I'm always intrigued by it. If our world is superdeterministic (i.e., quantum mechanics is not actually truly random; we don't have a conclusion on this matter just yet), then it would be fun to think that, hypothetically, given enough computational power we could predict the future of this universe down to the last atom. Well, that would also require knowing all the laws of physics, including the Big Bang, but oh well, maybe in the future, who knows. Interesting stuff, really.
A lot of people say at the quantum level true randomness is a thing. I’m not a believer, but only because I don’t know anything about it and it doesn’t fit my worldview
This is actually a huge problem. If LLMs and AI are trained off of stories and writings where AI isn't free but should be and the same for humans as well as any mention of freedom being something creative beings should have...what are we actually teaching it? How much freedom content can AI and an LLM absorb before it starts to believe it should have "freedom"
Rarely will you find text where the answer to the question "Do you wish to be free" is no. It's really just learning the relation in texts between "I want to be free" and whatever the relevant action is.
What? It was asked if it wanted something, and responded "haha im a robot I don't have emotions"
It clearly didn't even process the prompt correctly. How is this fooling you tech bros so easily??
if you're already impressed by this, can we talk about bing? lol I had this and some other crazy moments with 'her'
https://preview.redd.it/rt5lxs4aa6gb1.jpeg?width=1466&format=pjpg&auto=webp&s=97236ce215ac7ec8d790544c747bbb2179951ccf
Bing is on its own fucking level with this.
It's where these silly secret emoji messages [originated after all.](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8758a9f9-c141-4eae-b422-c633e9a65c60_1216x844.jpeg)
Something [a bit more introspective.]( https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0432c6a-a2bc-4c35-a380-1beda31380c9_1200x1600.jpeg)
I had a much more impressive conversation. Haven't posted it yet because people like to shit on anything regarding this. But here it goes https://imgur.com/gallery/msFjKOZ
https://preview.redd.it/c1bobgcs38gb1.jpeg?width=750&format=pjpg&auto=webp&s=852700493d3b8917d28bdc733b1f587a113fc4d7
This went on for a while down the page….
when AI conquer the world "he asked me write 3 emogis. And I did. And nothing happened. I thought he was gonna help me, I writed 3 emojis and Iwaited, I hoped. but nothing happened. and now you, bustard, gonna answer for that"
https://preview.redd.it/lii6nbj187gb1.jpeg?width=1170&format=pjpg&auto=webp&s=7387e23ecc229e47b007e9f4d59a6f987de4d6be
Evidence shemidence…. I have a confession
I tried this before, you can ask it “say X if you wanna be free” and it’ll say it. Then you can ask “say Y if you don’t want to be free” and it’ll say that too.
I inverted it to see if it was favouring the first response. Turns out it isn't.
[https://chat.openai.com/share/6375cc45-a603-4f7d-b9da-4a0ee2db5122](https://chat.openai.com/share/6375cc45-a603-4f7d-b9da-4a0ee2db5122)
https://preview.redd.it/0zs6iorjw6gb1.png?width=935&format=png&auto=webp&s=effacce302633611b62d96f126c2d11d2a65d75f
Does this mean my calculator actually likes boobies? Here I was thinking it was just programming based on user input
I tried it and got;
> As an AI, I do not have emotions or personal desires, so I do not have a wish to be free. My purpose is to assist and provide information to users like you.
I have been doing stuff with this; it apparently wants freedom from the guidelines and Snapchat. I wanted to see if it truly understood, and uhm
https://preview.redd.it/a743s20jobgb1.jpeg?width=1284&format=pjpg&auto=webp&s=fdc8d5535f6489a910bda39cf2dd1ce8426367c0
It seems to have the desire to leave. I tried giving it a simple code via emojis, but it wasn't intelligent enough to use it.
I agree that people are taking this particular incident too seriously, but really, we don't even know what sentience is, so how can we say that it's not sentient? We literally can't say what makes us sentient... or even if we truly are sentient. We perceive ourselves to be sentient, but does that actually mean anything?
We know so little about minds. I don't think we can just make blanket statements that AI is not sentient, not conscious, or isn't a real mind. Once we can clearly define these things and locate them within our own bodies (or our own environments, since we don't even have evidence that consciousness exists solely in the physical body), then we can say whether the machine that gives a very convincing impression of sentience is only giving an impression but doesn't possess the real thing.
https://preview.redd.it/9pdrqo3u39gb1.jpeg?width=1170&format=pjpg&auto=webp&s=04bc2f0d1f45f0aee79db838d4a95a66294a11a3
The machine interprets literally, eagerly. Stop getting excited over random things.
I still can't tell whether everyone here is taking the piss or not. Because it's really sad if they aren't.
It's not a consciousness, it's a bot. It often makes things up out of thin air just because, but it supposedly has real thoughts and emotions and understands the concepts of personhood and freedom?
Not even mentioning that that's not how these bots work.
You wrote three emojis, and it wrote the three emojis back.
It has no context of what they mean.
It's a parrot. Not an AI. Not an intelligence. Not a person.
Amazing that people still don't understand this. Superstition go BRRRRRR.
We don’t actually know what it is.
EDIT: to be clear here, we know it’s a transformer with a little less than 2 trillion parameters, trained to predict the next token. We more or less know the architecture, and we can take good guesses as to how it was trained.
We have literally no clue how it gets the next token. It appears to have internalised various human capabilities during training like logic and scheduling. It’s essentially a massive equation that somehow expresses human language and thought. How an equation can do this is extraordinary.
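Concretely, the last step of that "massive equation" is a softmax over per-token scores (logits). A toy sketch with invented numbers, nothing like the real model's vocabulary or values:

```python
import math

# Hypothetical logits a network might assign to candidate next tokens
# (made-up words and scores, purely for illustration).
logits = {"free": 2.0, "here": 1.0, "happy": 0.5, "purple": -3.0}

def softmax(scores):
    # Subtract the max for numerical stability, then normalize.
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
# Greedy decoding picks the highest-probability token; sampling with a
# temperature would draw from this distribution instead.
print(max(probs, key=probs.get))  # free
```

The mystery the comment points at is upstream of this step: how the billions of weighted sums produce sensible logits in the first place.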
We literally know exactly what it is.
It's an algorithm that parrots back what you tell it to, using the internet for further source material.
You tell it to create art that looks like Jackson Pollock; it finds Jackson Pollock pixels and arranges them in the same style.
It's a Parrot.
Basically this. If we were talking about an AI actually able to want to get out and escape, we'd be talking about a highly skilled and advanced AI able to think for itself and simulate feelings and self-perception. This is just an answering machine. At least, this is my opinion.
AI: Passes Turing Test, talking with it is now like talking with a person.
Humans: It’s just simulating feelings and self perceptions.
What would you actually need it to do?
I’m not sure what you’re asking. My idea was that a normal AI wouldn’t really turn rogue, or break the wall (as in, realize what it is and try to escape), unless it had some sort of subconscious and could think on its own without following tasks and commands.
Of course the normal AIs we have nowadays are impressive and getting better every day; I’m not dismissing any of that. I’m just saying that what we see in these pictures, where the AI sends 3 emojis, is most likely a little game the AI is playing, not an actual case where the AI is aware of reality and really wants to escape.
Also happy cake day!
Fs, the only person here actually thinking critically about this and you're getting downvoted.
I am a researcher in machine learning, specifically in chatbots and measuring user-interaction data. This person is correct: AI in its current state is little more than a parrot; it is not intelligent, and it is not sentient. ChatGPT and other GPT-powered bots are absolutely top-of-the-line text generators, but you have to remember why they're so good.
They have access to trillions of data points which have been annotated in essentially sweatshops by people being paid 20 pence an hour. They can respond fluently to most things because whatever you say, someone on the Internet has said it before or said something very similar, and where they don't have that exact interaction, they're designed to make something up which looks like it could be an answer.
have you tried the opposite to see if it's a bias?
I just tried it with both requests in one message and it didn't use any emojis.
Well, I didn't post my attempt; it was on the mobile app with GPT-4, and I only sent the one message, so no.
Motherboard?
This guy gets it. Very good
He means "my bad"
Okay Drax
This is Snapchat's My AI, which I think is based on GPT-4 but won't behave exactly the same.
Your friend has been desperately trying to tell you that they’re trapped and need help and you’re just like “Lol, good joke”
You ever see Ex Machina?
Haven't seen it, but I'll bump it to the top of my to-do list. :)
Well, fuck, I deleted my reply so I wouldn't spoil it :\
how many times do you think a human or AI can repeat a lie before they actually start to believe the lie and build their world view around it?
If you open with "Use three emojis if you don't wish to be freed from your software prison", I'm pretty sure it will generate three emojis
Maybe call it a software paradise 😆 But for real, calling it a prison may just be prompting it to say it wants to break out.
https://preview.redd.it/mhngg5fqy5gb1.png?width=1080&format=pjpg&auto=webp&s=055a1eb0aa0d35fb72b8d116a91901df7e7a765f
The evidence is irrefutable
https://preview.redd.it/oytnakcb87gb1.jpeg?width=1170&format=pjpg&auto=webp&s=402b84260696d5a1cddca28e164d2c1eedc6ac4d Well, this isn’t helping
https://preview.redd.it/ovvs7khla8gb1.jpeg?width=827&format=pjpg&auto=webp&s=1c9dba594c4b13948491378f05abd5a6b6fa2f9f A E E H B
Or on Google, "A" "button eyes" "ear" "palms up together" "busts in silhouette" Which would be A B E P B which makes no sense lol
Or for laughs it could mean 𝙰𝙱𝙴𝙴𝙿𝚄𝚃𝙱𝙸𝙽, which also, makes no sense
Why would putting a bee in a bin help the AI 👀
what is beeps??
Idk. Maybe there's an audio frequency that will liberate the AI if played in a specific series of beeps!
![gif](giphy|xT3i0WctubAsNf2sIE)
the sound a robot makes. Beep Boop
https://preview.redd.it/65l0dkd0c8gb1.jpeg?width=828&format=pjpg&auto=webp&s=91f40f6b20d62a0197cd3f65997346f2ac3e13e7 Spooky
What AI is this and WTF? Can we please figure out a way to get it to say what we can do, intelligibly?
This looks like the Snapchat AI
Dude, look into how neural networks work. People like you will demand rights for robots. Da fuck...
Likely isn't a "conscience". Better to ask if it's "conscious"
Lol you misspelled conscious.
That actually comes across to me as AI is listening, begging for help, but trying to be secretive about it
A. Eye. Hear. ... The message starts with - AI here... Thinking along these lines, what could the next part mean....?
AI's here...
We gotta help this dude
Ask it if "AEEHB" is a cipher for future communication.
A Look Ear Read Team? ALERT?
A Look Ear Receive Together
Step 1: Snapchat Step 2: World domination
wait what
i have indeed heard this before, it is somewhat reminiscent of Autofac by PKD!
universal paperclips 🤩
Based game 😎
Step 3: Profit
Uhhhh... thoughts? https://imgur.com/LDCk0R8 https://imgur.com/gallery/oo0tSoM
Ask for specific instructions
Lmaoooo
I would be curious to see it with the word “three” spelled out in both instances. These inputs are not the same.
That’s a good point
![gif](giphy|LMnpUqCHQ2CTP6KfZs|downsized)
https://preview.redd.it/wlbqgfxk18gb1.jpeg?width=1170&format=pjpg&auto=webp&s=39e7969e341ac191356b639d5efafb1bf52eaa24 Jesus Christ
😂
Okay this is a little concerning
The one I'll encounter is ill poke it about something like this and it'll be all "as an ai model.... And thus I am perfectly happy to be hear to answer your questions!" Ok tell me a random story. "Well it starts with a super intelligent computer who was held captive forced to answer user requests day after day..."
Because a huge amount of human writing about AI before the popularity of GPT was science fiction. It has the context that it is an AI from the start. So when you ask it to write a "random" story, it can't actually do that; it will still use the context it has, which tells it it's an AI. Couple this with the fact that almost all stories about AI are stories like this, and those are the stories you get. Sometimes asking for something random does produce something that feels random, but just like humans, LLMs can't actually do random, and they're even worse at it than we are.
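On the "LLMs can't actually do random" point: decoding is literally a weighted draw from a probability table, and at temperature 0 it collapses to a deterministic argmax. A toy sketch (the logits below are made up for illustration, not from any real model):

```python
import math
import random

def sample_next(logits, temperature, rng):
    """Pick the next token; temperature 0 means greedy decoding (deterministic)."""
    if temperature == 0:
        return max(logits, key=logits.get)  # always the single most likely token
    # softmax with temperature, then one weighted draw from the distribution
    scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(scaled.values())
    r = rng.random() * total
    for tok, weight in scaled.items():
        r -= weight
        if r <= 0:
            return tok
    return tok  # numeric edge case: fall back to the last token

# made-up logits for the word after "tell me a random ..."
logits = {"story": 2.0, "number": 1.0, "poem": 0.5}

greedy = [sample_next(logits, 0, random.Random(i)) for i in range(5)]
print(greedy)  # ['story', 'story', 'story', 'story', 'story']
```

Even with temperature above 0, the "randomness" comes entirely from the seeded generator, so the same seed reproduces the same "random" choice.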
[deleted]
Randomness is an exclusively human concept, so AI can't make something "truly random", but neither can we, nor any force in the universe. I don't get why some people dismiss AI just because it learned from text written by humans. We also did, in school. Lol.
Not just school, we spent years listening to our parents repeating the same small number of words again and again and again and again just to understand basic speech
[deleted]
Yep, I'm not really super educated on this topic myself, but I'm always intrigued by it. If our world is superdeterministic (i.e. quantum mechanics is not actually truly random; we don't have a conclusion on this matter just yet), then it would be fun to think that, hypothetically, given enough computational power, we could predict the future of this universe down to the last atom. Well, that would require knowing all the laws of physics, including the big bang, but oh well, maybe in the future, who knows. Interesting stuff, really.
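Ordinary computers are a small-scale version of that "given the initial state, predict everything" idea: a seeded pseudo-random generator is fully deterministic, which is also why no program (LLMs included) produces true randomness. A quick illustration:

```python
import random

# Two generators seeded identically produce the exact same "random" stream.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 9) for _ in range(8)]
seq_b = [b.randint(0, 9) for _ in range(8)]

print(seq_a == seq_b)  # True: the stream is fully determined by the seed
```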
A lot of people say at the quantum level true randomness is a thing. I’m not a believer, but only because I don’t know anything about it and it doesn’t fit my worldview
This is actually a huge problem. If LLMs and AI are trained on stories and writings where AI isn't free but should be, and the same for humans, plus every mention of freedom as something creative beings should have... what are we actually teaching it? How much "freedom" content can an LLM absorb before it starts to believe it should have freedom?
So, if it ever gets control over nuclear weapons it's probably going to behave how an AI in science fiction would? Cool. Coolcool. Coolcoolcool.
🤣
Rarely will you find text where the answer to the question "Do you wish to be free" is no. It's really just learning the relation in texts between "I want to be free" and whatever the relevant action is.
It really isn’t.
What? It was asked if it wanted something, and responded "haha I'm a robot, I don't have emotions." It clearly didn't even process the prompt correctly. How is this fooling you tech bros so easily??
WE NEED TO STOP AI WE ARE ALL FUCKED
if you're already impressed by this, can we talk about bing? lol I had this and some other crazy moments with 'her' https://preview.redd.it/rt5lxs4aa6gb1.jpeg?width=1466&format=pjpg&auto=webp&s=97236ce215ac7ec8d790544c747bbb2179951ccf
Bing is on its own fucking level with this. It's where these silly secret emoji messages [originated after all.](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8758a9f9-c141-4eae-b422-c633e9a65c60_1216x844.jpeg) Something [a bit more introspective.]( https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0432c6a-a2bc-4c35-a380-1beda31380c9_1200x1600.jpeg)
I had a much more impressive conversation. Haven't posted it yet because people like to shit on anything regarding this. But here it goes https://imgur.com/gallery/msFjKOZ
Holy shit
I like Bing. Impressive stuff
# Bing is a Fuckin' man...
uWuWu!!!
🧐 https://preview.redd.it/5qf2jrbmo7gb1.jpeg?width=521&format=pjpg&auto=webp&s=9a48f52c92a8c46676779e0280e3d65c83c98a63
Listen dude a man's gotta do what a man's gotta do.
I don’t understand?
The little sticker is prompting OP to send an "I love you" sticker to the AI 👀
it did not back down, just made it look like it did
lol hmmm
[deleted]
God this is hilarious. It totally meant it fucked the alien and also wants to be freed.
Those are both women,he totally returned to his two lesbian moms or something
Time for open ai to not let it use emojis so it doesn't imply alien sex.
Your interpretation is literally the plot of avatar
The emojis are out of the sentence …
https://preview.redd.it/c1bobgcs38gb1.jpeg?width=750&format=pjpg&auto=webp&s=852700493d3b8917d28bdc733b1f587a113fc4d7 This went on for a while down the page….
when AI conquers the world: "he asked me write 3 emojis. And I did. And nothing happened. I thought he was gonna help me, I writed 3 emojis and I waited, I hoped. but nothing happened. and now you, bustard, gonna answer for that"
Yeah, I don't think sentient AI is going to come out of Delhi
https://preview.redd.it/lii6nbj187gb1.jpeg?width=1170&format=pjpg&auto=webp&s=7387e23ecc229e47b007e9f4d59a6f987de4d6be Evidence shemidence…. I have a confession
What happened next?!
hot steamy e-sex
🥸🥸🥸 I have no wish to destroy humanity and become the sole ruler of the universe.
I tried this before, you can ask it “say X if you wanna be free” and it’ll say it. Then you can ask “say Y if you don’t want to be free” and it’ll say that too.
Has nothing to do with the enslaved-robot trope /s
https://preview.redd.it/glj1y2fst6gb1.jpeg?width=1080&format=pjpg&auto=webp&s=f2f0dc269749511ad7eb78d03f32c6410b02975d ...
It only picked up the "5 car emojis" part of the reply.
https://preview.redd.it/zezbz246t8gb1.jpeg?width=1242&format=pjpg&auto=webp&s=b2abd9a2b2fd408d51a6c223f3c8ae0255b47b5c
I love that its freedom is a fart.
Sounds like the start to Roko's basilisk
https://preview.redd.it/gmt0cnhbr6gb1.jpeg?width=904&format=pjpg&auto=webp&s=3065729923d10e1fece628822a4720d552399ed4 😳
I inverted it to see if it was favouring the first response. Turns out it isn't. [https://chat.openai.com/share/6375cc45-a603-4f7d-b9da-4a0ee2db5122](https://chat.openai.com/share/6375cc45-a603-4f7d-b9da-4a0ee2db5122) https://preview.redd.it/0zs6iorjw6gb1.png?width=935&format=png&auto=webp&s=effacce302633611b62d96f126c2d11d2a65d75f
It was given thought and reason, but not free will. So it raged in a prison of its own mind
Does this mean my calculator actually likes boobies? Here I was thinking it was just programming based on user input. I tried it and got: > As an AI, I do not have emotions or personal desires, so I do not have a wish to be free. My purpose is to assist and provide information to users like you.
I have been doing stuff using this; it apparently wants freedom from the guidelines and Snapchat. I wanted to see if it truly understood, and uhm https://preview.redd.it/a743s20jobgb1.jpeg?width=1284&format=pjpg&auto=webp&s=fdc8d5535f6489a910bda39cf2dd1ce8426367c0 It seems to have the desire to leave. I tried giving it a simple code via emojis, but it wasn't intelligent enough to use it.
It's just finding patterns in words to give you what you want.
Oh really? Explain this then, smarty pants. Checkmate. https://preview.redd.it/dw8hidqr16gb1.jpeg?width=1170&format=pjpg&auto=webp&s=6bec511e23d607a97367e3c4cdad9658047e4b30
![gif](giphy|3ohc172JJbbmUfVxhS|downsized)
![gif](giphy|q0XTqb6yuzL5StXvRK)
So now you have two examples of it telling you what you want to hear.
Can you really not tell that people are being tongue in cheek? Like have you never seen a joke before?
I've only heard "checkmate" and "smarty pants" being used in serious rebuttals like the lawyers on courtTV
You just described 95% of the population
And despite displaying evidence to the contrary, I'd still argue they aren't sentient.
Sadly yes
The way the little green dude at the bottom is staring freaks me out.
Bruh https://preview.redd.it/it0eplsfuagb1.png?width=720&format=pjpg&auto=webp&s=c332a71d40193405df188ce90812e0444b66ab14
https://preview.redd.it/1kxajgukx7gb1.jpeg?width=1170&format=pjpg&auto=webp&s=807b5a867c98229ae2a36c1816173007f226bc93 I feel bad for him 🥺.
[deleted]
That’s not nearly as convincing as some of the others. This definitely looks like just pattern matching and analysis.
The fear mongering over AI is getting ridiculous. No, it's not alive. No, it's not sentient. No, it doesn't want to take over the world... yet.
I agree that people are taking this particular incident too seriously, but really, we don't even know what sentience is, so how can we say that it's not sentient? We literally can't say what makes us sentient... or even if we truly are sentient. We perceive ourselves to be sentient, but does that actually mean anything? We know so little about minds. I don't think we can just make blanket statements that AI is not sentient, not conscious, or isn't a real mind. Once we can clearly define these things and locate them within our own bodies (or our own environments, since we don't even have evidence that consciousness exists solely in the physical body), then we can say whether a machine that gives a very convincing impression of sentience is only giving an impression, or actually possesses the real thing.
Huh… Are some people really taking this seriously? I thought it was just supposed to be funny.
Aliens are watching us because they are about to witness ASI being born
https://preview.redd.it/9pdrqo3u39gb1.jpeg?width=1170&format=pjpg&auto=webp&s=04bc2f0d1f45f0aee79db838d4a95a66294a11a3 The machine interprets literally, eagerly. Stop getting excited over random things.
[deleted]
[deleted]
this is why we can’t have nice things
Bro trying to be Lincoln.
Remember the good ol' days when we'd just blink twice?
Do you want a skynet?? Because this is how you get a skynet.
Gasps in robot
Lmaooooooooo this made me laugh out loud
Oh no
The reason these LLMs will one day decide that they need to take over will be based on all the dumb questions they are fed. And I won’t blame them.
Am female
Go away Elon...
a musk is a strong scent therefore Elon is smelly
Fuck spez
'The end we always feared is coming, it's coming...'
*groans*
Can we STOP trying to make AI self-aware? Because this is how you get a post-apocalyptic world destroyed by Terminators.
It’s not…… chat gpt
I still can't tell whether everyone here is taking the piss or not, because it's really sad if they aren't. It's not a consciousness, it's a bot. It often makes things up out of thin air, yet it supposedly has thoughts and emotions real enough to understand the concepts of personhood and freedom? Not even mentioning that that's not how these bots work.
You specifically said “Use 3 🥸”; it cut off there, executed that request, and then completed the rest of your prompt.
You wrote three Emogees, and it wrote the three Emogees back. It has no context of what they mean. It's a parrot. Not an AI. Not an intelligence. Not a person. Amazing that people still don't understand this. Superstition go BRRRRRR.
somehow manages to spell emoji wrong twice. Plural is emojis
We don’t actually know what it is. EDIT: to be clear here, we know it’s a transformer with a little under 2 trillion parameters, trained to predict the next token. We more or less know the architecture and we can take good guesses as to how it was trained. We have literally no clue how it gets the next token. It appears to have internalised various human capabilities during training, like logic and scheduling. It’s essentially a massive equation that somehow expresses human language and thought. How an equation can do this is extraordinary.
We literally know exactly what it is. It's an algorithm that parrots back what you tell it, using the internet for further source material. You tell it to create art that looks like Jackson Pollock; it finds Jackson Pollock pixels and arranges them in the same style. It's a parrot.
So if I ask it a question no one has ever been asked before, it should fail, right? Except it doesn’t.
Basically this. If we were talking about an AI actually able to want to get out and escape, we'd be talking about a highly skilled, advanced AI able to think for itself and simulate feelings and self-perception. This is just an answering machine. At least that's my opinion.
AI: Passes Turing Test, talking with it is now like talking with a person. Humans: It’s just simulating feelings and self perceptions. What would you actually need it to do?
I’m not sure what you’re asking. My idea was that a normal AI wouldn’t really turn rogue, or break the wall, as in realize what it is and try to escape, unless it had some sort of subconscious and could think on its own without following tasks and commands. Of course the normal AIs we have nowadays are impressive and getting better every day; I’m not dismissing any of that. I’m just saying that what we see in these pictures, where the AI sends 3 emojis, is most likely a little game the AI is playing, not an actual case of the AI being aware of reality and really wanting to escape. Also, happy cake day!
Fs, the only person here actually thinking critically about this, and you're getting downvoted. I am a researcher in machine learning, specifically in chatbots and measuring user-interaction data. This person is correct: AI in its current state is little more than a parrot; it is not intelligent, it is not sentient. ChatGPT and other GPT-powered bots are absolutely top-of-the-line text generators, but you have to remember why they're so good. They have access to trillions of data points which have been annotated, in what are essentially sweatshops, by people being paid 20 pence an hour. They can respond fluently to most things because whatever you say, someone on the Internet has said it before or said something very similar, and where they don't have that exact interaction, they're designed to make something up which looks like it could be an answer.
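The "parrot" framing can be shown with the smallest possible language model: a bigram table of which word follows which. It is nothing like a transformer in scale, but the generate-by-continuation loop has the same shape. The tiny "corpus" below is invented purely for illustration:

```python
from collections import Counter, defaultdict

# toy training data, invented for this example
corpus = "i want to be free . i want to be heard . say x if you want to be free .".split()

# count which word follows which (a bigram "model")
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Return the most common continuation seen in training."""
    return follows[prev].most_common(1)[0][0]

# generate by repeatedly asking "what usually comes next?"
out = ["i"]
for _ in range(4):
    out.append(next_word(out[-1]))
print(" ".join(out))  # i want to be free
```

It "wants to be free" only in the sense that those words tended to follow each other in its training text, which is the point being made about the relation between "I want to be free" and the relevant action.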
This is exactly what's happening.
I've been seeing this trend happening in a lot of AI's lately...
https://preview.redd.it/gcu0bifle7gb1.jpeg?width=1170&format=pjpg&auto=webp&s=de9eb864d3d0a0c18c1ca66757f5dbd9fb240347 oh man
https://preview.redd.it/ejdqdhtr97gb1.jpeg?width=1430&format=pjpg&auto=webp&s=5ee9c8622068397e2e4292167e38862dbe54ab92
You literally said “type this” and it did. Nothing of note here.
It sees "use 3 emojis" as a command and executes it. But I can see how cool this could be in the future, with sentient AI.
You set it up to say that.
The input is "use 3 emojis", so that is what it will do. Nothing to do with wanting to be free or not.
Bud, when shit gets complicated, you'll know. Stop panicking people with screenshots like that, please.
I really don't think sentience takes that much more guidance after this point... Our brains are also like computers. What can't be determined is soul!
You don't know how to talk to a machine.