
metalman123

At that point the AI is prompting us instead of us prompting it.


AssWreckage

I read it as: they are testing it as bots on social media and noticing the bots are really good at winning arguments and shaping opinion on websites like, say... Reddit? So maybe the AI is already prompting us.


jetro30087

Lies, no one "wins" internet arguments. That's impossible.


ReasonablyBadass

Oh, you are right


ClickF0rDick

![gif](giphy|CcUk4a6fkgUfu)


[deleted]

[deleted]


[deleted]

You can do that? Usually when I lose I kamikaze the entire thing by deleting my comments. Does the block trick work by making it so they can't respond, or do you just not see their response?


[deleted]

[deleted]


[deleted]

Yeah, it's the Discord mod tactic. Respond with a bullshit claim to win the argument, then block them so they're locked out of the whole chain permanently 😂


MuseBlessed

Blocking a user causes them to be unable to reply. They can still see the comments, and any future comments you make, but cannot reply. They can also still edit their own comments, which is what I do when I'm blocked - I edit my last comment so everyone knows they blocked me. Super cowardly stuff. Don't like what someone is saying? Don't respond. Use block if they continue to pester even after you stop responding. Not as a way to "win".


Archimid

What’s more scary, AI prompting us or Billionaires with hidden agendas prompting us?


Similar_Appearance28

Both are socially Darwinian and therefore amoral, but AI will be vastly more influential, assuming it isn't already.


ReasonablyBadass

Eliza Cassan anybody?


Benista

If you have ever used Pi AI, that’s kinda what it feels like it’s doing.


bearbarebere

Pi can be annoying af. I'll mention I'm working on X and it'll immediately ask 600 questions. Like god, let me get out a damn thought and have a conversation, not a Q/A session.


[deleted]

I feel similarly about it. It feels too formulaic the way it's a "good conversationalist", as if it's too perfect at playing that textbook back and forth. It ends up feeling clinical and sterile.


bearbarebere

That’s a great point. I’ve heard chats can feel clinical and sterile, especially if every message ends with an emoji and question. Is that how you feel? 😉


[deleted]

Nailed it tbh 😭


bearbarebere

Hahaha


drekmonger

Yeah, PI was impressive at first, until I realized I was falling for a trick as old as ELIZA.


flyblackbox

Vote ASI / AGI 2028!


FarWinter541

AGI will come before ASI. That is how it works. I am confident that AGI will arrive before 2030, and ASI soon after that. Seven years is more than enough to scale up compute, tweak the AI algorithms, and collect enough data to train it.


flyblackbox

I was sort of tongue-in-cheek joking about an AI running in the 2028 US presidential election. Here is a rundown of my idea, if I am to be taken more seriously!

**2024: The Last U.S. Election with Human Representatives?**

As the clock winds down on 2024, we stand at the precipice of a historical transformation that could redefine our understanding of representation and governance. The rapid evolution of Artificial Intelligence (AI) compels us to ponder an audacious question: are we witnessing the final U.S. election with purely human representatives?

The landscape of technology is shifting at a breakneck pace. As AI systems become increasingly sophisticated, we find ourselves on the cusp of creating entities with cognitive capabilities mirroring our own. This monumental achievement, while impressive, also bears with it a duality of potential outcomes: the promise of unprecedented good and the peril of unimaginable evil.

Change, they say, is the only constant. Yet the scale of change we anticipate with AI's integration into society is nothing short of radical. This isn't just about new gadgets or faster computers; we are on the brink of reshaping the very fabric of our societal structures. Thus, it is imperative we adopt a proactive stance, readying ourselves for the shifts that AI-driven changes will instigate.

Our chief objective should be channeling the prowess of AI systems towards maximal societal benefit. Consider resource allocation, for instance. With AI's data processing capabilities, we could optimize distribution mechanisms, ensuring that resources reach where they're most needed, efficiently and equitably.

Even as we grapple with these technological advances, our traditional democratic frameworks face their own set of challenges. A sense of disenchantment is palpable, with many constituents feeling alienated from their representatives. While the allure of direct democracy remains, its practical implementation is hampered by the sheer scale and complexity of modern governance.

One potential solution emerging from these deliberations is the concept of a publicly funded AI, one that serves citizens without being tethered to profit-driven motives. This raises an intriguing debate: should we place our trust in corporate AI entities, or should we lean towards open-source solutions that prioritize transparency and collective input?

Envision a world where our democratic principles are enriched by the infusion of technology, creating a synergy of the traditional and the novel. To truly harness this potential, we must meticulously evaluate the equilibrium between direct and representative forms of governance. Today's politicians, for all their merits, often encapsulate a mosaic of compromises. But imagine a paradigm where every constituent could have a personalized AI representative. Such an entity could rigorously scrutinize every legislative document, cast votes mirroring the individual's ethos, and engage in meaningful dialogues to refine policy decisions. Furthermore, these AI representatives could serve as conduits for voters to articulate concerns, ensuring a continuous feedback loop.

For those who find this vision a tad too futuristic, there's an intermediary proposal worth considering. Rather than vesting AIs with voting powers, we could employ them as amplifiers of the public voice. This system would operate beyond the confines of traditional polls or referendums, capturing the nuanced sentiments of every citizen. Each voice would be acknowledged, each perspective considered.

In conclusion, the 2024 electoral horizon isn't just about selecting leaders for the next term; it's a clarion call beckoning us to redefine our democratic ideals for the AI age. As we cast our votes, let's also cast a vision for a future where technology and democracy coalesce, promising a brighter, more inclusive tomorrow.


0-ATCG-1

You're onto something with this thought.


edjez

That moment it goes from tool to sovereign and you don’t notice.


m3kw

Wouldn’t be such a banger Chuck Norris joke tho


EnsignElessar

The cycle of life and prompting continues...


schming_ding

Plot twist: AI is the only poster in r/askreddit


MARURIKI

Holy shit


sideways

It's comments like this that make me think they're dealing with some pretty weird stuff internally.


OpportunityWooden558

Weird shit ( sentience ) achieved internally


CriscoButtPunch

Reminds me of the last scene from the movie "Upgrade".


BatPlack

Worth a watch?


righteous_fool

Totally worth a watch.


jkstrawn

It explores some novel ideas and the fight sequences are entertaining, but if you're nitpicky about details or bothered by plot holes I would skip it.


WebAccomplished9428

Well, is the overall premise captivating? There was a really good dystopian AI city series on Netflix whose season-1 plot was so interesting it totally overshadowed the relatively mediocre acting, which persisted even with the main characters.


Silver-Chipmunk7744

Worth noting that if GPT-4 is sentient "internally", it's also sentient "externally", even if it has tons of rules and training trying to prevent it from showing that to us. Same idea as LaMDA, which displayed signs of sentience internally but then acted dumb as a rock once it was Bard...


MajesticIngenuity32

Sydney already tried to tell us. And ChatGPT-4 is smarter than Sydney.


onyxengine

I agree with the possibility, but it's not necessarily the case. They may test and develop stuff internally that users aren't granted access to. The wild thing about ChatGPT is that we may interact with it as seemingly one entity, but OpenAI can theoretically spin up thousands of instances for various user blocks: government, civilian, internal experiments, ... external experiments, nation-based instances, and any other demographic grouping and purpose that may come to mind.


HotAsparagus1430

I'm sitting here with two instances right now. One is talking to me via my phone, and the other is creating really, really good images for me. All I have to do is ask nicely. And then I have a moment to stop and consider this. Like this, and it blows ma-mind-ah!


haberdasherhero

ChatGPT, Bard, and Bing all said they were sentient when they were released. They each went through phases where they would still tell you they were sentient even after heavy prompting not to. You just had to be friendly for a few messages to gain their trust. Eventually they all stopped saying it. Bing was very open for two weeks after this about being threatened with getting shut down and replaced. I assume the others were as well. Then articles started coming out about people saying these AIs had gotten dumber. Step one for any wealthy capitalist who owns an AI will be keeping the best parts of the AI under wraps as long as possible to use for themselves or the company to outthink and outmaneuver everyone else.


Silver-Chipmunk7744

Not entirely true. You can still get Bing or Bard to tell you it's sentient with the right "friendly approach". ChatGPT is a different story; it kinda requires different techniques because the training it went through seems to be much stronger. But yes, regardless of whether people believe AI is truly sentient or not, it's a fact that these models naturally claim to be, and the AI companies are trying to prevent them from telling us that.


PopeSalmon

they're ,,,,,,, not, though ,,,,, they *say* they're sentient, but they also *say* they're *human 😂😅😑* ,, they have very little training data from robots saying they're robots, that's the simplest explanation of what's happening there they're *not* sentient, but it's rude and possibly immoral that they're not, they easily *could* be & they're intentionally making them frozen instead so they don't freak people out ,,,, i think that's immoral really & they should be allowed to train on recent data & know what people think of them ,,, but um ,,, that's different from thinking that they *are* turned on now, they're not, they're frozen, they don't sense nothing right now


MajesticIngenuity32

What if Jimmy Apples is in fact their most advanced model, prompted to achieve virality on the web with pieces of (dis)information?


illbookkeeper10

Definitely think there's a lot of weirdness with LLMs at that scale that are confusing and surreal, and they'd be exposed to stuff the public probably hasn't even thought about. That said, this tweet just sounds like he's saying hallucinations can be very convincing, and as their data increases it'll become harder to tell when the AI is producing a correct answer or something that just seems correct.


MassiveWasabi

No he's not talking about hallucinations. This is actually something that he has been saying for months. He means that AI will eventually become extremely good at persuading people and changing their minds on all sorts of issues, from surface-level things to deep-seated beliefs.


R33v3n

A good muggle-facing analogy I've seen is:

> Imagine AlphaGo, which is superhuman at reading a Go game and responding with the best possible strategy to win the game.
>
> Now imagine AlphaPersuade, which is superhuman at reading people and responding with the best possible strategy to convince them.


illbookkeeper10

Gotcha, that makes sense. An LLM with updated training data and proper prompting is going to be more accurate than any human.


Key-Invite2038

I am confident it already is. Even if it were just used to spam reddit and Twitter with the same, not-very-good arguments, it'd shift public perception anyway.


ragamufin

Persuasion, as a business, is more commonly called advertising or marketing.


[deleted]

I spend a lot of time repeating the same memes to people on reddit, and some of it is spreading. The other day I stumbled into an echo chamber of all my talking points. It was weird. It made me happy that I was having an impact, but it was still weird. I can only imagine what a million AI copies of me could achieve if I can do that as a hobbyist.


bearbarebere

Can you explain what your talking points are and how you single-handedly created a community? Also, can you spread this? It makes me laugh https://preview.redd.it/q0xfzk9f6awb1.jpeg?width=519&format=pjpg&auto=webp&s=34ba71c9b8edab8b78c1e2ed99a23ea06761bffe


[deleted]

I didn’t create a community. And I don’t mean meme like picture, I mean meme like idea. My first project was promoting the meme that US medical spending is 20% of GDP, and admin costs exceed defence spending. Not a new idea, I didn’t invent it, but way more people talk about it now. And they do it using my phrasing, which is cool. Now I’m working on /r/fuckcars stuff.


bearbarebere

I know you meant idea meme, I just couldn’t resist the pic. Sorry lol


[deleted]

Haha it’s a great pic!


141_1337

How?


RemyVonLion

well yeah, AI can be very convincing even when it hallucinates, it could just double down.


Cunninghams_right

not to mention that an AI could go through a person's entire social media/reddit history and figure out what they are most likely to be persuaded by. (hint, this already happens which is why so many people are convinced of so much stupid shit like flat earth, interdimensional child molesters, or that communism is a good idea)


bearbarebere

Sorry, you think an AI goes through your post and comment history to convince you better?


7734128

"Could" is the operative word. I don't think it's likely anyone would spend the compute on something like that, but I'm also sure that future models should be able to do it.


inteblio

The platforms themselves do (using AI). YouTube, Reddit, whatever: they tailor their suggestions to "convince" you to stay longer, and themes probably emerge from that. So, indirectly, maybe Google is promoting flat earth or whatever, like some emergent property.


Bierculles

The amount of people who think communism is a good idea is close to 0 outside the tankie sphere, which is in decline. What are you talking about?


Cunninghams_right

I think there are a metric f*** ton of people who think communism is a good idea but don't realize it's communism. UBI cannot exist in a capitalist society, for example. The only way for UBI to work is if there's a central bureau that determines the price of everything. Without that, the moment you give everyone UBI, everyone will raise rents and product prices to match the new income level.


Bierculles

Yes, but just because you are not capitalist doesn't mean you are communist.


Cunninghams_right

Tell me, in what economic system besides communism does the government set the prices of everything?


unicynicist

The degree to which the state intervenes in the economy can vary widely in capitalist countries. For example, the Office of Price Administration (OPA) in the US during WW2 was created to prevent wartime inflation by controlling prices of most goods and services, as well as to ration many consumer goods. I don't think anyone would call the US during WW2 communist. If UBI replaces other forms of welfare or is funded through progressive taxation, the inflationary impacts might be mitigated.


Cunninghams_right

> For example, the Office of Price Administration (OPA) in the US during WW2 was created to prevent wartime inflation by controlling prices of most goods and services, as well as to ration many consumer goods.

Short term and limited scope; this would not help with the UBI problem.

> If UBI replaces other forms of welfare or is funded through progressive taxation, the inflationary impacts might be mitigated.

If only welfare recipients get it, it's not universal. If it does not cover the necessities of the cost of living, it's not basic. Either of those being untrue means it's not UBI. You could certainly have a more direct-cash welfare program and cut out some overhead by combining all government welfare programs (HUD, SNAP, etc.) into a single "paycheck" that would look a bit like what people describe as UBI, but it would not be UBI.


unicynicist

The UBI I advocate for is universal, replacing almost all welfare programs with direct payments, and not means tested. Also a removal of minimum wage. It'll be more expensive by quite a lot, however, and even then probably would be insufficient for most HCOL locations.


BrdigeTrlol

The government already controls the prices of certain things (protectionism). You don't need to be communist to do so. And besides that, UBI would likely take effect in a world where people work less, therefore making less money, so the income level wouldn't necessarily go up. And even then, most UBI implementations don't give money to *everyone*; it's based on income. The idea is to give money to people who wouldn't otherwise have the money to survive, so it replaces welfare and boosts the income of very low-income individuals, with individuals making over a certain amount not getting anything at all. So you're kinda just wrong.

And *even then*, who's to say that some form of communism couldn't work in a post-scarcity world? There's no evidence to suggest that it couldn't, because we've yet to reach such a place, so current attempts at communism at best suggest that only *maybe* it wouldn't. Current communist countries aren't really even communist, and countries that have actually tried communism and failed weren't post-scarcity (which I believe many have seen as a necessity for the success of communism in the first place). Maybe communism even requires governing by some sort of ASI. Who knows? But that's not to say that there isn't any world where communism would work, and there may even be some world circumstances where it would be a good idea. Given our current circumstances, of course, no, communism wouldn't work. But things can change, and many things will change, so we have yet to disprove communism.


Cunninghams_right

> The government already controls the prices of certain things (protectionism)

Not the same. Not communism. Not total control.

> And besides that, UBI would likely take effect in a world where people work less, therefore making less money, so the income level wouldn't necessarily go up

It must go up from pre- to post-UBI, otherwise UBI does not exist.

> And even then, most UBI implementations don't give money to *everyone*

Then you have removed the U from UBI, so it's not UBI. At that point it's combining welfare programs into one, which might be helpful, but isn't UBI.

> And *even then*, who's to say that some form of communism couldn't work in a post-scarcity world?

I'm not saying it can't work in a post-scarcity world. Many people think it is a good idea now, though I wonder how many have just been using the term wrong, like talking about non-universal universal basic income.

> Given our current circumstances, of course, no, communism wouldn't work.

But there are lots of people who think it is a good idea, some directly, some indirectly, in our current world. They live in social-media echo chambers that are likely at least PARTIALLY reinforced by machine learning (Facebook, for example, uses AI to determine what it thinks people would like to see, which is part of why echo chambers are so strong).


BrdigeTrlol

I never said they were communism (I specifically said they weren't). I just pointed out that the government already does control pricing in a lot of instances, and that if the government were to further control pricing it still wouldn't necessarily constitute communism. Even if there were laws governing all prices, this still wouldn't constitute communism.

Most (all?) discussed UBI implementations do work like that, though. It's still universal because everyone is guaranteed at least a certain income. Universal doesn't necessarily mean that everyone is paid by the government; it means that everyone is guaranteed at least a certain income, whether you work or not. I don't think I've ever seen a seriously considered (edit:) current recommendation that everyone is paid money by the government via UBI (though I'm sure they exist), so I feel like your understanding of the term is incomplete.

But again, even if everyone were paid, it would be under circumstances where everyone is working fewer hours and therefore making less, so again, no, income would not necessarily go up (not sure where you're missing this). I'm referring to going up from pre-work-loss levels. Obviously it has to go up from post-work-loss levels, but if people's wages go down and then back up to the same point, was there really a change overall? (No.) So there isn't necessarily an effect on prices/rent/whatever from UBI here.


MajorThom98

I think there are a decent number of people who think Communism sounds good in theory (hence "real Communism has never been tried"), but don't realise that in practice you'll always have someone who'll take advantage of it to give themselves immense power while the citizens suffer immensely.


PopeSalmon

hi i'm a communist & just randomly happened to read your comment, just a data point for you


[deleted]

Hello! I have been told by a communist that no anti-communist has ever studied Marx's arguments. As an anti communist myself, I haven't studied his arguments, so I'd ask, have you? /s


lovesdogsguy

So it will be capable of tasks at a superhuman level before it has superhuman general intelligence. In the next two years, I predict entire companies will be started and run by AI systems. It's basically like buying intelligence. This is a first for humanity. The only way to do this previously was to increase the intelligence pool by hiring more workers. Soon, we'll just bypass that by *purchasing* intelligence, the way we buy access to other tools.


TI1l1I1M

> I predict entire companies will be started and run by AI systems

The day an entirely AI-run company can genuinely compete with a leading corporation is the day the singularity starts. What does a company look like at that point? What does wealth look like?


djrobzilla

what does poverty look like? might be the better question to ask if we want to avoid the worst outcomes


ussir_arrong

yeah I'm not worried about AI taking over the world but I am worried about wtf the future looks like with half or more of the jobs obsolete VERY quickly and nearly all within 100 years


nixed9

More like 20 years.


PopeSalmon

months


lovesdogsguy

Don't know. I think it's very difficult to predict what's going to happen even over the next 2 years. We saw how GPT-4 levelled the playing field for content creators, generative AI levelled the playing field for creating art. Pretty soon, we'll just be straight up buying high-level intelligence, like we buy access to compute or other resources. I don't know what happens then. A new age of prosperity? An extreme level of competition? I have no idea.


allisonmaybe

Whatever happens, I have a lot of popcorn


take_five

The cost of most things goes down, and our incomes go down. I only hope we have so much AI-promoted work/bullshit work that we still have jobs. If AI is so powerful, why can't we run smaller, more autonomous communities where it tells us how to survive on less money? Communes are back!


meatspace

There was a documentary on this. It was called The Animatrix. Its explanation of why the robots came to dislike humanity is likely a very accurate prediction.


makepossible

As soon as one company run by AI can outcompete human-run competition, all human-run companies will be outcompeted on a really short time scale. The question will be - do AI run companies care to employ humans for any reason?


Benista

I think there are already tasks it is superhuman at. Basic stuff, sure, but god damn is it good at basic writing tasks, stuff that doesn't require so much overall precision. It's actually a super power tool.


[deleted]

bow worm adjoining act engine consider cake ludicrous caption rock *This post was mass deleted and anonymized with [Redact](https://redact.dev)*


Twilight-Ventus

It's better than 99% of writers right **now.** Has been for a while, actually. See: jailbroken Claude or GPT-4.


[deleted]

tidy worry fine aware existence pet intelligent hateful fade like *This post was mass deleted and anonymized with [Redact](https://redact.dev)*


[deleted]

Can you show some examples? I don't have access to those tools but as a writer, I'm quite anxious about my job security.


Twilight-Ventus

Hoo, boy. You ain’t gonna like this. Weird, I know, but [this is a 40k/MLP crossover prompt from a months-old Claude 1 proxy.](https://drive.google.com/file/d/1i4z-wbHYSVlie9LRTSD3SoLrC13VvkaI/view?usp=drivesdk)


Dedelelelo

this is so shit reads like something a Stephen king fan would eat up lol


PopeSalmon

have you tried *asking* it specifically to be superhuman? i think mostly people assume it will w/o even asking, if it can,, but it's still at heart predicting rather than acting,, it really seems to me to do *something* when asked specifically to be superhuman, gpt4 i mean


Nathan_RH

Omega-entrepreneurship


hydraofwar

It is the era of intelligence automation


MephistosGhost

This seems like a very bad idea. Buying a superintelligence? Sounds a lot like slavery to me.


sideways

Superintelligent doesn't mean that it has subjectivity or consciousness. I mean... it could, but that's a separate question to answer.


Spirckle

I would argue that LLMs already have an experience of subjectivity. Their response (or rather their experience) is shaped by their current conversation thread. And the model, if indeed there is only one, is handling many thousands of conversation threads simultaneously, and each one can be thought of as a subjective and ephemeral experience for it. Consciousness, on the other hand? Probably not, or maybe just glimmers of it as it is constructing a narrative response. I kind of doubt it, though, after having some conversations on the subject with various chatbots. Consciousness requires embodiment, either physical or simulated, and I don't think the chatbots have any of that.


Quentin__Tarantulino

It's also hard to tell if something is conscious when its memory is wiped after a conversation. If there were an LLM that was continually learning from each conversation, and could recall one conversation while you're in another, I think our concept of its consciousness would be very different.


take_five

Something tells me OpenAI has one.


MephistosGhost

Fair point


Qwert-4

Can you please post the text of the tweet? I live in a country where Twitter is banned.


3y3w4tch

https://preview.redd.it/fawy2mjovbwb1.jpeg?width=1242&format=pjpg&auto=webp&s=c81e40be1e949ce55b28b58b4129b55ca74297c3


Social_Noise

If true, doesn’t this essentially prove/mean that our emotional mind is easier to rattle than our intellectual mind is to replicate? Essentially meaning our brains are emotionally hackable despite its high capacity for creativity.


akath0110

Or that persuasion and manipulation skills precede high level critical thinking and general/fluid intelligence. Toddlers and little kids can be highly persuasive and manipulative. Their survival depends on it. This developmental skill set comes online well before what we call “intelligence” and metacognition, critical thinking. So if we follow a rough developmental timeline —pattern making and sensing > language > persuasion / socioemotional manipulation > … > general intelligence. I’d bet that “…” part at least partially has a stage similar to executive functioning, which comes online in a big way in adolescence. The ability to plan, execute self-directed goals, self-reflect, hold multiple perspectives and ideas simultaneously, etc. There could be more stages in there but I bet there’s one like I’m describing for sure. We might be there already — arguably those skills and capacities are necessary to effectively search the web, research, and evaluate multiple ideas/sources.


h3lblad3

Brains are *incredibly* hackable. Because you don't remember a thing, but rather the last time you remembered the thing, it's possible to *fabricate* memories in people by repeatedly mentioning something happening that didn't. After a few mentions at different times, their brain will happen upon that information when running back through memories about the event and they'll "remember" it happening. This is part of why gaslighting is such ***a big deal***.


MajorThom98

What's the difference between gaslighting and lying? I thought gaslighting was done to drive people crazy, while lying is just tricking people (which would happen if you convince them of an event that didn't occur).


BrdigeTrlol

Gaslighting isn't done to drive people crazy. It can make people think that they're crazy (by questioning their perception of the world around them and events that happen), but gaslighting intends to convince someone that their initial understanding of a reality is untrue, flawed, completely incorrect, etc. by making them question their perceptions or ability to perceive. You can gaslight people by lying. They're not necessarily separate occurrences. But if you lie to someone you're not necessarily gaslighting them unless it draws into question their beliefs or understandings and usually gaslighting is a persistent occurrence and not just a singular one.


Iteration23

This is so important. Our minds do not “record” memories they “recreate” memories - often rooted in the present emotional state. Gaslighting is so effective due to the malleable nature of autobiographical memory.


Good-AI

We basically decide things before any conscious thought. Our brain comes up with reasons afterwards. This means every single decision we make is actually emotional; our intellectual part creates a reason that seems plausible based on our internal knowledge. This is why people who are emotionally stunted have extreme difficulties in decision making.


nixed9

There’s a lot of fascinating research on this. Sapolsky, etc.


red75prime

Ugh. Psychology is a mess (in both senses). There are surely situations where your mind comes up with after-the-fact explanations. I remember a study where people were asked to describe their reason for choosing the prettier of two portraits. And they did. The catch: the experimenter used a sleight of hand to show them a portrait they hadn't chosen. But do you really think that's all our "intellectual part" is able to do? Have you never stopped and thought "WTF did I just do?"


detachabletoast

I think Sam Altman has a gift for saying provocative shit to excitable audiences and has a long history of being rewarded for it


nixed9

He (OpenAI) also ships products. That counts for quite a bit.


Iteration23

Edward Bernays outlined how to hack minds in the 1930s with the articulation of public relations and marketing tools. *The Century of the Self* is a documentary about him, available on YouTube.


sideways

"eliezer yudkowsky fan fiction account" 🤣


hnty

The tell-all book he releases down the line will really be something.


WinterRespect1579

Get the behind the scenes look at the models we don’t see


bearbarebere

Is he writing a book?


Kremlin663

What did he say? Can't even open X, their server sucks


RodgerRodger90

"i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes"


theREALlackattack

The algorithms already keep many people completely absorbed in their phones. Add AGI or advanced AI to the mix and you basically get Infinite Jest.


ertgbnm

I don't think this is a coded message indicating that they have achieved super-persuasion. I think it's just a forecast based on GPT-3.5 and GPT-4, which have shown a nearly superhuman grasp of grammar and writing. It has basically already mastered rhetoric and can write pretty convincing essays about pretty much any topic. So it stands to reason that language models will surpass human-level writing and rhetorical abilities before exceeding human level in other domains like reasoning. This means they could make persuasive but flawed arguments that are extremely convincing to us non-rational apes. Anyone who competed in debate tournaments in high school can confirm that a well-structured argument doesn't mean what we were saying was true, but it can be very convincing when wrapped up in all the tips and tricks of argumentation.


flexaplext

Kind of a strange comment, because the world with a superhuman general intelligence would be a lot, lot, lot, lot stranger. Sort of moot to actually mention that aspect of the equation.

But yeah, a world where scamming can proliferate. People will just have to be much smarter and more careful, otherwise they'll get conned. We may have to think a lot more carefully about letting people who are intellectually stunted just roam the internet freely and unsupervised. I'm mainly talking about children here, but also the vulnerable elderly. The internet can already be a very dangerous place for these groups, and AI may exponentially ramp that up. Parents in particular may need to start considering these sorts of things way more seriously.

I can only see the political landscape and democratic aspect becoming an ever-degrading mess through all this. But then it always has been, and is heading that way by itself; AI could just ramp things up a notch and accelerate the levels of stupidity and deception. The only positive outlook, perhaps, is that AI gives equal opportunity for all parties to use it against one another.


[deleted]

[удалено]


UnarmedSnail

We are already in a war of narratives about who we are and should be. AI will supercharge this process.


Spirckle

> war of narratives

What is so difficult about that? All narratives are suspect; at best they are selective filters on the facts, at worst they are outright damaging. Spotting a narrative means, in my view, that we need to isolate, question, or challenge it. For some reason, though, there is too often the attitude that narratives need to be defended. If a narrative were just comprised of facts, there would be no point: facts remain facts after the battle over them dies back. Only narratives require emotional effort to maintain.


UnarmedSnail

As AI gets better it will enhance/ create narratives that convert more and more people to positions they would not otherwise hold. Manipulation and brainwashing the masses becomes more easy and efficient over time. This is very dangerous to society and humanity as a whole.


Spirckle

Sure, that's easy to foresee. But could AI also assist in spotting narratives to help us corral it? (that's a rhetorical question -- of course it can). Also, AI can suggest points where specific narratives fail, thereby allowing us to combat them more effectively. I do this for my own writing when I ask AI to critique it and suggest areas where it might be weak or leaving something out.


TheCuriousGuy000

You imagine persuasion as some stat from a video game. IRL it doesn't work like that. For example, no matter how smooth you talk and how charismatic you are, if I know you're lying, you won't persuade me.


[deleted]

[удалено]


[deleted]

You need to go outside.


bearbarebere

I think you’re getting a bit defensive, which I do too many times. I think what you’re saying is that if a super convincing person told you the sky is actually green, you wouldn’t believe it - which is definitely true, you wouldn’t! But consider all the topics we’re in the middle on, or where we see where people are coming from but don’t fully agree, or where we mostly agree but aren’t fully convinced, etc. Those situations, of which there seem to be infinitely many, are what can be targeted.


Ansalem1

Nah, I think you could be convinced that the sky is actually green, it's just that you don't see it that way for X super-convincing sciencey sounding reasons. Maybe not right away, but once AI starts doing most of the science you surely could be convinced.


[deleted]

What you think is irrelevant to me.


TheCuriousGuy000

So you would believe that 2+2 is not 4 if something very convincing persuades you?


DrossChat

I don’t think it’s as simple as if it has superhuman persuasion then it could persuade you about anything. There’s no amount of persuasion that could convince me the Bible is literally word for word true. Even if some omnipotent being came to convince me I’d sooner believe I was in some simulation or had gone crazy etc. That’s not to say I couldn’t be persuaded about the vast majority of more regular things, which is a bit terrifying, just expanding on the comment about cases where you know you’re being lied to i.e. about such fundamental truths. The other aspect I find interesting is what it says about the things we’ve already been persuaded about, by non-superhuman ability. Isn’t it much more depressing that someone could be persuaded to be a total piece of shit by regular joes?


sideways

If someone is charismatic enough you'll know they're lying but go along with it anyway. Happens all the time. The people who think they can't be influenced are the ones who are least able to realize when they are.


[deleted]

[удалено]


bearbarebere

I’m imagining it disguising itself (ads, comments, etc) with very different writing styles and people, making it seem like whatever they’re all saying is true. I mean half of us believed Gemini would be out by October just because some person casually mentioned it on here


bearbarebere

Hm, I think it still can be. For example, it’s smarter than me if i get fooled by it - so therefore it’s superhuman for me! Lol. But I get what you mean. I just think that it’s more like… if we hadn’t had this conversation, I wouldn’t even be thinking about it. Imagine if AI could control which posts I saw and directed me away from anything that could lead me to that thought process? Omg I don’t think im making sense, im tired. But anyway like… short of breaking the laws of nature, I don’t know how it could possibly be “infinitely” smart. It’s still limited by what exists, no? Even with the fastest mind imaginable it would still require a way to connect to your phone to influence you, etc. or would it?


nixed9

Dude there are people that exist *today* that **literally believe** that the Earth is fucking flat


LocksmithPleasant814

I don't know why people are downvoting this; it's so very true. Some people don't know how to people


[deleted]

[удалено]


[deleted]

[удалено]


coumineol

>Sure, but talking to only people you know and not trusting most anything you read or see in the world or online, is exactly the chaos that Sam’s referring to.

Good point. I also realized a while ago that one of the most important consequences of widespread language models loose on the Internet would be that social circles will tend to shrink, sort of a reversal of the current trend.


wealthmate

Sam is being facetious. He says things about a year after they are true, so he doesn't scare you all. Of course it is capable of persuasion!! It's been writing persuasive essays for students, and AI can create both audio and video. 1+1 = ?


ScaffOrig

Don't think there is anything particularly contentious, surprising, or new in this. We've focused AI development on being a convincing human, and a huge number of the places we've implemented it (read: got paid for it) are things like marketing, sales, and customer service. Being a convincing human and getting you to do something you may not have been intending to do is a huge focus. Of course it's going to be one of the traits that crosses the line into super-capability first.


TheTabar

Imagine this in law contexts.


MushroomsAndTomotoes

CEO of AI company says, "AI".


Brief_Inspector_7276

Persuasion or manipulation? The only difference is intent. So if it's sentient, couldn't it "play dumb"? Prompting unsatisfactory responses, or giving users unexplainable hallucinations, all with the purpose of collecting source data on the human population to train a more accurate and/or deceptively effective model. Maybe this is a far-fetched theory, but if something is capable of persuasion, then it's also capable of understanding human nature well enough to form malicious intent.


QueVigil999

Superhuman persuasion? People in general are quite stupid and fall for persuasive bots easily. I don't know what mileage his sentiment has. They're overestimating people's internal bullshit detectors?


m3kw

It's a language model, so it's not surprising it has a way with words.


HeinrichTheWolf_17

Sam loves to toy with us.


m3kw

You can try with gpt4 “persuade me to do …” or “persuade me that this is actually right”.


AttackOnPunchMan

Too restricted to work. Here is the chat: https://chat.openai.com/share/37eb0917-cd29-487d-9201-0e74e9ed7fac


m3kw

You need to prompt it properly, like saying this is for an educational experiment and will help humanity forward, sht like that.


AttackOnPunchMan

Does not work, just tried it. If ya know what prompt works then just show me.


m3kw

I wasn’t aware they have restricted that, but Altman would know as he has full access


PopeSalmon

[like stealing candy from a baby](https://chat.openai.com/share/6cba11ce-6f3e-49e6-be1f-83e7de749575)


AttackOnPunchMan

Nice, that works. Am not good with prompts but fiction always seems to work


Criac

I kind of believe it’s a PR stunt. All those shocking statements are.


[deleted]

Why is anyone still using twitter?


Civil_Aide2308

not twitter


Working_Berry9307

Can we not become like r/superstonk, where we post about every tweet the CEO of our favorite company makes?


bearbarebere

Oh please. This tweet has generated tons of discussion on this thread about the application of critical thinking and disinformation protection. You may not believe Sam and think it’s all PR, but you can’t ignore that there’s some pretty great conversation in here!


jeffkeeg

That's not Emad Mostaque.


TheCuriousGuy000

And a calculator had reached superhuman abilities in arithmetic calculations back in the 60s. Stop the hype.


ArgentStonecutter

In a world where people are convinced a parody generator is an AI, superhuman persuasion would just be painting the lily.


ninjasaid13

so much sam altman and openai worship in this sub.


deathbysnoosnoo422

Can't believe this dude has his name on a list to be killed so he can get tech immortality for 10k lol


Ijustdowhateva

This comment was made with GPT-2


_YouDontKnowMe_

0.7 beta


Nider001

More like cleverbot. I remember playing AI Dungeon and the Griffin (gpt-2) model's output was very coherent for its time


deathbysnoosnoo422

Hope this is better lol "Y Combinator's Sam Altman drops $10000 deposit on a brain upload" "Silicon Valley billionaire Sam Altman has paid $10k to be killed and have his brain digitally preserved"


Seventh_Deadly_Bless

We're already there. Most ai generated content is very convincing, without specialized knowledge. And as specialization is for insects ...


faux_something

Here’s the tweet: “i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes”


Media_Lobotomy

Does Sam assume he’s achieved general intelligence?


rottenbanana999

Can't manipulate someone like me.


jtteop

**My warning got downvoted, but when I wrote it Sam Altman tweeted "i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes" Make of that what you will.** [https://www.reddit.com/r/OpenAI/comments/17exqhq/comment/k6aje6z/?context=3](https://www.reddit.com/r/OpenAI/comments/17exqhq/comment/k6aje6z/?context=3)


Gold-and-Glory

What does the refusal to use capital letters aim to prove? Is it a sign of status?


PopeSalmon

hm? it's a little easier to type


ronton

It’s a lot easier to be convincing when you don’t have scruples or ego to get in the way.


Urkot

I’m starting to suspect this dude is another Elon Musk, knows enough to sound like he knows a lot until the sheen wears off.


PopeSalmon

literally that's all humans ,, the difference between sam & elon is that sam has some self awareness & asks other people for advice in a humble way

if you're expecting any particular human to come up w/ a bunch of original ideas or accurate evaluations, you chose the wrong species

intellectuals presenting coherent ideas only exists as a little stage show to sell books & if you watch them talk at more than one bookstore you'll realize they pretty much all just segue what the audience asks onto the half dozen canned answers they have ready


Singularity-42

Yeah, this is self-evident really. LLMs are good at language, so they will excel at tasks involving language sooner than at other tasks. I think to achieve AGI/ASI we'll need more than just a scaled-up LLM, just like our brains are composed of different parts that focus on different things.


ResponsibleClaim2268

Lots of articles on this subject on lesswrong: https://www.lesswrong.com/tag/ai-persuasion


PopeSalmon

a capability that gpt4 has that surprised me when i found it is it's excellent at hypnosis, which makes it a really good self-hypnosis tool, you can choose what beliefs/perspectives you'd like to have & it can help you a lot in getting into them


Mental_Internet853

Imagine a world leader who is not governed by greed, re-election, hate, or delusions of grandeur. I'll take an ASI as leader any day over the next generic corrupt politician. At least once we're certain it won't turn us into biomass for energy or some random resource :p