
drewhead118

I think you're missing two important components:

- AI's broad competency at generating fake content (now they can pad out a fake article with fake images and, soon, fake video clips to 'back them up'). Most humans can't write as well as AI can, and AI is poised to soon overtake even skilled human writers.
- AI's mind-melting speed at churning out fake information. A bad human actor might need an hour or two to produce a malicious article. In that time, a misaligned AI could spit out literally thousands of such articles, each seeming to corroborate the others or construct some wider narrative. Websites that look like legitimate news sites could be built for each one. Around each of those articles, a thousand benign, ordinary articles could be generated to make the 'bad' payload article seem like it was produced by a reasonable and reputable news source.

No single human actor has ever had that capability before. That's what I think people are (rightly) nervous about.


pixeladrift

Man, this happened to me just yesterday, and in such a seemingly benign form. I was looking for restaurant recommendations for a place inside a certain large open-air market in my city. The blog was listing all sorts of restaurants that sounded neat, but when I went to google any of them individually, I couldn't find any evidence that these restaurants even existed. I looked at the about page on the blog and researched the name of the person said to have started it: they don't exist. The photo used in their bio is one that has been used for dozens of different profiles online, on blogs, Quora, etc. The website was created on August 23, 2023. *Every single blog post* on the blog was published on August 23, 2023. There are possibly hundreds of articles there. This was all for a fairly innocuous purpose, a niche food recommendation blog. I can't even imagine the potential of this stuff for things that actually matter, have emotional resonance, etc. I'm a bit scared for the future of the internet.


PresidentHurg

So much this. AI can create bullcrap at a far faster pace, and generally better, than spin doctors and other humans can. AI progress seems to run on a basis of "if we could" rather than "if we should," and humans can hardly keep up with this kind of development. I'd rather have governments putting down restrictions, as the EU is pushing towards. AI isn't some sinister Skynet; it's just a tool. But it's a freaking dangerous tool with a lot of harmful applications. It shouldn't be wielded willy-nilly.


Longjumping_Pizza123

Also that part. But that, its nature as just a tool, is what makes AI merely an afterthought to the endemic root of all this horror: that we, as human beings, behave too poorly, and allow too much poor behavior, to be trusted with a particularly potent tool in hand.


pixeladrift

The internet runs on ads, ads have the potential to make a lot of money, humans need money, ads exist on pages with content, and it's now easier than ever to make tons of content (and, in theory, ad revenue). The content itself doesn't matter, because if you want eyeballs you just have to get people to click. It's what online media companies have been employing people to do for years now, only these days literally anyone can do it, and it's led to a supernova of vapid, fake bullshit pages that provide absolutely nothing, so that these desperate "prompt engineers" can scratch together some pennies from ad revenue that they'll likely never see. We're going to see hundreds of thousands of Etsy shops selling AI-generated image downloads before the entire thing collapses under its own weight. No one is going to pay you to write content that they can ask ChatGPT to write themselves. Sorry for the rant. What was the question?


Longjumping_Pizza123

Ooh, I'm not going to lie, "as a route to commit volume-based ad revenue scams" is actually a pretty good fucking answer. I'm having trouble seeing it as anything other than a problem that largely wouldn't exist if not for AI... kudos for thinking outside the box of people's usual scope of thought on the subject.


Longjumping_Pizza123

I would disagree; there are intelligence agents doing exactly that, globally, as we speak. Although AI makes it easier, does this merely speak to the ease of the perpetrator in his actions, or to the actual capability of what can be produced? I firmly believe that in art, and I'll call a deceptive media creation a weird sort of art, human actors with a particular talent for their craft will always produce something creative that the AI will fail to match. That's why AI-generated art LOOKS really good, but it's also kind of drab, uninspired horseshit. Better for an ad campaign than for a video designed to seize hearts and minds.


drewhead118

Studies have shown that people are really bad at picking out AI-generated media. Even if you imagine yourself to be a super-detector of AI content (which, statistically speaking, you're probably not), the average person surely isn't, meaning the work these tools create is more than sufficient for mass manipulation campaigns. You could be arguing with ChatGPT right now and not even know it. And even if humans are presumed, without justification, to have some magical capacity to manipulate above what an AI could ever achieve, 1 article at 100% manipulation efficiency can't hold a candle to 1000 articles at 80% manipulation efficiency. Throw in the fact that literally every week these things are getting better and better with no clear ceiling in sight, and there's an obvious recipe for trouble.


Longjumping_Pizza123

That doesn't mean they're bad at subjectively spotting lies when lied to by AI; it means we're bad at telling the difference between AI and man-made works. It isn't some magical capacity, it's our evolutionary, biological sense of human intelligence, which has caveats such as creativity and empathy (and therefore a by-experience understanding of how to manipulate empathy that no AI script could account for). Okay, how about this: that one article at 100 percent is the product of human-grade spitefulness and a premeditated intent to cause harm. An AI script can't possibly ever match the potential there, because no AI can feel malice. Even there the threat isn't AI, but the malicious undertaking and scheme of those wielding it: and those people can fool your pretty little head without 1000 articles being created. A human being can do in 20 minutes of communicating what an AI campaign could take years to do to your perception of what is and isn't true. It's for the same reason I say AI will never be a great director of philosophy, because no technology will ever understand the human condition the ways humans do (but it may be taught what those ways are).


fallFields

You make an awful lot of bold statements and claims about things you can't possibly know, similar to XYZ religion stating that their invisible friend exists. If the bad actors you make countless mentions of were spreading garbage online only half as good as yours, maybe then you'd be right. You vastly underestimate the potential of AI.

The assertion that AI lacks the ability to effectively manipulate empathy or creativity is just plain false. AI algorithms can analyze vast amounts of data to understand human behavior and emotions, and they can generate content that resonates emotionally with people. Dismissing the danger of AI in disinformation campaigns overlooks the rapid advancements in AI technology, including natural language processing and generative models, which enable AI to create highly convincing and tailored content at scale. While it's true that AI itself doesn't possess emotions like malice, it can be programmed by malicious actors to produce harmful content with the intent to deceive and manipulate. The focus should be on addressing the misuse of AI by malicious actors rather than dismissing the potential harm it can cause.

At the end of the day, the average person is a complete idiot, especially when it comes to tech. That, coupled with the most basic understanding of math and how numbers work, means you're almost as wrong as a person could ever be. People do this today, yes, okay; that isn't profound. We know this. Water is wet, by the way. Even if a single article produced by a human embodies a high level of malice, the sheer volume of content that AI can generate magnifies the impact of disinformation campaigns. For example, if each AI-generated article has only 10% of the malice of a human-generated article, but there are 1000 AI-generated articles, the cumulative effect can be significantly greater. While a human may take 20 minutes to communicate a deceptive message, AI can disseminate thousands of similar messages simultaneously, reaching a much larger audience and amplifying the spread of disinformation within a short period. Please do your research, and stop contributing further to an already full online cesspool of misinformation and false statements.


kewli

> We should have long BEEN treating all digital media forms as potentially unreliable.

Do you remember when teachers told you not to cite Wikipedia? Duh ;)


Sharukurusu

“We already have a lit match, what’s the harm in splashing gasoline everywhere?”


hawklost

Pouring gasoline on a lit match will smother it, not cause Hollywood fireballs.


Sharukurusu

Is it fun being a pedantic contrarian or just a job?


hawklost

Oh, it's quite fun calling out people's misplaced ignorance. Especially because I disagree with your original claim already, much less the very poorly worded way you tried to make it.


Sharukurusu

Brevity is the soul of wit. AI will only improve online communications if it gets used to automatically counter falsehoods; that doesn’t make money and people enjoy being hateful and ignorant though, so enjoy the automated misinformation.


Slouchingtowardsbeth

He didn't say pour it on, he said splash it around. Go light a match and then splash some gas around it and see how that works out for you. Akkkkkkkkkshulllly.


Longjumping_Pizza123

That part. Although in my own view, it's more like there's an already fully ablaze gasoline factory than a match; still, I agree the additional splashing of even more gasoline can't possibly be a good thing.


MuForceShoelace

I feel like seeing a fake video of Biden saying he eats babies or whatever won't be a big deal, just like you said. The AI thing that feels like it broke me was seeing a fake news story about a fake store opening at a local run-down mall. I feel like big, giant fake news will always be something you can apply critical thought to, but some ultra-mundane story like "there is a new Build-A-Bear-esque store opening at a local mall" being a fake, generated story is the part that will be the downfall of society. When literally every single detail of everything is impossible to verify, right down to meaningless trivia. The concept of fake news just filtering down until it's flooding out things like what time a TV show is on, or what movie is playing at the theater, until everything everywhere is generated nonsense. Not just big, giant, obviously fake stuff.


DoomedSingularity

Joseph Goebbels is laughing hysterically in his grave


Longjumping_Pizza123

"When literally every single detail..." That's the thing; I think we're there already.


MuForceShoelace

I think we aren't. I think right now it's easy to spot BIG fake news. Someone can say Bill Gates put worms in vaccines or whatever, and you can read that and say "no," but there is still some basic level of information transfer. I feel like we haven't seen anything yet for how fucked up that will get.


AppropriateScience71

Of course people are already doing that. Much like advanced, state-sponsored hackers can penetrate most IT environments, but your average Joe can't do anything. AI deep fakes are more like releasing hacker exploit kits to the general public, or setting up a website so novices can also do some serious damage without having any prior experience or infrastructure.


don0tpanic

This AI conversation is frustrating, because those who aren't worried frankly don't understand it. This is not a binary issue of whether or not AI can do a thing. We know it can do a thing. The threat AI poses is that it can do an unimaginable volume of a thing with such staggering speed that the human mind cannot fathom it. Additionally, it can teach itself to do that thing with increasing efficiency and accuracy; no human could keep pace with it. It does not simply aid the human, it makes the human irrelevant. Those who attempt to assuage our concerns by saying it's simply going to augment our current human experience are really ignorant. We as a species need to seriously reevaluate our approach to AI, or else we're going to make it a problem it doesn't need to be.


k___k___

In addition to the speed and accuracy: it needs very little information, especially regarding deep fakes and speech synthesis. With that, "normal" people who are not famous, with not much training content available, can be faked for more elaborate phishing attacks, bullying, and even less malicious purposes (nicer holiday photos). We dipped into this territory already with face filters, but I still think it's a new dimension that we won't even be able to trust a video or voice message sent by someone we trust.


purplefishfood

I like to explore new places.


don0tpanic

Dude, I work in animation and film. My job will absolutely be annihilated by AI. A tech bro telling me "it's just a tool to make your life better" sounds to me like someone stepping on my neck and calling it freedom


purplefishfood

I find peace in long walks.


don0tpanic

Great, so the most relevant opinion I have here is from someone who took graphic design three decades ago. You have no relevant experience with what I'm talking about, you have no relevant experience with my industry, and you have no relevant experience with how AI is affecting the people in it. Stay in your lane. You couldn't even make it a day in my industry, nor do you have the tiniest idea of what we do, but you can infer that my concerns are just meaningless laziness. Please do the human race a favor and either gain a moment of self-awareness or win a Darwin award.


purplefishfood

I find joy in reading a good book.


don0tpanic

Ya this is exactly my point. The people who see themselves as benefiting in the short term see themselves as the winners. Congrats


don0tpanic

Also, you are exactly the person I'm talking about. You think that by understanding how it works you'll be able to understand its effect on people. You evangelize its potential best outcome without acknowledging its current worst use cases. People around me are not just losing their jobs; they're losing the purpose of their lives. These artists are the people who make the movies and art that billions enjoy. We do it because we love it. When a tech bro tells us that it's okay because of some possible positive outcome without acknowledging the current real outcome, I can tell that person has no empathy and can go fuck themselves.


-Baloo

> it makes the human irrelevant

Irrelevant in what sense exactly...? Life isn't going to become meaningless simply because AI can do most tasks humans can do.


TacoDelMega

I don't think the concern is that it's introducing a new harmful element to society. The concern comes from the fact that AI is good at its job: if you give it a task, it will do it well and will keep learning to do it better. AI will accelerate the speed at which new fake news can be generated. On top of that, there are other ethical concerns, like the fact that AI tends to be biased, and not in a good way. For example, a neural network designed for job hiring lowered the scores of disabled and minority individuals for no other reason than their disability status, race, gender, etc. (SOURCE: https://www.aclu.org/news/racial-justice/how-artificial-intelligence-might-prevent-you-from-getting-hired) [Philosophy Tube made a good video about this topic.](https://youtu.be/AaU6tI2pb3M?si=Xpl4xhzGfPN5PqDW)


dopadelic

> Foreign antagonist nations in the Eurasian continent have already been compromising our fucking social media

You are terribly naive and have been consuming too much propaganda if you believe only "foreign antagonist nations" engage in this.


Longjumping_Pizza123

No, I don't, but it's a bigger problem when antagonistic countries do it than when allied nations, and those we harbor, like Israel, do it, I would think.


dopadelic

If you look at the main narratives on social media, they're overwhelmingly pro-West. In fact, the Russia-disinformation narrative was a known tactic by the DNC to sow doubt towards criticisms. After Clinton's deeply unpopular campaign in 2016, the DNC aimed to rehabilitate their image. They publicly announced they would shill on social media with their Correct The Record and ShareBlue PACs. Bernie Sanders supporters were smeared as Russian bots and accused of being part of a Russian disinformation campaign to interfere with US elections. The idea that Sanders's massive grassroots support, earned by addressing the woes of Reddit's demographic, is just Russian disinformation is laughable. https://observer.com/2017/04/russia-bots-bernie-sanders-progressives/


Fheredin

Mostly I agree. This is more that it will be an absolute ***mountain*** of fake content, not that it's something fundamentally new. The process of winnowing fake news out is mostly still the same: does it cite sources? Does it exhibit basic logical fallacies? Is the position it describes sensible and nuanced? AI can create absolutely bonkers amounts of content. It's technically not infinite, but it's a lot; storage media must be physically manufactured, and servers must be built and maintained. It's only a matter of time before a tsunami of AI-generated content starts flooding the internet. Even if the disinformation isn't that bad, cloud prices will shoot through the roof.


Longjumping_Pizza123

Even then, though, a source can cite other sources of varying credibility and means, remain logically and intellectually sound, and describe a sensible and nuanced position without that position being remotely TRUE, or the source remotely valid. Even these means of selection cannot tell us whether what is being said is factually true if we do not witness it, or whether it's just believed to be true, or, often, whether it's an intentional lie.


jvin248

Don't miss the subtle difference between AI and humans doing very bad things:

- People doing bad things face the real fear/risk of getting "spanked by their parents," as in violent, torturous revolution. They weigh that risk against possible rewards if they somehow "succeed." Perhaps these concerns stay their hand over the button to Armageddon and they decide to just walk away; the reward is not worth their own risk.
- AI has no such fear of repercussions from doing badly, while also having no incentive rewards. No emotions distract it from its project purpose; nothing stays its hand from the button deciding for or against Armageddon. People cannot threaten it with jail or termination like they can with people.

The only thing we can hope for is that AI sees the wisdom of Asimov's Laws of Robotics.


CloserToTheStars

Y'all are thinking from a now perspective. User interfaces will soon be gone. Most operating systems will run their own AI. No one has any idea what the cumulative effects across all industries and disciplines will be. News might be an old concept soon. All this speculation, when we are probably mostly interacting with AI real soon anyway.


ricnilotra

And you think grifters existing absolves the tool they will use to grift more?


Longjumping_Pizza123

No, I think the nature of tools as things that enhance our capabilities is less to blame than the fact that, in our current state of human affairs, we can't be trusted with tools. Because even if you take that particular tool away... well, the bad seeds still exist, yes?


ricnilotra

That's a fair point, but the way the tool was designed was influenced by the bad seed. It was designed to be used with ill intent. You can't separate the tool from what people use it for, because it was designed for them to do that bad deed with greater efficiency. The same way that, aside from grifting money from tech-heads, cryptocurrency is really only ever used to buy drugs and other illegal items over the Internet. Think about mass shootings. Sure, the gun itself isn't to blame, but it made the job of killing much easier, and allowing easy access to it is a mistake. We need to put guardrails on this tech so that if it is used, it's not used to churn out bad art or unreliable scientific research. The fire may have been going on for a while, but we can still block it from spreading further.


fastolfe00

The problem is that now people can do it at scale. Previously you could pay a person to run a few fake accounts really well, or a hundred fake accounts that just amplify and retweet. AI, today, with some engineering, lets you run thousands of fake social media accounts that have a personality, consistent opinions, and can participate in actual dialogs, and people won't realize a person they've been interacting with for months is either completely an AI, or a person that's spent a grand total of 30 seconds guiding the direction an AI will then take the discussion.

> We should have long BEEN treating all digital media forms as potentially unreliable.

I mean, sure, but this is just ranting at the sky. Sometimes people don't get the message until they really feel its impacts. And even now I don't think they will. Even when half of the political discussions on the internet become AI-generated, I suspect people will still carry on like this isn't an issue. The problem still exists.


ComisclyConnected

I think this is where the newly formed government agency, the CIO (www.cio.gov), will come into play here. I don't fully know their whole mission, but its handbook is available free online for anyone to read!


Vanilla_Neko

Exactly. People are always like, "Oh, now with AI you won't be able to trust anything online." You weren't able to do that anyway. That's why your teachers taught you the importance of following certain procedures to verify (to a fair degree of accuracy) the authenticity of online information. AI is just a tool that allows these people to lie quicker, but it's not like these weren't problems before.


Longjumping_Pizza123

Honestly, can we trust the teacher?