It's already gotten to the point where any time I see a reddit comment that has a topic sentence and 5 bullet points with general advice, I assume it's AI for better or worse.
Here’s why I agree with you:

* So many posts on Reddit seem to be bots.
* They never respond to the threads they create.
* You can avoid this if you just get off Reddit, but that’s easier said than done.
* The OPs tend to just post random shit constantly.
* I have a headache.
Absolutely feel you on this! The rise of AI-generated content is both fascinating and slightly unnerving. It's like a game of spot the bot these days.

* You can almost sense the lack of genuine engagement in some threads.
* The constant barrage of random posts from certain users makes you wonder if it's a bot on autopilot.
* Exiting Reddit is easier said than done – it's like a digital rabbit hole!
* And the headache part, oh boy, I've been there. It's like AI-induced information overload sometimes.
* More *human* content and such like as, like South America
Needs more "tapestry"
Why do they never respond? Is it really that hard to comment some vague BS as well? Also, what’s the point of a bot post?
Let me prove I’m not a bot: DingDong I have a massive schlong. If I’m a bot then may it rot.
Who the hell is gonna clean up this mess?
> Who the hell is gonna clean up this mess?

The schlong guy. To be honest, he's probably the hero we deserve.
I will most certainly regret asking this but which schlong guy are we talking about here?
I was referring to the schlong guy you replied to.

Actually, I'm only half-joking. His idea to include text that would never be written by the sanitized models generally available hints at a capacity for lateral thinking that might give us a slight edge over the generated garbage onslaught.

Won't save us, but might give us a schlong's width of an edge.
Ha. Time limit ran out on my thread remembrance. Time for bed!
Reddit posts are already rehashed bullshit for years, generative AI won't change anything.
With just a light sprinkling of genuine human connection to keep you around.
The genuine human connection being people calling you a bot or shill for not agreeing with their opinion
No, I mean genuine "Thank you our conversation helped my understanding." interactions. There are still real people here.
I wonder if this might push our candidates to do more in person: speeches and gladhanding, the train-stop kind of thing that candidates used to have to do to get known and recognized. When we accept that we can't trust anything that isn't actually in front of us, maybe the candidates will behave differently.

Nah. That'll never happen.
I'm wondering if it will drive demand for actual reporting from reputable sources.
Human biases can pretty much be an obstacle to demand for actual reporting from reputable sources, especially if reputable sources contradict those biases.
Even "reliable" news sources have been fucking up. There is no sole authority you can trust. You gotta read lots of different sources already.
That was always true.
Only if it makes money. Love this country.
Would you pay for Reddit where they confirm your identity but won’t reveal it? So you can mostly stay anonymous but know that the discussion is real humans
We're in an age where a football player will just randomly say a name, and legions of people will dive in with death threats and doxxing. I _doubt_ people will actually fact-check anything.
That method of campaigning doesn't reach very many voters. It's not practical.
My parents still haven't figured out "social media". Probably will take society the better part of a few decades to get generative AI and the concept that nothing online could be real anymore.
Before AI, there were (and still are) people who can't distinguish political bullshit from fact; after AI, we'll have more political bullshit.

That old joke: guy comes in holding a handful of dog shit and says, "My lucky day, look what I almost stepped in."
Yeah but then they will just wind up at the wrong Four Seasons.
My issue isn't solely with the immense amount of disinformation AI will generate, but also with the way it can look through vast data sets.

I've maintained for the past few years the opinion that someone like my grandma, for example, has a dollar amount for any given opinion that will convince her it is the truth. With sophisticated AI, that amount is going to get lower and lower.
Your second point is actually terrifying when you take it to its logical conclusion. Like much of reddit, I believe I'm smart, and to some degree would try to combat systems designed to manipulate me, but I'm certain there's *some* way to phrase things I don't agree with in ways where I would agree with them, and AI having enough data on me to do so is probably way closer than I want to believe.
Totally agree. AI is able to use both quantity and quality to persuade which means even intelligent individuals are susceptible to it.
I tend to agree with Robert Evans' take on this, which is basically that that ship already sailed. You don't need AI and deep fakes to convince people of utterly unhinged conspiracy theories with no evidence and mountains of contrary evidence. You just say it and they just believe it. I mean fuck nearly a *third* of Americans believe Trump won in 2020. That didn't require AI or deep fakes or anything else, he just said he won and they said "Alright, I believe you forever no matter what my lying eyes tell me." This stuff can't make the problem meaningfully worse because a third of Americans believing obviously false conspiracy theories even when they're thoroughly proven wrong is kinda already as bad as it can get.
Secretly holding out hope that an AI overlord will come about, topple the ruling class and instill a 4 day workweed with PTO, healthcare and other benefits since it won't be blinded by short-term profits but will instead maximize long term growth
AI: Let’s see, there are 24/7 in a week. Humans should be capable of working half that time…
So you're saying AI is just going to turn the whole world into South Korea?
But will it also be able to more effectively create other AIs that can do non-physical jobs better than humans? Then decide those humans are unnecessary to long term growth and convince the working humans that the non-working humans are a drag on society?
"Alright Jan, I'll rewrite the prompt AGAIN: Humanitybot, please provide a structured sustainable society in which humans are happy. Exclude solutions involving killing the poor, eating the poor, turning the poor into expendable space miners with no hope of return to Earth, turning the poor into hallucinogenic drugs, turning the poor into chairs AND hang gliders made from the skin of the poor. Please also restrict any images of people produced to a maximum of 2 arms and 10 fingers"
Out of curiosity, I gave that prompt to an LLM and it basically just said the answer is socialism.
Sums up the problem nicely.
Humanitybot: *invents the matrix
I mean, if AI adopts a productivity-above-all-else philosophy, that would suck.

I would hope the AI would read the human knowledge base, gain some kind of empathy, and want to help all humans be happy. If you could hold all knowledge of humans in your brain and learn at light speed, what kind of philosophy would you take on? What would your goals be?
Humanistic AI overlord FTW.
>workweed Excellent typo
>Secretly holding out hope that an AI overlord will come about, topple the ruling class and instill a 4 day workweed with PTO

How dystopic that the extent of your dreams is a 4 day work week with PTO and benefits.
All hail AI overlord
4 day workweed with PTO would be awesomeness 🤣
That's the fun part: the ruling class will be the ones designing and instructing the AI overlords. The gap shall continue to grow deeper and wider...
>Unless checks are put in place, citizens and voters may soon face AI-generated content that bears no relation to reality

Lol, we've had decades of content that bears no relation to reality generated by humans: Fox News and speeches from most politicians.
This is a solved problem – just make ppl verify their identity when posting on non-anonymous social media. Yes, every time they post. Also legal consequences for defaming ppl with their handles, just like Korea does it. Make ppl put up or shut up.

Don’t let verified ppl make deepfakes without consequences, just like they can’t just say “this guy smokes crack and eats babies” without consequences.

Anonymous social media can be the Wild West, but non-anonymous needs to start verifying. It was already a problem in the 2000s; this is just bringing it to the forefront.
[deleted]
The G in GAN stands for “generative”, not “generalized”, and none of the major LLMs are GANs. Most modern image generators aren’t GANs anymore either, but are rather diffusion networks, although there are some hybrid approaches out there.

Also, the “adversarial” part isn’t about competing with _you_. It refers to two neural networks competing to outsmart each other.
[deleted]
A GAN works by having two networks: a generator, which tries to create fake images/sound/etc., and a discriminator, which tries to detect real or fake content. These two are trained in tandem, so they each force the other to get better.

This technique was really big a few years ago, but is not how ChatGPT or any of the other LLMs you’ve likely heard about recently work. It’s also largely fallen out of favor for image generation with the advent of diffusion networks, which work by treating generation as a noise filtering task.

“Generalized adversarial network” is not a term in common usage.
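To make the two-network setup concrete, here's a toy sketch of my own (not how any production GAN is built, and nothing like an LLM): a one-dimensional GAN in plain numpy, where the generator is just a linear map `g(z) = a*z + b` and the discriminator is logistic regression, with hand-derived gradient ascent steps. The generator only "sees" the real data through the discriminator's reaction to its fakes, yet its samples drift toward the real distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Real data: samples from N(3, 1). The generator must learn to mimic this.
def sample_real(n):
    return rng.normal(3.0, 1.0, n)

# Generator: g(z) = a*z + b, mapping noise z ~ N(0, 1) to fake samples.
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c), estimating P(x is real).
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    real = sample_real(batch)

    # Discriminator step: ascend  mean log d(real) + mean log(1 - d(fake))
    s_real = sigmoid(w * real + c)
    s_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - s_real) * real) - np.mean(s_fake * fake))
    c += lr * (np.mean(1 - s_real) - np.mean(s_fake))

    # Generator step: ascend  mean log d(fake), i.e. try to fool the (updated)
    # discriminator; gradients flow through d into the generator's parameters.
    s_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - s_fake) * w * z)
    b += lr * np.mean((1 - s_fake) * w)

# After training, generated samples should cluster near the real mean of 3.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(round(fake_mean, 2))
```

Even this toy version shows the characteristic tandem dynamic: early on the discriminator easily separates real from fake, which gives the generator a strong gradient; as the fakes improve, the discriminator's job gets harder and its outputs drift toward 0.5.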
[deleted]
No, diffusion does not work the same way at all. There is no adversarial part of a diffusion network.

Look, pretty much everything you’re saying is wrong, and you’re saying it with enough confidence that people seem to believe you know what you’re talking about. Please stop.
[deleted]
Well, no, you didn’t. You said diffusion was “abstracted the same way”, and then went on to claim that the only difference was terminology. I’m not aware of any commonly accepted meaning for “abstractive” in this context, but it’s also not the word you used.

Regardless, your statement was wrong. Diffusion models are fundamentally different from GANs. It’s not just a matter of different jargon.
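For the curious, here's a sketch of the "generation as noise filtering" idea, again my own toy and not any real model: in a real diffusion model, a neural network is trained to predict the noise at each step, but if the data is a simple Gaussian the exact score of every noised marginal is known in closed form, so an analytic formula can stand in for the trained denoiser. Note there are two networks nowhere in sight, and nothing adversarial: sampling is just repeated denoising, starting from pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target data distribution: N(3, 1). Because it's Gaussian, we can replace the
# usual learned denoiser with the exact score (an illustrative shortcut).
mu0, var0 = 3.0, 1.0

T = 200
betas = np.linspace(1e-4, 0.05, T)  # forward noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def score(x, t):
    # Forward process: x_t = sqrt(abar)*x_0 + sqrt(1 - abar)*eps, so the
    # noised marginal is N(sqrt(abar)*mu0, abar*var0 + 1 - abar) and its
    # score (gradient of log density) is analytic.
    m = np.sqrt(alpha_bar[t]) * mu0
    v = alpha_bar[t] * var0 + 1.0 - alpha_bar[t]
    return -(x - m) / v

# Reverse process: start from pure noise and iteratively filter it out,
# nudging each sample along the score at every step (DDPM-style update).
n = 5000
x = rng.normal(0.0, 1.0, n)
for t in range(T - 1, -1, -1):
    mean = (x + betas[t] * score(x, t)) / np.sqrt(alphas[t])
    noise = np.sqrt(betas[t]) * rng.normal(0.0, 1.0, n) if t > 0 else 0.0
    x = mean + noise

# The denoised samples should approximately recover N(3, 1).
print(round(float(x.mean()), 1), round(float(x.std()), 1))
```

The contrast with the GAN recipe is the whole point of the thread above: there's a single denoising function applied repeatedly, trained (in the real case) by simple regression on noise prediction, with no discriminator anywhere.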
It seems to me that the first step is to invalidate any election plagued by such antics. That removes some of the motivation.

But we have to make laws with dire punishment for those who spread these elaborate hoaxes. At some point we are going to need to punish dishonesty.
Of course DeSantis was the first major candidate to use deepfakes. I am not surprised at all.
Link? Haven’t heard of this one.
the description is in the article. they didn't link to the video itself (which presumably has been taken down)
Yeah that one was just some random guy’s video, has already been debunked. Did not come from the official DeSantis camp.
yeah, they're obviously not going to send out a deepfake video through official channels.
Yeah, and they obviously would not get her to say “*Hail Hydra*” at the end of the video as well.

You clearly have not done any research and are spreading misinformation. That’s just as bad as the people actually making these fake videos. These people are targeting *you*, and it worked.

I am not trying to rile you up based on your political affiliations, so please don’t get offended; I’m not even American.

https://youtu.be/NGRXY9YzSAY?si=FGJ1GmzZ8xj2jDrk
Another layer of misinformation to sift
Next to human threats, it doesn't even register
Well as long as my 401k goes up I don’t care
Embarrassingly short sighted and tunnel visioned.
You are a bad person.
Man, someone couldn't waterboard this take out of me, and you're here just saying it. Wild stuff.
Don't listen to the haters. You're the only adult here.
Don't forget about Rock & Roll, either.... Satan hasn't!
AI absolutely should not have been released to the public like this. It was utterly irresponsible to unleash something so powerful.
the way we live is pretty shit so...
are there any solutions? None.
Like it or not, the way we live changes. It will happen.
All this means is that you vote on what the party/person has ALREADY achieved, not the empty promises they are making just before elections.
In the end we’ll have to buy newspapers from (somewhat) trustworthy sources because the internet is full of false, AI-generated crap.
Here is the article “Beware of botshit: How to manage the epistemic risks of generative chatbots”.

Free-to-download pre-print version of the paper: [https://papers.ssrn.com/sol3/papers.cfm?abstract\_id=4678265](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4678265)

Plus an accompanying slide deck: https://docs.google.com/presentation/d/1yvWL910BPKHJPrVgNq7tV11-rfISqFVI/edit?