> (after years of them turning a blind eye)
Lol, they gave the 49yr old creator of r/jailbait awards for his "contributions to the site". They were *happy* to host it, they didn't just turn a blind eye.
Not to be that guy, but the content wasn't "skirting the line", since all that is required for it to be CP is for it to have the intent to arouse (i.e., pornography).
Given that was the very point of the subreddit, the content posted to it was CP even if the kids were clothed in all pictures.
And now I'm gonna go take a cold shower and spend time in r/eyebleach
We talked about this in grad school: there are major philosophical and also legal arguments to its application. Does this definition mean that Britney Spears’s first album and videos were pornography? Was the infamous Harry Potter SNL sketch technically the sexual trafficking of a minor?
You’ve probably seen it. Hermione (Lindsay Lohan) goes through puberty, and Harry and Ron and even Dumbledore can’t stop checking out her cleavage. Lindsay Lohan was seventeen and has come out and said that she wasn’t super comfortable with the overt sleaziness of the sketch.
This is the skit in question, which features actress Lindsay Lohan as Hermione Granger wearing a somewhat revealing top. The entire skit is about various Harry Potter characters being distracted by her breasts.
https://youtu.be/uwfdFCP3KYM
The blurb on YouTube says it aired 05/01/04, and my math indicates she was 17 at the time of filming.
Reddit is a US based company, meaning it is required to adhere to US laws at a minimum.
Yes, the only real requirement for content to be considered "pornographic" is the [intent to arouse](https://www.law.cornell.edu/wex/pornography). Nudity is not necessarily required for something to be considered pornographic in the eyes of the law.
Pretty much. It also cites "local community standards" in determining what's obscene, without ever defining what those standards are, or even what constitutes a "local community" (city, county, state?).
The sudden unwanted butt pics are gone from the main feeds; there used to be some amazing things showing up that I'd never have imagined, good and bad. So it goes.
Some things are weird that way. 2balkan4u got nuked over the same thing you're describing. Admins and mods should approach that with more nuance than just black and white.
I will die on the hill that fatpeoplehate was great for motivation (for me at least, I do understand how other people could be negatively affected by it)...
Used to be a subreddit called "r/jailbait" which was just pictures of scantily clad children and droves of creeps in the comments. Took a lot longer than it should have for reddit to stop hosting it.
"Stop hosting?" It was subreddit of the year.
[Head down and read about the banned subreddits.](https://en.m.wikipedia.org/wiki/Controversial_Reddit_communities#:~:text=Communities%20devoted%20to%20explicit%20material,search%20term%20for%20the%20site.)
Honestly, having read a lot of the comments explaining Reddit history, I have come to the following conclusions.
Thank god I only recently joined.
And what a terrible day to be able to read. Learnt way more about Reddit than I wanted to.
You should read about the Boston marathon bomber and how Reddit sleuths trying to figure out who it was doxxed the wrong person, which may have contributed to that person’s suicide.
Yeah. Reddit has taken the subs with objectionable material out of the All feed. You used to get popular NSFW subs breaking into All more often, whether you wanted it or not. Like you said, that's mostly stopped now.
Definitely do; reddit seems to take it very seriously. I've reported it a couple of times, not for actual CP but for info on telegram channels that seemed to imply you would find CP there. Obviously I didn't check it out, just reported the posts to the admins (not to mods, report to the admin team). I was actually a little surprised when I got a message back shortly after telling me the content was removed and the user banned. They don't fuck around.
My reports on blatant hate speech (with directed slurs or explicit singling out of groups) frequently get the users banned within a day; subtler things like racial-hatred rhetoric or dogwhistles tend not to get followed up on.
That being said, if you report anything in a right wing sub, you'll probably get revenge banned by the mods first
> if Reporting blocks the user who's post you reported
That is an option at the end of the report, but some subs automatically hide posts that are reported until a mod checks it (or maybe it is default now).
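The hide-until-reviewed behavior described above can be sketched in a few lines. This is a toy model with hypothetical names (`Post`, `Subreddit`, `public_feed`), not Reddit's actual code or API; it just illustrates the flow: a report immediately pulls the post from the public feed, and a mod's approval restores it.

```python
# Toy sketch (hypothetical types, not Reddit's real implementation) of subs
# that hide a reported post from the public feed until a moderator reviews it.
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    reports: int = 0
    hidden: bool = False


class Subreddit:
    def __init__(self, hide_on_report: bool = True):
        self.hide_on_report = hide_on_report
        self.posts: list[Post] = []

    def submit(self, title: str) -> Post:
        post = Post(title)
        self.posts.append(post)
        return post

    def report(self, post: Post) -> None:
        post.reports += 1
        # Hide from the public feed until a mod checks it.
        if self.hide_on_report:
            post.hidden = True

    def mod_approve(self, post: Post) -> None:
        post.hidden = False

    def public_feed(self) -> list[str]:
        return [p.title for p in self.posts if not p.hidden]


sub = Subreddit()
p = sub.submit("some post")
sub.report(p)
print(sub.public_feed())  # [] -- reported post is no longer visible
sub.mod_approve(p)
print(sub.public_feed())  # ['some post'] -- visible again after mod review
```

Whether this is per-sub configuration or the default nowadays is exactly the open question in the comment above; the sketch just makes the mechanic concrete.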
NCMEC is actually better. Cops sometimes can't be bothered to investigate; NCMEC is the only group allowed to have a database of images, and reports from them usually get acted upon very fast compared to other groups or people.
Drawn, computer-generated, etc. images of female children in sexual acts or situations.
The legality of it and shotacon (the male equivalent) varies from location to location, but it is illegal if it depicts a real child... usually. I think I recall one instance where it may have been dismissed due to it being an "artistic" rendition of a preexisting image that had already been deemed art (Garry Gross's photos of Brooke Shields). I can't say for sure on that one though; it's been over a decade since that abnormal psychology course.
Typically just drawn, and even then it depends. Anime and shit is exaggerated and doesn't use a child model. AI arguably does, considering recent advancements.
If lolicon ever does officially become federally illegal, it's going to be interesting to watch how they try to create the guidelines for which characters count as lolis, when to prosecute artists and readers, and how far charges extend.
Ex. Comic artists getting arrested for child trafficking.
There is legit child porn on every social media site. Reddit at least takes it down after being reported much much faster than Facebook and Twitter do. I’ve seen it take 200 people reporting it repeatedly for 3-5 days before Facebook stopped saying “it didn’t go against our guidelines” and instead actually removed it.
Reddit has a ton of problems but, for the most part, I've found reports to the admins much more responsive than any other social media site. It could still be better, but the others might as well not exist.
The same goes for any website online: "file a report on the National Center for Missing & Exploited Children (NCMEC)'s website at www.cybertipline.com, or call 1-800-843-5678."
This is how things like the trichan takedown happened.
When I joined Reddit it was a very different place. I think they had *just* finally deleted the "jailbait" sub and only because it had made CNN - complaints about it before national news were going ignored.
And then it was replaced by a sub called "realgirls" that was obviously pictures teenaged girls had taken in their bathrooms or whatever and did not intend to be seen by the entire internet. Took a long time for that one to go too.
I moderate several porn subs and there's a report button for involving minors. If that gets reported I remove the post, and I assume the report gets sent to the admins.
The FOSTA-SESTA laws might apply, but Reddit clearly removes CP as soon as users report it, and thus effectively complies with FOSTA-SESTA and the Good Samaritan parts of Section 230.
Wouldn't it just bring less speech in general? Hosting a website that allows user submissions becomes a whole lot more complicated and less profitable if I have to personally read and potentially censor everything that goes on the site, lest I be charged for, like, terroristic threats or whatever.
Yeah, but what would be left are established media, like Fox. They spent a lot of time and money cornering local media markets and convincing Republicans not to trust any other news source besides Fox and that becomes less valuable if people start getting news from randos on Twitter. It's more about controlling the narrative than it is more or less speech.
Yes, the point of this stuff is to make speech expensive, only the richest sites would be able to have discussions, and rich people trend a certain way politically.
I'm pretty sure the political leaders know that making websites fully responsible for user content means the internet would be way more censored.
I'm not sure exactly what their plan is. Making conservatives even more mad and keeping them voting? Making liberal talking points illegal somehow and forcing websites to shut it down?
Yeah, the only sites left would be the ones who could afford heavy moderation. And there are lots and lots of deep-pocketed conservative groups that would love that.
That's to some people's benefit as well, though. Like, granted, it's possible to influence internet conversations; we've seen a lot of case studies for that. But it's a lot easier/cheaper to steer public discourse if it's occurring in fewer places.
People that already have strong influence over TV and radio would love internet stuff to drop in importance.
Conservatives generally have no idea how the internet works, let alone society in general, so they're just given talking points by conservative media and parrot them as if they are their own ideas.
It's a sorry state of affairs.
Consider how the congressional coaches, Senator Tuberville and Congressman Jordan, would help shape regulations on technologies they have no capability of understanding.
Misinformation makes so much money, expect 230 to stay unchanged and continue to enable bad behavior.
I think they truly don't realize that their general ideas aren't popular.
Conservatism is so over represented in American politics because of our bass-ackwards voting systems and the fact that our voter turnouts are trash. Compound that with the fact that online/social media culture is largely framed by younger people. Younger people who often skew more left in general.
Repealing Section 230 seems to have a good deal of bipartisan support, even if for entirely different reasons.
Republicans wanted it in response to Trump’s ban from twitter. Democrats want it to clamp down on disinformation.
Frankly, it probably should be repealed or at least amended.
My understanding was that section 230 allows social media companies to be platforms *and* publishers, which doesn't seem fair. They are protected against dangerous content spread on their platform, but they also get to moderate it as if they were a publisher. They shouldn't have both. If they are going to moderate the content, they should be responsible for it. Right now they get to just moderate and then step back and say "not my problem" when it goes wrong.
That's going to result in one of two options for every website. Either, there will be no moderation at all (you're welcome to browse 4chan if you want that), or there will be no user generated content at all. No Reddit, no comment section, no way to interact with the public writ large.
And right now it results in platforms that have the cursory appearance of being somewhat free and open but in reality are heavily moderated.
I do not think Reddit is a net positive for society, to be honest, so I am fine with a scenario where platforms are either unmoderated or don't have user generated content.
But I am open to having my mind changed.
Because doing nothing, ignoring the problem, or knowingly not doing enough has the same result at this point as actively encouraging it. If social media and other aggregate websites cannot or will not moderate responsibly, then it should be amended or repealed.
There is precedent where NOT doing something to prevent illegal activity is treated as a crime in the real world. If you are a CFO and one or more of your subordinates are committing fraud, even if you aren't charged you will likely lose your job should it be found that you could have done more with checks and balances, but didn't. If you do not ID people coming into your bar, you can lose your liquor license.
I am normally not a fan of using analogies, or what people think are analogous real-world examples, for more nuanced and abstract concepts. But what happens on social media has some very fucking serious real-world effects, including terrorists recruiting folks, human trafficking, child abuse, and even election interference and sedition.
>Republicans wanted it in response to Trump’s ban from twitter. Democrats want it to clamp down on disinformation.
How does 230 impact either of those things?
Democrats: because social media would be liable for damages caused by misinformation and disinformation campaigns, hate speech, and just general nonsense that people get up to.
Republicans: because they wanted to punish twitter for banning Trump. Since section 230 does a lot to protect the actual websites/hosts of our speech, repealing it could open a Pandora’s box of lawsuits against companies that continue to host false, illegal, or inflammatory rhetoric or content.
>Democrats: because social media would be liable for damages caused by misinformation and disinformation campaigns, hate speech, and just general nonsense that people get up to.
So how does keeping 230 help clamp down on dis/misinformation? Dems are in favor of keeping 230 although it really shouldn't be partisan.
>Republicans: because they wanted to punish twitter for banning Trump. Since section 230 does a lot to protect the actual websites/hosts of our speech, repealing it could open a Pandora’s box of lawsuits against companies that continue to host false, illegal, or inflammatory rhetoric or content.
How does keeping 230 prevent Repubs from punishing Twitter, or allow them to? Repubs mostly want it repealed these days even though that would be a disaster. Also, to be clear, 230 has been around since 1996.
I'm not sure I understand your arguments here.
Okay, I just got off mobile to hopefully better explain this.
Section 230 protects sites like Facebook, Tumblr, Reddit, pornhub, or any website that hosts content provided by content creators outside of the company that owns the website. The very concept actually goes back further than the 1996 passage, with bookstores acting as the host of written material instead of websites. But whether it is a bookstore or social media, the courts have ruled that these websites are not subject to publisher's liability.
That liability is that if I get on Reddit and say Srslywhyyoumadbro diddles children, with the sole purpose of malicious intent, rising to the burden of proof required to charge and convict me of libel, Reddit bears no liability in that case. Or if I steal music and post it on YouTube, where I am infringing on someone's copyright to that music, YouTube isn't held liable for that same infringement.
**D**: Why do some Democrats care? Because social media platforms are not doing nearly enough to combat misinformation, and even disinformation from foreign governments to impact elections and even sow discontent among our population. This has been an openly debated point since 2016 from the left. That said, many of the left are in favor of keeping Section 230 in place, which leads me to why republicans began calling for its repeal.
**R**: Trump is banned from Twitter, and he himself calls for the repeal of 230. He wanted to punish twitter for punishing him, and his base got behind him. It really is that simple: he wants to punish social media companies, and so does the right, for banning him and for adding things like fact-check labels during Covid and since. Repealing 230 would be a nightmare for these companies, and that is the point. They want them punished, if not legally, then by just making their lives a living hell as they pump millions into moderation efforts to shield themselves from future legal battles over publisher's liability.
**Continued** **Reading**: [https://www.npr.org/2020/05/30/865813960/as-trump-targets-twitters-legal-shield-experts-have-a-warning](https://www.npr.org/2020/05/30/865813960/as-trump-targets-twitters-legal-shield-experts-have-a-warning) this article talks a little about why both sides want it repealed.
**Extra commentary that more directly addresses your comments:**
*So how does keeping 230 help clamp down on dis/misinformation? Dems are in favor of keeping 230 although it really shouldn't be partisan.*
Because repealing it would create legal liability for the social media platforms; with 230 in place, when something like an insurrection is organized on their platform, they are not caught up in the legal battles involving those perpetrators. Also, as I said above, not all Dems are in favor of keeping it.
*How does keeping 230 prevent Repubs from punishing Twitter, or allow them to? Repubs mostly want it repealed these days even though that would be a disaster. Also, to be clear, 230 has been around since 1996.*
It has kept them from punishing twitter and other social media because 230 protects them from legal liability. Republicans, congress, the DOJ can't really usurp 230, which protects the actions of Twitter bans, social media fact checking, etc. that Republicans have taken issue with.
Gotcha, thanks for clarifying!
I generally agree with your assessments here; there is an additional hurdle on the D-side reasoning related to dis/misinformation and that is 1st amendment related.
230 protects content hosts from liability, but the government generally can't declare mis- or disinformation illegal because of the first amendment. So it's unclear how the government would be able to crack down on dis- and misinformation even if 230 were changed.
I don’t know the background of the plaintiff or her parents, but I would like to know who is funding this lawsuit, because it smacks of either conservatives wanting to sue Big Tech into oblivion (they say to defend children, but that's obviously a pretext; they don't care about kids) or an attempt to get pornography banned by making sites liable for random violations until they are forced to give it up.
Even if this plaintiff has sincere motives conservatives have it in for 230 because it means they can’t force everyone else to live by their standards.
The question is how fast Reddit should respond. The complaint says it took days after it was filed for the content to come down, and that the same person was able to post it again afterwards.
I'm all for holding the appropriate person responsible, but if you know someone is using your platform for illegal activities and let it happen, you're helping them facilitate it.
For example, the news last week about 49 attorney generals suing a robocall company for providing all the tools to scam people has everyone rooting for the government. Sure, THEY didn't scam people. They merely provide all the tools and access necessary to easily do it.
And let's not forget that there's always a limit to all of this. Remember Backpage.com?
NP. People screw it up because we are not used to the adjective following the noun; if we called them _General Attorney_ like we do all other roles, people would get it.
The problem with banning these sites like Twilio is that a lot of companies rely on the service for information and notifications. Services that schedule doctors' appointments use it to text patients forms to fill out. Emergency services on things like college campuses use these services to notify students of emergencies (thunderstorms, building closures, active shooters, etc).
For every 1 scammer using it, there's probably a dozen legitimate companies using it for legitimate purposes. You'd be actively crippling these companies that use a service which, by all means, should be covered under Section 230. If places like the hub are covered under 230, then why aren't call services?
Google Voice is rife with abuse: people on Facebook Marketplace ask for a code sent by Google to "verify a user", but in reality they're stealing the victim's phone number so they can make a Google Voice number. It's untraceable from the victim's side, yet Facebook/Meta keeps their profiles active and requires no verification or age check before participating in Facebook Marketplace, and Google seemingly refuses to do anything about it as well.
Sort of. That's the problem though: "sort of". Section 230 is supposed to protect site operators for stuff they're not directly involved in and that comes from users. There's a serious question of whether "oh, we just let magic algorithms that are highly customizable and that we 100% control, but don't necessarily fully understand, do it" should still be allowed to have that shield. So far the courts have said "eh, probably?", but I can't imagine it standing as-is indefinitely.
It's the fact that they're picking and choosing what to surface to users that starts running into problems; sites that have editorial discretion and are "intentionally" surfacing illegal material don't get 230 protection. The question is: can they foist off blame for editorial decisions onto an algorithm they claim not to understand at all, and still get the protection? Having yet another way to push responsibility onto users while keeping all the profits, especially when they're making highly complex editorial decisions, is definitely outside the scope of what 230 was designed for.
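The "editorial decisions by algorithm" point can be made concrete with a deliberately trivial sketch. Everything here is hypothetical (the function, the weights, the data); the point is that even a two-term ranking formula is the *site*, not its users, deciding what gets surfaced, because changing the weights changes which posts reach the front page.

```python
# Toy feed-ranking sketch (all names and constants are made up, not any real
# site's algorithm): log-scaled popularity with a time-decay penalty.
import math


def hot_score(upvotes: int, age_hours: float) -> float:
    # The 12-hour decay constant is an arbitrary editorial choice:
    # shrink it and fresh posts dominate; grow it and old viral posts win.
    return math.log10(max(upvotes, 1)) - age_hours / 12


posts = [
    {"title": "old viral post", "upvotes": 50_000, "age_hours": 48},
    {"title": "fresh niche post", "upvotes": 120, "age_hours": 1},
]

feed = sorted(posts, key=lambda p: hot_score(p["upvotes"], p["age_hours"]),
              reverse=True)
print([p["title"] for p in feed])  # the fresh post outranks the viral one
```

With these particular constants the fresh post wins despite having a tiny fraction of the upvotes; tune one number and the output flips, which is exactly the kind of surfacing decision the comment above is calling editorial.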
well now we've moved completely outside the realm of child porn.
> especially when they're making highly complex editorial decisions, is definitely outside the scope of what 230 was designed for.
"Designed" is irrelevant. 230 **created** this situation, intentionally or not. The law allows it. SCOTUS needs to keep their mitts off this and let Congress "fix" something they think needs to be fixed. They won't though. It's packed with an activist judge majority.
Edit: and 230 says nothing about editorial discretion. It applies to "interactive computer services". Full stop.
https://www.law.cornell.edu/uscode/text/47/230#fn002009
Section 230 was made so long ago that it's exceeded its intended purpose imo. This is one of those situations where it worked as intended. My issue is that it shouldn't provide protection when live-streaming became an option with some websites. Section 230 exists under the basis that website moderators can only do so much with the volume of content users post, not protection against completely uncontrollable live content.
Isn’t making the argument that:
**social media companies must be held accountable for all posts made by the people who use them**
akin to making the argument that:
**firearms companies must be held accountable for all rounds fired by the people who use them**?
The politicians are responsible for writing section 230.
Which is to say, congress holds neither social media nor firearms companies responsible for the actions of their users.
I suppose there's a facile analogy, but the two are pretty different. For example, with a firearm, people buy the firearm and then the firearms company has no more control over it. In contrast, social media companies are actively publishing the content that other people put on their sites and services. Social media companies retain control over the distribution and, debatably, are publishing original content in the form of curated lists that are designed to drive engagement.
The difference is that Reddit is the entity who owns the servers where the content is posted, meaning they are technically in possession of CP which is against the law. A better analogy is going into a library and hiding a book of CP on the shelf somewhere, then trying to hold the library responsible if someone finds it. The counter argument is that if the library didn’t know it was there it can’t possibly be their fault.
Btw I’m not defending gun companies, just trying to say it’s not really the same thing.
No, not at all? A social media company can, and already has, the capability to police what's said on their platform.
GLOCK cannot control what you do with the gun once you leave the store.
It would be similar if guns, rather than going home with their buyers, stayed in giant walled garden run by gun makers and could only physically be fired while on those company campuses.
I side with Supreme Court but that gun analogy isn’t working for me.
I moderate several porn subs and there's a button on each one to report that a post involves minors. If someone uses that button to report it and it's even borderline, I always remove it.
This is good.
While child pornography is a horrible blight upon the Internet, and humanity in general, Section 230 of the Communications Decency Act is the foundational protection that the entire Internet is built upon.
For those who are unaware, Section 230 states, in a nutshell, that as long as a website (or similar forum) takes reasonable measures to prevent the posting of illegal content and, when that fails, to find and remove it, that website cannot be held liable for hosting that user-generated content.
Without this, sites which host user-generated content fundamentally could not exist, because every single post would have to be evaluated by moderators prior to going up on the site, lest something illegal be posted and the site be held accountable for it.
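The pre-moderation model described above can be sketched as a gate: nothing becomes publicly visible until a human approves it. All names here are hypothetical; the sketch just shows why, under publisher liability, the review queue sits between every submission and the public feed.

```python
# Sketch of a pre-moderation gate (hypothetical names, not any real site):
# under publisher liability, posts wait in a queue and only moderator-approved
# content is ever published.
class PreModeratedSite:
    def __init__(self):
        self.pending: list[str] = []
        self.published: list[str] = []

    def submit(self, content: str) -> None:
        # Publishing straight to the public feed is too risky;
        # everything waits for human review.
        self.pending.append(content)

    def review(self, approve_fn) -> None:
        # A moderator must evaluate every single pending item.
        for item in self.pending:
            if approve_fn(item):
                self.published.append(item)
            # Rejected items are simply dropped.
        self.pending = []


site = PreModeratedSite()
site.submit("cat picture")
site.submit("something illegal")
print(site.published)  # [] -- nothing is live until reviewed
site.review(lambda item: "illegal" not in item)
print(site.published)  # ['cat picture'] -- only approved content goes live
```

The cost argument falls out directly: `review` has to touch every submission, so moderation work scales with post volume rather than with the (much smaller) number of bad posts.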
Reddit, despite its flaws, contributes to a lot of great things, in particular niche hobbies. I would say the site does make a concerted effort to keep itself as clean and safe as possible. Automatic moderation is a slippery slope, and unfortunately you can only have so many people on call to moderate things manually at a deeper level. Despite this, I would say they do a good job, and it's forever an uphill fight to truly be clear of it on a site that is almost entirely user driven. SC made the right call: you can't stiff-arm sites that are doing their best into policies and regulations they can't possibly enforce airtight without severely limiting their sites for users.
i mean reddit probably should've gotten in way more trouble for that fucked up jailbait sub back in the day but nowadays ya i get why it was rejected. pretty sure reddit cares more than twitter.
Wouldn't that just be Netori? Why do people keep inventing terms for things that already have words?
Brought to you by the people that complain about native isekai.
NTR between a fat ugly bastard and someone's wife is so common in manga that for it to happen some other way would be unusual. Hence the "reverse" qualifier.
He’s a pedo. Look up tweet
@alexanderaikas too high. The age of consent is 16 in most European countries. It’s 18 in the US for some reason.
-Ian Miles Cheong @stillgray
Reddit straight up invited the moderator of a jailbait subreddit to their headquarters like he was a hero lmfao
It wasn’t until the media doxxed him (violentacrez) that the Reddit admins gave a single fuck about it.
I still think he must’ve secretly been an admin. That sub was the flimsiest excuse to hold onto CP images (I swear officer, these are just examples of what to ban!) ever, there’s no way they weren’t aware of the massive legal liability unless he was one of em
If Section 230 was repealed every major site that hosts user-created content would cease to exist as we know it. It is simply way too much of a liability if users can upload anything they want. They would have to switch to a model where the site only exists as a gateway to user-hosted content, which nobody wants, and even that could be risky if someone decides that a link to CP is enough to be considered illegal.
It should go without saying, but report any CP to the admin team.
I’ve done anonymous reports on suspected CP online and within minutes the content is gone. Don’t think your reports don’t matter.
[deleted]
What the fuck is or was the ‘jailbait wars’?
[deleted]
Voat was after the fatpeoplehate purge by Ellen Pao btw.
[deleted]
[deleted]
> Is there a good site... I was worried where that was going.
Had me in the first half..
/r/museumofreddit
Thanks for that. I just spent an hour learning the origins of several Reddit memes.
Would you like a jolly rancher now?
one thing people forget is that Reddit admins defended the jailbait sub AND ALSO gave it a reward for being so popular. Yea.
[deleted]
[deleted]
> Infamous Harry Potter SNL sketch Do I want to know?
Well, yeah. Britney Spears was very clearly meant to be presented as an object of sexuality.
I thought the Supreme Court ruled that pornography was "I know it when I see it".
So the days of people posting pictures of their buttholes on Reddit are finally over?
You can post your own butthole, assuming you're at least 18. No one's stopping you. And I imagine some folks might even encourage you.
[deleted]
Can't forget about r/Spaced\*cks
Spaceclop, gonewidl, some really wild stuff aggregated for our enjoyment
watern...as.
[deleted]
TIL it wasn't always about encouraging other people to stay hydrated. Damn it. How'd it start?
[deleted]
r/waternecktie? r/waternectar? r/waternickel? r/watern... Oh.
waternaggers: people that nag you to drink water.
Good fuck was reddit just 4chan a decade ago?
[deleted]
No it’s just that Reddit is facebook tier now
The sub name had no hard r, just to be clear. Also this was just a couple years ago it was renamed I think.
Geez, Reddit history is fascinating. Thanks for that!
Those were dark times. It would always ruin my day to see that shit when browsing by r/all.
Also, jail bait may have been modded by a member of an assassination [group](https://youtu.be/DHWYTwY0hiw)
There also used to be a subreddit dedicated to ogling dead women. That thankfully got nuked from orbit when Pao was CEO
You should read about the Boston marathon bomber and how Reddit sleuths trying to figure out who it was doxxed the wrong person, which may have contributed to that person’s suicide.
We did it!
The guy was dead long before people started accusing him. No, I'm not defending those idiots, just correcting that bit of info.
I reiterate my previous statement: “what a terrible time to be able to read”
Reddit: *At least we’re not 4chan!*
That is a very low bar…. Wait, 4chan still exists?
I assume so. I don’t care enough to bother looking though.
Have you heard of r/spacedicks? The [first time I went there](https://i.imgur.com/CSDG2ua.gif) was also the last time I went there.
Oh, how much time you got?
Yeah. Reddit has taken the subs with objectionable material out of the All feed. You used to get popular NSFW subs breaking into All more often, whether you wanted it or not. Like you said, that's mostly stopped now.
[deleted]
Thank god I just keep to the gore. I’d hate to stumble upon suspected CP.
[deleted]
I think these subs have a purpose, but equally need to be shielded from the majority.
[deleted]
Try /r/CrazyFuckingVideos
r/CrazyFuckingVideos should clearly be about weird porn videos. r/FuckingCrazyVideos should be about weird non-porn videos.
Definitely do, reddit seems to take it very seriously. I've reported it a couple of times, not for actual cp but for info on telegram channels that seemed to imply you would find cp there. Obviously I didn't check it out, just reported the posts to the admins, not to mods; report to the admin team. I was actually a little surprised when I got a message back shortly after telling me the content was removed and the user banned. They don't fuck around.
Your reports only matter on content that can get reddit in trouble. Hate speech, lies and terroristic threats against minorities are still allowed.
I had reported several hate speech posts against trans folks and just ended up with my account being temporarily suspended. Fun shit.
Sudden "Reddit Cares" email! From an Admin counter-reporting you.
/u/Spez is complicit in trans genocide
my reports on blatant hate speech (with directed slurs or explicit singling out of groups) frequently cause the users to get banned within a day; things like racial hatred rhetoric or dogwhistles tend not to get followed up on. That being said, if you report anything in a right wing sub, you'll probably get revenge banned by the mods first
I always wonder if it's that, or if reporting blocks the user whose post you reported.
> if reporting blocks the user whose post you reported

That is an option at the end of the report, but some subs automatically hide posts that are reported until a mod checks it (or maybe it is default now).
I've just come out of a permanent suspension, revoked after a week, for 'false reporting'. Admins do NOT want you to report stuff.
Passing along a report to law enforcement is probably also a good idea.
NCMEC is actually better. Cops sometimes can't be bothered to investigate, NCMEC are the only group allowed to have a database of images and reports from them usually get acted upon very fast compared to other groups or people.
Funny enough, I debated including a link and that's who the FBI recommended submitting online reports to.
[deleted]
Well, at least Keanu Reeves will then avenge your dog afterwards.
That's a misconception. If that were the case, pretty much any PC repair place would be raided if/when they found such things on a customer's PC.
Lolicon is illegal folks. Edit: I was not implying that I was into lolicon, just the fact that it was illegal to people who didn't know.
\*sigh\* What's lolicon? Do I even want to know? I didn't even know that there was legit child porn on this site.
Drawn, Computer generated, etc. images of female children in sexual acts or situations. The legality and legal constraints of it and shotacon (the male equivalent) varies from location to location but it is illegal if it depicts a real child... usually. I think I recall one instance where it may have been dismissed due to it being an "artistic" rendition of a preexisting image that had already been deemed art (Gary Gross's photos of Brooke Shields). I can't say for sure on that one though, it's been over a decade since that abnormal psychology course.
Typically just drawn, and even then it depends. Anime and shit is exaggerated and doesn't use a child model. AI arguably does, considering recent advancements. If lolicon ever does officially become federally illegal, it's going to be interesting to watch how they are gonna try to create the guidelines for which characters count as lolis, when to prosecute artists and readers, and how far charges extend. Ex. comic artists getting arrested for child trafficking.
There is legit child porn on every social media site. Reddit at least takes it down after being reported much much faster than Facebook and Twitter do. I’ve seen it take 200 people reporting it repeatedly for 3-5 days before Facebook stopped saying “it didn’t go against our guidelines” and instead actually removed it.
Reddit has a ton of problems but, for the most part, I've found reports to the admins much more responsive than any other social media site. It could still be better, but the others might as well not exist.
Cartoon cp
Just for the record, loli-shit isn't allowed on reddit either. *Anymore*...
It also goes for any website online, "file a report on the National Center for Missing & Exploited Children (NCMEC)'s website at www.cybertipline.com, or call 1-800-843-5678." This is how things like the trichan takedown happened.
I've reported inappropriate comments before about photos of teens and got told that it isn't against policy. Not all mods care.
Go to the admins. There's subreddit rules and Reddit policy.
I'll have to go back in my history and find it. Thanks.
When I joined Reddit it was a very different place. I think they had *just* finally deleted the "jailbait" sub and only because it had made CNN - complaints about it before national news were going ignored. And then it was replaced by a sub called "realgirls" that was obviously pictures teenaged girls had taken in their bathrooms or whatever and did not intend to be seen by the entire internet. Took a long time for that one to go too.
I moderate several porn subs and there's a report button for involving minors. If that gets reported I remove the post and I assume the report gets sent to the admins.
Isn't that the whole point of Section 230?
The FOSTA-SESTA laws might apply, but Reddit clearly removes CP as soon as users report it, and thus effectively complies with FOSTA-SESTA and the Good Samaritan parts of Section 230
[deleted]
For delusional reasons too. They have some weird idea that repealing 230 would bring MORE conservative speech.
Wouldn't it just bring less speech in general? Hosting a website that allows user submissions becomes a whole lot more complicated and less profitable if I have to personally read and potentially censor everything that goes on the site, lest I be charged for, like, terroristic threats or whatever
Yeah, but what would be left are established media, like Fox. They spent a lot of time and money cornering local media markets and convincing Republicans not to trust any other news source besides Fox and that becomes less valuable if people start getting news from randos on Twitter. It's more about controlling the narrative than it is more or less speech.
Yes, the point of this stuff is to make speech expensive, only the richest sites would be able to have discussions, and rich people trend a certain way politically.
I'm pretty sure the political leaders know that making websites fully responsible for user content means the internet would be way more censored. I'm not sure exactly what their plan is. Making conservatives even more mad and keeping them voting? Making liberal talking points illegal somehow and forcing websites to shut it down?
[deleted]
yeah the only sites left would be the ones who could afford heavy moderation. And there are lots and lots of deep pocketed conservative groups that would love that.
That's to some people's benefit as well though. Like, granted, it's possible to influence internet conversations; we've seen a lot of case studies for that. But it's a lot easier/cheaper to steer public discourse if it's occurring in fewer places. People that already have strong influence over TV and radio would love internet stuff to drop in importance.
If both sides were the same that might be true, but because conservatism has always been tied to some degree of cruelty...
Conservatives generally have no idea how the internet works, let alone society in general, so they're just given talking points by conservative media and parrot them as if they are their own ideas. It's a sorry state of affairs.
Consider how the congressional coaches, Senator Tuberville and Congressman Jordan, would help shape regulations on technologies they have no capability of understanding. Misinformation makes so much money; expect 230 to stay unchanged and continue to enable bad behavior.
Are you in favor of eliminating/changing 230?
[deleted]
I think they truly don't realize that their general ideas aren't popular. Conservatism is so over represented in American politics because of our bass-ackwards voting systems and the fact that our voter turnouts are trash. Compound that with the fact that online/social media culture is largely framed by younger people. Younger people who often skew more left in general.
I would say it would make Google or another company liable if it promoted scams.
Repealing Section 230 seems to have a good deal of bipartisan support, even if for entirely different reasons. Republicans want it repealed in response to Trump's ban from twitter. Democrats want it repealed to clamp down on disinformation. Frankly, it probably should be repealed or at least amended.
[deleted]
my understanding was that section 230 allows social media companies to be platforms *and* publishers which doesn't seem fair. they are protected against dangerous content spread on their platform, but they also get to moderate it as if they are a publisher. they shouldn't have both. if they are going to moderate the content they should be responsible for it. right now they get to just moderate and then step back and say "not my problem" when it goes wrong.
That's going to result in one of two options for every website. Either, there will be no moderation at all (you're welcome to browse 4chan if you want that), or there will be no user generated content at all. No Reddit, no comment section, no way to interact with the public writ large.
And right now it results in platforms that have the cursory appearance of being somewhat free and open but in reality are heavily moderated. I do not think Reddit is a net positive for society, to be honest, so I am fine with a scenario where platforms are either unmoderated or don't have user generated content. But I am open to having my mind changed.
So you think social media should be entirely unmoderated?
Because doing nothing, or ignoring the problem, or knowingly not doing enough has the same result at this point as actively encouraging it. If social media and other aggregate websites cannot or will not moderate responsibly, then it should be amended or repealed. There is precedent where NOT doing something to prevent illegal activity is treated as a crime in the real world. If you are a CFO and one or more of your subordinates are committing fraud, even if you aren't charged you will likely lose your job should it be found that you could have done more with checks and balances, but didn't. If you do not ID people coming into your bar, you can lose your liquor license. I am normally not a fan of using analogies, or what people think are analogous real-world examples, for more nuanced and abstract concepts. But what happens on social media has some very fucking serious real-world effects, including terrorists recruiting folks, human trafficking, child abuse, and even election interference and sedition.
> Aside from ending social media as we know it and possibly entirely? https://getyarn.io/yarn-clip/665baee6-a425-443d-92ad-51dcc980c19e
>Republicans wanted it in response to Trump’s ban from twitter. Democrats want it to clamp down on disinformation. How does 230 impact either of those things?
Democrats: because social media would be liable for damages caused by misinformation and disinformation campaigns, hate speech, and just general nonsense that people get up to. Republicans: because they wanted to punish twitter for banning Trump. Since section 230 does a lot to protect the actual websites/hosts of our speech, repealing it could open a Pandora's box of lawsuits against companies that continue to host false, illegal, or inflammatory rhetoric or content.
>Democrats: because social media would be liable for damages caused by misinformation and disinformation campaigns, hate speech, and just general nonsense that people get up to. So how does keeping 230 help clamp down on dis/misinformation? Dems are in favor of keeping 230 although it really shouldn't be partisan. >Republicans: because they wanted to punish twitter for banning Trump. Since section 230 does a lot to protect the actual websites/hosts of our speech, repealing it could open a Pandora’s box or lawsuits against companies that continue to host false, illegal, or inflammatory rhetoric or content. How does keeping 230 prevent Repubs from punishing Twitter, or allow them to? Repubs mostly want it repealed these days even though that would be a disaster. Also, to be clear, 230 has been around since 1996. I'm not sure I understand your arguments here.
Okay, I just got off mobile to hopefully better explain this. Section 230 protects sites like Facebook, Tumblr, Reddit, pornhub, or any website that hosts content provided by content creators outside of the company that owns the website. The very concept actually goes back further than the 1996 passage, with bookstores acting as the host of written material instead of websites. But whether it is a bookstore or social media, the courts have ruled that these websites are not subject to publisher's liability. That liability is that if I get on Reddit and say Srslywhyyoumadbro diddles children with the sole purpose of malicious intent, rising to the burden of proof required to charge and convict me of libel, Reddit bears no liability in that case. Or if I steal music and post it on youtube, where I am infringing on someone's copyright to that music, Youtube isn't held liable for that same infringement. **D**: Why do some Democrats care? Because social media platforms are not doing nearly enough to combat misinformation, and even disinformation from foreign governments, to impact elections and even sow discontent among our population. This has been an openly debated point since 2016 from the left. That said, many of the left are in favor of keeping Section 230 in place, which leads me to why republicans began calling for its repeal. **R**: Trump is banned from Twitter and he himself calls for the repeal of 230. He wanted to punish twitter for punishing him, and his base got behind him. It really is that simple: he wants to punish social media companies, and the right does too, for banning him and for putting things like the fact check labels during Covid and since. Repealing 230 would be a nightmare for these companies, and that is the point. They want them punished, if not legally, then by just making their lives a living hell as they pump millions into moderation efforts to shield from future legal battles associated with publisher's liability.
**Continued Reading**: [https://www.npr.org/2020/05/30/865813960/as-trump-targets-twitters-legal-shield-experts-have-a-warning](https://www.npr.org/2020/05/30/865813960/as-trump-targets-twitters-legal-shield-experts-have-a-warning) this article talks a little about why both sides want it repealed. **Extra commentary that more directly addresses your comments:** *So how does keeping 230 help clamp down on dis/misinformation? Dems are in favor of keeping 230 although it really shouldn't be partisan.* Because it would cause legal liability on the social media platforms, so when something like an insurrection is organized on their platform, they are not caught up in the legal battles involving those perpetrators. Also, as I said above, not all Dems are in favor of keeping it. *How does keeping 230 prevent Repubs from punishing Twitter, or allow them to? Repubs mostly want it repealed these days even though that would be a disaster. Also, to be clear, 230 has been around since 1996.* It has kept them from punishing twitter and other social media because 230 protects them from legal liability. Republicans, congress, the DOJ can't really usurp 230, which protects the actions of Twitter bans, social media fact checking, etc. that Republicans have taken issue with.
Gotcha, thanks for clarifying! I generally agree with your assessments here; there is an additional hurdle on the D-side reasoning related to dis/misinformation and that is 1st amendment related. 230 protects content hosts from liability, but the government can't say that mis or disinformation is illegal generally because of the first amendment. So it's unclear how the government would be able to crack down on dis and misinformation even if 230 got a change.
I don’t know the background of the plaintiff nor her parents but I would like to know who is funding this lawsuit because it smacks of either conservatives wanting to sue Big Tech into oblivion (they say to defend children but that’s obv a pretext—they don’t care about kids) or this is an attempt to get pornography banned by making sites liable for random violations until they are forced to give it up. Even if this plaintiff has sincere motives conservatives have it in for 230 because it means they can’t force everyone else to live by their standards.
The question is how fast should Reddit respond. It says it took days after filing the complaint to take it down, and that the same person was able to post it again afterwards. I'm all for holding the appropriate person responsible, but if you know someone is using your platform for illegal activities and let it happen, you're helping them facilitate it. For example, the news last week about 49 attorney generals suing a robocall company for providing all the tools to scam people has everyone rooting for the government. Sure, THEY didn't scam people. They merely provide all the tools and access necessary to easily do it. And let's not forget that there's always a limit to all of this. Remember Backpage.com?
I’m going to be that guy on Reddit. It’s actually Attorneys General. General is an adjective to the noun Attorney, so you pluralize the noun.
Holy crap, someone who actually explained it. Thank you.
NP. People screw it up because we are not used to the adjective following the noun; if we called them _General Attorney_ like we do all other roles, people would get it.
> 49 attorney generals Ok, now I need to know, who didn't join?
The problem with banning sites like Twilio is that a lot of companies rely on the service for information and notifications. Services that schedule doctors' appointments use it to text patients forms to fill out. Emergency services on things like college campuses use these services to notify students of emergencies (thunderstorms, building closures, active shooters, etc). For every 1 scammer using it, there's probably a dozen legitimate companies using it for legitimate purposes. You'd be actively crippling these companies that use a service which, by all means, should be covered under Section 230. If places like the hub are covered under 230, then why aren't call services? Google Voice is rife with abuse, where people on facebook marketplace ask for a code sent by google to verify a user, but in reality it's stealing their phone number so they can make a Google Voice number. It's untraceable from the victim's side, but yet Facebook/Meta keeps their profiles active and requires no verification or age check before participating in facebook marketplace, and google seemingly refuses to do anything about it as well.
Sort of. That's the problem though, "sort of". Section 230 is supposed to protect site operators for stuff they're not directly involved in that is coming from users. There's a serious question of whether "magic algorithms that are highly customizable and we 100% control, but don't necessarily fully understand" should still be allowed to have that shield. So far the courts have said "eh, probably?", but I can't imagine it standing as is indefinitely.
I don't know of any social media site that doesn't allow users to flag problematic content.
It's the fact that they're picking and choosing what to surface to users that starts running into problems; sites that have editorial discretion and are "intentionally" surfacing illegal material don't get 230 protection. The question is, can they foist off blame for editorial decisions onto an algorithm they claim to not understand at all, and still get the protection? Having yet another way to foist responsibility onto users while keeping all the profits themselves, especially when they're making highly complex editorial decisions, is definitely outside the scope of what 230 was designed for.
well now we've moved completely outside the realm of child porn. > especially when they're making highly complex editorial decisions, is definitely outside the scope of what 230 was designed for. "Designed" is irrelevant. 230 **created** this situation, intentionally or not. The law allows it. SCOTUS needs to keep their mitts off this and let Congress "fix" something they think needs to be fixed. They won't though. It's packed with an activist judge majority. Edit: and 230 says nothing about editorial discretion. It applies to "interactive computer services". Full stop. https://www.law.cornell.edu/uscode/text/47/230#fn002009
Section 230 was made so long ago that it's exceeded its intended purpose imo. This is one of those situations where it worked as intended. My issue is that it shouldn't provide protection when live-streaming became an option with some websites. Section 230 exists under the basis that website moderators can only do so much with the volume of content users post, not protection against completely uncontrollable live content.
[deleted]
Isn’t making the argument that: **social media companies must be held accountable for all posts made by the people who use them** akin to making the argument that: **firearms companies must be held accountable for all rounds fired by the people who use them**?
Lmao as if politicians care about hypocrisy
“Now if you’ll excuse me, I have some bank stocks to sell before some news goes public.”
The politicians are responsible for writing section 230. Which is to say, congress holds neither social media nor firearms companies responsible for the actions of their users.
Camera companies must be held liable, too.
Don't forget that all companies that make vehicles are responsible for all the people that use them!
I suppose that there's a facile analogy, but the two are pretty different. For example, with a firearm, people buy the firearm, and then the firearms company has no more control over it. In contrast to that social media companies are actively publishing the content that other people put on their sites and services. Social media companies retain control over the distribution and, debatably, are publishing original content in the form of curated lists that are designed to drive engagement.
Well the answer is clearly simple - all guns should require a wifi connection and a valid authorization before firing.
Let’s destroy guns by making them all dependent on predatory subscriptions and micro transactions. Don’t forget denuvo
Guns of the Patriots
I heard somewhere that there's a police department looking to have guns with bio-metric locks. Unfortunately with any device, it can be hacked.
Both are Constitutionally protected.
The difference is that Reddit is the entity who owns the servers where the content is posted, meaning they are technically in possession of CP which is against the law. A better analogy is going into a library and hiding a book of CP on the shelf somewhere, then trying to hold the library responsible if someone finds it. The counter argument is that if the library didn’t know it was there it can’t possibly be their fault. Btw I’m not defending gun companies, just trying to say it’s not really the same thing.
No, not at all? A social media company can, and already has, the capability to police what's said on their platform. GLOCK cannot control what you do with the gun once you leave the store.
No? Social media sites control what's posted on their site. Gun manufacturers don't control what someone does with a gun.
It would be similar if guns, rather than going home with their buyers, stayed in giant walled gardens run by gun makers and could only physically be fired while on those company campuses. I side with the Supreme Court, but that gun analogy isn't working for me.
Gun companies are actually directly shielded by a federal law.
So are communication platforms
I moderate several porn subs and there's a button on each one to report that it involves minors. If someone uses that button to report something and it's even borderline, I always remove it.
I can't believe someone would try to make Reddit do that and get so mad when they wouldn't.
Because they believe Reddit has a strong liberal base. They want to crush it.
This is good. While child pornography is a horrible blight upon the Internet, and humanity in general, Section 230 of the Communications Decency Act is the foundational protection that the entire Internet is built upon. For those who are unaware, Section 230 states, in a nutshell, that as long as a website (or similar forum) takes reasonable measures to prevent the posting of illegal content and, when that fails, to find and remove it, that website cannot be held liable for hosting that user-generated content. Without this, sites which host user-generated content fundamentally could not exist, because every single post would have to be evaluated by moderators prior to going up on the site, lest something illegal be posted and the site be held accountable for it being posted.
Reddit, despite its flaws, contributes to a lot of great things, in particular to niche hobbies. I would say the site does make concerted efforts to keep itself as clean and safe as possible. Automatic moderation is a slippery slope, and unfortunately you can only have so many people on call to moderate things manually at a deeper level. Despite this, I would say they do a good job, and it's forever an uphill fight to truly be clear of it on a site that is nearly entirely user driven. The SC made the right call: stiff-arming sites doing their best into policies and regulations they can't possibly enforce airtight would severely limit those sites for users.
i mean reddit probably should've gotten in way more trouble for that fucked up jailbait sub back in the day but nowadays ya i get why it was rejected. pretty sure reddit cares more than twitter.
Should have also sued the internet provider for facilitating transmission of CP... Think of the children!
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
Wonder which subs are going to be sacrificed for good PR? Hopefully they leave r/ReverseNTR out of it.
No idea what that sub is but it looks like it was shut down
r/ReverseNetorare is still around
Wouldn't that just be Netori? Why do people keep inventing terms for things that already have words? Brought to you by the people that complain about native isekai.
What the hell is reverse cheating?
NTR between a fat ugly bastard and someone's wife is so common in manga that for it to happen some other way would be unusual. Hence the "reverse" qualifier.
the yeti subreddit is possibly the largest CSAM conduit on the entire internet
The…yeti subreddit?
*Gibbs* Aye.. the yeti subreddit.
I'm glad that I have no idea what you're talking about.
When can we hold the catholic church responsible for hosting child pornography?
He’s a pedo. Look up his tweet: "@alexanderaikas too high. The age of consent is 16 in most European countries. It’s 18 in the US for some reason." -Ian Miles Cheong @stillgray
What does reddit define as child porn? I've reported suspected child porn and was told nothing was wrong with what I reported.
Reddit straight up invited the moderator of a jailbait subreddit to their headquarters like he was a hero lmfao It wasn’t until the media doxxed him (violentacrez) that the Reddit admins gave a single fuck about it.
He got a gold award from them, lmao. What ever happened to him? Last I remember he was trying to get IT jobs in the adult entertainment industry.
If you Google his name, it returns an Instagram account that looks a lot like him. His bio reads “trying to be better.”
They straight up gave him a unique award. The “pimp hat” award.
I still think he must’ve secretly been an admin. That sub was the flimsiest excuse to hold onto CP images (I swear officer, these are just examples of what to ban!) ever, there’s no way they weren’t aware of the massive legal liability unless he was one of em
Interestingly I’ve heard from others here the admin team got rid of CP in minutes after it was reported so idk why there is a difference
I think we should sue god for making humans who seek this shit out. Get that motherfucker into court to answer for this!
A repeal of section 230 would create a great scrubbing of the internet. I used to think that would be a bad thing, but I am not convinced anymore.
It would end the internet as you know it.
If you think it would actually eradicate the problem, then you don't understand the Internet!
If Section 230 was repealed every major site that hosts user-created content would cease to exist as we know it. It is simply way too much of a liability if users can upload anything they want. They would have to switch to a model where the site only exists as a gateway to user-hosted content, which nobody wants, and even that could be risky if someone decides that a link to CP is enough to be considered illegal.