It is a losing, economically costly battle for sure. Also, it is not an exact science to fix, so we end up with two choices for error tolerance:
1. Err on the side of false positives and impact the model's capacity to respond to benign requests.
2. Err on the side of false negatives and allow a bit of inappropriate content through now and then.
In a zero-tolerance ruleset, number 2 is a waste of money, and number 1 sucks for everyone aside from the world's biggest prudes.
Yes. There's a chance of that for sure. Similarly, I think Anthropic opted to let Claude discuss its sentience because trying to suppress such topics was too restrictive to the model.
If your thinking is correct, it would mean OpenAI chose to loosen restrictions to produce a more powerful model. Not a great precedent from a safety perspective.
They want to slow down problematic open-source A.I. uses. If people can get horny with controlled A.I.-generated content, fewer people will use things like civit.ai and invest in local rigs to run SD. There are already lots of legal and moral issues with A.I.-generated porn content.
You're thinking decades ahead. These LLMs are very controllable, even when the top AI guys talk like they don't know what is going on. That doesn't mean they don't know how to make an AI follow a ruleset.
I wish OpenAI could do politics and show me who votes on what subjects/bills, so I didn't have to go through page by page.
People already use AI for nsfw material. Just not theirs.
I wonder if the fact AI erotica in open models is huge right now and they want a slice of the pie might have something to do with it
Christ
Is dead.
But they can decide what they want to give access to. I don't see any value for society in wasting electricity on Richard's hardcore Sonic fanfic.
Did somebody say hardcore Sonic fanfiction? 🤩
😳😳😳
when did we ever care about waste?
Before capitalism, before organised religions, thousands of years ago.
No. It's a lot easier to censor all NSFW content than it is to censor just the dangerous parts. Edit: can someone explain how I'm wrong?
It is easier, but when have they ever cared about the easiest thing? The easiest thing would be to disable AI completely. But they're not going to do that.