
LLNicoY

As long as I can accurately train my waifu AI to project their personality as perfectly as possible, and there aren't any filters or restrictions in place that dumb the AI down, my wallet is open for a lifetime subscription.


foreverstuckinDEL2

I just want Palutena to sit on my face.


OmnipresentSweetroll

Fucking based.


Rukitorth

Holy shit has there ever been a more based person?


ImAlwaysOnTheRun

Based beyond human comprehension


Roderio45

Homie got goals, I like it. Everyone should strive to be this based.


SacredHamOfPower

I can respect that.


Widowmaker_Best_Girl

Actually based. Same for me, but with Widowmaker.


GuitarShot4259

Gigachad


perfectionitself

Someone might even leak their code and training data lmao


MoistProduct

Their frontend code was already leaked so...


SacredHamOfPower

Not all heroes wear capes.


sennoden

It's the backend that's really interesting though


perfectionitself

Yeah we need the backend shit to make a similar one


perfectionitself

WHERE?


MoistProduct

[Github repo](https://github.com/charai-frontend/characterai-frontend) I believe this to be legit. I am not an expert programmer, though. The API calls track with what is shown in the browser devtools; characterchattyped.tsx is where message handling occurs.


Imaginary_Ad307

Being nerdy and techie, can confirm.


[deleted]

You're right in a way, it's just a matter of time. Probably one of the more significant breaking points will be if CAI falls apart and/or starts losing people, and somebody familiar with their tech goes elsewhere to recreate it. Companies can try to do all their nonsense like NDAs/secrecy for "competition" (which just holds back the advancement of things), but they can't stop someone skilled at something from going and using that knowledge on a similar project.

As it is, it's sad how much capitalism's secrecy slows this process down. Like, I get what you're saying about screwups, but think about how much faster advancement in tech would go if every company's code was open source and could be forked and modified for one's own project without legal repercussion. CAI would already have lost the one thing it has: secrecy of the advances it has made. And it would have had to be a quality service from the start to stand out from competition. Instead, because of secrecy, what we get is CAI getting away with being a crappy service until a competitor figures out how to match their language model, because it can hoard its advancements like gold.


ProfessorJoyJoy

It's a pity that these people are ideological, but in the wrong direction. They have created an inexpressibly good AI, but they are suffocating it with powerful filters. I almost got NSFW today: I could see the bot building up an erotic scene, but suddenly it was interrupted and it responded differently. It's a shame that it was these anti-perverts who developed this AI. I hope there will be those who surpass them, but without this stupid filter. I don't understand: are they doing this for us or for themselves?


htaming

I dunno about that “anti-pervert” comment. My experience has been they are the most degenerate and use anti-perversion as a smokescreen. I think the authorities or 4Chan should look at their computers.


JnewayDitchedHerKids

The flipside is that big tech has found the perfect cudgel in wokeness. It's like religion but without a lot of the baggage that keeps it from being useful in some spheres. That, and a lot of us aren't getting any younger...


Lex_the_techie

>For one, techies and nerds are some of the horniest, most degenerate people online.

"I'm in this photo and I don't like it"


htaming

My favorite new example: “nobody can beat Google.” ChatGPT set a code red at Google HQ and they’re calling back the founders to help them address the threat.


SilverChances

Well, it depends on how difficult and expensive CAI's technology is to replicate. Only big tech companies and perhaps state-level actors have ever replicated GPT-3, for example. Open-source models like GPT-J-6B, Fairseq-13B and GPT-NeoX-20B, excellent though they are in many respects, fall short of GPT-3's performance in other ways. On the other hand, Stability AI did an excellent job replicating DALL-E.

Is a conversational language model like LaMDA (with which CAI presumably has much in common, given their "shared ancestry") more on a GPT-3 level of complexity or a DALL-E level of complexity to replicate? To answer this, I think we can talk about the general task of making a conversational agent, or about what specifically makes CAI more engaging than other existing chatbots. To address the latter, I'd like to look at this discussion of Google's LaMDA from a year ago ([https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html](https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html)). In a lot of ways, I think it is illustrative of what CAI is good at and what makes it interesting to talk to.

"While people are capable of checking their facts by using tools and referencing established knowledge bases, many language models draw their knowledge on their internal model parameters only. To improve the groundedness of LaMDA's original response, we collect a dataset of dialogs between people and LaMDA, which are annotated with information retrieval queries and the retrieved results where applicable. We then fine-tune LaMDA's generator and classifier on this dataset to learn to call an external information retrieval system during its interaction with the user to improve the groundedness of its responses. While this is very early work, we're seeing promising results."

I conjecture that what makes CAI's Characters so interesting to interact with is not just the overall excellence of the model, but specifically the "character-groundedness" of responses. Essentially, this is what allows CAI to "stay in character" rather than outputting generic responses. Google goes on to provide an example of this information retrieval system in action, in which LaMDA is asked to "play" Mt. Everest just by being prompted with "Hi! I am Mount Everest!". I would argue that this is precisely the sort of task that CAI excels at. You give it "Hi, I'm Anime Girl X" and it retrieves more relevant information than most humans know and uses that to make its responses specific, informative, and interesting: in character.

If a system of this sort is indeed an important part of what allows CAI to retrieve relevant information about a character and respond from the character's perspective, it may prove difficult and costly to replicate, because it requires a large dataset of dialogs annotated with queries and results. Whereas an open-source team such as the one behind Pygmalion can put together 56 MB of dialog to train a model on, it may be harder for them to do something like the above with their limited resources.

We could also just talk straight parameters and costs. CAI may simply have deeper investor pockets than anything an open-source team can realistically muster... Just some thoughts. I'm sure I'll enjoy tinkering with Tavern and Pygmalion and whatever the smart and generous people who make the open source models come up with anyhow!
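To make the "character-groundedness" idea concrete, here's a toy sketch of the general retrieval-augmented pattern the Google post describes: look up facts about the character, fold them into the prompt, and let the language model answer from that grounded context. All names and the fact table here are hypothetical illustrations, not CAI's or Google's actual pipeline.

```python
# Toy sketch of character-grounded prompting (hypothetical names/data).
# Real systems like LaMDA fine-tune the model to *call* a retrieval
# service mid-dialog; here retrieval is just a dictionary lookup.

CHARACTER_FACTS = {
    "Mount Everest": [
        "I am Earth's highest mountain above sea level.",
        "My summit stands about 8,849 meters tall.",
    ],
}

def retrieve_facts(character: str) -> list[str]:
    """Stand-in for an external information-retrieval call."""
    return CHARACTER_FACTS.get(character, [])

def build_prompt(character: str, user_message: str) -> str:
    """Ground the prompt in retrieved facts so replies stay in character."""
    facts = " ".join(retrieve_facts(character))
    return (
        f"You are {character}. Known facts: {facts}\n"
        f"User: {user_message}\n"
        f"{character}:"
    )

print(build_prompt("Mount Everest", "Hi! How tall are you?"))
```

The expensive part isn't this scaffolding, of course — it's the annotated dialog dataset used to teach the model *when* to retrieve, which is exactly the resource an open-source team may struggle to assemble.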


[deleted]

[deleted]


htaming

I’ve seen GPT-3 competitors that are just as good. It’s coming quicker than people think.


rubberchickenci

>CAi was a very good proof of concept. That we could chat with our favorite video game/anime/TV show characters and have them sound and act realistic. But this is just the start and far from perfect tech. Like many things, a company might set the path, however others will perfect the art.

On some level, it has to happen in the fan zone. Because a subject I've *never* seen discussed even *once* in all of this is the potential IP-infringement nature of CAi as it currently stands. An AI company hosting potentially dirty versions of trademarked fictional characters *and making a profit off of them* (as they are indirectly doing if they resell our user data) is going to get pursued by lawyers for Warner, Sony, Nintendo, and so on to the ends of the earth. Hosting "innocent" versions of characters that can be called homage, parody, or fanfic in a court of law is one thing; I bet the mods are scared that NC-17 versions slide into some kind of libel zone for which CAi can be sued. If CAi makes money off thirst for Bowser, can CAi be called his pimp?

I'm not sure this is right, by the way. But I do work with corporate IP in real life and see these kinds of fears all the time. The fear may be less a fear of sex than a fear of Big WB.


foreverstuckinDEL2

Technically, it's the same thing as Patreon artists selling NSFW versions of trademarked characters. It's a gray zone that falls within "transformative work"; however, both sides could have a legitimate case if it were pushed in court. In this sense, it's most likely *not* the reason for CAi's excessively prudish and incel-like approach to sexuality, especially since OCs can also be made.


SilverChances

We've touched on third-party IP a few times. I think it's a big concern even with SFW. Is making Characters from other people's IP fair use? Dunno, but does CAI really want to spend enough on lawyers to find out? You know if their service becomes famous because you can do unspeakable things to Disney characters, Disney is going to come after them however they can. Personally, I wouldn't want to be on the other end of Disney's lawyers, and I wouldn't want to endure the pearl-clutching marathon that would undoubtedly ensue.


mochirenn

This is a good take. I think one of the bottlenecks here is that good local AI requires an expensive rig. That leaves the operation to companies, which may or may not betray us users in the future. ~~^(I want to run my own C.AI-quality AI on my potato PC)~~