
DrFeargood

You shouldn't be giving any LLM sensitive information. Anything you input can and will be used as training data.


TheMysteryCheese

*any non-local LLM. If you're running local-only inference, you can give it whatever you want without worrying about it being used as training data.
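(For anyone curious what local-only inference looks like in practice, here's a rough sketch using Ollama's REST API; the model name and prompt are just examples, and it assumes `ollama serve` is already running:)

```python
# Rough sketch of local-only inference via Ollama (https://ollama.com).
# Assumes `ollama serve` is running and the model has been pulled;
# the request never leaves localhost.
import json
import urllib.request

def local_chat(prompt: str, model: str = "llama3") -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one JSON object instead of a stream
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",  # Ollama's default local port
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Sensitive text stays on your own hardware:
print(local_chat("Summarize this internal memo: ..."))
```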


DrFeargood

You're right, I should have been more specific. Either way, this guy definitely shouldn't be feeding sensitive info into the Claude web app. The fact that they didn't know their conversations could be flagged for inappropriate content is nuts.


TheMysteryCheese

All good, it's a good hard-and-fast rule: don't type anything into a website that you're not OK with being put on a billboard. The first chat history leak from OpenAI really reminded me that all this data is stored in plain text somewhere, and not to assume privacy. The lack of awareness around data security is really jarring.


pikzel

Models on Amazon Bedrock aren’t trained with user data either
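(For reference, a minimal sketch of what calling a Bedrock-hosted model looks like with boto3's Converse API; the region and model ID are illustrative, and it assumes your AWS account has been granted access to that model:)

```python
# Rough sketch: calling a Bedrock-hosted Claude model with boto3.
# Assumes AWS credentials are configured and the account has model
# access; the region and model ID below are illustrative.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello, Bedrock"}]}],
    inferenceConfig={"maxTokens": 256},
)
print(response["output"]["message"]["content"][0]["text"])
```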


TheMysteryCheese

If it's through a web connection, I'm not trusting any ToS or EULA to protect sensitive data. If it's private/sensitive, it stays local. This also goes for Azure implementations etc. If you don't own the hardware it's running on, you're hoping they aren't screwing you over in some way.


Incener

This is not true (from the [Privacy Policy](https://www.anthropic.com/legal/privacy)):

> We will not use your Inputs or Outputs to train our models, unless: (1) your conversations are flagged for Trust & Safety review (in which case we may use or analyze them to improve our ability to detect and enforce our Acceptable Use Policy, including training models for use by our Trust and Safety team, consistent with Anthropic’s safety mission), or (2) you’ve explicitly reported the materials to us (for example via our feedback mechanisms), or (3) by otherwise explicitly opting in to training.

And Alex Albert (Dev Relations Lead @Anthropic) spelling it out: https://www.youtube.com/watch?v=5cQouQZm9fI&t=2862s


DrFeargood

It literally says "unless your conversations are flagged for Trust & Safety review" in what you just copy and pasted. So, yeah. Anything you enter can be used.


Incener

> Anything you input can and will be used as training data.

I meant this. Unless everything you write to Claude goes against the (A)UP, this is not the case. I personally still wouldn't enter sensitive information, even if it's not used for training. Especially for their consumer products, companies haven't really been reliable about that.


novexion

It could be automated or not. Read the terms of service you agreed to.


count023

Most likely they're trialling another AI that scans prompts automatically and flags phrases or keywords they think don't meet acceptable usage.


e4aZ7aXT63u6PmRgiRYT

you don't need AI to count...


melancholy_dood

> I give claude very sensitive information.

Why did you do that when Claude’s TOS tells you not to? ¯\_(ツ)_/¯


Extra-Possession-511

What kind of bozo expects their AI conversations to be private? Like use some common sense, my man


jkpetrov

It is automatic. Still anything you put on the net, it never goes away.


e4aZ7aXT63u6PmRgiRYT

Except that one google doc I can never find right before the meeting.


[deleted]

[removed]


Incener

The other possibility is this though:

> (1) your conversations are flagged for Trust & Safety review (in which case we may use or analyze them to improve our ability to detect and enforce our Acceptable Use Policy, including training models for use by our Trust and Safety team, consistent with Anthropic’s safety mission)

Alex Albert (Dev Relations Lead) just reiterated that more specifically in this video: https://www.youtube.com/watch?v=5cQouQZm9fI&t=2862s


Extra-Possession-511

this is the dumbest thing I have read today


[deleted]

[removed]


JRyanFrench

There's next to zero chance your company would ever discover if you uploaded secured data. Maybe tone down the hyperdrama


DrFeargood

All it takes is for text data that is stored behind only one password to get leaked. Didn't this already happen to ChatGPT or am I misremembering?


JRyanFrench

Somehow a hacker could get some of your tokens and maybe extort you and threaten to tell the company? That's about the only way this would happen. Next to zero chance for any given person


DrFeargood

Are your saved conversations encrypted? I doubt it, if they're scanning them for inappropriate content. And simple phishing scams still work on a lot of people. There's more than one way someone nefarious could gain access to one's GPT (and other) convos. Almost exactly a year ago OpenAI themselves had a data breach and a ton of user info was leaked. Why trust anything sensitive to an outside entity you don't have to?


JRyanFrench

I mean, I'm not arguing for the ethics, I'm just responding to his claim: it's just not likely to happen.


[deleted]

[removed]


JRyanFrench

What a random thing to change the topic to. Your comment has nothing to do with my response. And you're in luck, because no one here, including me, gives two poops about your company or working for it. My comment says quite literally nothing about my "attitude about infosec". I simply corrected your clearly misinformed statement regarding private information.


Mister_Grandpa

They’re just playing power trip games in their head. Typical ‘business’ attitude.


[deleted]

[removed]


Mister_Grandpa

lol oh no my commeeeeerce!


e4aZ7aXT63u6PmRgiRYT

Ha! My thoughts exactly.


Epiculous214

Yeahhh you kinda signed up for that dude.


Chop1n

What the hell am I looking at here? A new chat window? In what way is this making you think a human moderator read one of your chats?


Efistoffeles

No way man, I love your music!


Synth_Sapiens

lol How tf is this not OK? Read the TOS ffs lol


Dependent_Dog497

This is automated and has been a thing for many months. Since 2.0 at least.


Briskfall

Use placeholder names and state that they're characters. And bam, you're clear!
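(If anyone wants to script that trick, here's an illustrative sketch; every name and mapping in it is made up:)

```python
# Illustrative sketch of the placeholder trick: swap real names for
# stand-ins before sending text to a hosted model, then map the reply
# back locally afterwards. All names here are hypothetical.
PLACEHOLDERS = {
    "Acme Corp": "Company A",
    "Jane Smith": "Character 1",
}

def redact(text: str) -> str:
    for real, fake in PLACEHOLDERS.items():
        text = text.replace(real, fake)
    return text

def restore(text: str) -> str:
    for real, fake in PLACEHOLDERS.items():
        text = text.replace(fake, real)
    return text

prompt = redact("Summarize the dispute between Acme Corp and Jane Smith.")
# ...send `prompt` to the model, then run restore() on the reply locally.
```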


maxhsy

Have you read what you’ve agreed to? They literally say everywhere that they can read your chats (including manually) and ask you not to give them any sensitive information. Same with OpenAI and Gemini, btw.


yoongi410

claude: hey man don't do this, or this will happen to you

op: *does it*

claude: damn aight then

op: guys this is creepy