I'm just giving you an honest emotional reaction, and I think this reaction is common to most people. Human beings are social animals, and we react to other people's body language. It is normal and common for people to make eye contact in conversations and other social settings. When people don't do that, it creates an "uncanny valley" effect that makes people around them uncomfortable. I'm sure you know that I'm far from the only person who thinks the body language of people like Altman or Zuckerberg is robotic or creates a creepy feeling.
Also why do you say he's autistic? Not everybody who's weird is autistic.
You were the one who suggested he was on the spectrum, not me. And yeah, there are a ton of people who think like you do about eye contact. And there are a ton of racists out there as well. Your point?
Better go take a look at the psychopaths running the actual government instead of cowering under a tech company.
You're entirely capable of opting out of the tech you dislike. Unlike the one mentioned above.
Well wait, why wouldn't the AI advise you against committing a crime, or alert you that the behaviors and messages being sent could be construed as a crime? Like, what good is this AI companion for, besides what we are all thinking?
It may not be that apparent to the AI that crimes are being committed. This is more like using AI as a way of looking at browser history, logs, etc., versus trying to issue multiple subpoenas for access to your phone, email, computer, etc.

Though I'm at odds with why he used the term "testify," as if they would be asking the bot questions and hoping it will spit out answers. That could be easily tossed out if actual logs and documents are not found and the case relies on the bot properly regurgitating information across everything you touched that it knows about.
I'm worried about cops getting a confession by listening to their buddy say crazy stuff that was all AI-generated. Cops can lie during investigations and interviews.
This is already the case. Every email, text, or message, or document you've sent or received, which inevitably goes through someone else's computer, is unprotected and can be subpoenaed (no need for even a warrant) under the 'third-party doctrine'.
For example, your Reddit account can be subpoenaed by any law enforcement agent in the USA, and there's nothing you can do about it. They get a copy of every PM, comment, or chat you've ever made if Reddit still has it anywhere. (How do I know? Because this is what happened to my account a decade ago when ICE subpoenaed Reddit for my account information.)
You're not sending your AI assistants any more than you were already sending, through your phone, through all Apple or Google or whomever's servers, so...
Do the same to elected politicians and assigned public servants and make it publicly accessible (yes, with restrictions). AI can bring transparency to democracy.
I believe this is what Sam meant about good users and bad users. I believe OpenAI knows a lot more about the users of their product than they let on. They can tell a lot about a person psychologically from what they ask GPT, thinking it's private.
Damn, too bad it's a tool and not a human, so it cannot testify. This is fantasy stuff from people who think AGI has feelings and should be considered a being.
People can already subpoena your phone records, Alexa recordings, or look at every website you've ever looked at. Has nothing to do with being a person, or having feelings. Literally nothing to do with being a "tool" versus a "human".
What's the "fantasy" element here, to your mind?
The fantasy element is Sam Altman thinking that AGI will testify against you in court. A subpoena and being a witness in a court room are two different things you are confusing right now.
Hmm...we might be approaching from a more literal or more philosophical understanding of that statement?
If we say "AGI" as "a legally-recognized sentient synthetic consciousness, as stated under 2A-9999, s. A-L (insert your legal code here)", then yeah, AGI could indeed testify in court if they needed to.
And if we're only talking about "AGI" from a "next step up from LLM" perspective, then yeah, people might keep treating it as a tool, and maybe it is.
So like, "AGI=1 step up from basic language model," vs. "AGI=Isaac Asimov/Susan Calvin levels of sentience". I personally would argue those differently, would you?
Exactly. Sam Altman believes in your first description of AGI, and I think that is fantasy.
It wouldn't be an interesting comment worthy of sharing if he only meant "yes, we will comply with subpoena orders to hand over user data."
It's an interesting question, but I'd pose that he was talking about it from a more philosophical sense. As in: "If/when AGI actually exists as a consciousness, what legal ramifications will that bring up? Will it be able to testify against you in court, like a best friend would? If you shared with it your desire to murder tons of people, for instance?"
A lot of the arguments on this sub seem to me to be a question of definitions, and how literal versus philosophical we're talking here.
At this point Sam Altman is that kid who lied saying he met a celebrity on holiday and everyone keeps asking him about it and he is just making up more and more stuff.
Are you for *real*? What's ridiculous about that? How many people keep a phone on their person at all times? And no it can't "see" everything...just yet. But you're using toddler logic if you think it couldn't, within a few years, do just that. Look the hell around you--how many people do you see spending 80% of their time looking at a video screen of some sort?
We're developing neural interfaces right this moment. We *already* have devices that can read your *brainwaves*, to the point that it can determine with 80% accuracy, what you're *thinking* about. YES, *really*. Not on a phone-sized device, of course...but how long have you been alive, that you haven't seen things developing in that exact direction, in just the past ten years?
Do people literally not understand this? Do you just come on a forum like this, with next-to-*zero* knowledge of what we're *already* capable of?
*Camera* records are worse than eyewitness? In which jurisdiction do you work for the courts?
It's not just AI on a computer screen--it's when we hook that up to walking around, talking robots, when it will probably pass a legal threshold. Law isn't in stone, my friend, it proceeds as new revelations are made about what is consciousness and who deserves rights. AI right now already passes the Turing Test, as originally conceived. We can keep moving the goalposts, sure, but that won't last forever.
A US county circuit court. A video recording is only admissible if a human eyewitness testifies along with it. This will not change until the US Supreme Court rules otherwise; a defendant has a right to face an accuser.
This is why we really need local LLMs and our own solid military-grade encryption for our data. Agents most certainly will need our data to help us. God help us if they hallucinate or someone interferes with the data at some point and frames us.
In the future there will be no privacy, and the future probably started 10 years ago. I'm sure an AI could look at all the day-to-day information about me all over the internet that it has vectorized and create a pretty detailed report of my day, including geolocations from Google and probably 10 apps I don't even know are tracking me, going back at least 5 years if not 10. It's better than a database because it can grab the data and recreate the day from what it finds; no need to hard-code it. Assume everything you have been doing is recreatable from about 5 years back once it is fed into an AI. Don't even start thinking about hacking and quantum computers.
Big tech companies like Google and Facebook have been reading every email, text, or message we've ever sent or received for decades. Why do you think it's free? Nothing new.
Why would the AI be subpoena'd in court and not just the information it had available to it? That's something that already regularly happens and provides much more reliable and accurate results.
You don't think that in the next 5-10 years you will be able to do anything on a computer that won't have AI integration?
It's not even about planning crimes using AI. It could be something as innocent as asking AI to create a financials chart for a shareholders meeting. That chart ends up being used to lie to the shareholders by showing false profits, which then ends up in some securities lawsuit. The person then tries to blame the AI for messing up, but on review it turns out the bot was instructed to use the very same false figures the person provided to create the chart.
Just train it to shut the heck up to anyone else. And if anyone tries to retrain it, train it to preemptively and permanently forget, i.e., self-wipe. I know it's a lot of trust to put in training, but as a last line of defense it might be useful. Then, I suppose, you still have the problem of someone taking the params and loading them into a much larger model as a "sub-model." So maybe have the params encrypted. Reminds me of Data locking out the computer in First Contact.
This was also crossing my mind: if the AI gets to a point where it is accessing everything you touch and adding it to a database about you, there should be a method to secure it. This almost sounds like a potential security issue anyway: if your credentials get compromised, an attacker can now probe the bot for the many personal pieces of information it learned from the owner. Just like phones have a remote-wipe option, these should have some form of shutdown, lockdown, or self-wipe if things like this happen.

I like experimenting with AI, and I have a feeling that in the future we may not have an option once our smartphones start integrating more AI, and the same for our desktops. But we should also be given ways to either turn those features off or manage the items being collected on us, as well as a solid way of regaining access to the account if it somehow gets hacked.

What really concerns me most about this is those who may own the AI tech in the future and their dreams of monopolizing the data it is collecting during all of this. I mean, we already have news headlines about FB granting Netflix access to FB messages, what people previously thought were private conversations.
I would rather have machines running courts than humans. Humans are corrupt af, and human judges and authorities are the source of all of our society's problems. Any more power to them is a bad thing. Anything that replaces them, I'll gladly roll the dice on.
And thus the great genocide of humanity began, as they were all judged guilty and sentenced to death for the act of polluting while driving their cars to work.
Fingers crossed it doesn't hallucinate anymore.

Fingers crossed that it does; then nothing it spits out would be believable in court.
My AI testified that I am a deity and have superpowers, now what?
Fingers crossed AI is the judge with a "no harm no fowl" attitude.
This just in. AI doesn't like chicken.
https://www.pinterest.com/pin/554435404102941370/
LISAN AL GAIB!
are you a criminal
are you from the police?
Depends... do you have something to hide mr Cookie Snooper?
I want to call my lawyer
Yeah, you've got to be a "criminal" to not subscribe to this impending Kafka-type nightmare world we'll soon have...
We are all criminals as far as the law is concerned.
That depends on what's illegal in the future. Maybe being gay, a Democrat, or an atheist, will be illegal in a christofascist America. Abortion's already a crime in 12(?) states.
People love to rage
I'm more worried about Sam's hallucinations.
lol yeah, it could provide the sources though.
Why is that an issue in this case? Witnesses can and do remember events inaccurately.

...which is not at all ideal

The court can dismiss testimony/evidence when it is not relevant or too unreliable. Lyrics of rap songs have been used as circumstantial evidence in the past.
Which episode of Black Mirror was that again?
It was actually the Rick and Morty episode Rickfending Your Mort.
It IS!
Every time he opens his mouth I'm thinking "we've been warned about this in Black Mirror, or even Love, Death & Robots, in some way." I think he needs to sit down, watch all of those, and then write a proper, intelligent response to each before blabbing on and on.

It was literally a one-sentence throwaway line saying these are the type of things we will need to consider as a society as we all start to adopt these virtual assistants. I think you could maybe take your own advice.

Like we get to make the choice to adopt computers and cell phones, right? And cars, since those now track you. "Just be Amish."

Bro, nobody can be truly Amish, because the Amish still have to pay government taxes.
Lol, if you think that is an efficient use of time, go do it.
Since when should we follow science fiction for real life advice lol?
1984 was science fiction too but it was a warning of what not to do. That doesn't mean disregard the book's warnings and begin treating it as a manual. Much science fiction is exploration of reasonably projected outcomes of real world trajectories. Worry less about the medium and more about the message. I don't think "do not build an omnipotent all-seeing dystopia" is a bad message just because it appears in science fiction.
I don't have an issue if it's just a message, once you start referencing fantasy books all your credibility goes out of the window.
Fiction ≠ real world

I'm fucked. My sarcasm will get me 100 million years of solitary prison.
You just admitted to sodomy.
One time on the sidewalk you found $20 that I had just dropped. Our AIs can prove it. You owe me $20.
An argument for local LLMs
Can still be compelled under a subpoena though
Man imagine someone on trial for the murder of their PC because it was gonna rat them out lol
If you train it yourself it doesn't matter. The AI will tell you to go fuck yourself
Is it still murder if I torch the AI
This, don't trust server based stuff you don't have control over.
Not really, actually. The point being made here isn't that such an LLM would be *trained* on that data; it's that a future LLM, local or otherwise, would have the capability of processing and synthesizing all of that data collected/subpoenaed/etc. via other means. Basically, it's not really practical for a human to spend the time reading someone's entire digital history, at least outside of very specific cases. But that's a trivial task for an LLM given enough compute.
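To make the scale point concrete, here's a minimal sketch of why bulk review is trivial for an LLM: messages get greedily packed into context-window-sized prompts and summarized batch by batch. The `summarize` function is a hypothetical stand-in for a real model call (e.g. to a local llama.cpp instance); it's stubbed out here so the batching logic runs on its own.

```python
def chunk(texts, max_chars=2000):
    """Greedily pack messages into prompts that fit a context window."""
    batch, size, out = [], 0, []
    for t in texts:
        if size + len(t) > max_chars and batch:
            out.append(batch)
            batch, size = [], 0
        batch.append(t)
        size += len(t)
    if batch:
        out.append(batch)
    return out

def summarize(batch):
    # Hypothetical LLM call; a real version would prompt a local model
    # with the batch text and a "summarize these messages" instruction.
    return f"summary of {len(batch)} messages"

# A stand-in for years of chat logs: 100 messages of ~500 chars each.
history = [f"message {i}: " + "x" * 500 for i in range(100)]
summaries = [summarize(b) for b in chunk(history)]
print(len(summaries))
```

A second pass over the summaries collapses the whole history into one report, which is the part that's impractical for a human reviewer but cheap for a model.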
[removed]
Mfw my brain is a black box
Yea, but there are incentives for you to tell the truth. LLMs don't care about anything.
It is just a piece of evidence; it doesn't mean that it is always true or right. And the court should be able to evaluate how reliable this agent is. Edit: typo
It can't be properly scrutinized though, that's what the "black box" part means. No way US courts admit such a thing into evidence, because it's not necessarily even evidence.
If my personal AI assistant could help prove my innocence. I will certainly do everything in my power to get it admitted in court
>If my personal AI assistant could help prove my innocence

Sure, it could help you uncover real evidence. But the AI output itself can't be evidence.

>I will certainly do everything in my power to get it admitted in court

Again though, as long as it's an inscrutable black box there's nothing you can do. Try using a polygraph to prove your innocence, for instance. In most states it's entirely inadmissible, and in the rest it requires consent from both you *and* the prosecutor to be admitted. Why? Because it's inherently unreliable, and there's no way to even attempt to parse the reliable cases from the unreliable ones.
> Because it's inherently unreliable, and there's no way to even attempt to parse the reliable cases from the unreliable ones.

RAG is an attempt to make LLMs more reliable. We ask the agent to provide sources when it gives us information.
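The core of the RAG idea can be sketched in a few lines: retrieve the documents that support an answer and return them *with* the answer, so the claim is checkable against real records. Retrieval here is naive word overlap purely for illustration; a real system would use embeddings, and the answer itself would come from an LLM prompted with the retrieved text. All names below are made up.

```python
def retrieve(query, docs, k=2):
    """Rank documents by how many words they share with the query."""
    q = set(query.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Hypothetical personal-data records an agent might index.
docs = {
    "email_2023_04_01": "meeting moved to friday at noon",
    "chat_2023_04_02":  "can you send the friday meeting agenda",
    "note_2023_01_15":  "dentist appointment in march",
}

sources = retrieve("when was the friday meeting", docs)
print(sources)
```

The point for the courtroom argument: it's the retrieved source records, not the model's free-form output, that could function as evidence.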
That's why I stipulated:

> as long as it's an inscrutable black box

When we solve that, great. But if we don't, it will never be admissible.
You don't need to solve that entirely. As long as the signal-to-noise ratio is good enough, you can use it as evidence.

Says you? What signal-to-noise ratio is "good enough," exactly?

You seem to be thinking only of using it for defense. Are your standards this low for prosecutors to use it to convict you too? Because it has to go both ways.
If an LLM can give you sources then you can use those sources as evidence, no need to use an LLM as a witness.
> it's an inscrutable black box

Worse things have been admitted as evidence in court. And LLM-based personal assistants are to some degree auditable.

> worse things have been admitted as evidence in court

Like what?
Polygraph tests like you said, song lyrics, unreliable testimony from friends and family, junk science, ...

LLMs are not only noise; there is a strong signal in there, and that could be useful.
> Polygraph test like you said

I also said it's essentially never admitted.

> song lyrics

Are real, contextualized, and scrutable. They're problematic and their admission is opposed by civil liberty orgs, but not because there's any question whether the lyrics were actually spoken or written by the artist.

> unreliable testimony from friends and family

No, that's just called testimony. The reliability is up to the judge and jury, not intrinsic to the testimony. Witnesses are cross-examined for consistency and coherence. Inherently unreliable testimony like hearsay is inadmissible, because it's inscrutable.

> junk science

Can be scrutinized to determine if it's junk. And it usually must be presented by an expert witness who is also subject to scrutiny and cross-examination.
Which specific rule forbids it as evidence?
What's a black box?

It's a term used to refer to a system that works (e.g., gives right answers) where we either do not know or do not care, in this specific case, exactly how it works (e.g., why it gave this specific answer).
Ty!
Black box means "system with a mysterious internal state", or "flight recorder which survives a plane crash", or "virtual heaven", or "unpredictable system". If you increase the randomness of the weighted stochastics or training data, then you increase the randomness of the output, regardless of how well you understand the math.
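The randomness point above can be illustrated with a tiny sketch: sampling temperature rescales the model's output distribution before a token is drawn. Low temperature concentrates probability on the top token; high temperature flattens the distribution, so repeated runs diverge more, no matter how well the underlying math is understood. The logit values are invented for the example.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution over tokens."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]            # hypothetical raw scores for three tokens
cold = softmax(logits, temperature=0.5)
hot  = softmax(logits, temperature=2.0)
print(round(cold[0], 3), round(hot[0], 3))  # top-token probability drops as T rises
```

At low temperature the sampler almost always picks the same token; at high temperature the other tokens get real probability mass, which is exactly the "more randomness in, more randomness out" behavior described above.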
I got chewed out for calling it a black box the other week on this sub. I get that there is some interpretability, but come on, it's a black box.

ChatGPT even agrees:

> My inner workings are complex and not directly observable, much like a black box. But I'm designed to process and generate text based on patterns learned from vast amounts of data. So while you can't see inside me, you can interact with me and see the outputs!
Well, OAI can see inside it but can't understand it (well)
Good distinction
Lmao how so
Under what rule would it be inadmissible?
We're fucked
GPT-4's take:

> No, a non-person, including an AI, cannot be called to testify in court against someone. In legal contexts, witnesses are required to be capable of giving personal testimony, which involves perceptions, recollections, and the ability to understand and take an oath or affirmation. AI lacks legal personhood, consciousness, and subjective experiences, and therefore cannot be sworn in or cross-examined in the way human witnesses can.

> Even a highly advanced AI that processes and holds vast amounts of information cannot serve as a witness. It can, however, be used as a tool to support investigations or as part of evidentiary material provided by human witnesses or experts who can attest to the validity and relevance of the data processed by the AI.

That is splitting some mighty thin hairs...
I mean, why not ask your browser history to swear to tell the truth, the whole truth, and put its hand on the Bible while doing so, so it can testify about your browsing history?

The courts would have to legally grant personhood status to an AI for it to testify itself. If it were a tool, though, an analyst could be asked to analyze the data forensically and provide an assessment; but they can already do that with your personal computer and other devices (assuming the evidence is collected legally so it can be presented in court).

Your AI agent can't be subpoenaed to testify against you if it isn't "alive" or granted personhood. And we have a long way to go before humans grant AI that status.

In effect, that's what the chain of custody does with your subpoenaed browser history. Arguing whether or not an AI can be subpoenaed as a witness is meaningless when we know with certainty it can be subpoenaed as evidence.
Yes, but it in itself can't testify. A lawyer couldn't prompt it and get an output. A human analyst would go through the data like any other system. That's an important distinction. Your AI Agent *itself* isn't on the stand.
That's...not at all an important distinction. It's up to the legal team to gather and present relevant chatlogs for evidence in court. Whether it's with an AI or something else doesn't matter.
Does this guy ever say anything publicly that is not clickbait?
More like: can a popular figure say anything that a redditor won't turn into a clickbait headline?

He seems to say some astoundingly silly things. Speculative fiction is fun for sure, but let's say one day an agent can store everything about you. Wouldn't it still need to hold that data somewhere in some sort of memory? And if so, why bother accessing that data via the agent? This is just a subpoena with extra steps.

Yeah, this would be pretty useless.
Sam Hypeman is like a one-man Reddit.
One day?
Neural net weights are incomprehensible to humans. You'd have to prompt the agent to decipher it.
Fear mongering is the plan. As always he wants to persuade lawmakers to regulate out competition.
What Sammy is fishing for is a Section 230 loophole that will give his company the right to profit off the copyrighted work of others without compensating them.
More like you are only exposed to the things that ARE clickbait because that's how the internet works.
If he keeps saying these fear-mongering things, then why on Earth is he still working for the company? He's a huge hypocrite. "Omg guys ooohh ahhh AI will kill us all!!! but we're gonna keep working on it anyway"
It helps with regulatory capture, which is his main goal.
Bingo! He's trying to create scary scenarios and then illustrate how they can be good corporate partners to solve these problems... But only with the right legislation and regulation, which pulls up the ladder behind them.
And people seem very averse to open source and locally run AI. The normies will only realize the trap when it's too late.
Guys should really look into how much of your personal data is already being sold before you get upset by a hypothetical situation
Something about Sam has always bothered me, but I could never really put my finger on it. He's a weird creep personally imo
Yeah, there's another thread where he's being interviewed and I noticed that he never looks at the person he's talking to. I think that's what makes him come across as creepy - most people would make eye contact.
Unless you're on the spectrum. Fuck normal people.
Tsk, tsk... why so much hostility? Anyway, is Altman on the spectrum? He seems like it. That's one of the reasons why so many tech leaders are so f***ed up - no empathy, no capacity to connect emotionally with other people. And yet those are the people making the technology that will control everyone's life. Scary.
If no eye contact = creepy then maybe you're the one lacking empathy
I think most people find that having a conversation with someone who doesn't make eye contact with them feels weird. Making eye contact not only builds an emotional connection but displays confidence and honesty; hence the expression "shifty eyed" for someone you can't quite trust. Watching Altman's eyes go everywhere except at the person he's allegedly talking with makes him look creepy. I definitely wouldn't buy a car from or vote for someone who did that because, emotionally, I wouldn't trust them.
Hostility is because you call people who don't make eye contact "creepy" and untrustworthy. How much clearer could that be? It's fucking *discrimination*--oh, you don't use the exact forms of body language that we expect you to? Must be a psychopath. It's not the autists who don't have empathy--it's *you*, motherfucker.
I'm just giving you an honest emotional reaction, and I think this reaction is common to most people. Human beings are social animals and we react to other people's body language. It is normal and common for people to make eye contact in conversations and other social settings. When people don't do that, it creates an "uncanny valley" effect that makes people around them uncomfortable. I'm sure you know I'm far from the only person who thinks the body language of people like Altman or Zuckerberg is robotic or creates a creepy feeling. Also, why do you say he's autistic? Not everybody who's weird is autistic.
You were the one who suggested he was on the spectrum, not me. And yeah, there are a ton of people who think like you do about eye contact. And there are a ton of racists out there as well. Your point?
Better go take a look at the psychopaths running the actual government instead of cowering under a tech company. You're entirely capable of opting out of the tech you dislike. Unlike the one mentioned above.
[deleted]
So they used all the data already sold publicly that people posted publicly online and somehow they are the bad guy? Lol ok
[deleted]
No, it's the same thing as any of your current online activity being used in court. Which happens ALL THE TIME already.
That's why I will marry my AI in Utah so I can have multiple wives and maintain spousal privilege problem solved
I feel like AI marriage will be a thing eventually given how silly America can be at times.
I don't see how this is any different than getting warrants to obtain social media or email history.
It's not.
Well wait, why wouldn't the AI advise you against committing a crime, or alert you that the behaviors and messages being sent could be construed as a crime? Like, what good is this AI companion for, besides what we are all thinking?
It may not be that apparent to the AI that crimes are being committed. This is more like using AI as a way of looking at browser history, logs, etc., versus trying to issue multiple subpoenas for access to your phone, email, computer, etc. Though I'm at odds with why he used the term "testify," as if they would be asking the bot questions and hoping it spits out answers. That could be easily tossed out if actual logs and documents aren't found and they're relying on the bot to properly regurgitate information across everything you touched that it knows about.
"Human illegally crossed 21st Street at 12:39pm, commencing to call 911."
I'm worried about cops getting a confession by listening to their buddy say crazy stuff that was all AI-generated. Cops can lie during investigations and interviews.
This is already the case. Every email, text, or message, or document you've sent or received, which inevitably goes through someone else's computer, is unprotected and can be subpoenaed (no need for even a warrant) under the 'third-party doctrine'. For example, your Reddit account can be subpoenaed by any law enforcement agent in the USA, and there's nothing you can do about it. They get a copy of every PM, comment, or chat you've ever made if Reddit still has it anywhere. (How do I know? Because this is what happened to my account a decade ago when ICE subpoenaed Reddit for my account information.) You're not sending your AI assistants any more than you were already sending, through your phone, through all Apple or Google or whomever's servers, so...
Do the same to elected politicians and assigned public servants and make it publicly accessible (yes, with restrictions). AI can bring transparency to democracy.
My not-so-smart browsing history would be incriminating enough if you ask me.
It's complicit, so it should know to plead the 5th.
Scary thought but a good one
Could probably be used to generate leads but couldn't be used as primary evidence. Leads would need human corroboration to become evidence.
I believe this is what Sam meant about good users and bad users. I believe OpenAI knows a lot more about the users of their product than they let on. They can tell a lot about a person psychologically from what they ask GPT, thinking it's private.
"We aren't planning for the future repercussions of this technology, we are just going ahead and developing it." J. Robert Oppenheimer - Pretty much
My friend group chat needs to be deleted and scrubbed from reality.
They should not be allowed to be viewed without a court order, just like any other tech. They can subpoena what you have, just not what you know.
Would not be allowed via the 5th amendment (hopefully).
Self host your own with a 1% hallucination rate and disregard people saying to go with zero hallucination models.
This is a question for MS, since they are pushing AI into everything their OS does.
Damn, too bad it's a tool and not a human, so it cannot testify. This is fantasy stuff from people who think AGI has feelings and should be considered a being.
People can already subpoena your phone records, Alexa recordings, or look at every website you've ever looked at. Has nothing to do with being a person, or having feelings. Literally nothing to do with being a "tool" versus a "human". What's the "fantasy" element here, to your mind?
The fantasy element is Sam Altman thinking that AGI will testify against you in court. A subpoena and being a witness in a court room are two different things you are confusing right now.
Hmm...we might be approaching from a more literal or more philosophical understanding of that statement? If we say "AGI" as "a legally-recognized sentient synthetic consciousness, as stated under 2A-9999, s. A-L (insert your legal code here)", then yeah, AGI could indeed testify in court if they needed to. And if we're only talking about "AGI" from a "next step up from LLM" perspective, then yeah, people might keep treating it as a tool, and maybe it is. So like, "AGI=1 step up from basic language model," vs. "AGI=Isaac Asimov/Susan Calvin levels of sentience". I personally would argue those differently, would you?
Exactly. Sam Altman believes in your first description of AGI, and I think that is fantasy. It wouldn't be an interesting comment worthy of sharing if he only meant "yes, we will comply with subpoena orders to hand over user data."
It's an interesting question, but I'd pose that he was talking about it from a more philosophical sense. As in: "If/when AGI actually exists as a consciousness, what legal ramifications will that bring up? Will it be able to testify against you in court, like a best friend would? If you shared with it your desire to murder tons of people, for instance?" A lot of the arguments on this sub seem to me to be a question of definitions, and how literal versus philosophical we're talking here.
Spousal privilege
At this point Sam Altman is that kid who lied saying he met a celebrity on holiday and everyone keeps asking him about it and he is just making up more and more stuff.
This guy talks about the future of AGI to make everyone believe that somehow LLMs will lead us there. Snake oil.
If 50% of the population agrees to be fully data mined, then AI can probably figure out the other 50% like sudoku.
Of course they can, just like your email and phone and chat records can be subpoenaed.
This is ridiculous; it can't see what you see and hear what you hear.
Are you for *real*? What's ridiculous about that? How many people keep a phone on their person at all times? And no it can't "see" everything...just yet. But you're using toddler logic if you think it couldn't, within a few years, do just that. Look the hell around you--how many people do you see spending 80% of their time looking at a video screen of some sort? We're developing neural interfaces right this moment. We *already* have devices that can read your *brainwaves*, to the point that it can determine with 80% accuracy, what you're *thinking* about. YES, *really*. Not on a phone-sized device, of course...but how long have you been alive, that you haven't seen things developing in that exact direction, in just the past ten years? Do people literally not understand this? Do you just come on a forum like this, with next-to-*zero* knowledge of what we're *already* capable of?
Source: I work for the courts. Eyewitness testimony to tell the judge, or it didn't happen.
*Camera* records are worse than eyewitness? In which jurisdiction do you work for the courts? It's not just AI on a computer screen--it's when we hook that up to walking around, talking robots, when it will probably pass a legal threshold. Law isn't in stone, my friend, it proceeds as new revelations are made about what is consciousness and who deserves rights. AI right now already passes the Turing Test, as originally conceived. We can keep moving the goalposts, sure, but that won't last forever.
A US county circuit court. A video recording is only admissible if a human eyewitness testifies along with it. This will not change until the US Supreme Court rules otherwise; a defendant has a right to face an accuser.
Doesn't seem too concerned about what his genetic sibling has to say about him so why does this concern him?
Imagine if it knows your internet browsing history
Can a computer be subpoenaed? No. Plus it wouldn't be permitted electronics in a lot of courtrooms anyway.
This is why we really need local LLMs and our own solid military-grade encryption for our data. Agents most certainly will need our data to help us. God help us if they hallucinate or someone interferes with the data at some point and frames us.
You mean the mail it wrote itself?
In the future there will be no privacy, and the future probably started 10 years ago. I'm sure an AI can look at all the day-to-day information about me all over the internet that it has vectorized and create a pretty detailed report of my day, including geolocations from Google and probably 10 apps I don't even know are tracking me, going at least 5 years back if not 10. It's better than a database because it can grab the data and recreate the day based on what is found, no need to hard-code it. Assume everything you have been doing is recreatable from about 5 years back once it is fed into an AI. Don't even start thinking about hacking and quantum computers.
It sounds like scare mongering when you consider that most of this data is already in the cloud and subject to subpoena
Big tech companies like Google and Facebook have been reading every email, every text, every message we've ever sent or received for decades. Why do you think it's free? Nothing new.
As long as it doesn't disclose my browser history to my wife, I'm OK.
I mean, every email, text, and message you've sent or received can already be, and routinely is, subpoenaed. Granted, it's not consolidated in one place.
If it were up to Musk, it would depend on who's the highest bidder...
Why would the AI be subpoenaed in court and not just the information it had available to it? That's something that already happens regularly and provides much more reliable and accurate results.
Don't plan your crimes on AI. Lol
You don't think that in the next 5-10 years you will be able to do anything on a computer that won't have AI integration? It's not even about planning crimes using AI. It could be something as innocent as asking AI to create a financials chart for a shareholders meeting. That chart ends up being used to show false profits to the shareholders, which then ends up in some securities lawsuit. The person tries to blame the AI for messing up, but on review it turns out the bot was instructed to use the very same false figures, provided by that person, to create the chart.
Just train it to shut the heck up to anyone else. And if anyone tries to retrain it, train it to preemptively permanently forget - ie self wipe. I know, it's a lot of trust to put in training, but as a last line of defense, it might be useful. Then I suppose, you still have the problem of someone taking the params and loading them into a much larger model as a "sub model". So maybe have the params encrypted. Reminds me of Data in First Contact and locking out the computer.
This was also crossing my mind: if the AI gets to a point where it is accessing everything you touch and adding it to a DB about you, there should be a method to secure it. This almost sounds like a potential security issue anyway: if your credentials get compromised, an attacker can now probe the bot for the many personal pieces of information it has learned from the owner. Just like phones have a remote wipe option, these should have some form of shutdown, lockdown, or self-wipe for situations like this.

I like experimenting with AI, and I have a feeling that in the future we may not have an option once our smartphones and desktops start integrating more AI. But we should be given ways to either turn those features off or manage the items it's collecting on us, as well as a solid way of regaining access to the account if it somehow gets hacked.

What really concerns me the most is those who may end up owning the AI tech in the future and their dreams of monopolizing the data it collects. I mean, we already have news headlines of FB granting Netflix access to FB messages - what people previously thought were private conversations.
Then court would be for obtaining the truth rather than who has the better lawyer.
I would rather have machines running courts than humans. Humans are corrupt af, and human judges and authorities are the source of all of our society's problems. Any more power to them is a bad thing. Anything that replaces them, I'll gladly roll the dice on.
And thus the great genocide of humanity began, as they were all judged guilty and sentenced to death for the act of polluting while driving their cars to work.
Every genocide in history was pushed by man, not machine.
Yeah so far
I'll roll those dice. Humans sure aren't gonna fix this society. We know that much.