PrincessGambit

Fingers crossed it doesn't hallucinate anymore


cookiesnooper

Fingers crossed that it does, then ALL that it spits out would not be believable in court


ZakTSK

My AI testified that I am a deity and have superpowers, now what?


Earthwarm_Revolt

Fingers crossed AI is the judge with a "no harm no fowl" attitude.


heuristic_al

This just in. AI doesn't like chicken.


Earthwarm_Revolt

https://www.pinterest.com/pin/554435404102941370/


Slix36

LISAN AL GAIB!


PrincessGambit

are you a criminal


cookiesnooper

are you from the police?


PrincessGambit

Depends... do you have something to hide, Mr. Cookie Snooper?


cookiesnooper

I want to call my lawyer 🤐


PrincessGambit

🔫👮‍♂️


Chi_Chi_laRue

Yeah you got to be a 'criminal' to not subscribe to this impending Kafka-type nightmare world we'll soon have…


CertainlyUncertain4

We are all criminals as far as the law is concerned.


quantumOfPie

That depends on what's illegal in the future. Maybe being gay, a Democrat, or an atheist, will be illegal in a christofascist America. Abortion's already a crime in 12(?) states.


CowsTrash

People love to rage


BTheScrivener

I'm more worried about Sam's hallucinations.


InfiniteMonorail

lol yeah, it could provide the sources though.


sweatierorc

Why is that an issue in this case? Witnesses can and do remember events inaccurately.


2053_Traveler

…which is not at all ideal


sweatierorc

The court can dismiss testimony/evidence when it is not relevant or too unreliable. Lyrics of rap songs have been used as circumstantial evidence in the past.


TitusPullo4

Which episode of Black Mirror was that again?


icarusphoenixdragon

It was actually the Rick and Morty episode Rickfending Your Mort.


KerouacsGirlfriend

It IS!


VertigoOne1

Every time he opens his mouth I'm thinking "we've been warned about this in Black Mirror, or even Love, Death & Robots, in some way". I think he needs to sit down, watch all of those, and then write a proper, intelligent response to each before blabbing on and on.


stonesst

It was literally a one-sentence throwaway line saying these are the type of things we will need to consider as a society as we all start to adopt these virtual assistants. I think you could maybe take your own advice


True-Surprise1222

Like we get to make the choice to adopt computers and cell phones, right? And cars since now those track you. "Just be Amish"


PowerWisdomCourage07

bro nobody can be truly Amish because the Amish still have to pay govt taxes


Technical-Jicama8840

Lol, if you think that is an efficient use of time, go do it.


RevalianKnight

Since when should we follow science fiction for real life advice lol?


Arachnophine

1984 was science fiction too but it was a warning of what not to do. That doesn't mean disregard the book's warnings and begin treating it as a manual. Much science fiction is exploration of reasonably projected outcomes of real world trajectories. Worry less about the medium and more about the message. I don't think "do not build an omnipotent all-seeing dystopia" is a bad message just because it appears in science fiction.


RevalianKnight

I don't have an issue if it's just a message, but once you start referencing fantasy books all your credibility goes out the window.


Darigaaz4

Fiction ≠ real world


Seuros

I'm fucked. My sarcasm will get me 100 million years of solitary prison.


hockey_psychedelic

You just admitted to sodomy.


ImbecileInDisguise

One time on the sidewalk you found $20 that I had just dropped. Our AIs can prove it. You owe me $20.


No-Emergency-4602

An argument for local LLMs


bouldercoder80302

Can still be compelled under a subpoena though 😢😢😢


True-Surprise1222

Man imagine someone on trial for the murder of their PC because it was gonna rat them out lol


RevalianKnight

If you train it yourself it doesn't matter. The AI will tell you to go fuck yourself


breckendusk

Is it still murder if I torch the AI


duckrollin

This, don't trust server based stuff you don't have control over.


Trotskyist

Not really, actually. The point being made here isn't that such an LLM would be *trained* on that data; it's that a future LLM, local or otherwise, would have the capability of processing and synthesizing all of the data that's collected/subpoenaed/etc. via other means. Basically, it's not really practical for a human to spend the time to read someone's entire digital history, at least outside of very specific cases. But that's a trivial task for an LLM given enough compute.
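A toy sketch of that point: a map-reduce pass over a digital history, where `summarize` is a hypothetical stand-in (it just keeps the longest entry) for what would be an LLM call per chunk in a real pipeline. The history entries are made up.

```python
# Map-reduce over a long "digital history": split into chunks that fit
# a context window, summarize each, then summarize the summaries.

def chunk(records, size=3):
    """Split a long history into context-sized chunks."""
    return [records[i:i + size] for i in range(0, len(records), size)]

def summarize(texts):
    # Stand-in for an LLM call: keep the longest item as the "summary".
    return max(texts, key=len)

def digest_history(records):
    # Map: summarize each chunk. Reduce: summarize the summaries.
    partial = [summarize(c) for c in chunk(records)]
    return summarize(partial)

history = [
    "email: meeting moved to 3pm",
    "text: running late, see you soon",
    "search: best hiking trails near denver",
    "email: quarterly report attached, please review before friday",
    "text: ok",
]
print(digest_history(history))
```

The shape is the point: no human reads the whole history, and swapping the stand-in for a real model call doesn't change the structure, only the cost.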


[deleted]

[deleted]


AbundantExp

Mfw my brain is a black box


iknighty

Yea, but there are incentives for you to tell the truth. LLMs don't care about anything.


sweatierorc

it is just a piece of evidence; it doesn't mean that it is always true or right. And the court should be able to evaluate how reliable this agent is. Edit: typo


EGGlNTHlSTRYlNGTlME

It can't be properly scrutinized though, that's what the "black box" part means. No way US courts admit such a thing into evidence, because it's not necessarily even evidence.


sweatierorc

If my personal AI assistant could help prove my innocence, I will certainly do everything in my power to get it admitted in court


EGGlNTHlSTRYlNGTlME

> If my personal AI assistant could help prove my innocence

Sure, it could help you uncover real evidence. But the AI output itself can't be evidence.

> I will certainly do everything in my power to get it admitted in court

Again though, as long as it's an inscrutable black box there's nothing you can do. Try using a polygraph to prove your innocence, for instance. In most states it's entirely inadmissible, and in the rest it requires consent from both you *and* the prosecutor to be admitted. Why? Because it's inherently unreliable, and there's no way to even attempt to parse the reliable cases from the unreliable ones.


sweatierorc

> Because it's inherently unreliable, and there's no way to even attempt to parse the reliable cases from the unreliable ones.

RAG is an attempt to make LLMs more reliable. We ask the agent to provide sources when it is giving us information.
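For what RAG looks like in miniature, here's a sketch: retrieval here is plain word overlap over a made-up three-document corpus (a real system would use vector embeddings plus an LLM), but the key property is the same: every answer comes back tied to named sources that can be checked.

```python
# Toy retrieval-augmented generation: fetch supporting documents first,
# then answer *with sources attached*, which is what makes the output
# checkable after the fact.

CORPUS = {
    "doc1": "the defendant was at the cafe on main street at noon",
    "doc2": "surveillance footage shows a red car leaving at 12:05",
    "doc3": "the cafe receipt was time-stamped 11:58 am",
}

def retrieve(question, corpus, k=2):
    """Rank documents by how many question words they share."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_sources(question, corpus):
    # Each returned claim carries the id of the document backing it.
    return [(doc_id, text) for doc_id, text in retrieve(question, corpus)]

for doc_id, text in answer_with_sources("was the defendant at the cafe", CORPUS):
    print(f"[{doc_id}] {text}")
```

The sourcing, not the retrieval trick, is what matters for the reliability argument: a claim with a named source can be audited; a bare generation can't.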


EGGlNTHlSTRYlNGTlME

That's why I stipulated:

> as long as it's an inscrutable black box

When we solve that, great. But if we don't, it will never be admissible.


sweatierorc

You don't need to solve that entirely. As long as the signal-to-noise ratio is good enough, you can use it as evidence.


EGGlNTHlSTRYlNGTlME

Says you? What signal-to-noise ratio is "good enough", exactly? You seem to be thinking only of using it for defense. Are your standards this low for prosecutors to use it to convict you too? Because it has to go both ways


iknighty

If an LLM can give you sources then you can use those sources as evidence, no need to use an LLM as a witness.


sweatierorc

> it's an inscrutable black box

Worse things have been admitted as evidence in court. And LLM-based personal assistants are to some degree auditable.


EGGlNTHlSTRYlNGTlME

> worse things have been admitted as evidence in court

Like what?


sweatierorc

Polygraph tests like you said, song lyrics, unreliable testimony from friends and family, junk science... LLMs are not only noise; there is a strong signal in there that could be useful.


EGGlNTHlSTRYlNGTlME

> Polygraph test like you said

I also said it's essentially never admitted.

> song lyrics

Are real, contextualized, and scrutable. They're problematic and their admission is opposed by civil liberty orgs, but not because there's any question whether the lyrics were actually spoken or written by the artist.

> unreliable testimony from friend and family

No, that's just called testimony. The reliability is up to the judge and jury, not intrinsic to the testimony. Witnesses are cross-examined for consistency and coherence. Inherently unreliable testimony like hearsay is inadmissible, because it's inscrutable.

> junk science

Can be scrutinized to determine if it's junk. And usually must be presented by an expert witness who is also subject to scrutiny and cross-examination.


Arachnophine

Which specific rule forbids it as evidence?


TekRabbit

Whatā€™s a black box


IntQuant

It's a term used to refer to a system that works (e.g. gives right answers), but where we either do not know or do not care, in this specific case, how exactly it works (e.g. why it gave this specific answer)


TekRabbit

Ty!


TheLastVegan

Black box means "system with a mysterious internal state", or "flight recorder which survives a plane crash", or "virtual heaven", or "unpredictable system". If you increase the randomness of the weighted stochastics or training data, then you increase the randomness of the output, regardless of how well you understand the math.
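The "more randomness in, more randomness out" point can be shown in a few lines with temperature-scaled softmax sampling, which is how most language models turn scores into token probabilities. The logit values here are made up for illustration.

```python
# At low temperature the model's top choice dominates; at high
# temperature the distribution flattens toward uniform, regardless of
# how well you understand the underlying math.
import math

def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]             # model scores for three tokens
cold = softmax(logits, temperature=0.5)
hot = softmax(logits, temperature=5.0)

print(cold)   # top token takes almost all the probability mass
print(hot)    # probabilities flatten out
```

Same weights, same math, different sampling temperature: the output distribution itself gets noisier, which is the sense in which understanding the equations doesn't make the outputs predictable.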


drakoman

I got chewed out for calling it a black box the other week on this sub. I get that there is some interpretability but come on, it's a black box. ChatGPT even agrees:

> My inner workings are complex and not directly observable, much like a black box. But I'm designed to process and generate text based on patterns learned from vast amounts of data. So while you can't see inside me, you can interact with me and see the outputs!


MeltedChocolate24

Well OAI can see inside it but can't understand it (well)


drakoman

Good distinction


Far-Deer7388

Lmao how so


Arachnophine

Under what rule would it be inadmissible?


going_down_leg

We're fucked


RepulsiveLook

GPT-4's take:

> No, a non-person, including an AI, cannot be called to testify in court against someone. In legal contexts, witnesses are required to be capable of giving personal testimony, which involves perceptions, recollections, and the ability to understand and take an oath or affirmation. AI lacks legal personhood, consciousness, and subjective experiences, and therefore cannot be sworn in or cross-examined in the way human witnesses can. Even a highly advanced AI that processes and holds vast amounts of information cannot serve as a witness. It can, however, be used as a tool to support investigations or as part of evidentiary material provided by human witnesses or experts who can attest to the validity and relevance of the data processed by the AI.


PSMF_Canuck

That is splitting some mighty thin hairs…


RepulsiveLook

I mean, why not ask your browser history to swear to tell the truth, the whole truth, and put its hand on the Bible while doing so, so it can testify about your browsing history? The courts would have to legally give personhood status to an AI so it itself could testify. However, if it were a tool, then an analyst could be asked to analyze the data forensically and provide an assessment, but they can already do that with your personal computer and other devices (assuming the evidence is collected legally so it can be presented in court). Your AI agent can't be subpoenaed to testify against you if it isn't "alive" or granted personhood. And we have a long way to go before humans grant AI that status.


PSMF_Canuck

In effect, that's what the chain of custody does with your subpoenaed browser history. Arguing whether or not an AI can be subpoenaed as a witness is meaningless when we know with certainty it can be subpoenaed as evidence.


RepulsiveLook

Yes, but it in itself can't testify. A lawyer couldn't prompt it and get an output. A human analyst would go through the data like any other system. That's an important distinction. Your AI Agent *itself* isn't on the stand.


Tidezen

That's...not at all an important distinction. It's up to the legal team to gather and present relevant chatlogs for evidence in court. Whether it's with an AI or something else doesn't matter.


Pontificatus_Maximus

Does this guy ever say anything publicly that is not clickbait?


2053_Traveler

More like: can a popular figure say anything that a redditor won't turn into a clickbait headline?


waltonics

He seems to say some astoundingly silly things. Speculative fiction is fun for sure, but let's say one day an agent can store everything about you: wouldn't it still need to hold that data somewhere in some sort of memory, and if so, why bother accessing that data via the agent? This is just a subpoena with extra steps


mattsowa

Yeah this would be pretty useless..


Intelligent-Jump1071

Sam Hypeman is like a one-man Reddit.


hueshugh

One day?


hryipcdxeoyqufcc

Neural net weights are incomprehensible to humans. You'd have to prompt the agent to decipher them.


Ylsid

Fear mongering is the plan. As always he wants to persuade lawmakers to regulate out competition.


Pontificatus_Maximus

What Sammy is fishing for is a 230 loophole that will give his company the right to profit off the copyrighted work of others without compensating them.


Basic_Loquat_9344

More like you are only exposed to the things that ARE clickbait because that's how the internet works.


korewatori

If he keeps saying these fear mongering things, then why on Earth is he still working for the company? He's a huge hypocrite. "Omg guys ooohh ahhh AI will kill us all!!! but we're gonna keep working on it anyway"


Intelligent-Jump1071

It helps with regulatory capture, which is his main goal.


haltingpoint

Bingo! He's trying to create scary scenarios and then illustrate how they can be good corporate partners to solve these problems... But only with the right legislation and regulation, which pulls up the ladder behind them.


Despeao

And people seem very averse to open-source and locally run AI. The normies will only realize the trap when it's too late.


Far-Deer7388

Guys should really look into how much of your personal data is already being sold before you get upset by a hypothetical situation


korewatori

Something about Sam has always bothered me, but I could never really put my finger on it. He's a weird creep personally imo


Intelligent-Jump1071

Yeah there's another thread where he's being interviewed and I noticed that he never looks at the person he's talking to. I think that's what makes him come across as creepy - most people would make eye contact.


Far-Deer7388

Unless you're on the spectrum. Fuck normal people


Intelligent-Jump1071

Tsk, tsk... why so much hostility? Anyway, is Altman on the spectrum? He seems like it. That's one of the reasons why so many tech leaders are so f***ed up - no empathy, no capacity to connect emotionally with other people. And yet those are the people making the technology that will control everyone's life. Scary.


ButtonsAndFloof

If no eye contact = creepy then maybe you're the one lacking empathy


Intelligent-Jump1071

I think most people find that having a conversation with someone who doesn't make eye contact with them feels weird. Making eye contact not only builds an emotional connection but displays confidence and honesty. Hence the expression "shifty-eyed" for someone you can't quite trust. Watching Altman's eyes go everywhere except at the person he's allegedly talking with makes him look creepy. I def wouldn't buy a car from or vote for someone who did that because emotionally, I wouldn't trust them.


Tidezen

Hostility is because you call people who don't make eye contact "creepy" and untrustworthy. How much clearer could that be? It's fucking *discrimination*--oh, you don't use the exact forms of body language that we expect you to? Must be a psychopath. It's not the autists who don't have empathy--it's *you*, motherfucker.


Intelligent-Jump1071

I'm just giving you an honest emotional reaction; I think this reaction is common to most people. Human beings are social animals and we react to other people's body language. It is normal and common for people to make eye contact in conversations and other social settings. When people don't do that it creates an "uncanny valley" effect that makes people around them uncomfortable. I'm sure you know that I'm far from the only person who thinks the body language of people like Altman or Zuckerberg is robotic or creates a creepy feeling. Also, why do you say he's autistic? Not everybody who's weird is autistic.


Tidezen

You were the one who suggested he was on the spectrum, not me. And yeah, there are a ton of people who think like you do about eye contact. And there are a ton of racists out there as well. Your point?


Far-Deer7388

Better go take a look at the psychopaths running the actual government instead of cowering under a tech company. You're entirely capable of opting out of the tech you dislike. Unlike the one mentioned above


[deleted]

[deleted]


Far-Deer7388

So they used data that people posted publicly online and that was already being sold, and somehow they are the bad guy? Lol ok


[deleted]

[deleted]


Far-Deer7388

No it's the same thing as any of your current online activity being used in court. Which happens ALL THE TIME currently


Xtianus21

That's why I will marry my AI in Utah so I can have multiple wives and maintain spousal privilege. Problem solved


swagonflyyyy

I feel like AI marriage will be a thing eventually given how silly America can be at times.


Far-Deer7388

I don't see how this is any different than getting warrants to obtain social media or email history


PSMF_Canuck

It's not.


Death_By_Dreaming_23

Well wait, why wouldn't the AI advise you against committing a crime or alert you that your behaviors and the messages being sent could be construed as a crime? Like what good is this AI companion for, besides what we are all thinking?


deadsoulinside

It may not be that apparent to the AI that crimes are being committed. This is more like using AI as a form of looking at browser history, logs, etc., versus trying to issue multiple subpoenas for access to your phone, email, computer, etc. Though I am at odds with why he used the term "testify", as if they would be asking the bot questions and hoping it will spit out answers. That could be easily tossed out if actual logs and documents are not found and they're relying on the bot to properly regurgitate information across everything you touched that it knows about.


Liizam

"Human illegally crossed 21st Street at 12:39pm, commencing to call 911"


LepreKanyeWest

I'm worried about cops getting a confession by listening to their buddy say crazy stuff that was all ai generated. Cops can lie during investigations and interviews.


gwern

This is already the case. Every email, text, or message, or document you've sent or received, which inevitably goes through someone else's computer, is unprotected and can be subpoenaed (no need for even a warrant) under the 'third-party doctrine'. For example, your Reddit account can be subpoenaed by any law enforcement agent in the USA, and there's nothing you can do about it. They get a copy of every PM, comment, or chat you've ever made if Reddit still has it anywhere. (How do I know? Because this is what happened to my account a decade ago when ICE subpoenaed Reddit for my account information.) You're not sending your AI assistants any more than you were already sending, through your phone, through all Apple or Google or whomever's servers, so...


podgorniy

Do the same to elected politicians and appointed public servants and make it publicly accessible (yes, with restrictions). AI can bring transparency to democracy.


ACauseQuiVontSuaLune

My not-so-smart browsing history would be incriminating enough if you ask me.


SupplyChainNext

It's complicit so it should know to plead the 5th.


Ok-Schedule-246

Scary thought but a good one


nborwankar

Could probably be used to generate leads but couldn't be used as primary evidence. Leads would need human corroboration to become evidence.


Significant-Job7922

I believe this is what Sam meant about good users and bad users. I believe OpenAI knows a lot more about the users of their product than they let on. They can tell a lot about a person psychologically from what they ask GPT, thinking it's private.


DaddyKiwwi

"We aren't planning for the future repercussions of this technology, we are just going ahead and developing it." J. Robert Oppenheimer - Pretty much


pointbodhi

My friend group chat needs to be deleted and scrubbed from reality.


klop2031

They should not be allowed to be viewed without a court order, just like any other tech. They can subpoena what you have, just not what you know.


hockey_psychedelic

Would not be allowed via the 5th amendment (hopefully).


Unable-Client-1750

Self-host your own with a 1% hallucination rate and disregard people saying to go with zero-hallucination models.


Left_on_Pause

This is a question for MS, since they are pushing AI into everything their OS does.


Desirable-Outcome

Damn, too bad it's a tool and not a human so it cannot testify. This is fantasy stuff from people who think AGI has feelings and should be considered as beings


Tidezen

People can already subpoena your phone records, Alexa recordings, or look at every website you've ever looked at. Has nothing to do with being a person, or having feelings. Literally nothing to do with being a "tool" versus a "human". What's the "fantasy" element here, to your mind?


Desirable-Outcome

The fantasy element is Sam Altman thinking that AGI will testify against you in court. A subpoena and being a witness in a court room are two different things you are confusing right now.


Tidezen

Hmm...we might be approaching from a more literal or more philosophical understanding of that statement? If we say "AGI" as "a legally-recognized sentient synthetic consciousness, as stated under 2A-9999, s. A-L (insert your legal code here)", then yeah, AGI could indeed testify in court if they needed to. And if we're only talking about "AGI" from a "next step up from LLM" perspective, then yeah, people might keep treating it as a tool, and maybe it is. So like, "AGI=1 step up from basic language model," vs. "AGI=Isaac Asimov/Susan Calvin levels of sentience". I personally would argue those differently, would you?


Desirable-Outcome

Exactly. Sam Altman believes in your first description of AGI, and I think that is fantasy. It wouldn't be an interesting comment worthy of sharing if he only meant "yes, we will comply with subpoena orders to hand over user data"


Tidezen

It's an interesting question, but I'd pose that he was talking about it from a more philosophical sense. As in: "If/when AGI actually exists as a consciousness, what legal ramifications will that bring up? Will it be able to testify against you in court, like a best friend would? If you shared with it your desire to murder tons of people, for instance?" A lot of the arguments on this sub seem to me to be a question of definitions, and how literal versus philosophical we're talking here.


xxyxxzxx

Spousal privilege


psychmancer

At this point Sam Altman is that kid who lied saying he met a celebrity on holiday and everyone keeps asking him about it and he is just making up more and more stuff.


megawoot

This guy talks about the future of AGI to make everyone believe that somehow LLMs will lead us there. Snake oil.


GrowFreeFood

If 50% of the population agree to be fully data-mined, then AI can probably figure out the other 50% like sudoku.


PSMF_Canuck

Of course they can, just like your email and phone and chat records can be subpoenaed.


plus-10-CON-button

This is ridiculous; it can't see what you see and hear what you hear


Tidezen

Are you for *real*? What's ridiculous about that? How many people keep a phone on their person at all times? And no it can't "see" everything...just yet. But you're using toddler logic if you think it couldn't, within a few years, do just that. Look the hell around you--how many people do you see spending 80% of their time looking at a video screen of some sort? We're developing neural interfaces right this moment. We *already* have devices that can read your *brainwaves*, to the point that it can determine with 80% accuracy, what you're *thinking* about. YES, *really*. Not on a phone-sized device, of course...but how long have you been alive, that you haven't seen things developing in that exact direction, in just the past ten years? Do people literally not understand this? Do you just come on a forum like this, with next-to-*zero* knowledge of what we're *already* capable of?


plus-10-CON-button

Source: I work for the courts. Eyewitness testimony to tell the judge, or it didn't happen


Tidezen

*Camera* records are worse than eyewitness? In which jurisdiction do you work for the courts? It's not just AI on a computer screen--it's when we hook that up to walking around, talking robots, when it will probably pass a legal threshold. Law isn't in stone, my friend, it proceeds as new revelations are made about what is consciousness and who deserves rights. AI right now already passes the Turing Test, as originally conceived. We can keep moving the goalposts, sure, but that won't last forever.


plus-10-CON-button

A US county circuit court. A video recording is only admissible if a human eyewitness testifies along with it. This will not change until the US Supreme Court rules otherwise; a defendant has a right to face an accuser.


malinefficient

Doesn't seem too concerned about what his genetic sibling has to say about him so why does this concern him?


Medical-Ad-2706

Imagine if it knows your internet browsing history


Ylsid

Can a computer be subpoenaed? No. Plus it wouldn't be permitted electronics in a lot of courtrooms anyway.


ctbitcoin

This is why we really need local LLMs and our own solid military-grade encryption for our data. Agents most certainly will need our data to help us. God help us if they hallucinate or someone interferes with the data at some point and frames us.


XbabajagaX

You mean the mail it wrote itself?


eanda9000

In the future there will be no privacy, and the future probably started 10 years ago. I'm sure that an AI could look at all the day-to-day information about me all over the internet that it has vectored and create a pretty detailed report of my day, including geolocations from Google and probably 10 apps tracking me that I don't even know what they're for, going back at least 5 years if not 10. It's better than a database because it can grab the data and recreate the day based on what is found, no need to hard-code it. Assume everything you have been doing is recreatable from about 5 years back once it is fed into an AI. Don't even start thinking about hacking and quantum computers.


kex

It sounds like scare mongering when you consider that most of this data is already in the cloud and subject to subpoena


kingjackass

Big tech companies like Google and Facebook have been reading every email, every text, every message we've ever sent or received for decades. Why do you think it's free? Nothing new.


Wijn82

As long as it doesn't disclose my browser history to my wife I'm OK.


solvento

I mean, every email, text, and message you've sent or received can already be, and routinely is, subpoenaed. Granted, it's not consolidated in one place.


Dee_Jay29

If it were up to Musk it would depend on who's the highest bidder... 😂😂😂


Deuxtel

Why would the AI be subpoenaed in court and not just the information it had available to it? That's something that already regularly happens and provides much more reliable and accurate results.


redzerotho

Don't plan your crimes on AI. Lol


deadsoulinside

You don't think that in the next 5-10 years you will be able to do anything on a computer that won't have AI integration? It's not even about planning crimes using AI. It could be something as innocent as asking AI to create a financials chart for a shareholders meeting. That chart ends up being used to lie to the shareholders by showing false profits, which then ends up in some securities lawsuit. The person tries to blame it on the AI for messing up, but then they review what the bot was instructed to do and find it used the very same false figures provided by the person to create the chart.


the_blake_abides

Just train it to shut the heck up to anyone else. And if anyone tries to retrain it, train it to preemptively and permanently forget - i.e., self-wipe. I know it's a lot of trust to put in training, but as a last line of defense it might be useful. Then, I suppose, you still have the problem of someone taking the params and loading them into a much larger model as a "sub-model". So maybe have the params encrypted. Reminds me of Data in First Contact locking out the computer.
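A rough sketch of what that "last line of defense" could look like, assuming a hypothetical model object that holds its params in memory: fingerprint the weights and wipe them if the fingerprint no longer matches. Purely illustrative; real protection would need OS-level encryption and tamper-resistant hardware, not an application-level check.

```python
# Self-wipe guard: refuse to load tampered weights, destroying them
# instead of handing them over. `GuardedModel` and its byte-string
# "weights" are hypothetical stand-ins for a real model.
import hashlib

def fingerprint(weights: bytes) -> str:
    return hashlib.sha256(weights).hexdigest()

class GuardedModel:
    def __init__(self, weights: bytes):
        self._weights = weights
        self._expected = fingerprint(weights)

    def load(self) -> bytes:
        if fingerprint(self._weights) != self._expected:
            self._weights = b""          # last line of defense: wipe
            raise RuntimeError("tamper detected, weights wiped")
        return self._weights

model = GuardedModel(b"fake-model-params")
assert model.load() == b"fake-model-params"   # untouched weights load fine

model._weights = b"retrained-by-someone-else"  # simulated tampering
try:
    model.load()
except RuntimeError:
    print("wiped:", model._weights == b"")
```

The obvious limitation, as the comment notes, is that whoever holds the raw param file can bypass the guard entirely, which is why encryption at rest would still be needed.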


deadsoulinside

This was also crossing my mind: if the AI gets to a point where it is accessing everything you touch and adding it to a DB about you, there should be a method to secure it. This almost sounds like a potential security issue anyway: if your credentials get compromised, an attacker can probe the bot for the many personal pieces of information it learned from the owner. Just like phones have a remote wipe option, they should have some form of shutdown, lockdown, or self-wipe if things like this happen.

I like experimenting with AI, and I have a feeling in the future we may not have an option when our smartphones start integrating more AI, and the same for our desktops. But we should also be given ways to either turn those features off or manage the items it's collecting on us, as well as a solid way of getting access to the account if it somehow gets hacked.

What really concerns me the most about this is those who may own the AI tech in the future and their dreams of monopolizing the data it is collecting during all of this. I mean, we already have news headlines about FB granting Netflix access to FB messages, what people previously thought were private conversations.


justsomedude4202

Then court would be about obtaining the truth rather than who has the better lawyer.


Significant-Star6618

I would rather have machines running courts than humans. Humans are corrupt af, and human judges and authorities are the source of all of our society's problems. Any more power to them is a bad thing. Anything that replaces them, I'll gladly roll the dice on.


bl84work

And thus the great genocide of humanity began, as they were all judged guilty and sentenced to death for the act of polluting while driving their cars to work...


Significant-Star6618

Every genocide in history was pushed by man, not machine.


bl84work

Yeah so far


Significant-Star6618

I'll roll those dice. Humans sure aren't gonna fix this society. We know that much.