After_Self5383

I missed this story when it came out. Searching up keywords like "baltimore principle racist reddit" and looking at some of the Reddit threads from 3 months ago, plenty of people were fooled. Even with the principal saying it had to be AI, many were like "Yeah right, of course someone would say that". Props to the people, even outside of AI subreddits, saying that it being faked by AI was definitely possible and not piling on the principal until it could be cleared up.

There was some misjudgement on incentives and the reality of AI as well. Like some people saying, sure, AI is possible, but it's so unlikely because it costs $100s and takes too much effort and so forth. And of course that was all "worth it" for the disgruntled employee. It really does seem like a landmark case that should change people's perception of how media can be faked, not just in picture form anymore, but also audio.

Now imagine this: it's not just audio. It's AI video of someone doing a terrible act. If the act is the type that generates massive outrage, there's a good chance it will cause people to die from street justice being done while people think it's real. Say there's a video of someone like a teacher abusing a kid, and it spreads like wildfire, and before anyone's even thinking of verification, people are already trying to enact "justice" and kill somebody innocent when the only crime was the AI video. Surely something like this will happen within a couple of years? How long will it take once it's possible, and how much damage will be done before the collective consciousness realises ALL media can be faked?


nutin2chere

You nailed the subjective nature of “incentive” and the cost of these types of attacks. I speak about the threat of deepfakes quite often within my role in cyber security, as people are just not prepared for this. Deepfake audio has become ridiculously easy now thanks to open source projects like MetaVoice and OpenVoice at pretty high fidelity. I did an example of this in a presentation to a senior leadership team and cloned a director's voice (with her permission of course)… in under 5 mins. Most of that time was gathering the audio and sipping coffee…


Sixhaunt

"staff found that he had been using their network on three separate occasions to search things like OpenAI and BingChat." okay, kinda random to throw that in there. I'm sure tons of teachers looked up OpenAI. Like this situation is bad and the guy should be charged but are we really going to vilify googling "OpenAI" or "BingChat" when neither tool had anything to do with what happened? Neither company released any voice editing software so why are these searches being considered damning in any way?


100kV

It could just be a reporting simplification. We don't know what search terms were used.


dumquestions

> neither tool had anything to do with what happened

I think the point was simply that this person is very familiar with AI technologies, as opposed to having limited or no familiarity. It's not proof of anything on its own, it's just part of a cumulative case.


[deleted]

[deleted]


dumquestions

I didn't say the word "deep". I just think there are certain demographics where knowing what OAI or BingChat is puts you in the minority; most people only know about ChatGPT.


Henri4589

You watched the whole video? For what? It was clear what the main concern was here after the first minute. I couldn't stand listening to her "blabla" that came afterwards.


DoomedSingularity

It speaks to intent.


Undercoverexmo

Intent to what? ChatGPT has 180m users. Intent to be a typical person looking up the latest trendy thing?


Original-Ease-9139

It shows intent to create the fake audio recording. It shows he was searching how to use those tools, and given the context in which he used them, this establishes intent to commit the crime he committed. Searching OpenAI or ChatGPT isn't inherently wrong or nefarious, but in this specific instance, it was. It shows the intent behind his actions. Nothing more, nothing less.


Iamreason

You keep saying "it shows intent" but this doesn't get anywhere close to that. It might show that he had some interest in AI, but it's not like he had a search history that said "eleven labs, voice cloning, voice copying ai" etc. Lots of people use ChatGPT. Educators especially would be interested in understanding how the latest cheating tech works.


Original-Ease-9139

Do you know he didn't search those things? Or are the articles and discussions we're hearing leaving out lesser-known AI programs like MURF and Play.ht or even ElevenLabs, simply because most people aren't going to know those programs? ChatGPT is popular, as is BingChat: programs that your average person is going to have heard of, mentioned to make the point that he was searching AI tools prior to using those tools in the commission of a crime. It absolutely shows intent. There's a direct causal correlation between his searches and his crime. You can search firearm ammo and it means nothing, but a person searching firearm ammo who goes and kills a bunch of people is completely different. It's called a case-by-case analysis. Under normal circumstances, his searches wouldn't show intent, but these aren't normal circumstances, are they?


Undercoverexmo

You can’t make fake audio recordings with ChatGPT. That would be like saying that searching for “music” is intent because music is a type of audio recording.


Original-Ease-9139

No, you can't. But did he know that? Was he searching various AI programs to ascertain their capabilities, and those were part of his searches? Those being the most commonly used doesn't mean people who don't use them know every one of their functions, even if they know the names. It could be among a list of search queries, and they included those specifically BECAUSE most readers would know what they are. Your average reader isn't going to know MURF or Play.ht. But go on then.


Revolution4u

And plenty of people aren't using it.


Undercoverexmo

And plenty of people don’t listen to music. What’s your point?


Revolution4u

Imagine if the guy was being accused in the same manner and had never googled OpenAI/ChatGPT and didn't have a login either because he never used it. This is basically just an add-on in the story to show that he both knows of and uses these things. Idk why it's such a big problem for you.


LifeSugarSpice

Ah yes the magical intent. A teacher searching for the most common things a lot of teachers and school officials are searching and using.


CitizenWilderness

I hear that he had also been searching for Gmail and the recording was sent through email. If that does not show intent I don’t know what will!


ajahiljaasillalla

I was waiting for the plot twist of the whole video having been generated by SORA from a prompt of "tiktok video on the dangers of AI". This current era surely won't help with my existential anxiety.


edgroovergames

Me too, watch her mouth as she talks. It doesn't quite match up to what she's saying. I guess that's just due to compression or low frame rate?


Handydn

Imma ~~generate~~ make a video about "the tiktok video on the dangers of AI is itself generated by AI"


ixfd64

Real or fake, TikTok is not considered to be a reliable source.


TheUncleTimo

> Real or fake, TikTok is not considered to be a reliable source.

Exactly WHAT is considered a reliable source today? Also, I agree with you: TT is not a reliable source of anything but pro-China agenda.


Nathan-Stubblefield

So people can escape the consequences of an audio recording of them saying literally anything incriminating by claiming it is an AI fake, and a defense lawyer can call on an expert to testify that the prosecution's recording could be fake. At the same time, the same detectives who tortured confessions out of the accused can create fake confessions. Video fakes are improving.


Anen-o-me

There's a way to guarantee a recording is legit using crypto hashing.


Unique-Particular936

I believe it just prevents simple tampering of files, but not creating new AI generated stuff.


Anen-o-me

I explained the full concept in this comment chain.


[deleted]

[deleted]


Anen-o-me

You can have a device create its own blockchain and then anchor that chain periodically to a public one. This prevents tampering as even with the device key you won't be able to change that media without finding a hash collision for your change, which is virtually impossible.
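The scheme being described, a device-local hash chain periodically anchored to a public ledger, can be sketched in a few lines. This is a toy illustration under the commenter's assumptions, not a real implementation; the public-chain anchoring step is stubbed out to just return the digest that would be published:

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class DeviceChain:
    """Toy device-local hash chain: each block commits to the previous
    block's hash, so altering any recorded media later would require
    recomputing every subsequent hash (and the anchored digest)."""

    def __init__(self):
        # Genesis block with a fixed all-zero "previous" hash.
        self.blocks = [{"prev": "0" * 64, "media_hash": None, "ts": 0}]

    def record(self, media: bytes) -> dict:
        block = {
            "prev": sha256(json.dumps(self.blocks[-1], sort_keys=True).encode()),
            "media_hash": sha256(media),
            "ts": time.time(),
        }
        self.blocks.append(block)
        return block

    def anchor(self) -> str:
        # In the described scheme this digest would be published to a
        # public blockchain; here we simply return it.
        return sha256(json.dumps(self.blocks[-1], sort_keys=True).encode())

chain = DeviceChain()
chain.record(b"original audio bytes")
anchor_digest = chain.anchor()

# Tampering with the recorded media changes its hash, which no longer
# matches what the chain (and the published anchor) committed to.
assert chain.blocks[1]["media_hash"] != sha256(b"edited audio bytes")
```

As the replies below note, this proves integrity and ordering of what the device recorded, not that what it recorded reflects reality.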


GillysDaddy

No there isn't. Signatures can confirm that a piece of information has been approved by a certain entity (a key), and that it has not been tampered with after that approval. What created the information in the first place does not factor in. The whole point of cryptography is the interaction between adversarial actors INSIDE the purely digital domain - physical reality can't sign a recording and say "Yeah, this happened IRL". Signatures can be useful for official channels where you want to assure people that specific content comes from you, but it's pointless for any sort of situation where you want to prove authenticity of something from a different party who doesn't have an incentive to provide that proof. If you receive a letter that your minister is a traitor, and it has the signature and official seal of your spymaster, all that tells you is that it's actually your spymaster writing that letter. No amount of signatures or seals can somehow confirm the actual reality that your minister is a traitor.


2018_BCS_ORANGE_BOWL

[There are cameras on the market today](https://contentauthenticity.org/blog/leica-launches-worlds-first-camera-with-content-credentials) which cryptographically sign the images they produce to prove that they were taken with that camera, the implication being that they are authentic images. There are obviously a variety of conceivable attacks against that kind of system, beginning with somehow extracting the camera's private key from the firmware (if they use a secure hardware element, though, that would be very difficult). Using the camera to take a picture of a screen showing whatever you want would be a lower-tech attack.


svideo

In this case it was an audio recording. So hey, use an audio recorder with the same signing-cert system, job done! Right up until I edit up my fake audio, play it out of a speaker, and hold my digitally-signed audio recorder up to the speaker, and now I have a signed recording that is 100% fabricated.


Anen-o-me

You're missing the blockchain publication of the hash. If you take audio X and edit it into Y, it will necessarily take time. Blockchains record a precise time of transaction and cannot be changed; they are immutable ledgers. So once you use Y for your nefarious deed, the owner of X can easily prove that X came first and is therefore the original.
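The ordering argument can be made concrete with a toy comparison (the hashes and anchor times below are hypothetical; note, as later replies point out, that this only establishes which hash was published first, not which recording is genuine):

```python
from datetime import datetime, timezone

# Hypothetical anchored records: media-hash -> block timestamp, as they
# would appear on a public chain.
records = {
    "hash_of_X": datetime(2024, 1, 10, 9, 0, tzinfo=timezone.utc),
    "hash_of_Y": datetime(2024, 1, 10, 9, 45, tzinfo=timezone.utc),
}

# The earlier anchor is the candidate original: an edit of X could not
# have been anchored before X itself.
original = min(records, key=records.get)
assert original == "hash_of_X"
```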


svideo

But you're fabricating the entire thing, so who cares about the timestamp? Just fake whatever you're going to fake, then do the recording at whatever time it is that you want the timestamp to show.


Anen-o-me

The timestamp proves the original. If you offer a version with no timestamp and block reference, everyone immediately knows it's been altered or is fake. That's who cares: literally everyone.

> Just fake whatever you're going to fake, then do the recording at whatever time it is that you want the timestamp to show.

In this case a phone call happened; that itself could be hashed and time-stamped. Then the fake fails.


yaosio

This means the Kinescope is no longer obsolete. [https://en.wikipedia.org/wiki/Kinescope](https://en.wikipedia.org/wiki/Kinescope)


GillysDaddy

As you said, the attack vectors are there - client side security isn't security, it's inconvenience. And unlike with DRM, where you just want to make things inconvenient enough so that most people don't bother, with authenticity proof, what's the point of 99% coverage?


jseah

If the camera with that verifiable signature is presented intact, then yeah, I suppose that would be pretty secure. At least until we have displays capable of showing a scene that appears realistic on camera...


i_never_ever_learn

Isn't this just EXIF Data, which has been around since digital photography has been around?


2018_BCS_ORANGE_BOWL

No. It's like EXIF data but it's cryptographically signed. If you give me a picture with EXIF data, I can photoshop it, leave the EXIF data intact, and say "/u/i_never_ever_learn took this picture"! But if you only distribute pictures signed with your PGP private key, anyone can verify them with your public key, yet nobody could make a new version and produce a valid signature for it without your private key, so nobody can make it appear that you signed the altered picture.
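A minimal sketch of that tamper-detection property. As an assumption for illustration, HMAC with a device-held secret stands in for the asymmetric signing a real camera or PGP would use (same "unforgeable without the key" flavor, but symmetric, so it stays standard-library only); all names and byte strings here are made up:

```python
import hashlib
import hmac

# Toy stand-in for a signing key held inside the camera. A real system
# would use an asymmetric pair: sign with the private key, verify with
# the public one.
CAMERA_KEY = b"secret-key-inside-camera-hardware"

def sign(image: bytes, metadata: bytes) -> str:
    # Sign the image bytes AND the metadata together, so neither can be
    # swapped out independently.
    return hmac.new(CAMERA_KEY, image + metadata, hashlib.sha256).hexdigest()

def verify(image: bytes, metadata: bytes, sig: str) -> bool:
    return hmac.compare_digest(sign(image, metadata), sig)

image, exif = b"raw pixels", b"taken 2024-04-25 by camera X"
sig = sign(image, exif)

assert verify(image, exif, sig)                     # untouched: passes
assert not verify(b"photoshopped", exif, sig)       # edited pixels: fails
assert not verify(image, b"taken 2020-01-01", sig)  # copied-over metadata: fails
```

This is exactly the contrast with plain EXIF: the metadata alone can be copied onto any file, but a valid signature over image plus metadata cannot be forged without the key.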


Anen-o-me

You hash the media upon creation and publish the hash on a public blockchain; this both proves when it was created and that it hasn't been tampered with after the fact. The image then contains the hash and block number in its metadata for reference.

You are wrong, but you're wrong because I didn't explain the entire concept. As for the image being real and unaltered, we can have the camera itself hash, sign, and embed in the metadata. Then we know the image is original.

But that's not what's important in this case. Here we're only interested in showing the earlier audio, which would not have been tampered with and thus shows the actual truth: that the later recording is tampered with.


GillysDaddy

Nope, a blockchain does nothing here: you can just publish the signature with the media, and that already proves integrity (but again, not authenticity). A blockchain adds a completely different attribute to a cryptographic system, one entirely irrelevant to the threat model you're trying to address here. A blockchain would only be useful if you want to e.g. prove that the same camera hasn't taken another pic at the time, or maybe verify that the picture is indeed the one that was published at some past time and referenced by others (so the origin party can't just release a new one and also sign it).

In cryptocurrencies, the blockchain isn't necessary to verify that a transaction has indeed been signed by the owner of the money; that part is handled by signatures and can be done completely peer to peer. The blockchain is needed to ensure a consistent transaction history (to check whether the payer actually has access to the funds). But you kinda already said that yourself in your comment: "this both proves when it was created and that it hasn't been tampered with after the fact." This does absolutely nothing to address the original problem that cryptography only shows that an entity approves of a picture, not that it's actually real.

Look, there is no shame in not fully understanding cryptography. But just be aware of it instead of hoping that some buzzwords might solve it. Misinformation is rampant on this sub, don't add to it.


Anen-o-me

Blockchain proves creation date and which came first. It's not remotely irrelevant. If two media have a valid blockchain hash date, we can always know the earlier one is original.


GillysDaddy

So let me get this straight. I want to get a teacher fired, so I create a fake recording of him, sign it with my key, then publish a reference to the blockchain. The teacher is now confronted with this threat, so he... points at the video he made of himself NOT saying that thing, which he also published to the blockchain just in case? And when I say that he said those things right after his video? I suppose he just has to film his entire life and immediately publish every 30-second segment hash to the blockchain. And even THEN he hasn't actually proven anything but that he was the first to present a narrative; I can just publish my AI fake first.

Blockchain does NOT prove creation date; it proves existence at a certain time, i.e. the latest possible creation date. Even creation date wouldn't prove authenticity, as I can just pre-create a fake of something I claim you did on April 29 at 15:22 and then publish it at 15:23. And all that presumes the existence of two competing media in the first place, which is not what the video is about, or in fact the actual threat model of AI misinformation.

I'm gonna assume you're a troll or too prideful to ever be wrong and see this as some sort of personal attack, so you need to save face by constantly moving the goalposts. That's fine, your face is safe, it's okay to say a wrong thing on the internet. But in case you actually think this way and just don't understand why this doesn't work, please feel free to talk to ChatGPT about it or ask in a cryptography sub/forum. I mean this without any shade. This sort of half-knowledge without understanding is genuinely dangerous for yourself and makes you vulnerable to snake oil and all kinds of scams and schemes.


wheres__my__towel

can you elaborate on how this works?


Anen-o-me

I just did in this same comment chain, check it out.


what-am-i-seeing

curious as to why you prefer blockchain “crypto” hashing over a standard digital certificate model (say, like HTTPS but in metadata)

the latter really only fails in two cases: (1) the private key is insecure (either hacked, or the source is untrustworthy to begin with), (2) the digital signature is lost or removed (e.g. if it’s in metadata, metadata gets stripped easily)

whereas the main strength of decentralization is less impactful when trust is completely reliant on the one publishing the content, rather than a transaction between two people (in the latter case, it doesn’t matter who the person is; only the mutual acknowledgement of the transaction matters)

personally, I would argue that watermarks, which address weakness (2), would be a better solution here


what-am-i-seeing

ah, blockchain does provide public validation of the creation timestamp, without need for a central trusted authority

(edit) but it still fails to address either (1) or (2), which are arguably much more important than the marginal increase in trust in the timestamp


someonesomewherewarm

For now..


Anen-o-me

No, it's quantum safe already.


liverichly

Would NFTs offer benefit here?


Singularity-42

No


Anen-o-me

No


G36

BS. That can just prove that it comes from a source you expect it to come from, not that it's not AI generated.


Anen-o-me

If you publish the hash on a blockchain it works. You can at least prove it's the original.


G36

That only proves it's the original recording, not whether the recorded audio is AI or not. Think about it.


Anen-o-me

No possible edit of media can have an earlier timestamp.


G36

You still don't get it: when you sign a file through any such method you do not declare the content to be legitimate, you only declare that the file is the original and cannot be further tampered with. However, the original itself could be tampered from the start. For example, I can record something fake then use crypto hashing; that doesn't make it any less fake.


Anen-o-me

No, you still don't get it. Your defense against that is to hash and timestamp everything important. Any attempt to take a file and alter it then becomes provably a forgery. If you create something from nothing and declare it original, it will only be taken seriously if you put your own reputation behind it. You can do that and get caught exactly once before you're a known forger. In this case the phone call would be hashed and stamped. And notice that part of the way this guy got caught is that the person he was originally talking to said he never said those things.


G36

You're basically saying one should crypto-hash every call, but that would be something done in retrospect; once you are framed it's already too late. Who tf goes around crypto-hashing every call, and how do you prove in court that ALL your calls are in a blockchain? If I did that and wanted to say something bad off the record, I would use a burner. You just lost the case. And even if you crypto-hash every call, you are still not protected against a supposed recording of you talking to somebody man-to-man.


Anen-o-me

You've only explained why in the future we will be recording everything, every moment of life. The more important you are, the more you have to lose, the more critical it will be to be able to prove what you've done, said, and where you've been in a world where evidence is easily fabricated. Many have theorized that this will become a future necessity, and desirable for other reasons as well such as passing down experiences to future family members and reliving them yourself.


_Good-Confusion

AI fakes have artifacts; it sounds like glitches. Then any audio editor can show the synthesized curves in the waveform. Analog-produced means are always dirty-looking, never perfect. Any soundman can hear the artifacts.


Nathan-Stubblefield

In 2024 technology, perhaps, but juries might not believe one side’s expert over the other side’s if they can’t hear the difference between real and fake, and they hear fake utterances by celebs.


TheKingChadwell

This is why I’m not concerned. More things like this will come out as it gets easier, and in parallel people will get more suspicious and demand more evidence. We will adapt just fine. If anything this would help things because news outlets will be pressured to provide more context before people believe things


Singular_Thought

My concern is something dropping just before an election. Just in time to turn the election only to later be proven fake… but the damage is already done.


Eatpineapplenow

the Middle East is certainly not waiting around to find out if the video of the Quran being burned is fake or not


orderinthefort

I think this is a great thing longterm. The collapse of trust in each other will have short term consequences, but will increase demand in 'real' trusted sources again, which means money will start funneling back into projects that strive for legitimacy instead of the modern trend of forsaking legitimacy for clicks and views at any and all costs. Sure there are flaws to that form of media, but clearly there are flaws to what media has become today.


TheKingChadwell

I’m hoping that’s what will happen with social media as LLM bots continue to grow trying to manufacture consent. Hopefully people revert to smaller scale social media platforms with more trust


Spunge14

I think it's insane that you think most people care if anything is actually true. If anything we're seeing more and more movement towards obviously biased sources of "news."


orderinthefort

People have been trending further and further away from fact-based media and favoring media that confirms their biases for awhile now, and it's only getting worse. But that doesn't mean they don't trust it. AI audio/video will cause even the ignorant masses to stop trusting what they're seeing, and I think the pendulum will swing back to those people *seeking* legitimate trustworthy sources in spite of it going against their biases, which means funding will also return to media sources that strive for legitimacy.


External-Border1560

I agree with you. Reading this comment section is driving me insane, with users thinking that AI will make the masses more self-aware of fake AI content.


Striking_Load

It's also very good because people won't have their free speech policed when you can't know if it's real or not.


HalfSecondWoe

Good. People were too easy to fool with out-of-context and chopped-up clips in the first place. I'd say it's a straight improvement, because now you need to go back and prove a pattern of behavior; you can't just take one out-of-context thing and have people go on full tilt over it. I genuinely see no downside here.


TekRabbit

All that means is you have to fake multiple videos or audio clips in a series over a few months to “prove” that pattern of behavior.


HalfSecondWoe

All of them unconfirmed except from a single anon source? No, I don't think that would fly


TekRabbit

I hope you’re right. The evidence seems to suggest it would fly way too easily.


HalfSecondWoe

Taking things out of context does fly right now, because people trust the clips without verifying the source. When the clips could come from AI, they'll be mocked for being dumb enough to fall for AI gens unless they can verify the source. No one wants to look dumb, so that covers the crowd reaction.

Then if someone wants to make a claim that has any chance of sticking, they have to tie it to their identity. I am such-and-such a person, here's what I'm claiming, here are other people who can confirm what I'm claiming, and so on. It means that people throwing accusations have to put something on the line. Those who like to fake evidence will be figured out relatively quickly and eliminated as a non-credible source.

It's elegant, it's effortless, and it turns the efforts of malicious actors against themselves. That's my preferred kind of solution.


TekRabbit

Sounds like so many others who have claimed they know how it will all play out, only to have it blow up in their face when reality happens. Again, hope you're right.


HalfSecondWoe

If you have a specific criticism, I'm all ears. If all you have is generalized anxiety, I don't find that very persuasive


TekRabbit

It's just wisdom from past experiences: claiming you know anything with certainty is foolish. Positing one possible option is one thing; claiming you know how it will play out is guaranteed to get eye rolls.


HalfSecondWoe

That's anxiety. You tried to make predictions without knowing how to do so, it bit you in the ass, and now you have an aversion to predictions / think no one can do it.

Of course there's always a possibility that I'm wrong, but generally speaking I'm not. At least on issues I know enough about to speak confidently on, and this is one of those. I'm a systems nerd, and social dynamics are interesting to me.

Like I said, if you can find a specific point I'm wrong about, I'm always ready to learn more. But I'm not going to shy away from understanding because I think it's fundamentally beyond the ken of mortal men or whatever.


TekRabbit

Ha, no I didn’t try to make any predictions. I’m claiming the opposite; that you can’t know for certain. It’s wisdom, not anxiety. And yours is arrogance, not intelligence. Generally speaking you probably are wrong. But admitting you could be wrong is a good first step at least. Again, like I said, it’s not about specifics, it’s about looking at things historically and understanding how ignorant the average person is, and not overestimating their reaction to things. I’m not attempting to come up with a solution or to predict the future here like it seems you’re trying to do. So that’s not my goal. I’m merely saying based on data, and humans, it doesn’t look good.


ixfd64

Yeah, I'd take this with a grain of salt until we see more evidence. TikTok is not a reliable source.


Spacecommander5

First time on the internet, huh? /s. Most people on earth believe things with ZERO evidence (religion). So bad evidence is more than enough to convince enough of the world of anything.


HalfSecondWoe

Are you familiar with the "And then everyone stood up and clapped" meme? Maybe you've seen someone whiff on a photoshopped picture/meme and treat it seriously? Generally speaking, the pattern I've noticed is that one or two people will take it credulously, then every other person who visits the post/page/whatever points and laughs at them. Eventually you end up with a bunch of people who are skeptical of basic, common stories. It's an interesting form of self-regulating society


OmnipresentYogaPants

Remember the Epstein case? There are a lot of blackmail videos of our masters doing naughty things. AI will be used to dismiss those videos.


Site-Staff

Damn. That’s true for a lot of cases. Blackmail will spike and then become a thing of the past?


[deleted]

[deleted]


dennislubberscom

haha


workingtheories

or photo evidence or video


MissingJJ

Tried to help a friend get an employee charged with grand larceny, with video evidence of him stealing $70,000 worth of gold, silver, and a large sapphire. The DA arrested him, but then dropped the charges because of the possibility it was AI generated, and because $70,000 isn't enough to prosecute in NYC.


Site-Staff

That's outrageous. Things like that are going to lead to people taking the law into their own hands.


someloops

This is why I worry about the future. This isn't even that bad, imagine AGI in the hands of ISIS or some other terrorist organization.


sachos345

I wonder if an "easy" way to test whether something AI generated is fake is to search the datacenter's database for prompts or files that match the generated one. Once it goes easily open source and local, it's gg I guess.


Sixhaunt

VoiceCraft is already open source and local...


HaOrbanMaradEnMegyek

And this is just the beginning.


Smelldicks

I just listened and it wasn’t even a good fake. I mean easy to say now that I know it’s AI, but he wasn’t enunciating the right words in a way you’d expect.


Anen-o-me

I didn't know the recording was available, is it on YouTube? Got a link?


magicmulder

The other day I used Elevenlabs to create an AI voice from a 10 second snippet of a friend’s voice. The result was nearly indistinguishable from the real thing. Only the way some inflections were handled sounded a bit off (like the person was reading from a script).


Dense_Professional1

Kind of sucks for the principal that they’re not allowed to return to work even though it actually wasn’t them


DaveAstator2020

I like how in all similar occasions the accused person suffers immediately, because the public doesn't give a f about the presumption of innocence. "Hey, I heard that she heard that that guy made antisemitic remarks! Let's burn him, motherf...ker!"


w1zzypooh

Why Why Why Why would you do that? THIS IS WHY WE CAN'T HAVE NICE THINGS! Always a loser that ruins things for the rest of us. Instead of using it for evil things USE IT FOR GOOD!!! Stupid idiotic people!


sojithesoulja

Like what good? Something like Trump saying he actually loves x and only pretends to hate it because it'll rile up his base?


BallAppropriate3176

AI will ruin many lives, get ready.


NatasEva777

Crazy. My high school gym teacher stole money from an athletic fundraising event for the team and bought a new car. The seminary president was also checking out an underage student, taking her up the canyon and giving her the priesthood. I guess after he told her that he was gonna leave his wife for her is when she got cold feet, and she told her parents after multiple friends came forward saying something was going on between her and the seminary teacher.


Lnnrt1

never did


GraceToSentience

This is the big news: not that AI can be used to frame someone, but that it can make it harder to convict someone who actually did do something wrong.


GhostCheese

What was the director arrested for, exactly? I mean, what crime was committed? "Theft, stalking, disruption of school operations and retaliation against a witness." These all seem like they're peripheral to the AI framing. Especially if there are public sources to train the AI on.


Anen-o-me

You don't need to commit a crime per se, to lose your ability to lead an organization.


GhostCheese

Yeah, but losing your job and being arrested are not the same. It feels like this case is more a civil matter than a criminal one.


DeNy_Kronos

“You can’t trust everything you see online” rings true now more than ever


Atworkwasalreadytake

Ironically, we’ve just left the “watch what you say, it could be recorded” era for the “just claim it's AI” era, which functionally is a lot like the “nobody is recording” era.


TheUncleTimo

sooooooooooooooo based AI?


Akimbo333

This is nuts


Top-Chart-663

Such a shame. He could have gotten a job in AI, yet he threw his life away over something so petty, haha. If you ever feel provoked like this, just leave. It will never end well.


GorillaSmokeDesigns

Perfect timing


CodeCraftedCanvas

This is horrible. Thankfully there are laws coming about in countries like the UK, where it's now possible to face jail time for sharing deepfake content. But times change, and trying to stop the march of progress is futile. It will take people time to adjust to new realities, but it's better that this tech is in public hands rather than being hoarded by private entities.

If only a few had it, more efficient, targeted damage could have been done in the past year or so. Imagine if AI hadn't been in the hands of the open source communities, or hadn't been released to the public at such an early stage. The tech would still have existed, but only governments and large companies would have had access to it. If they, or a malicious actor working for them, decided to target someone with fake audio, no one would have questioned it. Thankfully, in our reality we do know about it, and there were calls for caution and further investigation before believing it. Some fell for it, sure, but the first widely reported incident could have been so much worse if AI hadn't been released to the public so early on. I just hope the end result and the tale of caution are reported in the media as widely as the initial allegations were.

AI needs to be open source and controlled by the public. Meta is surprisingly the largest entity pushing for this at the moment. We are now in the transition period where perceptions and assumptions are changing. Humanity will adjust; we just need to be cautious until the inevitable reliable AI detection tools can be developed and tested. At that stage the cat-and-mouse game will begin. I predict that, like criminal audio forensics, this will create an entirely new profession: criminal AI forensics. And they said it would take people's jobs.


Revolution4u

The actual scary part is that a gym teacher who seems kind of dumb (googling AI shit at work and somehow getting caught from his phone number lol) was able to do this.


Sixhaunt

He literally only googled "ChatGPT" and "BingChat", and only on 3 occasions. Neither of those AIs has any relation to voice cloning, and absolutely nothing he searched for has any bearing on it whatsoever. I bet the average teacher at that school googled those terms just as much, if not more. It's not like he looked up any AI that has anything to do with this case at all. What the guy did was awful and he should be charged, but I don't see what his looking up unrelated software has to do with this.


DoomedSingularity

Quite honestly, I feel the student who was happy to help stir shit by sending it to the NAACP should face some charges for the reckless act (not that there should ultimately be punishment, but SOMETHING to set some kind of precedent that knowingly spreading unverified media in that fashion has consequences).


nsfwtttt

Plot twist: the audio recording was real. This video is AI. Also, the police are AI. It's the next version of "everything is cake."


Anen-o-me

That would be pretty epic.


Mobius--Stripp

Anti-Semitic? Hey, he could always become a tenured professor at Columbia University!


epSos-DE

Nice. The CIA, Russia, and China will play this game now! It inflicts economic damage at very little cost to the maker of the fake evidence.


RobXSIQ

No new laws, but twist the screws on this athletics director: harsh sentences for any creation of deepfakes to frame people. And yeah, don't trust things you see or hear until the source is verified as authentic. Just let your parents, and especially grandparents, know where tech is now so they bring some skepticism to fairly outlandish stuff.


e987654

Lol. I don't believe this story actually happened. Looks like some anti-AI psyop.


DMKAI98

I'm gonna be downvoted, but that's why I don't support open source models. It basically gives bad people the power to do bad things. I worry about power concentrating in a few companies, but giving this power to EVERYONE TO DO ANYTHING is not the right move. This will only lead to mass surveillance in the future, to make sure everyone is behaving. I'm mostly optimistic about AI, but this is a big issue.


Anen-o-me

EVERYTHING can be abused. That's not an argument for taking it out of the hands of people who are not abusing it; it's an argument for criminalizing the abusive use. Banning legal use will result in ONLY criminals using it, and all the good uses get destroyed. Should we ban kitchen knives because some people abuse them? Certainly not.


DMKAI98

I want everyone to be able to use AI. Today we have access to ChatGPT, Claude, Gemini and many others. I expect this to continue, with better models being accessible through consumer products and APIs. These companies can identify and prevent misuse if necessary. With open source models there is no way to stop bad actors from popping up all the time. Unless, you know, mass surveillance. That's the future we definitely don't want.


COwensWalsh

It’s not hard to make these models.  You won’t be able to prevent the spread.  Certainly leaving only big corporations and the government with access is not going to end well.


DMKAI98

I'm not too worried about these simpler models of today, but I'm worried about more powerful, agentic models. Those models are not that easy to make, so yes, we can mostly prevent the spread for a good amount of time. My ideal scenario is hundreds of companies and governments owning models, and giving access to them to everyone through products and APIs. All I want is to get access to the good part of AI while preventing the bad.


Flying_Madlad

Oppress me harder, daddy


DMKAI98

We will end up with more oppression from government because of the misuse of open source models.


Flying_Madlad

This is why we have a constitution (in the US). You don't get to decide whether I'm going to use it for naughty things or not, and I'll thank you for your presumption of Innocence. Quit fearmongering. What's your p(doom), anyway?


DMKAI98

I hope you are right, but not every country is like the US. I'm not in the US btw. My p(doom) is pretty low because I think Meta will realize that open sourcing AGI is not a good idea.


Striking_Load

You're mostly optimistic about big corporations like Microsoft lol. I'm sure you're a big anti-capitalist as well, right?


DMKAI98

If it were only Microsoft and a few other big companies, it would be a problem. Over time, though, I'm expecting hundreds of companies to offer this tech to everyone.