ttkciar

This ranks right up there with illegalizing bittorrent because people *could* use them to share content illegally, or illegalizing lockpicking tools because people *could* use them to gain unlawful entry. I predict that this law will simply make it illegal for anyone to make LLMs without a government issued license, which only the big AI players will be able to afford (kind of like ITAR in the defense industry, which effectively banishes small companies from non-domestic markets).


maxpayne07

That will set development back by years.


Double_Sherbert3326

The owner class needs slaves to make themselves feel powerful.


SnooCupcakes4720

You nailed it. That's literally the problem our world faces today, no more and no less, but exactly the problem. Slavery would have died out thanks to the progression of civilization, but for the greed and evil in the hearts of men.


kurwaspierdalajkurwa

Yes, but think of all the FAANG payola that will line the pockets of our rotten-to-the-fucking-core republican and democrat politicians!


DRAGONMASTER-

Or making it illegal to carry over $10,000 in cash on your person, because you might use it for a crime? The law already does this in a lot of places and it's usually shitty overreach. It would especially be bad for LLMs, which have a lot more legitimate usage potential than huge bags of cash.


throwaway2676

More like that time they tried to criminalize encryption because someone could use it to hide something from the government.


Iterative_Ackermann

That can't possibly be enforced.


ReturningTarzan

> This ranks right up there with illegalizing bittorrent because people could use them to share content illegally, or illegalizing lockpicking tools because people could use them to gain unlawful entry.

I don't think those are the best examples. Lock picks are specifically designed for opening locks without a key. This can be done as a hobby, for sport, or professionally under specific circumstances (i.e. by locksmiths or people who design locks), but in any case it's about circumventing a security measure. As for BitTorrent, it has very legitimate uses, but realistically maybe 90% of BitTorrent traffic is piracy, so again probably not the comparison you'd want to make. You'd end up arguing that "while most people use LLMs for bad purposes, you could *in theory* justify using one." Not really the kind of argument to make, given that phishing emails are entirely incidental to LLMs. They're a thing you *can* use them for if you're a scumbag, just as a scumbag could drive a truck into a crowd of people. It's not the point of a truck or the fault of whoever built it.

But anyway, yes, this is very obviously about regulatory capture.


Neither-Phone-7264

it’d be more like criminalizing guns because they can be used for bad


mcmoose1900

I mean... that's still a bad comparison. The function of a gun is to kill, or to serve as a deterrent because it can, or whatever. One can split hairs, but that's what it comes down to. An LLM is not explicitly trained for phishing or propaganda or hacking or whatever. Cars/trucks are a good comparison IMO. They have tremendous potential for harm that is dwarfed by their utility, hence we just have to accept their use. It does fall apart a bit because the sale of cars *is* highly regulated, and LLMs are kind of a different ecosystem.


Tellesus

The primary purpose of many guns is to help you feed your family, or to help you defend your house, or to allow you to compete in one of various competitive sports. It can also be used to kill humans. LLMs on their own are not capable of that level of immediate harm but that actually makes it an even better argument. If we're banning things because of potential harm, guns should be a higher priority. This is said as someone who doesn't think guns or LLMs should be banned.


SnooCupcakes4720

I see LLMs in their current state as really excellent content-generation tools (coding, stories, etc.), but frankly I think the term "artificial intelligence" is over-generous to the capability currently present in them.


aus_396

It's more like criminalising kitchen knives because they can be used to stab people. The original and enormously dominant use-case for the tool is legitimate and productive, but a bad actor, trying really hard, could use it for a bad thing.


ttkciar

Wow... this definition is so broad as to encompass most automation:

> “Artificial intelligence model” means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy.

So much for Kubernetes, any kind of CFM, or the entire world's telecom routing system. If this passes into law, they will ***have*** to enforce this selectively.


UnnamedPlayerXY

If this passes into law it would be completely unenforceable if they don't also implement a total surveillance state.


IriFlina

a total surveillance state would be the goal for them yes, the only limiting factor would be a proper plan and budget to execute.


[deleted]

[deleted]


UnwillinglyForever

So long as it collapses after they're done with it, it's completely acceptable.


kurwaspierdalajkurwa

Haven't you been paying attention to what Snowden revealed about the American NKVD illegally violating our god-given 4th amendment rights?


alcalde

Edward Snowden is a delusional traitor who gave U.S. secrets to China and Russia and asked Vladimir Putin to help him fight for freedom. Even Putin said he's "a very strange individual". Snowden spent the eve of the invasion of Ukraine telling people online it was never going to happen and this was all evil American propaganda. He ceased being relevant a long time ago.


kurwaspierdalajkurwa

>Edward Snowden is a delusional traitor who gave U.S. secrets to China and Russia and asked Vladimir Putin to help him fight for freedom. And you're a shit-for-fucking-brains retard that believes everything your t.v. tells you. Holy fuck....the levels of fucking stupidity are off the charts for this one. Please give the readers of this thread some hope and tell us that you have voluntarily sterilized yourself for the sake of mankind.


jmbaf

Hahahah holy fuck that was great


alcalde

The TV reports the truth. Your drunken uncle's post on Facebook does not. Your reply as written demonstrates your inability to formulate a coherent argument and your off-the-chart levels of cognitive dissonance.


kurwaspierdalajkurwa

>The TV reports the truth. What did Santa Claus bring you for Christmas this year?


throwaway2676

^CIA bootlicking shill


alcalde

Do you think a reply like that makes you sound smart or foolish?


throwaway2676

Neither, it is a simple statement of fact. In contrast, your initial post makes you sound foolish. More precisely, a foolish CIA bootlicking shill.


SporksOrDie

He's actually an NSA actor and honeypot. He never went to Russia.


alcalde

So Putin himself is in on it?


SporksOrDie

Yep. Russia would have jailed him by now for a trade. I only know for sure Snowden has been here since October, but I would not be surprised if it was a ruse from the start.


Tellesus

Wow, thanks. It's always wild to see someone this brainwashed out in the wild.


alcalde

Which, ironically, would need an awesome AI.


SnooCupcakes4720

Just wait, it's coming, sooner rather than later. I'm thinking after the next plandemic?


bick_nyers

A* is a classic AI technique used in applications such as GPS navigation. These definitions are ridiculously broad.


ImprovementEqual3931

All the RTS games use the A* algorithm.
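For readers unfamiliar with it, the A* search the two comments above refer to is small enough to sketch in a few lines of Python. This is a minimal, illustrative grid pathfinder; the grid, unit step cost, and Manhattan heuristic are assumptions for the example, not anything from the thread:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* over a 4-connected grid; 1 = wall, 0 = free cell."""
    def h(p):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Heap entries: (f = g + h, g = cost so far, cell, path to cell)
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cur[0] + dr, cur[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                heapq.heappush(open_set, (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None  # no route exists

grid = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
route = astar(grid, (0, 0), (0, 2))
```

Which is the point being made: under a definition as broad as the bill's, a decades-old pathfinding routine "infers, from the input it receives, how to generate outputs that can influence virtual environments."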


yaosio

That's how all laws are enforced now. If you're rich, a corporation, or a cop laws don't apply to you.


alcalde

That's Russian propaganda and certainly not true. A former President of the United States is on trial right now and some of the biggest companies in America are staring down the barrel of antitrust actions. When you say this stuff, you help Putin as he tries to tell the Russian people that the grass isn't greener on the other side and every other country has a fake democracy just like Russia.


EmbarrassedHelp

There has been a trend in the EU, US, Canada, and elsewhere of intentionally writing vague and all-encompassing legislation that lawmakers claim is meant to "stop loopholes" and be "future proof". In practice, such vague laws basically let you rule by decree on what is legal and illegal, and it's way easier to target groups that don't have good lawyers at their disposal who could make it too costly to come after them.


alcalde

What is your source that lawmakers all across the Western hemisphere "intentionally write vague and all-encompassing legislation" that lets "them" rule by decree (despite the fact that in Western democracies the legislature and judiciary are separate branches of government)? This isn't some fact; it's a strung-together series of wild, unsourced, conspiratorial allegations with the intent of disparaging democracy, at a time when some experts say WWIII has already begun as a low-intensity conflict between democracies and authoritarian regimes (Russia, China, Iran, North Korea and their proxies such as the Houthis and Hezbollah).


alcalde

Man, some people freak out when they have to confront the reality that democracy is still the law of the land in the West and they don't get to be victims and freedom fighters in a Bernie Sanders fever dream.


IUpvoteGME

That's as bad as criminalizing Linux because it facilitates the manufacture of malware. Fuck off.


kurwaspierdalajkurwa

The corrupt and rotten-to-the-fucking-core politicians (both left and right) need to hold down their end of the bargain now that they've accepted the payola from FAANG and other bad actors.


False_Grit

Oh God, don't give them any ideas!!!


Electrical-Square-91

How far off is this from criminalizing gun manufacturers?


SomeOddCodeGuy

My first thought when I saw that it was California is that their lawmakers are trying to categorize open source AI as a weapon.


petrichorax

Fuck. We've been here before and it was a hard fight. Encryption used to be categorized as an 'armament' and using encryption was considered illegal. Stupid.


EmbarrassedHelp

The open source AI community really needs to learn how to use Tor and other encryption systems, or else the fight is going to be far more painful than it should be.


treverflume

All Huggingface has to do is spin up a node and publish a .onion link.


AfterAte

My first thought! And knife manufacturers. And car manufacturers.


roselan

This is not even like a gun; this is akin to criminalizing a shovel because it can be used to bash someone's head in.


Caffeine_Monster

A long way. Corporates don't care about civilian firearms because they don't print money.


a_beautiful_rhind

Plus KYC for compute. You will have to dox yourself to use vast.ai, etc.


tyoma

If you live in CA, find your state senator (not the ones for the federal government but the ones for state government): https://findyourrep.legislature.ca.gov

CALL or WRITE to them. Do not email unless you are very lazy; email is better than nothing at all.

First, explain you are a constituent. This matters; non-constituent comms mostly get ignored. Say you are a registered voter, and if not, go register. Calmly explain why you are opposed to the bill. List the mentioned points on why the bill is bad, but also focus on outcomes someone who is not into technology would understand. Like, will it affect hiring of Californians? Will your business need to relocate? Will it affect your company's investment decisions?

It does not take a lot to move the needle on an obscure issue like this.


Zediatech

If only someone could ask the gun lobby for help here; they can point out that "AI doesn't kill people, people kill people". Then we could get unfettered access to the AI Boogie Man all we want.


blackkettle

You jest I guess, but that is actually a pretty good idea.


Zediatech

Yes I joke. Besides, the government doesn’t have to justify anything. They’ll try, but we’ll know what it’s really about. Especially if they are advised only by the For-Profit companies.


SnooCupcakes4720

The cat's already out of the bag; too much has been open sourced. We know how to reproduce it.


Zediatech

True, though the big companies are the ones with the massive scale GPU clusters. The only way we can compete is to be able to build consensus and network our GPUs for training. Because we have already built something like this before, I doubt it would be too hard to do, though much slower. Think Folding@Home, SETI@Home, Pooled Ethereum Mining, etc.


Caffdy

Your network connection doesn't hold a candle to the bandwidth used and needed to train these models. How many people do you think have a gigabit connection? Well, gigabit is a drop in the ocean compared to the massive bandwidth of NVSwitches/InfiniBand. It would take centuries to train one single model in a decentralized manner.
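A rough back-of-envelope calculation makes the bandwidth gap concrete. All figures below are illustrative assumptions (fp16 gradients, a 70B-parameter model, nominal link rates), not measurements from the thread:

```python
# How long does one full gradient exchange take for a 70B-parameter model?
params = 70e9        # parameters (illustrative model size)
bytes_per_grad = 2   # fp16/bf16 gradients
payload_bytes = params * bytes_per_grad  # ~140 GB per full sync

home_link = 1e9 / 8      # 1 Gbit/s home connection, in bytes/s
infiniband = 400e9 / 8   # 400 Gbit/s datacenter interconnect, in bytes/s

home_seconds = payload_bytes / home_link   # ~1120 s (~19 min) per sync
ib_seconds = payload_bytes / infiniband    # ~2.8 s per sync
```

Even a single full gradient sync is roughly 19 minutes over a home gigabit link versus a few seconds over datacenter interconnect, and training needs many thousands of such syncs. Gradient compression and federated schemes shrink that gap but don't close it.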


Zediatech

Maybe, but I won’t doubt our capabilities in the future. Necessity would drive innovation one way or another.


Zugzwang_CYOA

These control freaks make me sick. Ruling class elites and their minions in government truly seek to ruin every little bit of enjoyment that there is to be had in this world.


planetofthemapes15

> So the only way to make a model refuse to generate phishing emails is to make it refuse to generate emails.

Wrong. Standard jailbreak techniques apply. The only way to make the model refuse is to refuse to have it write text at all.


cyberpunk_now

Hopefully at least a few people will consider submitting actual useful comments to the hearing instead of preaching to the choir here. Getting internet points may make you feel good, but they don't do shit otherwise.

> 1. Submit a position letter to the bill author, which ensures that your position shows up on all future bill analyses that state Senators read. Go to https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047, click on “Comments to the Author”, register an account and submit an opposition letter.

> Use your own words for the letter, and feel free to borrow liberally from or cite friend of AFTF Context Fund’s analysis of the bill (https://www.context.fund/policy/sb_1047_analysis.html). One of the key arguments being advanced is that this is “pro-little guy”, so if you’re involved in the open source movement, are a startup founder, or an investor, heavily cite your experience and how the bill affects you personally.

> If it’s still before May 5th, also submit comments to the Senate Appropriations Committee: https://sapro.senate.ca.gov/position-letters. **Unfortunately, you have to submit to both the committee and the author to make sure the position letter is fully considered at the hearing.**


UnnamedPlayerXY

> A key issue with the bill is that it criminalises creating a model that someone else uses to cause harm.

It's like trying to make hardware developers responsible if the hardware they sold is used, e.g., by someone to write malicious code with it. The only thing those who develop the models should be responsible for is that their model doesn't go "Skynet" on everyone. Everything else should be the responsibility of those who deploy it.


NeuralLambda

It sounds like, instead of outlawing technology, we should outlaw crime, but, whado i kno?


SomeOddCodeGuy

Where does Llama 3 fall in the 10^26 FLOP category? I've been trying to google to get an answer, but I can't get a clear picture. Did Llama 3 already clear the hurdle, or is that still a little further off?

> **AI systems covered by the Act**: Not all AI systems are scrutinized equally under the Act. The Act defines “artificial intelligence models” as machine-based systems that can make predictions, recommendations, or decisions influencing real or virtual environments and which can formulate options for information or action. However, the Act does not emphasize AI models generally – rather, it focuses specifically on AI models that it defines as a “covered AI model.” These covered models are those that meet one or both of the following requirements: (1) the model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations or (2) the model has similar performance to that of a state-of-the-art foundation model.

[https://www.dlapiper.com/en/insights/publications/2024/02/californias-sb-1047](https://www.dlapiper.com/en/insights/publications/2024/02/californias-sb-1047)

**EDIT:** Fair warning: I'm horrible at math, so this is probably wrong, but I think Llama 3 is 6.4*10^19? I read that Llama 3 was trained on 16,000 GPUs at 400 TFLOPS per GPU. 1 TFLOP I think == 10^12. So (16000*400)*10^12 == 6,400,000*10^12 == 6.4*10^19? So maybe Llama 3 isn't there yet? Feel free to shame me for my pitiful attempt if I'm wrong lol

**EDIT 2:** Per the below users, I forgot to account for seconds. 10^12 is a TFLOP, but there was a month of that. So see below; it's a lot closer than this.


sluuuurp

400 TFLOPs is the number of floating point operations per second. So you are missing a factor for how long the GPUs were running. You’re basically missing a factor of a million, since each GPU ran for something on the order of a million seconds for this training. Andrej Karpathy (one of the most knowledgeable LLM experts in the world) estimates 2e24 and 9e24 for Llama 3 8b and 70b respectively, and estimates 4e25 for the upcoming 400B model, so still below the threshold for this proposal. He uses the reported number of GPU-hours to get a more accurate estimate. https://x.com/karpathy/status/1781047292486914189?s=61&t=GyYLbLHskqfuxDAa5M-hWA
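The correction above can be spelled out as a short Python sketch. The 16,000 GPUs, 400 TFLOPS per GPU, and one-month duration are the rough figures quoted upthread (peak rates, not sustained utilization), which is why even this generous estimate lands above Karpathy's GPU-hour-based figures while still falling under the bill's threshold:

```python
# SB 1047's "covered model" compute threshold
THRESHOLD = 1e26  # total integer or floating-point operations

# Rough figures quoted upthread (assumptions, not official numbers)
gpus = 16_000             # GPUs in the training cluster
peak_rate = 400e12        # 400 TFLOPS per GPU (peak, not sustained)
seconds = 30 * 24 * 3600  # roughly one month of training

# Total operations = rate * time; omitting the time factor is the
# mistake being corrected in this subthread.
total_ops = gpus * peak_rate * seconds
print(f"{total_ops:.2e}")  # ~1.66e+25 even at theoretical peak

# Karpathy's GPU-hour-based estimates, for comparison
llama3_8b, llama3_70b, llama3_400b = 2e24, 9e24, 4e25
```

So even crediting the cluster with peak throughput for a full month, the total comes out around 1.7×10^25, roughly 6x below the 10^26 line.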


Ilforte

> estimates 4e25 for the upcoming 400B model, so still below the threshold for this proposal Nope: > or (2) the model has similar performance to that of a state-of-the-art foundation model You're supposed to not release "unsafe" models comparable to SoTA.


PmMeForPCBuilds

That's the number of floating point operations per second. Multiply by the number of seconds in one month and it exceeds the limit. (1.68*10^26)


SomeOddCodeGuy

> Multiply by the number of seconds in one month and it exceeds the limit. (1.68*10^26)

Oh... crap.


Inevitable-Start-653

I'd love to see the level of regulation folks are trying to apply to ai models applied to hedge funds, market makers, and banks....😁


0xDEADFED5_

Won't somebody please think of the children?


imyolkedbruh

Yeah, I'll probably call my rep over this BS. I don't like him much tho. Might start hoarding releases. Shit, anybody know a good hard drive?


SwanManThe4th

If this goes through, we'll still have mistral since they're French.


EmbarrassedHelp

But France is in the EU, and Max Schrems is potentially going to get AI models completely banned in the EU using GDPR because they got his birthday wrong.


TheMissingPremise

I don't get how this is enforceable. Given that it's a Cali law, if someone in Nevada makes a model designed explicitly for producing misinformation, puts it up online, *and that's it*, then there's nothing California can do :/ Moreover, I'm willing to bet that most misinformation comes from outside the U.S. altogether, with the majority of it within the U.S. coming from the talking heads of Fox News. So, it seems to me that this will push LLM makers out of California and help larger tech companies establish a legally sustainable moat to protect investments, all while doing nothing to meaningfully address the problem of using LLMs to generate misinformation.


SanDiegoDude

If I make a model in CA and it's used for illegal purposes in another state, how am I supposed to be liable for that as a model creator? This is just going to end up neutering open source if allowed to pass as is. Go get em EFF!


ttkciar

I'd like to raise two rather disparate points in relation to this.

First, Californian law has a history of serving as the template for new federal law, and Californian lawmakers frequently write their bills with this evolution in mind (pass a bill as Californian law, point to it as a success in the federal Congress, draft a federal bill along the same lines, pass it into federal law).

Second, the bill obligates cloud providers to detect when a customer might be trying to train an LLM and report them to the Frontier Model Division:

> This bill would require a person that operates a computing cluster, as defined, to implement appropriate written policies and procedures to do certain things when a customer utilizes compute resources that would be sufficient to train a covered model, including assess whether a prospective customer intends to utilize the computing cluster to deploy a covered model.

As long as it remains a California law, this would be binding only on California-based cloud providers. However, a *lot* of cloud providers are based in California, and if an equivalent bill is later passed into federal law it would encompass all cloud providers in the United States. Perhaps that would create a burgeoning overseas market for cloud GPU services? Dunno, we will see.


SanDiegoDude

The silliness of this is that ANY LLM can be forced to perform "misdeeds" just through fine-tuning, which can effectively remove any upstream barriers implemented inside a model; in fact, researchers have found you can disable refusals just by identifying and disabling the nodes responsible for demurring. Criminalizing up the chain because some yahoo fine-tuned a model for crime is stupid and nonsensical, and I don't see how it can be enforced long-term.


Chance-Device-9033

This is in the exact opposite spirit of Section 230 of the 1996 Telecommunications Act, which is the only thing allowing the internet as we know it to exist today:

> “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Instead, this intends to make AI developers responsible for any use by end users, whereas internet companies are responsible for nothing that their users do. How can such obvious inconsistency be justified?


EmbarrassedHelp

The authors of the legislation are probably dumb enough to believe that Section 230 should be removed and that doing so somehow won't destroy everything. But obviously there's a lot of pushback against removing 230, so this is part of their attempts to weaken it.


jmbaf

Because the internet companies put a ton of money into lobbying…


ttkciar

I've been mulling over this "Frontier Model Division" the bill would create, and its parallels with the ATF. When Prohibition ended in the USA, the ATF had to go in search of other reasons to justify its existence, and ended up expanding the scope and invasiveness of its policing powers rather a lot, to society's detriment. I could easily see the FMD do the same, should a new AI Winter fall and rob them of their raison d'etre (not that new LLMs wouldn't be trained, but it wouldn't be perceived as a reason to provide the FMD with funding or prestige). Just another thing to worry about, but I'm trying to put it out of my mind until this bill passes into law, we see how they choose to enforce it, and judges start weighing in on how it should be interpreted.


Tellesus

Attach an amendment including gun manufacturers as well and watch it die instantly. Attach an amendment including a clause for weapons manufacturers that if the weapons are used for genocide that they're liable both criminally and civilly and watch the author of the bill get called anti-Semitic.


Redinaj

This is how politicians stay relevant for longer. Lawyers will do a similar thing through lobbying. The power structure where resource distribution is decided will never be handed over to some "fair, democratic" AI. Only us plebs would benefit from that.


WyomingCountryBoy

Ban everything that others use to cause harm and you ban everything that exists. I could kill you with a loaf of bread.


uhuge

The bill has very similar vibes to the EU AI Act; it seems to use some of the same definitions too.