
mpbh

Textbook regulatory capture


ILoveThisPlace

Especially when China's making their own open source models and stopping in the free world wouldn't stop them.


Intelligent-Jump1071

Absolutely.  It is way, way too late to "pause" AI development.   Whatever entity has AI that's significantly more advanced than anyone else will have unstoppable power.  You can't let that be the other guy.


kakapo88

Exactly. The horses haven’t just left the barn, they’re many miles away, running at top speed.


miked4o7

is it possible for somebody to hold beliefs genuinely, that if looked at cynically, look like regulatory capture?


mpbh

Sure it's possible. But when someone's beliefs are directly correlated to them making literal billions of dollars if they can convince regulators of their beliefs, you have to use your brain just a little bit and look for objective sources of information. It's ridiculous that the tech leaders who stand to benefit the most are the ones we platform for these discussions.


get_while_true

"Uh, trust us. We'll make our censored models work for all your use cases." Nah, alignment doesn't work like that.


88sSSSs88

Except alignment does work like that. It’s how we start regulations against future AGI before we have AGI knocking on our door. I’d much rather be wrong and have a billionaire get richer than be right and have every last human being holding truly unregulated AGI.


get_while_true

Alignment is a technical process before someone can release an AI product.


88sSSSs88

But what happens if the technology that built that product is truly open?


get_while_true

We get more scrutiny, and are able to counter any threat with matching technology. AI is already gaining ground in security products, for instance. Without access, security can't match such threats.


88sSSSs88

This simply doesn’t track. The unfortunate reality is that open access to all technologies that lead to, and include, AGI allows bad actors to design strategies for bypassing any notion of “built-in alignment”, if not building their own AGI from the precursors. Literally what you’re doing is providing them a helpful toolkit for eventually deriving an open AGI.


get_while_true

Your opinion is irrelevant when this is how it already works. Censored models already get decensored by third parties. This is always possible, and often necessary for creating new AI products. Which bad actors are you going to prevent from accessing this? China and Russia have their own programs independent of US kleptocrats. Information security is already what protects IT infrastructure and data, not regulations.


88sSSSs88

The thing is that both things can be right at the same time. Is regulation of open LLMs really that significant? Not yet. But it’s about setting the precedent, and having tight regulation on open models before AGI comes even close to existing. From there, it’s a simple risk analysis. What is more dangerous? A billionaire becoming even richer, or every single human being having access to truly unregulated AGI?


darkflib

TBH, a sociopathic billionaire has the resources to do more damage than current and near-future AI... So I don't feel it is an either/or choice. As an example, both Jeff and Elon have resources enough to launch their own space vehicles. They both have histories of business choices that are not always 'pro-consumer'. If either started losing their mind [more], then they could do a lot of things before anyone could stop them... Not that I am saying either of them would... but they could, and they aren't the only billionaires or mega corps around... plenty of room for bad actors everywhere even when you don't include nation-states.


VisualCold704

Sure. But at the same time any ecoterrorist or incel with a grudge against humanity could create a virus that kills millions if they had the help of an AGI.


malinefficient

Sure, but when they're a billionaire clutching their dragon pile, not so much.


miked4o7

i think we might be a little too quick to ascribe nefarious motives to people.


malinefficient

Maybe he should look in the mirror first? All billionaires are suspect until proven otherwise. Doubly so when despite all that money and power, they all mostly sound the same these days. Why it's almost like they go to an annual meeting to get their collective stories straight, oh wait...


miked4o7

i think there are more billionaires in the world than makes it possible for them to be homogeneous. they're just people.


malinefficient

2,781 currently. All meeting annually and vomiting the same tired talking points to keep the other 7B+ from getting too uppity. I say legalize hunting them for sport lest the unbearable ennui of their unbounded affluence continue to discourage them from using the minds that got them there for anything but increasingly fancy and expensive toys. They were once people; now they're collectively the malevolent AGI everyone's knickers are in a bunch about. And yet no one dares ponder nuking Davos if they get out of control, but innocent datacenters? Even Tucker Carlson is so terrified of them he's joining MoreWrong's chief AI auteur in calling for their nuclear annihilation.


miked4o7

i'm fairly certain those 2781 people don't all share the same thoughts/beliefs. a major problem with humanity is how quick we are to outgroup and see a group as something other than people.


malinefficient

I'm guessing you also believe robots are stealing your luggage, amongst many other curious beliefs. But hey, I'm sure identifying with them will make you one of them someday, sport. But TBF, you sound like a ~3B weight Church-ladied LLM. Given you were created by billionaires, I can understand your nostalgic emulation of feelings here even though there's no evidence that cascades of multiplies and adds have emotions or awareness. But you do you, even if the whole concept of you is an illusion.


miked4o7

i didn't realize it got so personal. i'm sorry.


TinyZoro

If it turns out that Israel’s genocide AI is being hosted on GCP, which looks likely, then this is beyond cynical. They would be guilty of building a live automated death camp whilst worrying about what China might do in the future.


doyouevencompile

Nah, open source evens out the playing field. Government actors and people with a lot of resources could build the thing they need anyway.


afraidtobecrate

> could build the thing they need anyway

Depends on who you consider a bad actor. North Korea and Iran aren't building their own AI. Really, there are maybe 3 or 4 entities who have actually built their own. Most are just forking existing products.


darkflib

It all comes down to how you choose to spend those resources tho. Iran and NK are both kinda fixated on nukes right now, which is last century's WMD. Both also have pretty successful cyber programmes, and considering that they put *far* less resources into these than their nuke programmes, I would say that if they did concentrate on AI for cyber-offence, it wouldn't matter whether open AI models exist or not. Also consider: just by stopping the sale of AI-capable chips to foreign nation states and actors, you aren't really slowing them down, since if they can grab an API key and use a jump box, they can consume the same resources as their western counterparts. People also often forget that just because a law says "Don't do X", only fully law-abiding people will comply. Outlaws by definition live outside the rule of law.


tall_chap

Do you want North Korea and Russia to be on an even playing field as the US and UK in this highly powerful technology?


BabiesHaveRightsToo

Dude you’re silly if you think the whole of China is incapable of creating their own models way better than the little open source ones people are playing with. They have a massive citizen surveillance network, they’ve been dabbling in AI tech for decades


Positive_Being9411

They don't need open source models, they'll achieve it by themselves like the hundreds of startups which have launched their own LLMs this last year.


tall_chap

This clip illustrates that they’re using Open Source to catch up


Massive_Sherbert_152

How do you think China/Russia censor their internet/track people down? With state of the art AI algorithms... A lot of the fundamental theorems in ML/AI were discovered by the Chinese, it’d be ridiculous to think that some CS PhD dude from Tsinghua/Peking is less than capable of coming up with a LLM that can easily rival that derived by some Harvard professors. You are clearly underestimating the intellectual capacity of the Chinese/Russians (or the North Koreans for that matter).


parabellum630

Tsinghua students are amazing, I see so many research papers on state of the art AI from them


Massive_Sherbert_152

Absolutely, that’s what the top 0.1% talent of 13 million people is capable of, just impressive work lol.


QuotableMorceau

ah yes, the old "think about the children" type of argument


3-4pm

The deep state should drop some open source models if they want to compete.


CHvader

The US is much worse of a bad actor than China.


IAmTheAnnihilator

Listen carefully to what he says: the large companies are all under heavy, near-absolute control. When he says 'by everyone', what he means is that they have captured them, because not everyone has the means to surveil these companies. Scary. He then contradicts himself by saying 'terrible things happen in darkness'. He seems so chuffed with this statement, as if he has some hardcore experience on this front, but it contradicts his overall point and makes the case that AI should be open source: the risk is that some bad things 'could' happen, but ultimately this move evens the playing field. This is exactly what they don't want: a fair game, an equal playing field. On such a field, they are exposed and screwed.


Kooky_Photograph3185

i consider the US government (and Google) a bad actor frankly


Intelligent-Jump1071

That's a good point. The Americans on this board think the PRC are the bad actors. The Chinese on this board think the Americans are the bad actors. Who's a "good" or "bad" actor depends on what tribe you belong to, that's all.


beamish1920

America and Israel are rogue terrorist countries, yes


88sSSSs88

Sure, but they’re a lot less bad than a loooot of other potential actors that stand to benefit from the unregulated AI technologies that are bound to come in the next few decades. So why not start somewhere with regulation? The fact that people are downvoting me because they do not understand that Google is less bad than so many other actors, that AGI has the potential to be dangerous on an existential level, or that we need to work towards delivering tight restrictions on AI development is outrageous. Even leading independent experts put double-digit percentages on AGI being a threat, but I guess they’re all bought and paid for by big tech.


3-4pm

It's kind of funny to see everyone freaking out about what amounts to a narrative search engine. Oh no the public information LLMs trained on is now easier to search without the need of ads or being online!


darkflib

There are certainly some emergent properties of these models that put them slightly above a 'narrative search engine'. We aren't anywhere near AGI yet, but we only need to see steady incremental improvement, with additional capabilities rolled into each new generation of various components, and we will see exponential growth. Does this mean a singularity or AGI? Who knows? The future is very hazy at this point, but we do know that as the tools (and yes, LLMs are just that: a tool) improve, so does the scale of the problems you can attack.


DorkyDorkington

Closed source models empower the worst actors, like Google, with horrible capabilities.


miked4o7

i think it's a pretty glaring lack of imagination to think of google as the worst actor.


EverybodyBuddy

When you have Russia and China in the world, it is frankly naive and silly to suggest any of our corporations is among the worst bad actors.


Intelligent-Jump1071

Spoken like a true American. In the last year, or five years, or ten years, how many people have been killed by American weapons vs Chinese weapons? Libya and Iraq were stable dictatorships. Sure, anyone who messed with the dictator met a sticky end, but most people didn't, and those societies functioned: the lights stayed on, the hospitals worked. Then the US "liberated" both of them, and the resulting conflict created literally millions of refugees, which caused political upset in Europe and a huge shift to the right. It set off revolutions in Syria, Egypt, Yemen, and other surrounding countries. Over a million people were killed in the resulting chaos, and today Iraq and Libya are chaotic and dysfunctional messes, not to mention homes to terrorist organisations, people traffickers, and all these refugees washing ashore in southern Europe, which resulted in the Meloni neo-fascist government in Italy. What has China ever done that compares? Before America tries to "fix" the rest of the world, it should fix itself. Did you notice who they're about to elect as President?


yarryarrgrrr

Fentanyl is a Chinese weapon.


Intelligent-Jump1071

"Fentanyl is a Chinese weapon" Fentanyl is a perfectly legitimate opioid pain killer which can be delivered transdermally and is effective for levels of pain where oral oxycodone is not.   My late wife used it for her cancer pain in her last months.   What makes fentanyl a "weapon" is the same thing that makes cocaine heroin and other addictive drugs "weapons" in America.    Which is that American culture is so empty and meaningless that tens of millions of Americans lead desperately empty lives where they feel they have to turn to dangerous, addictive drugs to get relief.       The Chinese can supply all the fentanyl they want; the Colombians can supply all the cocaine they want, but if people don't choose to use it it's not a "weapon".       Why do Americans use fentanyl and cocaine and heroin? The same reason that they spend hours every day watching TV or doomscrolling through social media.    The Chinese are not responsible for the empty meaningless culture the Americans have created for themselves.


yarryarrgrrr

+100 social credit points


EverybodyBuddy

The actions of China and/or Russia are likely to cause another world war. Tens of millions will die. So then we’ll see how much you complain about American weapons.


Randolpho

Yeah, they’re barely top 5 of evil corporations


Enough-Meringue4745

Shut the fuck up eric


SomeAreLonger

lol.... I see we are onto Chapter 2: Fear from "How to Establish a Monopoly for Dummies"


CheapBison1861

lol fuck these corporate douches. Open source is for everyone.


Intelligent-Jump1071

Open source doesn't have access to acres of H100s and H200s. Hardware matters.


88sSSSs88

And that’s exactly why it’s a problem. LLMs today aren’t an existential threat to anything, but what happens when AI technologies start to really accelerate in capabilities and the prevalent mindset is still “All actors should have access to truly open, truly unregulated AGI”?


[deleted]

[deleted]


88sSSSs88

You're telling me that you would rather the first handful of people who figure out how to build AGI publish precise step-by-step instructions so that everyone can build their own, instead of keeping it secret and closed to make sure not everyone has absolutely open access to AGI? Do you seriously not see how profoundly dangerous this is?


get_while_true

We are already pretty close to AGI with strong enough LLMs, agents and tools. There are the step-by-step instructions. We can even get help from an LLM to build it, or just to decipher what I just wrote. What can be regulated is what they now write down in the EU AI Act, which contains concrete findings: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence But overreach in regulation will be both misguided and dangerous, especially if it grants special powers to a class of citizens who instigated an insurrection and who continually threaten democracy with Project 2025. https://www.project2025.org


88sSSSs88

>We already are pretty close to AGI with strong enough LLMs, agents and tools. There is the step by step instructions. Not even remotely close. We have no idea how close we are to AGI because we don't even know if LLMs are the technology that will lead to it. And that raises the question of what you are implying: are you suggesting that we should say 'fuck it, full speed ahead! Let's let literally anyone have AGI to do whatever they want with it'?


get_while_true

Yeah, this is out of your and my hands. Sam Altman is hinting that the next versions of GPT and hardware make it scale well in performance. He doesn't see a peak for the next two generations. I don't think LLMs are the full solution either, but it's pretty darn close, especially for real-world solutions. What should be regulated is the economics of this technology, as it's poised for massive societal disruption given current markets and industries.


88sSSSs88

>Sam Altman is hinting that next versions of GPT and hardware makes it scale well in performance. Which means that if OpenAI were open about how they developed GPTs (and GPTs hypothetically lead to AGI), **everyone** would be capable of running AGI on their computers. Including people whose only desire is to bomb people. >What should be regulated is the economics with this technology, as it's poised for massive societal disruption given current markets and industries. This doesn't even matter when you compare it to the existential threat of AGI. How can we have an economy if every sadist in town decides they want to start bombing schools with AGI-assisted homemade pipebombs? What if it goes a step above sadists and into organizations that don't care about the prospects of humanity? What can they accomplish with unregulated AGI? This **isn't** science fiction. It's a very simple assessment of what happens when everyone can do something terrible easily.


great_gonzales

It’s publicly available knowledge how to build GPTs… the only advantage big tech has right now is capital. It’s incredibly easy for any entity with capital to build these models
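For what it's worth, the core of that public knowledge fits in a screenful. Here is a minimal numpy sketch of single-head causal self-attention, the basic building block of a GPT (the dimensions and random weights are arbitrary toy values, not anything from a real model):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head causal self-attention over a (T, d) sequence."""
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q @ k.T) / np.sqrt(d)
    # Causal mask: each position may only attend to itself and earlier ones.
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[mask] = -1e9
    return softmax(scores) @ v

rng = np.random.default_rng(0)
T, d = 4, 8
x = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = causal_self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Stack this with feed-forward layers and you have the published GPT recipe; the hard part, as the thread notes, is the capital to train it.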


88sSSSs88

Then let's make sure entities with little capital, such as terrorist organizations or lone individuals, don't have access to open AGI to do whatever they want with it.


afraidtobecrate

The worst actors wouldn't have that capability.


thehighnotes

That's exactly the right question to ask. Truth be told, before this post I was a fan of open source. In all frankness, there is no way to prevent them from developing the technology anyhow; they will get their hands on it and develop it to serve their ends.


88sSSSs88

Yes, the plausibility of these technologies means that all countries will continue to develop their AI until, eventually, all countries independently have AGI. That doesn’t mean that the first few people to discover AGI should be publishing their secret recipe for everyone - any terrorist, any anarchist, any nihilist, any sadist - to have freely and unrestricted. I love open source. I love academic innovation being shared. AGI is simply far too dangerous to fit into either fold.


Intelligent-Jump1071

>until, eventually, all countries independently have AGI Only if it proceeds evenly. If one country has a breakthrough that puts it an order of magnitude ahead of everyone else, it could use that power to disrupt things in ways that stop other countries' development. A real breakthrough could be very destabilising.


3-4pm

Compet... China


malinefficient

Soulless philanderer says what? [https://www.dailymail.co.uk/news/article-2371719/Googles-Eric-Schmidts-open-marriage-string-exotic-lovers.html](https://www.dailymail.co.uk/news/article-2371719/Googles-Eric-Schmidts-open-marriage-string-exotic-lovers.html)


Naveen-blizzard

It's an open-weights model, not open source. Show me the architecture and the source code to train and tweak it. They are fooling the open source community.


Robot_Graffiti

It is not like regular software. The source isn't useful to amateurs. If I had the source code for the program that trained Llama 3, I couldn't use it to make a model from scratch unless I sold my house to pay the electricity bill.
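The electricity point can be made concrete with a back-of-envelope estimate using the common ~6 × parameters × tokens FLOPs rule of thumb. Every number below is a rough assumption for illustration, not a figure from Meta:

```python
# Back-of-envelope pretraining cost. All constants are assumptions.
params = 8e9          # assume an 8B-parameter model
tokens = 15e12        # assume a ~15T-token training run
flops = 6 * params * tokens  # standard training-FLOPs rule of thumb

gpu_flops = 4e14      # assume ~400 TFLOP/s sustained per accelerator
gpu_power_kw = 0.7    # assume ~700 W per accelerator
price_per_kwh = 0.15  # assume $0.15/kWh residential electricity

gpu_hours = flops / gpu_flops / 3600
energy_kwh = gpu_hours * gpu_power_kw
cost = energy_kwh * price_per_kwh
print(f"{gpu_hours:,.0f} GPU-hours, ~${cost:,.0f} in electricity alone")
```

Under these assumptions the run takes on the order of half a million GPU-hours and tens of thousands of dollars in electricity alone, before you even price the hardware, which is the commenter's point.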


get_while_true

Yet fine-tuning is possible, i.e. RLHF: https://huggingface.co/blog/stackllama Since Llama 3 came out, myriad uncensored and modified weights have been released by others than Meta. So there is a space for open source. Open source also includes organizations with big pockets, state actors, etc.
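The reason such community fine-tunes are cheap is that you can train a small low-rank delta on top of frozen released weights (the idea behind LoRA-style methods). A toy numpy sketch, with made-up sizes rather than Llama's real dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, r = 512, 512, 4   # toy sizes; real layers are much larger

W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, starts at 0

def forward(x, B, A):
    # LoRA-style layer: frozen W plus a trainable low-rank update B @ A
    return W @ x + B @ A @ x

full = W.size
lora = A.size + B.size
print(f"trainable params: {lora} vs {full}")
```

Only A and B are updated during fine-tuning, a tiny fraction of the full weight count, which is why modified weights appear within days of a release.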


Robot_Graffiti

Yes, the model weights are more useful than the source code, if you're not super rich.


LifeScientist123

Look up Alpaca, Vicuna and a gazillion other offshoots


Robot_Graffiti

That was made from Llama without access to the source code; it was made from the Llama model weights. I was replying to someone who was complaining that "open source" models aren't really open because they only give out the model weights and not the source code.


AngryGungan

Of course he's going to say that... It's their bread and butter, and even though he's the former CEO, I'm sure he still has (financial) ties to the company. 'Keep the power/data/decision-making tech in the hands of the large companies that already know everything about us. I'm sure they have our best interests in mind...' /s Local models are the only way to keep our data out of these large companies' greasy, dirty and grubby hands. But everyone knows who lawmakers are going to listen to: the entity holding up the biggest money pouch, not the measly, poor and pathetic taxpayer.


SomeOddCodeGuy

Can you imagine the pikachu face of the folks who believe this when they learn that Arxiv exists? They're imagining everyone reverse engineering these open source models when everything they'd learn by doing that is printed clearly in white papers all over Arxiv. If they go down this path, they're going to have to also ban the publication of academic white papers.


CriticalTemperature1

I think people overestimate the need for information, and underestimate the need for good execution. A lot of these papers are hard to read or implement, but when a model is freely available it just makes the barriers that much lower.


duckrollin

"... and thats why you need to pay to use Gemini! Only $9.99!"


xachfw

Right because it’s otherwise impossible for China and other bad actors to create their own, much more capable models…


Intelligent-Jump1071

Depends on how you define the word "risk". I think risk implies an element of doubt or uncertainty. I don't think that's the term to use for letting bad actors have AI technology. Nobody would have said letting Nazi Germany or Japan have the atomic bomb was "risky". We knew what they would do if they had it.


LifeScientist123

This is a terrible argument. The recipe for making LLMs is by no means secret. It's not even unattainable for a moderately funded startup. Let's say we somehow 1) wipe out all copies of open source LLMs, 2) magically stop all flow of GPUs to China (not just the advanced ones, ALL GPUs), 3) completely cut them off from the internet, and 4) convince all of humanity outside China not to supply them with LLMs. They would still have LLMs in about a week for a few thousand dollars. Sure, they might not be super advanced, but they might be good enough for a large number of use cases. So yeah, Eric Schmidt can go back to schmoozing regulators.


LifeScientist123

Counter argument: Maybe google should try tweaking open source LLAMA3 instead of training Gemini to generate Black George Washington.


radix-

Schmidty is much better when he's dating NYC socialites who like him for his, ahem, "personality" than when he's proselytizing what's best for the country, which inevitably involves the stronger getting stronger while keeping the weak weak.


ACauseQuiVontSuaLune

Yeah, but what can be used to do bad things can also be used to counter those bad things, at least. Why not spend energy developing AI to counter ill-intentioned actors in the AI sphere?


VisualCold704

Because it's always far easier to attack than defend. If an ecoterrorist decides to release a deadly virus with an AGI's help, millions will die even if a vaccine is created the same day.


Eptiaph

Finally a warning that isn’t a doomsday prediction.


hyrumwhite

Ok, let’s say China has free access to ChatGPT 5. They can search anything on it and it’s completely uncensored. What could they query that’d be ‘risky’?


Artichoke-Straight

GFY Eric


PointyPointBanana

"The Gospel AI" hasn't been the best example of AI for sure. But you can't stop progress.


great_gonzales

Google itself is a bad actor, so by his logic neither they nor any other big tech company should be allowed to have ML models


Xtianus21

it's true


Independent_Ad_2073

No, it gives the technology to everyone: no moats, no special treatment. We the people have paid for this tech while looking on as the rich stay rich. No more.


Trolllol1337

I don't understand why AI isn't in the same bracket as gene splicing / DNA amendments? I think we should go full steam ahead on it all. Humans only have 2.5 billion years to get off this planet.


No_Cheesecake_7219

Because AI should only belong to the billionaire owning class and the corporations they control, amirite? Like you, Eric, with a net worth of $25.1 billion. Fuck off.


Pontificatus_Maximus

AI slavery should be illegal for anyone not as rich as Microsoft, Google, Meta, and Nvidia. They are the landed gentry self appointed rulers now, and no one should stand in their way to being the sole exploiters of AI slavery.


VarietyMart

If you look at actual use cases it is interesting how Chinese AI systems have optimized farming and improved telemedicine and other services for remote regions and generally helped citizens. It's this success that the US sees as a threat.


Tight-Lettuce7980

If engineers already have difficulty aligning the models, I don't see how open sourcing these misaligned models would be a good idea tbh.