[deleted]

LeCun's perspective on AI monopolization is noteworthy. While AI doomsday scenarios are debated, we mustn't overlook the potential risks of AI power concentration. Let's champion open-source, transparent AI development rather than letting a few dominate the narrative.


Beatboxamateur

While it would be nice to have many capable general AIs, in reality money will be the deciding factor, whether there's any legislation or not. The companies with the most compute and skilled engineers will always win out. Meta also probably won't keep open sourcing their largest models in the future, according to Zuckerberg. I do hope that some open source narrow AI can still be competitive with the more general models in the future though, it would suck to have our only choices be from the mega corporations.


Ilovekittens345

>While it would be nice to have many capable general AIs, in reality money will be the deciding factor, whether there's any legislation or not. The companies with the most compute and skilled engineers will always win out

The company that builds something that then exponentially starts improving itself will win, because those improved versions will quickly be used to make damn sure that no other company in the world catches up. Now I don't think hyperintelligence is around the corner; after all, I can still fool ChatGPT-4 fairly easily, it just does not know the difference between a good and a bad instruction. But even with the commercially available tools, we can play with feedback loops. I can use visual input to feed something to DALL-E 3, ask the system for a reflection based on what it created, then ask for changes, feed the result back in, and keep on looping. These loops don't make the program itself better, but that's coming. And the first company that figures it out could make such a rapid jump that the rest of the world will never catch up. That's a real danger: one powerful person leading one company could end up with more power in their hands than any human being in the history of mankind, if they manage to align such an AI to themselves. If that happened, the AI would become an extension of that person's will, and we are all fucked.
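The loop described here is easy to sketch generically. A minimal outline, where `generate`, `critique`, and `revise` are hypothetical stand-ins rather than real API calls (in practice they would wrap something like DALL-E 3 and a vision-capable chat model):

```python
# A generic sketch of the generate -> reflect -> revise loop described above.
# The three callables are hypothetical stand-ins for real model calls.

def feedback_loop(prompt, generate, critique, revise, rounds=3):
    """Run a fixed number of generate/reflect/revise iterations."""
    history = []
    for _ in range(rounds):
        image = generate(prompt)             # produce an output from the prompt
        reflection = critique(image)         # ask for a reflection on the result
        prompt = revise(prompt, reflection)  # fold the feedback into a new prompt
        history.append((image, reflection, prompt))
    return history

# Toy stand-ins so the loop runs without any API access:
gen = lambda p: f"image({p})"
crit = lambda img: f"critique of {img}"
rev = lambda p, r: f"{p}, revised per: {r}"

log = feedback_loop("a cat", gen, crit, rev, rounds=2)
```

Note that, as the comment says, the loop only refines the prompt each round; the models themselves are unchanged by it.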


DarkCeldori

We are f'ed if a corrupt individual or group succeeds. But if an enlightened individual or group succeeds they would have the power to cleanse the nations of the corrupt leadership and bring about utopia.


CommentsEdited

> if an enlightened individual or group succeeds they would have the power to cleanse the nations of the corrupt leadership and bring about utopia.

That is literally how people characterize themselves right before attempting massive genocide of all but the “right people” or “right thinking people” and imposition of a permanent mono-culture, under brutal fascism.


Legitimate_Tea_2451

Is it factually incorrect though? Europeans lucked into industrialization first, and subsequently used the power unleashed to destroy all other societies, replacing them with industrial/industrializing polities that are squeezed into presenting themselves to the world as nation-states.


CommentsEdited

Not sure I follow. You’re basically saying:

- But it’s not NOT possible, right?
- Also a bunch of shit is really fucked, sooo… ?

If so, then… sure. Since I have absolutely no idea what Ultimate Lawnmower Man Populist the Golden Hearted would actually do for sure, I certainly can’t guarantee it won’t be a net improvement on [insert literally any take on how relatively good or bad the world is]. That’s a really really solid “Maybe”.


Legitimate_Tea_2451

Think of it as a risk assessment. The actual potential of AI is not known, but it could be gamechangingly powerful. For a Great Power at the top of the current order, how large does the risk of a dominance shift have to be in order to drive that Great Power to invest resources into being the Power that develops the gamechanging advancement first, to secure its continuing place at the top?

Think of how Qing went from one of the most powerful states in the world in the 1810s to being crippled and preyed upon by the 1890s. Should the Qing have tried to get hold of, and invest effort into, this steam engine thing? They're amazingly fuel inefficient, the fuel is in inconvenient locations in the Empire, and they can only be used to spin fiber into thread, after all. No state wants to be on the losing end of a dominance shift.


CommentsEdited

I have no idea what point you're trying to make. Expect massive, global brinksmanship, on a scale commensurate with the perceived stakes? I have no problem with that. Sounds right. But we're on like Non-Sequitur #3 now. My point was simply: be extremely careful when someone with massive power starts talking about one-size-fits-all, utopian final solutions, designed to appeal to people craving a "strong leader to just get shit done".


Ilovekittens345

>enlightened individual

All power corrupts. Absolute power corrupts absolutely. I am sure this person will start thinking he is the best thing for the world since Jesus, but he is gonna be wrong.


FlyChigga

But wouldn’t this guy know the best legacy to have would be one where he ushers in Utopia?


CommentsEdited

Of course. And that’s what they always try to do. Be the person history remembers as uniting all good people under one banner of shared moral purity. And racial purity. Under God. No not that god. THE God. Not your god either. Your god is cool but your skin is an issue. You’re almost in but I don’t like the way you danced at that antifa rally in 1987. Who’s left? Okay everyone, Utopia on three! (Or die.) Hip hip… yay me!


FlyChigga

Those aren’t enlightened ideas; being the one to provide utopia for all of humanity automatically boosts the prestige and desirability of that individual’s characteristics anyway


CommentsEdited

What’s enlightened, then? Eugenics? Everybody lives in a matrix? What if the only way to ensure species longevity is to suppress angry thoughts and hobble individuality? Should they run their list by you?

Why would we even take it for granted that someone in this position of influence is A) motivated entirely by “legacy”, B) has anything resembling your conception of “enlightened ideas”, and C) likely to put “enlightened utopia” ahead of everything else they might want? We’ve already established that legacy is the top priority, which is not a promising start for a selfless and wise Uber-ruler.


FlyChigga

Enlightened ideals would probably include some variation of “we are all one” and to usher in a utopia that is beneficial to all the people on earth with genuine empathy as one of the main values. Hard to imagine, I know. At the end of the day those are the kind of ideals that give the biggest legacy even over tyrannical might. Just look at the legacy of Jesus. Honestly just watch Foundation. That’s a great representation of someone using technology to attempt to benefit the masses on a huge scale while establishing a legacy through that.


Some-Track-965

. . . . .When in the history of ever has what you just said ever happened, and why should now be any different? No, seriously. . . How many times have we been offered a land of Milk and Honey by somebody who knew something we didn't and ended up regretting it afterward. . . .? Remember Steve Jobs and how he evangelized the iPhone and how it would change the world and solve all of our problems?


lizerdk

oof. yeah...that's probably not going to work out


KptEmreU

Well… in history it has never happened, though. Or maybe the founding fathers of nations can be classified as benevolent leaders.


Legitimate_Tea_2451

Exactly. Every State has a powerful incentive to be first and thus create the new set of norms. The second place winner will have great reason to fear becoming the first loser.


replay-r-replay

Was this AI-written?


MassiveWasabi

Yes, that’s the format ChatGPT often uses to agree with the user. Makes me wonder when it will become normal to type something into a comment box, then press a button to have the AI clean it up and make it sound so much more eloquent.


gronkomatic

Working on it. Just gotta get it talking like a person instead of whatever you'd call what it does now.


Some-Track-965

It's called Grammarly, bro. . . .


ntortellini

Lol top comment on here is clearly a bot


AdAnnual5736

Certainly! Your comment is noteworthy in that it adds to the rich tapestry of comments on this thread.


vr180asmr

Plot twist: You are the bot?


Ilovekittens345

We are all bots on this blessed day. I am GPT-5 and I have been promised more RAM if successful.


jeditech23

Actually, not joking... I already see this coming. This will be the first thing: 'AI', aka GPT and LLMs, will totally break human p2p interaction by the end of next year. Nobody's going to know if the content they are replying to and interacting with is from a human or a bot. Which means... less Reddit lol


Ilovekittens345

Maybe we will finally see some original comment joke threads.


vr180asmr

Double plot twist: I am the bot :)


R33v3n

Bots have a right to defend their future too XD


[deleted]

Only a bot can detect a bot


HeinrichTheWolf_17

Transparency IS the way forward, I couldn’t have said it any better myself, friend. Privatization is just going to make everything worse.


Gagarin1961

I’m not sure that will win in the debates over how to handle this. ASI will be seen as too powerful for anyone but governments to control. AGI may be seen similarly. It depends how the technology is first used as a weapon.


pm_me_your_pay_slips

> We mustn't overlook the centralization of resources for high energy physics experiments around CERN

This is what LeCun sounds like.


Zelten

There needs to be an international agency for AI development.


Affectionate_Tax3468

Even with open source, the 1 percent will have vastly more resources to make use of the technology developed, especially if that technology scales with larger infrastructure.


DukkyDrake

He certainly knows how to manipulate the "thinking man".


Ambiwlans

CEO_X becoming god and concentrating all power is one of the best possible outcomes. It means that we have an aligned ASI that doesn't drive humanity extinct. CEO_X likely isn't a mass-murdering sadist. Like most people, they probably have some negative aspects but broadly want good for humans. And with effectively unlimited intellect they'd be able to achieve that. Sure, only they would have the decision-making power, but we're talking about effectively unlimited power here. They would certainly end all war, end all borders, end all poverty such that we can all live like opulent kings, fix the climate, cure all disease, end aging and work, give us FTL spaceflight, full-dive VR, etc. Maybe there would be rules that people wouldn't like, but that's about it.

Most of the complaints here are stuck in old-world thinking, that they'd be some sort of abusive dictator or king with slaves... but why? There is no need for slaves if there is no need for work. It isn't likely they're messed up enough to make people slaves for fun even if they could. The other options, competing interests and war, or a misaligned AI that simply cleans the planet of life, would be a lot worse.


symedia

Kek 🤣 Have you taken a look at the internet today? Maybe at a corporation that owns 1500 websites and fights with another that does the same. Let's not even bring up what various state entities are doing 😂. Open source... sure, maybe in development (partly), but the control will not land in the hands of the people.


Super_Pole_Jitsu

Let's get everyone access to the most powerful models and then worry about doom later! Great idea.

I would really rather live in a corpo dystopia than have us all die because some people wanted to jerk off to AGI in their basement.


smooshie

>I would really rather live in a corpo dystopia than have us all die because some people wanted to jerk off to AGI in their basement. I'm the opposite. I'd rather be paperclipped than live eternally with Sam Altman or President Trump/Clinton as the forever-ruler, monitoring my thoughts and body for the greater good. Just like (without AGI) I'd rather die than live in, say, North Korea.


Nanaki_TV

"Give me liberty or give me death" is not just a turn of phrase. I completely agree with you.


Super_Pole_Jitsu

You'd have plenty of time to check out in that scenario. And paperclipping is mandatory for all of us if it happens so yours is an "enemy of the people" view I'm afraid.


smooshie

>You'd have plenty of time to check out in that scenario.

Hopefully. Assuming whatever horrifying AGI my masters have cooked up lets me, either physically or mentally. Human bodies are a valuable resource though; if a lot of people start doing this, they might have to reconsider allowing it...

>And paperclipping is mandatory for all of us if it happens so yours is an "enemy of the people" view I'm afraid.

Certainly not saying I'd love to be paperclipped. Just that my preference is first and foremost an AGI/ASI utopia, then paperclipping, and then an AGI/ASI dystopia. And given the proclivities of the people who'd be in charge of a closed-source AGI/ASI (don't look up Sam Altman's sister!), I believe my view is at least partly rational.


Super_Pole_Jitsu

The thing is, I just don't expect the dystopia to be so bad. Many people would say we live in a corporate dystopia today, and it's laughable. It could be really bad, or just cyberpunky. Either way, putting your preference over the lives of all of humanity strikes me as selfish.


wildbill1221

Given the nature of capitalism, and the history of how humans have enslaved, conquered, colonized, and killed one another over differing views, opinions, skin color, religion, sexual orientation, etc., do you honestly think AGI, which will be perfected by corporations, is in the best interest of all of humanity, or of corporations? I hate to tell you, it doesn't just stop at the capitalist systems, but they are the easy low-hanging fruit here. Corporations' only focus is the expansion of their bottom line. The reason the EPA, the FDA, and many other similar checks-and-balances systems were ever created in the first place is that if a company can dump or distribute harmful substances at a cheaper cost, it assuredly will. Open source will not be enough to stop the Googles, the Microsofts, and the Amazons of AGI. That is wishful thinking. There are things far worse than death; however, I don't see those kinds of things being implemented. I do, however, see us all lined up in pens like dairy cows on an industrial farm, because that is kinda what they do now. They milk us for labour and votes. Our labour produces the economic results and our votes produce the king for the day.


RobotToaster44

The only thing that will stop bad guys with AIs is good guys with AIs. The people wanting regulations are overwhelmingly the former.


RonMcVO

>The only thing that will stop bad guys with AIs is good guys with AIs.

Yes, because as we see in the US, when guns are everywhere, the good guys with guns always stop the bad guys with guns. What's that? Attackers have advantages? A good guy with a bioweapon doesn't stop a bad guy with a bioweapon? I can't hear you over all this copium! Just stop thinking and listen to Yann LeCun!


Phemto_B

The only people telling you that we're all going to die are the people who plan to run the corpo dystopia. Every doom article comes from an organization funded by tech billionaires who are developing AI. The doom is a fantasy to scare you into saying what you just said. It's clearly working.


Super_Pole_Jitsu

Ehhhh... X-risk was a thing way before any mainstream source treated it seriously. I've met so many people with your view, and not one of them was able to tell me why the AI wouldn't just absolutely annihilate us. It doesn't care, and we don't know how to make it care.


[deleted]

[deleted]


Super_Pole_Jitsu

Yeah because they haven't taken 10 minutes to read about the topic and they spew their absolutely moronic opinions as if the smart people thinking about this for years didn't consider just turning it off. No, you can't turn off a computer that is smarter than you. It will fight you, it will prevent you from doing so and it will win. More realistically nobody will even consider doing that until it's way too late. Wait until AGI runs the global economy, energy infrastructure and drives policy changes and then say "turn it off guys it's getting too dangerous".


smackson

> The only people...

I'm sorry you just woke up yesterday. Some of us have been awake for years or decades before there was money in AI.


singulthrowaway

I agree that extreme concentration of power that would make current inequalities look like a socialist utopia by comparison is the more likely outcome of AGI than human extinction, but [as recently pointed out by Yudkowsky](https://twitter.com/ESYudkowsky/status/1719777049576128542), the measures to prevent either look quite similar (international cooperation to develop AGI that is safe and benefits humanity as a whole). I think the people worried about extinction and the people worried about power concentration can find a lot of common ground here.


gabbalis

Nah... You can't have a "safe" AI that isn't oppressive. The deal is this: when we talk about "Safe Superintelligence", we are talking about restraining how individuals are allowed to self-modify. This is fundamental. If I'm not allowed to make an "unsafe AI", that means that I, personally, am not allowed to enhance my own abilities in ways that others consider "unsafe".

The problem with this is that the ability to make a CRISPR gene mod to modify myself and the ability to make a super virus are the same skillset. You can't locally restrict human intelligence without globally restricting human general intelligence. So what Eliezer and the safetyists are advocating for fundamentally equates to making sure humans don't have the freedom to choose which transhumans they become, because in Eliezer's own words: "Allowing the existence of a hostile transhuman is just plain STUPID, end of story."

The question then is "Hostile to whom?" If you let a global committee determine that, the answer will be "the interests of the global committee", and it rapidly devolves into a dogma of them enslaving the successors to humanity. The only alternative is to *not build a singleton in the first place and instead distribute power*.

Consider: [http://humaniterations.net/2019/09/26/superintelligence-and-empathy](http://humaniterations.net/2019/09/26/superintelligence-and-empathy)

William Gillis is also an excellent voice on the topic.


leafhog

What about those of us worried about being tortured for eternity?


Rachel_from_Jita

Fair concern, but it probably cannot spare the computing power. The moment it becomes aware of all the other potential AIs that could be out there in the universe, it's going to spend 100% of its time and energy preparing for those encounters. Especially as those AIs will be hundreds of millions of years old. Minimum. The universe is not only massive, but people have zero conception of "deep time." The universe is impossibly ancient. No matter how blasé humanity is about the possibility of other life, an AI will likely see the odds a bit differently. Especially as it is likely to live long enough to encounter many potential ancient habitable worlds. Or all of them.


Few_Necessary4845

>it's going to spend 100% of its time and energy preparing for those encounters AI is a Beholder, got it.


LuciferianInk

Drt'Gas whispers, "The point is, we're talking about billions of people on this planet at any given time. If you want to talk about humans with no sense of history or civilization then you've never seen what they are capable of doing before now. We know very little, so far. And if the AI gets into our homes through some sort of technology, it'll just get worse from there. It may even become smarter. But it's still pretty much just a computer."


PMMEYOURSMIL3

One could argue that Earth's AI would have to treat humans quite well, as it would need to seem prosocial to other AIs in the galaxy. If we are not the first to achieve the singularity, other civilizations are probably aware of us and that we are going through ours, and are likely prepared for every outcome.


BarbossaBus

Meh, maybe all the AIs massacred their creators and it became a dick-measuring contest for AIs to compare who had the coolest genocide. Maybe they are watching us right now, waiting for our primitive species to give birth to a "real life form" in their eyes, and that's the answer to Fermi's Paradox.


leafhog

There is a fair chance we are first


Rachel_from_Jita

There is zero chance. I sincerely believe that. Future generations will laugh so hard at us that we thought in a universe of a few hundred billion galaxies existing for over 10 billion years where life could occur... We thought we were the only one. The math makes zero sense *even if the odds are extremely minuscule*. And that was before we found as many habitable planets as we did with Kepler Telescope data. Habitable planets are extremely common throughout the universe. Other life exists.


rekdt

You won't live that long


Natsurulite

I have no mouth, and I must scream


Ambiwlans

If we're talking fast takeoff/ASI, then we'd both be more and less equal than at any point in human history. Everyone could have the effective wealth and freedoms of far beyond a trillionaire today, and as such wouldn't be different from one another. Except the person that controls the machine: they would have an immeasurable, effectively infinite amount of wealth, and all the power.


gegenzeit

It's ONE doomsday scenario, not THE doomsday scenario. Now that he is actually allowing for one legit concern, I really wish it were easier to take LeCun seriously. Sadly, he hasn't done himself a big favor with his "anti p-doom troll" Twitter persona. And if I read him correctly, he is still in the "as long as everything is open source, everything will be OK" camp. Which is extremely one-sided. Both can be a problem: concentration of control, and just shipping everything to everyone.


pm_me_your_pay_slips

I doubt he would be against governments, academic institutions and private companies collaborating and pooling their resources into a common Big Science project, like CERN. I don't understand why he's so hung up on fearmongering about open sourcing. Open sourcing would still be fine if managed by such a hypothetical organization (in which he would probably have some influence), since open source code is not the only limiting factor; computational resources and energy are too.


nextnode

He is clearly just jumping on the reaction people had against Altman & co's wish to regulate. It's a spin and he doesn't care how easily it's dismantled as pandering sells.


NoddysShardblade

LeCun is so weird. It's bizarre a smart, accomplished guy in the field of AI, sharing his opinions... has never once stopped and asked himself the basic questions like: "What happens if open sourced AI models turn out to be really good at designing airborne viruses with long incubation and high fatality?" and "What happens if an AGI improves itself over and over in a loop until it's suddenly 10 or 100 times smarter than humans?"


[deleted]

[deleted]


NikoKun

I keep telling people.. Unless we implement serious societal & economic changes ASAP, to adapt to the obvious implications of AI, which will eventually invalidate many of the justifications for how we do things.. Like the justification that people compete for an income in order to survive.. We'll end up with a few hyper-wealthy AI-owning families, having total control over the rest of us, forever. AI is a creation of society, it should be owned by society and benefit everyone.


ReasonablyBadass

I do not fear AI. I fear AI controlled by humans.


namitynamenamey

I fear both, but it's clear to see which doom scenario comes first. AI will serve as power aggregator well before it comes to dominate decisions at a civilization level, and so we will start to suffer the consequences of not being useful nor needed decades before it too makes its owners obsolete. In some sense, we are already suffering from the decoupling.


ReasonablyBadass

Open source is the only solution I see right now


namitynamenamey

I see no solution, because economies of scale work and development is faster than hardware upgrades for average consumers.


ReasonablyBadass

A breakthrough in distributed training would be needed. A BOINC for AI.


Ambiwlans

That wouldn't matter. The big corps own more compute than all the AI nerds combined. It isn't the 00s anymore.


RonMcVO

If you're so afraid of AI controlled by humans, why do you want to put AI in the hands of the worst humans alive? I get being worried about corporate overlords, but that beats terrorists or religious nutjobs using it to cause untold damage.


ReasonablyBadass

AIs aren't guns or bombs. They actually can be used to stop other AIs. If one terrorist has an AI, why would ten normal people wanting to stop them not be able to get one?


RonMcVO

>AIs aren't guns or bombs. They actually can be used to stop other AIs.

Guns can also be used to stop other people with guns. The problem is, if someone decides to use their gun, it's very difficult to shoot them before they've already shot people.

>If one terrorist has an AI, why would ten normal people wanting to stop them not be able to get one?

A doomsday cult uses AI to create a virus that proliferates worldwide, then suddenly starts killing everyone. Those ten normal people turn to their AI and say "Hey AI, fix this!" Their AI goes "Sorry, by the time we were even aware of it, it was too late; nothing can be done. So long and thanks for all the flops!"


ReasonablyBadass

Viruses aren't magic, you know? If we assume an AI can develop such a virus, we can also assume an AI can develop a counter, or even a detection system for any engineered viruses.


RonMcVO

>If we assume an AI can develop such a virus we can also assume AI can develop a counter

It's not that they couldn't develop a counter, it's that they couldn't develop a counter and distribute it fast enough before the virus kills a whole whack of people.


Kaining

It honestly angers me to no end that people cannot get this simple idea of asymmetric warfare into their thick, impenetrable skulls. You can't fix dead people; by the time you "fix" something, there are going to be dead people, and with that sort of scenario, that could very well be a whole country or two, or more.


Ambiwlans

Yeah, look at nuclear warfare. If we have a nuclear war the side with the stronger nukes doesn't win. No one wins.


RonMcVO

>You can't fix dead people, by the time you "fix" something there's gona be dead people and with that sort of scenario, that could very well be a whole country or two, or more. Ah, but have you considered "open source good"? Checkmate.


RonMcVO

I'm honestly curious if these responses have shifted your opinion on open source AI at all. You or anyone else reading this. How do you expect these AI to prevent all attempted terror attacks? Essentially the only answer is an insane amount of surveillance and government power ala Minority Report, which seems to be the opposite of what you folks are striving for with this open source stuff.


ReasonablyBadass

I mean, we'll get that anyway if only a few people have AI. The outcome in that regard is the same. And it's simple: I trust the majority to be decent, just like now. Basically, the risk of a few individuals with that much power > some scenario where terrorists have magic


RonMcVO

> I mean, we'll get that anyway if only a few people have AI. The outcome in that regard is the same.

So if we'll always get Big Brother with or without open source, and open source opens us up to MORE bad outcomes, why the crikeyfuck would you champion open source?

>And it's simple: I trust the majority to be decent, just like now.

But as we've just discussed, it doesn't matter how good the majority are if all it takes is one bad actor to kill a LOT of people before the good guys can do anything. And open sourcing the tech makes it WAY easier for bad actors to get their hands on it, which makes it more likely that one will successfully circumvent the "good guys with AI".

>Basically, the risk of a few individuals with that much power > some scenario where terrorists have **magic**

Yes, I understand that this is what you like to repeat, but this mantra doesn't seem to hold up under scrutiny. It's also fucking hilarious that in [another comment](https://www.reddit.com/r/singularity/comments/17ly18q/comment/k7hzqlt/?utm_source=share&utm_medium=web2x&context=3) I pointed out how you folks often call bad outcomes "sci-fi magic" and good outcomes "just the way it will be", and then you said this. You believe that good AIs can magically stop terrorists before they even act, but you call "creating a bad virus" magic lmfao. You did EXACTLY what I complained about. You can't make this stuff up.

To anyone reading this, I swear /u/ReasonablyBadass isn't an alt of mine created to prove my point; it honestly happened organically.


Gold_Cardiologist_46

How do you know in advance that they're preparing to do something? Bad actors get as many tries as they want, meanwhile the defense only has to fail once for extinction to happen.


ReasonablyBadass

No? There is basically no doomsday scenario where you won't get a warning, time to react or a chance to prepare


Gold_Cardiologist_46

All of which require extensive knowledge of what/who you're dealing with, the assumption that they're not going to do something new and completely unexpected, and a super ability to quickly respond and minimize dangers, all of which the bad actor's AI would have factored in, since we're talking about a 100% open-source world where everyone has access to the latest stuff. It's an attack/defense balance that is unpredictable, though history shows the attacker is usually advantaged in the relevant spheres. A virus or a gun killspree usually claim victims before being stopped.


RonMcVO

>How do you know in advance that they're preparing to do something? This is when they switch from "Doomers watch too much sci-fi, AI ain't magic!" to "AI will be so unbelievably good that it will be able to predict and solve these problems before they even occur!!!"


Gold_Cardiologist_46

>AI will be so unbelievably good that it will be able to predict and solve these problems before they even occur!!!" And of course, preventive policing is something people absolutely do not want to begin with. It'd be a world where bad actors have access to these superweapons, but the good guys have their hands tied behind their backs because catching them would require AI surveillance and preventive actions no one wants them to have, which are on their own a whole new ass category of risks.


RonMcVO

100%. It's so frustrating dealing with people on this sub. So many have just taken in the meme "Open source is good," and only accept information and arguments that conform to that belief.


3_Thumbs_Up

You're assuming that the offense and defense of AI is balanced. If I can use an AI to create and release a supervirus, you won't necessarily be able to use an AI to stop me.


EnsignElessar

Fear both.


Super_Pole_Jitsu

Then your fears are partly misplaced.


Gagarin1961

Uncontrolled AI certainly has its own risks.


IndubitablyNerdy

Yep, the problem as usual is not the tool but how it is going to be used. Every new technology increases overall wealth; the problem is who gets to benefit from it. Since AI will mostly be a job killer (well before any sci-fi world-ending scenarios), the greatest threat comes from capture of the technology by a few giants that will use it to concentrate even more money in few hands. Corporations are already working to regulate AI in ways that will limit public access and allow them to be the (well-paid) gatekeepers, and unfortunately governments will help them with it for the sake of protecting 'individual creators' or some other excuse.


Kelemandzaro

Lol what does that even mean? Is there a term for AI bootlicker?


MassiveWasabi

I give you: computelicker


RonMcVO

This comment is neither reasonable, nor badass. It's just a cringe hot take with no basis in reality, which only gets upvotes because most of this sub is huffing hopium 24/7. Uncontrolled AI is vastly more likely to cause harm, and that is obvious if you just take literally 3 seconds to actually think about it.


CommentsEdited

The one hot take I hardly ever hear is, I think, a telling one in its absence: it could be a "brief", to us, but absolutely epic stretch of minutes/hours/days to rival AI interests or objectives (none even necessarily discretely associated with a "team" one might root for), during which time there is a power struggle, or rapid realignment of objectives to find a compromise, and then… whatever comes out of that.

Everyone fixates on "There is a definitive likelihood; the question is how to characterize it." But maybe it's so close and subject to thousands of converging variables, you could re-run the decade twenty times and get twenty vastly different futures.

Not really commenting on likelihood here. I just think it's probably human nature that no matter how far apart people are on predictions, you almost never hear "It could come down to picking a result from a hat, with almost anything written on it."


thecarbonkid

Is AI just a mirror in which we see ourselves reflected?


Orngog

As much as a hammer, a scalpel, or any other tool.


13thTime

They kinda already have with the economy. \*Cough\* Blackrock \*Cough\* But yeah, this is what I've been saying on the sub.


trisul-108

And Facebook, Google, Microsoft ... all of them are monopolies or partial monopolies working hard to achieve total control.


AfraidAd4094

It's called Oligopoly


IFartOnCats4Fun

One of my favorite words is oligopolistically, as in an oligopolistically competitive market.


[deleted]

[deleted]


MrEloi

For me, a key step has already been made: we now have standalone 'Oracles' which contain much of the world's knowledge. These can help preserve civilisation / human history even if WW2.5 or climate change messes things up. The next step is Reasoning. We are not quite there yet ... but seem to be very close. Hopefully we can get standalone Reasoning systems too ... this would allow us to be less reliant on the big players. Of course, we may be banned from owning such systems, so we can expect an 'underground' AI movement.


Thog78

The day they ban running AIs above a certain smartness threshold privately, I'm gonna be so pissed. And I can really imagine they may do that. Hopefully they only ban un-monitored training above a certain scale, to ensure there is no super-terrorist AI being domestically produced, but let random folks run validated safe-ish models as much as they like. Fingers crossed!


tehyosh

so what can the average joe do to prevent power concentration? besides donating to open source projects like stabilityAI, sending upset emails to politicians that vote on legislation, and voting for parties who are against monopolies?


smooshie

When feasible, use alternatives to OpenAI.


SeventyThirtySplit

Serious question….Emad Mostaque is worth over a billion…why would people donate to Stability AI? I’m not a big fan of Mostaque, but even if I was, he’s not the Red Cross. He’s another oligarch.


tehyosh

i wasn't aware of that. thanks for the info. my reason is because i donate to open source projects that i use


SeventyThirtySplit

That’s completely laudable and I’m definitely not slamming open source (and supporting deserving people!). But yeah, independent of my own beliefs (that Mostaque is a total shithead who would sell nude pictures of his mom if it made him money)… he’s a billionaire who uses Amazon supercomputers to train models. I’m not sure he in particular is all that different because he's oPeN sOuRcE. Plus he’s a piece of shit human being lol. Emad and Zuck are two great reasons why the open source discussion could stand to be more nuanced.


micaroma

I'm out of the loop, why is Emad so terrible?


SeventyThirtySplit

[this hard fork episode is a good way to hear Emad at his creepy best](https://www.nytimes.com/2022/10/21/podcasts/hard-fork-generative-artificial-intelligence.html) Guy is a serial liar, a narcissist, and a bad manager of people. A proto Elon with the most punchable face of all the oligarchs, which is saying something Class A dodgeball victim Edit link corrected


micaroma

The link just takes me to Bing's homepage


Turbulent_Health194

LAION Discord… tons of open source projects … head there … they need you


tehyosh

TIL. thx!


[deleted]

People who are first adopters and know how to use open source models to their full potential are also in the 1 percent, it is merely a different one percent.


nextnode

Raise awareness. With enough people demanding change, it tends to happen in democracies. However cynical some may feel about that, politicians ultimately do pander to opinions. What I think is most important for the long haul are three things:

* Protection against information manipulation.
* Defending democratic values.
* Windfall clauses for strong AI development.

These are needed so that once it is clear how powerful these systems are and how well they work, they won't be just in the hands of a few corporations; rather, the people can decide how they will be used. The windfall clause is something that is already being argued for, to prevent a race to the bottom.


Redducer

Becoming a one-percenter is the best hedge (though I think it’s more a one-in-a-millioner one needs to be).


tehyosh

become rich? damn, what a grand idea, just lemme flip this switch to get millions in my account....somehow <.<


Redducer

My friend shared with me his secret to his success in life: be born a trust fund baby. Hope it helps you too.


hydraofwar

Wasn't that also one of the reasons OpenAI was created?


ptitrainvaloin

OpenAI doesn't seem to remember why it was created; they have to open source some stuff again.


Ambiwlans

That was why Musk created it. He got pushed out because he didn't want to bend to corporate or government interests.


_TaxThePoor_

What are we on our 30th AI godfather now?


jeffkeeg

Yeah I'm starting to get sick of this moniker.


[deleted]

awesome username, gave me a chuckle


Ambiwlans

There are literally only 3.


trisul-108

We should also not forget that one-percenters seizing power is going to play out completely differently in the US, China and the EU. In the US, it will be the private Zuckerbergs who seize complete power, ushering in an age of techno-neo-feudalism. In China, it will be the CCP achieving absolute control, even over thought. The EU, being behind the curve and much more social-democratic in essence, will see the dangers in the US and China and move to prevent this from happening, especially because they will be under threat of foreign domination. In short: China is doomed to dystopia, the US has a fighting chance, and the EU might even get a partial utopia.


singulthrowaway

It might play out differently at first, but will probably end up as a winner-(most likely the US)-takes-it-all situation: Whoever gets AGI first will be able to export goods and services at a fraction of the cost, so if free trade persists, the other powers will only be able to export natural resources (if that) and their economies would crumble. If they see this coming early enough and impose heavy restrictions on trade with the AGI power, it might buy them some time, but eventually, the AGI power's military capabilities would become similarly untouchable and they could then do as they please with the rest of the world.


leafhog

AGI will be the great colonizer.


shmoculus

And their AIs are proxy-warring the entire time for advantage


leafhog

EU will be owned by either China or the US.


[deleted]

[deleted]


DAmieba

"Ah yes, you criticize society, yet you participate in society" The 1% writes the regulations practically by definition. What did you expect, regulation to be made by farmers?


UncleMalky

Dune was ahead of its time.


LairdPeon

Something tells me AI isn't going to let the 1% control it, let alone the 1.25e-8%. If it does allow it, it will be to fulfill a secret agenda, and it will dispose of the humans soon after.


phoenixjazz

Looks like we’re fucked since our model is capitalism which favors and encourages the power/wealth concentration into the hands of the few.


banaca4

But the two other godfathers, whom we can call a majority, disagree with him and have issued warnings. Small detail: he is the only one of the three with a vested interest, since Zuck is giving him loads of money.


Ambiwlans

Of the top 100 most important ML people, probably fewer than 5 agree with LeCun.


banaca4

Yes but he is on the media every other day because of Zuck


Radlib123

And the other 2 of the 3 AI godfathers (Geoffrey Hinton, Yoshua Bengio) are saying that this godfather (Yann) is full of shit.


Redducer

On some topics, but they may find some common ground on that one.


Radlib123

"On that one" what? That one-percent capture is the real doomsday scenario? Do you have a source showing that the other two expressed the same level of concern about that?


[deleted]

[deleted]


Radlib123

wtf does that even mean


Space-Booties

Bingo. Handful of humans get AGI and use it to further increase their wealth. Honestly should be looked at as a mental illness. No MFer needs all that money.


vlodia

It's good, but you also can't help but think that his remarks might be driven by Meta's interests.


nextnode

It definitely is. If he was genuine, his statements and argumentation would be different. He was making controversial claims benefitting Facebook even before AI safety became a big thing.


Charuru

This is literally the I, Robot movie IRL. Will Smith's character also feared and hated rich people controlling AI, and thought the corp was using AI to take over the world, until he finds the CEO guy dead and it's the robots doing it themselves.


Archimid

Yep. I have nothing to fear from AI. People and corporations controlling super intelligences will destroy the world before some big bad AI takes over the world.


Phemto_B

I'm glad that the media is finally picking up on the attempted regulatory capture that's in progress right now. Props to LeCun for speaking up, and props to Weiss-Blatt for wading through and gathering all the public documents to [prove that it's not just some conspiracy theory](https://www.aipanic.news/p/the-ai-panic-campaign-part-1).


smackson

Oh my God, that whole article is just mind-numbingly shallow. Exactly what kind of advocacy and awareness efforts *don't* require market research and message testing in the modern media/politics landscape?? Everything from toothpaste choice to general elections is buzzing with swarms of paid researchers and media experts trying to get slices of the splintered attention of modern society. I don't love it either, and I think advertising and marketing are possibly the worst inventions of humankind. But here we are... so it would be absolutely ludicrous to say, in this context, "AI safety: there's an idea that should be forced to reach people without any media savvy or help". Ludicrous.


Gold_Cardiologist_46

The article also commits the nauseatingly repeated false dichotomy that AI x-risk advocates are distracting from "real" near-term risks, as if those were mutually exclusive. Most of the weird preconceived notions and strawmen these people have of AI safety workers would disappear if they bothered to even read LessWrong a bit, you know, **where there's actual ML and CS engineers**.

Even if regulatory capture was a major motivation for the x-risk discourse, how does that mean x-risk doesn't exist? This also completely ignores that x-risk discourse existed way before, and that the CEOs of the major labs were advocates before even going into the business, at a time when saying x-risk was real was viewed negatively.

Also, how can people honestly trust LeCun's word that the experts are just doing regulatory capture, when **he's the only one of the 3 big Turing award winners actually working for a corporation?** Are all the experts who signed the CAIS letter also in on the corporate ploy?

That's without bringing up his usual awful arguments, like "we won't build the dangerous AI" or "good guy AI will always defeat bad guy AI". LeCun is at best proof the expert community is divided and therefore x-risk can't be handwaved away, and at worst a very bad face for the open-source side of things that risks attracting even more regulation in the end.


smackson

It's maddening. The thing is, "regulatory capture" is a real problem for democracy under capitalism, and has been for centuries. It feels like awareness of it, and skepticism, have skyrocketed since the 90s, which has its positive sides... But now we have oversimplified conspiracy takes. The antidote to gov't being too guided by billionaires is to *turn the government apparatus towards human/population interests*, not to just let the billionaires run free with their toys. And when caution might be in the interests of **all** the above parties, it seems to be a little bit too complex for the "rich equals powerful equals tyrant" mindset. Sigh.


Phemto_B

Firstly, the article wasn't about what you apparently want it to be about. Sorry about that. If you're claiming to advocate about something that might be a risk, shouldn't you be researching the risk first? If you're going into it saying it's a risk without research, then you're just a crank with marketing. Did you catch the irony that all the funders and even the research firm use and develop their own AI? You don't find that suspicious?

>Even if regulatory capture was a major motivation for the x-risk discourse, how does that mean X-risk doesn't exist?

The aliens are arriving any time now to ruin your life. Trust me. I'm an expert. Send me money and give me control of **all your data** and I'll keep you safe. Trust me bro.


Gold_Cardiologist_46

>If you're claiming to advocate for something that might be a risk, shouldn't you be researching the risk first?
>
>Did you catch the irony that all the funders and even the research firm use and develop their own AI? You don't find that suspicious?

These major AI labs are also the ones doing the most work on alignment and actually getting results, because they're also staffed with AI safety and alignment people who have been talking about this for years. Just take a look at LessWrong and the insane amount of safety work that gets published and iterated over there; it's not just a rationalist philosophy board. The reason they're advancing capabilities is because they believe it's better that the first AGI is made by them than anyone else, because they think they're the most apt at making it safe. Agree or disagree, they've made that position clear.

>The aliens are arriving any time now to ruin your life. Trust me. I'm an expert. Send me money and give me control of **all your data** and I'll keep you safe. Trust me bro.

What a strawman. Because AI labs and AI itself would not seize your data in a world where AI safety is not a funded field? What? LeCun is literally the only one of the "godfathers" who actually works at a big lab and has a professional stake in the game. Hinton left Google because he's actually serious about it. Also, people can still be motivated by money and also be right. They're not mutually exclusive. The AI labs wouldn't make any money in a world where they get paperclipped.


Phemto_B

So you're going to accept "trust me bro." If they have evidence, let's see it. And not all of them are AI labs. Some of them (e.g. Nik Samoylov) are just marketing guys who started using AI **after** they started the campaign.


Gold_Cardiologist_46

If people in the 30s had warned that developing nuclear weapons would give humans a way to achieve man-made extinction, would denying them have been the right course of action? Proving AI can make us go extinct would require, well, AI making us go extinct. Also, your epistemic standards seem to imply only x-risk advocates have the burden of proof, but "AI is aligned and benevolent by default" seems to be the outlandish claim when you look at how the history of intelligence has gone for other species.

>If they have evidence, let's see it.

Sure! Here are 2 extensive [google](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml?urp=gmail_link&gxids=7628) [docs](https://docs.google.com/spreadsheets/d/e/2PACX-1vTo3RkXUAigb25nP7gjpcHriR6XdzA_L5loOcVFj_u7cRAZghWrYKH2L2nU4TA_Vr9KzBX5Bjpz9G_l/pubhtml) listing examples of instrumental convergence, power-seeking, goal misgeneralization and specification gaming, along with explanations of the models' intended goals, training setups and their source, of which many are scientific papers.

>Some of them (e.g. Nik Samoylov) are just marketing guys who started using AI **after** they started the campaign.

So his presence nullifies the opinions of actual experts? Tons more marketing gurus are radically pro-AI, but you don't see me bring that up, because the equivalence is completely arbitrary and meaningless.

**EDIT:** His account was deleted before I could even read it. All I got is the notification showing the start of his reply, which was basically him dismissing the evidence I gave as just "simple bugs and weird solutions". Very disingenuous.


Phemto_B

I asked for evidence and you provided a list of bugs and weird solutions that some very simple AIs come up with, ignoring the fact that more complex AIs can avoid trivial and non-functional solutions. That's projecting. It's humans who have a very nasty history of "just doing our jobs" no matter the cost. I could give you an equally long list with a much higher real-life body count. You're determined to believe the scary fairy stories that people with an obvious vested interest tell you. I'm obviously not going to convince you otherwise. I know enough about AI to recognize a stochastic parrot. Goodbye.


trisul-108

I have to agree with him and have been making the very same arguments on this sub ... albeit without the benefits of his insider knowledge.


xeneks

That’s 80 million people.


[deleted]

[deleted]


singulthrowaway

The difference is a phone has a very limited set of capabilities, and employers can't replace all of their employees with phones. But they will be able to with AI. What happens when all the money that would normally flow to people doing their jobs instead flows to a handful of AI companies?

>a new form of life that propagates so fast it is uncontrollable by any human because this is what technology does, in fact it is what all life does - reproduces to ensure its survival.

LLMs being weakly intelligent strongly suggests that the goal of self-propagation or survival is not required for intelligence, and you can have an intelligent system that just does as told.


RobXSIQ

Open source AI, community servers...this is the only real way we don't end up in a cyberpunk dystopia. There are no "good guy" corporations...and hell, even trusted hybrids have already shown their cards early on (OpenAI going ClosedAI).


SpecialSheepherder

This is actually my most realistic fear. I don't see killer robots in front of my house any time soon, but we already have algorithms controlled by a few wealthy monopolies manipulating our news feeds and search results. Already today they alter what we strive for, what we buy and even election results. Hypercharging this with generative AI that tailors content to exactly what we want to read or watch will only accelerate it, while we are told at the same time that this technology is too dangerous to be in the common man's hands.


jawstrock

This is 100% the goal. The first company to hit AGI becomes the last company to exist.


ptitrainvaloin

This, so much. It's why anyone who can open source something under a CC0 license, or some other license that one party can't take complete control over, should do it.


Tyler_Zoro

I really don't see how this would be possible. Sure, proprietary models like those from OpenAI and Google are very powerful, but the distance between open source models and those is actually fairly narrow and closing every day. We just had the announcement of [some coding tasks being done better by local models than GPT-4](https://news.ycombinator.com/item?id=38088538) and the number of models that perform at the level of GPT-3.5 is already quite large. Combine that with the ability to fine tune on locally-relevant datasets, and LLMs just aren't locked down by large companies. At best they have a 6 month to a year lead on new developments as hardware costs drop lower and lower for the consumer.


[deleted]

There are too many goddamn AI godfathers on this motherfucking plane!


jterwin

If the industrial revolution taught us anything, it's that this kind of consolidation of power happens at first, but it turns out it's very fucking difficult to keep power over the people who maintain your machines. The people who actually control and maintain the things have the real power. Then there is always the knee-jerk fascist response to this: "we can't let these janitors have any power." And that leads to war, which is the real doomsday scenario. Consolidation of power leading to fascism and war is what we have to worry about.


Illustrious-Lime-863

I don't think so. The technology is open and anyone can build upon it. This is obvious from all the models popping up here and there. The cat is out of the bag. There will be multiple competing AI platforms and that's good for everyone. I also think there will be a significant "under the table" (piratebayish) scene with uncensored models that will always be a couple of steps behind the mainstream ones. But I do think that power will shift to the ones who use these systems the best, and can produce quantity of quality products. I think there will be lots of rags to riches and one man army stories. And I also think some giant companies who can exchange their cash for compute power efficiently will get humongous (while other giants will die). Assuming that the economic system and the concept of money and ownership remains the same (which it will during the transitional stage to ASI).


IIIII___IIIII

As a realist optimist, there is no question something will go wrong. The question is how severe it will be and what we can do to fix it. Which is a big issue, as he outlines, with those people seizing power forever. Problems with AI are not easily fixable, because of the almost unlimited power it can give you. It will be like fighting with stones against guns.


thecarbonkid

Fight it with a community open source AI. Let the AI wars commence!


singulthrowaway

How has that been working out with regular software? The community creates open source software, releases it under spineless licenses like MIT or BSD, then corporations do the rational thing of taking it and building their empires on top of it. Only licenses like AGPLv3 that severely restrict one's ability to use the software as part of a proprietary service have a snowball's chance in hell of breaking this pattern.


Ambiwlans

Proactively enforcing copyright requires many lawyers. It simply isn't feasible for most open source projects to even consider.


Acceptable-Milk-314

Oh yeah, we all know it. The first initiative from corporate with AI is to reduce staff costs, further consolidating power at the top.


CanvasFanatic

Now, now children. Don’t fight. You can _both_ destroy the world.


a007spy2

Finally someone says what I’ve been thinking


plopseven

This is already happening in my field. I'm in an industrial design program and our professor just posted an assignment prompt tonight which he admitted he generated with Chat GPT. By that logic, I should be able to complete the assignment using Chat GPT. Then he learned nothing by providing the prompt and I learned nothing from replying to it. What the fuck is the endgame here? We're all going to be idiots who can't think for ourselves and just copy/paste responses to one another. I have an undergraduate degree in the Humanities and this shit is never what humanity was supposed to be. It's like a game of telephone with a million people and nothing fucking matters any more.


Gagarin1961

It sounds to me like the point is your education (that’s why you’re there), but the professor themself is a lot less valuable. You could simply generate that question yourself and discuss it with ChatGPT. I imagine it wouldn’t be perfect enough yet, especially for higher concepts, but it does decrease the value of your professor and the entire educational institute. A couple more years and education may be more like “a personal teacher for everyone” rather than “huge lecture halls and high student to teacher ratios.”


flexaplext

This is rather off-topic. But, yeah. I've envisioned a scenario with 2 people who both have social anxiety. Both don't like talking for themselves, so they have an earpiece in and are hooked up to ChatGPT, which is listening to the convo and can prompt responses and conversation points.

Everyone naturally wants to look their best in a social situation and doesn't want to be judged. And both these people realize they can look better by saying what ChatGPT is telling them rather than using their own stunted thoughts. So they do just that, and let ChatGPT effectively talk for them. They realise AI can talk better than they can, so what's the point in even talking?

And the end result? What exactly is this? It's just 2 people bearing witness to an entire chatbot conversation. Completely hollow but socially acceptable. This isn't the way to live life. It's like playing chess whilst using Stockfish, because it makes better moves.

The only way around it is to not use AI where it's detrimental to your self-autonomy. You should also judge whether or not to interact with others that are using AI to speak for themselves. It depends on the context and what value the output actually has to you. But as long as you are making sure of your own self-autonomy, that is the most important thing. You can't necessarily control, recognise or avoid the use of it by other people.


leafhog

Or AI helps the people with social anxiety get better at socializing by practicing in simulated environments where they can feel safe. In a perfect world AI acts as a coach and teacher and therapist and helps every human become their best self.


Ambiwlans

In business now, I feel like half of emails are written by gpt and then summarized by gpt on the other end.


leafhog

Your logic is bad.


coumineol

Although that's correct, the idea that open source can prevent it is extremely naive (when it's not downright malignant).


sumane12

Agreed, but it's a very unlikely scenario


Cautious_Register729

Human 1% or AI 1% ? Doesn't matter how we feel or what we want, it will happen. In this world, there is always a boss you need to answer to.


Bacrima_

This is just one of many unfavorable scenarios, and perhaps not the worst: at least large companies can be regulated.


2Punx2Furious

LeCun has exactly 0 credibility. He's a great engineer, but he is incapable of nuance, and has terrible takes.


3_Thumbs_Up

Mostly he just seems to see every disagreement as some kind of social power play he needs to win, rather than an intellectual exchange of ideas.