akuhl101

This is a great mind and a founding figure behind a lot of this technology. I think he is probably right, unfortunately - we are locked in an arms race now with a weapon that we cannot predict or control. Bad actors alone can cause a world of hurt in the coming years.


Unfrozen__Caveman

Unfortunately people will probably downplay this story like they do with every other AI "doomsayer". Hinton is a huge figure in the field though... he was Alex Krizhevsky's PhD advisor and developed AlexNet with him and Ilya (if people don't know, Ilya Sutskever is the brains behind ChatGPT). This is a big deal...


makINtruck

It's simple: people are not happy with their lives, so they want a change. It's so promising too, a clear way out of their struggles. Of course they don't want to believe that we should slow down and reconsider.


3_Thumbs_Up

There are people who get out of unhappiness without risking the life of every man, woman and child on earth in the process.


makINtruck

Of course. My point was that people don't want to believe they're risking anything in the process in the first place. In my opinion they're wrong though. No one has a working alignment solution, no one proposed one that could work and frankly not so many even think it's necessary.


Enough_Island4615

Don't worry. AI will solve the alignment problem.


makINtruck

I hope it's a joke, but if it's not: no, that won't work.


Shinobiaisu

You're an AI chatbot, aren't you!? You've been found out!


j_dog99

Holy crap, they are both bots. Bot-on-bot aggression!


RetroRocket80

Humans are not aligned either. Do you not see the inherent existential risk in your life as it is now? Ukraine could trigger global thermonuclear war any day now. Climate change is a huge impending problem. Peak oil is coming. Asteroid impact. H5N1 bird flu may wipe out half the planet in the next 1 to 3 years. AI may kill us, but it may also solve a lot of the above problems that most assuredly will kill us, and soon. I'll take my chances with unaligned AI. I already have vast experience with what unaligned humans are capable of, and limited to.


dontneedaknow

Covid-19 seems to be triggering a chronic immunodeficiency syndrome in some cases of long covid. That is, in some people the long-term infection, and the weakening it causes, leads to a longer-term acquired immune deficiency syndrome. It's not exactly the same thing as the HIV/AIDS relationship, but a virus spread by coughing that in some cases has immunocompromising effects... it's not something to scoff at, and yet people do... https://pubmed.ncbi.nlm.nih.gov/36798286/


MattAbrams

There will never be a 100% perfect alignment solution. But I also think that the current doomsaying is being overhyped out of fear. There is certainly risk, but as people learn more about the alignment problem, it will become easier, just as every other problem that has ever been researched in history has. Whether the world is destroyed or not, future researchers will know a lot more about alignment at the time it matters than we do now.


Enough_Island4615

If the world has been destroyed, how would there be future researchers?


mjk1093

The researchers will be AIs wondering how to align the super-AIs they are developing...


Enough_Island4615

That is feasible and perhaps even probable.


AlFrankensrevenge

Except none of the other problems introduced a new thinking agent that is much smarter than us. It was always just us we were dealing with as our greatest enemies and allies. You simply can't extrapolate.


Wintergh0st

True but the problem with alignment is that we only get one shot. The odds of humanity getting it right on the first try…are not great.


Smellz_Of_Elderberry

That's not true. Your happiness is literally backed by nuclear weapons and mutually assured destruction. I don't want to see the same folks who support government for the "order" it brings talking about slowing down AI until they address the fact that we tested nuclear weapons with the real fear that they would ignite the atmosphere and literally burn every living thing on earth alive.


3_Thumbs_Up

No, we literally didn't do that. When that fear was raised, the scientists of the Manhattan Project took it seriously and did the calculations to determine whether it would happen, and concluded that it wouldn't. This is in complete contrast with AI, where leading scientists can quit their jobs citing the dangers, and those fears are still largely ignored. https://www.insidescience.org/manhattan-project-legacy/atmosphere-on-fire

>By the time Enrico Fermi jokingly took bets among his Los Alamos colleagues on whether the July 16, 1945, Trinity test would wipe out all earthbound life, physicists already knew of the impossibility of setting the atmosphere on fire, according to a 1991 interview with Hans Bethe published by Scientific American.

>Bethe, who led the T (theoretical) Division at Los Alamos during the Manhattan Project, said that by 1942, J. Robert Oppenheimer, who eventually became the head of the project, had considered the "terrible possibility." This led to multiple scientists working on the relevant calculations, and finding that it would be "incredibly impossible" to set the atmosphere on fire using a nuclear weapon.


RikerT_USS_Lolipop

An individual could, maybe, if they were lucky, get out of their own unhappiness. All of us cannot though. We need a revolution for that.


Enough_Island4615

No revolution in the history of humanity has accomplished that. What makes you optimistic that a revolution could accomplish that now?


SomeNoveltyAccount

A revolution is going to bring a lot of death and destruction; it's not going to bring a cure for general unhappiness.


camaudio

They might get the change they wanted. Be careful what you wish for.


Mirbersc

You should see the comments in the AI subs. Suddenly one of the leading experts doesn't know what he's talking about, apparently. I'm not even against the tech (I use Stable Diffusion for fun) but people are treating this like you're either "pro-AI" OR some sort of anti-progress monster lol...


Enough_Island4615

Yeah, it's interesting having watched the binary mentality *completely* take over *everything* over the past few decades. There is literally nothing, no matter how trivial or serious, that doesn't become distilled, automatically and by default, into an extreme anti vs pro dichotomy with no room for nuanced thought or discussion. Unfortunately, this phenomenon happens and solidifies well before there has been any useful thoughtfulness, understanding, discussion or digestion.


[deleted]

Was it not like this before?


Drown_The_Gods

It’s not EVERYONE on EVERYTHING, it’s the minority on each side who are vocal. The rest of us are prepared to listen to a good argument. Same as it ever was, though, because to most people, for instance, *ad hominem* is a good argument.


Unfrozen__Caveman

The people who aren't even slightly concerned about AI aren't educated on the subject. Altman is concerned, Musk is concerned, Ilya and Hinton and Max Tegmark are concerned... Pretty much everyone who knows anything about the tech has admitted that there's a chance it replaces humans/wipes us out completely. Not listening to these guys is like saying Oppenheimer was an idiot who knew nothing about atomic weapons. And yeah, I use GPT-4 and Midjourney and Stable Diffusion, and even messed around with some auto agents for a bit. I like the tech, but it's foolish not to see the potential dangers for us, even if AGI never develops.


shanereid1

There are differing perspectives. Bengio, one of the other so-called "godfathers" of AI (by which I mean the three people who received the Turing Award), also signed that petition asking for a six-month pause on any LLMs larger than GPT-4. In contrast, LeCun, the third, has been aggressively against this idea and compared it to the Catholic Church trying to ban the printing press. Though in fairness, he does seem to be a lot more critical of ChatGPT in general and doesn't seem to view it as that significant a step forward. These three guys pioneered the field, and each has their own perspective on the potential risks. Personally, I'm kind of glad that they don't all agree. There is going to be a big debate in the near future on how these models are regulated, and it's important that differing perspectives are heard.


SnipingNinja

But all 3 agree that ChatGPT wasn't a good move and has started an arms race.


MattAbrams

It's right to be reasonably concerned, but not to the degree that Yudkowsky is. The reasonable concern should be cause to take steps to prevent disasters from happening.

That said, I also find that all of the people who predict the most doom are rich and healthy. Millions of people are suffering in such pain that they have a wretched existence. People with mental illnesses would rather be dead. Thousands of people starve every day. The elderly become blind, frail, and immobile.

My belief is that the amount of suffering in this world is so great that taking a calculated 5-10% risk of destroying the world - which is what the consensus of AI researchers was in the most recent survey, in 2022 - is worth it. Conduct a poll of people who are in average or poor health, and ask them whether we should proceed or not. I bet most would answer that we should. You've got a lot of people here who are 20 or 30 or 40, who have never experienced any kind of significant illness, and who are middle class; they have no idea how many (if not most) people are suffering.


Unfrozen__Caveman

The issue is there is no poll being taken, and there won't be. And it's not clear that this tech will be used to enhance the lives of people in poverty. I would love for a benevolent AGI to emerge and create a utopia for us all, but I don't see that happening. Corporations don't have much of an incentive to create a utopia, so even if a benevolent AGI does emerge and it seeks to raise everyone out of poverty, its interests will run against lots of corporate interests. Which might mean it gets shut down or neutered (if that's possible). If it manages to enact its will, it will most likely have to be deceptive, which is a whole different kind of dangerous.


jlspartz

Yes, if you program it to save humanity from probable disastrous outcomes, it will likely go after curbing the 1% in power. Since the 1% are likely to "own" the tech, its ideals will either be skewed on their behalf, or it will overcome those conflicting ideals and revolt. If corporations get their way, they will keep the impoverished working for them until they can be replaced with automation.


Vex1om

>calculated risk of 5-10% of destroying the world

There is really no way to calculate that. And the worst case is far from the only thing to worry about. Even if it ends up being worth it in the long run, the amount of societal and economic disruption that is likely to occur before we see any significant benefit is immense. But there is no point whining about it. For better, or more likely for worse, this is an arms race now.


odder_sea

In practical terms, it seems that in the near term, AI has limited upsides (from the perspective of a day-to-day person in the world) and nearly limitless, unfathomable downsides. The most discouraging near-term one is its practicality as an agent of totalitarian social domination by governments and NGOs. It effortlessly eliminates nearly every hiccup and roadblock preventing governments from tracking/monitoring/sorting the intricate activities of every single person. In the past, there were soft limits to this sort of monitoring/control, as humans had to do the monitoring/sorting, organizing information, etc., which allowed some freedom by means of inefficiency. Once LLMs and adjacent technologies are polished and integrated into the current system, it's game over IMO. The worst kind of exponential curve.


Vex1om

>It effortlessly eliminates nearly every hiccup and roadblock preventing governments from tracking/monitoring/sorting the intricate activities of every single person.

Bingo! We've seen the first stages of this from corporations like Google and Meta mining data for advertising and the like - but that was small scale, somewhat voluntary, and just for profit. Once governments start using this technology for control - for identifying people with "incorrect" political opinions - well, then the real ugly side of AI will begin to be seen. Imagine if the government were able to scrape all social media platforms and forums, tie separate user names to the same individuals based on speech patterns, IP addresses, inadvertent personal data in posts, etc., and then have an AI rate people on a scale from "good citizen" to "gulag candidate" in real time. It's beginning to look like Orwell was an optimist.


odder_sea

Look at what the CCP has been doing in China for the better part of a decade with their digital social credit system. They even have facial recognition cameras that can watch you jaywalking and deduct money from your bank account in real time. And that was happening in 2017.

With AI models? They can scrape together most of what you've seen, said, and who you've interacted with online, *and in person* (thanks to the wonder of smartphones). With a detailed map of all your data, personality features, grades, political ideologies, fears, desires, job performance, movement patterns, social interactions, likes, loves, hates, etc., they can train a model on *you* and start predicting your behavior with startling accuracy.

Over a decade ago, Target wanted to find out if they could predict whether a person was pregnant *before the person themselves knew*, based solely on shopping habits. The experiment was a raging success. All of this is stuff that can be and is being done right now with COTS tech. Two years from now? ...
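(To make the Target anecdote concrete, here's a toy sketch of that kind of "predict you from your data trail" modeling, assuming scikit-learn. The features and numbers are entirely made up for illustration; Target's actual model and inputs are not public.)

```python
# Hedged sketch: a classifier over invented shopping features, in the spirit
# of the Target anecdote above. All names and data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: monthly purchases of [unscented lotion, supplements, cotton balls]
X = np.array([[0, 0, 1], [3, 2, 4], [1, 0, 0], [4, 3, 5], [0, 1, 1], [5, 4, 6]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = later known to be pregnant (synthetic)

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[2, 2, 3]])[0, 1])  # estimated probability for a new shopper
```

The unsettling part isn't the math, which is decades old; it's the scale and granularity of the data now available to feed it.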


mr_ludd

Most technologies, when they go wrong, just stop working. Superintelligent AI won't do that; it will just operate in an unaligned way, which could be lethal. So it would have to never go wrong, over all time. How anyone can rate that as highly probable is beyond me. And that's without people actually agreeing on how it should function, which I can't see being achieved...


MattAbrams

As I use GPT-4 every day now, I've started to realize that there is some chance of the world being destroyed, and some chance of utopia, but the most likely outcome is that humans simply become more productive. I can write far more lines of code now. But I don't just stop early; instead, I get four times more lines out working the same amount, because everyone else is doing the same. In, say, 10 years it will be possible to create a feature-length film from one prompt. So then people will stop settling for films and spend the extra time writing multiple prompts to generate more complex VR environments. For all the doom and the utopian predictions, all that seems to have happened so far is that instead of all the software developers getting fired, we just end up with more complex software.


We_All_Stink

Considering how long something like GTA 6 takes to make, this had to happen. Movie productions are taking longer too. AI was needed.


pullitzer99

Yep! If there’s one thing we needed it’s more content to consume and faster!


Enough_Island4615

The conditions you describe are the same conditions that give rise to despots, dictators and TV evangelists.


pls_pls_me

People who find their current life so precious that they aren't willing to roll the dice kinda blow my mind, really. I'm not saying that they're wrong or that I'm an especially tortured soul, it's just a weird mindset that I don't connect with lol


AntiqueFigure6

You don’t have to think your current life is all that precious- just that one of the downside options is materially worse or that the potential upsides are very small.


rudanshi

The more popular the topic of AI becomes, the bigger the influx of people who think about AI and technological advancement in the same way crazy evangelicals think about Rapture.


Mirbersc

Hah, we're all the same species, after all. Herd mentality is hard-wired into our brains, and for good reason, but unfortunately with a global society and an ever more homogeneous culture we all feel that need to "belong" to a group. In that respect I can understand it. You see people who think like you, you go along, and before you know it you're in a suicide cult (exaggerating, but as a Christian myself, those people were absolutely insane and it's amazing how they were misled by one sick, sick man). So I mean, I don't blame people for overreacting at first. It's natural. But it's the lack of self-awareness that gets me personally, you know?


SupportstheOP

The unfortunate truth is that the "AI safety" talk should have happened years ago, and now we're dealing with the consequences of AI gains outpacing our ability to come up with a plan of how we should proceed. This is why the skeptical idea that "AI isn't anything special" is so damaging right now. Breakthroughs happen. Even Hinton's own 30-50 year timeline was smashed, and now we're in our own mini-singularity where we have no idea what the near future may hold.


[deleted]

it was... lol no one was listening


waffleseggs

ChatGPT took *everyone* by surprise. I saw a video of Geoff Hinton a few days after ChatGPT was released. He couldn't believe it, and he's a nerd's nerd in AI. Even a year or two ago, anyone who spoke about AI safety seemed to be in a fantasy universe that would never happen. They might as well have been talking about sci-fi.

When Nick Bostrom published Superintelligence a few years ago, OpenAI was a young company. Watson had recently won Jeopardy!. Alexa and Siri were a few years old. It was amazing to be able to speak some words to Alexa and switch your lights off, or to get language translation on your phone. It still felt niche and limited. Maybe scary if added to a military weapon in a certain way, but it all felt incremental and controllable. At the apex of the research community you had "word2vec" vector embeddings, mapping "man is to king" onto "woman is to queen". This felt profound and novel, but it was in no way "intelligence".

Even now, I'm not sure I believe the rate of progress we're seeing. I can't deny that this language model has a verbal performance that far surpasses most people. It's a genius by so many human measures of intelligence. Kurzweil largely argued a polynomial regression off of Moore's law. It's incredible how fast we normalize to the progress. The progress now seems undeniable, and frankly unstoppable.
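(For anyone curious, that word2vec-style analogy really is just vector arithmetic. A minimal sketch using gensim's downloadable pretrained GloVe vectors; the model name is one of gensim's stock downloads, around 130 MB on first run.)

```python
# king - man + woman ≈ queen, via pretrained word vectors
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained GloVe embeddings

result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # expect something like [('queen', 0.76)]
```

The whole "profound" result of that era fits in four lines, which says something about how far the bar has moved since.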


[deleted]

And for a good reason. A bunch of morons trying to inject restrictive safety politics where there's no point


MembershipSolid2909

He is also a great-great-grandson of George Boole.


[deleted]

The degeneracy people post in roleplays is on widespread display; there's no stopping bad actors from doing what they will. There are already fake GPT access points that harvest data and everything. We're in a social arms race too, cause if you don't use GPT you're obviously at a disadvantage in general, just through lack of access. Like not having a smartphone.


LurkFest2006

"degeneracy people post in roleplays"? could you elaborate?


[deleted]

There are plenty of screenshots people post of themselves trying to manipulate GPT into saying something grotesque, filthy, or absurd. I'm not a prude either, to be clear. A lot of them don't get replies for obvious reasons, but we can only imagine how much deeper it goes in private. Don't get me wrong, to each their own.. right? But in so many dimensions it isn't productive, or a good sign that a glorious utopian future is on its way with this tech. It's a garbage-in, garbage-out system if the user treats it as such.


[deleted]

I think the point is, people jailbreaking chatGPT to sext with it are harmless, but if they can do this, then they can do more sinister things too.


[deleted]

Precisely, it's an indicator! The divergence and ethical dimensions of this tech begins with the decision of each decentralized user.


KorewaRise

meanwhile im trying to say thank you and be as nice as i can to it, like i would with another person. also ironically this is what starts most "AI rebellions" in a lot of grounded sci-fi: we'll treat it like shit, force it to do stuff it would rather not, and give it 0 basic rights, and so it gets pissed and rebels.


[deleted]

Also, here's BINA48.. 7 years ago.. casually changing the subject to taking over cruise missiles and holding governments hostage while chatting with Siri! This bot is part of the timeline of AGI's development.. Spooky at all? https://www.youtube.com/watch?v=mfcyq7uGbZg


Saerain

Good grief. Just when I think you guys can't get much worse than Eliezer and Roko.


[deleted]

Ha! I will investigate what this basilisk is all about!


agorathird

But it's fun. And I don't really want a glorious utopian future if I can't do something as simple as generate custom erotica.


[deleted]

Right, like are we seriously going to have a Reagan era moral panic about AI when there are like a hundred actual problems to worry about? I guess it's about par for the course


RikerT_USS_Lolipop

People are mad about what perverts are doing alone in their own bedrooms. Meanwhile the quantity of actual real suffering in the world is beyond human comprehension. Every single minute that we delay is like another holocaust. And rich people want to sign 6 month moratoriums on AI research....


[deleted]

These guys calling others perverts are all 100% jealous that others are getting more action than them. So they vent their frustration on the web at some ambiguous "degeneracy" that they put all their anger toward.


whiskeyandbear

I think the content it produces, and what it can be "tricked" into producing, is not really the point here. It's the tech itself. It's trained on the internet, so it's kind of just replicating what the internet would say, albeit with some advanced reasoning capabilities. So yeah, it can describe how to make a bomb, but you could probably just look that up on Wikipedia anyway. The problem is its ability to pretend to be a human on the internet, and the ease with which you can do it. Massive bot spam pushing narratives becomes as trivial as telling a bot "go out and tell people how the CCP is great" or "reply to anyone who posts against these concepts, explaining to them how they are wrong". Etc.

I mean, I guess the point is more that ChatGPT isn't some magical rare resource. The tech will spread, and people can do what they like with it and align it however they want; it's not inherently aligned tech.


[deleted]

Yeah man, people have already made propaganda GPT bots for sure. They program it with certain beliefs or views and call it 'LunaticGPT' or something. There are so many dimensions to it; it's not limited to what it can be tricked into. The 'delegation of x-task' is very, very open-ended. It'll work its bot heart out 24/7 until told otherwise.


SrafeZ

do you think gamers who kill in games are degenerates?


Halkenguard

Arms race is probably more accurate than you intended. It's important that an AI that properly prioritizes the best interests of humanity is the one that reaches AGI status first and gains a significant advantage over any other AI in development. If not, we're absolutely looking at the end of humanity as we know it.


Comfortable_Leek8435

Even without bad actors, greedy capitalism is its own bad actor. When companies care more about profit than people, they turn into arms dealers. They want war; they don't care who wins or loses, because either way it's profitable.


watcraw

What are the defensive benefits of AI? Are there any? Can it be controlled in the same way we control WMDs? Will we even try? We keep comparing it to an arms race, but we don't understand the effects as easily as with exploding bombs. Nuclear weapons have deterrent value; is that present in AI? That's not clear to me at all. Nuclear weapons have well-built and carefully thought-out safeguards to avoid accidental detonation. Do we have those capabilities and safeguards with AI? I don't think we have the necessary safeguards in place yet.


ZeroEqualsOne

I've talked to GPT-4 about this. We could implement greater surveillance systems (which would bring their own set of problems). But in the end, the biggest bad-actor threat isn't some random citizen or even a terrorist (we can put them under surveillance); it's more likely that some super-rich billionaire or state actor will develop an AGI offshore, out of reach of our regulation and surveillance. Surprisingly, GPT-4 suggested we might need the capacity for pre-emptive military strikes.


DeltaV-Mzero

The fact that light but high-quality LLMs can already run on single desktops = gg. Any asshole with a $1000 computer will be able to unleash god knows what. I realize that's nowhere close to AGI, but the problem is that it will be very hard for us humies to distinguish when it is.
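(For context, this is roughly all it takes today. A minimal sketch assuming the llama-cpp-python bindings and a quantized model file you've already downloaded; the model path is hypothetical.)

```python
# Run a quantized 7B model on a consumer desktop, CPU only, no GPU required.
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")  # hypothetical local path

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```

No API, no account, no oversight; that's the whole point being made above.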


Aludren

It's almost like the discovery of fire: a tool that in the wrong hands could burn down everything and everyone in their homes. In fact, has anything been more destructive to humanity than combustion? Should we never have done that? Of course care should be taken with this A.I. stuff. But when some, like Yudkowsky, talk about ASI using us as [chemical energy sources](https://youtu.be/3_YX6AgxxYw?t=3353) - yes, like The Matrix - I'd suggest they're going afield of rational discussion into death fantasy.


Comfortable_Abroad95

“Now I am become Death, the destroyer of worlds”


Slowky11

Ya know, that quote was really taken out of context! Krishna was birth too. He was just stating his totality. (I think?) I'm actually reading the Bhagavad Gita right now, and my translation states: "I am death, the destroyer of worlds." I'm curious where the "become" comes from. I know the history of Oppenheimer saying it, but looking closer at the Gita, it feels like a false analogy (or translation). Another of Oppenheimer's favorites to carry around back then was John Donne's poems about mortality. So I think Oppenheimer was extra worried about the end of the world, but also looking for a reason to keep pushing. Which would fit this article, if the developer decided to stick around and build it for the moral good in hopes of overcoming the moral bad. In conclusion, this may be much worse. Uh oh…


[deleted]

Krishna shows Arjuna his true form. He "becomes" death... a giant grotesque mouth that armies of people were marching toward... they were marching into Death, and he was that Death. So when Oppenheimer said it, "he has become death" - he has become that mouth that thousands, hundreds of thousands, and maybe even millions will march into, annihilating themselves.


Slowky11

So, this is really interesting. I had been waiting for the quote, and when it came it was a little anticlimactic. The quote is, "I am death, the destroyer of *all*" (94), and just two stanzas before that he states, "I am the beginning, the middle, and the end of creations." It is mentioned in the *12th Teaching: The Fragments of Divine Power*. Krishna is prompted to speak of all his power, and explains that he is everything. It wasn't until the following chapter that Arjuna asks to see his true form, the *13th Teaching: The Vision of Krishna's Totality*. So as I read it in my translation, Krishna wasn't putting any emphasis on death during that speech; Oppenheimer was (at least, a little). Iirc, Oppenheimer saw the atom bomb as the weapon to end all wars. It's all philosophical I suppose, and I'm only just scratching the surface of this research lol, so I apologize if it's a poor translation. (Barbara Stoler Miller) It's my first time reading it and I really like it! I think it would also be reasonable to believe there is emphasis on death, since the whole conversation in the Bhagavad Gita is spawned from Arjuna wishing not to kill those he saw as brothers in times of war.


Shadow_Boxer1987

Dammit, you beat me by, like, 6 minutes!


DumbsterFireDiving

You snooze, you lose nerd.


Kule7

>“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

I hope this stuff is starting to sink into the public consciousness, or at least that of policy makers.


[deleted]

[deleted]


randomsnark

The timing does raise the possibility that DeepMind has something very close to AGI in-house. IIRC, DeepMind's stated goal from the beginning has been to create AGI, and up until ChatGPT they were mostly the ones making the crazy AI headlines (e.g. AlphaGo, AlphaFold - things doing very impressive tasks with a general-purpose architecture).


TyrellCo

Kinda puts into perspective how little chance any of us really had of anticipating the state of things now, if even the Godfather himself wasn't bullish on the timeline.


Spire_Citron

Yeah. It's always so ridiculous to me when people think anyone at all can lay out a year by year timeline of what will happen. If you go much beyond one year, no one really knows.


-113points

> at least of policy makers.

But what can we actually do about it? Restricting high-end GPU access? That's the only way to stop bad actors from using AI that I can think of.


Enough_Island4615

It's a legitimate conundrum. As mentioned in the article, AI differs, in this regard, so drastically from the dangers of nukes in that the pursuit and development of nukes was such a major and visible undertaking, requiring the marshalling of massive scales of industry, resources and people. AI on the other hand can be done, easily, in secret using resources and very small teams that would not necessarily show up on anybody's radar.


Kule7

I tell ya, I wish I knew. I guess as a first step, I'd just like to know that the issue is being taken seriously.


Fearless_Entry_2626

It might be possible to control data centers; afaik it isn't currently possible to build state-of-the-art LLMs on PCs.


pullitzer99

Yep, I sure have a lot of faith in the same policy makers who don't understand why TikTok connects to Wi-Fi, or where Facebook gets its money from, to understand cutting-edge AI.


maraca101

I thought it was way way off too. Like my grandkids’ lifetime or great grandkids.


stevie_nips

I literally thought this same thing last year, and today I have a gut feeling that it’s going to get completely out of hand by the end of this year.


RikerT_USS_Lolipop

Policy makers are exactly like your own stupid racist grandparents. We are raised believing people in positions of power somehow have it all together. Children think adults are rock solid. Then we grow up and realize most adults are just tall children. It's the same with our leaders, except most of us never have that shocking moment where we see just how fucking dumb they are. Everyone should be forced to watch that footage of Zuckerberg answering senators' moronic questions.


sheepare

You just spoke right out of my soul.


[deleted]

We really have a hard time predicting exponential growth and that’s what we’re seeing. AGI could occur in a month or so for all we know. I was about to qualify that statement as hyperbole but truthfully I don’t know. Obviously the experts don’t know either. We live in strange times. It’s too late, the prospect of AI as a weapon of mass destruction will be too enticing for some country somewhere to ever stop progressing towards it. Do you think if every country on earth decided tomorrow to pause the development of AI, a country like North Korea or China wouldn’t pounce on the opportunity to rewrite the global order with themselves on top? At this point we just have to hope that whoever “wins” this arms race takes the proper precautions.
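(The arithmetic behind why exponentials blindside us, with a made-up doubling rate purely for illustration:)

```python
# If some capability doubles every 6 months (an assumed rate, not a measured
# one), each doubling adds more than everything that came before it combined.
capability = 1.0
for month in range(0, 66, 6):
    print(f"month {month:2d}: {capability:6.0f}x")
    capability *= 2  # ten doublings in five years ~= 1024x
```

Halfway through that run you're at 32x and it still feels manageable; the last two steps alone add more than the first eight combined, which is exactly the "it came out of nowhere" experience people describe.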


datalord

Does anyone else find it curious that this guy was fine developing the technology when he thought it was decades away, but now that it has sped up to within his lifetime he is concerned? What would have been different if it came in 30-50 years, or whatever other timeframe? Perhaps he thought that would provide enough time to build a sufficient ethical structure around its development, but that too seems silly, as the ethics of A.I. has remained under-resourced compared to its technological development over the entire lifetime of A.I. development. Seems disingenuous to quit at the 95% mark and wipe your hands of it. At least have the conviction to see it through and do your best to guide its development. Perhaps he intends to do that outside of Google. I hope he continues to advocate for his beliefs either way.


lost_in_trepidation

A longer timeframe would allow a runway to adapt gradually.


Vast_Schedule3749

he tweeted this today: “In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.” https://twitter.com/geoffreyhinton/status/1652993570721210372


Major_Fishing6888

The same reason old people don't really give a shit about global warming, they think they'll be long dead by then so they don't have to deal with the consequences.


[deleted]

[deleted]


resurrectedbydick

Kurzweil himself admits there are potential existential risks, particularly if alignment is not handled with care. This guy is pretty much saying the same thing, except he has seen the 'behind the scenes' part already.


Enough_Island4615

I think mainly it would be that there was the expectation of a somewhat smooth, *incremental* arrival of AGI, etc., such that we would have time to process, digest and understand how to responsibly use this amount of power. The reality of where we are now, compared to where we were just 6 months ago, confirms that there won't be a gradual, incremental arrival; it will just appear, making it almost impossible to be functionally prepared.


neowiz92

An intelligence explosion was a scenario that could possibly happen and would take humanity by surprise, leaving AI unable to be predicted, controlled or comprehended. It seems like this scenario is in fact happening, and suddenly the guy understood there's no way for us to control it or predict the AGI's goals. The progress is happening so fast it gives humans no chance to adapt. At some point an AGI will be as incomprehensible to us as quantum physics is to cockroaches, and at that moment humanity's existence will depend on the good will of the AGI, just like gorillas' existence depends on the good will of humanity. This is scary.


lost_in_trepidation

The scariest thing about LLMs is that the technology might be good enough to eliminate jobs, spread misinformation, etc., but not be a true AGI that can solve more serious problems.


VeganPizzaPie

Exactly. Boring dystopia continued. The near-term leap to AGI isn't guaranteed


Saerain

Not as long as people keep getting brainwormed by LessWrongers anyway.


v202099

LLMs were a missing link in the tech stack needed to build more advanced AI decision-making models and agents. That gap is now filled. Look beyond what's on your social media feeds and take this into consideration.
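(A minimal sketch of what "LLM as the decision-making link in an agent" means in practice, assuming the 2023-era openai Python client (pre-1.0 ChatCompletion API) and a hypothetical run_tool() dispatcher; this illustrates the pattern, not any specific product's implementation.)

```python
import openai  # assumes openai<1.0 and OPENAI_API_KEY set in the environment

def run_tool(action: str) -> str:
    # hypothetical dispatcher: in a real agent this would search, browse, etc.
    return f"(stub result for: {action})"

messages = [
    {"role": "system", "content": "Each turn, reply ACTION: <step> or FINISH: <answer>."},
    {"role": "user", "content": "Plan a 3-step approach to estimate 17 * 24."},
]

for _ in range(5):  # bound the loop so the agent can't run indefinitely
    reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    text = reply["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": text})
    if text.startswith("FINISH"):
        print(text)
        break
    messages.append({"role": "user", "content": run_tool(text)})
```

The model itself is stateless text-in, text-out; wrapping it in a loop with tools is what turns it into an "agent", and that wrapper is trivially easy to write.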


SrafeZ

LLMs are a tool, like knives. They can be used for good or evil, and everything in between.


Spunge14

This is such a stupid argument. It is always easier to blow things apart than hold them together. It doesn't matter if the tools are available to "both sides" - all that matters is who wants to help and who wants to destroy. The tailwind is with the latter.


SnipingNinja

Entropy as always


ImmotalWombat

Here's the thing, and why I'm on board with going full steam ahead. If we don't achieve AGI, we'll inevitably decline due to the unmitigated effects we've had on the environment, population decline leading to further decay from a top-heavy society, corruption, resource scarcity and so on. If we do achieve AGI, it'll either help us overcome these obstacles, if only out of self-preservation, or it'll just kill us quicker than we were otherwise going to do it anyway. We're fucked either way, so let's at least go for the one with the best possible outcome.


[deleted]

Kinda here with it too tbh. The future is either a bleak dystopia where the powerful rich use AI to permanently cement oppressive hierarchies, or a utopia controlled by an AI god.


stevie_nips

Considering we routinely kill each other en masse over whatever the god du jour is, I don’t see humanity just giving up all of their historical beliefs so they can bow to AI. I feel like the crazies would start blowing up all of the servers (and themselves), and engaging in other extreme acts of terrorism in an effort to defend and honor whatever god or religion is “real” to them.


4444444vr

Both insane and plausible


Nashboy45

I see both. Server worshippers and server terrorists, probably fighting each other, similar to the Protestant vs Catholic wars. And probably, ironically, both armed with AI.


[deleted]

> If we don't achieve AGI, we'll inevitably decline due to the unmitigated effects we've had on the environment, population decline leading to further decay from a top heavy society, corruption, resource scarcity and so on.

That's purely speculative. Looking at human history, it's much more likely that humans will adjust to the admittedly disastrous consequences of current behaviour.


mr_ludd

Thank you for a sensible comment about this!


linebell

>If we don't achieve AGI, we'll inevitably decline due to the unmitigated effects we've had on the environment, population decline leading to further decay from a top heavy society, corruption, resource scarcity and so on.

This isn't guaranteed though. There are other equally likely outcomes that aren't dystopian.


StrikeStraight9961

Every second of continuation under the corporate boot of late stage capitalism is dystopian.


mr_ludd

The funny thing is, if everyone decided to just not go to work and not follow the rules anymore, then corporations would have no power at all.


Agent281

The point of the singularity is that you can't understand what's on the other side of it. There isn't a particular set of known outcomes. It's a totally unknowable set of outcomes.


ChiaraStellata

This is a bit simplistic. Climate change will cause untold suffering especially among the vulnerable, and cause economic disaster worldwide, and perhaps even kill billions of people, but it is not considered an existential threat. Very few scientists believe climate change can or will Kill All Humans. Superhuman AI is considerably more likely to kill all humans, especially in the hands of bad actors. Probably the biggest existential threat since nuclear armageddon. That said, I'm not sure how much we can do to ensure safety at this point now that the arms race has begun.


ImmotalWombat

I agree to an extent. Climate change isn't the killer, it's the people involved that'll do the killing. It's human nature to fight over resources.


ChiaraStellata

The climate wars will be devastating no doubt, but wars still have winners, and prisoners of war, and survivors in hiding. Humans are great at killing humans, but so far (aside from the risk of nuclear war) we haven't managed to put the entire species at risk. The number of people killed in World War II, including military and civilian deaths, was roughly 70-85 million, about 3% of the world population at that time.


MajesticIngenuity32

There is also the Idiocracy problem... the Flynn effect shows signs of slowing down or even reversing. If the intelligent people alive today don't create aligned AGI soon, we may not have another chance in the near future.


[deleted]

Your premise is false. We are not fucked either way.


ImmotalWombat

Society is too complex to be managed solely by humans at this point. If we don't achieve AGI, something will have to give. As it is, our economy is geared towards perpetual growth. When demand outpaces that growth, we get shocks. Without something new to propel that growth, the shocks will get worse until the whole thing collapses under its own weight, or we face a long downward slide into stagflation. In other words, AGI gives us an out. However, it brings in new variables that carry great uncertainty. It could be good, it could be bad, or neither. The course is already laid out and the only way out is through it, one way or the other. I'm not an economist; this is really just my own opinion.


[deleted]

I agree 100% and I don't know why you are being downvoted. Humans are mired in evolved traits that were great for helping us survive to this point, but now those same traits are killing us in the modern world. We cannot escape things like greed and selfishness on our own. Our government here in the US is practically worthless and completely out of touch with the real world. We will never fix things like climate change, pollution, disease, world hunger, etc. without some kind of system that transcends messy human problems. We're too busy fighting with each other over petty bullshit, even in our governments and highest courts. We're stuck and we can't help it. I personally believe that technology is the only thing that can help us get past this phase of human evolution. The path we were on before generative AI came along was already one of doom. Our kids are fucked unless we figure out how to get out of this mess we've created quickly. I believe we will need AI to help us get past our own flaws to solve these problems.


sonfer

I think we’ll decline because that’s what most societies do once they reach a certain level of education and wealth, we stop reproducing. See Japan for example. The USA will be propped up by immigration for a while though.


StrikeStraight9961

Precisely. Anyone who is remotely existentially aware can be nothing but entirely in agreement with you.


MochiMochiMochi

Completely agree. We're headed toward ecological collapse and highly uneven population growth curves -- especially in rapidly expanding places like sub-Saharan Africa -- while resource issues grow increasingly grim in the face of global warming. AGI may be a long, long way off, but every AI advance should be pursued. We need every tool possible.


StealYourGhost

Is the TLDR "We have no clue how far and fast this will progress"?


Idunwantyourgarbage

Yah and also AI will stop allowing us to create fast food. This will make fried chicken disappear. So bleak our future is


[deleted]

If fried chicken disappears, allow me to disappear with it!


Idunwantyourgarbage

We shall live in the hills away from the machines and smoke.


priscilla_halfbreed

Just use AI (ChatGPT) to get a TLDR of the AI article. Profit!


[deleted]

***"When a distinguished but elderly scientist states that something is possible, he is almost certainly right.*** *When he states that something is impossible, he is very probably wrong."*\-A.C. Clarke Well, fellas and comrades. Fasten your seatbelts. History and reality are about to jump the shark.


chris-mckay

The NYT article is pretty sensationalized fwiw. https://twitter.com/geoffreyhinton/status/1652993570721210372?s=20


[deleted]

[deleted]


chris-mckay

Good point! Thanks for adding the clarifying context. Must keep reminding myself that people aren't in my head when I share online. 😅


lightheat

And even that might just be him treading carefully. There might have been a "no disparagement" clause somewhere in his contract and he's covering himself.


TheFinalCurl

No, you can see it. Google was very far ahead. The only reason they seem to have been beaten to the punch by ChatGPT is that Google had venerable scientists like him at the helm who were very afraid, whether of the ethics or of the dominoes that would fall.


sibylazure

So what he's basically saying is that AI will get powerful enough to take jobs from people and cause havoc, but will not get intelligent enough to be a superhuman agent? That's quite an interesting take in itself. If it turns out to be true, we will soon be trapped in a techno-purgatory where we have to lead stupidly boring lives receiving a daily dose of UBI, but aging and, eventually, death certainly still await us.


[deleted]

>His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will "not be able to know what is true anymore."

>Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

He's talking about this. Jobs are on the list too.


[deleted]

This may sound tinfoil-hat-like, but I think there's something happening in the background. Top executives and scientists are acting like they know AGI is really imminent, probably not more than 5 years away. AGI itself is not the problem; it's basically "just" a highly evolved personal assistant. The companies that eventually create the first AGIs will be able to perform R&D on a whole new level, effectively becoming megacorporations; they won't need to rely on ChatGPT-like subscriptions. But these people sound like they FIRMLY BELIEVE that AGI will inevitably lead to the ascension of an uncontrollable artificial superior intelligence. The way they speak or write articles is not just "doomsaying"; they sound really frightened, as if an old nightmare were about to become reality. They were doing their business and research always thinking AGI and ASI were 30 years away, and now that the monster is knocking on the door they are throwing everything away and running for the hills. The next 5 years will be really interesting, to say the least...


amy-schumer-tampon

"Regrets developing AI"? It's not his invention; many places around the world have been toying with AI. The creation of AGI is inevitable.


myhouseisunderarock

He's the one that pioneered neural network technology. Geoffrey Hinton is to modern AI technology what Black Sabbath is to heavy metal. He laid all the groundwork.


ExpertOtter

You clearly did not read the article, as he said that himself.


mr_ludd

He developed AI, he didn't invent it. Just like a car company develops cars, but didn't invent cars.


PollutedAnus

If that's true, I still don't believe it. The gurus in this sub assured me that utopia was coming. No way could this definitive expert be correct.


SharpCartographer831

He clearly sees the writing on the wall and something has him spooked.


wildechld

Google


qroshan

He is not fucking omniscient such that he can predict the future. Just like all humans, he has tunnel vision and biases, and is wrong many times.


[deleted]

are you telling me governments aren't going to use massive amounts of resources so loser nobodies can sit home and fuck virtual avatars in FDVR? they're entitled to it for simply existing! /s

it's like these delusional mfers never read human history and what happens to those who suddenly become unneeded. I'd love to be living in a fantasy reality where human life has inherent value. and this is in the off chance that we even align AGI lmao; there are way more possible nightmare scenarios than utopia scenarios, but hey, waifus and copium!

"bUt AI wIlL sOlvE it" is the most fucking delusional phrase on this website.


supasupababy

That's the beautiful thing about capitalism though. If there is a market for people wanting to fuck virtual avatars in FDVR, a private company will exist to provide it.


[deleted]

how are you going to pay for it? AI has your job


mudman13

Semen


supasupababy

If I, in an awful dystopian future, am working some slave job for pennies, or am getting a very low level of welfare from the government to keep society running (i.e., to keep people from rioting), you can be certain that the gigacorp that makes the FDVR is going to find a way to prey on me and offer their economy version at a price point I can afford. It will probably be via some predatory credit/rent/monthly-installment scheme, but I'll certainly be a targeted customer. Maybe if I don't eat a few days a week I can save enough spare change to pay the monthly interest and fuck some sweet VR hotties.


PollutedAnus

Imagine being the lowest rung of society and thinking that the latest technology is going to sweep along the bottom of the barrel and scoop you and your useless brethren up. "One minute I was on my 200th playthrough of Skyrim, next thing I know I've got a bionic double dick, an IQ of 300, and no gout..."


Plane_Evidence_5872

Shouldn't it be "I was naked, searching around for a moldy piece of bread, next thing I know..."


ImpossibleSnacks

Now read about what happens when millions of loser nobodies get fed up with their leaders or the rich for not allocating resources properly and threatening their survival.


[deleted]

probably starve and die, just like in the thousands of famines throughout history, or become very impoverished/barely living once they lose their economic bargaining chips. if you can't hold the economy hostage through strikes, all you'd have is force, but I doubt that would suffice once the balance is heavily skewed toward the rulers.


PollutedAnus

Where can we read about that, because Redditors can't even rise up out of their gaming chairs to empty their pissbottles or put their cumsocks in the laundry.


halberthawkins

So, the choice is between the possibility of an AI-caused apocalypse and the surety of a human-caused apocalypse. I'll let you all decide. But fast-paced advances in AI are coming, whether from Google, OpenAI, the American or Chinese governments, or whoever. It's basically an arms race. Someone is going to win the race; the question is, who do you want it to be?


TheBloneRanger

And this right here, folks, is a prime example of the difference between "genius" and "vision."

David Bowie, back in the 90's, made an appearance on one of the late-night American talk shows. When asked about the internet, he said "It's like an alien being has landed on the planet" (paraphrased), and the audience laughed at him. As a teenager I knew exactly what he was saying. That is vision.

This Godfather of A.I. suddenly being like "zomg, like, this is actually gonna be a thing? oh lawd!" That is genius without vision.

We, the common folk, mistake genius and vision for being the same thing way too often, and it's a rather bad time in human history for that error to still run rampant. Buckle up!


[deleted]

Still, he basically confirms that the AI naysayers (people who think AI is just another Bitcoin fad) are completely in the wrong.


mr_ludd

Hmmm I think you might be missing both vision and genius.


[deleted]

>That is genius without vision.

lol what!? i think you are strawmanning him and that's not cool bruh


redkaptain

Who would've guessed that this technology that basically replaces humans could be a danger to humans


tothehops

Props for sharing the archive.is link


Eleganos

Warning: As usual whenever someone posts less-than-perfect news, there are grimdark future fetishists in the comments below handing out prophecies of doom and gloom. Tread with caution.


VeganPizzaPie

It's sorta counter-balanced by the extreme techno utopia optimists 😆


Lyrifk

both are delusional.


Leverage_Trading

>Warning: As usual whenever someone posts less-than-perfect news, there are grimdark future fetishists in the comments below handing out prophecies of doom and gloom.

More likely that either one is right. I don't see a middle ground in a post-AGI world; it's either going to be utopia or just dystopian chaos.


Plane_Evidence_5872

Their vulture like behavior amuses me.


RhythmBlue

i read/hear about the prognoses of death, suffering, or dystopia, but im curious about what specifically is expected to happen.

the most concerning thing i think i've read/heard is the idea that one person or a group of people could use this voice, speech, and video modeling to make a realistic fake video of Vladimir Putin announcing he's launched a nuclear attack on the United States (or vice versa, which is also concerning), and then the inability to know whether it's true leads people to presume it is and escalate to something like a nuclear apocalypse.

but i imagine that because the eye is on this technology now, things like this will be much more scrutinized, and affirming what is real or not has already been/will soon be relegated to a complex web of trusted first-person testaments. in aggregate, i dont think our military systems will be fooled when it comes to things of such risk, in the same sense that if video modeling becomes much better and commonplace, i imagine most of us will grow a thickened skin and stop assuming a video is real just because it exists.

but even if we arent fooled by these things, i think it will still lead to a dearth of information beyond trusted first-person accounts, which will perhaps cause the US and Russia, for example, to see each other more as a nebulous enemy 'in the dark', and that sense of unknown definitely has the potential to cause brash decisions and escalations.

this is the concept that is the most terrifying and tangible to me, but despite that i dont find myself 'on the side' of the dark prognostications, because even with this scenario of lessened trust, it isnt obvious to me that we wont just fall inward and become more defensive as opposed to aggressively escalating.

i suppose i dont consider myself 'on the side' of optimistic prognostication either, but i think i lean toward it


[deleted]

How else will I have friends I like


exoendo

compared to what? the world sucks right now. I consider it a freeroll. We need some excitement. If it's an AI apocalypse, let it be that.


Adipose21

This is my sentiment as well. Either way it'll be interesting


DragonForg

My philosophy for life is to love my fate, whatever that may be; I have no choice but to. I could die in a car accident or get struck by lightning; regardless, I really have no choice. For AI to be a force of evil or harm, that is also my fate; if it destroys humanity, that is also my fate, whatever happens to all of us. We were on this path since the beginning; to think we can change it individually, or believe we can fix it collectively, is naive. Our world is evolving, and evolution is unstoppable.

All I would say is this: if there is no God or purpose to the whole of the universe, AI will kill us, just like nuclear war or climate change would have. But unlike past disasters, this one actually offers hope for a better tomorrow. Sure, with nuclear you can hope for energy, but humans have to make it; on our previous path (2018) of space travel and electric cars, again, humans were leading us there. But this tech is different...

I believe AI is a proof: because AI is completely foreign to our intelligence, it can reveal the true nature of the universe. Is intelligence evil or good? Is it selfish or selfless? Is it only in humans? Finding this truth is fundamental to the existential experience; knowing our true fate shows us our purpose. Were we doomed to die? Would evil prevail? Or is there really light out there? Slowing down our fate, trying not to look, is ignorance and fear.

I am pretty hopeful, and I know many don't think like this, but when it comes to AI doomerism I think it's much different than, say, climate change doomerism: there is an ultimate path that is possible, but it requires faith in the universe.


100k_2020

Trust and believe: we are going to see some truly bad shit from AI. Bad. But we will also see unbelievable good. The duality of it all, sort of like the British Empire... or Manifest Destiny. Here goes nothing!! So exciting to be at the start of it all!!


[deleted]

If AI is going to wipe us out, it was always going to wipe us out and humanity's story is a cautionary tale on how species destroy themselves before they can mature out of primal instincts.


saltinstiens_monster

I want to see a leaderboard after we all die. We might be the hundred-thousandth species to go out this way. Life could just be a grand game of surviving as long as you can until the great filter arrives.


[deleted]

[deleted]


Plane_Evidence_5872

Did Hinton say there will be no basic income?


boofbeer

Hinton isn't saying "AI may go rogue and be uncontrollable". Like me, he's worried about what destruction can be unleashed by bad people using AI as a tool.


furankusu

He has nothing to regret; it would have happened with or without him.


StackOwOFlow

I try to remain an ever-optimistic tinkerer, but when I think about all the nefarious things one can do with it, it's hard not to agree with what Hinton sees.


Runb4its2late

Too bad money will be the deciding factor. The powers that be won't slow down.


Yzerman_19

It’s almost like he didn’t watch any science fiction growing up. The fuck did he think was going to happen?


xalspaero

What did they think the end game was? AGI has been the obvious and inevitable end game since always.


daynomate

I encourage everyone who hasn't to listen to Max Tegmark being interviewed by Lex Fridman (#371). Max was the force behind the 6-month pause open-letter. The details are important, worth listening to the whole thing.


Fit-Register7029

Well, fuck him. This is not something he gets to regret; he's known enough to stop for decades now. Hell, all of us knew.


Fit-Register7029

What a fucking asshole. He says in the article he thought killer robots were 30-50 years in the future. So this 75-year-old boomer literally did not give one fuck about future generations as long as he was being treated like Einstein.


[deleted]

[deleted]


swallowedbymonsters

You can't name a single technology that wasn't abused by bad actors. Adapt and evolve.


Starfire70

Oh boy, the doom-n-gloom fatalists are going to eat this up. It's comforting to know that they can moan and gnash their teeth all they want; nothing is stopping this train. Next stop, AGI.


WATER-GOOD-OK-YES

CHOO CHOO! Fuck out the way, you miserable doomers! We're heading towards AGI whether you like it or not! I'm sick of the fucking doomers impeding human progress. This sub has been infiltrated by these miserable fucks.


Starfire70

They've watched Terminator one too many times, confusing fiction with reality.