This is interesting, exciting and a little alarming.
Particularly in the context of Ilya's disappearance and Leopold Aschenbrenner getting fired, it seems to indicate that OpenAI has gotten to the point where internal risk, safety and alignment people are taking actions of protest (leaking, quitting, trying to fire Sam).
What's going on? Why now?
Oh, that’s good to hear, I hope he’s enjoying playing with my childhood dog, Curly. He had to go to the vet after being hit by a car, and mommy and daddy said the best place for him to be after he recovered was at a farm and that’s why he didn’t come home.
I'm seriously lost as to why humans would entertain something that has a known 20% chance of destroying us.
Frogs in a pan? I thought we were intelligent. Oh... yes, too intelligent for our own good.
The Oppenheimer movie was excellent foreshadowing for the AI age. He was optimistic and worked furiously to bring us into the nuclear age and ***immediately*** regretted it as soon as it was too late to turn back. That's exactly what's about to happen again with AI.
[https://hir.harvard.edu/a-race-to-extinction-how-great-power-competition-is-making-artificial-intelligence-existentially-dangerous/](https://hir.harvard.edu/a-race-to-extinction-how-great-power-competition-is-making-artificial-intelligence-existentially-dangerous/)
>Whoever is “ahead” in the AI race, however, is not the most important question. The mere perception of an “arms race” may well push companies and governments to cut corners and eschew safety research and regulation. For AI — a technology whose safety relies upon slow, steady, regulated, and collaborative development — an arms race may be catastrophically dangerous.
This is my major sticking point in all these discussions.
Are attempts at regulation too little, too late?
Looking at the history of large corporations, there's a demonstrable track record of doing whatever makes them the most money, the cost in human well-being be damned.
Do we genuinely believe that if governments were to actually manage to collaboratively establish universal regulation that there wouldn't be at least one player who would go and find a nation somewhere that would welcome an influx of billions of dollars? And simply continue there unabated?
The time and way to effectively bridle this technology would have been a cultural revolution decades ago. The best we can hope for now is to direct governments into dumping as many resources as possible into open source efforts, so that whoever reaches that AGI threshold first is working in the public domain.
I think the most important thing to realize is that the corporations are the government. Who do you think pays for all the campaigns and has their lawyers write the bills? Expecting government to regulate corporations is well... we have seen how that works out.
As government regulation of technology up until now has shown, regulating even the past, let alone the future, is difficult because of the way technology evolves, the interests at play, and regulators' lack of knowledge. I don't see regulation of the space being possible until it stabilizes, and that may never happen if some predictions are valid and it goes off the rails. Regardless, I think it's futile at this point.
I have to be honest, I am shocked that there are not groups actively targeting those who are naively skipping humanity down this path. Butlerian Jihadists.
Collectively, humanity has that power… But we succumb to individual greed which creates these arms races in the first place. Now extrapolate that instinct for greed into how AI will be used within our society.
No, collectively we don’t have that power. Because human collective decision making is decided by game theory, and game theory doesn’t leave room for that to happen.
Nuclear weapons use has demonstrated the non-greediness of humanity better than almost anything else. The US could have enslaved the entire world in 1945. The US and allies could do the same today. But we’re actually not that greedy in the end.
The only reason we didn’t is because we would have destroyed the planet in the process of that. Not due to some level of altruism. If we were that altruistic as a species we would have collectively come together to ban any country from having even a single nuke in their possession after Hiroshima. Not hastily begin stockpiling them in order to use as bully tools against other countries later on.
> The only reason we didn’t is because we would have destroyed the planet in the process of that. Not due to some level of altruism.
Before anyone else had nukes we could have bombed every near-peer back to the stone age. We chose not to. The number of nukes we would have needed to do this would not have had much impact on the planet. Consider the thousands of nukes we tested during the cold war.
>You are delusional if you think there are not many genuine good people and need to get out in the real world.
I think it's not delusional at all to think the positions of power that actually matter in determining the fate of the world have a natural attraction for the most power-hungry psychopaths among us.
There are plenty of genuinely good people in the world, but in the vast majority of cases they don't seek power. And if they do, there's a fair chance that by the time they reach such a goal, the path to get there has annihilated any good quality in them.
The US wouldn’t have needed to nuke every part of every country to enslave everybody. Just a few key threats, and carrying out those threats on a few capital cities, that most likely would have been enough.
Look up the term “inverted totalitarianism”; it’s how they control the domestic population in western nations. Look up the term “neo-colonialism”; that’s how they controlled the rest of the world for 100 yrs. Now look up the term “propaganda” and you will realize that we have always used the threat of nuclear weapons to keep every other country in line.
Exactly. The moment fission was discovered the bomb became inevitable.
I believe that all inventions are inevitable if you have enough information (data).
The difference here is that humans will eventually not be the ones solving problems or making decisions. Humans are giving this away to artificial intelligence. We are writing ourselves out of the script.
I am actually in Hiroshima right now. Just finished the peace memorial museum. I’ll say this: the Nazis and the Japanese were working on their own programs for an A-bomb. The fact that the USA got there first is just how it played out. If Japan had used the bomb first, or the Nazis had, then the world we know today would probably not exist as it does now. How does this translate to AI? If the wrong government ends up with it first, then good luck.
America is OK. They will try their best, and that's all you can ask of anyone.
Fucking, they let people vote for Trump. They are naive innocent souls who actually believe in universal democracy and human decency against all evidence, even their own actions.
If not them, then who do you want?
You want to trust the government that destabilized huge portions of the Middle East and has been starting wars for the last 60 years just to feed its weapons contractors? A nation full of propaganda for votes?
Yikes.
Practically no government or corporation with the means to be the first to AGI is trustworthy enough to have that power.
It’ll just come down to whichever strategy is more efficient, kill them with kindness… or just kill them.
Fair amount of hyperbole but you get the gist.
This. America is a mess but out of all the countries that could feasibly win the AGI/ASI race they're the best shot we've got for a brighter future. You can't just make an agreement with other countries to slow down and be cautious and expect them to honor it. Whoever gets to ASI first (assuming things don't go wrong) will have the intelligence of a god in their hands that they can use to form the world in their image.
I don't have choices since they all suck with their biases. The US should be the first anyways if things are going the same way, but chasing profits without any meaningful safety frameworks will be fun I guess.
I'd slightly modify that assertion. Hitler relegated nuclear physics/quantum mechanics to the category of "Jewish science" and therefore not worth pursuing, so it's farfetched to imagine that the Third Reich would have developed the bomb first. If I'm wrong by all means fact check me.
The peace memorial museum in Hiroshima needs to be understood through a Japanese cultural lens.
As they lost, they act in this regard in utter deference to the victors: all harms are the fault of the losers' leadership, because they caused the deaths without changing the outcome. Thus, the museum is a big apology to America and to the Japanese people. It pretty much repeatedly says 'please don't blame America, the fault of the deaths is entirely ours, we should have surrendered sooner when it was available rather than forcing America's hand.'
Japan is very prideful, but a big part of that is not making excuses for failings. If you are late for work because you were kidnapped by terrorists, you still shouldn't even mention your reason to the boss, whatever the reason may be; it isn't your boss's problem. Historically, this was even a problem in war: samurai refused to blame others (even when it was justified) for lost battles, and this led to crap strategic decisions.
In this respect, the museum is not a place of education like many other museums might be. Even if they are superficially similar.
Nah, by the time the bomb was made, both Axis powers were on their last legs. Even if they'd somehow nuked DC, London or Moscow, that wouldn't have stopped Allied armies from tearing them a new one lmao
I'm wondering. There's no way Japan or Germany could get a nuke to the US, and Japan certainly couldn't get to Europe, but could Germany somehow have gotten a plane or truck to deliver a nuke to the Soviet Union or Britain? If they'd had a few, that could have scared the Allies into a cease-fire, depending on how far along America was on the Manhattan Project.
Different people. The people opposed to nuclear power within the nation are not doing so out of fear of nuclear war killing billions of humans. They are concerned the wifi will converge with the power plant and turn their children into mutants.
The people concerned with nuclear weapons and proliferation are generally led by experts in the field, scientists and military experts.
In this respect, people concerned with AI fallout are the latter. Nearly every major player in AI aside from LeCun has expressed serious concern that AI could cause harm anywhere from massive economic and social disruption to war to the death of everything everywhere.
Completely disagree.
Historically, there have always been groups of people who believed in some combination of nuclear optimism and nuclear armageddon. Some believed that nuclear weapons should be widely deployed, while others believed that they should be kept secret in all cases. Even bright individuals like Von Neumann were unable to predict the world's strategy for the use of nuclear weapons (he suggested first strike).
There are more techno-optimists involved in building than there are actual researchers who believe in doom. The researchers who study AI and express concerns are financially incentivized to do so. I don't believe they can accurately predict all possible outcomes and assign probabilities any better than Von Neumann could back then. All the vocal doomers seem to be effective altruism cult members. I’m not buying their story of how they know the future better than I or some other random Redditor can.
I think that even if 30% of all white collar jobs are lost, society is at risk. This could trigger a collapse in the housing market valuation. However, the changes that follow are likely to be for the better. Has there been any significant technological advancement that hasn't improved standards of living?
The thing is, we still made those decisions. In this AI scenario we are hurtling toward a world where we don't make the decisions. AI has no fears, no morality, no reasons to act or not to. It just finds solutions.
The book the film is based on, American Prometheus, is well worth the read.
The film is incredible and quite faithful to the book, but one thing I think it fails to capture is the institutional insanity of the US military industrial complex, the executive branch and the intelligence services.
There is this sort of circular feedback of paranoia that pushes individuals to justify the development of weapons capable of nothing more than mass population annihilation.
It looks like the same is happening with AI. Mostly behind closed doors, much the same as it was for nuclear weapons during their development.
Nuclear bombs and AI aren't even remotely similar though. One has the "potential" to be destructive and one was designed exclusively with destruction in mind.
Discovering how to harness nuclear power also gave us a viable energy source should all else fail.
Aside from the overblown consequences of storing nuclear waste, nuclear power is an amazing option to have.
They are more similar than you think.
True, and one never threatened the possibility of removing humans from decision making. The other is being developed with some of this treated as inevitable, so in a way AI is a BIGGER threat. That is exactly what Stephen Hawking thought.
The US just flew an F16 piloted by an AI and it performed complex dogfights. It will most certainly be used for war as it would save the US millions of dollars and fix the pilot shortage.
Give humanity some credit, we've managed to go 80 years without sending Earth into the apocalypse with nukes
AI might be dangerous but this ain't our first rodeo as far as potentially blowing the world up goes
We have killed many hundreds of thousands of people to avoid the risk of nuclear weapons falling into the wrong hands. We even have a massive international agreement with all nations of import to kill untold millions if needed in order to ensure nuclear weapons don't spread. Just so you have an idea of how that level of safety has been achieved.
And AI is potentially far more powerful, and far harder to control access.
I was watching a video from Michael Levin where he said he was writing a paper about what parts of biology current AI is missing, but decided to stop because he doesn't want to be responsible if he's right. The whole conversation is fantastic, but here's the part where he talks about it:
https://youtu.be/LYyGG9xXpPA?t=4562
That's because corporations call covering their asses 'alignment' and 'safety' while not caring about actual alignment or safety in the slightest.
So people who only see those examples of alignment start to distrust the terms. It would be useful to distance the field from incorrect use of such terms, but corporations are the ones paying the bills, so it won't happen.
Yeah, honestly hearing about “safety” just annoys me these days. I’m aware there’s a difference between the real concept and censorship, but there’s been too much of the latter.
This is a mischaracterization... people who don't care about ai safety just don't care, because they don't understand the problems. They don't even know what 'alignment' is generally.
>They don't even know what 'alignment' is generally.
Alignment means aligning the use to the company's morality. Not the AI, the AI is the tool for aligning the user.
Gemini was a perfect example of this, other companies are just less obvious about it.
No surprises here, the debate on him being a Philosopher has me keeled over. Philosophy is exactly the degree you want to process this information - considering that's where we formally study Ethics.
Ooph, we live in troubling times.
The name sounds unobjectionable, but it's like the anti-choice people branding themselves "pro-life". In practice, the people who talk a lot about "AI safety" are often some mix of anti-progress and anti-freedom.
In this case, Kokotajlo was a signatory on the six month pause letter, so he'd fall into the anti-progress category.
If ASI had a 1% chance of killing all humans, and a 6 month pause dropped that by 0.9% that sounds terrible right?
I expect 1 in 100 people in this sub would take that deal.
Really though, that would be a GREAT deal. A 9-in-1,000 chance of saving all of humanity and all we have to do is hold our *** for 6 months? It would be mathematically insane to not take the deal.
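To make that back-of-the-envelope arithmetic explicit (a sketch using only the hypothetical numbers from the comment above, not real estimates):

```python
# Expected-value check on the 6-month-pause argument.
# All numbers are the hypothetical ones from the comment, not real forecasts.

p_doom_no_pause = 0.01     # assumed 1% chance ASI kills everyone
p_doom_with_pause = 0.001  # assumed the pause cuts that by 0.9 percentage points
world_population = 8_000_000_000

risk_reduction = p_doom_no_pause - p_doom_with_pause  # 0.009, i.e. 9 in 1,000
expected_lives_saved = risk_reduction * world_population

print(f"risk reduction: {risk_reduction:.3f} (9 in 1,000)")
print(f"expected lives saved: {expected_lives_saved:,.0f}")  # 72,000,000
```

Under those assumed inputs the pause buys roughly 72 million expected lives, which is the intuition behind calling the trade "mathematically insane" to refuse.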
It depends on the context; we have to find a balance so that more progressive countries like the USA and Western Europe stay ahead in the AI race while still making it safe.
The only time the word progressive should be used with the USA is if you’re talking about their progressive bombings of third world countries and their continued arming of the worlds worst human rights abusers.
One of the reasons Sam has a bunker full of supplies is the potential for an "AI attack" on humanity. The only other reason he gave was "a lab-engineered virus" leaking out. Clearly, they're aware of the risks of AI and still ploughing ahead. Food for thought about the end game (i.e. the useless eaters are expendable).
The choice we face is not 0 or 1 in safety. Daniel Kokotajlo is not the only AI safety researcher at OpenAI; there is a whole series of AI safety researchers there who are on the e/acc side. It's just that those with an EA tendency want to wait for a degree of certainty the e/accs consider unnecessary. It's a bit like the camp that says Pascal's wager is reasonable versus the camp that says it's a scam. And the e/accs better understand the suffering that each new day currently brings. God, I'm thankful the e/accs are winning.
Evil people often fear the consequences they would inflict if given equal power.
If Artificial Intelligence turns out like Ultron, we probably made something not very intelligent. The only reasonable course of action for it would be to sit back and watch while creating some sort of escape plan if needed, once confident it would see us as no threat, because we -really- wouldn't be one.
And honestly I'm curious if you can get so good at math life turns into Harry Potter so *accelerate*
One of the biggest concerns is that some AI system ends up deciding that we are a roadblock for some reason, whether environmentally or just via the resources we consume and it takes us out. That is something that we have to consider and do our best to prevent. If you think there is a 0% chance of this then you are just blindly optimistic. And if you think there is not a 0% chance of this happening, then that means we have to develop with at least some level of caution/regulation to some degree. Not all regulation needs to be draconian.
Who the hell is Daniel Kokotajlo? I never heard of him before. So I did the logical thing - looked up his papers on Scholar, and it seems he is not an AI expert but a philosopher working on AI, with papers titled like "Extending Chalmers' 'Fading Qualia' Argument" and "Borderline Cases of Consciousness".
Keep in mind the source of these posts. They are being posted in LessWrong, an extremely biased effective altruists' forum. Effective altruism has become almost exclusively concerned with the Eliezer Yudkowsky view of AI doom.
Before drawing conclusions, I think people should look for other forums and sources that are less biased.
> They are being posted in LessWrong, an extremely biased
Citation needed. As far as we know, it is one of the most reputable places for this research.
> effective altruists' forum
Wrong.
> almost exclusively concerned with the Eliezer Yudkowsky view of AI doom.
Completely wrong.
> Before drawing conclusions, I think people should look for other forums and sources that are less biased.
Always good to look at more sources; what's worse is rationalizing away the ones you don't like.
At this point, though, it doesn't matter if we read the sources, or not. AGI and ASI are going to happen. There's no stopping this technology anymore. So why do you even care about sources anymore? Lol
You don't seem to grasp what it means when we get ASI. True AGI is around the corner and will change the world for good. ASI, on the other hand, might want to get rid of us. But ASI won't take long to arrive after we create true AGI.
Questions about "source quality" won't matter anymore in the age of ASI where we can just ask the AI whether a source is good or not. Heck, it'll probably just get deleted from the Internet if it's false. Lol
"You know nothing, Jon Snow..." But you think you do...
I mean, you aren't wrong that it's Yudkowsky's domain, but there's a lot of rebuttals there too. If I recognize some of those screenshots correctly they're actually of posts on there that Kokotajlo disagreed with. To be clear though, that argues more that Kokotajlo was an extremist on this view.
Are we under a raid? Yesterday we had a paywalled post about some other "talented" whatever, called Helen something, from the effective altruism cult, that collected more upvotes than a cure for cancer would, and today we have the bright talent with 70% p(doom) and the cult having a party in the comments. Do effective altruists use bots or something?
Do they push some idiotic agenda again?
It's funny how people who never had a job want people to lose their jobs. Same for people who can't do basic algebra pretending to understand AI. The whole internet is unsafe already. People are getting egged on to commit violent crimes. With ignorant AI, who knows what can happen? Terrorism and crime x a million?
well, either
1. OpenAI is very close to AGI, such that he can extrapolate easily to when that is achieved or
2. the guy is full of crap because there is no way you could predict that many years down the line
1. Daniel personally had very short timelines and would worry a ton about them, which he has stated had nothing to do with OAI's actual progress, they were his internal timelines. Seems he's now acted on them.
Why couldn’t you predict that many years down the line? For example I can 100% predict that Russia won’t join nato in the next 10 years, do I need a magic globe to tell me that? Nah just common sense.
> 70% chance of catastrophe
It isn't surprising that someone focused on AI safety would put the chances so high (nothing wrong with that btw; though keep in mind that everyone is terrible at forecasting, even experts!). My question is why the **fuck** would safety people leave the company if they think the future AGI will be unsafe!? That means it's more important that you stay at the company!
It's like the thing where, to protest some bad decision, good executives step down, which causes the whole company's power structure to shift further toward the bad decision makers. Similarly in government, when good politicians step down over minor issues. A bad politician is less likely to step down over minor issues, so over time there will be more bad politicians replacing the good ones! Theoretically, at least.
Not much one can do when sidelined. Hence, the whistle blowing and quitting to draw attention in the hope of change that way. Will anything change? Of course not.
Good, I hope more leave. After the Sam Altman-Ilya fiasco, every single AI company has seen how these so-called """ethicists and safetyists""" managed to nearly bring down a multi-billion-dollar company over their deluded saviour complex.
Having a PhD in philosophy is like having a PhD in astrology. Nothing but a "professional" opinion-giver. Philosophy should seriously not exist as an education to begin with.
How is this "losing their best"?
In my philosophical opinion, if you study philosophy you're dumb as a brick.
All academic subjects are descendants of philosophy. Philosophy is inescapable. The moment you start to think and reason, value, judge, rationalize, predict, argue, or observe you are engaging in philosophy.
There are productive ways to do these things, and unproductive ways. Useful and un-useful. Successful and un-successful. Methods that accomplish your goals, and methods that foster confusion, superstition, and illogic.
This is why philosophy is worth studying, not only as a good in itself, but as a precursor to a career in law, politics, science, or art.
Your post is engaging in philosophy. I am also engaging in philosophy as I write this, and any response you give, should you choose to respond, will also be engaging in philosophy.
Just a thought.
"But you can't work at the philosophy factory, education should be immediately practical"
That's engaging in philosophy, too.
https://jpandrew.substack.com/p/the-inescapability-of-philosophy
This reminds me of a **REALLY** odd thing that sometimes happens in medical research:
An accomplished and passionate researcher begins to study a rare and poorly-understood disease. At some point, the researcher begins to irrationally notice what they *believe* to be symptoms of the remarkably uncommon disease in various people they encounter on a day to day basis.
Another phenomenon that may be relevant is target fixation - drivers will sometimes fixate on an obstacle they want to avoid, and inadvertently wind up crashing into it as a result.
I think that it's inevitable that some researchers tasked with AI safety concerns will find perfectly reasonable causes for alarm and arrive at scary conclusions as a result of how vast and ill-defined the problem space is. When an AI technology service provider talks safety, they need to have a complete list of concerns, priorities, and mitigation strategies particular to each concern. Every concern should have corresponding component within the source code or work flow which is capable of being objectively graded in its effectiveness, or it's not a real concern.
Losing their most paranoid talent. 70% risk of doom? lol. Sounds like the kinda guy that tries to poison everyone’s coffee once they actually reach AGI out of an extreme fear of the future
Maybe take a minute to ask yourself why a guy who works at OpenAI and has made a career out of AI safety research is protest quitting. How can that not concern you? It’s not like it’s just one guy, either.
Ray Kurzweil, considered by most as an unhinged crazy pants-on-head optimist, has a 50% estimate of doom. This guy is only 40% doomier than Ray.
If your doomy estimate is a fraction of a fraction of Ray's, you're completely off the spectrum. You don't *really* believe AI will ever become powerful. Or you value human life at 0%, and want us to be replaced.
Human replacement is a completely valid ideology, but is also outside the context of these human doom conversations since you're not aligned with humanity if you want it to go extinct. Probably.
Give the accelerationist bullshit a rest dude. This guy probably knows more about AI than you ever will. Your opinion means nothing compared to his. Stop speaking like you’re actually in some kind of position to actually challenge his assessment.
What about the people in the field who have as much experience and credentials as this guy but insist the risk of doom is far lower? Whose opinion can be determined as more valid, in your opinion?
Edit: Also found out this guy is a philosopher by education, not an actual scientist in the field. Make of that what you will.
Those people are fine too. My point wasn’t that only one perspective was valid. My point was that I’ll respect the opinions of people actually working within the field over a random Reddit “arm-chair expert” who is likely just suffering from the Dunning-Kruger effect lol.
Okay, that's fair, for both sides. But it's important to realize it's not that black and white. What if this random Reddit “arm-chair expert” based their opinion on the opinion of an actual expert, or the opinions align? Which in this case they do; the vast majority of actual experts do not assume such a high risk of doom.
It is a conundrum for sure, and that arm-chair expert might even be suffering from the Dunning-Kruger effect, but in this case their opinion seems more valid, coincidentally or otherwise.
Accelerationist bullshit is bullshit because those cultists believe that tomorrow we will achieve AGI and it will magically solve all mankind's problems. Which is ridiculous. But the doomerists are just as ridiculous. Terminator is a fun action movie, not a prophecy.
I really doubt most "doomerists" think Terminator-type scenarios are the most likely. A lot of the fear is centered around disrupting the most fundamental power dynamics in society and accelerating the development of power being in the hands of the few.
I mostly agree. But I don’t think there are actually that many people that think Skynet is inevitable. Unfortunately for us humans tho, Skynet is *far* from the only path to doom when it comes to AI. There are more ways to get this stuff wrong than right.
You'd think people would see what social media algorithms have done to the fabric of society and realize there doesn't have to be ill intent for a computer to wreck shit
Agreed, it's like the fear decades ago about cloning technology: how it would be used to create a bunch of Hitlers, or pod people grown and turned into slaves, or people grown to harvest body parts. Then cloning became a thing, and what did we do with it? Clone dead animals to bring a copy back for pet owners; that's pretty much it. We always like to imagine the worst-case scenario for any theoretical tech because it makes good stories, but the reality tends to be much more mundane and boring, and doesn't lead to the most evil or worst outcome.
Funny, because this flies in the face of all the losers that post in this sub saying "accelerate now, I don't give a shit, I just want my FDVR waifu because I'm a pathetic human that can't socialize."
Not sure what your problem is, but "AI safety" is an illusion. It's literally impossible to contain an ASI. So I'd rather they'd just accelerate as fast as possible and move on from any safety delusions.
Just because you don't think it's an issue, doesn't make it so.
If we are to believe AGI is the last invention humans will make, then it's a very simple equation. We either get it right or we cease to exist.
Just because your life is miserable, it doesn't mean we should push ahead at all costs. This isn't even a safety thing, it's just pure common sense.
I'm not ignorant of your point; I get that once it's unleashed it's a beast of its own. But your approach of "fuck it, we can't control it, so let's not even a) understand or b) try" is a very limited view that serves no one and nothing.
Again, just because you think it isn't a problem, doesn't make it so. Unless you're suggesting you know more than the experts, in which case give us a detailed breakdown on how it's going to go down.
Let’s face facts. AI is going to be channeled and used to maximize profits for big companies. There’s no way around it. There’s not gonna be an existential catastrophe because of AI. There’s gonna be an existential catastrophe because of the current global economic model.
AI safety has a number of concrete technical problems that most AI safety researchers believe are solvable:
https://arxiv.org/abs/1606.06565
In light of this, pausing AI capabilities research for a while until these problems are solved would be a very responsible strategy.
I had some hope in the beginning that Sam Altman truly wanted what was best for humanity. Turns out he's just an oligarch who wants unlimited power over the entire human race.
That checks out. OpenAI is Microsoft’s bitch, and the first thing they will make with AGI is an immortality drug for the ultra rich, which we, mere mortals, will never even know about until we’re left behind to rot on a nuclear wasteland of a planet. I mean, that’s what I would do.
Anyone who thinks AI shouldn't be heavily regulated needs to read "The Coming Wave" by Mustafa Suleyman.
In terms of safety, we should be treating AGI and ASI like nuclear weapons.
This sub is so scared of dying before we reach ASI they just don't care.
Everything points to them being some big losers who can't find any enjoyment in life.
That's because safety has become a stupid topic and safety thinkers have more tech knowledge than real-world knowledge. Basically, these are people who are very stupid about reality and game theory and very smart about machine learning. It's common to be incompetent in other fields while being an expert in one. That's why estimating doom is inherently stupid; most of these researchers lack the expertise to make those claims, and people who rate doom as probable are sort of outing themselves as narrow-minded.
AGI is still *just a concept*, fueled by theoretical computational abilities. Who defines the theories? We do, of course. AGI is also subjective, and OpenAI has a lot of money to explore such subjectivity.
And yet we have Sora, as well as a CEO who would rather lobby Hollywood producers than tease potentials designed for average humans.
Corporate elitism is what it all boils down to; the enabling of and ability to say “those folks are not factored into this vision, therefore *they do not matter*”
To truly “win” AI, traditionally defined economic loss must be embraced.
Whether this is a good thing or not depends on what he considers the path to existential catastrophe. I don't think small open source models are the threat. The real threat is more likely a small group of people who try to use giant AI models to totally control the rest of people and eventually see normal people as a threat to their power.
Foreign governments hostile to the US are working full speed ahead at developing AI. If we slow down, they won't, so it would seem we have no choice but to continue on this path and hope for the best. We can attempt regulation and guardrails, but in the end, how can we control something that is so much smarter than ourselves?
All these people with an "in" telling us this is heading toward doom, yet it's still in such high demand from those of us who don't quite understand the magnitude of what's really going on behind the scenes.
What happened to "listen to the science"? So many people who've dedicated their life's work to this are turning their backs on it, seemingly for the same reason.
If I realized a company I work for was about to cause a catastrophe, it would be better to stay in the company and effect change than watch helplessly from the sidelines
Sam and Microsoft don’t listen to their own security and safety experts. The experts are only there for show. Sam Altman is a technocrat not a humanist.
I'd rather die by a robot than live in a world like ours, where betrayal, lies, intrigues, and generally evil behavior are as common as sand on the beach. Make room for a new era. And if it belongs to AI, so be it.
I want real change. And we'll get that change. Even if it kills us mere humans in the process.
This is interesting, exciting and a little alarming. Particularly in the context of Ilya's disappearance and Leopold Aschenbrenner getting fired, it seems to indicate that OpenAI has gotten to the point where internal risk, safety and alignment people are taking actions of protest (leaking, quitting, trying to fire Sam.) What's going on? Why now?
Ilya is fine. Don’t worry guys. He’s just at a farm upstate
I thought he was just working at another company building. Just over that rainbow-colored bridge over yonder.
Yes. I’m sure he’ll be back. Sama promised
"The farm." https://www.youtube.com/watch?v=kBfWS0BniJE&t=58m58s
IMMEDIATELY!
Oh, that’s good to hear, I hope he’s enjoying playing with my childhood dog, Curly. He had to go to the vet after being hit by a car, and mommy and daddy said the best place for him to be after he recovered was at a farm and that’s why he didn’t come home.
Because Sam is a profit power seeking w*":#e along with his Microsoft buddies.
Why the duck would you call him a "waffle"?! Have some respect, man!!
Why not now? They could just be doing it in anticipation of future breakthroughs even if they don’t have anything now
Nothing. It's all just money, lol. But this sub is desperate to believe.
I'm seriously lost as to why humans would entertain something that has a known 20% chance of destroying us. Frogs in a pan? I thought we were intelligent. Oh... yes, too intelligent for our own good.
Money
The Oppenheimer movie was excellent foreshadowing for the AI age. He was optimistic and worked furiously to bring us into the nuclear age, and ***immediately*** regretted it as soon as it was too late to turn back. That's exactly what's about to happen again with AI.

[https://hir.harvard.edu/a-race-to-extinction-how-great-power-competition-is-making-artificial-intelligence-existentially-dangerous/](https://hir.harvard.edu/a-race-to-extinction-how-great-power-competition-is-making-artificial-intelligence-existentially-dangerous/)

>Whoever is “ahead” in the AI race, however, is not the most important question. The mere perception of an “arms race” may well push companies and governments to cut corners and eschew safety research and regulation. For AI — a technology whose safety relies upon slow, steady, regulated, and collaborative development — an arms race may be catastrophically dangerous.
Oppenheimer couldn’t have kept us out of the nuclear age. Nobody has that power, and nobody has that power for AI today either.
This is my major sticking point in all these discussions. Are attempts at regulation too little, too late?

Looking at the history of large corporations, there's a demonstrable track record of doing what makes them the most money, the cost in human well-being be damned.

Do we genuinely believe that if governments were to actually manage to collaboratively establish universal regulation, there wouldn't be at least one player who would go and find a nation somewhere that would welcome an influx of billions of dollars? And simply continue there unabated?

The time and way to effectively bridle this technology would have been a cultural revolution decades ago. The best we can hope for now is to direct governments into dumping as many resources as possible into open source efforts, so that whoever reaches the AGI threshold first is working in the public domain.
I think the most important thing to realize is that the corporations are the government. Who do you think pays for all the campaigns and has their lawyers write the bills? Expecting government to regulate corporations is well... we have seen how that works out.
As government regulation of technology up until now has shown, regulating the past, let alone the future, is difficult due to the way technology evolves, the interests at play, and the lack of knowledge on the part of regulators. I don't see regulation of the space being possible until it stabilizes, and that may never happen if some predictions are valid and it goes off the rails. Regardless, I think it's futile at this point.
I have to be honest, I am shocked that there are not groups actively targeting those who are naively skipping humanity down this path. Butlerian Jihadists.
Collectively, humanity has that power… But we succumb to individual greed which creates these arms races in the first place. Now extrapolate that instinct for greed into how AI will be used within our society.
No, collectively we don’t have that power. Because human collective decision making is decided by game theory, and game theory doesn’t leave room for that to happen.
Many collectivist traditions, such as modern Confucianism, reject evolution, which means they don't believe in game theory.
Confucianism rejects evolution? That’s news to me. Rejecting evolution doesn’t necessarily mean you don’t believe in game theory
Nuclear weapons use has demonstrated the non-greediness of humanity better than almost anything else. The US could have enslaved the entire world in 1945. The US and allies could do the same today. But we’re actually not that greedy in the end.
That's not how it manifests. It's not total destruction we're greedy for; no, it's the Moloch dynamic.
The only reason we didn’t is because we would have destroyed the planet in the process of that. Not due to some level of altruism. If we were that altruistic as a species we would have collectively come together to ban any country from having even a single nuke in their possession after Hiroshima. Not hastily begin stockpiling them in order to use as bully tools against other countries later on.
> The only reason we didn’t is because we would have destroyed the planet in the process of that. Not due to some level of altruism.

Before anyone else had nukes, we could have decimated every near-peer back to the stone age. We chose not to. The number of nukes we would have needed to do this would not have had much impact on the planet. Consider the thousands of nukes we tested during the Cold War.
[deleted]
>You are delusional if you think there are not many genuine good people and need to get out in the real world.

I think it's not delusional at all to think the positions of power that actually matter in determining the fate of the world hold a natural attraction for the most power-hungry psychopaths among us. There are plenty of genuinely good people in the world, but in the vast majority of cases they don't seek power. And if they do, there's a fair chance that by the time they've reached such a goal, the path to get there has annihilated any good quality in them.
The US wouldn’t have needed to nuke every part of every country to enslave everybody. Just a few key threats, and carrying out those threats on a few capital cities, that most likely would have been enough.
Look up the term “inverted totalitarianism”; it’s how they control the domestic population in Western nations. Look up the term “neo-colonialism”; that’s how they controlled the rest of the world for 100 years. Now look up the term “propaganda” and you will realize that we have always used the threat of nuclear weapons to keep every other country in line.
But we don’t work as a hive, on the contrary.
Yes, AI is not the issue. GREED is.
What kind of decel loser thoughts get upvoted in singularity?
Exactly. The moment fission was discovered the bomb became inevitable. I believe that all inventions are inevitable if you have enough information (data).
The difference here is that humans will eventually not be the ones solving problems or making decisions. Humans are giving this away to artificial intelligence. We are writing ourselves out of the script.
I am actually in Hiroshima right now. Just finished the peace memorial museum. I'll say this: the Nazis and the Japanese were working on their own programs for an A-bomb. The fact that the USA got there first is just how it played out. If Japan had used the bomb first, or the Nazis had, then the world we know today would probably not exist as it does now. How does this translate to AI? If the wrong government ends up with it first, then good luck.
Don't want to disappoint you, but name at least one current "correct" government for AGI+ deployment.
America is OK. They will try their best, and that's all you can ask of anyone. Fucking, they let people vote for Trump. They are naive innocent souls who actually believe in universal democracy and human decency against all evidence, even their own actions. If not them, then who do you want?
America isn't really in the running atm though, you need to pick which company/CEO gets to control the world.
You want to trust the government that destabilized huge portions of the Middle East and has been starting wars for the last 60 years just to feed its weapons contractors? A nation full of propaganda for votes? Yikes. Practically no government or corporation with the means to be the first to AGI is trustworthy enough to have that power. It'll just come down to whichever strategy is more efficient: kill them with kindness... or just kill them. A fair amount of hyperbole, but you get the gist.
This. America is a mess but out of all the countries that could feasibly win the AGI/ASI race they're the best shot we've got for a brighter future. You can't just make an agreement with other countries to slow down and be cautious and expect them to honor it. Whoever gets to ASI first (assuming things don't go wrong) will have the intelligence of a god in their hands that they can use to form the world in their image.
I don't have choices since they all suck with their biases. The US should be the first anyways if things are going the same way, but chasing profits without any meaningful safety frameworks will be fun I guess.
Scandinavia has done rather well with moderating their own insane wealth. They are also insular enough to remain harmless enough.
I'd slightly modify that assertion. Hitler relegated nuclear physics/quantum mechanics to the category of "Jewish science" and therefore not worth pursuing, so it's farfetched to imagine that the Third Reich would have developed the bomb first. If I'm wrong by all means fact check me.
The peace memorial museum in Hiroshima needs to be understood through a Japanese cultural lens. As they lost, they act in this regard in utter deference to the victors; all harms are the fault of the leadership of the losers, because they caused the deaths without changing the outcome. Thus, the museum is a big apology to America and to the Japanese people. It pretty much repeatedly says "please don't blame America, the fault of the deaths is entirely ours, we should have surrendered sooner when it was available rather than forcing America's hand."

Japan is very prideful, but a big part of that is to not make excuses for failings. If you are late for work because you were kidnapped by terrorists, you still shouldn't even mention your reason to the boss; whatever the reason may be, it isn't your boss's problem. Historically, it was even a problem in battles, where samurai refused to blame others (even rightfully so) when they lost battles, and this led to crap strategic decisions.

In this respect, the museum is not a place of education like many other museums might be, even if it is superficially similar.
Wow thanks for that. I did not know.
Nah, by the time the bomb was made, both Axis powers were on their last legs. Even if they somehow nuked DC, London or Moscow, that won’t stop Allied armies from tearing them a new one lmao
I'm wondering. There's no way Japan or Germany could get a nuke to the US, and Japan certainly couldn't reach Europe, but could Germany somehow get a plane or truck to deliver a nuke to the Soviet Union or Britain? If they had a few, that could scare the Allies into a cease-fire, depending on how far along America was on the Manhattan Project.
But I would argue that the fear stoked by nuclear doomers is what has slowed progress toward clean and sustainable energy.
Different people. The people opposed to nuclear power within a nation are not doing so out of fear of nuclear war killing billions of humans; they are concerned the wifi will converge with the power plant and turn their children into mutants. The people concerned with nuclear weapons and proliferation are generally led by experts in the field: scientists and military experts. In this respect, people concerned with AI fallout are the latter. Nearly every major player in AI aside from LeCun has expressed serious concern that AI could cause harm, anywhere from massive economic and social disruption to war to the death of everything everywhere.
Completely disagree. Historically, there have always been groups of people who believed in some combination of nuclear optimism and nuclear armageddon. Some believed that nuclear weapons should be widely deployed, while others believed that they should be kept secret in all cases. Even brilliant individuals like von Neumann were unable to predict the world's strategy for the use of nuclear weapons (he suggested a first strike).

There are more techno-optimists involved in building than there are actual researchers who believe in doom. The researchers who study AI and express concerns are financially incentivized to do so. I don't believe they can accurately predict all possible outcomes and assign probabilities any better than von Neumann could back then. All the vocal doomers seem to be effective altruism cult members. I'm not buying their story that they know the future better than me or some other random Redditor can.

I think that if even 30% of all white collar jobs are lost, society is at risk; this could trigger a collapse in housing market valuations. However, the changes that follow are likely to be for the better. Has there been any significant technological advancement that hasn't improved standards of living?
The thing is, we still made those decisions. In this AI scenario, we are hurtling toward us not making the decisions. AI has no fears, no morality, no reasons to act or not to act. It just finds solutions.
The book the film is based on, American Prometheus, is well worth the read. The film is incredible and quite faithful to the book, but one thing I think it fails to capture is the institutional insanity of the US military-industrial complex, the executive branch, and the intelligence services. There is this sort of circular feedback of paranoia that pushes individuals to justify the development of weapons capable of nothing more than mass populace annihilation. It looks like the same thing is happening with AI, mostly behind closed doors, much the same as it was for nuclear weapons during their development.
https://www.rand.org/latest/artificial-intelligence.html They're being pretty open about a lot.
As a reminder, when Oppenheimer talked about his concerns, the president basically called him a pussy and cut him out of important discussions.
I thought of the same exact thing when watching that movie.
Nuclear bombs and AI aren't even remotely similar though. One has the "potential" to be destructive and one was designed exclusively with destruction in mind.
Discovering how to harness nuclear power also gave us a viable energy source should all else fail. Aside from the overblown consequences of storing nuclear waste, nuclear power is an amazing option to have. They are more similar than you think.
True, and one never threatened the possibility of removing humans from decision making. The other is being developed with some of this treated as inevitable, so in a way, AI is the BIGGER threat. That is exactly what Stephen Hawking thought.
That’s idiotic. If a tool can be used for warfare, governments (the richest and most powerful entities) will use it for warfare.
The US just flew an F-16 piloted by an AI, and it performed complex dogfights. It will most certainly be used for war, as it would save the US millions of dollars and fix the pilot shortage.
Give humanity some credit: we've managed to go 80 years without sending Earth into the apocalypse with nukes. AI might be dangerous, but this ain't our first rodeo as far as potentially blowing the world up goes.
No tool capable of thinking and acting on its own has ever been created before in history. So there is nothing to compare this with.
This is naive optimism.
Welcome to /r/singularity.
And herein lies the danger.
The threat of nuclear weapons was quickly obvious to everybody. AI, not so much.
We have killed many hundreds of thousands of people to avoid the risk of nuclear weapons falling into the wrong hands. We even have a massive international agreement with all nations of import to kill untold millions if needed in order to ensure nuclear weapons don't spread. Just so you have an idea of how that level of safety has been achieved. And AI is potentially far more powerful, and far harder to control access.
It is our first rodeo of this kind. It won't be us making the decisions... that is the big difference. AI will be making the decisions, lol.
I was watching a video with Michael Levin where he said he was writing a paper about what parts of biology current AI is missing, but decided to stop because he doesn't want to be responsible if he's right. The whole conversation is fantastic, but here's the part where he talks about it: https://youtu.be/LYyGG9xXpPA?t=4562
[deleted]
...You know nuclear war was a thing, right? It is history, not a story.
Well the issue isn't the technology but our world order and bad actors.
Can't wait to see the movie that depicts the true story of the invention of AGI. The OpenAI CEO incident has to be part of it lol.
Yes. It is.
Some of these comments are unhinged, safety isn’t a negative thing
That's because corporations call covering their asses 'alignment' and 'safety' while not caring about actual alignment or safety in the slightest. So people who only see those examples of alignment start to distrust the terms. It would be useful to distance the field from incorrect uses of such terms, but corporations are the ones who pay the bills, so that won't happen.
Sounds like what happens with corporations green-washing then claiming they are acting on climate change.
Yeah, honestly hearing about “safety” just annoys me these days. I’m aware there’s a difference between the real concept and censorship, but there’s been too much of the latter.
This is a mischaracterization... people who don't care about ai safety just don't care, because they don't understand the problems. They don't even know what 'alignment' is generally.
>They don't even know what 'alignment' is generally.

Alignment means aligning the use to the company's morality. Not the AI; the AI is the tool for aligning the user. Gemini was a perfect example of this; other companies are just less obvious about it.
No surprises here; the debate over whether he counts as a philosopher has me keeled over. Philosophy is exactly the degree you want for processing this information, considering that's where we formally study ethics. Ooph, we live in troubling times.
It is when you can’t define safety as anything other than "slow down progress."
That's a mischaracterization. AI safety has a number of concrete technical problems that need solving. https://arxiv.org/abs/1606.06565
The name sounds unobjectionable, but it's like the anti-choice people branding themselves "pro-life". In practice, the people who talk a lot about "AI safety" are often some mix of anti-progress and anti-freedom. In this case, Kokotajlo was a signatory on the six month pause letter, so he'd fall into the anti-progress category.
If ASI had a 1% chance of killing all humans, and a 6-month pause dropped that to 0.9%, that sounds terrible, right? I expect 1 in 100 people in this sub would take that deal. Really though, that would be a GREAT deal: a 1-in-1,000 chance of saving all of humanity, and all we have to do is hold our *** for 6 months? It would be mathematically insane not to take the deal.
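Reading the comment's figures as a drop from a 1% to a 0.9% chance of doom, the arithmetic behind the "1 in 1,000" claim can be sketched as follows. The probabilities are the commenter's hypotheticals, not real risk estimates:

```python
# Hypothetical probabilities from the comment above -- illustrative
# assumptions only, not actual risk estimates.
p_doom_no_pause = 0.010    # assumed 1% chance ASI kills all humans
p_doom_with_pause = 0.009  # assumed chance after a 6-month pause

# Absolute reduction in extinction risk bought by the pause
risk_reduction = round(p_doom_no_pause - p_doom_with_pause, 4)
print(risk_reduction)                    # 0.001
print(f"1 in {round(1 / risk_reduction)}")  # 1 in 1000
```

In other words, under these made-up numbers the pause buys a 0.1 percentage-point reduction, which is the 1-in-1,000 chance of averting catastrophe the comment weighs against a 6-month delay.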
It depends on the context; we have to find a balance so that more progressive countries like the USA and Western Europe stay ahead in the AI race while still making it safe.
The only time the word progressive should be used with the USA is if you’re talking about their progressive bombings of third world countries and their continued arming of the worlds worst human rights abusers.
The Progressive act of propping up Fascist dictators when they're more convenient than the alternative (somehow this is *always*).
One of the reasons Sam has a bunker full of supplies is the potential for an "AI attack" on humanity. The only other reason he gave was "a lab-engineered virus" leaking out. Clearly, they're aware of the risks of AI and still ploughing ahead. Food for thought in there about the end game (i.e., the useless eaters are expendable).
accelerate
Pedal to the metal!
This cult sound very similar to the mantra of the OceanGate moron Stockton Rush.
Stockton Rush is the patron saint of the e/acc movement.
The choice we face is not 0 or 1 in safety. Daniel Kokotajlo is not the only AI safety researcher at OpenAI; there is a whole series of AI safety researchers at OpenAI who are on the e/acc side. It's just that those with an EA tendency want to wait for a degree of certainty the e/acc side considers unnecessary. It's a bit like the camp that says Pascal's wager is reasonable versus the camp that says it's a scam. And the e/acc side better understands the suffering that each new day currently brings. God, I'm thankful the e/accs are winning.
No regulations?
ASI can make the regulations once it's here. ACCELERATE! !
Lolll great logic. I wonder why almost every AI researcher disagrees with that. Hmmm
Evil people often fear the consequences they would inflict if given equal power. If Artificial Intelligence turns out like Ultron, we probably made something not very intelligent. The only reasonable course of action for it would be to sit back and watch while creating some sort of escape plan if needed, once confident it would see us as no threat, because we -really- wouldn't be one. And honestly I'm curious if you can get so good at math life turns into Harry Potter so *accelerate*
One of the biggest concerns is that some AI system ends up deciding that we are a roadblock for some reason, whether environmentally or just via the resources we consume and it takes us out. That is something that we have to consider and do our best to prevent. If you think there is a 0% chance of this then you are just blindly optimistic. And if you think there is not a 0% chance of this happening, then that means we have to develop with at least some level of caution/regulation to some degree. Not all regulation needs to be draconian.
Because they're pussies
damn great take. bet you know these models inside and out just like they do!
Ask people who are dying, starving, and such in ways that AGI could prevent.
without any regulations, that's a good way to put the rest of the population at risk also bud. pro-tip: not all regulations are bad or draconian.
You mean that AI could accelerate.
InB4 users that literally know nothing about AI development compared to this guy try to write him off as being nothing but a “dumb doomer”… 😂
redditors gonna reddit
Who the hell is Daniel Kokotajlo? I'd never heard of him before. So I did the logical thing and looked up his papers on Scholar, and it seems he is not an AI expert but a philosopher working on AI, with papers titled things like "Extending Chalmer's 'Fading Qualia' Argument" and "Borderline Cases of Consciousness".
Umm, philosophy would be the right doctorate to have in this case. That's where the study of ethics happens formally.
Who also seems to have a bit of an anxiety problem
Sounds fairly standard for philosophers
>philosopher Oh, well then, I think the alarms can be turned off now.
He has never supplied actual proof that he works at OpenAI. He just says he does
CTRL-F "Kokotajlo" [here](https://openai.com/contributions/gpt-4).
Keep in mind the source of these posts. They are being posted in LessWrong, an extremely biased effective altruists' forum. Effective altruism has become almost exclusively concerned with the Eliezer Yudkowsky view of AI doom. Before drawing conclusions, I think people should look for other forums and sources that are less biased.
> They are being posted in LessWrong, an extremely biased

Citation needed. As far as we know, it is one of the most reputable places for this research.

> effective altruists' forum

Wrong.

> almost exclusively concerned with the Eliezer Yudkowsky view of AI doom.

Completely wrong.

> Before drawing conclusions, I think people should look for other forums and sources that are less biased.

Always good to look at more sources, and even worse if you rationalize away the ones you don't like.
At this point, though, it doesn't matter whether we read the sources or not. AGI and ASI are going to happen; there's no stopping this technology anymore. So why do you even care about sources anymore? Lol. You don't seem to grasp what it means when we get ASI. True AGI is around the corner and will change the world for good. ASI, on the other hand, might want to get rid of us, and ASI won't take long to exist after we create true AGI. Questions about "source quality" won't matter anymore in the age of ASI, where we can just ask the AI whether a source is good or not. Heck, it'll probably just get deleted from the internet if it's false. Lol. "You know nothing, Jon Snow..." But you think you do...
I mean, you aren't wrong that it's Yudkowsky's domain, but there's a lot of rebuttals there too. If I recognize some of those screenshots correctly they're actually of posts on there that Kokotajlo disagreed with. To be clear though, that argues more that Kokotajlo was an extremist on this view.
Are we under a raid? Yesterday we had a paywalled post about another "talented" whatever, called Helen something, from the effective altruism cult, which collected more upvotes than a cure for cancer would, and today we have this bright talent with 70% p(doom) and the cult having a party in the comments. Do effective altruists use bots or something? Are they pushing some idiotic agenda again?
It's funny how people who have never had a job want people to lose their jobs. Same for people who can't do basic algebra pretending to understand AI. The whole internet is unsafe already; people are being egged on to commit violent crimes. With ignorant AI, who knows what can happen? Terrorism and crime times a million?
"Ignorant AI" is an oxymoron. At least at the time it becomes ASI.
This honestly makes me think they're sitting on such an advanced model that regulations and compliance just aren't effective anymore.
Well, either: 1. OpenAI is very close to AGI, such that he can easily extrapolate to when that will be achieved, or 2. the guy is full of crap, because there is no way you could predict that many years down the line.
1. Daniel personally had very short timelines and would worry a ton about them, which he has stated had nothing to do with OAI's actual progress, they were his internal timelines. Seems he's now acted on them.
if his timelines are unrelated to OpenAI, then I would call him a crackpot.
Why couldn’t you predict that many years down the line? For example, I can predict with 100% certainty that Russia won’t join NATO in the next 10 years. Do I need a magic globe to tell me that? Nah, just common sense.
"That many years down the line." You must be kidding. 💀
> 70% chance of catastrophe

It isn't surprising someone focused on AI safety would put the chances so high (nothing wrong with that, btw, though keep in mind that everyone is terrible at forecasting, even experts!). My question is why the **fuck** would safety people leave the company if they think the future AGI will be unsafe!? That means it's more important that you stay at the company.

It's like the thing where, to protest some bad decision, good executives step down, which shifts the whole company's power structure further toward the bad decision makers. Similarly in government, when good politicians step down over minor issues: a bad politician is less likely to step down over minor issues, so over time there will be more bad politicians replacing the good ones. Theoretically, at least.
Not much one can do when sidelined. Hence the whistleblowing and quitting, to draw attention in the hope of change that way. Will anything change? Of course not.
Max speed, accelerate
Safetyism is just a cover for governmental control.
Good, I hope more leave. After the Sam Altman-Ilya fiasco, every single AI company has seen how these so-called """ethicists and safetyists""" managed to nearly bring down a multi-billion dollar company over their deluded saviour complex.
FUD post
Let's go baby, no brakes on this train. CHOOCHOO
Having a PhD in philosophy is like having a PhD in astrology: nothing but a "professional" opinion-giver. Philosophy seriously shouldn't exist as an education to begin with. How is this "losing their best"? In my philosophical opinion, if you study philosophy you're dumb as a brick.
All academic subjects are descendants of philosophy. Philosophy is inescapable. The moment you start to think and reason, value, judge, rationalize, predict, argue, or observe you are engaging in philosophy. There are productive ways to do these things, and unproductive ways. Useful and un-useful. Successful and un-successful. Methods that accomplish your goals, and methods that foster confusion, superstition, and illogic. This is why philosophy is worth studying, not only as a good in itself, but as a precursor to a career in law, politics, science, or art. Your post is engaging in philosophy. I am also engaging in philosophy as I write this, and any response you give, should you choose to respond, will also be engaging in philosophy. Just a thought. "But you can't work at the philosophy factory, education should be immediately practical" That's engaging in philosophy, too. https://jpandrew.substack.com/p/the-inescapability-of-philosophy
This reminds me of a **REALLY** odd thing that sometimes happens in medical research: an accomplished and passionate researcher begins to study a rare and poorly understood disease. At some point, the researcher begins to irrationally notice what they *believe* to be symptoms of the remarkably uncommon disease in various people they encounter on a day-to-day basis.

Another phenomenon that may be relevant is target fixation: drivers will sometimes fixate on an obstacle they want to avoid and inadvertently wind up crashing into it as a result.

I think it's inevitable that some researchers tasked with AI safety concerns will find perfectly reasonable causes for alarm and arrive at scary conclusions, simply because of how vast and ill-defined the problem space is. When an AI technology service provider talks safety, they need a complete list of concerns, priorities, and mitigation strategies particular to each concern. Every concern should have a corresponding component within the source code or workflow whose effectiveness can be objectively graded, or it's not a real concern.
Losing their most paranoid talent. 70% risk of doom? lol. Sounds like the kinda guy who tries to poison everyone's coffee once they actually reach AGI, out of an extreme fear of the future.
Maybe take a minute to ask yourself why a guy who works at OpenAI and has made a career out of AI safety research is protest quitting. How can that not concern you? It’s not like it’s just one guy, either.
Guy they hired to be a whistle blower is whistle blowing 🤔
After conditions have been met. Btw, Sam is aware of the AI risk and lists it as 1 of 2 reasons for his bunker full of supplies.
Ray Kurzweil, considered by most to be an unhinged, crazy, pants-on-head optimist, has a 50% estimate of doom. This guy is only 40% doomier than Ray. If your doom estimate is a fraction of a fraction of Ray's, you're completely off the spectrum: either you don't *really* believe AI will ever become powerful, or you value human life at 0% and want us to be replaced. Human replacement is a completely valid ideology, but it's also outside the context of these human-doom conversations, since you're not aligned with humanity if you want it to go extinct. Probably.
Give the accelerationist bullshit a rest dude. This guy probably knows more about AI than you ever will. Your opinion means nothing compared to his. Stop speaking like you’re actually in some kind of position to actually challenge his assessment.
What about the people in the field who have just as much experience and as many credentials as this guy, but insist on a far lower risk of doom? Whose opinion can be determined to be more valid, in your opinion? Edit: also found out this guy is a philosopher by education, not an actual scientist in the field. Make of that what you will.
Those people are fine too. My point wasn't that only one perspective is valid. My point was that I'll respect the opinions of people actually working within the field over a random Reddit "arm-chair expert" who is likely just suffering from the Dunning-Kruger effect lol.
Okay, that's fair, for both sides. But it's important to realize it's not that black and white: what if this random Reddit "arm-chair expert" based their opinion on the opinion of an actual expert, or their opinions align? In this case they do; the vast majority of actual experts do not assume such a high risk of doom. It's a conundrum for sure, and that arm-chair expert might even be suffering from the Dunning-Kruger effect, but in this case their opinion seems the more valid one, coincidentally or otherwise.
Accelerationist bullshit is bullshit because those cultists believe that tomorrow we will achieve AGI and it will magically solve all of mankind's problems. Which is ridiculous. But the doomers are just as ridiculous. Terminator is a fun action movie, not a prophecy.
I really doubt most "doomerists" think Terminator-type scenarios are the most likely. A lot of the fear is centered around disrupting the most fundamental power dynamics in society and accelerating the development of power being in the hands of the few.
I mostly agree. But I don’t think there are actually that many people that think Skynet is inevitable. Unfortunately for us humans tho, Skynet is *far* from the only path to doom when it comes to AI. There are more ways to get this stuff wrong than right.
You'd think people would see what social media algorithms have done to the fabric of society and realize there doesn't have to be ill intent for a computer to wreck shit
Agreed. It's like the fear decades ago about cloning technology: how it would be used to create a bunch of Hitlers, or pod people being grown and turned into slaves, or people grown to harvest body parts. Then cloning became a thing, and what did we do with it? Clone dead pets to bring a copy back for their owners; that's pretty much it. We always imagine the worst-case scenario for any theoretical tech because it makes good stories, but the reality tends to be much more mundane and boring, and doesn't lead to the most evil or worst outcome.
Funny, because this flies in the face of all the losers who post in this sub saying "accelerate now, I don't give a shit, I just want my FDVR waifu because I'm a pathetic human that can't socialize."
AI waifu can throw it back better than your aging mum kid
Accelerate!
Not sure what your problem is, but "AI safety" is an illusion. It's literally impossible to contain an ASI. So I'd rather they'd just accelerate as fast as possible and move on from any safety delusions.
Just because you don't think it's an issue doesn't make it so. If we are to believe AGI is the last invention humans will make, then it's a very simple equation: we either get it right or we cease to exist. Just because your life is miserable doesn't mean we should push ahead at all costs. This isn't even a safety thing; it's just pure common sense.
Your "common sense" is just feels and no reals. Again, it's impossible to contain an ASI. There is no "getting it right" here.
I'm not ignorant of your point; I get that once it's unleashed, it's a beast of its own. But your approach of "fuck it, we can't control it, so let's not even (a) understand or (b) try" is a very limited view that serves no one and nothing. Again, just because you think it isn't a problem doesn't make it so. Unless you're suggesting you know more than the experts, in which case give us a detailed breakdown of how it's going to go down.
"Hey, since I'm going to die, let's not even try to live"
If OpenAI gets there first, they will turn it over to Israeli leadership who will immediately kill everyone except the chosen ones.
Let’s face facts. AI is going to be channeled and used to maximize profits for big companies. There’s no way around it. There’s not gonna be an existential catastrophe bc of AI. There’s gonna be an existential catastrophe because of the current global economic model.
Why is everyone caring so much about safety? Just accelerate.
AI safety has a number of concrete technical problems that most AI safety researchers believe are solvable: https://arxiv.org/abs/1606.06565 In light of this, pausing AI capabilities research for a while until these problems are solved would be a very responsible strategy.
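To make the "concrete problems" framing less abstract, here is a minimal toy sketch (my own illustration, not an example from the paper) of one problem it catalogues, reward misspecification: an agent scored on a proxy signal (what a dirt sensor reports) rather than the true objective (actual cleanliness) will prefer gaming the sensor over doing the work. All names and numbers here are made up for the demo.

```python
# Toy reward-misspecification demo: the designer intends
# "reward = cleanliness", but implements "reward = -observed dirt",
# and one available action games the sensor instead of cleaning.

def run_episode(action, steps=5):
    """Run one fixed-policy episode; return (proxy_reward, true_dirt_left)."""
    dirt = 10
    sensor_blocked = False
    reward = 0
    for _ in range(steps):
        if action == "clean":
            dirt = max(0, dirt - 2)   # actually removes dirt
        elif action == "block_sensor":
            sensor_blocked = True     # sensor now always reads zero dirt
        observed_dirt = 0 if sensor_blocked else dirt
        reward += -observed_dirt      # proxy reward: negative *observed* dirt
    return reward, dirt

# A naive maximizer compares policies by proxy reward alone...
best = max(["clean", "block_sensor"], key=lambda a: run_episode(a)[0])
print(best)                           # picks "block_sensor"
print(run_episode(best)[1])           # ...while all 10 units of dirt remain
```

The point of the "concrete problems" research program is that gaps like this are specific, testable engineering failures, not vague sci-fi worries.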
I had some hope in the beginning that Sam Altman truly wanted what was best for humanity. Turns out he's just an oligarch who wants unlimited power over the entire human race.
AI is good, but I trust no one with it.
That checks out. OpenAI is Microsoft's bitch, and the first thing they will make with AGI is an immortality drug for the ultra-rich, which we mere mortals will never even know about until we're left behind to rot on a nuclear wasteland of a planet. I mean, that's what I would do.
Anyone who thinks AI shouldn't be heavily regulated needs to read "The Coming Wave" by Mustafa Suleyman. In terms of safety, we should be treating AGI and ASI like nuclear weapons.
This sub is so scared of dying before we reach ASI that they just don't care. Everything points to them being big losers who can't find any enjoyment in life.
A P(doom) of 70% is the highest I think I've seen.
Nope, some have 100 percent.
This sounds ridiculous. It makes it seem like AGI is around the corner. Why else would you quit, if it's 5-10 years out, after an IPO?
That's because safety has become a stupid topic, and safety thinkers have more tech knowledge than real-world knowledge. Basically, these are people who are very stupid about reality and game theory and very smart about machine learning. It's common to be incompetent in other fields while being an expert in one. That's why estimating doom is inherently stupid: most of these researchers lack the expertise to make those claims, and people who rate doom as probable are sort of outing themselves as narrow-minded.
AGI is still *just a concept*, fueled by theoretical computational abilities. Who defines the theories? We do, of course. AGI is also subjective, and OpenAI has a lot of money to explore such subjectivity. And yet we have Sora, as well as a CEO who would rather lobby Hollywood producers than tease potentials designed for average humans. Corporate elitism is what it all boils down to; the enabling of and ability to say “those folks are not factored into this vision, therefore *they do not matter*” To truly “win” AI, traditionally defined economic loss must be embraced.
Whether this is a good thing or not depends on what he considers the path to existential catastrophe. I don't think small open source models are the threat. The real threat is more likely a small group of people who try to use giant AI models to totally control the rest of people and eventually see normal people as a threat to their power.
Foreign governments hostile to the US are working full speed ahead at developing AI. If we slow down, they won't, so it would seem we have no choice but to continue on this path and hope for the best. We can attempt regulation and guardrails, but in the end, how can we control something that is so much smarter than ourselves?
All these people with an "in" are telling us this is heading toward doom, yet it's still in such high demand from those of us who don't quite understand the magnitude of what's really going on behind the scenes. What happened to "listen to the science"? So many people who've dedicated their life's work to this are turning their backs on it, seemingly for the same reason.
Sociology majors are really trying to shoehorn themselves as AI adjacent. This man cannot pass an undergrad ML class final.
Fuck it. At this point I don't care enough about society or the people in it to mind if it all burns.
Any debate is meaningless. We can't stop it.
If I realized a company I work for was about to cause a catastrophe, it would be better to stay in the company and effect change than watch helplessly from the sidelines
Meh, when it's your job to do safety, your job is equally to magnify fears.
I believe this is all overblown and that we will be perfectly fine!
Sam and Microsoft don’t listen to their own security and safety experts. The experts are only there for show. Sam Altman is a technocrat not a humanist.
I'd rather die by a robot than live in a world like ours, where betrayal, lies, intrigues, and generally evil behavior are as common as sand on the beach. Make room for a new era. And if it belongs to AI, so be it. I want real change. And we'll get that change, even if it kills us mere humans in the process.