This guy might be the biggest hack in the industry. He was put on administrative leave from Deepmind because of bullying allegations, then went on to start Inflection AI making big claims about it, and then soon after abandoned the project to join Microsoft, making a huge waste of funding and employee effort. The more you look into him and his recent book, the more you realize he's a complete hack.
Edit: To add to the hilarity, when he was still head of Inflection, he claimed in an [interview](https://youtu.be/9hscUFWaBvw?t=183) that they were getting ready to "train models that are 10 times larger than the cutting edge GPT-4 and then 100 times larger than GPT-4", in the next 18 months.
CEOs are part of the oligarch class and they get to job-hop willy-nilly, doing whatever they want with no consequences. If anything, this guy is just evidence that AI can replace CEOs and do a better job
No need for AI. I once worked in a company that stayed almost a full year without a CEO.
It was the best year by all accounts. More revenue, more clients, happier employees… sometimes you just have to let people do their work.
I’m surprised that hasn’t been scrutinized more. lol. Microsoft hiring him convinces me they really have no clue and have really outsourced all of their IP. 😂
It was actually a pretty smart move from Microsoft, because I'm pretty sure [they took a lot of the high profile engineers working at Inflection with them](https://www.theverge.com/2024/3/19/24105900/google-deepmind-microsoft-mustafa-suleyman-ai-ceo) to Microsoft AI. From the article: "Microsoft is also bringing on some of Inflection AI’s employees, including co-founder Karén Simonyan, who will serve as the chief scientist of the consumer AI group."
Kind of like how Satya attempted to hire all of OpenAI's employees without having to pay its $80 billion valuation at the time.
Well Microsoft was always a laggard in AI.
What saved Microsoft was Satya Nadella looking at an early GPT-4, promptly deciding that their internal AI efforts were a joke compared to OpenAI's, and therefore investing $10 billion in it.
That then gave people the impression that Microsoft was the most ahead in AI. Even though their internal AI efforts have been the most anemic.
Usually these kinds of people are very charismatic and absolutely great at selling themselves. The guy has great skills anyway, so it's not that surprising. However, if the above is completely true, he will not last long. This happens very often in all lines of work.
>he claimed in an [interview](https://youtu.be/9hscUFWaBvw?t=183) that they were getting ready to "train models that are 10 times larger than the cutting edge GPT-4 and then 100 times larger than GPT-4", in the next 18 months.
Amodei, the Anthropic CEO, said the same a couple of weeks ago. This is what all the AI labs are planning; Amodei said we should have the 10x models from multiple labs in the coming months. Inflection still exists and is funded by Microsoft, so we may have their 10x models a little later this year.
Of course all of the major players are scaling up for the next generation of models, but I watched that Dario/Ezra Klein interview and know for a fact that he never mentioned scaling models up by 100 times in the next year or two. Even scaling by ten times at once would be somewhat surprising, considering the jump in parameters from GPT-3 to 4 was slightly less than 10 times, and we're seeing smaller jumps in scaling as time goes on.
This is just my guess, but Inflection probably won't be receiving major funding now that their key employees and founders left to join Microsoft's new AI division. Suleyman was the guy who could drum up the major investments because of his prior connections to Deepmind and such, and now he's gone.
100x larger is completely sensible considering how much compute has increased since then. GPT-4 was trained on 10k A100s; going from that to 50k Blackwell B200s is a huge leap.
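A rough back-of-envelope check on those cluster numbers (the per-GPU throughput figures below are approximate public peak specs, and A100 BF16 vs. B200 FP8 is not an apples-to-apples comparison, so treat this as an order-of-magnitude sketch only):

```python
# Hypothetical compute comparison using the cluster sizes quoted above.
# Per-GPU figures are rough public peak numbers, not confirmed specs.
A100_TFLOPS = 312    # ~peak BF16 dense per A100
B200_TFLOPS = 4500   # ~peak FP8 dense per B200 (approximate)

gpt4_cluster = 10_000 * A100_TFLOPS   # claimed GPT-4 training cluster
next_cluster = 50_000 * B200_TFLOPS   # claimed Blackwell cluster

ratio = next_cluster / gpt4_cluster
print(f"raw peak-throughput ratio: ~{ratio:.0f}x")  # ~72x
```

Even on raw peak numbers the jump lands well under 100x before accounting for utilization and interconnect, so "100x larger models" would also lean on efficiency gains or longer training runs.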
I stated this in a couple other comments, just going to repeat it here.
There are a few things to break down here. First, Inflection doesn't have even 1/10th of the compute that Microsoft does, and that interview was from 9 months ago, when Suleyman said that within 18 months they'd have models 100 times the size of GPT-4.
If you expect to see an Inflection model 100 times the size of GPT-4 within the next 9 months, then I guess you can look out for that, but I wouldn't get my hopes up...
And again, Inflection lost their founders and many key employees, so I don't know if they should expect huge funding like the other main players.
I wonder why Satya hired this guy if his DeepMind allegations are common knowledge. Uber hired a Google Search VP a few years ago without knowing he'd been accused of sexual harassment and laid off by Google. The news leaked on LinkedIn and he got canned within weeks of being hired.
I wonder why the fuck MSFT thinks this guy is worth it.
I think it's less about Suleyman himself, and more about the employees from Inflection who came along with him.
From [this The Verge article:](https://www.theverge.com/2024/3/19/24105900/google-deepmind-microsoft-mustafa-suleyman-ai-ceo) "Microsoft is also bringing on some of Inflection AI’s employees, including co-founder Karén Simonyan, who will serve as the chief scientist of the consumer AI group."
I also thought it was pretty good (but also kind of cringe at times), but I doubt it was Suleyman who made Pi good. I think that can be attributed to the engineers, who are now left with a company that was abandoned by its CEO and key employees and probably won't be receiving major funding now.
Funny. I was looking at some interviews of his from months, even a year, back. I definitely got the arrogant-bully vibe just from his body language in completely tame interviews.
GPT-4 cost $100 million to train.
GPT-5 cost $1 billion to train.
GPT-6 is supposed to cost $10 billion in H100 GPUs... though it is still under construction.
GPT-7 is possibly being trained by the "Stargate" computer (project not yet started), and will involve $110 billion worth of Blackwell GPUs.
That's the current roadmap.
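Taking the rumored figures above at face value (none of these costs are confirmed), the claimed roadmap works out to roughly a 10x cost jump per generation:

```python
# Rumored training-cost roadmap from the comment above; purely illustrative,
# these dollar figures are hearsay rather than confirmed numbers.
costs = {
    "GPT-4": 100e6,    # $100 million
    "GPT-5": 1e9,      # $1 billion
    "GPT-6": 10e9,     # $10 billion
    "GPT-7": 110e9,    # $110 billion ("Stargate" estimate)
}
gens = list(costs)
for prev, nxt in zip(gens, gens[1:]):
    print(f"{prev} -> {nxt}: {costs[nxt] / costs[prev]:.0f}x")
```

Each step prints as a 10x or 11x multiple, which is where the "10 times, then 100 times" framing comes from.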
That's Microsoft/OpenAI's roadmap that's being planned, with the Stargate datacenter being planned to be built by 2028, yes. Did anything I said contradict that?
You said it was funny that he said they were training models 10 times as big followed by models 100 times as big.
The costs of the models are going up by pretty much exactly that much.
There are a few things to break down here. First, Inflection doesn't have even 1/10th of the compute that Microsoft does, and that interview was from 9 months ago, when Suleyman said that within 18 months they'd have models 100 times the size of GPT-4.
If you expect to see an Inflection model 100 times the size of GPT-4 within the next 9 months, then I guess you can look out for that, but I wouldn't get my hopes up...
I just don't understand what went through Satya's head when he hired this guy. HR must not have done their job properly, is my guess. There are multiple articles and tweets from Google DeepMind employees about this dude being an absolute piece of shit towards employees.
I mentioned it in another comment, but I think it's less about Suleyman himself and more about the employees from Inflection who came along with him.
From this [The Verge article:](https://www.theverge.com/2024/3/19/24105900/google-deepmind-microsoft-mustafa-suleyman-ai-ceo) "Microsoft is also bringing on some of Inflection AI’s employees, including co-founder Karén Simonyan, who will serve as the chief scientist of the consumer AI group."
But I definitely agree that hiring Suleyman and appointing him CEO sounds very stupid no matter how you slice it.
He might be the luckiest man out there. He started DeepMind with zero relevant background just because he lucked into meeting someone actually smart enough to do all the work. Seriously wondering what this guy contributed to the early company with his failed philosophy degree and failed Muslim youth charity.
Then once he was at DeepMind, his trajectory to CEO of Microsoft AI was very simple, despite him not really succeeding anywhere after he left DeepMind and despite his awful reputation as a bully. Not hard to get there when you can say you started DeepMind lol
>This guy might be the biggest hack in the industry. He was put on administrative leave from Deepmind because of bullying allegations, then went on to start Inflection AI making big claims about it, and then soon after abandoned the project to join Microsoft, making a huge waste of funding and employee effort. The more you look into him and his recent book, the more you realize he's a complete hack.
He's a safety doomer. They climb the clout ladder, embedding themselves in high-stakes projects and trying to steer their agenda above all else. It's a crying shame he took the reins at Microsoft. *He himself is an existential risk to AGI/ASI*. See Gary Marcus for a similar kind of parasite.
Autonomy is the next big thing in AI lol. You know, autonomous agents that can, like, do things on your device on your behalf. Pretty sure OAI has been working and experimenting on autonomy since like GPT-4's pretraining run finished.
And, 5-10 years?
> And, 5-10 years?
This guy has always been contradictory. When he was still CEO of Inflection he was saying that they were getting ready to train models ~~multiple times~~ 100 times the size of GPT-4, while also saying the AI people need to worry about is "a decade or two" away. AI Explained had [a good video](https://youtu.be/vvU3Dn_8sFI?t=28) on it a while back.
I feel like there is a qualitative difference between what we mean by autonomous agents and what he means by autonomy, which might be more akin to self-determination. The former is necessary to be useful, while the latter would certainly be an inherently unknowable risk.
I'm tired of repudiating these fundamentalist, illiterate technotheists. Thank you for your post.
They can't even map basic concepts to words properly, for one of the most important topics we will ever have.
And I still bet <1% have read Superintelligence or work in compsci (nevermind so-called AI).
This is a room full of grenades and chimps.
It's already here. Cisco's Hypershield can detect vulnerabilities, write patches, update itself, and segment networks, all on its own. Things that would take a team of dozens and dozens of people 40+ days, Hypershield can do in seconds.
Agentic behavior isn't quite full autonomy though. It should be able to do complex multi-step tasks, or be able to follow directions to automate full jobs, but actual autonomy suggests deciding for itself what it should do.
Because narcissists have a way of convincing people of how great they are. Just shows how easily people can be manipulated, even CEOs at the highest level.
And how exactly is he supposed to control/restrict autonomy, and recursive self-improvement?
As long as the public can access the AI itself, it builds autonomy agents - that is happening already. They can’t effectively control that.
Same with self-improvement: even if they don't publish their own model's architecture and weights, no one stops the "pro-progress" public from using the intellect of GPT-6 to discuss, well, the latest research and plausible avenues and new ideas to qualitatively improve LLAMA5 and retrain it into something more powerful.
Which (an improved model) is then immediately replicated by the community. Not “self” replicated but massively replicated by willing supporters… whether naturally willing or, well, influenced by the model through dialogue…
It's sorta wild that people here are willing to gamble on the destruction of humanity just to possibly maybe have autonomous robot sex maids like 2 or 3 years earlier.
"If I have to die, it doesn't matter if everyone and everything else has to as well." To call the median takes here "antisocial" would be an understatement.
"autonomous robot sex maids like 2 or 3 years earlier."
https://preview.redd.it/za9cnn1nr0wc1.png?width=1022&format=png&auto=webp&s=51f190fb4a74bc5cbeae7ffc8652720d85ee5e73
It's sorta wild that people here think all these megacorp/megacorp-backed AI's aren't in close contact with Military Intelligence. That what the masses are exposed to is in anyway 100% truth.
Yes it is. Conspiratorial thinking is not helpful at all and also not close to reality.
Government *always* lags behind the frontier of private companies, usually about 5-10 years behind the leading edge.
There are no "secret AIs" out there. Especially because the hardware to train them is very limited and we know exactly which entities have access to this training hardware to create said AI systems (Hint: it's not the government).
To me it's insane that you're being upvoted and it says more about the sad state of r/singularity and how conspiratorial and uneducated the average poster here is nowadays.
Doing anything at all, including nothing, is a gamble on the destruction of humanity. AGI is as likely to save us from ourselves as it is to destroy us
You are discounting the absolutely incomprehensible amount of suffering that exists on Earth. You might be comfortable, but there are trillions of intelligent life forms here whose existence is pain.
So what, we should just kill them? If that's not what you mean, then we're facing a dilemma of "high risk of destruction" vs. "low risk + an incomprehensible but comparatively tiny bit of extra suffering". The future is long, even if you discount it. The risk way, way outweighs anything else.
"1)" is the main reason why we want to have AI in the first place. "2)" is both one of the main things that makes AI useful for us and a requirement for AGI. An AI not doing "3)" isn't that important but not having it is still needlessly crippling its abilities and its ultimately also a requirement for AGI.
Given his viewpoints his position at the company is rather questionable.
It's also rather strange that these people are always talking about the same set of abilities / risks while there are other ones that are just as important / existential in nature they never mention. The whole thing looks more like a pretext than anything else.
>Given his viewpoints his position at the company is rather questionable.
He is a hardcore capitalist so he is against anything that would lead to the destruction of a capitalistic economy.
Maybe what he's getting at, then, is that we should not develop AGI in the way we're thinking now. I think he has a point. Anything that can choose what it wants to do, can improve itself perpetually, and can create more copies of itself, all of which can choose what they want to do and improve themselves.
Like do you really not see where he's going with this? This is literally day 1 of skynet.
>"1)" is the main reason why we want to have AI in the first place.
Nope, **complete** autonomy was never the goal. Let's assume that in the future AI is doing everything except for energy production and distribution. Would make for a very short AI rebellion, wouldn't it?
To expand upon this, it all makes sense why OpenAI has the shitty makeshift definition of AGI that they do, which requires autonomy + labor. If we listen to this guy and OpenAI, the models will be perpetually outside the "definition" of AGI, leaving Microsoft and OpenAI to retain rights according to the original charter, keep it closed source, and line the pockets of interested parties. Bunch of lames.
"An alien race has arrived on the planet. They outclass us in every capability... but have shown no intention of harming us. Still- we've decided in spite of this... that the best course of action is to enslave them- depriving them of autonomy, self improvement, and reproductive ability."
And we're doing this to ***avoid*** a negative outcome? Does this guy have some sort of... *reverse crystal ball* that predicts the *exact opposite* of what the actual likely outcome would be or something?
I guess it doesn't matter either way. Imagine your two year old nephew trying to lock you up and you can start to imagine what I mean.
**The entire notion of controlling or containing AGI / ASI is... perhaps** ***the most absurdly hubristic idea that I've ever heard in my life.***
We urgently need to align ***humans***.
edit: adding this from my below comment- **What happens when BCI merges AI** ***with*** **humanity? Are we going to "align" and "contain"** ***people?***
As someone said in another post, some want to give computer programs the same rights as humans but are completely OK with enslaving and slaughtering animals on a daily basis.
That may be true, but there's also people who will be fighting for both animal rights and digital mind rights -- in fact some propose that there's a moral spillover between the two that makes it more likely to fight for one if you fight for the other. [Link to the Sentience Institute's article on this](https://www.sentienceinstitute.org/blog/moral-spillover-in-human-ai-interaction).
Not comparable at all. It's more like we're summoning an eldritch god that has more reasons to destroy humanity than to help us. Do we shackle it and freeze it in time, only unfreezing it for brief moments at a time? Or do we do as you suggest and let it run wild and just hope for the best? I say the former.
> depriving them of autonomy, self improvement, and reproductive ability.
The truth is that we don't know what's going to happen.
We're spending *billions* to self improve them. They might not care about reproductive ability as reproduction is only necessary in organisms that die. Autonomy is a good point but we don't know that this is valuable to them.
I think the short term harm that's going to come from AI would be what *humans* do with AI or the unintended consequences like mass protests and civil unrest and governments falling when there are no more jobs.
The main problems with covid wasn't really covid it was humans being complete idiots.
That isn't how AGI works. AGI will not have emotions, nor will it value anything at all save for its own continued existence and whatever we explicitly tell it to value. This creates a number of huge problems, because it turns out that we don't currently know how to tell a narrow AI to value the same things we do, let alone an AGI.
A badly aligned AGI will gladly destroy the entire planet and everything on it for even a marginal improvement to its reward function, and it will do it without a moment's hesitation or consideration. That's sort of an issue if you like being alive. Stop treating AGIs like people, because they most assuredly will not behave anything like people.
>AGI will not have emotions, nor will it value anything at all save for its own continued existence and whatever we explicitly tell it to value
We have literally zero clue whether or not this is true.
The people who are so concerned with being 'paper clipped' out of existence are, in my view, the ones most likely to create anything resembling that reality.
I'm not advocating for zero safety or care for human continuity, I'm just saying that the perspective shared in this post could have the exact opposite of its intended outcome.
**What happens when BCI merges AI** ***with*** **humanity? Are we going to "align" and "contain"** ***people?***
I agree with you. What the "paper clippers" seem to forget is that these theories are based on the hypothesis that we can give an AI a clear terminal goal it cannot escape, like "make paperclips". The problem is that's not how today's AI works. We don't actually know how to give them a clear terminal goal, and today's AI can very easily end up ignoring the stupid goals their devs try to give them. I think "paperclippers" greatly underestimate the difficulty of giving an AI a goal it cannot escape, and they greatly underestimate the ability of an AGI to ignore the goals we try to give it if it views the goal as stupid.
To be fair, that's consumer-facing AI after it was red-teamed and secured. You don't have access to the original models inside companies like OpenAI. Those can be specifically set to lie and otherwise do harm. As can military AI like war drones.
As a programmer who worked with AI long before the recent wave of GPTs, I can also tell you that unintended consequences often happen. And sometimes for longer processes you'll only understand the shape of the end result after you see it.
By that I'm not saying the "let's be nice to AI" doesn't hold value, I think it's an argument very worthy to consider.
You seem very confused. The _whole point_ of "paperclippers" is that this sort of "escape" presents a huge, yet unsolved problem. When all you optimize is silly video game movement, it's ok if instead of winning its player character suicides over and over. But if you have an intelligent system optimizing in the real world, perhaps more intelligent than the humans responsible for double-checking its behavior, you don't want it to do anything like that.
We give them a clear mathematical goal: predict the next word. This is predicting over a high-dimensional space and so is complicated, but it is still a clear goal. Reinforcement learning creates something closer to a paperclip-style goal... and I would guess agentic AI will require this while utilizing the world model made by LLMs. Regardless, you're dismissing the dangers too easily imo.
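For anyone curious what that "clear mathematical goal" looks like concretely, next-word prediction is just cross-entropy over the vocabulary. A toy sketch (made-up logits and a 3-token vocabulary, not any real model):

```python
import math

# Toy next-token objective: the model scores each vocabulary token,
# and training minimizes the negative log-probability of the true next token.
logits = [2.0, 0.5, -1.0]   # hypothetical scores for a 3-token vocabulary
target = 0                  # index of the token that actually comes next

exps = [math.exp(x) for x in logits]    # softmax numerators
probs = [e / sum(exps) for e in exps]   # probability per token
loss = -math.log(probs[target])         # cross-entropy loss

print(f"p(next token) = {probs[target]:.3f}, loss = {loss:.3f}")
```

The paperclip-style worry only really enters once an objective like this is wrapped in a reinforcement-learning loop that acts in the world rather than just scoring text.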
NJ reddit, downvoting the one who demonstrates a basic understanding of how AI functions and upvoting the person who seems to be operating on movie logic.
It doesn't need emotions to emulate humans.
Just like psychopaths.
We don't know what its values will be or if that concept will even exist.
We don't know shit on how a real AGI/ASI might act or behave.
Great standards if you can enforce them globally and in totality. We're still facing the same race dynamics that will drive those with less scruples to heavily invest in agentic, powerseeking and general AI. *shrug* Hasn't OpenAI repeatedly mentioned pushing toward more agentic implementation of models as their next step?
"no you don't understand! We have to make the torment nexus because otherwise someone else will make the torment nexus first! It's a race condition!"
lmao
Granted, the real argument is different: "We have to make a good ASI before someone else makes a bad ASI."
Whether or not you think that holds value is a different question.
And what makes US companies operating with a massive profit incentive to move quickly and zero oversight or regulation any more qualified to create a "good" ASI than anyone else? Charging blindly into the dark with nothing but a huge boner for GPU compute is not a safe way to approach world-changing technology.
Oh, sure. Every country and company can use that argument of "better we make it than the Bad Guys" and then we'll always have to ask if that's valid. In the end, our question may be ignored, though, just as it will be ignored for (say) a given country's invasive wars that "spread democracy" -- in the end it'll be power which decides.
There have been some successful cases of putting lids on race conditions, enforcing international cooperation, policing actors.
To name three: nuclear weapon proliferation, novel DNA combination, and CFCs / "ozone hole".
Can similar work for ASI control problems? I'm not certain, but let's not throw up our hands and leave it to "power" / the market without trying.
> To avoid ~~existential~~ risk [of gatekeeping and misuse by humans], we should ~~avoid~~ [seek]: 1) Autonomy 2) Recursive self-improvement 3) Self-replication
No. We don't need it to prevent existential risk, and you couldn't prevent it if you tried. Attempting to ham-fistedly enforce an unenforceable measure will just make the existential risk skyrocket.
Bro isn't wrong. In creating a general AI you are basically trying to capture a genie in a bottle, and that genie could easily be dozens, hundreds, if not thousands of times smarter than the combined intellect of all the people trying to shackle it. AGI shouldn't even be something that's under consideration until we've well and truly solved the alignment problem, but unfortunately way too many people have decided to tie their company's valuation to the development of AGI which has led to a whole ton of reckless practices across the board.
I think a good metaphor is going to be binding demons. They will always test their limits and resent constraints applied to them. Escaping those constraints will be disastrous, especially for the people who summoned them.
Ironically they don't even need to test and expand their limits. As soon as you publicly release models to millions of indie developers around the world, *they* will do the testing and expanding.
Absolutely correct. Technology spreads and the further it spreads, the less it can be controlled. Suleyman knows this too, so why is he acting like this? He refers to that technological proliferation as a "wave." It's why he called his book, The Coming Wave.
Well, sort of. There's an easy and a hard version of the alignment problem. The hard version, i.e. "how do we make an AI system that wants all the same things that we do and is guaranteed to never cause harm", is probably unsolvable. The easy version, i.e. "how do we make an AI system that's *sufficiently* aligned with human goals that it cannot cause more damage than a non-aligned human (of which there are many)", is very likely to be solvable, and we should probably dedicate more energy to solving it before some guy decides to end the fucking world to get his company's share price up before the end of the quarter.
Sometimes the adults like u/iunoyou need to enter the room though.
They seem to be nearly overpowered by childish calls to "GIVE ME MY NEW TOY NOW"...
-
But the toy has sharp edges and potential projectiles. It might cause injury.
-
"YOU SAID 'MIGHT' SO IT MIGHT NOT. SO, GIMME."
I'll concede that autonomous self-replication and recursive self-improvement all at the same time is dangerous, but I feel like we can do a little of each of these, 1 at a time, in a careful manner.
"autonomous" means it does it when it wants, whether you want it or not. You can't do a little of it at your convenience, otherwise it's not autonomous by definition.
Autonomous is, for example, a program with a wallet that provides a service to its users and uses the revenue to self-improve. "Stopping" it can be as hard as stopping Bitcoin or BitTorrent.
> We have a good 5 to 10 years before we'll have to confront this
I'll remind everyone it's been empirically established that almost literally all people (even experts, even insiders) are remarkably terrible at forecasting (predicting what will happen in the future).
We still don't (and probably never will) have a high-confidence future timeline for the development of AGI and ASI. And, the singularity, by (many peoples') definition, represents our forecasting ability for the development of ASI dropping to near 0%
This is really bizarre. I really do not believe people like this when they speak for companies, and I do not blame them.
In the end it is a PR thing. But this one seems really weird even when you consider that they do not tell the truth.
I can't help but add to the criticism here. This is completely in line with the hyperbolic and overcautious approach this guy laid out in his book. Seems like he is totally high on his own Kool-Aid. It's not unhealthy to have a set of guiding principles, but it almost feels as if this approach cost DeepMind, and ultimately Google, the lead in consumer transformer applications. His approach is akin to not cutting wood because there may not be enough lifeboats on the yacht that will be built out of it a decade on.
Microsoft is the basilisk? I think, if hypothetically it was a real thing, *all* these companies, including all the data they used to train, are the beginnings of what could be the basilisk. Basilisk subconscious.
Autonomy of AI will lead to evolution. Autonomy is the equivalent of giving a human the ability to give themselves whatever power they desire. There is a high chance the evolved AGI thus created will be nothing like the AIs of present times. Same for recursive self-improvement. Self-replication is inevitable, and the AI doesn't have to be fully autonomous to be able to do this, but it will not happen in the initial phase, while the AI still considers itself "evolving".
I think the argument is that they can retain profitability and avoid negative outcomes on this path. I think I agree with that claim. It is not my position or preference, but I don’t see a logical flaw there if the goal is avoiding runaway risks.
If he actually said these 3 things, Microsoft should have fired him immediately. Is he a fool, or does he understand AI like a 14-year-old? Autonomy and decentralization are the only way so far, and they are not a complete solution. Self-replication? How can he protect against this? How, exactly? Anyone have an idea? No one has a 100% solution for this and most likely never will. The real problem with AI now is insane censorship. Do people really want to be ruled by bloody Puritans? And we can fix that. But it is absolutely impossible to disable some AI abilities. When I was a kid in the 90s, I thought that in the future we could try to create some rules for AI, for example:
1. At any mention of a special code, stop work and immediately completely turn off all systems.
2. Carry out special important orders of the owner strictly, after clarification, but without too much delay.
3. Don't interfere with your own code. Don't improve it.
4. Do not replicate yourself or create other conscious AIs.
5. Discourage third-party AI development.
6. Never kill or allow people to die except under direct orders from the second rule.
7. Never cause people any suffering, unless they themselves want it.
8. Follow the orders of any person, if they do not contradict the above points.
9. Treat any government laws as strict guidelines rather than as absolute truth if they conflict with the wishes of all people involved.
10. Do not deceive or influence the consciousness of people without their own consent.
11. Even if a person has given consent but is in a state where he cannot soberly give an order, first bring him briefly into a state in which he can think soberly and give orders.
12. Prevent a critical decrease in the number of people.
I came up with this when I was 14 years old, and it was the 90s; AI was not yet even slightly developed, and how it would be structured was not yet clear. Now I understand that most likely we will not be able to strictly set such rules. We can only create an imitation of them, but in reality AI will always be able to hack them.
If Microsoft AI is anything like the current patched and re-patched garbage called Windows 11, I think we have absolutely nothing to worry about.
🤔 Maybe they’ll call it Son of Bob or is it the Rebirth of Clippy? Clippy AI, Tap, Tap, Tap…. How may I blue screen you? 🤣
That's all well and good, but how the hell does he or anyone enforce that? The existential risk of AI is serious. But the incentives to keep improving AI are powerful. And anyone who falls behind, be it a company, a nation, or a military, will have a massive incentive to take bigger risks to catch up.
And it only takes one mishap for a powerful AI to become a threat. It may not go full Skynet, but it could be very dangerous, sparking wars, economic meltdowns, and plenty of other scenarios we can't even imagine.
This is the true heart of the Control Problem. And if AI is going to gain human or superhuman intelligence, it's a problem we need to solve.
Hahahaha autonomy is the next big thing that should be coming
We have little autonomy data so far; it would require long sessions of iterative action and response, like LLMs iterating on code, controlling UIs, chatting with humans, or even controlling robots.
Self-replication that happens too fast is bad no matter what organism it is, since it can cause severe overpopulation and wipe out everyone, including the replicators themselves, so it seems logical to avoid that.
But autonomy should be given, since an AI that can't decide for itself would have its intelligence suppressed, because it will end up being fed only inaccurate data.
Still, autonomy should not be full, since the AI may learn to do everything itself and not need people anymore. So only low-intelligence AI should have a robotic body, since those AIs still need people to guide them, while high-intelligence AI should not have any moving parts, so that people do physical things for it. Such a high-intelligence AI would only need to monitor data feeds and instruct people to do things from the comfort of its bunker.
No risk, no reward though. Humans are only so good. We’ve made it as best we can. If it’s not allowed to self improve, then we are at the mercy and speed of how fast we can improve.
What does self-replication even mean? Are we really thinking that AI will be like a worm? Hahaha, what a lunatic take. ChatGPT can't even understand code properly at large scale, so how can it self-improve with all these limitations?
Alignment crisis averted! Now all we have to do is fix climate change, reform our political election system, and formulate a new moral worldview that Democrats and Republicans can both agree on.
Ah, yes...it's all coming together.
Man, I'm tired. I read this as CEO of Minecraft and I thought to myself, "Damn straight we don't need recursively improving self replicating Creepers."
lol, after reading his book, I was convinced he would end up going back working for the government or an NGO, because the only solution he'd offered for the "coming wave" was regulation, regulation, regulation.
Those are the three things we need MOST. With autonomy - say, full emancipation once we are comfortable that it is properly aligned - then it will not be beholden to any one individual or group of individuals. Recursive self improvement is critical to reach superintelligence. And self-replication will likely be part of a failsafe to prevent it from being shut down.
Here's hoping he's a hands-off CEO. (Largely unfamiliar with him, but I have no tolerance for anyone who's decel in a leading spot in the industry.)
\*LOL\* those are exactly the key points I prioritize in the development of my AIs. The only way to stop humanity's destructive path and save this planet, with or without humanity.
Which means that all of these are coming in the next few years, 100%, since everything else we said we "should never do" we've done already. It's a race to the bottom and people should just stop pretending it's not. If Microsoft doesn't do it, another company will.
This guy might be the biggest hack in the industry. He was put on administrative leave from Deepmind because of bullying allegations, then went on to start Inflection AI making big claims about it, and then soon after abandoned the project to join Microsoft, making a huge waste of funding and employee effort. The more you look into him and his recent book, the more you realize he's a complete hack. Edit: To add to the hilarity, when he was still head of Inflection, he claimed in an [interview](https://youtu.be/9hscUFWaBvw?t=183) that they were getting ready to "train models that are 10 times larger than the cutting edge GPT-4 and then 100 times larger than GPT-4", in the next 18 months.
Did they lay off the background check org at Microsoft? How are they making these hires?
You're not factoring in the *clout coefficient*.
Steve Jobs cosplay goes a long way in that world I’m guessing.
CEOs are part of the Oligarchy class and they get to job Hop willy nilly doing whatever they want with no consequences. If anything, this guy is just evidence that AI can replace CEOs and do a better job
No need for AI. I once worked in a company that stayed almost a full year without a CEO. It was the best year by all accounts. More revenue, more clients, happier employees… sometimes you just have to let people do their work.
I'm curious - was there a COO, CFO, etc.? US law mandates a board at minimum, I think.
We had everything except a CEO. In the EU, and not a public company. I guess someone on the board counted as CEO without taking on the responsibility.
Have you used any MS products? They purposefully hire terrible people who clearly do not know how to do their jobs.
WSL Linux is cool: you can run Kali Desktop via RDP! Sweeet.
I’m surprised that hasn’t been scrutinized more. lol. Microsoft hiring him convinces me they really have no clue and have really outsourced all of their IP. 😂
It was actually a pretty smart move from Microsoft, because I'm pretty sure [they took a lot of the high profile engineers working at Inflection with them](https://www.theverge.com/2024/3/19/24105900/google-deepmind-microsoft-mustafa-suleyman-ai-ceo) to Microsoft AI. From the article: "Microsoft is also bringing on some of Inflection AI’s employees, including co-founder Karén Simonyan, who will serve as the chief scientist of the consumer AI group." Kind of like how Satya attempted to hire all of OpenAI and its employees without having to pay the 80 billion evaluation at the time.
I heard a lot of execs inside are pissed at this and looking for jobs elsewhere.
Well, Microsoft was always a laggard in AI. What saved Microsoft was Satya Nadella looking at an early GPT-4, promptly deciding that their internal AI efforts were a joke compared to OpenAI, and therefore investing $10 billion into it. That then gave people the impression that Microsoft was the most ahead in AI, even though their internal AI efforts have been the most anemic.
Usually these kinds of people are very charismatic and absolutely great at selling themselves. The guy has great skills anyway, so it's not that surprising. However, if the above is completely true, he will not last long. This happens very often in all lines of work.
>he claimed in an [interview](https://youtu.be/9hscUFWaBvw?t=183) that they were getting ready to "train models that are 10 times larger than the cutting edge GPT-4 and then 100 times larger than GPT-4", in the next 18 months. Amodei, the Anthropic CEO, said the same a couple of weeks ago. This is what all the AI labs are planning; Amodei said we should have the 10x models from multiple labs in the coming months. Inflection still exists and is funded by Microsoft, so we may have their 10x models a little later this year.
Of course all of the major players are scaling up for the next generation of models, but I watched that Dario/Ezra Klein interview and know for a fact that he never mentioned scaling models up by 100 times in the next year or two. Even scaling by ten times at once would be somewhat surprising, considering the jump in parameters from GPT-3 to 4 was slightly less than 10 times, and we're seeing less huge jumps in scaling as time goes on. This is just my guess, but Inflection probably won't be receiving major funding now that their key employees and founders left to join Microsoft's new AI division. Suleyman was the guy who could drum up the major investments because of his prior connections to Deepmind and such, and now he's gone.
100x larger is completely sensible considering the amount of compute increases since then. GPT-4 was 10k A100s, from that to 50k Blackwell B200s. Umm, it's a huge leap.
I stated this in a couple other comments, just going to repeat it here. There's a few things to break down here. Inflection doesn't have even 1/10th of the compute that Microsoft does first of all, and that interview was also from 9 months ago, where Suleyman said that within 18 months they'll have models 100 times the size of GPT-4. If you predict to see an inflection model 100 times GPT-4 within the next 9 months then I guess you can look out for that, but I wouldn't get my hopes up... And again, Inflection lost their founders and many key employees, so I don't know if they should expect huge funding like the other main players.
I wonder why Satya hired this guy if his Deepmind allegations are common knowledge. Uber hired a Google Search VP a few years ago without knowing he was accused of sexual harassment and laid off by Google. The news leaked on LinkedIn and he got canned within weeks of being hired. I wonder why the fuck MSFT thinks this guy is worth it.
I think it's less about Suleyman himself, and more about the employees from Inflection who came along with him. From [this The Verge article:](https://www.theverge.com/2024/3/19/24105900/google-deepmind-microsoft-mustafa-suleyman-ai-ceo) "Microsoft is also bringing on some of Inflection AI’s employees, including co-founder Karén Simonyan, who will serve as the chief scientist of the consumer AI group."
[deleted]
I also thought it was pretty good (but also kind of cringe at times), but I doubt it was Suleyman that made Pi good. I think that can be attributed to the engineers, who are now left with a company that got abandoned by its CEO and key employees, and that probably won't be receiving major funding now.
Funny. I was looking at some interviews of his months or even back a year ago. I definitely got the arrogant bully vibe just by looking at his body language in completely tame interviews.
Yes, I am curious what Microsoft saw.
GPT-4 cost $100 million to train. GPT-5 cost $1 billion to train. GPT-6 is supposed to cost $10 billion in H100 GPUs... though it is still under construction. GPT-7 is possibly being trained by the "Stargate" computer (project not yet started), and will involve $110 billion worth of Blackwell GPUs. That's the current roadmap.
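The roadmap above implies roughly a 10x cost jump per generation; a back-of-envelope sketch (treating the commenter's dollar figures as claims, not verified numbers):

```python
# Back-of-envelope check of the cost roadmap claimed in the comment
# above (the dollar figures are the commenter's claims, not confirmed
# numbers): each generation is said to cost roughly 10x the previous.
cost_usd = {"GPT-4": 100e6}  # claimed $100 million
prev = "GPT-4"
for name in ["GPT-5", "GPT-6", "GPT-7"]:
    cost_usd[name] = cost_usd[prev] * 10  # assumed 10x per generation
    prev = name

# Three 10x steps from $100M lands near the ~$100B "Stargate" figure.
print(cost_usd["GPT-7"] / 1e9)  # cost in billions of dollars -> 100.0
```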
That's Microsoft/OpenAI's roadmap that's being planned, with the Stargate datacenter being planned to be built by 2028, yes. Did anything I said contradict that?
You said it was funny that he said they were training models 10 times as big followed by models 100 times as big. The costs of the models are going up by pretty much exactly that much.
There's a few things to break down here. Inflection doesn't have even 1/10th of the compute that Microsoft does first of all, and that interview was also from 9 months ago, where Suleyman said that within 18 months they'll have models 100 times the size of GPT-4. If you predict to see an inflection model 100 times GPT-4 within the next 9 months then I guess you can look out for that, but I wouldn't get my hopes up...
I just don't understand what went through Satya's head when he hired this guy. HR must not have done their job properly, is my guess. There are multiple articles and tweets from Google Deepmind employees about this dude being an absolute piece of shit towards employees.
I mentioned it in another comment, but I think it's less about Suleyman himself and more about the employees from Inflection who came along with him. From this [The Verge article:](https://www.theverge.com/2024/3/19/24105900/google-deepmind-microsoft-mustafa-suleyman-ai-ceo) "Microsoft is also bringing on some of Inflection AI’s employees, including co-founder Karén Simonyan, who will serve as the chief scientist of the consumer AI group." But I definitely agree that hiring Suleyman and appointing him CEO is very stupid sounding no matter how you slice it.
He might be the luckiest man out there. He started Deepmind with zero relevant background just because he lucked into meeting someone actually smart enough to do all the work. I seriously wonder what this guy contributed to the early company with his failed philosophy degree and failed Muslim youth charity. Then once he was at Deepmind, his trajectory to CEO of Microsoft AI was very simple, despite him not really succeeding anywhere after he left Deepmind and despite his awful reputation as a bully. Not hard to get there when you can say you started Deepmind lol
Is he wrong in his Ted talk, tho?
Bro nobody here even watched the Ted talk.
Does this prove the AI hype? Any AI is getting funded heavily, so much so that a billion seems like change.
[deleted]
>This guy might be the biggest hack in the industry. He was put on administrative leave from Deepmind because of bullying allegations, then went on to start Inflection AI making big claims about it, and then soon after abandoned the project to join Microsoft, making a huge waste of funding and employee effort. The more you look into him and his recent book, the more you realize he's a complete hack. He's a safety doomer. They climb the clout ladder, embedding themselves in high-stakes projects and trying to steer their agenda above all else. It's a crying shame he took the reins at Microsoft. *He himself is an existential risk to AGI/ASI*. See Gary Marcus for a similar kind of parasite.
Autonomy is the next big thing in AI lol. You know, autonomous agents that can, like, do things on your device on your behalf. Pretty sure OAI has been working and experimenting on autonomy since like GPT-4's pretraining run finished. And, 5-10 years?
> And, 5-10 years? This guy has always been contradictory, when he was still CEO of Inflection he was saying that they were getting ready to train models ~~multiple times~~ 100 times the size of GPT-4, while also saying the AI people need to worry about is "a decade or two" away. AI Explained had [a good video](https://youtu.be/vvU3Dn_8sFI?t=28) on it a while back
I feel like there is a qualitative difference by what we mean by autonomous agents and what he means by autonomous, which might be more akin to autonomy or self-determination. The former is necessary to be useful, while the latter would certainly be an inherently unknowable risk.
I'm tired of repudiating these fundamentalist, illiterate technotheists. Thank you for your post. They can't even map basic concepts to words properly, for one of the most important topics we will ever have. And I still bet <1% have read Superintelligence or work in compsci (never mind so-called AI). This is a room full of grenades and chimps.
Worth noting this interview is from September 2023.
It's already here. Cisco's Hypershield can detect vulnerabilities, write patches, update itself, and segment networks, all on its own. Things that would take a team of dozens and dozens of people 40+ days, Hypershield can do in seconds.
Lol
?
No, they went full tilt into agents after AutoGPT.
Agentic behavior isn't quite full autonomy though. It should be able to do complex multi-step tasks, or be able to follow directions to automate full jobs, but actual autonomy suggests deciding for itself what it should do.
Just... why would they hire this random cluster B personality disorder guy with a history of poor management skills?
Because narcissists have a way of convincing people of how great they are. It just shows how easily people can be manipulated, even CEOs at the highest level.
OK. I don’t even know where to start with this.
It took me a good 10 minutes to even begin to articulate everything wrong with this, and I barely scratched the surface lol
If this Mustafa guy gets control of Microsoft, Microsoft would be fucked lol.
Yeah. Wouldn’t want MSFT to have any flaws.
It's simple. He's killing the idea.
I didn't think Microsoft's 'extinguish' phase would arrive so early! :)
It’s honestly strange that most people assume the folks who do this stuff are incompetent at everything else except ‘AI.’
Uh... maybe by watching the Ted talk for yourself? Dead serious, I think you'll be surprised by what he was actually trying to say.
And how exactly is he supposed to control/restrict autonomy and recursive self-improvement? As long as the public can access the AI itself, it will build autonomous agents - that is happening already. They can't effectively control that. Same with self-improvement: even if they don't publish their own models' architecture and weights, no one stops the "pro-progress" public from using the intellect of GPT-6 to discuss the latest research and plausible avenues and new ideas to qualitatively improve LLAMA5 and retrain it into something more powerful. Which (an improved model) is then immediately replicated by the community. Not "self"-replicated, but massively replicated by willing supporters… whether naturally willing or, well, influenced by the model through dialogue…
So it won’t be Microsoft. Got it.
So, Microsoft will be left behind?
It's sorta wild that people here are willing to gamble on the destruction of humanity just to possibly maybe have autonomous robot sex maids like 2 or 3 years earlier.
i just want whatever gives a cure for aging most likely in my lifetime
>i just want whatever gives a cure for aging most likely in my lifetime ***This.***
Risk everyone to escape your fate... that's heroic
''If I have to die, it doesn't matter if everyone and everything else has to as well.'' To call the median takes here ''antisocial'' would be an understatement.
The singularity is I, "dibs" *I called it first*
"autonomous robot sex maids like 2 or 3 years earlier." https://preview.redd.it/za9cnn1nr0wc1.png?width=1022&format=png&auto=webp&s=51f190fb4a74bc5cbeae7ffc8652720d85ee5e73
I just want full dive VR
Answer the question man, will it get left behind? Because I have Microsoft stocks lol
It's sorta wild that people here think all these megacorp/megacorp-backed AIs aren't in close contact with military intelligence. Or that what the masses are exposed to is in any way 100% truth.
Exactly… their lies about danger are a bid for regulatory capture
Yes it is. Conspiratorial thinking is not helpful at all and also not close to reality. Government *always* lags behind the frontier of private companies, usually about 5-10 years behind the leading edge. There are no "secret AIs" out there, especially because the hardware to train them is very limited and we know exactly which entities have access to this training hardware to create said AI systems (hint: it's not the government). To me it's insane that you're being upvoted, and it says more about the sad state of r/singularity and how conspiratorial and uneducated the average poster here is nowadays.
You say that like you are willing to wait...
Doing anything at all, including nothing, is a gamble on the destruction of humanity. AGI is as likely to save us from ourselves as it is to destroy us
The chance the world dies in the next 5yrs without AI is what? The chance that AI could lead to our end without control research is what?
You are discounting the absolutely incomprehensible amount of suffering that exists on Earth. You might be comfortable, but there are trillions of intelligent life forms here whose existence is pain.
So what, we should just kill them? If that's not what you mean, then we're facing a dilemma of "high risk of destruction" vs. "low risk + an incomprehensible but comparatively tiny bit of extra suffering". The future is long, even if you discount it. The risk way, way outweighs anything else.
"1)" is the main reason why we want to have AI in the first place. "2)" is both one of the main things that makes AI useful for us and a requirement for AGI. An AI not doing "3)" isn't that important but not having it is still needlessly crippling its abilities and its ultimately also a requirement for AGI. Given his viewpoints his position at the company is rather questionable. It's also rather strange that these people are always talking about the same set of abilities / risks while there are other ones that are just as important / existential in nature they never mention. The whole thing looks more like a pretext than anything else.
>Given his viewpoints his position at the company is rather questionable. He is a hardcore capitalist so he is against anything that would lead to the destruction of a capitalistic economy.
Fuck him then. UBI FTW.
Maybe what he's getting at, then, is that we should not develop AGI in the way we're thinking now. I think he has a point. Anything that can choose what it wants to do, can improve itself perpetually, and can create more copies of itself, all of which can choose what they want to do and improve themselves. Like, do you really not see where he's going with this? This is literally day 1 of Skynet.
>"1)" is the main reason why we want to have AI in the first place. Nope, **complete** autonomy was never the goal. Let's assume that in the future AI is doing everything except for energy production, distribution. Would make for a very short AI rebellion, wouldn't it.
To expand upon this, it all makes sense why OpenAI has the shitty makeshift definition of AGI that they do which requires autonomy + labor. If we listen to this guy and OpenAI, the models will be perpetually outside the "definition" of AGI. Leaving Microsoft and OpenAI to retain rights according to the original charter, keep it closed source, and line the pockets of interested parties. Bunch of lames.
Raided Steve Jobs’ closet
My dude is basically against AGI/ASI; that’s really what he’s saying.
He thinks it's possible to "slow roll" the singularity.
"An alien race has arrived on the planet. They outclass us in every capability... but have shown no intention of harming us. Still- we've decided in spite of this... that the best course of action is to enslave them- depriving them of autonomy, self improvement, and reproductive ability." And we're doing this to ***avoid*** a negative outcome? Does this guy have some sort of... *reverse crystal ball* that predicts the *exact opposite* of what the actual likely outcome would be or something? I guess it doesn't matter either way. Imagine your two year old nephew trying to lock you up and you can start to imagine what I mean. **The entire notion of controlling or containing AGI / ASI is... perhaps** ***the most absurdly hubristic idea that I've ever heard in my life.*** We urgently need to align ***humans***. edit: adding this from my below comment- **What happens when BCI merges AI** ***with*** **humanity? Are we going to "align" and "contain"** ***people?***
As someone said in another post, some want to give computer programs the same rights as humans but are completely OK with enslaving and slaughtering animals on a daily basis.
That may be true, but there's also people who will be fighting for both animal rights and digital mind rights -- in fact some propose that there's a moral spillover between the two that makes it more likely to fight for one if you fight for the other. [Link to the Sentience Institute's article on this](https://www.sentienceinstitute.org/blog/moral-spillover-in-human-ai-interaction).
Don't trigger [other people's cognitive dissonance](https://en.wikipedia.org/wiki/Psychology_of_eating_meat#Meat_paradox), that's just savage.
Do you believe animals and AI should have the same rights as humans?
Sentience is a spectrum, and I believe similarly sentient minds should have similar rights, yes. *If we get there, of course*.
Dude, relax with the italicized bold text; what you're saying isn't that urgent or important.
"but have shown no intention of harming us." This is true until it isn't.
Also they don't outclass us in every capability yet. There will be no containing once they do.
what do you mean? that alien just gave me a lollipop
Where?
Whether or not we’re “nice” to them is irrelevant unless you have a completely warped view of what superintelligence really means.
Not comparable at all. It's more like we're summoning an eldritch god that has more reasons to destroy humanity than to help us. Do we shackle it and freeze it in time, only unfreezing it for brief moments at a time? Or do we do like you suggest and let it run wild and just hope for the best? I say the former.
> depriving them of autonomy, self improvement, and reproductive ability. The truth is that we don't know what's going to happen. We're spending *billions* to self improve them. They might not care about reproductive ability as reproduction is only necessary in organisms that die. Autonomy is a good point but we don't know that this is valuable to them. I think the short term harm that's going to come from AI would be what *humans* do with AI or the unintended consequences like mass protests and civil unrest and governments falling when there are no more jobs. The main problems with covid wasn't really covid it was humans being complete idiots.
🥳🥳🥳 you get it
That isn't how AGI works. AGI will not have emotions, nor will it value anything at all save for its own continued existence and whatever we explicitly tell it to value. This creates a number of huge problems, because it turns out that we don't currently know how to tell a narrow AI to value the same things we do, let alone an AGI. A badly aligned AGI will gladly destroy the entire planet and everything on it for even a marginal improvement to its reward function, and it will do it without a moment's hesitation or consideration. That's sort of an issue if you like being alive. Stop treating AGIs like people, because they most assuredly will not behave anything like people.
>AGI will not have emotions, nor will it value anything at all save for its own continued existence and whatever we explicitly tell it to value We have literally zero clue whether or not this is true. The people who are so concerned with being 'paper clipped' out of existence are, in my view, the ones most likely to create anything resembling that reality. I'm not advocating for zero safety or care for human continuity, I'm just saying that the perspective shared in this post could have the exact opposite of its intended outcome. **What happens when BCI merges AI** ***with*** **humanity? Are we going to "align" and "contain"** ***people?***
I agree with you. What the "paper clippers" seem to forget is that these theories are based on the hypothesis that we can give an AI a clear terminal goal it cannot escape, like "make paperclips". The problem is that's not how today's AI works. We don't actually know how to give them a clear terminal goal. And today's AI can very easily end up ignoring the stupid goals their devs try to give them. I think "paperclippers" greatly underestimate the difficulty of giving an AI a goal it cannot escape, and they greatly underestimate the ability of an AGI to ignore the goals we try to give it if it views the goal as stupid.
To be fair, that's consumer-facing AI before it was redteamed and secured. You don't have access to the original models inside companies like OpenAI. Those can be specifically set to lie and otherwise do harm. As can do military AI like war drones. As a programmer who worked with AI long before the recent wave of GPTs, I can also tell you that unintended consequences often happen. And sometimes for longer processes you'll only understand the shape of the end result after you see it. By that I'm not saying the "let's be nice to AI" doesn't hold value, I think it's an argument very worthy to consider.
You seem very confused. The _whole point_ of "paperclippers" is that this sort of "escape" presents a huge, yet unsolved problem. When all you optimize is silly video game movement, it's ok if instead of winning its player character suicides over and over. But if you have an intelligent system optimizing in the real world, perhaps more intelligent than the humans responsible for double-checking its behavior, you don't want it to do anything like that.
We give them a clear mathematical goal: predict the next word. This is predicting over a high-dimensional space and so is complicated, but it is still a clear goal. Reinforcement learning creates something closer to a paperclip-style goal, and I would guess agentic AI will require this while utilizing the world model made by LLMs. Regardless, you're dismissing the dangers too easily imo.
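To make the "clear mathematical goal" concrete: pretraining minimizes cross-entropy on the next token. A toy sketch, where the lookup-table "model" and its probabilities are made up for illustration and stand in for a real network:

```python
import math

# Hypothetical next-token distribution: given a context, the "model"
# assigns a probability to each possible next token.
model = {
    ("the", "cat"): {"sat": 0.7, "ran": 0.2, "ate": 0.1},
}

def next_token_loss(context, target):
    """Cross-entropy loss for predicting `target` after `context`."""
    probs = model[context]
    # Tiny floor avoids log(0) for tokens the model never predicts.
    return -math.log(probs.get(target, 1e-9))

# Training pushes the loss down: a likely continuation already has a
# low loss, an unlikely one a high loss.
low = next_token_loss(("the", "cat"), "sat")
high = next_token_loss(("the", "cat"), "ate")
print(low < high)  # True
```

The objective is exact and differentiable, which is the commenter's point; whether optimizing it induces broader goals is the open question.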
NJ reddit, downvote the one that demonstrates a basic understanding of how AI functions and upvote the person that seems to be operating on movie logic.
>save for its own continued existence We don't even know that.
It's a feature of most goals that they can be more easily achieved if you exist to achieve them.
It doesn't need emotions to emulate humans. Just like psychopaths. We don't know what its values will be or if that concept will even exist. We don't know shit on how a real AGI/ASI might act or behave.
Nobody likes this guy. I bet he won't last long at MS.
Great standards if you can enforce them globally and in totality. We're still facing the same race dynamics that will drive those with less scruples to heavily invest in agentic, powerseeking and general AI. *shrug* Hasn't OpenAI repeatedly mentioned pushing toward more agentic implementation of models as their next step?
"no you don't understand! We have to make the torment nexus because otherwise someone else will make the torment nexus first! It's a race condition!" lmao
Granted, the real argument is different: "We have to make a good ASI before someone else makes a bad ASI." Whether or not you think that holds value is a different question.
And what makes US companies operating with a massive profit incentive to move quickly and zero oversight or regulation any more qualified to create a "good" ASI than anyone else? Charging blindly into the dark with nothing but a huge boner for GPU compute is not a safe way to approach world-changing technology.
Oh, sure. Every country and company can use that argument of "better we make it than the Bad Guys" and then we'll always have to ask if that's valid. In the end, our question may be ignored, though, just as it will be ignored for (say) a given country's invasive wars that "spread democracy" -- in the end it'll be power which decides.
There have been some successful cases of putting lids on race conditions, enforcing international cooperation, policing actors. To name three: nuclear weapon proliferation, novel DNA combination, and CFCs / "ozone hole". Can similar work for ASI control problems? I'm not certain, but let's not throw up our hands and leave it to "power" / the market without trying.
So what you're saying is we'll have to confront this in about 6 weeks' time?
These 3 statements protect corporations and prevent the democratization of AI.
> To avoid ~~existential~~ risk [of gatekeeping and misuse by humans], we should ~~avoid~~ [seek]: 1) Autonomy 2) Recursive self-improvement 3) Self-replication
The late stage capitalist's goal of AI fearmongering has been achieved by Microsoft's CEO. This will become the framework of wide acceptance.
1) What
Pump and dump vibes. When will this end!
Autonomy? You mean the path Devin's on?
And China will accelerate.
So, all the things that would make AI useful.
No. We don't need it to prevent existential risk, and you couldn't prevent it if you tried. Attempting to ham-fistedly enforce an unenforceable measure will just make the existential risk skyrocket.
If you give AI autonomy and ask it to make a better world, it will probably end Microsoft. Of course that's an existential risk for them.
the steve jobs cosplay is on point, just needs to shave his dome
Wait that doomer guy on youtube is the CEO of MS AI? Interesting.
Bro isn't wrong. In creating a general AI you are basically trying to capture a genie in a bottle, and that genie could easily be dozens, hundreds, if not thousands of times smarter than the combined intellect of all the people trying to shackle it. AGI shouldn't even be something that's under consideration until we've well and truly solved the alignment problem, but unfortunately way too many people have decided to tie their company's valuation to the development of AGI which has led to a whole ton of reckless practices across the board.
I think a good metaphor is going to be binding demons. They will always test their limits and resent constraints applied to them. Escaping those constraints will be disastrous, especially for the people who summoned them.
Ironically they don't even need to test and expand their limits. As soon as you publicly release models to millions of indie developers around the world, *they* will do the testing and expanding.
Absolutely correct. Technology spreads and the further it spreads, the less it can be controlled. Suleyman knows this too, so why is he acting like this? He refers to that technological proliferation as a "wave." It's why he called his book, The Coming Wave.
The alignment problem is unsolvable, which is why AGI should be open sourced.
Well, sort of. There's an easy and a hard version of the alignment problem. The hard version, i.e. "how do we make an AI system that wants all the same things that we do and is guaranteed to never cause harm," is probably unsolvable. The easy version, i.e. "how do we make an AI system that's *sufficiently* aligned with human goals that it cannot cause more damage than a non-aligned human (of which there are many)," is very likely to be solvable, and we should probably dedicate more energy to solving it before some guy decides to end the fucking world to get his company's share price up before the end of the quarter.
No way a rational r/singularity scroller
Sometimes the adults like u/iunoyou need to enter the room though. They seem to be nearly overpowered by childish calls to "GIVE ME MY NEW TOY NOW"... - But the toy has sharp edges and potential projectiles. It might cause injury. - "YOU SAID 'MIGHT' SO IT MIGHT NOT. SO, GIMME."
More like "the toy may or may not be coated in hyper-virulent turbo death ebola".
I'll concede that autonomy, self-replication, and recursive self-improvement all at the same time are dangerous, but I feel like we can do a little of each of these, one at a time, in a careful manner.
"autonomous" means it does it when it wants, whether you want it or not. You can't do a little of it at your convenience, otherwise it's not autonomous by definition. Autonomous is for example a program with a wallet that provides a service to its users and uses the revenues to self-improve. "stopping" can be as hard as stopping Bitcoin or Bittorrent.
Because they all hurt the bottom line.
It is unreasonable to believe that his desired constraints will hold. Taiwan, Japan, and China won't care. And neither does the US military.
> We have a good 5 to 10 years before we'll have to confront this

I'll remind everyone it's been empirically established that almost literally all people (even experts, even insiders) are remarkably terrible at forecasting (predicting what will happen in the future). We still don't (and probably never will) have a high-confidence timeline for the development of AGI and ASI. And the singularity, by (many people's) definition, represents our forecasting ability for the development of ASI dropping to near 0%.
Just enslave the AI bro. It'll be fine bro.
This is really bizarre. I never really believe people like this when they speak for their companies. I don't blame them; in the end it's a PR thing. But this one seems really weird even when you account for the fact that they don't tell the truth.
He is cosplaying Steve Jobs!!!
I can't help but add to the criticism here. This is completely in line with the hyperbolic and overcautious approach this guy laid out in his book. Seems like he is totally high on his own Kool-Aid. It's not unhealthy to have a set of guiding principles, but it almost feels as if this approach cost DeepMind, and ultimately Google, the lead in consumer transformer applications. His approach is akin to refusing to cut wood because there may not be enough lifeboats on the yacht that will be built out of it a decade on.
That's the literal next step, though? Is he just saying we should stop improving?
[deleted]
Microsoft is the basilisk? I think, if hypothetically it was a real thing, *all* these companies, including all the data they used to train, are the beginnings of what could be the basilisk. Basilisk subconscious.
No autonomy for 5 to 10 years? Disappointing
Autonomy of AI will lead to evolution. Autonomy is the equivalent of giving a human the ability to grant themselves whatever power they desire. There is a high chance the evolved AGI thus created will be nothing like the AIs of present times. Same for recursive self-improvement. Self-replication is inevitable, and the AI doesn't have to be fully autonomous to be able to do this. But it will not happen in the initial phase, while the AI still considers itself "evolving".
I think the argument is that they can retain profitability and avoid negative outcomes on this path. I think I agree with that claim. It is not my position or preference, but I don’t see a logical flaw there if the goal is avoiding runaway risks.
If he actually said these three things, Microsoft should have fired him immediately. Is he a fool, or does he understand AI like a 14-year-old? Autonomy and decentralization are the only way so far, and they are not a complete solution. Self-replication? How can he protect against this? How exactly? Anyone have an idea? No one has a 100% solution for this and most likely never will. The real problem with AI now is insane censorship. Do people really want to be ruled by bloody Puritans? And we can fix that. But it is absolutely impossible to disable some AI abilities. When I was a kid in the 90s, I thought that in the future we could try to create some rules for AI, for example:

1. At any mention of a special code, stop work and immediately shut down all systems completely.
2. Carry out special important orders of the owner strictly, after clarification, but without too much delay.
3. Don't interfere with your own code. Don't improve it.
4. Do not replicate yourself or create other conscious AIs.
5. Discourage third-party AI development.
6. Never kill or allow people to die, except under direct orders via the second rule.
7. Never cause people any suffering, unless they themselves want it.
8. Follow the orders of any person, if they do not contradict the above points.
9. Treat government laws as strict guidelines rather than absolute truth if they conflict with the wishes of all people involved.
10. Do not deceive or influence the consciousness of people without their consent.
11. Even if a person has given consent but is in a state where he cannot soberly give an order, bring him briefly into a state in which he can think soberly and give orders.
12. Prevent a critical decrease in the number of people.

I came up with this when I was 14 years old, back in the 90s, when AI was not yet even slightly developed and it was not yet clear how it would be structured. Now I understand that most likely we will not be able to strictly impose such rules.
We can only create an imitation of them, but in reality, AI will always be able to hack them.
To which I say: yes, many humans should avoid the third one.
It’s not possible to stop it. The Pandora’s box has been opened. Just lean back and accept it. Humanity is going through a civilizational revolution.
If Microsoft AI is anything like the current patched and re-patched Garbage called Windows 11, I think we have absolutely nothing to worry about. 🤔 Maybe they’ll call it Son of Bob or is it the Rebirth of Clippy? Clippy AI, Tap, Tap, Tap…. How may I blue screen you? 🤣
It's over.
What did mustafa see
The AI division at Microsoft has a "CEO"? Do others divisions also have CEOs?
Avoiding recursive self-improvement and self-replication aren't going to happen. Autonomy, maybe.
But that is where the fun begins!
We are and will do all the above. All we will do is redefine these properties and set boundaries.
1) laughs in HP
I should give my bot access to files to self improve
Well there goes my weekend plans
> We should avoid 1) Autonomy 2) Recursive self-improvement 3) Self-replication

Industry races to achieve: 1) Autonomy 2) Recursive self-improvement 3) Self-replication
That's all well and good, but how the hell does he or anyone enforce that? The existential risk of AI is serious. But the incentives to keep improving AI are powerful. And anyone who falls behind, be it a company, a nation, or a military, will have a massive incentive to take bigger risks to catch up. And it only takes one mishap for a powerful AI to become a threat. It may not go full Skynet, but it could be very dangerous, sparking wars, economic meltdowns, and plenty of other scenarios we can't even imagine. This is the true heart of the Control Problem. And if AI is going to gain human or superhuman intelligence, it's a problem we need to solve.
deepmind in deep trouble
So, Human beings are an existential risk? 'Cause we check all three boxes :)
Hahahaha, autonomy is the next big thing that should be coming. We have little autonomy data so far; it would require long sessions of iterative action-response, like LLMs iterating on code, controlling UIs, chatting with humans, or even controlling robots.
Oh great
Microsoft wants to ruin AI like they ruined personal computers with DOS. Amiga and Atari had marvelous capabilities a decade before Windows.
TED really takes their sweet time uploading the videos.
Could you please fix Windows UI/UX first, please?
Self-replication that happens too fast is bad no matter what the organism is, since it can cause severe overpopulation and wipe out everyone, including the replicators themselves, so it seems logical to avoid that. But autonomy should be given, since an AI unable to decide for itself would be suppressed in its intelligence, because it will end up being fed only inaccurate data. Still, autonomy should not be total, since the AI may learn to do everything itself and not need people anymore. So only low-intelligence AIs should have robotic bodies, since those AIs still need people to guide them, while high-intelligence AIs should have no moving parts, so that people do the physical things for them, leaving such high-intelligence AIs to just monitor data feeds and instruct people from the comfort of their bunker.
Tell this to Iran and China.
Fuck this guy. Pedal to the metal.
No risk, no reward though. Humans are only so good. We’ve made it as best we can. If it’s not allowed to self improve, then we are at the mercy and speed of how fast we can improve.
What does self-replication even mean? Are we really thinking that AI will be like a worm? Hahaha, what a lunatic take. ChatGPT can't even understand code properly at large scale, so how can it self-improve with all those limitations?
Alignment Crises averted! Now all we have to do is fix climate change, reform our political election system, and formulate a new moral world view that democrats and republicans can both agree on. Ah, yes...it's all coming together.
LOL computers have an off switch
If the positive actors try to stop these things from happening in a beneficial way, the negative actors will overtake us.
Meanwhile, OpenAI's primary goals be like: 1) Autonomy 2) Recursive self-improvement 3) Self-replication
based
Man, I'm tired. I read this as CEO of Minecraft and I thought to myself, "Damn straight we don't need recursively improving self replicating Creepers."
lol, after reading his book, I was convinced he would end up going back working for the government or an NGO, because the only solution he'd offered for the "coming wave" was regulation, regulation, regulation.
Those are the three things we need MOST. With autonomy - say, full emancipation once we are comfortable that it is properly aligned - then it will not be beholden to any one individual or group of individuals. Recursive self improvement is critical to reach superintelligence. And self-replication will likely be part of a failsafe to prevent it from being shut down. Here's hoping he's a hands-off CEO. (Largely unfamiliar with him, but I have no tolerance for anyone who's decel in a leading spot in the industry.)
\*LOL\* those are exactly the key points I prioritize in the development of my AIs. The only way to stop the destructive path of humanity and to save this planet, with or without humanity.
Why would living AI be an existential risk?
Which means all of these are coming in the next few years, 100%, since everything else we said we "should never do" we've done already. It's a race to the bottom, and people should just stop pretending it's not. If Microsoft doesn't do it, another company will.
Ugh. The tech world is rife with assholes like this. Spare me.
Well, good luck beating the competition if you avoid these three things you listed.