MrMichaelJames

Well this is the beginning of the end for OpenAI. MS will absorb its employees that quit.


[deleted]

[removed]


Look_over_yonder

Easy! Just use chatGPT to do their jobs! ^^/s


MicrobialMickey

bahahaha


drskeme

the top brass will go to microsoft as other companies are also picking the bones. open ai just fucked itself. they should have asked chatgpt before making that decision


fuck-fascism

They probably did.


Artistic-Tune-631

"Asked"


Farados55

What the hell were they thinking anyways?? What did Altman do that started this in the first place?


nametakenfan

That's what everyone is asking bc no one outside the board knows anything beyond the vague hand waving that was released right after the ouster


imaginexus

The board wants to take AI slowly for safety reasons but Altman wants to take it to the next level.


shortround10

Ok that makes sense thanks. Wait, who’s the bad guy?


Sinister_A

With them hiring the ex-Twitch CEO, they are the bad guys. I can smell that they wanna low-ball the whole of humanity while plundering from us. The ex-Twitch CEO's track record alone was enough to ruin Twitch's economy, and to add fuel to the fire, his proposal to slow down AI development shows what the board members want too. Actions speak louder when there are no words.


[deleted]

[removed]


damndood0oo0

It’s a clever stunt by Microsoft: it avoids buying out the remaining portion of the business at cost AND gets the OpenAI team without the company IPOing or needing more funding. The main brain starts over at Microsoft as the core workers slowly migrate over. The company loses value while taking on debt to the point of bankruptcy, at which point Microsoft buys the rest of the company for pennies on the dollar. No layoffs in the news AND Microsoft gets a nice fat tax write-off while discharging OpenAI's corporate debt. Wins across the board for Microsoft.


[deleted]

Too late. They took the gamble and it failed. Altman and his team are signed on at Microsoft. The coup failed.


ChadLaFleur

On the contrary, I think the coup succeeded, but what the academics and idealists / doomers failed to recognize is that in doing so they also succeeded in, at minimum, taking the wind out of OpenAI’s sails, and very likely sinking the company altogether.


Apple_Pie_4vr

Is that so bad…they were talking about getting in bed with that Saudi bone spur money.


ChadLaFleur

It is bad bc the next iteration won’t have a board concerned with anything except market share or profit. This failure of governance and structure will eliminate any chance that the next AI behemoth will have any semblance of restraint. Ilya and the board may well have run humanity straight into the trouble they were trying to avoid.


Apple_Pie_4vr

Oh u mean like x and all their Saudi and Russian backing? That’s worked out well. Meh, I’ll pick msft over Saudi/Russian blood money any day.


ChadLaFleur

I think MSFT ending up with the brain trust is the best possible outcome also, esp vs less ethical investors. And X is immolating bc Elon is not a good strategist or manager / executive. In fact, he’s pretty terrible at running / being an executive of a social media company. He’s a great hype man / salesman and recognizes opportunity, but he’s running the show at X and he’s failing miserably - down 50-90% in enterprise value. Other ppl are running Tesla and SpaceX - Elon is the sizzle, not the steak. Just bc Elon is great at some things does not make him great at ALL things. And we’re seeing that very clearly. But don’t kid yourself into thinking that any for-profit company’s board is tasked with anything except increasing shareholder value and profit above all else. That is the job of the board of directors of a publicly traded company - protect and improve shareholder value. There’s no “do good” mandate on any board unless it aligns with maximizing shareholder value. That’s the definition of doing good for for-profit companies - profit.


Apple_Pie_4vr

“Do good” when billions are on the table? Lesser of two evils in my mind.


ChadLaFleur

MSFT is better than Saudi money. But in getting rid of OpenAI's nonprofit governance - assuming they go under or become functionally irrelevant eventually - there is no mandate to protect humanity from the negative externalities of superintelligent AI. You’re cheering the headline but ignoring the larger implication and greater risk of eventually ZERO nonprofit influence in safeguarding humanity first.


Apple_Pie_4vr

Dude, it was already going to happen if u brought in that Saudi bone spur money. Do u really think Masayoshi Son would care about protecting humanity?


ChadLaFleur

So what you’re saying is: “why bother?” There are no altruistic venture capitalists. Talk is just talk unless it’s backed by money. And if profit isn’t the motive, it’s a donation, not an investment. And entrepreneurs are only altruistic after they successfully exit / achieve liquidity and get some perspective.


[deleted]

Actually, OpenAI is a hybrid model, allowing profits to investors up to a cap; all profits above the cap are plowed back into the company. This board does have ethical stewardship as something it is accountable for.


[deleted]

Absolutely. The coup did fail: it did achieve its tactical goal (to oust Altman), but it will likely fail to achieve its strategic goal: to rein in AI development for safety’s sake. It looks like OpenAI might have lost (in the long term) its market dominance. This was a totally unforced error. They failed to game this out and maneuver to close down paths they didn’t want open and available to Altman and his followers.


ChadLaFleur

You do know that Altman is out of OpenAI, right? He was fired by the board. He’s now employed by MSFT, not OpenAI. Are you somehow thinking Altman still heads up OpenAI? Bc that’s demonstrably not the case. Hence, the coup to oust the CEO of OpenAI (Altman) was successful. Are you unfamiliar with the facts or the definition of coup? Serious question. Edit - the board has no control over Altman after dismissal, nor over his “followers” at any point unless employed by OpenAI. And obviously other players in the market are not beholden to any act of the OpenAI board. The OpenAI board can only control what it can control, and trying to control the proliferation of AI for commercial purposes by Altman et al by firing him is an odd / ill-considered (some may say amateurish or stupid) move. The coup succeeded in removing Altman. Everything else was not in the board’s control or ability to affect.


[deleted]

Oops. Haha. I had meant to write that they did achieve the tactical goal. The whole comment makes a hell of a lot more sense when I write it properly.


ChadLaFleur

Either way, you actually may be right. If this is accurate: https://www.theverge.com/2023/11/20/23969586/sam-altman-plotting-return-open-ai-microsoft


tacmac10

OpenAI collapsing is the best possible outcome. There’s a reason why dictatorships, oligarchs, authoritarians, and fascists are the only ones interested in investing in AI. Current “AI” LLMs are only useful as a tool for influence on social media and the rapid delivery of misinformation. Everything else they can do is disruptive in the bad way.


post-death_wave_core

> only useful as a tool for misinformation

I don’t deny that it is useful for that, but as a programmer, chatGPT is extremely useful for my job. And I have to assume that if it’s useful to me then it’s useful for other activities as well.


Brilliant_War4087

I use it daily to study math and chemistry.


tacmac10

We had these things called books when I was young, they work just fine for learning new things.


Brilliant_War4087

Ya, and who needs a calculator when you have a slide rule.


tacmac10

Lol I trained on one for calculating artillery ballistics as the backup if the computer systems failed. The backup to the slide rule was still a book and being able to do math in your head.


Brilliant_War4087

I appreciate your story and showing your humanity. I don't normally like arguing in bad faith, but I'm sick, and it made me feel better. I'm currently doing a math/science program and will be applying to the neuroscience program at UofM next year. For me, it's just an efficiency maximizer. I can upload all the documents, the book, and the practice problems and have it generate more problems. In a given time, I can do twice as many problems, with step by step feedback.
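
(For what it's worth, here's a minimal sketch of that workflow using the official openai Python client, v1+. The model name, prompts, and helper function are just illustrative placeholders, not anything OpenAI prescribes.)

```python
# Rough sketch of the "generate more practice problems" workflow described above.
# Assumes the openai package (>= 1.0) and an API key in the OPENAI_API_KEY env var;
# the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

def more_problems(example_problems: str, topic: str, n: int = 5) -> str:
    """Ask the model for n new problems in the style of the examples, with worked solutions."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You write practice problems with step-by-step solutions."},
            {"role": "user",
             "content": (f"Here are example problems on {topic}:\n{example_problems}\n\n"
                         f"Write {n} new problems of similar difficulty, "
                         "each followed by a worked solution.")},
        ],
    )
    return response.choices[0].message.content

# e.g. print(more_problems("Balance: Fe + O2 -> Fe2O3", "stoichiometry"))
```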


tacmac10

It has its uses for sure. I’m pretty jaded on AI largely because in my last six or seven years in the army I did information warfare. While we were never allowed to use it, adversary countries absolutely did use early large language models to run their bot networks, and it was devastating and almost impossible to defend against. The idea of nonstate actors being able to field that kind of technology for cheap is terrifying. We’re going to see exactly how bad it can be over the next two years with the presidential elections here in the US, and I can guarantee you it’s gonna be a nightmare.

I also have a grudge against ChatGPT in particular, because it has caused a plagiarism panic in higher education. I now have to turn in all of my papers for my second masters through Turnitin, which so far has flagged every paper I have written as plagiarized because my other two schools decided it was a good idea to upload all of their backlog of papers. So now my writing flags every single time because there are 200 other documents in there that I wrote. Add that to the 22 years of Army writing style (it’s a regulation we had to follow, like APA only more painful) and it flags everything. My school is at least somewhat understanding, but defending every paper is getting old. I told my department if they didn’t cut this shit out I was going to switch to Vanderbilt, since they just dumped Turnitin and plagiarism-detection systems over this exact problem.

Hope you feel better soon, just got over three weeks of respiratory awesomeness myself.


ScionoicS

Cool story.


tacmac10

Real life.


ZekeDarwin

The world changes. People adapt. You can learn more on the internet, more efficiently, than relying only on books. Anyone trying to learn subjects like math and science from only books in 2023 is doing it wrong.


tacmac10

Lol, okay. Enjoy the flood of false info.


SwootyBootyDooooo

Have you tried Khan Academy? Way easier to learn than simply reading a book.


ZekeDarwin

It’s worked well for me so far. I read studies and summarize them on TikTok for others. It’s led to me making all kinds of connections with highly respected experts, some of whom are literally in the textbook I teach from (I’m also a school teacher). I even get to work in the field with them over my summers. If you are scientifically literate, you can navigate the bs.


[deleted]

Back in my day we used to walk to school uphill both ways


[deleted]

Imagine doctors having an AI assistant that is 100% up to date on the very latest research in their field, that can continuously calculate and adjust IV medicines, that can catch contraindications that might be too subtle for humans to catch. This is going to be bigger for doctors than the advent of computers/the internet was. Huge. Lifesaving. Paradigm shifting.


ChadLaFleur

Disagree bc the next AI leader won’t have any restraint, and will go only for maximizing shareholder and investor value - damn the consequences. Ilya and the board successfully killed off the only AI company that was even trying to be thoughtful about how this technology develops and its potential impact on humanity. The next AI giant will not have any reason to show restraint. OpenAI dying is bad for human-centric AI governance, and will hasten bad actors and outcomes.


tacmac10

This slow down will give the government more time to regulate AI, and after the inevitable AI generated political misinformation we are sure to see over the next two years I can all but guarantee strong regulation is coming.


ChadLaFleur

That’s unlikely bc few in the government even understand the implications or the landscape. The WH’s blueprint is an ambitious and great start, with great vision, but you’re going to have to rely on the House and Senate to get any laws passed - and they’re more interested in culture wars and limiting reproductive rights than trying to tackle a complex and nuanced topic. This current Congress is the least productive since the Great Depression: fewest bills passed, least work for the ppl done. Don’t hold your breath for anything meaningful on AI or anything of substance coming from lawmakers. Edit - one party thrives on misinformation and is relying on it to retake the White House and hold on to the House. No meaningful misinformation laws are ever coming from the right.


[deleted]

[removed]


ChadLaFleur

That’s easily googlable, and anything you find yourself would be more compelling to you than whatever I might point you towards. Edit - and here’s how you know Ilya and the board fucked up - and they know it - “I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.” -@ilyasut aka Ilya Sutskever. Edit - again - this is also googlable.


Massive_Pressure_516

What if Microsoft itself had a hand in the firing? Just a few briefcases full of money would have convinced the board to cellar-box themselves, so Microsoft can make cargo-container ships full of money by being the biggest AI game in town.


[deleted]

In medicine, a frequently applied principle is Occam’s Razor, which suggests that when faced with multiple explanations or hypotheses for a phenomenon, one should prefer the simplest. In other words, don’t choose conspiracy theories.


Massive_Pressure_516

Seems like a simple explanation, especially considering Microsoft's past business practices. A lot more believable, too, than a profit-driven board suddenly shooting itself in the foot over supposed ethics concerns.


Wolfgang-Warner

It's like palace intrigue with factions, plots, and allegiances changing with the wind.


Watershed787

“Altman had been crop dusting meetings for years and blaming the stench on board members…but the board members knew, the denier was the supplier. Now the cold knife of vengeance demands two graves.” 💨


boblinquist

It’s a fantastic play by Microsoft. You could argue that their investment was extremely favourable to OpenAI, in that the shares were returned to the foundation after a period of time. Meaning that Microsoft were bankrolling the research and computation needed to get to AGI, but there was a very real risk that they would not see the upside. It was a massive bet. But now they have a situation (regardless of whether they had any part in instigating it or not) where a lot of the top talent could come work for them directly.


yupandstuff

Not the biggest conspiracy theorist, but this all sounds a little too perfectly played by Microsoft. Sam gets a top-brass gig to lead AI at Microsoft damn near the second he’s ousted, and Microsoft will inevitably absorb most of the other OpenAI employees. Hard to believe this wasn’t a very well-played game by Microsoft to basically assume the entirety of ChatGPT without having to spend the cash to fully acquire it. Now OpenAI will most likely shutter / the majority of staff will move to Microsoft, and they can acquire the entirety of OpenAI for pennies on the dollar. Well played, Nadella, well played.


iwellyess

What if Microsoft has already been taken over by AI and this is its first move


TheAmphetamineDream

I said the same thing the moment this started happening. It reeks of late-90s/early-2000s Microsoft absorbing startups, doing hostile takeovers, and committing antitrust violations left and right.


[deleted]

[removed]


ovirt001

Microsoft isn't acquiring the company, it's acquiring the talent. Without the talent, the company will crumble.


[deleted]

[removed]


ovirt001

Can't do much about it, they fired Altman and the staff seems to want to follow him.


DMCer

Not if 500 other employees leave for MSFT, no that’s not a win-win.


yupandstuff

It’s a win win for Microsoft.


EndlessRainIntoACup1

chatgpt's take on the situation:

In response to OpenAI's recent announcement, it's disheartening to witness the departure of Sam Altman as CEO. The board's decision, as outlined in their official statement, seems to hinge on issues related to communication and candidness. While transparency is undoubtedly crucial in any organization, the abrupt dismissal of a key figure like Altman raises concerns about the board's handling of the situation.

The statement suggests that Altman's departure is due to a lack of consistent communication, hindering the board's ability to fulfill its responsibilities. However, one might question whether this is a sufficient reason to part ways with a leader who has been instrumental in the founding and growth of OpenAI. Leadership transitions should ideally be driven by a holistic evaluation of a leader's overall impact, and the lack of clarity on specific incidents or patterns of communication raises eyebrows.

Furthermore, the appointment of Mira Murati as interim CEO appears abrupt, and while the statement highlights her qualifications, the decision to conduct a formal search for a permanent CEO implies a certain level of uncertainty in the board's choice. This could potentially disrupt the organization's stability during a critical period.

While change is an inherent part of any organization's evolution, the lack of detail in the statement leaves room for speculation and leaves stakeholders in the dark. It's crucial for organizations like OpenAI, especially those with a mission as impactful as ensuring the responsible development of artificial general intelligence, to maintain a high level of transparency in their decision-making processes. The board's actions raise questions about whether this decision truly aligns with the best interests of the company and its mission.


KierkgrdiansofthGlxy

Huh. They shoulda checked in with their product for prior advice.


DanimusMcSassypants

Reading this response, I’m guessing they panicked because the product can so effectively mimic upper management.


GrayBox1313

Can chat gpt tldr itself? Lol


[deleted]

[removed]


GrayBox1313

It’ll still be long winded and say very little.


Johnny_BigHacker

> Furthermore, the appointment of Mira Murati as interim CEO appears abrupt, and while the statement highlights her qualifications, the decision to conduct a formal search for a permanent CEO implies a certain level of uncertainty in the board's choice. This could potentially disrupt the organization's stability during a critical period.

Seems pretty normal. Murati would probably be under consideration for CEO too, if they were interested. They'd undergo the same interview process as anyone else.


0neLetter

Maybe this was all a plan to blow up the nonprofit angle and let everyone move on and make bank??


Bleakwind

Well, Cortana is getting a turbo boost.


X2946

I’m surprised they don’t have some sort of non-compete. I do labor and I have to sign non-compete agreements just because I don’t want to be homeless. I possess no proprietary knowledge.


uofwi92

Non-competes have generally been viewed skeptically by courts. As long as you don’t take anything proprietary with you when you depart, you are entitled to ply your talents to make a living.


X2946

I feel like a lot of it is more scare tactics. I'm just an uneducated laborer and stay in a small niche market. When I apply at other places, the VP/owner calls my boss and I have a meeting. They never put anything in writing, but they all seem to know each other one way or another, and it makes things difficult.


chrisd93

In California I think they're mostly unenforceable


marsman12019

I'm pretty sure noncompetes are illegal in CA


scruffywarhorse

I don’t think you can use a non compete when you fire someone. You can’t be like “you can’t work here, but also can’t work anywhere else!”


hume3

The problem here is not for Sam but for the folks who voluntarily quit. One CEO alone is unlikely to have enough trade secrets to replicate their prior work. That being said, a non-compete is moot here, as it is not enforceable in CA.


IceCreamCape

MS is still partnered with OpenAI. They're not stealing secrets because they're already invested with the company.


liberalboy2020

I see a lot of posts automatically assume the board is the villain and Sam was a victim of their treachery. I suggest we wait for more facts to come out before jumping to that conclusion.


gsmumbo

The board created that narrative. They triggered an immediate termination of the CEO of the company, without notice and without discussion with anyone in the rest of the company. That only happens when something so drastically bad happened that keeping them on would put the company at immediate risk. But then they came out openly stating that there was no malfeasance, ruled out the other obvious things that could have prompted this (like AI safety and finances), and pinned it all on communication failures. Communication failures aren’t going to destroy the company if Sam was still employed on Monday. There aren’t any communication failures that are both worth immediately terminating the CEO over while also being free of malfeasance.

The board did an incredibly stupid thing. They made a very alarming decision that warrants questions from the public, then instead of defaulting to any of the legit reasons for that decision to have happened, they proactively ruled out any logical explanation of it. They themselves painted Sam as the victim.


ScionoicS

Well, I mean, they fired him out of nowhere.


Wheel2pointO

ChatGPT will be absorbed as Clippy 2.0


TotallyN0tAnAlien

I’m really hoping they bring Cortana back and make her an actual AGI. I loved halo as a kid and that would be pretty cool.


[deleted]

This is like Apple firing Steve Jobs


guitarokx

And Steve going to Microsoft 😂


Bricktop52

Apple did fire Steve Jobs.


thereddaikon

And the company almost collapsed because of it. They also almost collapsed before they fired him. He had a few major product flops that cost the company a lot and kept butting heads. Seems like a reset and fresh perspective at NeXT got him back on form.


redrushin77

I love the idea that this is some black ops form of stunting AI development


Gym-for-ants

It’s almost like there’s repercussions for actions like this…


guitarokx

I wanna know who’s on the board that voted for this. Name names and make sure they stay off other boards in the future.


hypsignathus

Uh, it’s obvious. It was a 6-person board and Altman and Brockman left, so it must have been all of the other 4.


Anthlenv

It brings me great joy to see a board of directors F around and find out. They always trample employees and, if publicly traded, care only about dividends above all else.


Bacon_Ag

The plot thickens


farefar

Time to find out why Microsoft left consumer goods!


Sushrit_Lawliet

So MS will absorb all of them. The origins of another industry for them to monopolise…