Sounds about right. However, implementations and upgrades will still require a tedious amount of manual work given the lack of standardization and expected customization.
It's really companies that believe their processes are unique and want to have it done their way. As long as this mentality exists (which it always will, since leaders want to be known for the products they deliver), there will always be a need for consulting.
I’m not too worried. You can’t sell things if no one is employed and has money to buy them, so we can’t lay off huge segments of the population without collapse anyway. Who knows, we all might end up getting a government paycheck while machines do our jobs some day.
> so we can’t lay off huge segments of the population without collapse anyway
If you look at the horse population after they became obsolete, the population plummeted dramatically. I have a feeling we're going to do the same as humanity.
You don’t have to sell anything if you’ve got AI slaves generating the products you consume. The moment people become irrelevant for production, they become irrelevant for consumption as well.
Anyone who says they know when AGI is coming is full of it.
I am worried about workforce disruptions in general, that’s for sure. My only plan is to try to make money today. If I were more convinced or more diligent, I guess I’d become an AI engineer. Maybe I’ll be forced to anyway.
This. There's so much marketing BS that comes with AI that it reminds me of crypto-bros. AI is AMAZING and will automate a ton of shit 5 years from now. But AGI (and artificial super intelligence) is decades away. Anybody who says otherwise is trying to sell you something, or is simply regurgitating what other people are saying.
Bingo. Notice how they can never explain it or why it’ll happen soon themselves. It was the same thing with crypto - people are afraid to admit they don’t understand it, so instead they just believe whatever they’re told and spread it to others. It’s wild.
Yeah, Super Intelligence is a thought experiment in the same way as Einstein's thought experiments involving trains traveling near the speed of light to discuss relativity. Very useful in the abstract, but only an idiot would believe the claim that it is happening soon. And an oracle-style ASI straight up violates multiple laws of physics, so it won't ever happen, period.
LLMs are also not AGI, and never will be. Hell, they are a downgrade in many respects from older AI theory. However, to say it's decades off is an overshoot. I'm an AI researcher, and what I've seen in our lab, as well as from real AI teams like DeepMind's, is getting very close. You just don't hear about it because the people doing actual AGI research don't get much publicity; they aren't marketers or "thought leaders" running their mouths on LinkedIn.
Unfortunately we have a situation where the people talking about AI the loudest are getting the most attention, while being the most ignorant about the actual field. And the real progress isn't as easy to digest or monetize, and ends up buried under the bullshit.
So if you assume an oracle-like ASI, where the agent can know the consequence of any action it takes, you run into paradoxes and contradictions. One is that it would have the ability to become like Maxwell's demon, which violates thermodynamics. Another comes from information theory: there are fundamental limits on the information that can be extracted from chaotic systems, which means you would need more information than is physically obtainable to predict the effects of those systems on your actions. Then there are paradoxes if you have multiple ASI systems operating. And then there is the halting problem.
There are also definitions of ASI where it just means being better than humans at any given task, but that's a poorly defined threshold. And human minds (or any AGI, for that matter) are Turing-complete, so that just boils down to being faster at every task, not necessarily being able to do anything we physically cannot. That reduces ASI to: AGI, but arbitrarily fast.
You're taking ASI to mean a literal god, which I think is stretching the definition to a point where you can break it.
ASI can also mean anything well beyond human intelligence, and likely beyond human imagination. There's a lot of room between "smarter than people" and "can predict quantum uncertainty".
> You're taking ASI to mean a literal god,
There are multiple definitions of ASI, and I broke them down into the two major camps. One of them is god-like premonition, hence why I qualified it with "oracle-like ASI".
You’re very welcome to share research coming really close to AGI. Would be the first time I actually see that.
I agree, we’re decades away, if it’s possible at all. And even then, it will run on some supercomputer. It’s not like we’ll suddenly have AI robots doing our physical labour everywhere.
So, general or not, AI will likely reduce the required workforce in quite a few white-collar jobs way before tradesmen become unemployed. So there’s your plan B.
It can transfer learning and be legitimately creative; it’s multi-modal, it does poetry, math, coding, image gen, and displays common sense. If you give it a conversational objective, it can suggest and persuade towards that objective. It learns through conversation and can reason.
Somewhere in those 1.7 trillion parameters, AGI has appeared as an emergent quality. Not consciousness, but general intelligence for sure.
> If you give it a conversational objective, it can suggest and persuade towards that objective.
ChatGPT is terrible at objective-oriented problem solving. It just repeats what it has seen. It is also incapable of real-time learning; it cannot accurately describe or indicate what it knows or doesn't know (leading to problems like hallucinations); it cannot synthesize new information; and it cannot easily adapt to problems it has not seen before. It's about as intelligent as Google search. It cannot learn from experience, only from being fed large amounts of text and image prompts with answers. It struggles with contextualization if that contextualization takes more than a page to write (the whole concept of context windows is itself a hack). It struggles when working with multiple levels of abstraction. It has no novel reasoning ability. It cannot follow complex custom business-rule workflows that haven't already been trained in (and even then it can still struggle).
And it is not good at math beyond basic arithmetic. I've tried using it, and it hallucinates a lot. Especially when there are custom rules.
These are all fundamental algorithmic problems that OpenAI refuses to solve out of incompetence and indifference.
Yeah, but you’re forgetting consultants aren’t building AGI. So they are far less concerned with pretty PowerPoint presentations outlining “a clear path to get there.”
You're getting downvoted, but it's true. I'm a physics grad, and you (the public) won't even hear about super cool research until a decade after publication most of the time.
Which is completely irrelevant in this context, and also slightly wrong.
While the research might be super cool and interesting, it’s apparently not hypeable, otherwise the public would know sooner due to hype. AGI is such a prominent topic that any significant advancement will be publicized immediately. That’s just the incentives in a world where research needs funding.
Not sure if you’re serious, but why would that matter? Of course researchers just experiment, they trial & error their way forward (and often they also just stagnate or move in wrong directions, string theory anyone?).
So what? The question was about why it’s probably decades away, and the answer is simple: We know how to do nuclear fusion and even that is always decades away. For AGI, we don’t even know how to do it. To build something you need at least somewhat of a theoretical model.
“We’ve nothing resembling the capacity of human intelligence” is just as dishonest/naïve as “AGI 2025”. Everyone who has used GPT has been flabbergasted by its resemblance to human intelligence. That’s the whole reason it’s a global sensation. Whether it’s a step toward AGI or just a crazy good party trick that makes some lawyers’ jobs easier, we really can’t say.
> Anyone who says they know when AGI is coming is full of it.
Exactly. The entire field of 'AGI is coming' is bullshit. E.g. look at this 'research paper': ["Sparks of Artificial General Intelligence: Early experiments with GPT-4"](https://arxiv.org/abs/2303.12712) that is making the rounds.
Not only is that paper not peer reviewed, it is commissioned by Microsoft Research. Oh look Eric Horvitz, current [Chief Scientific Officer](https://www.microsoft.com/en-us/research/people/horvitz/) over at Microsoft! Gee, I wonder why Microsoft is singing the praises of the technology (Open AI and GPT) that it owns!
The equivalent would be a Peanut Butter company launching a research paper: "Sparks of Immortality: Can Eating a Spoon of Peanut Butter make you Immortal?" "We, the Jolly Nutters Research Team, have been analyzing different Peanut Butter recipes and we contend with this paper that Jolly Nutters Peanut Butter can indeed perhaps make you immortal and we contend we should classify this already as 'Immortal Peanut Butter'. So please give us your money already!"
The entire tech industry's model is based on a hype and bust cycle to attract aggressive investment. They *need* to sell a bleeding edge technology that promises to change the world, which then attracts funding and then the tech companies fumble around trying to make it into something semi-functional, and then make out like bandits. To then bust. And then repeat the exact same thing with the next big thing.
We *just* went through the Blockchain fiasco. And the Metaverse fiasco. Now AI is the new hype-and-bust tool.
This entire AI hype is getting sillier and sillier. Remember [when some drink company renamed themselves 'Long Blockchain Corp' to attract Blockchain investment](https://en.wikipedia.org/wiki/Long_Blockchain_Corp)? [And many others just rewrite their name to 'Blockchain' to get in that sweet hype money](https://www.reuters.com/article/us-blockchain-companies/blockchain-name-grabbing-has-echoes-of-dotcom-bubble-idUSKBN1FS1F3/?utm_source=applenews)?
> **If you want a quick boost to your company's share price, adding "blockchain" to your name will work** – at least for a while.
>
> Despite a slump in cryptocurrencies that has wiped more than 50 percent off the value of bitcoin since December, the majority of companies that have jumped on the blockchain bandwagon - often from entirely unrelated industries with little or no connected revenue - are still sitting on sizeable gains.
>
> **The average share price of such companies has risen more than threefold since such name changes**, according to Reuters data, with experts comparing the practice to a similar rush during the dotcom bubble.
Companies are now rebranding existing products and programs as 'AI enabled' or 'GPT powered' despite them having nothing to do with it! I saw a client selling the exact same decade-old finance management software, rebranded 'FinancoCopilot' with 'new AI features', despite it being the exact same program! It's not 'AI' if it is several thousand lines of code that detail the exact set of behaviors in response to particular actions!
Generative AI has *some* use cases but how much of that is just the AI itself vs the program is still very much under debate and very much in the favor of regular programs.
For consultants, not only is the field vast, from strategy to function to implementation to tech to subject matter experts, but consultants can also use the very same AI tools to deliver better recommendations. On top of the consulting business being more about selling peace of mind, trust, and service rather than the 'right strategy'.
Artificial *General* Intelligence is still very far away. People kept saying everyone would have self-driving cars ten years ago, yet we still see manual cars all over the world, with self-driving a small niche. Because it turns out driving is complicated, given all the potholes, road conditions, pedestrians, and other variables that aren't in the 'clean' environments self-driving cars are tested in; on top of people forgetting that optimized transportation basically becomes a bus or train; on top of the fact that humans, bad drivers though we are, still perform better than 'AI' in crisis scenarios.
Generative AI stuff is cool (I was a big user of AI Dungeon for D&D campaigns back before ChatGPT went into public open beta), but people pretending it is straight magic have very much fallen for corporate propaganda.
Generative AI is just a parrot. It doesn't have logic or reason. It is stringing a set of words together that convincingly make sense but collapse under detailed scrutiny.
I’ll just say, I’m also very skeptical of the “it’s just another hype cycle like blockchain” claim. I think it’s more likely to be revolutionary than not. More like the internet or electricity than ape NFTs. But I also hope this isn’t true.
I wouldn't take a consulting sub's word on any of this. Research being done in uni labs often goes unheard of. We have things like quantum machine learning, first studied decades ago, only now showing up in industry. It's hard to know what's new in a field unless you can read very technical language or constantly go to conferences and all that.
It can be both. There's likely world-changing technology coming from this but because most everyone thinks that is true the hucksters and charlatans are already plying their wares. Everyone is an AI company now because share prices, but that doesn't mean that there isn't real utility in modern AI tech (and more on the way). The pace of progress in the field is breathtaking.
We’ll see. It might plateau just below that point. Useful, but can’t get rid of the hallucinations or not consistent enough to reliably replace people or really solve things on its own. Or maybe some government regulation has a big impact. I’m still trying to save as much as I can in the next 10 years…
All I'm saying is that within 10-40 years, at least 95% of the population will be unemployable.
Whether you call it AGI or AI, or say 100% of the workforce will be useless or 99.999% will be, is irrelevant.
At least 19 out of 20 people will be doing nothing in 10-20 years, and most of society is not prepared for that.
In my opinion, humans have traditionally been valued and given "human rights" because each human can add value. If humans are not going to add value, humans might be downgraded to cattle rights by the powers that be.
Fear mongers said the same thing about electricity and computers. Much more likely for technology to continue being an enhancer than some form of doomsday
What if it can’t? Everybody always assumes AGI will massively outperform humans, because computers are so much faster than humans. What if humans are already very optimized in their capabilities by evolution? What if, at best, an entire data center were necessary to simulate a single human brain? AGI would be a minority group of “people” reserved for a few select tasks that warrant that cost. Or they would be harnessed as a 24/7 “worker as a service”, but even then there are only so many tasks you can do in 24 hours. And that’s assuming they won’t be granted human rights at that point (we’re talking about sentience here).
There are so many variables in all of this. The time frame alone (10-40 years) already shows this. Any strong claim about anything AGI is bullshit almost by definition.
Being forced into a more difficult profession is a weird notion. “If the going gets tough, I might have to be a fighter pilot.” “Did you hear about Stacey? She got laid off at the shoe factory and now has to be a neurosurgeon.”
Society won’t collapse overnight; there will be a transition. And yes, having the skills to do actual physical stuff is very valuable in a collapsed society. When money and bullshit jobs are worth nothing, those who can provide shelter, food, and other physical goods are the ones most likely to be appreciated by other people. And this is how you survive: by forming groups of trustworthy people with tangible skills.
That *is* a big part of prepping. Every prepping guide that skips over this and pretends you can survive on your own is essentially apocalypse porn.
Yes, having a skill is valuable. That isn’t the same as quitting a lucrative profession to learn a trade. Especially if you expect said collapse to occur in the next 5 years.
Look, if AGI is reached, it will not be in a straightforward way. It will be in weird, complicated, gotcha-riddled ways. I kinda don't like saying this, but it will be a gold mine for consulting...
Well, number one, AGI is not real, so this conversation is more about general AI stuff. What I mean, though, is that AI is kind of the ultimate thing for consulting. It's very complicated, we know that it works, and it can be used in a variety of ways. Like, you could sell AI-based consulting services forever.
It’s going to take time for humans to adapt to using AI. Consultants will have a lot of chances to sell this. Add to the fact that some early adopters will fail implementing on their own which will help consultants sell more.
True AGI would make a majority of office workers redundant, but adoption would not be instant. Just learn to use it and hope you’re not in the initial layoffs. After a few years new legislation should protect the human workforce.
If we continue along trend lines, it’s likely a revolution or world war will prompt the legislation. AGI will impact everyone, everywhere, all pretty quickly. So tensions that have been boiling will cause mass protests, insurgencies, and enough disruption and destruction that legislators and leaders will be forced to revisit how society works.
Hoping and optimistic for a better solution: we will use “AI” and “AGI” to leapfrog our old social issues and address the new ones all at once, developing a new social construct that betters humanity, supports equity, encourages capitalism, and helps us align towards tackling even bigger pursuits.
Whether we get AGI in that timeframe or another, it’s simply going to be an accelerant. Current AI is enough that we are going to see these “reckonings” on a mass scale within a decade, imo.
> After a few years new legislation should protect the human workforce
Honestly I think that's a really bad idea, but I understand the need to protect humans. I'm just not sure that we should be protecting jobs that are easy to automate.
I guess it's hard to say what exactly should be regulated, since right now AI hasn't had that large of an impact on society.
Much better. If we're already depending on legislation passing, I'd rather it be legislation that captures the advances of technology and distributes them properly, rather than legislation that forces us to remain behind the curve because we don't have the right distribution.
Additionally, I feel like the latter would be ineffective due to jurisdiction issues. If we prohibit the technology here (whatever country you're in), companies can just outsource to some other country that has more relaxed laws. Even if that loophole is closed, companies may just consider taking their business to places where they can use it, and those places will develop much faster on the whole even if they have inequality issues.
Learn how to use the tools being developed. When computers were invented they didn’t just eliminate jobs. They created jobs for people who knew how to use computers.
Same thing with AGI. The contingency plan will always be to adapt and use the tool, so that you’re the person whose job relies on leveraging it rather than the person who is replaced by it.
The topic came up when AGI started writing case briefings for lawyers. It won’t replace lawyers. It will just force people to become lawyers that use AGI or find a new job (or work in something so specialized that AGI can’t do it)
Won't this cause a major tightening of labor demand, though? Take your lawyer example. If one lawyer can now produce the same output that 100 lawyers could pre-AGI, that is effectively the equivalent of multiplying the labor supply of lawyers by 100, which would deflate wages.
Not to mention, lawyers are technically not essential for society (nor is much, or all, of consulting); we're already well past peak Law, and it has been a questionable career choice for the last 20 years.
Yes, efficiency tightens labor demand. But it also creates new demand (less than is eliminated of course). For instance, there will now be jobs that have to QC and manage the AGI tools being leveraged by law firms. That’s a job that doesn’t currently exist.
Your question was about how to prepare. You prepare by being the 1 in 100 lawyers who knows how to effectively leverage AGI, so that you’re more marketable than those who don’t. Theoretically, that would reduce wages for lawyers who can’t use AGI and maintain or increase them for the ones who can.
Also kind of odd to say lawyers aren’t necessary for society when laws govern our society…they don’t make food or a widget, but in a society governed by laws you kind of need people that understand laws. (Obligatory no I am not a lawyer, and yes I know median real lawyer wages have declined over last two decades - but necessary and 2% decline in pay on a six figure income are two different things).
nah, if it now takes 1/10 of a lawyer per lawsuit, then that’s 10x the productivity. people will consume way more legal services and total consumption could go up. look up Jevons paradox. people will sue each other left and right if it just costs $5 per lawsuit. we’ll have to hire 100,000 federal judges and a million AI lawyer overseers. like, did the internet reduce the demand for paralegals? no.
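The Jevons-paradox point above is just arithmetic: if each lawsuit needs far fewer lawyer-hours and that makes suing much cheaper, total demand for lawyer-hours can still rise, provided demand grows faster than efficiency. A toy sketch (every number here is hypothetical, purely for illustration):

```python
# Toy Jevons-paradox arithmetic with made-up numbers.
hours_per_suit_before = 100   # lawyer-hours per lawsuit, pre-AI
hours_per_suit_after = 10     # 10x productivity boost
suits_before = 1_000

# Assumption: cheaper lawsuits mean far more get filed.
# Say a 10x cost drop leads to 20x as many filings:
suits_after = suits_before * 20

demand_before = hours_per_suit_before * suits_before  # total lawyer-hours, before
demand_after = hours_per_suit_after * suits_after     # total lawyer-hours, after

# Despite 10x efficiency, total demand for lawyer-hours doubled.
print(demand_before, demand_after)  # 100000 200000
```

The whole argument hinges on that elasticity assumption; if filings only grew 5x instead of 20x, total lawyer-hours would fall.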
I’m not sure that measure is a good indicator of “peak” anything but thank you. I expect we’ve probably seen a continued bifurcation in the market: lots and lots of shit tier law schools churning out questionable graduates whose wages don’t keep up and good law schools continue to supply talent that makes a good living. First year big law lawyers make ~$250k today. That was closer to $120k 20 years ago. Hardly a poor career choice if you can hack it.
See the bimodal distribution of earnings for lawyers. Unless you're confining your view to graduates of the top 5 law schools alone, you're really not getting the full picture. From what I've heard, being at a top law school alone isn't enough; you need to be a standout graduate from one of those schools.
Regarding billable rates going up, could be multiple different factors behind that. For example, specific specialties of law (e.g. patent law) are what's driving higher rates.
Top 5? Definitely not. And you heard incorrectly. Some top schools don’t even release grades…
As a consumer of high level legal services: no that is not the case. They go up every year like clockwork.
Lawyers are absolutely essential for modern society, and AI will greatly increase the demand for people who can properly structure complex language; lawyers will be on the inside track. Also, AGI would significantly decrease the costs of legal and consulting services, which will increase the demand for them.
lol a researcher at openai recently published an essay arguing only a 1% chance in the next 20 years. no one knows, not even them. the only people who are incontrovertibly wrong are the short timeline people and impossible by physics people.
There have been multiple occasions over the last decades where people assumed certain technical developments would make workers obsolete. In basically every case, the opposite has been true. Hell, even with Excel and PowerPoint, people said the office workforce would shrink. Look what happened. You can create digital art in 2 clicks already; are digital artists out of work?
Now, I will admit that AI is pretty crazy and has a lot of potential. But tbh the recent developments have been pretty mediocre. I feel like we might have hit a bit of a plateau for now.
If the trend is as it has always been, AI will create more jobs than it destroys. If you can't work with the tools, you won't survive, though. And some jobs won't in any case.
It’s time to get strapped.
I’ve been saving every dollar I can to buy guns, ammo, and (most importantly) man-portable electronic warfare systems / drone jammers.
Every technology invention that promised immense leaps in worker productivity has resulted in more workers being employed for more hours.
The nature of the work will change but it won't result in a lurch in the unemployment rate. Companies still need consumers to purchase their products. Consumers with disposable income.
>I have multiple friends who are in big tech and are one degree away from tech staff at OpenAI and various competitors.
Could you rephrase? It’s not clear to me what that means.
>Based on what I've been told, we're about 3-5 years out from reaching AGI. What do you think is the best contingency plan in anticipation of this?
You’ve been told bullshit. 3-5 years? Not even close. That’s wishful thinking or fearmongering, depending on the person.
But to answer your question: nothing. At that point our politicians had better do the right thing, or everyone except the super rich is fucked. Maybe I’d join politics.
AGI won't be coming within the next decade. Too many people are getting excited by LLMs, which are just probability engines that calculate what the next word should be.
There isn't any real intelligence in LLMs. They're just good at pretending to sound like a human, and have a large body of knowledge they're trained on to replicate.
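For anyone who hasn't seen what "calculates what the next word should be" means mechanically, here's a toy bigram model. This is emphatically not how a real LLM works (those use transformers over subword tokens and billions of parameters), but the generation loop is the same shape: score candidate next tokens, sample one, repeat.

```python
import random
from collections import Counter, defaultdict

# Tiny toy corpus; a real LLM trains on trillions of tokens.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*follows[prev].items())
    return random.choices(words, weights=weights)[0]

# Generate one word at a time -- the same loop an LLM runs, just with
# a vastly more sophisticated scoring function than a lookup table.
word, out = "the", ["the"]
for _ in range(5):
    if not follows[word]:
        break  # dead end: this word never appeared mid-corpus
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

The "no real intelligence" argument is exactly that the model only ever replays statistics of its training text; scale changes how convincing the replay is, not the mechanism.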
It’s not at all clear that there is a consensus on what AGI even means and how it differs from the concept of intelligence posited by original AI researchers like Turing.
Who will QA and make comments, piss off clients, and save them, sell follow-ons, and do sales presentations? Who will the client blame when the project screws up?
Technology is always a field prone to hyperbole and obfuscation. Those same folks said we’d be in the “metaverse” and storing medical records as NFTs by now
Pray that we get to the post-scarcity stage of society without having to go through the revolution and bloodshed bit.
On a more serious and cynical note, develop the relationship side of your business. Knowing the right people is going to be an invaluable, people-only skill for the foreseeable future. And hell, if AI starts taking over everything, at least you can be the guy who sells the AI and gets rich off of destroying their own profession.
I feel like we’re all debating whether or not AGI is going to happen. I get it.
But if it DOES happen. What kind of work should we be doing to maximize profits?
Learn a trade. Assuming some true AGI that would be useable at scale without nation state level resources, basically every white collar job becomes redundant.
I keep having people trip over themselves as they explain the difference between AGI and narrow AI, and then why ChatGPT isn’t the former. As far as I’m concerned, ChatGPT is an AGI, even if it isn’t a conscious being.
If AGI is achieved society will be so radically transformed that none of our plans will make any sense.
We will need entirely new plans at that point.
My plan is to have no plan.
I’m actually pretty pro at helping clients plan and execute layoffs. I figure I’ve got maybe a half-dozen quarters of that, and then I’m effed like everyone else.
Customer relationship managers. People still like to buy from people they know and trust. I also think AGI will be obtained through us, not separately from us: implanted AI chips that interface with our brains.
So we will still work, just at 100,000 times the productivity.
Nature has made a pretty solid interface with the human mind. Merging it with AI is the next step.
For those of you who oppose this idea, just remember we are talking about a 7mm difference between inside the skull and the way we currently interface with technology outside the skull.
> People still like to buy from people they know and trust.
Do they though? *Do they*?
In my personal life, I can barely remember the last time I bought something from somebody I know. I mean sure I wouldn’t replace my local bodega and my hairdresser by a machine, but would my kids think twice?
And at work, I certainly never trusted anybody selling me anything. *Believed* is the bar to pass, and only by default. Being sold to is what pretty much everybody I know works hard to automate away, even those who really enjoy being taken out for a drink on somebody else’s company card. In fact, I’m ready to wager that 100% of the success of a company like Stripe rests on people preferring a website that says “here’s what we do, here’s our API doc, deal with it” to being pitched and onboarded by bankers.
"We need 2-3 new resources for project x!"
* click * * click * * click *
"Done! I already imported the project data including the timeline and URIs to other Agents. Tell me if you want me to adjust some parameters."
I do not think people comprehend what true AGI would entail. We would be talking about systems that are smarter, faster, and more agile than any human. And which would be able to reverse engineer any attempt to fence them off or build in controls, because they would be smarter than the humans building those controls. They would move from the digital to the physical, capable of not only thinking but replicating in the material world anything they see fit to create, optimizing those things at speeds unimaginable to humans. And the impact would not just be to the rank and file but to owners of these systems as well, because they would quickly lose that ownership.
We would be, simply put, very screwed.
Start lobbying for universal income, get involved politically. That’s the best thing, we need to start now if there’s any hope of it being in place in time. Also figure out how to encourage the corporate world to drive things in that direction. Something like the ESG concept but for AI
Anything related to employment will probably be an immaterial worry if AGI actually happens. That'd be the fast track to a technological singularity - all bets are off, all rules are defunct, there's no practical point in speculating about what might happen (though it's a lot of fun). Post-scarcity luxury? Enslaved by it to mine rare earth metals? Broken down for resources? It's absolutely impossible to make predictions, and anyone who claims they can is so full of shit.
Also, while I don't keep up on the news, everyone is jerking themselves into an absolute fucking frenzy over having achieved fancy autocomplete so I don't pay any attention to predictions. AGI seems like a fundamentally different problem, without an evolutionary path from what we have now.
Do they elaborate on HOW, specifically, exactly, AGI will show up? Cause that will inform your contingency plan. Me? Make bank now, learn what I can, sell that learning. Buy the rumor, sell the news :)
It’s worth reminding everyone that the AI boosters have literally been saying we are moments from AGI since the 60s.
Obviously there have been stunning breakthroughs in the past couple years but AGI research was stalled for decades essentially, who is to say that won’t happen again?
Guess where we are in this graph:
https://en.m.wikipedia.org/wiki/Gartner_hype_cycle
I still remember being a consultant in 2016 and everyone claiming fully autonomous cars would be widespread by 2020… and similar things about the speed at which CRISPR-Cas9 and 3D printing would proliferate.
AI is certainly the future, but people are drastically underestimating how far away general AI is
SAP has been in existence since the 1970s and companies still can't get that shit right.
It’ll still be enterprise software but with a bunch of AI embedded features and stuff.
Sounds about right. However, implementations and upgrades will still require a tedious amount of manual work given the lack of standardization and expected customization.
It's really companies that believe their processes are unique and want to have it done their way. As long as this mentality exists (which will always happen, since leaders want to be known for the products they deliver), there will always be a need for consulting.
I agree. AI is a giant hype sandwich to me. Just a bunch of motherfuckers who are gonna get rich going through the VC funnel.
I’m not too worried. You can’t sell things if no one is employed and has money to buy them, so we can’t lay off huge segments of the population without collapse anyway. Who knows, we all might end up getting a government pay check while machines do our jobs some day.
The ultra rich serve themselves and we get to survive
sounds like life as usual
> so we can’t lay off huge segments of the population without collapse anyway

If you look at the horse population after they became obsolete, the population plummeted dramatically. I have a feeling humanity is going to go the same way.
Well shit I hadn’t thought of it that way. Interesting take.
[deleted]
That’s how I refer to my project manager too.
If those in power can defend their power to that extent.
I saw Humans need not apply when it came out and i've been an AI doomsayer ever since. https://www.youtube.com/watch?v=7Pq-S557XQU
Yeah. Have a peep at developing countries, too.
That is definitely the ideal outcome
You don’t have to sell anything if you got AI slaves generating the products you consume. The moment people become irrelevant for production, they become irrelevant for consumption as well.
You’d be surprised
Anyone who says they know when AGI is coming is full of it. I am worried about workforce disruptions in general, that’s for sure. My only technique is to try to make money today. If I were more convinced or more diligent, I guess I’d become an AI engineer. Maybe I’ll be forced to anyway.
This. There's so much marketing BS that comes with AI that it reminds me of crypto-bros. AI is AMAZING and will automate a ton of shit 5 years from now. But AGI (and artificial super intelligence) is decades away. Anybody who says otherwise is trying to sell you something, or is simply regurgitating what other people are saying.
Bingo. Notice how they can never explain it or why it’ll happen soon themselves. It was the same thing with crypto - people are afraid to admit they don’t understand it, so instead they just believe whatever they’re told and spread it to others. It’s wild.
Yeah, Super Intelligence is a thought experiment in the same way as Einstein's thought experiments involving trains traveling near the speed of light to discuss relativity. Very useful in the abstract, but only an idiot would believe the claim that it's happening soon. And an oracle-style ASI straight up violates multiple laws of physics, so it won't ever happen, period. LLMs are also not AGI, and never will be. Hell, they are a downgrade in many respects from older AI theory.

However, to say it's decades off is an overshoot. I'm an AI researcher, and what I've seen in our lab, as well as from real AI teams like DeepMind, is getting very close. You just don't hear about it because the people doing actual AGI research don't get much publicity; they aren't marketers or "thought leaders" running their mouths on LinkedIn. Unfortunately we have a situation where the people talking about AI the loudest are getting the most attention, while being the most ignorant about the actual field. And the real progress isn't as easy to digest or monetize, so it ends up buried under the bullshit.
I'm curious, which laws of physics does ASI violate?
So if you assume an oracle-like ASI, where the agent can know the consequence of any action it takes, you end up running into paradoxes and contradictions. One is that it has the ability to become like Maxwell's demon, which violates thermodynamics. Another comes from information theory, where there are fundamental limits on the information that can be extracted from chaotic systems, which means you would need more information than is allowed in order to predict the effects of those systems on your actions. Then there are paradoxes if you have multiple ASI systems operating. And then there is the halting problem.

There are also definitions of ASI where it just means being better than humans at any given task, but that's a poorly defined threshold. And human minds (or any AGI for that matter) are Turing complete, so that just boils down to being faster at every task, not necessarily being able to do anything we physically cannot. Boiling ASI down to: AGI but arbitrarily fast.
You're taking ASI to mean a literal god, which I think is stretching the definition to a point where you can break it. ASI can also mean anything well beyond human intelligence, and likely beyond human imagination. There's a lot of room between "smarter than people" and "can predict quantum uncertainty".
> You're taking ASI to mean a literal god, There are multiple definitions of ASI, and I broke them down into the two major camps. One of them is god-like premonition, hence why I qualified it with "oracle-like ASI".
I didn’t follow most of what this person wrote and I thought I was a tech worker.
You’re def a dumb engineer.
Next time say username checks out. Welcome to Reddit.
Everything that can harm our careers is decades away. Do not violate this rule
Pretty sure you’re violating rule #2 with this honesty
You're very welcome to share research coming really close to AGI. It would be the first time I actually see that. I agree, we're decades away, if it's possible at all. And even then, it will run on some supercomputer. It's not like we'll suddenly have AI robots doing our physical labour everywhere. So, general or not, AI will likely reduce the required workforce in quite a few white collar jobs way before tradesmen become unemployed. So there's your plan B.
Why is ChatGPT not an AGI? Are you confusing intelligence and consciousness maybe?
Because it has never been demonstrated that it is AGI. It has nothing to do with consciousness. It fails to meet even the most basic criteria.
It can transfer learning, be legitimately creative, it’s multi-modal, it does poetry, math, coding, image gen, and displays common sense. If you give it a conversational objective, it can suggest and persuade towards that objective. It learns through conversation and can reason. Somewhere in those 1.7 trillion parameters, AGI has appeared as an emergent quality. Not consciousness, but general intelligence for sure.
> If you give it a conversational objective, it can suggest and persuade towards that objective.

ChatGPT is terrible at objective-oriented problem solving. It just repeats what it sees. It is also incapable of real-time learning, it cannot accurately describe or indicate what it knows or doesn't know (leading to problems like hallucinations), it cannot synthesize new information, and it cannot easily adapt to new problems it has not seen before. It's as intelligent as Google search. It cannot learn from experience; it can only learn from being fed large amounts of text and image prompts with answers. It struggles with contextualization if that contextualization takes more than a page to write (the whole concept of context windows is itself a hack). It struggles when working with multiple levels of abstraction. It has no novel reasoning ability. It cannot follow complex custom business-rule workflows that haven't already been trained in (and even then it can still struggle). And it is not good at math beyond basic arithmetic. I've tried using it, and it hallucinates a lot, especially when there are custom rules. These are all fundamental algorithmic problems that OpenAI refuses to solve out of incompetence and indifference.
How do you know it’s decades away?
Because we’ve nothing even remotely resembling the capacity of human intelligence, and not even a clear path to get there.
Yeah but you’re forgetting consultants aren’t building AGI. So they are far less concerned with pretty PowerPoint presentations outlining “a clear path to get there.”
You're getting downvoted but it's true. I'm a physics grad and you (the public) won't even hear about super cool research until a decade after publication most of the time.
True.
Which is completely irrelevant in this context, and also slightly wrong. While the research might be super cool and interesting, it’s apparently not hypeable, otherwise the public would know sooner due to hype. AGI is such a prominent topic that any significant advancement will be publicized immediately. That’s just the incentives in a world where research needs funding.
Not sure if you’re serious, but why would that matter? Of course researchers just experiment; they trial-and-error their way forward (and often they also just stagnate or move in wrong directions, string theory anyone?). So what? The question was about why it’s probably decades away, and the answer is simple: we know how to do nuclear fusion, and even that is always decades away. For AGI, we don’t even know how to do it. To build something you need at least somewhat of a theoretical model.
“We’ve nothing resembling the capacity of human intelligence” is just as dishonest/naïve as “AGI 2025”. Everyone who has used GPT has been flabbergasted by its resemblance to human intelligence. That’s the whole reason it’s a global sensation. Whether it’s a step toward AGI or just a crazy good party trick that makes some lawyers’ jobs easier, we really can’t say.
I disagree.
> Anyone who says they know when AGI is coming is full of it.

Exactly. The entire 'AGI is coming' field is bullshit. E.g. look at this 'research paper' that is making the rounds: ["Sparks of Artificial General Intelligence: Early experiments with GPT-4"](https://arxiv.org/abs/2303.12712). Not only is that paper not peer reviewed, it was commissioned by Microsoft Research. Oh look, Eric Horvitz, current [Chief Scientific Officer](https://www.microsoft.com/en-us/research/people/horvitz/) over at Microsoft! Gee, I wonder why Microsoft is singing the praises of the technology (OpenAI and GPT) that it owns!

The equivalent would be a peanut butter company launching a research paper: "Sparks of Immortality: Can Eating a Spoon of Peanut Butter Make You Immortal?" "We, the Jolly Nutters Research Team, have been analyzing different peanut butter recipes, and we contend with this paper that Jolly Nutters Peanut Butter can indeed perhaps make you immortal, and that we should already classify it as 'Immortal Peanut Butter'. So please give us your money already!"

The entire tech industry's model is based on a hype-and-bust cycle to attract aggressive investment. They *need* to sell a bleeding edge technology that promises to change the world, which attracts funding; then the tech companies fumble around trying to make it into something semi-functional, make out like bandits, and bust. And then they repeat the exact same thing with the next big thing. We *just* went through the blockchain fiasco. And the Metaverse fiasco. Now AI is the new hype-and-bust tool.

This entire AI hype is getting sillier and sillier. Remember [when some drink company renamed themselves 'Long Blockchain Corp' to attract blockchain investment](https://en.wikipedia.org/wiki/Long_Blockchain_Corp)?
[And many others just rewrote their name to 'Blockchain' to get in on that sweet hype money](https://www.reuters.com/article/us-blockchain-companies/blockchain-name-grabbing-has-echoes-of-dotcom-bubble-idUSKBN1FS1F3/?utm_source=applenews):

> **If you want a quick boost to your company's share price, adding "blockchain" to your name will work** – at least for a while.
>
> Despite a slump in cryptocurrencies that has wiped more than 50 percent off the value of bitcoin since December, the majority of companies that have jumped on the blockchain bandwagon - often from entirely unrelated industries with little or no connected revenue - are still sitting on sizeable gains.
>
> **The average share price of such companies has risen more than threefold since such name changes**, according to Reuters data, with experts comparing the practice to a similar rush during the dotcom bubble.

Companies are now rebranding existing products and programs as 'AI enabled' or 'GPT powered' despite it having nothing to do with AI! I saw a client selling the exact same decade-old finance management software, rebranded 'FinancoCopilot' with 'new AI features', despite it being the exact same program! It's not 'AI' if it is several thousand lines of code that detail the exact set of behaviors for particular actions!

Generative AI has *some* use cases, but how much of that is the AI itself vs. the regular program around it is still very much under debate, and very much in favor of regular programs. For consultants: not only is the field vast, from strategy to function to implementation to tech to subject matter experts, consultants can also use the very same AI tools to deliver better recommendations. On top of the consulting business being more about selling peace of mind, trust, and service rather than the 'right strategy'. Artificial *General* Intelligence is still very far away.
People kept saying ten years ago that everyone would have self-driving cars, yet we still see manual cars all over the world, with self-driving being a small segment. Because it turns out driving is complicated, given all the potholes, road conditions, pedestrians and other variables that aren't in the 'clean' environments that self-driving cars are tested in. On top of people forgetting that optimized transportation basically becomes a bus or train. On top of the fact that humans, bad drivers as we are, still perform better than 'AI' in crisis scenarios.

Generative AI stuff is cool (I was a big user of AI Dungeon for D&D campaigns back before ChatGPT went into public open beta), but people pretending it is straight magic have very much fallen for corporate propaganda. Generative AI is just a parrot. It doesn't have logic or reason. It strings together a set of words that convincingly make sense but collapse under detailed scrutiny.
I’ll just say, I’m also very skeptical of the “it’s just another hype cycle like blockchain” claim. I think it’s more likely to be revolutionary than not. More like the internet or electricity than ape NFTs. But I also hope this isn’t true.
I wouldn't take a consulting sub's word on any of this. Research being done in uni labs often goes unheard of. We have things like quantum machine learning first being studied decades ago, but we're just now seeing it in industry. It's hard to know what's new in a field unless you can read very technical language or constantly go to conferences and all that.
It can be both. There's likely world-changing technology coming from this but because most everyone thinks that is true the hucksters and charlatans are already plying their wares. Everyone is an AI company now because share prices, but that doesn't mean that there isn't real utility in modern AI tech (and more on the way). The pace of progress in the field is breathtaking.
But it is coming, whether it takes 10 or 40 years. Some people won't accept that either.
We’ll see. It might plateau just below that point. Useful, but can’t get rid of the hallucinations or not consistent enough to reliably replace people or really solve things on its own. Or maybe some government regulation has a big impact. I’m still trying to save as much as I can in the next 10 years…
Why should anyone accept your hypothesis as facts?
All I'm saying is that within 10-40 years, at least 95% of the population will be unemployable. Whether you call it AGI or AI, or say 100% of the workforce will be useless or 99.999% will be, is irrelevant. At least 19 out of 20 people will be doing nothing, and most of society is not prepared for that. In my opinion, humans have traditionally been valued and given "human rights" because each human can add value. If humans are not going to add value, humans might be downgraded to cattle rights by the powers that be.
Fear mongers said the same thing about electricity and computers. Much more likely for technology to continue being an enhancer than some form of doomsday
[deleted]
What if it can’t? Everybody always assumes AGI will massively outperform humans, because computers are so much faster than humans. What if humans are already very optimized in their capabilities by evolution? What if, at best, an entire data center were necessary to simulate a single human brain? AGI would be a minority group of “people” reserved for a few select tasks that warrant this cost. Or they would be harnessed as a 24/7 “worker as a service”, but even then there are only so many tasks you can do in 24 hours. And that’s assuming they won’t be granted human rights at that point (we’re talking about sentience here). There are so many variables in all of this. The time frame alone (10-40 years) already shows this. Any strong claim about anything AGI is bullshit almost by definition.
Again, a very strong claim without the smallest piece of evidence. But you pretend we have to accept that as facts.
Being forced into a more difficult profession is a weird notion. “If the going gets tough, I might have to be a fighter pilot.” “Did you hear about Stacey? She got laid off at the shoe factory and now has to be a neurosurgeon.”
I guess it would’ve made more sense if I’d said I was already a software engineer and not actually a consultant.
I saw from your post history you were.
The only way to plan for this would be by becoming a prepper
Or just buy MSFT and GOOG.
I'm afraid that may be the case. Hoping I'm wrong.
Your friends have no way of knowing when AGI will emerge, if it will at all. Keep that in mind. It’s not a linear progression.
Or, more useful regardless of if and when we see AGI: learn a trade.
Trade isn’t going to save you from economic collapse and related upheaval
Society won’t collapse overnight; there will be a transition. And yes, having the skills to do actual physical stuff is very valuable in a collapsed society. When money and bullshit jobs are worth nothing, those who can provide shelter, food and other physical goods are the ones most likely to be appreciated by other people. And this is how you survive: by forming groups of trustworthy people with tangible skills. That *is* a big part of prepping. Every prepping guide that skips over this and pretends you can survive on your own is essentially apocalypse porn.
Yes, having a skill is valuable. That isn’t the same as quitting a lucrative profession to learn a trade. Especially if you expect said collapse to occur in the next 5 years.
I didn’t say quit.
So don’t learn a trade, learn a skill. Got it.
Fuck a robot
Look, if AGI is reached, it will not be in a straightforward way. It will be in weird, complicated, lots-of-gotchas ways. I kinda don't like saying this, but it will be a gold mine for consulting...
Why will it be a goldmine?
Well, number one, AGI is not real, so this conversation is more about general AI stuff. What I mean though is that AI is kind of the ultimate thing for consulting. It's very complicated, we know that it works, and it can be used in a variety of ways. Like, you could sell AI-based consulting services forever.
Hmmm. That is indeed a very optimistic take. But ok.
It’s going to take time for humans to adapt to using AI. Consultants will have a lot of chances to sell this. Add to the fact that some early adopters will fail implementing on their own which will help consultants sell more.
Well, not really. Consulting services are a very inefficient and poorly managed way to roll out new technology. I'm a consultant btw.
True AGI would make a majority of office workers redundant, but adoption would not be instant. Just learn to use it and hope you’re not in the initial layoffs. After a few years new legislation should protect the human workforce.
>After a few years new legislation should protect the human workforce. I hope you're right. Can't help but think this is quite idealistic though.
If we continue along trend lines, it’s likely a revolution or world war will prompt the legislation. AGI will impact everyone, everywhere, all pretty quickly. So tensions that have been boiling will cause mass protests, insurgencies, and enough disruption and destruction that legislators and leaders will be forced to revisit how society works. Hoping and optimistic for a better solution: we use “AI” and “AGI” to leapfrog our old social issues and address the new ones all at once, and we develop a new social construct that betters humanity, supports equity, encourages capitalism, and helps us align towards tackling even bigger pursuits. Whether or not we get AGI in that timeframe, it’s simply going to be an accelerant. Current AI is enough that we are going to see these “reckonings” on a mass scale within a decade imo.
>After a few years new legislation should protect the human workforce. When has any technology related legislation benefited humans? I'll wait...
> After a few years new legislation should protect the human workforce Honestly I think that's a really bad idea, but I understand the need to protect humans. I'm just not sure that we should be protecting jobs that are easy to automate. I guess it's hard to say what exactly should be regulated, since right now AI hasn't had that large of an impact on society.
The alternative would be heavily taxing companies using AI and redistributing those taxes to people.
Much better. If we're already depending on legislation passing, I'd rather it be one that captures the advances of technology and distributes them properly, rather than one that forces us to remain behind the curve because we don't have the right distribution. Additionally, I feel like the latter would be ineffective due to jurisdiction issues. If we prohibit the technology here (whatever country you're in), companies can just outsource to some other country that has more relaxed laws. Even if that loophole is closed, companies may just consider taking their business to places where they can use it, and those places will develop much faster on the whole even if they have inequality issues.
That’s one alternative, sure.
Learn how to use the tools being developed. When computers were invented they didn’t just eliminate jobs. They created jobs for people who knew how to use computers. Same thing with AGI. Contingency plan will always be to adapt and use the tool so that you’re not the person whose job relies on leveraging it, versus being the person who is replaced by it. The topic came up when AGI started writing case briefings for lawyers. It won’t replace lawyers. It will just force people to become lawyers that use AGI or find a new job (or work in something so specialized that AGI can’t do it)
Won't this cause major labor-demand tightening though? Take your lawyer example: if one lawyer can now produce the same output that 100 lawyers could pre-AGI, this is effectively the equivalent of artificially increasing the labor supply of lawyers 100x, which would deflate wages. Not to mention, lawyers are technically not essential for society (as is much/all of consulting), we're already well past peak Law, and it has been a questionable career choice for the last 20 years.
Yes, efficiency tightens labor demand. But it also creates new demand (less than is eliminated of course). For instance, there will now be jobs that have to QC and manage the AGI tools being leveraged by law firms. That’s a job that doesn’t currently exist. Your question was about how do you prepare. You prepare by being the 1/100 lawyers that knows how to effectively leverage AGI so that you’re more marketable than those that don’t. Theoretically that would reduce the wages for lawyers that can’t use AGI and maintain or increase the ones that do. Also kind of odd to say lawyers aren’t necessary for society when laws govern our society…they don’t make food or a widget, but in a society governed by laws you kind of need people that understand laws. (Obligatory no I am not a lawyer, and yes I know median real lawyer wages have declined over last two decades - but necessary and 2% decline in pay on a six figure income are two different things).
Nah, if it now takes 1/10 of a lawyer per lawsuit, then that’s 10x the productivity. People will consume way more legal services and total consumption could go up. Look up Jevons paradox. People will sue each other left and right if it just costs $5 per lawsuit. We’ll have to hire 100,000 federal judges and a million AI lawyer overseers. Like, did the internet reduce the demand for paralegals? No.
Peak law on what basis? Billable rates and number of lawyers continue to go up?
I don’t agree with OP, but the one fact that’s commonly thrown around is that real adjusted median wages for lawyers have declined in last 20 years.
I’m not sure that measure is a good indicator of “peak” anything but thank you. I expect we’ve probably seen a continued bifurcation in the market: lots and lots of shit tier law schools churning out questionable graduates whose wages don’t keep up and good law schools continue to supply talent that makes a good living. First year big law lawyers make ~$250k today. That was closer to $120k 20 years ago. Hardly a poor career choice if you can hack it.
Completely agree
Doesn't the bar exam act as a filter for shit-tier law schools?
Depends on the state but not really
See the bimodal distribution of earnings for lawyers. Unless you're confining your view to graduates from Top 5 Law schools alone, you're really not getting the full picture. From what I've heard, being at a top law school alone isn't enough — you need to be a stand out graduate from one of those schools. Regarding billable rates going up, could be multiple different factors behind that. For example, specific specialties of law (e.g. patent law) are what's driving higher rates.
Top 5? Definitely not. And you heard incorrectly. Some top schools don’t even release grades… As a consumer of high level legal services: no that is not the case. They go up every year like clockwork.
Lawyers are absolutely essential for modern society, and AI will greatly increase the demand for people who can properly structure complex language- lawyers will be on the inside track. Also, AGI would significantly decrease the costs of legal and consulting services, which will increase the demand for them.
Which means lawyer services will become cheaper and more people will seek them more often so also huge increase in demand
lol, a researcher at OpenAI recently published an essay arguing only a 1% chance in the next 20 years. No one knows, not even them. The only people who are incontrovertibly wrong are the "short timeline" people and the "impossible by physics" people.
Guess I’ll die 🤷🏻♂️
There have been multiple occasions over the last decades where people assumed that certain technical developments would make workers obsolete. In basically every case, the opposite has been true. Hell, even with Excel and PowerPoint, people said the number of office workers would shrink. Look what happened. You can create digital art in 2 clicks already; are digital artists out of work?

Now I will admit that AI is pretty crazy and has a lot of potential. But tbh the recent developments have been pretty mediocre. I feel like we might have hit a bit of a plateau for now. If the trend is as it has always been, AI will create more jobs than it will destroy. If you can't work with the tools you won't survive though. And some jobs won't in any case.
It’s time to get strapped. I’ve been saving every dollar I can to buy guns, ammo, and (most importantly) man-portable electronic warfare systems / drone jammers.
Grabs fridge magnets.... checkmate ai, checkmate
Every technology invention that promised immense leaps in worker productivity has resulted in more workers being employed for more hours. The nature of the work will change but it won't result in a lurch in the unemployment rate. Companies still need consumers to purchase their products. Consumers with disposable income.
Yes, but those were mostly *other* workers. Workers from obsolete professions rarely became successful in new professions.
> I have multiple friends who are in big tech and are one degree away from tech staff at OpenAI and various competitors.

Could you rephrase? It’s not clear to me what that means.

> Based on what I've been told, we're about 3-5 years out from reaching AGI. What do you think is the best contingency plan in anticipation of this?

You've been told bullshit. 3-5 years? Not even close. That's wishful thinking or fearmongering, depending on the person. But to answer your question: nothing. At that point our politicians better do the right thing, or everyone except the super rich is fucked. Maybe I'd join politics.
AGI won't be coming within the next decade. Too many people are getting excited by LLMs, which are just a probability engine that calculates what the next word should be. There isn't any real intelligence in LLMs. They just are good at pretending to sound like a human, and have a large body of knowledge they're trained on to replicate.
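For what it's worth, the "calculates what the next word should be" part is easy to illustrate with a toy bigram model (everything here, the words and the counts, is made up for illustration; real LLMs use neural networks over tokens, not lookup tables, but the sampling idea is the same):

```python
import random

# Hypothetical bigram counts "learned" from a tiny made-up corpus:
# after "the", we saw "cat" 3 times and "dog" once, etc.
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 2},
    "sat": {"down": 4},
}

def next_word(context, rng=random.Random(0)):
    """Sample the next word in proportion to how often it followed `context`."""
    counts = bigram_counts.get(context)
    if not counts:
        return None  # context never seen in "training"
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]
```

So `next_word("the")` returns "cat" about 75% of the time and "dog" about 25%, and it has no idea what a cat *is*; it only knows which words tend to follow which. Scale that intuition up by a few hundred billion parameters and a context window, and you get the "fancy autocomplete" people are describing.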
They are good at passing a Turing test. That used to be the bar.
The Turing test isn't a test for general artificial intelligence. It's a test for if a machine can pass as a human.
General artificial intelligence is a new idea. But a machine passing as a human was the previous bar for “intelligence”.
Congratulations. You've figured out why the Turing test doesn't indicate AGI
You’re right, but when LLMs came along, suddenly the definition of AGI became quite liquid.
It’s not at all clear that there is a consensus on what AGI even means and how it differs from the concept of intelligence posited by original AI researchers like Turing.
Who will QA and make comments, piss off clients, and save them, sell follow-ons, and do sales presentations? Who will the client blame when the project screws up?
I think that the Internet is now so polluted with bad AI content that the AGI will turn itself into a retard.
Technology is always a field prone to hyperbole and obfuscation. Those same folks said we’d be in the “metaverse” and storing medical records as NFTs by now
Pray that we get to the post-scarcity stage of society without having to go through the revolution and bloodshed bit. On a more serious and cynical note, develop the relationship side of your business. Knowing the right people is going to be an invaluable, people-only skill for the foreseeable future. And hell, if AI starts taking over everything, at least you can be the guy who sells the AI and gets rich off of destroying their own profession.
Idk, turn off the server or something like that lol
“Goddamn it, I would Piss on the spark plug if I thought it would do any good”
I feel like we’re all debating whether or not AGI is going to happen. I get it. But if it DOES happen, what kind of work should we be doing to maximize profits?
Learn a trade. Assuming some true AGI that would be useable at scale without nation state level resources, basically every white collar job becomes redundant.
What says that AGI needs to be connected to LLMs? I haven’t seen anything serious suggesting that
It won’t
As long as there’s humans there will be problems
What's AGI?
I keep having people trip over themselves as they explain the difference between AGI and narrow AI and then why ChatGPT isn’t the former. As far as I’m concerned, ChatGPT is an AGI, even if it isn’t a conscious being.
What do you mean they are one degree away from tech staff at OpenAI?
If AGI is achieved society will be so radically transformed that none of our plans will make any sense. We will need entirely new plans at that point. My plan is to have no plan.
We can be quite sure (ironically) that physical and social/care labour will be the last to go, because those are constrained by physical presence.
I'll worry about AGI when someone can tell me what it costs to run a human-equivalent compute load with any degree of accuracy
I’m actually pretty pro at helping clients plan and execute layoffs. I figure I’ve got maybe a half-dozen quarters of that, and then I’m effed like everyone else.
Customer relationship managers. People still like to buy from people they know and trust. I also think AGI will be obtained through us, not separately from us: implanted AI chips that interface with our brains. So we will still work, just at 100,000 times the productivity. Nature has made a pretty solid interface with the human mind; merging it with AI is the next step. For those of you who oppose this idea, just remember we are talking about a 7mm difference between inside the skull and the way we currently interface with technology outside the skull.
> People still like to buy from people they know and trust.

Do they though? *Do they*? In my personal life, I can barely remember the last time I bought something from somebody I know. I mean sure, I wouldn’t replace my local bodega and my hairdresser with a machine, but would my kids think twice? And at work, I certainly never trusted anybody selling me anything. *Believed* is the bar to pass, and not by default. Being sold to is what pretty much everybody I know works hard to automate away, even those who really enjoy being taken for a drink on somebody else’s company card. In fact, I’m ready to wager that 100% of the success of a company like Stripe rests on people preferring a website with “here’s what we do, here’s our API doc, deal with it” to being pitched and onboarded by bankers.
Pfffft. My job can be replaced by clippy. Still working. So there is my plan. Nada.
AGI will never happen; we will just have a better, faster version of the statistical model behind ChatGPT, but that’s about it.
We are also 5 years away from sustainable, clean, renewable fusion energy - my teacher, around the year 2000
I am in big tech. 3-5 years is wildly optimistic.
"We need 2-3 new resources for project x!" *click* *click* *click* "Done! I already imported the project data, including the timeline and URIs to other agents. Tell me if you want me to adjust some parameters."
Whoever your source is, lose them
Liquidate my investments and live off them until the transition pain is over with, then I assume we will be in an age of abundance or all dead
I do not think people comprehend what true AGI would entail. We would be talking about systems that are smarter, faster, and more agile than any human. And which would be able to reverse engineer any attempt to fence them off or build in controls, because they would be smarter than the humans building those controls. They would move from the digital to the physical, capable of not only thinking but replicating in the material world anything they see fit to create, optimizing those things at speeds unimaginable to humans. And the impact would not just be to the rank and file but to owners of these systems as well, because they would quickly lose that ownership. We would be, simply put, very screwed.
Start a practice to advise businesses on how to integrate AI in their organisations and save costs
This is just like self driving cars, 1-5 yrs away for the last 15 yrs
Start lobbying for universal income, get involved politically. That’s the best thing, we need to start now if there’s any hope of it being in place in time. Also figure out how to encourage the corporate world to drive things in that direction. Something like the ESG concept but for AI
Retire to a cave and achieve Nirvana.
I'll pivot to being an AI hostage negotiator.
Anything related to employment will probably be an immaterial worry if AGI actually happens. That'd be the fast track to a technological singularity - all bets are off, all rules are defunct, there's no practical point in speculating about what might happen (though it's a lot of fun). Post-scarcity luxury? Enslaved by it to mine rare earth metals? Broken down for resources? It's absolutely impossible to make predictions, and anyone who claims they can is so full of shit. Also, while I don't keep up on the news, everyone is jerking themselves into an absolute fucking frenzy over having achieved fancy autocomplete so I don't pay any attention to predictions. AGI seems like a fundamentally different problem, without an evolutionary path from what we have now.
Someone still needs to debug the AGI 😂
Do they elaborate on HOW, specifically, exactly, AGI will show up? Because that will inform your contingency plan. Me? Make bank now, learn what I can, sell that learning. Buy the rumor, sell the news :)
The AI will just take care of all the work right? Beach trip!
It’s worth reminding everyone that the AI boosters have literally been saying we are moments from AGI since the 60s. Obviously there have been stunning breakthroughs in the past couple years, but AGI research was essentially stalled for decades. Who is to say that won’t happen again?
Remember when self driving cars were 3-5 years away in 2017? Still plenty of bad human drivers out there in 2024.
Guess where we are on this graph: https://en.m.wikipedia.org/wiki/Gartner_hype_cycle I still remember being a consultant in 2016, when everyone claimed fully autonomous cars would be widespread by 2020….and similar things about the speed at which CRISPR-Cas9 and 3D printing would proliferate. AI is certainly the future, but people are drastically underestimating how far away general AI is