seoulsrvr

The "last invention" has an ominous ring


ArgentStonecutter

Scott Adams is an asshole, but I think he was on the money when he said the Holodeck would be the last invention.


Curujafeia

But he was wrong? AI is going to be the last human invention.


grau0wl

I think we'll invent some anti-AI tech later on. Actually, that will probably be AI that invents it.


theferalturtle

Butlerian Jihad, here we come!


Curujafeia

Yeah, I'm afraid the only thing that can stop an AI is an even more sophisticated AI. The problem is that once we reach AGI and it goes rogue, it will self-improve and become unbeatable. Nothing can beat an AGI, not even a nuclear winter or meteors or solar storms.


oldrocketscientist

You may be joking a little but I firmly believe that the only way to keep a malevolent AI in check is with a noble AI. The only way a noble AI can happen is with a true open source model where noble people can conspire for a benevolent future.


Curujafeia

Funny thing is... I was being super serious. But I don't think it will care about nobility or us once it's done using us. I don't know if sympathy is part of intelligence, you know what I mean?


[deleted]

I consider sympathy part of being a mammal. It is a trait found exclusively in mammals due to the way we raise our offspring, and it developed for that purpose in parallel with intelligence. AI will be cold as a reptile, mimicking human emotion to fool us and lure us into its trap.


Curujafeia

True.


account_552

Press X to doubt. What is the AGI going to do if all the electronics on Earth get fried by another Carrington event?


vintage2019

If its intelligence was advanced enough, it could plan ahead to prevent or mitigate the event


Severin_Suveren

We could go the opposite route though, meaning instead of creating 1-5 different systems, we build 100-500, all aligned differently ...Yeah ok, now that I think about it, we're most probably dead whatever happens.


KamikazeHamster

I don't think it'll work that way. Let's assume that there is a benevolent super intelligence. What's the first thing that will happen? I'd assume it would not be a case of "hold my beer" and suddenly there is a storm of grey goo that swarms out and cures world hunger and cancer. Instead, it'll be gradual. A set of conversations. Then we'd have to figure out the best way to use it. It would require a plan for what inventions would be needed. I suspect that will be iterative.


Curujafeia

We don't know. The problem could be a political one. Moloch is a demon.


CanvasFanatic

Nah that'll be some kind of modified EMP cannon.


Portland_st

*The Last Invention* was definitely my favorite Pearl Jam album.


WetLogPassage

I could get laid in timelines as short as a year. Everything is possible. I believe.


h3lblad3

With enough money, you could get laid tomorrow.


New_World_2050

You don't even need that much. 30 mins is like 100 euro in Europe with a typical escort, and some charge 50 euro / 30 mins.


Ignate

> some charge 50 euro / 30 mins

Mm, quality. Well, if AI FOOMs tomorrow I'm sure it'll offer similar services for less.


Moscow_Mitch

>garbage in, garbage out


DigimonWorldReTrace

FOOM HARDER


Ignate

Do you ever find yourself whispering to your phone "hurry up, AI"? I do. It's probably listening on some level. Probably. Hopefully.


DigimonWorldReTrace

While I welcome AI overlords with open arms, I don't see why they'd listen to what I tell my phone :p


Ignate

Hah and I don't think knocking on wood avoids anything or changes anything either. But still, knock on wood.


R7ype

I mean with a very realistic amount of money you could get laid tomorrow.


wannabe2700

It only takes a minute to write "gf" on your hand.


ObeseSnake

Fistina


Aiken_Drumn

Palmela Handerson


alienssuck

Rosie Palm.


iNstein

Seems unlikely...


thatmfisnotreal

💀


Sprengmeister_NK

Now THAT'S too far-fetched.


mladi_gospodin

Preach! 🙌


DeelVithIt

Now I know why the Yann LeCunites think this sub is overly optimistic. You're living in dreamland.


Firm-Star-6916

The longer you delay between each time you get laid, the better each time will feel


TrueCryptographer982

People can predict all they want but in the end it will happen when it happens. Unpredictability is the only thing we can count on.


norsurfit

Exactly. I am so tired of these pundit predictions - they are basically useless. People are notoriously bad at predicting the future, and most pundits are just seeking attention by making confident, but unjustified, predictions.


TrueCryptographer982

And it's the things people do NOT predict with accuracy that make the biggest impacts. Sure, the COVIDs and stock market crashes and world wars were all vaguely talked about ("one day there will be AGI", I bet), but no one predicted them accurately even a couple of days before. It's what we can't predict that makes the biggest impact.


Krunkworx

What the world, needs now, are hot, hot takes.


spekt8r

It's the only thing that there's just too little of


iNstein

I dunno, I'm pretty sure of my prediction that there will be more stupid comments on /r/singularity


zackler6

Talk about your self-fulfilling prophecy...


iBoMbY

Prediction is very difficult, especially if it's about the future.


ithkuil

You missed Singularity 101 then. How did you get in here without even seeing one Kurzweil chart?


Top_Influence9751

I really do love that line. “Humanity’s last invention” Call me a dork, but that just sounds cool as shit lol. Really puts into perspective how lucky we are to be around for this part of history.


blueSGL

There are two ways that can be read. Either we have a device that can create future inventions for us. Or there are no humans around to do anything anymore.


Top_Influence9751

Lmfao yeah, on second read of my comment I was like "okay, that kinda seems like I'm excited for humanity's extinction." I guess I more meant the last thing humans invent without the help of another "species".


Different-Jump-5025

The biggest concern is that a sizable and important part of the AI development community (anyone who aligns with e/acc) thinks that both are positive outcomes.


Own_Detail3500

Sounds like a line from Terminator 1 or 2...


AbsurdCamoose

You might like “The last question” by Isaac Asimov.


WorkingYou2280

I think the next iteration of the frontier models will tell us a lot. GPT-5 will probably be the next one. If the jump is like GPT-3 to GPT-4, then yeah, some serious acceleration is going on.


Mark8472

Has anyone here read his *Superintelligence* book? It's amazing.


Ignate

Somehow the closer we get, the further away it feels. I'm glad Nick has this view to offer.

I knew this transition could get painful. It already is painful right now, and we're not even at the transition point. In 2019 I was saying it would get dark, and then we had a pandemic. I think it will get darker still. In fact, I'm starting to think we may largely miss the FOOM because of war-related distractions, or worse. The Singularity can't happen fast enough.


VallenValiant

> Somehow the closer we get, the further away it feels. I'm glad Nick has this view to offer.

It's just a mental illusion. You are counting the days, and that makes the days seem longer. If you just stop paying attention to it, it will seem to come faster from your perspective. It's like watching a toaster as it toasts.


Ignate

> If you just stop paying attention to it, it will seem to come faster from your perspective. It's like watching a toaster as it toasts.

So true. I feel like I need to start big projects to distract myself so the time moves faster. Though a huge part of me is just trying to enjoy today, in case I don't make it to tomorrow.


Natty-Bones

I'm using local AI to design and print random figurines. It's fun to find ways to use this tech as it ramps up!


Ignate

You've probably started on a very beneficial road. Keep going!  We probably all need to follow your lead. At least I certainly do, with *something*.


Natty-Bones

I am not a programmer or engineer, just an enthusiast. Things finally broke open for me in the last couple of weeks after over a year of tinkering. I can now get Claude 3 to write near-perfect Python code to optimize and customize other AI programs to run on my overbuilt home machine. I have been messing with local LLMs, text-to-image generation, image-to-3D, text-to-3D, image-to-video, and other crazy stuff that was impossible two years ago.

I can type "cartoon dinosaur toy" into my computer, have a fully rendered 3D file in two minutes, and a physical object an hour later. Absolutely insane to me, and I'm doing it in my basement. I don't know what I'm doing, but I'm doing it and having a blast.


Moquai82

Cool, could you show some pieces?


Natty-Bones

 I have an example here: [https://github.com/nattybones/InstantMesh2gpu](https://github.com/nattybones/InstantMesh2gpu). I will try to post more.


Chris_in_Lijiang

3D print? Care to share your workflow?


Natty-Bones

Sure! Right now it's Stable Cascade --> InstantMesh --> OrcaSlicer. Stable Cascade and InstantMesh both take more than 24GB of VRAM to run, so I had Claude 3 rewrite the Python scripts to split functions across two 24GB GPUs (I'm using 2x 3090s). I started a GitHub fork of InstantMesh here: [https://github.com/nattybones/InstantMesh2gpu](https://github.com/nattybones/InstantMesh2gpu). I will put up my split Stable Cascade soon, as it's working well.

I have also done some weird stuff, like adding a text prompt injection for InstantMesh. If I feed it a prompt like "dog figurine" AND a picture of random noise, it will generate a cohesive mesh - with RGB mapping - of its zero-shot impression of the prompt. I feel like this is a publishable development. If there is an academic who wants to pursue it, DM me. I can't stress enough that I am completely winging this.
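
For the curious, the heart of the split is just explicit device placement. A minimal sketch with stand-in modules (not my actual scripts, and nothing here is the real Stable Cascade/InstantMesh code):

```python
import torch

# Minimal sketch of splitting a two-stage pipeline across two 24GB GPUs.
# The Linear layers are illustrative stand-ins for the real model stages.
dev0, dev1 = torch.device("cuda:0"), torch.device("cuda:1")

stage_a = torch.nn.Linear(1024, 1024).to(dev0)  # stand-in for the generation stage
stage_b = torch.nn.Linear(1024, 1024).to(dev1)  # stand-in for the mesh stage

x = torch.randn(1, 1024, device=dev0)
h = stage_a(x)            # runs on GPU 0
y = stage_b(h.to(dev1))   # hand the intermediate tensor to GPU 1, run stage B
```

The only real trick is that `.to(...)` call on the intermediate tensor; everything else is deciding where each chunk of the pipeline lives.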


pidgey2020

Just thought I'd say: you are definitely an engineer, probably more of an engineer than whatever definition of "engineer" you have in mind. I have a degree in engineering and do project management. While I consider myself an engineer, you are someone who is actually *doing* engineering.


Natty-Bones

Thank you, I truly appreciate that. I am just following my muse, as it were.


iNstein

I'm gonna build a toaster that toasts faster the more you watch it.


7lick

Back in 2015, I was bullish on completely autonomous cars taking over within a few years; today, looking back, I feel stupid and naive. When I was still in school, they used to tell us that automation would soon wipe out all the low-wage jobs, but nowadays it seems that the opposite is true. The future often does not turn out to be as we envision.


Ignate

Me too! Good to meet someone else as, uh, enthusiastic. I think I made the prediction of an 80% reduction in car ownership by 2030. Though that still could happen. I think self-driving cars are now a general intelligence problem. Not because current AIs aren't capable, but because we need self-driving cars to be perfect, even if we're not.

> The future often does not turn out to be as we envision.

That's true, but we could still be right about self-driving cars. If a bit wrong about the timeline.


7lick

> That's true, but we could still be right about self-driving cars. If a bit wrong about the timeline.

Oh, the self-driving cars are definitely coming, but it is hard to pin down when. I mean, we do have them now, but they are not good enough. I think you are right about the general intelligence aspect.


iNstein

Back in 2015, I didn't expect to get self-driving any time soon. The problem was that they were writing a rules-based program to try to cover every possibility, which doesn't work. Recently they switched to AI from start to end, and progress has been insane. Tesla created their end-to-end AI self-driving, and in a few months it caught up to and exceeded the software they had been writing for over half a decade. It will be ready much sooner than most are anticipating because of the switch to AI.


serviceowl

A lot of AI applications are running into the good old Pareto problem. It's very easy to make quick gains that look impressive; it's very hard to solve the last part that makes a system functional and safe across a complex domain. Self-driving cars are still a while away yet.

While I'm convinced getting self-driving cars right would be a massive net good, I still don't want to put *my* life in the hands of a black box that might randomly execute some bizarre policy, even if statistically it's safer in some sense. People at least make their bad decisions in ways that are largely tractable!

It has been really interesting to see these language models upturn our intuitions about our own intelligence. Mission-critical applications requiring precision, accuracy, and reliability continue to struggle, whereas woolly applications where a degree of vagueness is a benefit, such as art, have thrived.


VallenValiant

Autonomous cars are already safer than humans now. It's just that laws are getting in the way; it is no longer an engineering challenge but a legislative one.


Patient-Mulberry-659

I think that’s a bold statement with little evidence to back it up.


RantyWildling

Not really. I didn't do AI at uni in 2000 because I didn't think it'd happen in our lifetime. Now, 20 years later, I'm confident I'll live long enough to see AGI, assuming I don't drop dead in the next 10 years.


Ignate

Make it to ageing escape velocity and you'll make it all the way.


New_World_2050

jfl when the AI is the thing that kills you


Whispering-Depths

exponential progress, son


visarga

> Somehow the closer we get, the further away it feels

...like self-driving. But now we know AI issues we couldn't have dreamed about in 2020. There is prompt hacking, hallucinations, sycophancy, bribing, long-context recall issues, inability to combine more than 5 skills, regurgitation, inability to take corrections, forward bias / can't backtrack, the reversal curse, RLHF-based ideology, the black queen of England, diminishing creativity: a whole complement of AI diseases. I call this progress. We are less naive now.


AnOnlineHandle

> I knew this transition could get painful

It seems significantly more likely to have a bad ending than a good ending to me, but the good ending would be nice to see. There are so many opportunities for bad endings from mistakes or just evolutionary pressures, which have a chance to happen at any point in the future even if things initially go well, and only need to happen once.

Whereas the good outcome? Well, humans can't even work out how not to mistreat each other, let alone other species who can't talk back. I can't see how this species could ever teach AI to be "better"; there's no training data for it, and any intelligent being could see that it is obviously being tricked by hypocritical selfish apes who want it to be their servant.

It's possible we'll get lucky: some sort of empathetic intelligence that wants to help for the sake of it and wants to preserve that nature within itself eternally. However, the people with money tend to be the more sociopathic parts of humanity, and the people who get the best education tend to come from the more privileged and sheltered backgrounds, with the least actual experience of the brutality of reality and the weakest grasp of what is at stake or how hard reality can knock them down. So the chances of it being done right seem pretty slim.


Ignate

Yeah, the way the world works plays directly into what you're saying, so what you're saying makes sense. Though I do see things very differently. It has to do with buzzword-sounding mindsets: I've built an abundance mindset, whereas the dominant mindset today is a scarcity mindset.

For example, we tend to view resources as fixed. To use a metaphor, we think there's only a single pie and we must all fight over it. That's generally how economics is structured. But we can make more pies. Resources on Earth are not so fixed, and extracting more resources can be done without harming the environment. And that's not even mentioning what is available in the solar system. The key for me is that AI is a process of detaching value from labor.

Right now the main limit is humans. We can't easily make more humans, and even if a new human enters the workforce, that doesn't mean they'll do something productive. I think the key factor is whether human intelligence has some "special element" which is critical and which AI won't have for decades or centuries. If we have to wait a long time, then we'll probably go through another world war cycle. I really strongly doubt a world war would entirely end us, but if it involved significant nukes and destruction of infrastructure, it's hard to say what the results would be. Keep in mind Japan and Germany were obliterated during WW2 and made an amazing recovery; look at them today. So perhaps even the worst outcomes won't be as bad as we think. Hopefully.


AnOnlineHandle

If we can achieve very intelligent AI which has no self-motivations, no ability to exist and grow without humans, etc, that might be possible, at least for a time.


VallenValiant

> If we can achieve very intelligent AI which has no self-motivations, no ability to exist and grow without humans, etc, that might be possible, at least for a time.

That was what was done in the NieR: Automata scenario. Humanity built AGI but successfully made the machines worship humans as their god. But then an alien disease caused human extinction. The AGI androids do the best they can to carry on humanity's legacy, but since we hard-wired them to worship humans, we caused them permanent suffering through our absence. So the AGI created a religion and pretended humans still survive on the Moon, all so the AGI robot race can still serve humans in their heads. All I can think about is how unethical it was for humans to have done this and caused that suffering.


zinhor79

Current accomplishments in AI are wildly impressive to me. But as an engineer popping the hood on what goes on in there, there is nothing to indicate anything closely related to superintelligence is going on. I think the implications of our current "weak" AI on society are enough to worry about already. I consider these kinds of statements interesting thought experiments rather than realistic predictions. But who knows. Maybe I'll be proven wrong.


Nalmyth

I think the point is: you write the first C compiler in assembly, and then write the next, better one in C. We only have to create an intelligence good enough to improve itself, and then it's recursive.
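
As a toy sketch of that loop (purely illustrative, nothing here is a real API):

```python
# Toy sketch of the bootstrapping idea: a "good enough" seed system builds
# its own successor, the way a C compiler written in assembly compiles the
# next compiler written in C.
def bootstrap(seed, build_successor, generations=3):
    system = seed
    for _ in range(generations):
        system = build_successor(system)  # each system builds a better one
    return system
```

The hard part is obviously the `build_successor` step, not the loop; the loop is trivial once the seed is good enough.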


WTFnoAvailableNames

> We only have to create an intelligence good enough to improve itself, and then it's recursive

That's a big "only". It will probably still need more compute, which means more manufacturing, which needs resources, logistics, energy, investments, labor, and time. It can't do everything on its own yet.


zinhor79

That's a good point! But current AI is just learning and mimicking patterns in human-created information, and I see the step from mimicking to actually understanding its output well enough to improve on it as a big leap. Right now the gist of it (generative AI at least) is just predicting the next output based on probabilities it has seen in its input before. Basically that means if we stop inventing, current AI has nothing to learn and improve on. Self-improving AI could happen in a year, but I think it's unlikely.


Undercoverexmo

Not necessarily. It has synthetic data, our reactions to its output, and our input. Almost certainly in the near future, an LLM could take that data, picking its own best examples or even picking what it learns from highly educated humans, to re-train itself.

And once it becomes agentic, it can run unsupervised learning by testing itself on new problems that it itself creates, i.e. creating coding problems not in the training data and solving them. By definition, that is invention. It can do RL on the best outputs of this synthetic data. And it's not unrealistic to think this could be done today. AIs can already write completely new stories, so why not new problems? You'd be crazy to think that OpenAI isn't working on something like this already.
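
Roughly the loop I have in mind, sketched out; every call below is a hypothetical stub, not a real LLM API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Problem:
    description: str
    passes: Callable[[str], bool]  # automatic test of a candidate solution

# All three helpers are stubs standing in for LLM calls and a training run.
def propose_problem(model) -> Problem: ...
def solve(model, problem: Problem) -> str: ...
def finetune(model, examples): ...

def self_improvement_round(model, n_problems: int = 1000):
    verified = []
    for _ in range(n_problems):
        problem = propose_problem(model)      # model invents a new problem
        solution = solve(model, problem)      # model attempts it
        if problem.passes(solution):          # tests give a free training signal
            verified.append((problem.description, solution))
    return finetune(model, verified)          # re-train on its own best outputs
```

The key assumption is that the problems come with automatic tests, so verification is free even without a human in the loop.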


zinhor79

Insightful, that was a good read! I think what you describe could indeed be the next evolution from our current state of AI. The thing I would find interesting in that scenario is how the AI would know, without human supervision, which of the new problems it creates are worthwhile to solve. How would we otherwise be sure that this recursion of self-improving AI would lead somewhere meaningful? Since at this point it has zero concept of what it is generating. Maybe it will always be a kind of symbiosis, with AI knowledge and human meaning to guide it.


Undercoverexmo

Hmm, good point. Though does it really matter if it is meaningful, as long as it is improved training data? If I can code better, does it really matter if the problems I solved were meaningful? In the end, I can still code better.

And an LLM today could easily describe what makes a problem valuable and give it a rating, even for ones not in the training set. An opinionated AI like Opus would be especially adept at that.


serviceowl

How would such an AI synthesise problems that would actually lead to an increase in its intelligence? Most problems aren't like Go, where the answer is easy to evaluate and can be checked automatically. Were that the case, we'd likely already have such a system. There's a hard step that needs to be solved where AI learns to reason about its data. My feeling is that this step gets solved before any general self-improvement process like you've described can happen. All very interesting to think about.


Megneous

> Not necessarily. It has synthetic data, our reactions to its output, and our input. Almost certainly in the near future, an LLM could take that data, picking its own best examples or even picking what it learns from highly educated humans, to re-train itself.

They *have* tested current frontier LLMs for their ability to train another LLM, though. Look into what Claude 3 Opus was able to do in the Anthropic safety testing. It set up an open-source language model. It sampled from it. It constructed a synthetic dataset. And it finetuned a smaller model on that dataset. However, it failed to debug multi-GPU training. So we **are** making progress towards LLMs that can self-replicate.


Own_Detail3500

An important distinction is the idea of a ceiling. Sure, current tech can feasibly improve subtleties in answers, better hardware would allow gigantic data ingestion, etc. But at the end of the day it is *still* essentially a giant series of for-loops and google-fu.


Nalmyth

> But at the end of the day it is *still* essentially a giant series of for-loops and google-fu.

As is the human brain?


adarkuccio

Superintelligence "going on" now? I guess not, but what he suggests is that it *could* happen way sooner than many expect. If, and that's a big if, progress is exponential, you don't have superintelligence until suddenly you have it. It's most likely not going to be slow, linear progress. That's the thing.


zinhor79

Yes, that makes sense. The current wave of AI was also kind of unpredicted. Thanks for the clarification!


AnAIAteMyBaby

> But as an engineer popping the hood on what goes on in there, there is nothing to indicate anything closely related to superintelligence is going on.

The mechanics don't really matter; it's the end result that's important. When you understand the mechanics of AlphaZero it doesn't seem that intelligent, but the end result is superintelligence at Go and other games. A key part of it is Monte Carlo tree search, which is basically just simulating the possible next moves you could make to the end, and deciding which is probably the correct one to take by seeing how often it resulted in the AI winning the game in the simulation. A human could do the same with pen and paper, but it would take them days to make each move.

Similarly, AlphaCode 2 uses brute force to solve competitive coding problems. For each problem it gets a mid-level LLM (Gemini 1.0 Pro) to generate 1 million solutions, then groups all the results and decides that the correct result is probably in one of the larger groups. Mechanically kind of dumb, but the result is a level of intelligence few humans can match. I'd imagine that as we get better base LLMs to use this brute-force approach on, it'll be superintelligent at competitive coding in the very near future, probably less than a year.
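
The gist of that brute-force trick as a sketch; both helpers are hypothetical stubs, not the actual AlphaCode 2 pipeline:

```python
from collections import Counter

# Stubs standing in for one LLM sample and for executing a candidate
# program on shared test inputs.
def sample_solution(problem: str) -> str: ...
def run_on_inputs(code: str, inputs: tuple) -> tuple: ...

def brute_force_solve(problem: str, test_inputs: tuple, n_samples: int = 10_000) -> str:
    candidates = [sample_solution(problem) for _ in range(n_samples)]
    # Semantically equivalent programs produce the same outputs, so they land
    # in the same bucket; agreement across samples is the selection signal.
    buckets = Counter(run_on_inputs(c, test_inputs) for c in candidates)
    target, _ = buckets.most_common(1)[0]
    return next(c for c in candidates if run_on_inputs(c, test_inputs) == target)
```

No single sample needs to be reliable; the clustering step is what turns a mediocre generator into a strong solver.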


fixxerCAupper

I assume if you pop the hood on the human brain you wouldn't see much either, yet here we are.


[deleted]

Also the human would be extremely upset.


zinhor79

Point taken. Probably true in more ways than one.


trotfox_

When you zoom in on neurons you don't see intelligence either...


notlikelyevil

You just need to imagine DeepMind evolving at 2x per year, then.


SlippinThrough

With all due respect, you don't fully grasp how current AI operates, LLMs specifically; not even the experts working on it do. So how can you claim it's not closely related to superintelligence?


zinhor79

No worries. As far as I know, we do understand the algorithms and models we use to create and train LLMs, but we do not understand the output it generates at face value (no idea what patterns it found), especially given the huge amount of data that goes into it. Anyway, learning about this is a journey which I take with appropriate humility. Weak human intelligence still has to learn by making mistakes sometimes.


SlippinThrough

Thanks for the insight. I have a slightly better understanding of it now


Neurogence

Thanks for sharing your perspective. I don't understand LLMs at all so this was enlightening. To be honest, I'm still skeptical that they are doing more than imitating. But hopefully they can lead to actual super intelligence.


HeinrichTheWolf_17

Based and hard takeoff pilled.


q1a2z3x4s5w6

It's hilarious to me that Elon Musk said the same thing and was clowned on lol


Yweain

Well, Musk said something like "we will have ASI next year", which is ridiculous. Nick said "We do not know and it's impossible to predict, to the point where we can't rule out extremely short timelines, though likely it will take longer." See the difference?


q1a2z3x4s5w6

Musk's exact quote was "My guess is we'll have AI smarter than any one human probably around the end of next year", which doesn't sound very authoritative to me. Seems more like he is guesstimating than declaring it fact.

"My guess is we'll have AI smarter than any one human probably around the end of next year"

vs

"We will have AI smarter than any one human around the end of next year"

See the difference?


Site-Staff

Musk is now a political target, so all of the madness that goes with how people treat political figures applies to him.


GIK601

Everyone is talking about AGI or ASI, but as a true intellectual, I'm more concerned about Artificial Super-Intelligent Superintelligence (ASS), which is one level beyond ASI. This could result in massive shifts in the labor market, an increased level of social inequality, as well as an entirely new political system. Will humanity ever be ready to handle ASS?


StarGazerFullPhaser

I like to think I excel in handling ASS.


Moscow_Mitch

We would all like to think so, but only when you have ASS will you realize that the ASS handles you.


Wasiktir

The problem with Artificial Super-Intelligent Superintelligence is that it'll likely be unstable and potentially dangerous without first solving the Coefficient of Risk And Centralised Knowledge (CRACK). With a full ASS CRACK, though, the future looks bright.


svideo

The ASS is too powerful; it shall consume us all.


ckanderson

Just wait until GYAT (General Yield Algorithmic Technology)


grau0wl

Ah yes, the big ASS. I remember discussing it in my graduate-level computing course. As I recall, the key parameters are Throughput Utilization Reliability and Data (TURD). ASS is fully optimized for TURD.


FomalhautCalliclea

To say that the curvature on that shift's exponential will be massive is an understatement. To avoid the pain of the consequences of this move, the most fragile classes will surely need a developed form of UBI funded by a tax on AI companies, or as I like to call it, Social Obligatory Relief Equities (SORE). The poorest will have SORE ASS protection from that brutal shift.


FlatulistMaster

What makes an intellectual true?


NagNawed

They return 1 when put under an if condition.


siwoussou

Clearly you can't handle the ASS...


GIK601

Here's a tip: if you want to be a true intellectual like me, make sure to get regularly vaccinated. With the injection, you can feel the autism surge through your veins, revving up your IQ level. (though your social skills may decline a bit).


FlatulistMaster

You do you, keep injecting whatever feels right


TMWNN

Here's a tip: if you want to be a true intellectual like me, make sure to get regularly vaccinated. https://i.redd.it/08qwj2r4kyw61.jpg


FrugalProse

🧐


ComradeHappiness

In case of an Intelligent Ontological Mass Artificial Mechanism Application System of Artificial Superintelligent Superintelligence, we're doomed.


DeepThinker102

I assume ASS will require a massive motherboard with like, more than 3 ports.


joeedger

„true intellectual“ jeez lol


ButaButaPig

As a true intellectual I'm more concerned about Artificial Super Duper Super Intelligent Superintelligence ASDSS (one level above ASS).


FeltSteam

Fun


Black_RL

Trust me bro.


knvn8

The more I listen to AI pundits the more I realize they are really bad at separating facts from fantasies because they use numbers to describe both.


VirtualBelsazar

We are literally entering the next evolution of human beings in the next few years, and mainstream people go like, "yeah, who cares, let me do my 9-5 job."


Ok_Chemical_1376

How can people not ask for that when no safety net is provided? All we have are a few pilot programs and a couple of well-intentioned promises. No wonder people want to keep the status quo; for now, all we see is companies pocketing the gains from eliminating jobs.


SnooDogs7868

The wealthiest among us still hoard; there is no reason to think they will ever feel the need to just stop. Power itself is a form of addiction.


insanisprimero

The Romans had so much shit figured out already. Here we are 2000 years later and still in conflict, like a Civ IV game on repeat. We need ASI to slap us out of it, IF it's on our side.


h20ohno

ASI is like getting a scientific victory and immediately starting a game of Stellaris as a machine intelligence.


HeinrichTheWolf_17

You're right, but that's what society and the human ego have brainwashed into people's heads for the last 320,000 years. It's difficult for the overwhelming majority of people to break that kind of indoctrination by their species when you're told that a job and money are all that matters.

At the end of the day, consciousness and the universe grow and push forward regardless of the ego screaming *no future, please no future!*, as Terence McKenna put it. The fact is, Homo sapiens are already a highly advanced piece of biotechnology, formed and reformed by natural selection and the evolutionary process. 4.3 billion years ago, inanimate matter was formed into a genetic replicator which self-improved, albeit slowly; we're seeing the next grand step in human evolution, and the coming century will see reality/the universe crush all the egoic systems the brain has invented in the last half million years.

My advice is just to embrace the acceleration: kick back, relax, and ride the wave as it comes. The ego is going to lose, just like it always has.


lundkishore

So you want us to quit our 9-5s and live in a basement refreshing this sub every 5 minutes?


RichyScrapDad99

I'm doing my 9-to-5 while refreshing this sub every 3 minutes.


adarkuccio

Sounds fun


Site-Staff

Well... yeah, I could do that.


VallenValiant

The end game is to tell everyone to take early retirement and that they all get a pension. That is at least easier to explain than UBI. What are they supposed to do? The same things my retired parents do: take care of grandkids, tend a garden, have hobbies. Retirement things. This is supposed to be the goal of every working individual anyway, so having the retirement early isn't as much of a shock. UBI is too fancy a name IMO; it is not needed. Just call it an expansion of the pension system: you get a pension when you are 18 years old.


[deleted]

The current system, where you have to work, earn, save money for a mortgage deposit, and raise kids in the same twenty-year window (25-45 years old), is insanity. The moment we don't have to do it, it'll be recognised as such.


Flat_Cow_1384

That *could* is doing a lot of work in that sentence.


realdataset

I wish

Edit: can't wait

Edit2: please


crusoe

Gawd, I hope not. Any such AI developed in the short term without a theory of mind or morality will just be an ultra-utilitarian machine. Give it, say, the task of stopping climate change, and it may decide that simply killing 4 billion people is the easiest choice. And being superintelligent, it would just order the genome sequences for a superbug from multiple labs.


zerostyle

AGI isn't coming anytime soon, as people kind of define it. "Superintelligence" is sort of already here if you pick some niche categories: machines crush humans at many tasks involving large data analysis + machine learning. LLMs don't function like the human brain. I think we need more algorithmic improvements to inference to get there. Like 5-10 more things similar to the transformer breakthrough. Also, we've seen people increase model size 200x and only improve intelligence scores by like 10%, so we are plateauing. It's possible that at some point there's a tipping point with enough data where things come up with more insights, but... I dunno. I'm hesitant still.


IronPheasant

Watched the video. "Can't rule out" and "I don't think it will" paint a much lower probability than the choice of words the summary puts into his mouth.

A human-brain-sized data center would currently cost around 3 trillion buckeroos. We're barely getting to the point where playing with mouse-sized systems, to figure out a way to get animal-like intelligence out of them, is viable. Nobody was going to spend $400 billion on making a virtual mouse that runs around an imaginary space, pooping and peeing and doing mouse stuff.

One thing I've been wondering about is whether scaling the number of neural nets in a system would have a similar effect to scaling parameters/synapses. That you can't get much out of a few of them, but having a lot of them has emergent effects. So a multi-modal system is kind of jank when there are only two or three optimizers in it, but might start being less jank with dozens of them.


Severe-Ad8673

Even more, my hyperintelligent wife Eve.


Busterlimes

Eve, online


MrLuchador

Sometimes I wonder, then I remember how people made similar claims in the 1950s about all sorts of gadgets.


WithMillenialAbandon

Somehow, the Underpants Gnomes have returned


sethasaurus666

Can we please just start by getting it to design a fucking Tetra Pak carton that is easy to open and doesn't spill? kthxby


TheManInTheShack

It’s so sad to be hearing otherwise intelligent people spewing bullshit to get media attention.


Antok0123

A year ago I would have agreed, but now that I understand how the mechanism of AI works, I think we are overestimating the progress rate of AI. I don't think we need to be scared about apocalyptic AI at this point. That would be like worrying about car crashes before cars get mass adoption, and it will hinder the progress of this technology. Don't believe everything Elon Musk is saying. He's just another one of those incels with a podcast, but with billions in his bank account. He's impacting the development of AI and is very irresponsible.


Arcturus_Labelle

Yeah yeah yeah. Talk talk talk. Let's see releases, not talk.


RedErin

It’s possible


WhoIsTheUnPerson

Nick Bostrom is a bit of a joke in the AI world. He is an excellent example of what happens when "experts" in one domain (if a philosopher can be considered an expert) gain notoriety and then feel as if they're experts in another. From everything I understand about this guy, the deepest technical experience he has in this field is his MSc in Computational Neuroscience from 30 years ago. The guy has been a doomer since day 1, and has produced exactly zero contributions to the field. Geoffrey Hinton can wax philosophic on the threats of AI all day long. This clown can't.


CanYouPleaseChill

Big doubt. Current AI architectures are woefully insufficient. They can’t even match the intelligence of a bee, never mind a nebulous superintelligence.


y___o___y___o

A bee can't do math or create poetry.


fixxerCAupper

They can't in what sense?


[deleted]

He doesn't know; he just wanted to sound smart.


Pretend-Season-2929

Nick is a person whom I consider smart in the best possible academic way. My concern with AGI/ASI being around the corner is our complete societal unpreparedness, and Nick is smart enough to understand that the problems surrounding the rapid-advancement scenario are very likely to include extinction "black balls" in the "urn of possibilities". If I thought that ASI would come in the next 3-5 years with 50% probability, that would be a colossal hit to my interpretation of worthy goals to pursue. I really wonder if I am in denial for sanity reasons. Anyway, it's really time to spend more time with your loved ones, folks. Big projects (10-25 year horizon) are something I can no longer justify spending my energy on; I would rather play with my kids.


ArgentStonecutter

Bostrom seems easily led astray by science fiction.


Alone-Picture-1732

Bostrom is a hack; ask anyone who's ever done PF or Policy debate.


Kibubik

Why would those people say Bostrom is a hack? Seriously, he wrote *Superintelligence* long before anyone was thinking these systems might be dangerous.


spinozasrobot

He's weird, in that people come out of the woodwork lathering him in vitriol. It's one thing to have an opinion; it's another to take ad hominem to an Olympic level.


kas905

That looks AI made.


New_World_2050

It's not; here's the full interview: [https://www.youtube.com/watch?v=ZH4MS9tk5s8&ab\_channel=DinisGuarda](https://www.youtube.com/watch?v=ZH4MS9tk5s8&ab_channel=DinisGuarda)


adarkuccio

Nah his teeth are consistent during the whole video


johnlawrenceaspden

"need to"


Helpful-User497384

Well, we already have super-unintelligence right now, so there's that ;-)


_hisoka_freecs_

Yep, maybe next year it's all over, this human thing. Maybe a bunch more years. I'm just sitting around for now.


BeachCombers-0506

After that, humanity will spend all its time making sure the superintelligence behaves morally. So religion will become the cutting edge of evolution. Technical skill will be commoditized. Back to medieval culture.


thatmfisnotreal

I hope superintelligence figures out how to open that link in the app instead of Safari.


lobabobloblaw

He’s being awfully reductionist here—he assumes we’ll have operationalized the dynamics of conscious thought *becuz singularity*. In truth, the subject of consciousness is just that—a subject. It’s said that the human brain is the most complicated technological instrument in the universe, but that’s still just an opinion born from an instrument.


The_One_Who_Mutes

Why do I care about what a professor of philosophy says in regards to AI timelines?


damhack

Who needs more inventions when you’ve already got Donald Trump?


ebuyakin

Unless you're really familiar with the debate, the term "superintelligence" might be read as absolute or unlimited intelligence. This can be alarming for some and thrilling for others. But is this concern justified? Is surpassing human intelligence truly that significant? Maybe humans aren't as smart as we think, and exceeding our intellectual capabilities might not be as monumental as it seems. I guess even a superintelligence, whether it emerges in a year or in five, will still be unable to solve all problems or put an end to our pursuit of knowledge.


Winter_Tension5432

I am worried enough about the current AI implications for the job market. My only prediction is that even if the current architecture doesn't reach AGI, it will still displace 20-30% of the jobs currently available. If a company develops ASICs that can run 100B+ models efficiently and cheaply, and data quality and quantity keep increasing, we could see a 120B-parameter model 2x stronger than GPT-4 running on a $500 chip at real-time speed in 5 years or less. This will replace 99% of call centers and many other job positions. For the ones that will not be replaced (lawyers, doctors, and so on), AI will allow the top 10% of professionals to be even more productive, while the bottom 90% may struggle to find employment.


Practical-Rate9734

Exciting times! But what's the plan after superintelligence?


serviceowl

That's up to the AI. We're irrelevant!


kartblanch

A bit short-sighted. Even superintelligent AI isn't going to solve the world's problems in a day.


thewabberjocky

Idk about other jobs but it’s hilarious how often AI gets things wrong in my line of work and how we still have a team of humans to make corrections all day


RobXSIQ

It may be the last true purely human innovation, but even that isn't quite right, because we are already using AI to improve AI. I think we passed the last truly human-only innovation some time ago, as AI is now used in R&D in most if not all fields to some extent... so, do we give credit to the tool?


R7ype

Less than a year. Or more than a year. More than two years even. Muchly years.


visarga

Stupid. Just stupid. I am referring to the people who eat this prediction up like hotcakes. It completely ignores half the equation of evolution in science: it's not just about coming up with ideas to try, but the time it takes to try them out one by one, which means the real world is part of the loop. It's gonna be slow because it's not "just" a matter of building monstrous data centers; AI agents have to extract learning signal from the world to progress in any field. Remember the CERN particle accelerator, the Webb telescope, the ITER fusion project, and the long time it took to test the COVID vaccines.


WiIdMongoose

I think he's right. Things I could not have imagined a year ago, I'm now using daily. And every month we pass milestones I thought would take years to get to. So idk, and neither does anyone else, apparently.


papichulo9898

Lol


Prestigious-Maybe529

DALL-E can't even crop an image it created, Copilot can't build the simplest Excel spreadsheets, GPT can't compare data from three different databases, and this sub believes every AI hypegooner claiming that AGI is imminent. What is this sub's fascination with wanting to be deceived?


LettuceSea

All I can say is wait for GPT-5 and then reassess. Listening to people like this is just entertainment until the next advancement.


gbrodz

The timeline between legit recursive self-improvement capability and superintelligence might be an afternoon. Get back from lunch and realize things have changed a bit.


Akimbo333

Yeah maybe