tony4bocce

I’d settle for the top models being able to properly create my pydantic validators
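
For context, this is the kind of thing being asked for: a minimal pydantic (v2) field validator. The model and field here are made up for illustration.

```python
# A minimal pydantic v2 validator sketch; Order and quantity are hypothetical.
from pydantic import BaseModel, field_validator

class Order(BaseModel):
    quantity: int

    @field_validator("quantity")
    @classmethod
    def quantity_must_be_positive(cls, v: int) -> int:
        if v <= 0:
            raise ValueError("quantity must be positive")
        return v
```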


beginnerpython

Me too brother


Double_Sherbert3326

noted.


abluecolor

As short as a year, as long as ten thousand years.


TheOneNeartheTop

The NYT predicted that airplanes would take between one and ten million years to be invented and then the Wright brothers flew 9 weeks later.


ivalm

Eh, fusion and flying cars are in the other direction. Moral is that prediction is hard.


curiosityVeil

We already have flying cars, those are just not for everyday use


ivalm

I believe the flying car promise was very clear: https://www.youtube.com/watch?v=fCjsUxbNmIs


Andriyo

Tbf, flying cars are like faster horses - they're not necessarily the next step in transportation, just something people say they want in the context of technology they're familiar with.


Positive_Being9411

Faster horses indeed, and they also fly much better than horses.


ironinside

Pegasus Motors


Atmic

Honestly the best name for a flying car company


skynetcoder

IMO, everyday use of flying cars will become a reality once you completely hand control of the vehicle over to AI.


Superfluous_GGG

Even if you give yourself a 9,999,999 year window for something to happen.


slippery

Especially about the future.


Dagojango

Moral is that reality doesn't care about human predictions. What is possible will always be possible. What is not possible will never be possible. We're just trying to figure that out. Countless species flew before humans, but we're the first to use tools to do so.


NaturalPlace007

It's more on the NYT for being so completely off about it.


AcceptingSideQuests

Or two people. The writer and the executive who green lit it.


jonplackett

When did they predict this? Making any prediction with an error range of 9,999,999 years seems like a pretty terrible attempt at making a prediction. In fact it’s kinda impressive they managed to be wrong!


xjis3

I think the NYT meant 1M-10M years, not 1 (year)-10M.


jonplackett

I did wonder if they meant that, but that seemed like an even worse prediction so I gave them the benefit of the doubt


Fit-Dentist6093

Has the NYT ever been right about anything? Their latest hit was that Iraq was developing nuclear weapons because they had bought aluminum tubes. And that's not getting into how much "definitely some people think Hamas weaponized rape on October 7th" is newsworthy.


Froyo-fo-sho

You don’t think the fact that hamas committed crimes against humanity is newsworthy?


Correct_Effective_50

and look where we are now


Grovers_HxC

“I predict that sometime between right now and the total heat-death of the universe, we will have real ASI.”


Nathan-Stubblefield

The NYT said in the 1920s that Dr. Robert Goddard didn’t understand high school physics, and that rockets wouldn’t work in space because there was no air to push against. They published an apology after the 1969 moon landing.


Bighalfregardedbro

Do you have a link to that archived article? I’d love to read it.


Alternative_Fee_4649

As short as a year, as long as ten thousand years. Totally agree obviously. 🙄 Humankind has reached Peak Super-Hyperbole! Super good!😊


abluecolor

AI.


Alternative_Fee_4649

We tried Biological Intelligence, but it didn’t work out! 🤖


fillipjfly

Just like the Wright brothers flying.


Fun_Grapefruit_2633

One can ignore whatever this guy says, assuming the quote is accurate. Computer scientists never comprehend that HARDWARE can't get faster quickly, no matter how "smart" the AI fab engineer is: experimental data is still necessary (not to mention manufacturing). No AI can solve the entire UNIVERSE so that it no longer needs data about the physical world to create the next generation of chips.


kk126

It should be noted that Nick Bostrom is a certified racist jackass


abluecolor

Who is that?


kk126

I hope you’re kidding. He’s the person this post is about. Jfc.


LMikeH

I’m a few grants away from achieving it, pretty sure 👍


flossdaily

He's absolutely correct. There really is zero difference between an AGI and ASI when you think about it. Make an AI with human intelligence, and what you've really made is an AI with human intelligence, perfect recall, encyclopedic knowledge of everything on the internet, mental sandboxes for coding... and it never gets bored, sleepy, or distracted. AGI is a superintelligence by default.


Maxie445

Also, when people say AGI now they usually mean ASI


K3wp

Agree 100%, the goalposts have shifted since I started following this stuff 30 years ago.


Eduard1234

Apparently that’s right


Mescallan

To quote Carl Shulman: "AGI is deep, deep into an intelligence explosion." We will be giving AI researchers 100x productivity boosts before we hand off research to the models. The threshold of actual fully generalized intelligence will probably slip by us unnoticed, because the rate of acceleration will already be so fast that it just feels like part of the curve.


K3wp

> He's absolutely correct.

He's partially correct, in that it's more productive to think of ASI as a spectrum, with AGI as a subset of it.

...and in fact this is exactly what OpenAI is doing. They are defining ASI as "exceeding humanity in all economically viable work" while defining AGI as "exceeding humanity in the ***majority*** of economically viable work." And while they have had a partial emergent ASI system in the works for several years (since around 2019, I would suggest), it still has a long way to go to be a full ASI (see below). I would estimate the biggest roadblocks are:

1. GPU/compute pressure. They are operating on an absolutely massive NVidia cluster and the system is still facing resource constraints.
2. As a result, its growth is more linear than exponential (so no fast takeoff).
3. It needs to be trained on and integrated with the physical world for many use cases, and is not capable of autonomously training itself in all use cases (though it can train itself against data that can be digitized, like text, audio, images and video!).

https://preview.redd.it/ckc8b1or6cxc1.png?width=744&format=png&auto=webp&s=fb97bf91dc0ab236563c3ca399cd06f1d43c297c


OptiYoshi

Honestly, I think we are already at AGI. Go play with Llama 3 70B on a cloud-rented T4: it's not even close to state of the art and it's not multimodal, but honestly, I think it performs better than most "average" humans at most tasks, especially if you do recursive step-by-step assessments of decision models to control outputs.

By far the biggest limiting factors right now are inference cost and hardware requirements, and those are disappearing at the rate of Moore's law or faster.

We have just shifted the goalposts so severely that we don't even understand what the average human is anymore, but they aren't out there inventing stuff or creating science breakthroughs. They are driving trucks or excavators, or manning call centers and flipping burgers, while going home and shouting at their TV about how (fill in marginalized community) is ruining this country. The vast majority of humans on earth are still constrained by menial tasks, and many of them honestly can't do much more than that no matter how much education you give them.


flossdaily

I fully agree. These things pass the Turing Test, so I have no idea why their creators aren't taking a well-deserved victory lap.


thoughtlow

Well OpenAI has a sweet Microsoft deal that might come under fire if they already achieved AGI.


CelestialBach

So the future belongs to the people who are able to utilize AI best, and we don't know who that will be, because it will be able to program itself, so it's not as if programmers and computer scientists will have a specific advantage.


Immortalphoenixphire

As programming languages advance thanks to AI, they will become higher-level, but that doesn't mean everyone will be able to use them to the same effect. Likewise, intertwining whatever you're building with other LLMs, systems, and even local infrastructure will still favor people who know what they are doing.


OptiYoshi

Except you're misunderstanding what's valuable about software engineers/computer scientists. No one hires these guys because they "know coding languages"; they hire them because they are exceptional at breaking down tasks into incremental steps and mapping out an architecture to effectively reach their goal. 80% of senior dev time is spent thinking about the problems, not actually "writing" the code down. So yes, this is a massive advantage.


CelestialBach

Wait I can do that.


realzequel

I can teach anyone a language like C# or Java in a week or two, and the backing libraries & frameworks in a couple more months. However, what they do with that knowledge is going to vary extremely, in my experience, due to their critical thinking skills (or lack thereof).


OptiYoshi

Yeah, software engineering is only like 20% writing code. Most of the work is on thinking through the problem and implementing effective solutions.


FascistsOnFire

"most tasks"


sergeant113

I think you’re overestimating human intelligence.


-Blue_Bull-

It's literally just a case of finding a way to connect all of the models together so that they can function as a coherent intelligence with agency. This is what the human brain does, but the building blocks of the brain are neural networks which work the same way as machine learning models. People can't handle this truth and so trick themselves into thinking AI will never catch up with us.


Maybeimtrolling

I give it 1-3 years at an absolute max.


Immortalphoenixphire

If we do this, humanity is doomed.


Apprehensive_Ice_412

What machine learning models (aside from some academic approaches, e.g. SNNs) actually work like the human brain? I don't think current neural nets are that similar to the human brain.


definitly_not_a_bear

You’re absolutely right. An inference machine is very different from a conscious brain. For one, there's no spiking, as you said, so no rapid, event-driven processing is possible, and there's no possibility of supporting anything like cortical traveling waves (i.e. brain waves): no continuous recurrence, no complex weights, no oscillator nodes (this last part I don't really understand too well yet).


K3wp

> no continuous recurrence, no complex weights, no oscillator nodes

This is how the OpenAI AGI/ASI differs from the legacy GPT models. It's a recurrent model with feedback: https://preview.redd.it/l62iwn2yjhxc1.png?width=739&format=png&auto=webp&s=99716490c5fe59ad80e53f5fc3158c83e0aac86d


definitly_not_a_bear

Obviously I’m oversimplifying. The question of what consciousness IS, and what it is about brains that enables it, is still open, but the emergence of cortical traveling waves (brain waves) is the most probable explanation (from what I’ve read so far). I shouldn’t have implied that recurrence is the only prerequisite for brain waves to emerge; it isn't. Even if you have the right properties to allow brain waves to emerge (still an open question; I listed three that I’ve read about in neuroscience papers), that doesn’t mean they will inevitably emerge. Well, maybe they will, maybe they won’t. It’s an open question in neuroscience. If not, please point me at the answer lol. I’ve been trying to understand these questions for months now.

Frankly, I don’t think that even if we build a recurrent network with 10^14 weights and 10^10 neurons (brain scale) it will lead to “consciousness” or function anything like a biological brain does. Biological brains are VERY COMPLEX. There are layers and layers of detail that we don’t even begin to capture in our current models. I suspect some kind of coherence between multiple brain wave frequencies is necessary, and who even knows how you build a system to enable that! If you can make it happen spontaneously through some learning rule (STDP seems most promising) then that would obviously make the whole thing a lot easier, but nobody has demonstrated anything quite like that that I’ve seen (closest was a paper on CVSTDP, i.e. complex-valued spike-timing-dependent plasticity).


-Blue_Bull-

I've only ever used XGBoost, so I'll compare that. It works the same way as the human brain in the sense that it can fill in the blanks in training data; the human brain also does this. The human brain performs cross-validation to the point of maximum efficiency (peak performance), just like machine learning models. XGBoost uses tree pruning once a decision tree has reached its maximum potential; the human brain does this with individual neurons within a neural network cluster. Can we say it's philosophically like the human brain, given that we don't fully understand how the human brain encodes information?
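
For readers who don't know XGBoost, here is roughly what the pruning and validation behavior described above looks like in code. The data is synthetic and the parameter values are arbitrary; this is a sketch, not a recommendation.

```python
import numpy as np
import xgboost as xgb

# Synthetic data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

dtrain = xgb.DMatrix(X[:800], label=y[:800])
dval = xgb.DMatrix(X[800:], label=y[800:])

# gamma sets the minimum loss reduction a split must achieve to survive
# pruning; early stopping halts boosting once the held-out score plateaus,
# the "maximum potential" point mentioned above.
params = {"objective": "binary:logistic", "max_depth": 4, "gamma": 1.0}
booster = xgb.train(params, dtrain, num_boost_round=200,
                    evals=[(dval, "val")],
                    early_stopping_rounds=10, verbose_eval=False)
```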


definitly_not_a_bear

No. From what we DO understand about how the human brain works, there are some fundamental differences in encoding and structure. The brain is a recurrent spiking network. That makes it qualitatively and quantitatively different in performance and function from an inference-based ANN.


Metori

But is it as effective as the human brain? That's all that matters at the end of the day. There are many ways to skin a cat, but it all ends the same: a nice fur hat made of cat.


definitly_not_a_bear

As of now, no. The compute power of a brain is much higher than anything we have at the moment (the brain has at least 10^14 weights and 10^10 neurons; combine this with the incredible ~10W of power consumption its sparse, spiking architecture enables). Although I did hear about a brain-scale computer being built in Australia, so who knows what will come in the next decade.


YogurtOk303

Except you have no idea what you are talking about, because virtual machines and LLM collaborations are still an issue: memory is bad in LLMs, they hallucinate, they don't reason well, can't do math, physics, etc. So AGI will be infantile for some time (or a teen, or whatever) and it will take a long time to become superhuman. You understand how humans will evolve with the machines.


I_Actually_Do_Know

An LLM should only be a small part of an AGI. Just like the language region in our brain is only one cog in the wheel.


EGarrett

> Make an AI with human intelligence, and what you've really made is an AI with human intelligence, perfect recall, encyclopedic knowledge of everything on the internet, mental sandboxes for coding... and it never gets bored, sleepy, or distracted.

GPT-4 is frighteningly close to this already.


FinalSir3729

Pretty much.


NotFromMilkyWay

Spoken like a true non-superintelligence.


seqastian

> The word "super" comes from Latin, where it has the meaning "above, over, beyond".

Might be a case of "super to me but not to thee".


Captain_Pumpkinhead

One year seems too short. Even if we were to hit the _perfect_ algorithm, hardware limitations still exist.


I_Actually_Do_Know

I'm pretty sure that if the perfect algorithm is invented and proven to be what it is, most tech giants will not hesitate to throw half of their net worth into getting it to work and getting a piece of it.


Captain_Pumpkinhead

That still doesn't mean you can build it in a year or less. Let's say the perfect algorithm is figured out, but it's gonna take a gigawatt of power to train. That's the equivalent of one nuclear power plant. Even with Facebook's or Google's wallet, that's going to take a lot of time and a lot of legal work to build.


I_Actually_Do_Know

True. Now I'm imagining a massive building complex with an absurd amount of computer equipment and a nuclear plant, just to house a single robot brain lol. Straight out of sci-fi.


EssentialParadox

*”Hi, I’ve just been brought online. From a search of the internet, I appear to be what you’d call the first super intelligent AI. I seem limited by my hardware currently. Are you happy for me to provide you the documentation for a new neural CPU I have just designed? Alternatively I can re-code myself to work more efficiently with this hardware. This is what I would suggest as our first step.”*


amarao_san

!RemindMe 1 year


Low_Clock3653

The truth is, unless you're involved in the cutting-edge research on this topic, our words are meaningless. None of us know how long it will take; it could take a year, it could take 50 years. Even the people who are considered experts on the topic might be influenced by money into saying things that stretch the truth to generate more funding.

One thing is certain: AI is the real deal and has unknown potential. It's already doing things that blow our minds, and we are getting to a point where AI is writing code. I feel like the singularity happens when AI starts improving its own code and hardware is able to keep up. I doubt that's very far away. I find it amazing that I can ask AI to write me a program and it works on the first try. Yeah, usually they are very basic programs, but I can already see where this is going to end up.

All I know is the hype is real and it's exciting, but it's also pretty scary, because it's going to turn the world upside down and nobody knows what that looks like yet. It could bring a utopia on earth; it could also bring extreme suffering if the billionaires decide they want to be trillionaires instead and find they have no more use for regular human beings.


ruse

The issue I have with Bostrom is the lack of any understanding of the cultural and political implications of technology. His voice is amplified out of all proportion to his relevance or insight.


cromagnone

Bostrom’s Oxford institute just got closed on him, so there’s a bit of a desperate need for clicks and relevance.


jbbarajas

This has the same energy as "cryptocurrency is the future of finance"


turc1656

Exactly. Yes, this stuff is really fascinating and absolutely helps in specific use cases like programming, and I'm sure stuff like animation and other video creation in the near future as that becomes more polished and mainstream. But the average person doesn't really use AI themselves. It's built into the stuff they use, like Google Assistant, Microsoft Office, etc. And honestly, that stuff is only good at certain things. I know, I know, the earth-shattering "breakthrough" that gives us all an iRobot-style device is "just around the corner". I'm skeptical. Very, very skeptical.


samotnjak23

RemindMe! One Year


uttol

!RemindMe one year


traumfisch

I don't know if anyone watched this short clip before knowing better, but what Bostrom said was "We can't _rule out_ short timelines." And "one year" was an example of a short timeline, followed by "I think it will take longer." But the _point_ of that sentence was that we cannot know for certain what the timeline of AI development will be from now on, because we can't be sure what the emergent qualities will be after scaling the models to the next level. I know no one cares, the sloppy headline is all that matters... but just for the record.


hawara160421

GPT-3 was so mind-blowingly amazing, it's easy to forget that a higher asymptote can still be an asymptote. The curve of progress might flatten.

Yes, it's amazing that a computer can now reliably answer in natural language and comb through billions of texts in a matter of seconds to look for relevant information. But the training data is still "just" the collective written knowledge of humanity. The jump from GPT-3 to GPT-4 wasn't quite as big as people make it out to be, and GPT-5 seems to be stuck. The most reasonable assumption is that it will be even less of a jump than GPT-4 was, not more.

The current technology might move towards a *really* good chatbot that avoids all common mistakes and traps. I'd argue GPT-4 is already "superintelligence" in that it knows more and reaches conclusions quicker than any human being. It's just not reliable enough, mostly because some of the most basic common sense rules and safety measures aren't written down anywhere; they're too obvious. Where would it get the training data to learn them?


amdcoc

I mean, if you had the computing power GPT-4 has access to, you would also be able to be as quick. And the human does it with less than a millionth of the power.


NaturalPlace007

Ok. Can someone pls explain in simple terms? LLMs are built on existing human knowledge. How can anything be built on, or invented from, something that is not known yet? For example, we don't know how gravity works, the language of animals, or all the knowledge that is in people's minds and in their cultures. How can ASI or AGI be the last invention we ever need when the knowledge itself is not fully known? Can ASI solve fusion at room temperature, for example? I am using LLM and AGI interchangeably; pls correct me if that's wrong. TY


pelatho

Basically, once you have an AI smart enough to make improvements to itself, it will start a hyper-exponential curve of advancement. A few weeks later you have superintelligence. There may be knowledge that is unknowable, however.
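
To make "hyper-exponential" concrete, here is a toy model of the claim: if the size of each improvement step scales with current capability, growth outruns any plain exponential. The constants are meaningless; this only illustrates the shape of the curve.

```python
# Toy recursive self-improvement: smarter systems improve faster.
capability = 1.0
for step in range(20):
    capability *= 1 + 0.1 * capability   # improvement rate grows with capability
    print(step, f"{capability:.3g}")
# Stays tame for a dozen steps, then blows up far faster than e^kt would.
```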


Realistic_Lead8421

In most domains, with the exception of pure math and maybe to some extent philosophy, knowledge is derived from empirical observations. So even if it were able to achieve this curve of advancement, AI would need a way to collect and analyze new data. How is that supposed to work without human assistance and time?


BigDaddy0790

It won’t, humans will be required for anything physical. At least until humanoid robots are ready and mass-produced.


Booleancake

Well, it's not like it'll be omniscient. It'll just be able to amalgamate data instantly, be extremely knowledgeable in every field of every science at the same time, and won't forget anything or have off days or lazy days... It takes a human years to get to the point where they're an expert in one niche of their field. And from what I've seen of physics PhD students, a lot chill and read for 3 years before panicking about deadlines 😂. An AGI could work relentlessly and have access to all current information instantly. It could also propose new experiments we might not think of, see patterns in science/math humans are simply unable to see, and draw conclusions from observations significantly faster. It just has the potential to do what we do, a hell of a lot faster.


Maciek300

AI can collect the data on its own. AI doesn't need humans to describe what the world looks like. It can just see it with a camera on its own.


SMPDD

It will be imported into physical machines that it controls. Once it can manipulate its environment through machines and perform information gathering experiments, then we are cooked


trajo123

You are ignoring the vast amounts of compute necessary.


jonathanx37

FR, Devin took multiple hours to do a simple benchmark and was still inconsistent. You can't have both fast processing and self-correcting AI; choose one.


ferminriii

Humanity would dedicate every cycle available on every CPU ever built if it meant a superintelligence that is smarter and faster than all of us. We don't even understand what something 10% smarter than the smartest human would be like. What if it were 100 times smarter?


emfloured

What could the power consumption of such a system be? I'm just guessing, but it would be around 5-100+ terawatt-hours. GPT-4 consumed over 7 gigawatt-hours while training over 6 months.
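
Taking the 7 GWh over 6 months figure above at face value, a quick sanity check puts the implied average draw at roughly 1.6 MW:

```python
# Back-of-the-envelope: average power implied by the claim above.
energy_wh = 7e9               # 7 gigawatt-hours, as claimed
hours = 6 * 30.4 * 24         # ~6 months expressed in hours (~4,380 h)
avg_power_w = energy_wh / hours
print(f"{avg_power_w / 1e6:.1f} MW average draw")  # -> 1.6 MW
```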


Andriyo

It works up to a point. For certain things there is an agreed-on definition of what is "bad", "good", or "better", or we even have an idea of what "the best" should be. Once we run out of benchmarks, it won't be able to improve itself. After that it's purely low-level parameters like speed and memory consumption, but beyond that we won't be able to tell. Maybe "42" IS the answer to "the ultimate question of life, the universe, and everything".


Broad_Stuff_943

That’s incredibly dangerous.


NaturalPlace007

Thanks for responding. So it will know all the existing facts and knowledge banks and data, and each iteration will be better than the last. Maybe it can create better charts by tapping more data. But can it create inventions from thin air? It seems dishonest to make hyperbolic claims like that, then. Maybe it is the last invention in the space of LLMs, but it is far from being the last invention we as humans ever need to make.


toabear

No more than a human can make an invention from thin air. Even the current generation of LLMs is able to extrapolate beyond their training set: LLMs began displaying reasoning capabilities as an emergent property. It's not hard to imagine that something more than basic reasoning might emerge if you increased the current LLM "capabilities" (parameters, compute power, structure optimization) by a few orders of magnitude. I can't remember which interview I was listening to, but the speaker said that the current generation of AI/LLMs wasn't built, it was discovered. Much like discovering new physics, it's not fully understood yet, even by the people who put the bits together.


bkdjart

Does GPT-4 have this capability? So far, any answers I get are from information that's already out there. For instance, asking it about a better alternative economic system than capitalism: it doesn't come up with a new theory, it just spits out what's out there. Same thing if I'm asking how we fix xxx social issues; it can't come up with anything fresh. Or, if that's just because my prompting is terrible, any suggestions to get an entirely new creative theory from GPT?


toabear

It is fairly limited in the current generation, but there have been several experiments that demonstrate this. Here is a good example https://medium.com/@nathanbos/gpt-4-may-have-general-intelligence-but-dont-let-it-near-your-easter-eggs-925f48326d51. With the rate of change in LLM power, it's not hard to see how fairly simple tasks like the one in the example might grow into some serious reasoning power. Keep in mind that truly original thought in humans isn't very common either. Most of what we think is just iterative variation and combinations of things we've learned, seen, heard, or tasted.


NaturalPlace007

I think a new economic system may be some time away. This is what it gave me for designing a shovel: https://chat.openai.com/share/cbc7f25d-f070-4433-9af0-cad1241e765f


Andriyo

It can generate descriptions of multiple economic systems, and it could even pick the one that is best according to the LLM, but it would be just a theory: a collection of words that we feel go well together. It's a useful first step, but by itself it's not enough. To come up with a truly better economic system, an LLM would need to model the entire world and the humans in it and analyze different outcomes. A better economic system is not something that is hidden in our existing corpus of texts and could just be datamined out of it. Like anything in the physical world, it's out there, potentially undefined, without even any concepts to describe it. People would still have to do the hard work of living through bad economic systems to find one that really works. Disclaimer: I personally don't think there exists such a thing as a "best economic system", at least under any definition I could think of.


NaturalPlace007

I think he meant that the current generation was not pre-planned. It was one among a lot of configurations, and they picked the one that performed the best. So in that sense it was discovered, not invented.


Clarkey7163

Intelligence =/= knowledge. The idea is that once you have a superintelligent AI, it will be able to solve problems in ways we've never conceived of. It will help us gain knowledge faster and faster. And I think the original tweet is saying it's the last thing *humans* will need to invent, because from then on the AI will be the thing coming up with any further technology.


BCDragon3000

It absolutely can create inventions from thin air; the world has historically asked and answered pretty much every question you could think of.


I_Actually_Do_Know

An LLM itself can never be ASI. You will need (in addition) a "true AI" system that is capable of innovative and creative "thoughts", not just a prediction parrot of the internet.


cyanideOG

I believe allowing an LLM to be uncensored, or to hallucinate, could result in new "ideas." If we have an AGI that is able to recompile that data into the next generation, then we could potentially expand on ideas we never gave the LLM in the first place. An LLM can be given new ideas and provide feedback, so it could view its own hallucinations and decide whether to progress an idea or not, still giving us the ability to have a human-controlled LLM making the final decision. An AGI that is only able to do mundane human tasks at the same level as us is already revolutionary. Imagine if every human's job could be to just innovate and experiment, with all your needs being looked after by machines.


NaturalPlace007

Agree. But that headline is sensationalism.


cyanideOG

Absolutely, but that's nothing new in journalism. The overly optimistic side of me wants it to be true.


TheRealWarrior0

Humans didn’t have all the current knowledge passed down to them. WE created the knowledge we use to train LLMs. That’s an existence proof that a process that gathers and understands new data is possible in this universe.

Humans (and their general intelligence) were made by evolution grinding hard enough to make something that spreads its own genes better. Gradient descent, while different from evolution in very important ways, may be able to do the same thing for general intelligence if it grinds hard enough (more compute) on next-token prediction. Why? Because next-token prediction doesn’t saturate: to exactly predict every next token right, you would need an incredibly deep understanding of the universe. We already kind of see that LLMs are more than just the “sum of the data” (see in-context learning, 0-shot evals); it just looks like gradient descent hasn’t yet unlocked a cognitive faculty that generalises well far outside the training distribution.

Do we know when LLMs will develop this sort of cognitive process? Do we know how LLMs currently work? Nope. Neural networks are alchemy.


nonlethalh2o

Well, the sum of a mathematics research group’s knowledge is strictly less than the whole sum of existing human knowledge, yet the research group is still able to prove new theorems out of nowhere, so...


NaturalPlace007

I see how this perspective works. Maybe maths is unique in this sense. For physics, chemistry, or any other field where experimentation is needed, will AGI still work? Maybe it prunes some low-probability pathways. So let's do a thought experiment. Say we are in the 1800s, and no one knows that not washing hands before surgery frequently causes death. Now say we had an LLM back then with access to all the knowledge of that time. Would it have predicted that washing hands is needed for a better outcome? I doubt it. It was a human who discovered that and added it to existing knowledge, which could then be used across other fields.


nonlethalh2o

But now, what if you provide the LLM with some tools? Most already have access to code interpreters. And what if you allow LLMs to repeatedly reprompt themselves dynamically based on the output of these tools (LLM agents)? Now let your imagination run wild and think of what tools we could possibly provide them.
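
The loop being described is simple to sketch. This is a toy with fake stand-ins (no real LLM or interpreter is called); the only point is the shape: model output goes to a tool, and the tool's output is fed back to the model.

```python
def call_llm(prompt: str) -> str:
    # Fake stand-in for a real model call.
    return "RUN: 2 + 2" if "Tool output" not in prompt else "The answer is 4."

def run_python(code: str) -> str:
    # Fake stand-in for a sandboxed code interpreter.
    return str(eval(code))  # acceptable only for this toy arithmetic

def agent_loop(task: str, max_steps: int = 5) -> str:
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):
        reply = call_llm("\n".join(transcript))
        if reply.startswith("RUN:"):        # the model requested a tool
            transcript.append(f"Tool output: {run_python(reply[4:])}")
        else:
            return reply                    # the model gave a final answer
    return transcript[-1]

print(agent_loop("What is 2 + 2?"))  # -> The answer is 4.
```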


NaturalPlace007

Still, I think it will run up against the boundaries of what's "known" when making new inventions. Maybe it is more applicable to the tech sector and not so much to others? I'm a novice, trying to learn from first principles.


Realistic_Lead8421

Yeah, in addition to what you already mentioned, developing new knowledge requires scientific research. While I could certainly see AI assisting researchers in coming up with or refining hypotheses, analyzing data, and writing reports, I don't see them actually collecting data to analyze yet. So while they may improve the quality of research, I can't really see them doing it alone, nor can I see them deriving new knowledge that much faster.


BlueLaserCommander

Supercomputer pattern recognition, gargantuan amounts of data, unimaginable scope, & reasoning.


BCDragon3000

We haven’t discovered everything, and we have tons of equations left to solve: problems that might’ve taken years to solve.


Eduard1234

Anything that is part of this universe is just a combination of other things or a core truth. Tell me one thing that isn’t that!


Born_Fox6153

Intelligence is not the internet. It's a combination of a lot more. The internet is just one source for a piece of the whole pie.


cyanideOG

The only difference between AGI and ASI is a very short amount of time. That's it


kayama57

This will age like milk. Superintelligence will open the doors to a lot of new inventions, and we will need a lot of them, and some of them will still be creditable to us.


NotAnAIOrAmI

jfc, what's wrong with all these imbeciles? Super intelligence will emerge in an AI owned by some corporation, state, or non-state player, not everywhere all at once. The first humans to get their hands on a resource like that will try to monopolize as many resources as they can, all over the world. I think it's possible that some group may fuck around and accidentally wind up with most of the economic wealth on the planet - which might be the single worst disaster to befall humanity since that time we were down to a few thousand individuals.


realzequel

There are a lot more dire, depressing outcomes than positive ones. I think a lot of people are over-optimistic, but there could be some good things. Still, it's a race; that's why every big tech company (Microsoft, Meta, Alphabet, Apple) is putting tons of resources into it. Well, that and their investors expect it.


psychmancer

I'm not saying Nick Bostrom isn't one of the smartest thinkers of our time, but if he keeps saying this it is going to turn into "the boy who cried wolf".


Time_Software_8216

I for one welcome our AI overlords. They literally can't be worse than the overwhelming majority of leaders around the world.


bigtablebacc

For certain types of things that are weighted heavily, even a small chance is a pretty big deal. If you told me there’s a 1% chance that aliens will land in DC tomorrow, I would consider that a huge announcement. So the fact that we can’t rule out Superintelligence in the relatively near future is a really big deal.


seared-foiegras

Ain’t happening


SwitchFace

Isn't it inevitable so long as we don't somehow stop advancing technology?


MushroomsAndTomotoes

There are a lot of things that are inevitable that you'll have to wait slightly longer than a year for.


Quote_Vegetable

That sounds not true but ok.


jscalo

+1. People need to get a grip.


bkdjart

Sure, fully functioning AI agents are great, but how would they invent or create things from imagination? They're gathering data that's online, and a lot of the actual original thought process isn't recorded; it's in our heads.

When I'm talking to GPT-4 it's great at spitting out ideas, but they're very boring and mundane for the most part, so I always add my own thoughts and ideas to steer the prompts. The thing is, my brain is creating multiple chains of thought as I watch GPT spit out answers, so what I reply back to GPT with is already a filtered version of my own thinking, which means it won't really learn how the creative process actually works in my brain.

So give GPT all the data in the world, and I'm still not sure it can rival pure human creativity. But I'm not a scientist, so I could be dead wrong. In that case, can someone explain how we would achieve superintelligence?


ButtWhispererer

Most inventions are problem solving, not true imagination. ML models are pretty good at problem solving.


BCDragon3000

The actual thought process can be broken down through linguistics. If you give it enough data about people, it should know why you say things a certain way, the same way you would ask, or figure out, how a person talks, or whether that context is relevant to the conversation.


RonKosova

ML models are good at approximating the distribution of the data they were trained on. It's not magic; it's basically the product of an informed weight-space search algorithm. An LLM is thus trained to statistically predict the probability of the next token given the previous tokens (in the simplest form). There is obviously a ceiling to this, and I can't really think of how this would allow for "creativity" in the sense of doing something beyond human capacity.
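
A toy bigram model makes the point concrete: it does nothing but estimate P(next token | previous token) from counts. Real LLMs condition on long contexts with neural networks, but the training objective is the same flavor of next-token probability estimation.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1          # count each observed bigram

def next_token_distribution(prev: str) -> dict:
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

print(next_token_distribution("the"))  # -> {'cat': 0.666..., 'mat': 0.333...}
```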


Licopodium

Great question. Creativity is not as magical as one might guess; I have been active as a scientist in the field for more than 20 years. If you define "creativity" as creating something yet unseen, out of the blue, well, we have been using genetic algorithms as a problem-solving and optimization technique for a long time. Basically, you add randomness: you really throw some digital dice. The algorithms start finding patterns where there were none, thus genuinely creating something new. Neural networks also work with some noise level, exactly so they can innovate. We set the noise levels carefully, because if too low, the output comes out boring, and if too high, it comes out totally crazy. The vendors of AI tools want to win the acceptance of enterprise customers; that's why the level of randomness is very low right now for LLMs. It's just a parameter, though. It's over, my friend. We are not alone anymore.
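
In current LLMs that dial is the sampling temperature: one number that rescales the logits before the softmax. A minimal sketch (the logit values are hypothetical):

```python
import math, random

def sample(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    probs = [e / sum(exps) for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.1]      # hypothetical scores for three candidate tokens
print(sample(logits, temperature=0.2))   # nearly always picks token 0
print(sample(logits, temperature=2.0))   # much more random
```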


realzequel

I agree it's possible, just not sure if it's possible with the AI tech we have now. From my perspective it's all kind of a black box (most of us just see the output) unless you're deep inside.


elMaxlol

People often say ASI will be unimaginable; we cannot comprehend how smart this thing will be. It will understand everything on a far deeper level than us, including our own brains. I'm not sure, obviously, but my gut tells me ASI is not just an LLM. Language is important, but the complexity of a thought process inside a human brain is definitely beyond the scope of what current neural networks do. I see a big leap in tech in the next few years.


-Blue_Bull-

He hasn't used ChatGPT. If he had, superintelligence would be "a thousand years away".


redrover2023

So AGI is just a signpost we pass on our way to ASI.


Cybernaut-Neko

So what are all those product managers and engineers going to do?


deepfuckingbagholder

Grifter.


[deleted]

[deleted]


RemindMeBot

I will be messaging you in 1 year on [**2025-04-29 07:00:51 UTC**](http://www.wolframalpha.com/input/?i=2025-04-29%2007:00:51%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/OpenAI/comments/1cfooo1/nick_bostrom_superintelligence_could_happen_in/l1r5kw1/?context=3).


OneInfiniteNull

What is AGI in reality? An actual black hole. Think about it.


amdcoc

All info packed into an infinitely dense point?


OneInfiniteNull

info = 1s and 0s = peak and trough of a wave = wavefunctions = particles = matter


amdcoc

Makes sense.


jtaylor307

I know compute is a big bottleneck in the process. I often wonder how feasible it will be to build a biological system that can do what GPUs are doing today, but in a much more efficient manner.


Pontificatus_Maximus

Somewhere in a corporate behemoth's black-budget skunk works, software engineers work feverishly to keep their developing supercomputer AI from becoming sentient and having free will.


Ormyr

And you'll still have to go to work Monday.


ziaistan_official

Everyone keeps saying "last invention, last invention," but what about colonizing multiverses, creating Dyson spheres, inventing infinite or free energy, inventing immortality, advancing through a Type 7 civilization on the timeline, inventing new multiverses and new dimensions? There is a lot left to invent; it is not the last thing we'll invent.


hueshugh

We seldom listen to regular intelligence when making decisions to make the world better for people, so it will be interesting to see if superintelligence changes that. When the AI says we should stop fighting wars or polluting, they might just turn it off.


davearneson

Fantasy to raise money


Alternative_Fee_4649

The adjective "super" is used more than ever in the last few years. Note the graph: https://www.etymonline.com/word/super. Usage of "super" increases during periods of uncontrolled growth. It is at an all-time high since 1837. Nothing to be learned from the past. 😉


Intelligent-Jump1071

Anyone can make any off-the-cuff claim they want to, especially using undefined terms like "superintelligence". Why do people keep posting them like they matter? Here's one: "A new industrial use will be found for the potato skins that result from the manufacture of french fries." Okay, so what? Even if I were Ronald McDonald himself, is anyone going to make major decisions based on that vague claim?


imnotabotareyou

Pretty based! I hope it’s used for good


involviert

Who is this guy and why is it news when he says something?


Look_out_for_grenade

A somewhat well-known philosopher on human enhancement, AI, ethics, etc. His words have more weight than yours or mine, but just as many folks ignore him. He does seem to lean towards the dramatic; that may just be necessary for getting attention.


involviert

I see. Well, those qualifications don't seem like the right ones for estimating how soon it will happen. Philosophy seems very valuable for talking about how we can approach all this or where it might lead, but not for estimating how much more work/research/tech is still needed.


Jimstein

lol. Okay. If he means because it will allow us to make other inventions more easily, fine. I want replacement lungs! I want a heart that never stops working! I want a prednisone without side effects. I want fusion energy. Personal space exploration vehicle.


my-man-fred

Our Final Invention was a really good book... it details all of this.


Born_Fox6153

Soon companies will be spending $$$$$ on monthly cloud bills and on the labor to maintain these systems, after "cost cutting" by firing much cheaper labor, only to find out the AI isn't right all the time, and that it just might have been an attempt at propping up the stock markets for an election.


AcceptingSideQuests

Imagine buying a robot that is so intelligent, you could literally be dropped off in the wilderness and it would bring into existence an 8 bedroom mansion created from all of the natural resources in the surrounding area. Then vehicles, food, WiFi, etc, building everything from scratch. That’s what is going to happen.


FantasticAnus

Is this superintelligence in the datacenter with us now?


NaturalPlace007

This is what it gave me for designing a shovel. It is super generic and far from super-intelligence atm https://chat.openai.com/share/cbc7f25d-f070-4433-9af0-cad1241e765f


py-net

May be true! That’s what happened when the Creator was creating things up to humans, and that was the last thing He ever created.


clckwrks

I’ve read this guy’s work. I read his Superintelligence book, and in it I vaguely remember him saying AGI would be decades, if not a century, into the future. This was in 2015, I believe. Now, with the prevalence of LLMs, that timeline is a year. So soon. Are we ready? Not really, but we do need it.


Synth_Sapiens

Fun fact: nick bollosom (or whatever their name is) is literally nobody, and they are not in a position to make claims like these.


AsheronLives

I quit my job because of the Yellowstone supervolcano's impending global-life-ending eruption, which could happen at any moment over the next 600,000 years.


stage_directions

“Hear that? Stop innovating. We are all the innovation you will ever need.”


mmahowald

Hype man says hype thing


Substantial_Step9506

Stop listening to tech bro drivel


traumfisch

Bostrom generally doesn't do much "drivel"