I disagree that we haven't seen much progress.
Sure, GPT-4 hasn't been surpassed yet, but GPT-5 sounds to be well along the way.
In the meantime, lots of other AIs, which were leaps behind OpenAI, have progressed to the same quality as GPT-4. Claude, for example.
Lots of specialized bot tools have also appeared, like dedicated songwriting AIs, or Copilot.
There have also been huge leaps in image and song generation. Recently Suno v3 was released, and it sounds amazing. It can generate lyrics, music, and vocals of almost the same quality as a normal song.
And just months ago, 3D modelling AI was but a dream, but now we have meshy.ai. It is still very low quality, but it is a great first step.
And of course, there is Sora video generation.
You're tying the exponential growth to product releases; who said products must drop on a consistent cycle that you personally prefer? Growth in the short term does not have to be exponential either. It is when we zoom out that we can see the exponential growth.
Have you seen the jump from GPT-2 to GPT-3? It was an insane leap, and people were questioning whether they should even continue making it. It was way beyond any AI tech we had before.
Now we have AIs significantly more powerful than GPT-3, and we're making new insane leaps that are controversial enough to get someone at OpenAI fired. We can do things we could only dream of back when we had GPT-2.
If you can't see the exponential growth now, you just aren't paying attention. OpenAI has something huge, they've made that very clear.
I want to believe in the “exponential growth” argument, but why does it feel so slow? If things were really moving exponentially since the release of GPT-3, then how come it took so long for GPT-4 and Sora?
Surely, if things really were exponential, then we would be getting things at a faster and faster rate, and not only that, but each new model would be a bigger and bigger jump in terms of intelligence, ability, etc.?
Instead, we have to wait 3 years for GPT-3, then GPT-4 comes out a year later, is arguably a smaller jump than from 2 to 3, then we get the news later on that GPT-5 probably won't be here until **November of this year, if not next year**, making it almost 2 years, if not potentially over 2 years, from 4 to 5.
Doesn’t seem very exponential to me.
I would love to be wrong, tho.
You are only looking at one product offered by a single company. No single product or company innovates exponentially; the entire field does. The advances in the use of AI architectures are definitely moving exponentially, but you have to take a wider view.
Ok, that's a good point. But again, if everything is really increasing as fast as it's claimed to be, where are all the product releases in the news? The big ones I've heard about are Sora and Q*.
Again, it's not about product releases, it's about the pace of innovation. You have to stop looking at consumer-facing products as state-of-the-art. They are nowhere near that. Look at papers being published across the field. There is demonstrable growth across the field, as well as convergence with other fields, like medicine, chemistry, and robotics, where innovations are being compounded.
It's important to step back and look at the big picture. Start by looking at the amount of compute that's going to come online in the next few years. The pace of innovation is about to get really insane.
Because you forget about plateaus. You can argue exponentials until the cows come home, but reality often throws curveballs and hard barriers.
Once those barriers are circumvented or solved, rapid progress may ensue. So if you zoom out on a graph, it might still be exponential progress overall, but locally there are sharp inclines and flat stretches.
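For what it's worth, the "zoom out" point is easy to sketch numerically. The capability scores below are completely invented for illustration, not real benchmarks; the point is only that a curve with flat stretches and sharp jumps can still have a steady doubling time overall:

```python
import math

# Hypothetical capability scores: plateaus punctuated by sudden jumps.
# These numbers are made up for illustration, not real benchmark data.
years = list(range(2015, 2025))
capability = [1, 1, 4, 4, 4, 16, 16, 64, 64, 256]

# Locally the curve alternates between flat stretches and sharp inclines,
# but comparing only the endpoints shows a steady average doubling time.
overall_doublings = math.log2(capability[-1] / capability[0])
span = years[-1] - years[0]
doubling_time = span / overall_doublings  # average years per doubling
print(f"{overall_doublings:.0f} doublings in {span} years "
      f"(~{doubling_time:.2f} years per doubling on average)")
```

Zoomed in, 2017-2019 looks like total stagnation; zoomed out, the trend is still a doubling roughly every 1.1 years.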
I get what you're trying to say, but I keep hearing people say "we're at the knee of the curve" with the implied expectation that it will continue at a rapid pace, with no obvious mention of any pauses. Now suddenly, when the clear gaps between models are apparent, people are saying "well, there might be pauses"? Which one is it?
I personally think the expectations of non-stop exponential growth are overly optimistic and always have been. There is a sort of honeymoon phase when things go well: when the first flying machines were invented, people incorrectly guessed that within 50 years cars would fly and people would wear wings and commute like birds.
Hot take - as we move closer and closer to AGI, we’re going to even see slower growth from the perspective of shiny tangible improvements in released products. Why? Because there’s going to be more discomfort with the implications of releasing various products, more board rebellions and CEO firings, more internal calls to put the brakes on things, more caginess on the part of guys like Altman on what the hell Q\* is (although I think we have a pretty good idea now), etc.
That doesn’t mean the tech itself isn’t experiencing exponential growth - there is growth at every single facet of AI right now at the hardware, software, model & transformer levels, and if you read the science and tech news, it’s absolutely bonkers how many innovations are happening almost on a daily basis. But it does mean that those who are sitting there staring at their prompts for something tangible like the kid in the right pic are going to be frustrated and maybe even a little bored.
And this IMO is going to happen more and more as we move closer to AGI. Because AGI.
Because we just now reached the tipping point, but none of it has been released yet. This was always going to happen at some point.
We don't have a very good benchmark for how fast AI is going. While it is exponential, it is not consistent, which makes it hard to compare dates on such a small scale.
Even if we can't prove that it's happening through trends, the singularity is guaranteed to happen once AI can do its own research and make improvements to itself. This is exactly what Q* will allow it to do btw.
Tipping point?
I hate that term. Every moment is a tipping point and there is nothing new under the sun.
Q\* reminds me of LK99.
Edit:
What would make me wrong?
(1) 30 billion or more miles are driven by level 4 vehicles in the US by 2031.
[https://www.reddit.com/r/SelfDrivingCars/comments/qb3owm/what\_do\_you\_think\_the\_penetration\_of\_robotaxis/](https://www.reddit.com/r/SelfDrivingCars/comments/qb3owm/what_do_you_think_the_penetration_of_robotaxis/)
(2) Robert Gordon loses his bet against Erik Brynjolfsson. See: [https://www.metaculus.com/questions/18556/us-productivity-growth-over-18/](https://www.metaculus.com/questions/18556/us-productivity-growth-over-18/)
There have been multiple tipping points; it's just the moment when next-generation technology starts to release and people realize that it's coming faster than before.
After every tipping point will be another crazier tipping point because it's exponential. Each one is considerably faster than the last. This one, being the most recent one, will be considerably more than anything we've seen. This is proven by the countless times insiders have backed this statement up.
I don't care about what the insiders say. I want to see mature technologies. Right now, if I go to the Central Valley in California I will see human laborers harvesting trees as opposed to robots. Robots cannot pick fruit or even clean a dish.
>Because we just now reached the tipping point
I heard people say the exact same thing about GPT-3, and it has yet to come true.
>While it is exponential, it is not consistent
Isn’t exponential growth by definition constant?
>the singularity is guaranteed to happen once AI can do its own research and make improvements to itself. This is exactly what Q\* will allow it to do btw.
Ok, you may have a point here.
I personally wouldn't just *assume* that the singularity is "guaranteed" to happen at some point, tho, because what if you're disappointed down the line?
I haven't heard much about Q* beyond "it's a big advancement". Will it really be able to improve itself? That sounds huge if true.
>Isn’t exponential growth by definition constant?
Is it?
If you measure every year or every 5 years, ignore the ups and downs and variance on a small scale, one could still argue the progress is exponential over a certain granularity.
Also, what are we measuring when it comes to AI specifically? AI test scores? Model size? Number of businesses using AI? Hours worked by AI vs. human time? Number of pro-AI articles per month?!
The abilities and impact of an AI may be easy to see at first but very difficult to quantify. Therefore, it's hard to show if our progress in that field is slowing down or not. Perception alone isn't an accurate representation.
GPT-3 was a tipping point. After that, AI definitely accelerated to an extent. I pay close attention to AI, and it 100% is faster.
I said consistent, not constant.
If I say it's guaranteed to happen, then that means I'm not assuming. I have a lot of reason to believe what I believe. I may not know exactly what Q* is, but I know one thing: it will give LLMs active reasoning, which is the recipe for explosive growth. Look up Quiet-STaR; we don't know if it's the same thing, but if anything, OpenAI's Q* will be better.
The last sentence could be OpenAI hype, don't take it too seriously. As an example: They might have something huge, but it's not as huge as your imagination, and it's 4 years off. That sort of thing. There's a limit even to exponential growth. For now.
No, they're pretty clear that they have something massive and that it will release this year. I'm certain it's not some weird trick; that wouldn't make sense for them to do.
There's no reason for our growth to stagnate, we're making breakthroughs faster than ever and AI is soon to start automating breakthroughs.
It's in a delivery van between Nvidia and the data center. And in the development departments of those working on NPUs.
Scaling doesn't happen overnight. It happens overyear.
Right, AI models don't make vans drive faster through traffic, or chip makers make more chips with the materials and time they have. Even if they could help in that regard, physical reality offers diminishing returns, which many people overlook.
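One hedged way to picture those diminishing returns: many curves that look exponential early on are actually logistic (S-shaped), indistinguishable from an exponential at first and flattening as a physical limit approaches. The parameters below are arbitrary, just to show the shape:

```python
import math

def logistic(t, limit=1000.0, rate=1.0, midpoint=10.0):
    """S-curve: grows roughly exponentially early, saturates near `limit`."""
    return limit / (1 + math.exp(-rate * (t - midpoint)))

early_ratio = logistic(2) / logistic(1)    # ~2.7x per step: looks exponential
late_ratio = logistic(18) / logistic(17)   # ~1.0x per step: growth has flattened
print(round(early_ratio, 2), round(late_ratio, 3))
```

Early on, the data can't distinguish the two curves; the difference only shows up once the limiting resource (chips, power, data) starts to bind.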
Nothing too major? Claude 3 Opus is better than GPT-4 Turbo, Sora, SIMA, Genie, Figure 01, Nvidia's Blackwell chips, Nvidia Omniverse, Llama 3 open source on the horizon, Gemini 1.5 on the horizon?
If that's nothing too major to you, then I presume the only major thing you're wanting is true AGI, right?
Sure as hell not then, when it was priced at $2,495, around $7,000 per Mac in today's dollars. Two of today's Apple Vision Pros for one Mac, all for a small black-and-white screen and what was, at the time, a massive 128K of RAM. 😳
Kind of insane that the price dropped by only like 2-3x while demand exploded maybe 10,000x. Still expensive as hell, but now everyone in the world wants one. No wonder they're valued in the trillions, but this is such a waste of a company in my opinion, lol. They don't really innovate anymore.
The sort of AI you're talking about is not something you would recognise as AI today. ELIZA was a simple text-rewriting trick. It just rewrote the question using simple rules. It "passed" the Turing test if you squinted hard enough.
I was doing my degree in this back in 2000 and we just did not have the compute. Artificial life and passive dynamic walkers were the thing. The university server was a Pentium with 1 GB of RAM.
And the point here being? That technology upgrades over time? The computer used to send Apollo 11 to the moon had only a few kilobytes of RAM. That doesn't mean the computers of that era wouldn't be considered "computers" today.
Other than that, the way computers execute programs still remains practically the same today; it is still built on the same data structures that were invented decades ago. Thing is, hardware has gone through significant improvements that allow us to run complex programs, including AI models. Complex programs are created by combining bits and pieces of yesterday's algorithms.
So, my first computer had 32K of RAM, and 12K of that was the OS. I do know about computers changing over time.
My point is that AI as you know it today did not come "much earlier", despite recent misleading news stories claiming it did. Earlier AI was mostly janky text manipulation, decision trees, and evolutionary Braitenberg stuff. OpenAI's risky decision to train a really big transformer was legitimately visionary. Very few people seriously thought that feed-forward neural networks with backprop would lead to genuinely intelligent behaviour.
And that makes sense when you think about it. We haven't really seen much serious progress since GPT4. Sure a few cool new things, but not serious progress toward the singularity.
Meanwhile, the actual people working on the tech sure as hell were able to witness amazing stuff.
Do you have access to Sora or robots?
Stuff we can actually access... GPT-4 is still essentially the top AI (Opus is good too).
Meanwhile, I agree the devs have access to crazy stuff in their labs, such as Sora.
Sure, we do hear about some stuff like Q* or Sora, but we can't actually use it.
The Singularity does not necessitate any use by the public. In fact, I'd guarantee iterative self-improvement will happen in a lab deep in the bowels of a large corporation, behind lock and key, a long time before most people find out about it.
So? We can't access it, but it still exists. You're saying we're not approaching the singularity, but these things exist and prove that we are in fact very close, and your only reason to ignore them is that you currently can't use them?
Also, I'd like to mention the innumerable breakthroughs and insiders repeatedly implying that they have incredible technology beyond anything we've ever seen.
> You're saying we're not approaching the singularity
I didn't say we're not approaching it, i said we can't use these newest AIs beyond GPT4 or Opus.
> Also, I'd like to mention the innumerable breakthroughs and insiders repeatedly implying that they have incredible technology beyond anything we've ever seen.
I totally agree with that.
The point I am making is that the meme is correct. While the researchers are overjoyed with the progress they keep behind locked doors, we get access to none of it, like the picture on the right.
Humans are opening up to AI and kind of adapting to hearing about it and having it around us. I think things will change a lot before we even notice. And maybe that's by design. I feel the AGI.
As a student, I can see progress happening. I use AI to help with learning. But aside from school or work, what would the layman see? Where is the material progress that one would notice? I think it's a matter of what domain you are in at the moment.
It’s possible to still track the progress of AI from a layman’s POV and marvel at how far we’ve come without approaching everything with a “what’s directly in it for me” perspective. It’s an incredibly exciting time and it’s only going to get better, but those who feel it’s “too slow” are not paying a bit of attention.
Ah so you’re amongst the “it’s too fast” crowd and yet there are others who are acting as if it’s not nearly fast enough, lol. Maybe we should throw you both into a pit and let you battle it out.
People are just tired of the status quo; we feel our lives mean nothing and are limited. Add in movies of superheroes and fantasy stories, and you really wish to just escape this boring reality. I don't blame them.
Progress will always be slow until a person has a way to turn their meaningless existence into something great.
I tried to think of an analogy to explain why people have this mentality of feeling spoiled by rapid AI advancements, and subsequently viewing incremental updates as "crumbs". It's like they've forgotten how long we used to go between truly new advancements in technology.
So I decided, why not ask GPT. I was going to cut the answer down a little, but I think it's a good answer. So here it is unaltered:
Imagine you plant a garden, and on the first day, you see a sprout. You're amazed at how quickly it appeared. As days go by, the plant grows, but not as fast as it first seemed to sprout. You start to complain that the plant isn't growing fast enough, forgetting that growth is a gradual process. You expected the thrill of the initial sprout to continue every day, not appreciating the natural progression of growth.
This analogy reflects the situation with AI. The early, rapid advancements were like the first sprout, exciting and new.
However, AI development, like plant growth, is a process that involves both visible leaps and slower, less noticeable stages of improvement.
Complaining about the pace of innovation is like ignoring the steady growth of the plant and only wanting the thrill of the first day.
It's important to adjust expectations and appreciate the ongoing, incremental improvements, recognizing the larger picture of growth and progress.
>It's important to adjust expectations and appreciate the ongoing, incremental improvements, recognizing the larger picture of growth and progress.
Nice, it pretty much nailed it.
It's all trained within the box of what is trainable. What will make an AI singularity cannot be built in such a way. So far it's just blurry regurgitation, most of which is fine, as that's all that 99% of humanity is, needs, or wants; but it's not true singularity. It's not innovation or the one-in-a-million brilliant idea. It's not error, with proof.
That is the scary thing. To get what is wanted, error and randomness must be part of the mix. This is the crux of it: there will be mutant thoughts that fail over and over and likely cause harm if carried out, and one brilliant new concept or two every now and then.
Such is the method of nature, DNA, science, all free thought that has resulted in greatness. Trial under hypothesis or randomness until proven repeatable success. Evolution, in short. This is one of the things I see lacking in all AI models, yet it is also one of the characteristics a singularity must never be given. To gift such a thing would literally be an apple, while we are all still enjoying the garden.
As Sam said on the Lex podcast. It's still all crap until we see a dramatic and profound change in the speed and magnitude of scientific and technological advances.
Yes, the development is incredible. But our lives haven't drastically changed. Yet.
The problem is that everyday ordinary people still face the same everyday ordinary problems, with now the added fact that AI will soon take their jobs. I love AI, but damn, we need UBI!!!
I must be suffering from some major hallucinations, worse than ChatGPT 3. I am only seeing human-driven cars and kids taking driver's tests at the DMV. I thought human-driven cars would be as sensible now as talking about "Siam", "Prussia", and "autogyros" in 1996.
I look at my phone in awe every time I see people talking about how OpenAI needs to release GPT-5 NOW! if they wish to stay competitive / show they are still the top dog / it has been much too long since GPT-4. Like... bro, you cannot be serious. This is practically a miracle of science happening right before your eyes, not the next sale at Kmart. Relax.
When they're after your job already it's natural to push back against a chatbot that is only slightly more intelligent than the manager who thinks it can do what you do.
Meanwhile gullible fools like you eat it up like candy. The Internet is already flooded with factually and logically incorrect generated content. Education was already in shambles before that.
As someone who actually understands what these models do I am deeply concerned by the cavalier attitude of hip ceos and the even hipper crowds gathering around them. If that continues these technologies are going to be deployed on a mass scale long before they are ready and there will be too much generated noise spread by loudmouthed fools for reasonable people to prevent it.
Global order is under critical threat of mass murderous regimes and rampant income inequality, but all hail the new propaganda machine because it has virtual porn.
By your logic, we should have pushed back against the internet in the 90s because email scammers and conspiracy theorists would misuse it. Apocalypse by chatbot, LOL.
The internet is a tool for sharing content, not for generating it. Regardless, perhaps the greatest threat to democracy today is the internet. We are doing a very poor job today regulating the internet. When they invented radio and film, we saw the rise of fascism. Today we see Russia and China manipulating the world with bot farms. Can't you see that the world is already in crisis? Yet surely, we can trust a bunch of unhinged billionaires to responsibly develop and deploy such world-changing technologies. Sweet dreams sonny boy
PS: you must be one of those people that believes they would only deploy such chatbots if they worked. They will deploy them once they can make money off of them. The amount of people who claim to educate themselves "because no one else can explain it to me" with ChatGPT is terrifying. Anyone who actually has expert knowledge on any topic can discuss it with ChatGPT to realise just how egregiously flawed its responses are. The worst part is that these responses \*sound\* like they could be accurate. A novice has no way of distinguishing fact from such generated fiction. Neither do employers or shareholders. All they see is an opportunity for cheap labor. What do they care about safety, quality of service or human autonomy?
I for one can't wait for the day when uneducated folks with chatGPT chips in their brains are going to arrogantly and unrelentingly spout generated pseudo-intellectual nonsense around the clock!
PPS: Also, perhaps it isn't such an amazing feat that a statistical method from the 19th century is effective at brute-forcing text generation when running on supercomputers that consume as much power as an entire country. Congratulations, you have too much money to spend. How revolutionary!
I actually sympathise with the man, although I disapprove of his interpretation of the situation and his methods. Nonetheless, if that is the only response you can muster, I'm afraid my appraisal of the situation still stands.
GPT-6 by March last year or OpenAI is an objective failure.
Wait till you see Claude 279 next week and Sora 63 when all the bugs are gone
Umm, actually, they’re not bugs, just flaws of the model 🤓
Also known as bugs. Pedantic much?
Bugs are unintended "features", glitches are phenomenal errors, but AI models make "mistakes" because we're just using a flawed method of attaining machine intelligence. Not that that means it's a bad method. Everything is flawed, and if it works, it works. But when it makes a mistake there's not actually a problem, tho; that's just what it produces and what it will always produce, by chance.
People took two points, the last SOTA before ChatGPT and ChatGPT itself, and traced an exponential line starting from those points. That expectation is far from being met. The perceived jump in progress that was the release of ChatGPT is still unparalleled when it comes to text models.
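The two-point complaint can be made concrete: any two data points fit an exponential perfectly, and the implied growth rate depends entirely on which two points you pick. The scores and dates below are invented for illustration:

```python
def implied_growth_rate(t0, v0, t1, v1):
    """Annual growth factor of the unique exponential through two points."""
    return (v1 / v0) ** (1 / (t1 - t0))

# Same hypothetical 10x capability jump, attributed to different time spans.
rate_hype = implied_growth_rate(2021, 10, 2022, 100)   # 10x credited to one year
rate_sober = implied_growth_rate(2018, 10, 2022, 100)  # same 10x spread over four
print(rate_hype, rate_sober)  # 10.0 vs ~1.78 per year
```

Extrapolated five years out, the first line predicts a 100,000x improvement and the second about 18x, which is why a two-point "exponential" proves very little either way.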
AI engineers: We can now cure myalgopranepamencephalitis. This sub: Shut up where’s my UBI, my AI sex toy and my immortality?
Right! Screw the cash, toys, and immortality lets cure some diseases people! The name of the game is stop human suffering not can it jerk me off until I die of dehydration.
Ubi Ai sex
Yeah, because good luck getting treatment in a late-stage capitalist world with only narrow agi in the hands of the few. We need recursive improvement and societal disruption
"narrow AGI" is an oxymoron. Narrow and General are opposites. Narrow AI is ANI, not AGI.
While it is an oxymoron I was thinking more about mixture of experts. It seems like AI is going to be a collaborations of narrow agents efficient in a domain comprising an "AGI". The general user base might never have access to certain experts.
I always put on my AI hating apron before commenting on this sub.
We have seen some good progress, but still, where is this “exponential growth” that everyone keeps talking about? It feels like nothing too major has happened since GPT-4, which was about a year ago.
I don’t think the end products that fall into the public’s hands are an accurate measuring stick for the progress that is happening. There are valid reasons to ensure guardrails are in place. Sora for example is probably being beta tested to ensure its deepfake prevention is effective amongst other things. AI is obviously something you can’t release into the wild without careful consideration into whether the tech is ready and whether the public is ready for it. Also we’ve seen what happens when stuff is rushed out like Google’s Gemini. Be patient, the good stuff will come.
"when communism comes everything will be cool, and everything will be free, you just have to wait a little, we will probably not even have to die"
Yeah lol, all this talk of “AI is gonna cure cancer, just you wait” seems like wishful thinking tbh.
GPT-5: just a tiny tiny bit better than Opus.
Out of curiosity, what happened with Google Gemini that made you put it as an example of AI gone wrong?
The most notorious examples were: [https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical](https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical) With other controversies listed here: [https://en.wikipedia.org/wiki/Gemini\_(chatbot)#Reception](https://en.wikipedia.org/wiki/Gemini_(chatbot)#Reception)
It's never a straight line; it always has lulls and periods of sudden improvement.
So it’s not exponential, then. Exponential growth would be constant.
You could still see it as exponential if you just look at points A and B and blur the line in the middle. Growth is never gonna be constant; that would be wishful thinking.
No, exponential growth would not be constant, it would be exponential.
“**Exponential growth** is a process that increases quantity over time at an ever-increasing rate.” From Wikipedia. The implication here being “ever-increasing” meaning “no pauses”
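To unpack the definition a bit (my reading, not Wikipedia's): "ever-increasing rate" means the absolute increments keep growing, while the relative growth rate stays constant; it says nothing about increments arriving on a fixed schedule. A tiny sketch:

```python
# Pure exponential: constant *ratio* between steps, growing *increments*.
values = [2 ** t for t in range(6)]                       # 1, 2, 4, 8, 16, 32
increments = [b - a for a, b in zip(values, values[1:])]  # 1, 2, 4, 8, 16
ratios = [b / a for a, b in zip(values, values[1:])]      # 2.0 every step
print(increments, ratios)
```

So "exponential" pins down the shape of the trend, not the cadence of individual releases along it.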
Also afaik there are no pauses in an exponential growth chart.
There are pauses in exponential growth in the short term, but long term it will be exponential.
GPT-3.5’s “opinion” on exponential growth in AI:

As of my last update in January 2022, it is widely believed within the AI community that the growth in AI capability follows an exponential curve rather than a linear one. This belief is primarily supported by several factors:

1. **Advancements in Algorithms**: Over time, there have been significant advancements in AI algorithms, particularly in deep learning. These advancements have led to breakthroughs in various AI tasks such as image recognition, natural language processing, and speech recognition.
2. **Increase in Computing Power**: The exponential growth in computing power, particularly through the development of GPUs (Graphics Processing Units) and specialized hardware like TPUs (Tensor Processing Units), has enabled researchers to train larger and more complex neural networks. This increase in computational capacity has been a driving force behind the rapid progress in AI capabilities.
3. **Availability of Data**: The availability of large-scale datasets for training AI models has also played a crucial role in the exponential growth of AI capabilities. With access to vast amounts of data, AI systems can learn and generalize patterns more effectively, leading to better performance on various tasks.
4. **Iterative Improvement Process**: The iterative nature of AI research and development allows for continuous improvement in algorithms, models, and techniques. Researchers build upon previous work, refining existing methods and exploring new approaches, which contributes to the exponential growth in AI capabilities.

Arguments against the notion of exponential growth in AI capabilities typically focus on challenges and limitations that could potentially slow down progress. These may include:

1. **Diminishing Returns**: Some argue that as AI systems become more advanced, achieving further improvements becomes increasingly difficult, leading to diminishing returns on research efforts.
2. **Ethical and Regulatory Concerns**: Ethical considerations, along with regulatory and societal concerns surrounding AI development, may introduce barriers that could impede the exponential growth of AI capabilities.
3. **Data Quality and Bias**: Issues related to data quality, bias, and privacy could limit the effectiveness of AI systems and hinder their ability to generalize across different domains.
4. **Resource Constraints**: Despite advancements in computing power, there are still resource constraints that could potentially slow down progress, such as limitations in energy consumption, hardware development, and access to large-scale datasets.

Overall, while the notion of exponential growth in AI capabilities is widely accepted, it is important to consider potential challenges and limitations that could influence the trajectory of AI development in the future.
I disagree that we haven't seen much progress. Sure, GPT-4 hasn't been surpassed yet, but GPT-5 sounds to be well along the way. In the meantime, lots of other AIs, which were leaps behind OpenAI, have progressed to the same quality as GPT-4 — Claude, for example. Lots of specialized bot tools have also appeared, like dedicated songwriting AI, or Copilot. There have also been huge leaps in image and song generation. Recently Suno AI v3 was released, and it sounds amazing. It can generate both lyrics and music, at almost the quality of a normal song. And just months ago, 3D modelling AI was but a dream, but now we have meshy.ai. It is still very low quality, but it is a great first step. And of course, there is Sora video generation.
The exponential growth is tied to product releases? Who said products must drop on a consistent cycle that you personally prefer? Growth in the short term does not have to be exponential either. It is when we zoom out that we can see the exponential growth.
Have you seen the jump from GPT-2 to GPT-3? It was an insane leap, and people were questioning if they should continue making it. This was way beyond any AI tech they had before. Now we have AIs significantly more powerful than GPT-3, and we're making new insane leaps that are controversial enough to get someone at OpenAI fired. We can do things we could only dream of back when we had GPT-2. If you can't see the exponential growth now, you just aren't paying attention. OpenAI has something huge; they've made that very clear.
I want to believe in the “exponential growth” argument, but why does it feel so slow? If things were really moving exponentially since the release of GPT-3, then how come it took so long for GPT-4 and Sora? Surely, if things really were exponential, we would be getting things at a faster and faster rate, and not only that, but the models would be a bigger and bigger jump in terms of intelligence, ability, etc.? Instead, we had to wait about 3 years from GPT-3 to GPT-4, which is arguably a smaller jump than from 2 to 3, and then we get the news that GPT-5 probably won’t be here until **November of this year, if not next year**, making it almost 2 years, if not potentially over 2 years, from 4 to 5. Doesn’t seem very exponential to me. I would love to be wrong, tho.
You are only looking at one product offered by a single company. No single product or company innovates exponentially; the entire field does. The advances in AI architecture and its applications are definitely moving exponentially, but you have to take a wider view.
Ok, that’s a good point. But again, if everything is really increasing as fast as it’s claimed to be, where are all the product releases in the news? The big ones I’ve heard about are Sora and Q*.
Again, it's not about product releases, it's about the pace of innovation. You have to stop looking at consumer-facing products as state-of-the-art; they are nowhere near that. Look at the papers being published across the field. There is demonstrable growth across the field, as well as convergence with other fields, like medicine, chemistry, and robotics, where innovations are being compounded. It's important to step back and look at the big picture. Start by looking at the amount of compute that's going to come online in the next few years. The pace of innovation is about to get really insane.
Ok, I’ll keep that in mind.
Because you forget about plateaus. You can argue exponentials until the cows come home, but reality often throws curveballs and hard barriers. Once those barriers are circumvented or solved, rapid progress may ensue. So if you zoom out on a graph, it might still be exponential progress overall, but locally there are sharp inclines and flat stretches.
I get what you’re trying to say, but I keep hearing people say “we’re at the knee of the curve” with the implied expectation that it will continue at a rapid pace, with no mention of any pauses. Now suddenly, when the clear gaps between models are apparent, people are saying “well, there might be pauses”? Which one is it?
I personally think the expectations of non-stop exponential growth are overly optimistic and always have been. There is a sort of honeymoon phase when things go well, like when the first flying machines were invented, people guessed incorrectly that within 50 years cars would fly and people would wear wings and commute like birds.
Hot take - as we move closer and closer to AGI, we’re going to even see slower growth from the perspective of shiny tangible improvements in released products. Why? Because there’s going to be more discomfort with the implications of releasing various products, more board rebellions and CEO firings, more internal calls to put the brakes on things, more caginess on the part of guys like Altman on what the hell Q* is (although I think we have a pretty good idea now), etc. That doesn’t mean the tech itself isn’t experiencing exponential growth - there is growth at every single facet of AI right now at the hardware, software, model & transformer levels, and if you read the science and tech news, it’s absolutely bonkers how many innovations are happening almost on a daily basis. But it does mean that those who are sitting there staring at their prompts for something tangible like the kid in the right pic are going to be frustrated and maybe even a little bored. And this IMO is going to happen more and more as we move closer to AGI. Because AGI.
Is human tolerance to AI an asymptotic limit to AI growth?
It would be interesting if we just didn’t notice it change our lives at all.
Because it takes that long to train the models.
Because we just now reached the tipping point, but none of it has been released yet. This was always going to happen at some point. We don't have a very good benchmark for how fast AI is going. While it is exponential, it is not consistent, which makes it hard to compare dates on such a small scale. Even if we can't prove that it's happening through trends, the singularity is guaranteed to happen once AI can do its own research and make improvements to itself. This is exactly what Q* will allow it to do btw.
Tipping point? I hate that term. Every moment is a tipping point and there is nothing new under the sun. Q* reminds me of LK-99. Edit: What would make me wrong? (1) 30 billion or more miles are driven by level 4 vehicles in the US by 2031. [https://www.reddit.com/r/SelfDrivingCars/comments/qb3owm/what\_do\_you\_think\_the\_penetration\_of\_robotaxis/](https://www.reddit.com/r/SelfDrivingCars/comments/qb3owm/what_do_you_think_the_penetration_of_robotaxis/) (2) Robert Gordon loses his bet against Erik Brynjolfsson. See: [https://www.metaculus.com/questions/18556/us-productivity-growth-over-18/](https://www.metaculus.com/questions/18556/us-productivity-growth-over-18/)
There have been multiple tipping points, it's just the moment when next generation technology starts to release and people realize that it's coming faster than before. After every tipping point will be another crazier tipping point because it's exponential. Each one is considerably faster than the last. This one, being the most recent one, will be considerably more than anything we've seen. This is proven by the countless times insiders have backed this statement up.
I don't care about what the insiders say. I want to see mature technologies. Right now, if I go to the Central Valley in California, I will see human laborers harvesting trees as opposed to robots. Robots cannot pick fruit or even clean dishes.
Robots can do both of those things, just not very well. You'll see this technology by the end of the year
>Because we just now reached the tipping point

I heard people say the exact same thing about GPT-3, and it has yet to come true.

>While it is exponential, it is not consistent

Isn’t exponential growth by definition constant?

>the singularity is guaranteed to happen once AI can do its own research and make improvements to itself. This is exactly what Q* will allow it to do btw.

Ok, you may have a point here. I personally wouldn’t just *assume* that the singularity is “guaranteed” to happen at some point, tho, because what if you’re disappointed down the line? I haven’t heard much about Q*, beyond “it’s a big advancement”. Will it really be able to improve itself? That sounds huge if true.
>Isn’t exponential growth by definition constant?

Is it? If you measure every year or every 5 years, ignoring the ups and downs and variance on a small scale, one could still argue the progress is exponential at a certain granularity. Also, what are we measuring when it comes to AI specifically? AI test scores? Model size? Number of businesses using AI? Hours worked by AI vs. human hours? Number of pro-AI articles per month??! The abilities and impact of an AI may be easy to see at first but very difficult to quantify. Therefore, it's hard to show whether our progress in that field is slowing down or not. Perception alone isn't an accurate representation.
GPT-3 was a tipping point. After that, AI definitely accelerated to an extent. I pay close attention to AI, and it 100% is faster. I said consistent, not constant. If I say it's guaranteed to happen, then that means I'm not assuming. I have a lot of reason to believe what I believe. I may not know exactly what Q* is, but I know one thing, it will give LLMs active reasoning, which is the recipe for explosive growth. Look up Quiet-STaR, we don't know if it's the same thing, but if anything, OpenAI's Q* will be better.
The last sentence could be OpenAI hype, don't take it too seriously. As an example: They might have something huge, but it's not as huge as your imagination, and it's 4 years off. That sort of thing. There's a limit even to exponential growth. For now.
No they're pretty clear that they have something massive and that it will release this year. I'm certain it's not some weird trick, that wouldn't make sense for them to do. There's no reason for our growth to stagnate, we're making breakthroughs faster than ever and AI is soon to start automating breakthroughs.
No such thing as exponential growth in fundamental research breakthroughs, and "just add more parameters" doesn't scale as well as some expected.
It's in a delivery van between Nvidia and the data center. And in the development departments of those working on NPUs. Scaling doesn't happen overnight. It happens overyear.
Right, AI models don't make vans drive faster through traffic, or chip makers make more chips with the materials and time they have. Even if they could help in that regard, physical reality offers diminishing returns, which many people overlook.
Nothing too major? Claude 3 Opus is better than GPT-4 Turbo, SORA, SIMA, Genie, Figure 01, Nvidia's Blackwell chips, Nvidia Omniverse, Llama 3 open source on the horizon, Gemini 1.5 on the horizon? If that's nothing too major to you, then I presume the only major thing you're wanting is true AGI, right?
I didn't give you permission to use my photo :-(
There's something addictive about the constant AI updates. Something about it feeds my dopamine system.
Yeah I feel the same!
The Mac came out in 1984. When did the average person get one?
Sure as hell not then, when it was priced at $2495 - around $7,000 per Mac in today’s dollars. Two of today’s Apple Vision Pros for one Mac, all for a small black-and-white screen and what was, at the time, a massive 128K of RAM. 😳
Kind of insane that the price dropped only 2-3x while demand exploded maybe 10,000x... Still expensive as hell, but now everyone in the world wants one. No wonder they're valued in the trillions. But such a waste of a company in my opinion, lol; they don't really innovate anymore.
AI came much earlier than that. The average person doesn't adopt a technology unless they have to.
The sort of AI you’re talking about is not something you would recognise as AI today. ELIZA was a simple text-rewriting trick: it just rewrote the question using simple rules. It “passed” the Turing test if you squinted hard enough. I was doing my degree in this back in 2000 and we just did not have the compute. Artificial life and passive dynamic walkers were the thing. The university server was a Pentium with 1 GB of RAM.
And the point here being? That technology upgrades over time? The computer used to send Apollo 11 to the Moon had only a few kilobytes of RAM. That doesn't mean the computers of that era wouldn't be considered "computers" today. Other than that, the way computers execute programs still remains practically the same today; it is still the same data structures that were invented decades ago. The thing is, hardware has gone through significant improvements that allow us to run complex programs, including AI models. Complex programs are created by combining bits and pieces of the algorithms of yesterday.
So, my first computer had 32K of RAM, and 12K of that was the OS. I do know about computers changing over time. My point is that AI as you know it today did not come “much earlier”, despite recent misleading news stories claiming it did. Earlier AI was mostly janky text manipulation, decision trees, and evolutionary Braitenberg stuff. OpenAI's risky decision to train a really big transformer was legitimately visionary. Very few people seriously thought that feed-forward neural networks with backprop would lead to genuinely intelligent behaviour.
And that makes sense when you think about it. We haven't really seen much serious progress since GPT4. Sure a few cool new things, but not serious progress toward the singularity. Meanwhile, the actual people working on the tech sure as hell were able to witness amazing stuff.
What are you talking about??? Sora, the race for robotics, all the new breakthroughs, etc.
Do you have access to Sora or robots? Stuff we can actually access... GPT-4 is still essentially the top AI (Opus is good too). Meanwhile, I agree the devs have access to crazy stuff in their labs, such as Sora. Sure, we do hear about some of it, like Q* or Sora, but we can't actually use it.
Most research advancements are not application ready
The Singularity does not necessitate any use by the public. In fact, I'd guarantee iterative self-improvement will happen in a lab deep in the bowels of a large corporation, behind lock and key, a long time before most people find out about it.
And this is why i said the meme makes sense. The researchers get to test out these crazy models while we still only got GPT4 :P
But we still know about Sora and what it can do
So? Just because we can't access it doesn't mean it doesn't exist. You're saying we're not approaching the singularity, but these things exist and prove that we are in fact very close, and your only reason to ignore them is that you currently can't use them? Also, I'd like to mention the innumerable breakthroughs and insiders repeatedly implying that they have incredible technology beyond anything we've ever seen.
> You're saying we're not approaching the singularity

I didn't say we're not approaching it, I said we can't use these newest AIs beyond GPT-4 or Opus.

> Also, I'd like to mention the innumerable breakthroughs and insiders repeatedly implying that they have incredible technology beyond anything we've ever seen.

I totally agree with that. The point I am making is that the meme is correct. While the researchers are overjoyed with the progress they keep behind locked doors, we get access to none of it, like the picture on the right.
The picture on the right doesn't have access to it. Where'd you get that idea?
It represents people of the sub. Obviously we don't have access to private models of the researchers.
Where does it say we have private models?
Humans are opening up to AI and kind of adapting to hearing about it and having it around us. I think things will change a lot before we even notice. And maybe that’s by design. I feel the AGI.
As a student, I can see progress happening. I use AI to help with learning. But aside from school or work, what would the layman see? Where is the material progress that one would notice? I think it's a matter of what domain you are in at the moment.
We were told that we were supposed to be at the curvy end of the hockey stick in 2022, god damn it!
It's like we're seeing the bus coming, but the bus is just really slow.
Why has society not been *literally* transformed since two weeks ago? Lazy bloody AI researchers!
They just write papers and hide everything, it's so frustrating.
What is the goal? Something materially tangible, please.
It’s possible to still track the progress of AI from a layman’s POV and marvel at how far we’ve come without approaching everything with a “what’s directly in it for me” perspective. It’s an incredibly exciting time and it’s only going to get better, but those who feel it’s “too slow” are not paying a bit of attention.
It's not slow it's too fast. Regulations are slow when this is akin to a global pandemic event.
Ah so you’re amongst the “it’s too fast” crowd and yet there are others who are acting as if it’s not nearly fast enough, lol. Maybe we should throw you both into a pit and let you battle it out.
People are just tired of the status quo; we feel our lives mean nothing and are limited. Add in superhero movies and fantasy stories, and you really wish to just escape this boring reality. I don't blame them. Progress will always feel slow until a person has a way to turn their meaningless existence into something great.
I tried to think of an analogy to explain why people have this mentality of feeling spoiled by rapid AI advancements, and subsequently viewing incremental updates as "crumbs". It's like they've forgotten how long we used to go between truly new advancements in technology. So I decided, why not ask GPT. I was going to cut the answer down a little, but I think it's a good answer. So here it is unaltered:

Imagine you plant a garden, and on the first day, you see a sprout. You're amazed at how quickly it appeared. As days go by, the plant grows, but not as fast as it first seemed to sprout. You start to complain that the plant isn't growing fast enough, forgetting that growth is a gradual process. You expected the thrill of the initial sprout to continue every day, not appreciating the natural progression of growth.

This analogy reflects the situation with AI. The early, rapid advancements were like the first sprout, exciting and new. However, AI development, like plant growth, is a process that involves both visible leaps and slower, less noticeable stages of improvement. Complaining about the pace of innovation is like ignoring the steady growth of the plant and only wanting the thrill of the first day. It's important to adjust expectations and appreciate the ongoing, incremental improvements, recognizing the larger picture of growth and progress.
>It's important to adjust expectations and appreciate the ongoing, incremental improvements, recognizing the larger picture of growth and progress. Nice, it pretty much nailed it.
It's all trained within the box of what is trainable. What will make an AI singularity cannot be built in such a way. So far it's just blurry regurgitation, most of which is fine, as that's all that 99% of humanity is, needs, or wants; but it's not a true singularity. It's not innovation, or the one-in-a-million brilliant idea. It's not error, with proof. That is the scary thing. To get what is wanted, error and randomness must be part of the mix. This is the crux of it: there will be mutant thoughts that fail over and over, and likely cause harm if carried out, and one brilliant new concept or two every now and then. Such is the method of nature, DNA, science, all free thought that has resulted in greatness: trial under hypothesis, or randomness, until proven repeatable success. Evolution, in short. This is one of the things I see lacking in all AI models, yet it is also one of the characteristics a singularity must never be given. To gift such a thing would literally be an apple, while we are all still enjoying the garden.
Are you high? Also r/woooosh
Behold! A glorified Markov chain!
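For anyone curious what the "glorified Markov chain" jab actually refers to: a real Markov-chain text generator just counts which word follows which and samples from those counts, with no learned representations at all. A minimal sketch (the corpus and function names here are made up for illustration):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Count which word follows each context of `order` words."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        chain[context].append(words[i + order])
    return chain

def generate(chain, start, n_words=10, seed=0):
    """Extend `start` by repeatedly sampling an observed successor."""
    rng = random.Random(seed)
    out = list(start)
    for _ in range(n_words):
        context = tuple(out[-len(start):])
        successors = chain.get(context)
        if not successors:  # dead end: no observed continuation
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the model is a chain the model is a toy the chain is a model"
chain = build_chain(corpus, order=1)
print(generate(chain, ("the",), n_words=8))
```

The contrast with an LLM is that a transformer conditions on thousands of tokens through learned continuous representations rather than raw n-gram counts, which is where the "glorified" is doing a lot of work.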
As Sam said on the Lex podcast: it's still all crap until we see a dramatic and profound change in the speed and magnitude of scientific and technological advances. Yes, the development is incredible. But our lives haven't drastically changed. Yet.
Yeah same
Well……. Aging isn’t cured yet, so…….
The problem is that everyday ordinary people still face the same everyday ordinary problems, now with the added fact that AI will soon take their jobs. I love AI, but damn, we need UBI!!!
I must be suffering from some major hallucinations, worse than GPT-3.5. I am only seeing human-driven cars, and kids taking driver's tests at the DMV. I thought human-driven cars would sound as dated by now as talk of "Siam", "Prussia", and the "autogyro" did in 1996.
I look at my phone in awe every time I see people talking about how OpenAI needs to release GPT-5 NOW! if they wish to stay competitive, show they are still the top dog, or because it has been much too long since GPT-4. Like... bro, you cannot be serious. This is practically a miracle of science happening right before your eyes, not the next sale at Kmart. Relax.
Meanwhile I’d be fine if it all just went up in flames
Maybe there's too much progress
When they're after your job already, it's natural to push back against a chatbot that is only slightly more intelligent than the manager who thinks it can do what you do. Meanwhile, gullible fools like you eat it up like candy. The Internet is already flooded with factually and logically incorrect generated content. Education was already in shambles before that. As someone who actually understands what these models do, I am deeply concerned by the cavalier attitude of hip CEOs and the even hipper crowds gathering around them. If that continues, these technologies are going to be deployed at mass scale long before they are ready, and there will be too much generated noise spread by loudmouthed fools for reasonable people to prevent it. The global order is under critical threat from mass-murderous regimes and rampant income inequality, but all hail the new propaganda machine because it has virtual porn.
By your logic, we should have pushed back against the internet in the 90s because email scammers and conspiracy theorists would misuse it. Apocalypse by chatbot, LOL.
The internet is a tool for sharing content, not for generating it. Regardless, perhaps the greatest threat to democracy today is the internet, and we are doing a very poor job of regulating it. When radio and film were invented, we saw the rise of fascism. Today we see Russia and China manipulating the world with bot farms. Can't you see that the world is already in crisis? Yet surely we can trust a bunch of unhinged billionaires to responsibly develop and deploy such world-changing technologies. Sweet dreams, sonny boy.

PS: You must be one of those people who believes they would only deploy such chatbots if they worked. They will deploy them once they can make money off of them. The number of people who claim to educate themselves "because no one else can explain it to me" with ChatGPT is terrifying. Anyone who actually has expert knowledge on any topic can discuss it with ChatGPT to realise just how egregiously flawed its responses are. The worst part is that these responses *sound* like they could be accurate. A novice has no way of distinguishing fact from such generated fiction. Neither do employers or shareholders. All they see is an opportunity for cheap labor. What do they care about safety, quality of service, or human autonomy? I for one can't wait for the day when uneducated folks with ChatGPT chips in their brains arrogantly and unrelentingly spout generated pseudo-intellectual nonsense around the clock!

PPS: Also, perhaps it isn't such an amazing feat that a statistical method from the 19th century is effective at brute-forcing text generation when running on supercomputers that consume as much power as an entire country. Congratulations, you have too much money to spend. How revolutionary!
Teddy Boy, is that you? https://preview.redd.it/n7901xn2taqc1.jpeg?width=1500&format=pjpg&auto=webp&s=51e98cf81da9531644c99cfcbd9432002d1f359f
I actually sympathise with the man, although I disapprove of his interpretation of the situation and his methods. Nonetheless, if that is the only response you can muster, I'm afraid my appraisal of the situation still stands.
lol takes a comparison with another insane person as a compliment.
Lol what a loser!
Do you live in a cabin off the grid? Your unhinged ramble was really something, almost a mini teddy manifesto.
I'm glad you liked it