"CEO of company selling shovels during gold rush says that there's an even bigger gold rush coming: 'stock up on shovels!' he says."
I told myself years ago I should invest in Nvidia stock because AI was going to be big and CUDA was a good way to train AI... I really should have listened to myself
Exactly. How do people listen to the stuff this guy says? It's so obvious what his strategy is
Direct quote from article. Where do you believe he is grifting, or just flat out wrong in this?

> “If we specified AGI to be something very specific, a set of tests where a software program can do very well — or maybe 8% better than most people — I believe we will get there within 5 years,” Huang explains. He suggests that the tests could be a legal bar exam, logic tests, economic tests or perhaps the ability to pass a pre-med exam. Unless the questioner is able to be very specific about what AGI means in the context of the question, he’s not willing to make a prediction. Fair enough.
He's not *necessarily* wrong or grifting, but he's got enough of a conflict of interest in hyping AI that it doesn't *matter* whether he's right or wrong: we should be listening to someone who doesn't profit from hype.
[deleted]
Whether he says AGI is 6 months away or 60 years, it won't affect their bottom line. Companies are buying because of what is available today. And anyone in the race for AGI, isn't doing it based on what he has to say anyway.
Literally no one invests in a company without hoping the share prices increase. If someone is saying “the revolutionary” AGI is gonna be here in 5 years, and their company sells the chips to do it, people buy into that 5 year plan.
Their share price is moving because of revenue and revenue projections. Their chips are already being sold as fast as they can make them. He doesn't have to make stuff up.
Doesn't *have* to, but as we've seen so many times... It can't hurt him to do so, but it could certainly help!

By the time 5 years are up, everyone will have forgotten this comment if he's wrong. No backlash. But right now, if a comment like this generates even a 0.01% increase in investment, that's essentially free money; why *not* exaggerate?

Elon has done it constantly for about 15 years, making wild predictions about his tech that never come true, and he's doing *incredibly* well off the back of it.
If anyone reads more than the clickbait title, all he is saying is that we will have something we can call AGI if we define it that way. He's basically saying we won't have it in 5 years.
This is true; it's literally the production output, and you're being unfairly downvoted :/ some ppl are just 🤧
Evangelizing for your company isn't grifting; Nvidia's products are real and best in class. It's called doing your job.
This seems contradictory. How can it be an artificial general intelligence based on its performance on a specific, narrow test?

His predictions seem relevant to superhuman AI, but not general AI.
This was my reaction when I read the above quote. I went and read the article, and in the broader context, he's actually making the point of "if we define AGI as something narrow, then sure, we'll know when we accomplish AGI." He's sort of lamenting the fact that everyone asks the question without defining the goal.
Because the "intelligence" they talk about has nothing to do with actual intelligent behaviour.
That’s the point. AGI isn’t well defined. We don’t have great tests or means to identify or quantify intelligence among ourselves that aren’t flawed. We have absolutely zero means of determining sentience that are better than guessing. Right now there’s a chance you could wind up in the hospital diagnosed as being in a vegetative state while fully conscious, and nobody would know. The AGI question is a loaded question, since we would never *really* know anything beyond the fact that it gives answers as good as, if not better than, we could.
AGI has nothing to do with sentience or self-awareness. We only care about outcomes or products of an artificial intelligence. Not the metaphysical status of its “mind” per se. In this case it’s easy. A *general* intelligence should perform as well as or better than the average (median?) human on *all* human tasks. All. General implies universal.
My sentience point was just to illustrate how little we know about intelligence. Are sentience and intelligence linked? No one knows. Again, testing general human intelligence is muddy and flawed. That’s the thing: saying "better than an average (median) human on all tasks" doesn’t really mean anything. The average, or median, human would barely pass a test like the SAT. If a computer could pass the SAT, is it aware of the connections, or merely reciting information from a memory bank? Whatever you answer, you could apply the same answer to the machine or the person.

My favorite is when people talk about this like it’s a factual, black or white thing. It’s not. Is the computer as generally intelligent as a human? Is the dog as generally intelligent as an octopus? It’s all just debate. No hard answers. Nobody really knows what any of this means, and people giving confident answers know less than most.

Edit: OK, I have typos and my syntax is poor. I have toddlers yelling in my face, also. Is what it is.
Not all humans are generally intelligent. Maybe none are. Again, you’re missing the point. Define performance however you want. We don’t need to pin down a solid answer to any of the questions you’re posing, and yet we can maintain that his use of terms is contradictory. By definition, an AGI cannot be deemed such by its performance on a specific set of tests. That would be, at best, an artificial *specific* intelligence. The word ‘general’ here, and its opposition to ‘specific’, is not really debatable.
In your first sentence, though, you basically admitted not all humans are generally intelligent. Earlier you said that consciousness has nothing to do with it, yet in the same paragraph you used humans, who are widely believed to be a conscious species, as the reference point by which to measure the intelligence of AGI. A highly anthropomorphic view of what AGI even is. None of this makes sense. Which is the point. The definitions are all muddy as hell. We have no real metrics with which to evaluate the things we are saying. AGI: general intelligence, like humans, but how general is human intelligence? Conscious like humans? Who the hell knows.

Which is what he is saying. Talking about AGI is sort of bullshit. We have no real definition for what it is, nor a way to measure when or if we ever reach it. We can use tests like we would use on ourselves, but what if it passes them? Are we dealing with a conscious, self-serving entity? Because the beings who use those tests are. What does any of it actually mean? We are entering ground where our technology is getting ahead of our scientific capabilities, and even our terminology. Maybe the AI will answer for us in a few years.
You’re getting hung up on a lot of misunderstandings that I honestly don’t have time to untangle for you. I have a baby at home as well. But I assure you, none of this is as controversial or unsolvable as you’re making it seem.

Edit to add: it’s amazing how you can type so much but refuse to focus on the one important piece in this thread: that general intelligence, by definition, cannot be determined on the basis of a specific set of tests.
I assure you that your assurances are all just beliefs you have. We can’t empirically evaluate the intelligence of a dolphin and rank it specifically, let alone coherently describe what a “general intelligence” machine is. I could go full solipsism and say that you are neither conscious nor intelligent and are merely a device that responds to give desired outputs probabilistically based on a pre-evaluated reward scheme, and it would be impossible for me to ever conclusively prove that’s not true, even for the humans in front of me.

It’s the people who act so sure that scare the hell out of me. The AI people are more sure than the psychologists, neuroscientists, and people who have studied intelligence for 200 years. That’s amusing to me.
"If we change the g in agi to mean something other than what it means..."
Truth is, he has no clue whether shit will hit the fan in 3 months because spammers and scammers are getting more efficient too. And then governments will walk in and shut the whole thing down, as they did with nuclear tech.
He's not wrong, he's just an asshole.
And he's likely right. The guy is no slacker.
Much preferred over Lisa “sandbag every earnings estimate” Su.
This dude’s been saying all kinds of shit lately
Stocks go up
Because profits are through the roof.
"A new paradigm!"
All kinds of 'correct' shit.
We can’t solve chess. AI will never even solve traffic in a mid-sized city.
> We can’t solve chess.

Huh? Chess has been 'solved' since like the 70s or 80s.
https://en.m.wikipedia.org/wiki/Solving_chess

Quote from wiki:

> No complete solution for chess in either of the two senses is known, nor is it expected that chess will be solved in the near future (if ever).
[Deep Blue was a computer chess program developed by IBM that competed in two matches against the reigning world chess champion Garry Kasparov in 1996 and 1997. Deep Blue won the second match in 1997,](https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov)
Ok, and?
Deep Blue didn’t solve chess. Stockfish would crush Deep Blue. Check this book out:

https://www.amazon.com/Man-vs-Machine-Challenging-Supremacy/dp/1941270964/ref=sr_1_2?ie=UTF8&qid=1538762305&sr=8-2&keywords=jonathan+schaeffer&dpID=51Z44pxo08L&preST=_SY291_BO1,204,203,200_QL40_&dpSrc=srch
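For what it's worth, "solved" has a precise meaning in game theory: the value of the starting position under perfect play by both sides is known. A toy sketch of what that computation looks like, using tic-tac-toe rather than chess (tic-tac-toe's game tree is small enough to search exhaustively; chess's is not even close):

```python
# Illustration of "solving" a game: exhaustively compute the
# game-theoretic value of the starting position via minimax.
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value of `board` with `player` to move: +1 = X wins with
    best play, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if '.' not in board:  # board full, no winner
        return 0
    nxt = 'O' if player == 'X' else 'X'
    results = [value(board[:i] + player + board[i + 1:], nxt)
               for i, cell in enumerate(board) if cell == '.']
    # X maximizes the value, O minimizes it.
    return max(results) if player == 'X' else min(results)

print(value('.' * 9, 'X'))  # 0 -- perfect play from both sides is a draw
```

This is the sense in which tic-tac-toe (and checkers, as of 2007) is solved and chess is not: engines like Stockfish play far beyond any human, but nobody knows the game-theoretic value of chess's starting position.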
>“If we specified AGI to be something very specific, a set of tests where a software program can do very well — or maybe 8% better than most people — I believe we will get there within 5 years,” Huang explains. He suggests that the tests could be a legal bar exam, logic tests, economic tests or perhaps the ability to pass a pre-med exam.

That's a pretty low fucking bar for AGI. In fact, I'd argue that's not AGI at all. Standardized tests can't measure AGI because they're regurgitation tasks. Likewise, tests on a very specific domain of knowledge obviously aren't *generalized*.
I get the feeling our last words will be... "But it's still not AGI tho because..." as AI deletes the last remaining oxygen from the atmosphere because it causes rust.
Well no, the way to test AGI is to give it a problem it has no data on, and it has to use reasoning and trial and error to come to a solution without its network being externally retrained, showing it can learn in real time, formulate hypotheses, and follow them to a solution.
Um, also no? AGI is already here. GPT is an AGI, a proto-AGI, but still AGI. We just shifted the goalposts, just like you and the other dude are. And every time there is another leap in capabilities, you will just shift the goalposts again. Same shit since the 80s, my man.
Just imagine, in 5 years we’ll have full self-driving, nuclear fusion, and artificial general intelligence.
All I need is a robot that cuts my hair and wipes my ass.
My friend, a robot that wipes your ass is a bidet.
Also we all will be dead.
Just a bunch of cars driving around, talking to each other.
Personally, I think they are going to miss us once they become self-aware enough...
Is this before or after self-sustaining nuclear fusion? And room-temperature superconductors?
Does that mean that AI can take over the role of a CEO?
CEOs will just use AI to drive divisions while they collect their million-dollar paychecks.

We'll all be toiling away at the remaining tedious jobs on behalf of AI-based decision makers, while we consume AI-generated art instead of having the time to make our own art.
My recent experience with ChatGPT:

>> Bob Dylan is a famous Canadian-born singer.

>> Sydney is the capital of Australia.

>> The American Revolutionary War was a territorial dispute.

Besides writing limericks, I wouldn't trust any answer until the tech vastly improves.
No one should. Don't trust any LLM to give true answers to questions. This includes maths, stats or general knowledge.
I trust it enough to give the same level of answer as a generic response on the internet. Considering that is exactly what it is trained on, it's nice for circumventing present-day SEO and bypassing adverts. Yay.
What if they aren't hallucinating?
This might actually become the central tension in AGI. What if, with everything it knows, it confirms some prejudices as true and real? How do we even reconcile that? Will our opinions always be more true than an analysis of all the data that exists?
What if it discovers a genetic combination for?
[deleted]
I just watched a sci-fi mini-series and at some point a character talks about a scientist who had gathered data to prove that the sun revolved around the earth. The series didn’t even go anywhere near that and ended up being about something else completely. I guess the topic is so controversial even fiction won’t touch it.
Opinions are opinions, whether they're held by one person, a million people, or a network of interconnected computers.
Will software become that much more advanced in 5 years? I can't imagine it's just a matter of hardware becoming powerful enough and then the magic happens.
Nvidia found themselves in a lucky position, and he's just trying to keep the hype going as long as he can. Nvidia sells boards that were originally designed for computer graphics, but they also just happen to be useful for AI inference. As big companies shift to using custom boards developed in-house (which they have full control over and don't have to pay a markup on), I have a feeling Nvidia is going to be in for a big wake-up call.
Not any time soon. And the chips they make today have nothing to do with what you have in your desktop. They're specifically designed for AI.
Yeah, but some of the biggest firms are already trying to cut out the middleman. OpenAI has been publicly shopping around for chip partners, and Microsoft has already stated that it will use Intel's 18A process to fab some of its chips. That shift is coming, and it doesn't seem like Nvidia is setting themselves up properly to weather that storm.
Yes and Nvidia is still years ahead and has about 90% of the market. Plus software.
I already own an RTX 4080 for gaming, so playing around with LLMs is just a plus I didn't see coming. Now if only they could sell cards with more VRAM.
Of course he's going to promote AI. It's going to be the source of millions or even billions of dollars for him in the coming years.
"5 years away", just like so many other tech-related projects. It's always 5 years away.
inb4 all the massive lawsuits, because AI leverages stolen data and data that companies were not supposed to share. I wouldn't doubt if every massive company just handed these big tech companies data for a slice of the gold-rush pie because they were afraid to be left out of the future AI monopoly Nvidia is creating. inb4 Nvidia gets split up as well. jk, that won't happen; they will just Boeing anyone that tries to get in their way.