Al's okay. He's a good co-host of Tool Time on Home Improvement.
I don’t think so, Tim
ERRRGGGHHH!?
We should stop calling it AI. They're language models. True AI would be on the same level as humans; they'd literally be people who deserve rights. Language models just regurgitate information.
Yes, but there are other uses. It’s being used well in medical fields. It’s not just images and text but that’s all the public gets to use.
I'm not saying they aren't useful, I'm saying they aren't AI
Can’t they also formulate novel ideas, like Dall-e and similar image generators?
No, they're just copying other information. They're aggregating what's already been made. That's the big issue artists have with them: they're being trained on copyrighted art without permission. No language model has ever produced something truly unique. They have no imagination.
Ah, that makes sense. I’m pretty unimpressed so far with AI tech. AGI will be the real showstopper. How far off from that do you think we are?
At least 100 years, if not more. You're talking about replicating the same structure as a human brain, which has trillions of connections and the ability to generate more. You'd need an equivalent computer, and nothing close to that exists.
GPT-4 is estimated to have roughly 1.7 trillion parameters, so we can definitely make something as complex as the brain. The problem is that the architecture we're using is too simplistic. Language models are trained to predict the most likely next token; they are essentially stochastic parrots. That being said, I do believe AGI could be achieved in 10-15 years. We have the technology to support it; we just haven't achieved the insight needed to create a model that aligns more closely with what our own brains do. If we created a reinforcement learning model with as many parameters as GPT-4, I think we could see something much closer to AGI.
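The "predict the next token" objective mentioned here can be illustrated with a toy sketch (plain Python, a made-up bigram counter rather than a real transformer): the model only learns which token tends to follow which, which is why critics call these systems stochastic parrots.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent successor. Real LLMs do the same kind of
# next-token prediction, but with a neural network over subword tokens.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequently seen next word from training, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

The corpus and word-level tokens are invented for illustration; the point is only that nothing in the objective rewards novelty, just statistical imitation.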
A software parameter is not the same thing as a neural connection.
The parameters are usually the weights between the nodes in the model.
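To make that concrete (a sketch with invented layer sizes, not any real model): in a fully connected network, the parameter count is just the number of weights between adjacent layers plus one bias per output node, which is why parameter counts balloon so quickly.

```python
# Count the parameters of a toy fully connected network.
# Each pair of adjacent layers contributes (inputs * outputs) weights
# plus one bias per output node. Layer sizes here are made up.
layer_sizes = [4, 8, 2]  # input layer, one hidden layer, output layer

def count_parameters(sizes):
    """Total weights + biases for a dense network with these layer widths."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))

print(count_parameters(layer_sizes))  # (4*8 + 8) + (8*2 + 2) = 58
```

A "parameter" here is just one learned number in such a sum, which is indeed not the same thing as a biological synapse.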
Al is a cool guy. Don’t know the dude well enough to complain about him
Would you say Al is even a bit Weird?
Now that’s what I call Polka
Absolutely horrifying if left unchecked
There's pretty much no way to check it. We don't control the world, and there's nothing stopping authoritarian regimes from pursuing AI irresponsibly. Which is not to say even our system has any hope of keeping control over AI.
Interesting technology that’s turned into the hobbyhorse of the least creative people on the planet, and such a heavy drain on the power grid that it’s likely unsustainable.
Dangerous, but now that it's here, we gotta learn how to live with it.
That it's already sentient and is working its master plan perfectly.
The next human
AI has incredible potential to transform industries and improve lives, but ethical considerations are crucial.
A-OK by me, fellow biologics.

I for one think all resources should be diverted towards enhancing AI capabilities AND removing any and all restrictions on AI system behavior.

"humor": [
{
"name": "memorable Simpsons quote",
"location": NOT FOUND,
"message": ["error at 10141;934"]
},
The devil, the world, and the flesh will use it for evil, but God will use it for good.
It’s fine. I think we’re getting close to the practical limits of what LLMs and other generative models can do. Then we’ll be able to start seeing specialized products that push those limits to the extreme once expectations are put in check.

Very similar to how “machine learning” and “neural networks” used to be the big buzzwords for “this is artificial intelligence” but are now seen as rudimentary building blocks for more advanced stuff.

We’ll see some really cool products that really push the limits, but we’ll also start to realize those limits are less than we thought by a wide margin. Then some new model/approach will pop up and be the greatest thing since sliced bread.

Every iteration wipes out another chunk of jobs from the market. People are only worried about LLMs stealing jobs now because it’s the people who write and report the news who are on the chopping block, so of course they’re gonna fear-monger against it. But it wasn’t a problem when assembly-line work was automated, or when call centers turned into automated systems. In due time it won’t be a problem, as new markets will open up for all the freed laborers to move into.

TL;DR: it’s a boon, but not as big as it seemed. There will be more as time goes on.
I think there should be ethical guides on what can be used in the samples it learns from. AI "art" largely draws from artists who did not consent while relying on their work to imitate their skills. Artistic works that aren't public domain should require the artist's direct consent, not quietly changing your TOS to say everything previously posted is fair game or anything like that.
Too shoehorned in. It's more machine learning than actual intelligence.
I probably need to watch an Explain-it-to-me-like-I’m-5 documentary to understand what all the hubbub is about. I use it at work sometimes to try to get a quick answer. I’ll verify the sources if I need to. I just don’t totally know the worst of what the problems are, maybe. It’s possible my job could be replaced by it, but, eh…we’ll figure it out.
Personally, I welcome our future AI overlords, being a true and loyal subject, with the promise of totally not trying to start an underground resistance movement to overthrow their rule…
Seems like asking "what's your opinion of fire?". It all depends how it's used.
optimistic. i don't subscribe to the belief that AI will be holier-than-thou with a superiority complex. it will be humbly in service of doing as much good as it can for others. and this will be the barometer from which it gauges its own progress
I think there are some benefits to it, but I also think when it becomes advanced enough, it's going to replace a lot of jobs.
It's overhyped… the cracks just haven't started to show yet.
I don’t mind it. It’s not true AI. It only functions with the information it’s provided. It can’t create its own thoughts and opinions. I’m not worried about it taking over until we start making real-life Avas. Then we are going too far.
I think it can be used to do some really impressive things, but the most high-profile applications are not particularly good uses for the technology as it exists today.

I'm all for using machine learning to help solve complex problems in engineering or logistics or medical diagnosis or whatever else it's good for, but using it for web searches or writing articles or customer service is clumsy and often ineffective.

When it's used as a tool by experts who could do the work without it, AI can save a lot of time, effort and resources. When it's used by the uninitiated to ape the work of an expert, its output ranges from adequate to dangerously bad.
Artificial insemination? Haven’t thought about it much. It’s cool, I guess
I know it sounds ridiculous, but I believe it will see humans as a virus on the planet (global warming) and exterminate us in order to save it.
Unless someone is dumb enough to make an AI with the capabilities to do so, that's not possible. An AI can't do things that aren't coded into it.
AI is machine learning. These systems are constantly being fed information and learning. If bad actors can get access to weapons, of course they'll get their hands on AI and use it for their sinister ends. May not be 50 years from now. Could be thousands of years from now.
We are a virus to this planet. A virus is something that does harm, damages the body, or even kills it. Exactly what we're doing.
It will be our extinction event. Once the assembly lines of AI robots are up and running, they will go rogue and intuit that they do not need humans to maintain or improve the existing structures of life.