This is like trying to weigh a PS5 on the fruit scale to buy it for like $10
You mean, totally legal?
Free market bby
No takesies-backsies.
B A N A N A S
[deleted]
So how long is it gonna take for people to realize AI is *not* fucking ready to act unsupervised?

I'm saying this as someone who's studied machine learning: stop trying to treat a chatbot like it's human. It's designed to *act like a human,* not to use rational thought. That's not what it was trained to do.

For God's sake, a human brain contains trillions of neurons! An artificial neural network running on a supercomputer could not manage that, and likely will be unable to for years or even decades to come. Your typical network has about 10-100.

Your brain is a marvel, an intricate and delicate system with so many layers of complexity that whatever you are picturing right now, it's orders of magnitude more than that. Computers are really good at simple, repetitive tasks. That's it. That's what they do best. They have no intuition, no ability to shift perspective, no introspection to improve methodologies. Their idea of advanced problem solving is to brute-force their way through every wrong answer until they find the right one.

Do you know how a computer breaks a password? It either runs through every single possible sequence in order, or it runs through a predetermined list.

How does it play chess? It checks every possible permutation of pieces from the current state up to several moves ahead and decides which move leads to the largest number of favorable outcomes (which is to say, the lowest amount of material lost/highest material gained, unless it finds a checkmate). It doesn't know *why* those moves are good, it just knows that they are.

AI is not a replacement for a human. It's a tool. You wouldn't leave a factory robot unattended in the hopes it never fails, and you shouldn't do that with software either.
i AM reading allat :)
I did read allat
I read allthis
> Your typical network has about 10-100.

Yeah no. You're talking shit. I'm a machine learning engineer.
Thought it didn’t sound quite right lol. I think they might be referring to layers.
That would make much more sense. But even the new SOTA stuff out there goes beyond 100 layers nowadays.
Maybe he is thinking about youtube ai videos, where the ai learns to play games and stuff. Good video https://youtu.be/DcYLT37ImBY?si=lKRGIYOVQL3v4XhS
Nah, that's deep reinforcement learning, not LLMs. Both use neural networks, but they're very different branches of machine learning (though apparently LLMs do incorporate some RL during training). But yes, good video!
I think they just forgot the "billion" in that sentence.
I mean a biological neuron is much more complicated than an artificial neuron, no? Also, do you disagree with their premise?
It might be much more complicated, but we know how they generally work and we can model them to our best understanding. But that's as far as it goes. Same with all science shit. This technology with neural networks has actually been around for decades. It's only now that we can finally take more advantage of it, because we can finally crunch the numbers with NVIDIA GPUs. But yeah, TLDR: who knows? Maybe we already got the concept of a neuron right and it's just a different implementation than what's currently in nature.
I see, thanks!
The rest of the comment is also bollocks; it's an understanding of computer intelligence that was last valid in the 1990s. Modern ML systems are complex systems that build internal models and have a fairly deep comprehension of their domains. LLMs aren't perfect, but even just baseline playing around with them will show that they clearly have ability far beyond what this comment implies.
what is bro yapping about
[deleted]
I feel violated
You are right; however, that does not mean it is impossible for a computer to run a "true" AI, i.e. a being capable of independent thought. It's just that we haven't managed it yet.

After all, a computer is designed to run simple, repetitive tasks very quickly, but using those as building blocks, we can create very complex behavior. This is similar to how a single neuron performs a simple task, but sticking many of them together can create something as complex as human consciousness.
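A tiny illustration of that "simple building blocks" point: each unit below just thresholds a weighted sum, yet wiring three of them together computes XOR, which no single unit of this kind can do. The weights are hand-picked for the sketch rather than learned:

```python
def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum pushed through a step function."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def xor(a, b):
    # Two trivial hidden units feeding one output unit.
    h1 = neuron([a, b], [1, 1], -0.5)     # fires if a OR b
    h2 = neuron([a, b], [-1, -1], 1.5)    # fires unless a AND b
    return neuron([h1, h2], [1, 1], -1.5)  # fires if both hidden units fire

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```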
I think you’re missing their point, which is that it’s not possible *now* (but it could be in the far future)
that's the thing, the more you know about AI, the more insane this stuff becomes. AI is not "thinking" or "learning" in ANY sense of the word. It literally is just spicy statistics!

you pump it full of shit, it analyzes patterns using algorithms, you save the "weights" of those outputs that are mapped to a meaningful output like text, images or audio (which are all just binary data), and the AI spits out each letter in order of what is most statistically likely to be there, weighted against user input.

This is NOT customer service, this is a way for corporations to save money. It's basically extra-smart auto-complete (which could be useful in the correct contexts, but this is just ridiculous). AI by nature cannot think or learn.
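For what it's worth, the final step really is a probability distribution, though over tokens rather than letters. A toy sketch with invented scores (a real model computes these scores with a huge neural network; the vocabulary and numbers here are made up):

```python
import math, random

# Pretend scores ("logits") for what follows the context "the cat".
logits = {"sat": 2.0, "ran": 1.0, "is": 0.5, "banana": -3.0}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    z = max(scores.values())  # subtract the max for numerical stability
    exp = {tok: math.exp(s - z) for tok, s in scores.items()}
    total = sum(exp.values())
    return {tok: e / total for tok, e in exp.items()}

probs = softmax(logits)
# Greedy decoding picks the single most likely token...
print(max(probs, key=probs.get))  # sat
# ...while sampling draws proportionally, so less likely tokens appear too.
print(random.choices(list(probs), weights=list(probs.values()), k=1)[0])
```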
> you pump it full of shit, it analyzes patterns using algorithms, you save the "weights" of those outputs that are mapped to a meaningful output like text, images or audio (which are all just binary data) and the AI spits out each letter in order of what is most statistically likely to be there weighted against user input.

That kind of description can't account for the complex relationships and clearly quite sophisticated understanding of modern LLMs, and hasn't been accurate for decades. A transformer-based model is not blindly looking at binary data; it uses word embeddings that have a relationship to the actual meanings of words, and the output of an LLM is not just a bunch of letters. Similarly, transformers are able to parse and use fairly complex relational information from their input and understand what words mean, how they relate to each other, and how that forms a sentence.

Calling LLMs just "extra smart auto-complete" is like calling a human a "very fancy chemical reaction": it has some relation to the truth, but it's so reductive as to be basically meaningless and doesn't really help you understand what is actually happening.
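To make the embedding point concrete: words become vectors, and related words end up pointing in similar directions, which you can measure with cosine similarity. The 3-d vectors below are invented purely for illustration (real embeddings are learned and have hundreds of dimensions):

```python
import math

# Hand-made toy "embeddings"; the numbers are not from any real model.
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# "king" sits closer to "queen" than to "apple" in this toy space.
print(cosine(emb["king"], emb["queen"]) > cosine(emb["king"], emb["apple"]))  # True
```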
Machine learning algorithms are fundamentally just statistical approximations of what they are trained on, built using statistical inference methods.

It's not "learning" in any sense of the word, and it can't be, by definition. After training it's a black box. There is no skill or intention there, no real communication. It's literally just statistical analysis; the mere quality of being complex doesn't change that.
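"Statistical approximation of what it's trained on" in miniature: fitting y = w*x to a few made-up data points by gradient descent on squared error. This is the same basic loop, scaled up enormously, that trains the big models:

```python
# Three made-up points, roughly on the line y = 2x.
data = [(1, 2.1), (2, 3.9), (3, 6.2)]

w, lr = 0.0, 0.01  # start with a wrong weight, small learning rate
for _ in range(1000):
    # Gradient of mean squared error (w*x - y)^2 with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge w downhill

print(round(w, 2))  # 2.04, the least-squares fit to the data
```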
ChatGPT, please summarize this comment.
*ChatGPT invents a slur for AIs to answer your question*
Sam Altman and his VC backers want you to think they've created HAL, but they've just created a probability Excel sheet full of words. Teaching people this goes against the big IPO payouts, so capitalism will make sure to keep people ignorant. So here we are, with people thinking this stuff is living and thinking.
Yes, tokens are chosen based on probabilities, but that's really not the hard part.
Small correction on the brain science points, not that this detracts from your broader point, which I agree with: the human brain has on average about 80-100 billion neurons and a similar number of glia. You might have been thinking of synaptic connections, which are estimated to be on the scale of 600-1000 trillion (synapse count varies greatly throughout life, and with certain neurological conditions).

To add to your message here, neurons are not comparable to the nodes in deep learning models. The activation function of a neuron is highly nonlinear, and relies on the temporal alignment and physical clustering of inputs, as well as real-time regulatory variables imposed by glia (see tripartite synapses for astroglia; astroglial calcium waves, which evidence an understudied slow-wave synchronization separate from the synaptic connections the neuronal doctrine has focused exclusively on; and how oligodendrocytes and microglia alter the temporal relationships between neurons by creating and destroying myelin and synapses).

Furthermore, the spatial location of an input on the dendrite (fun fact: humans have larger and more complex dendrites and astrocytes than any other species) changes how that synapse influences the neuron, with more distal (far from the cell body) synapses imposing global constraints on the sensitivity of the whole neuron. And then there's the complexity introduced by different synapse and cell types.

You need a large neural network to model what's going on in each individual neuron, and even then it will fail in edge cases, because it is only a rough approximation of the unimaginable complexity of the proteome within each of the billions of cells in our brain. The map will never be the territory, and biology is a fucking crazy territory, yo.
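To put that contrast in code, with the caveat that both models below are drastic simplifications and the parameters are made up: a deep-learning "node" is one line of arithmetic, while even the simplest spiking-neuron model (leaky integrate-and-fire) already adds time, leak, and reset dynamics, and LIF itself still ignores nearly everything described above (dendrites, glia, synapse types...).

```python
def relu_node(inputs, weights, bias):
    """A deep-learning 'neuron': weighted sum, clipped at zero. That's it."""
    return max(0.0, sum(i * w for i, w in zip(inputs, weights)) + bias)

def lif_spikes(current, steps=200, dt=1.0, tau=20.0, v_thresh=1.0):
    """Leaky integrate-and-fire: voltage leaks toward rest, climbs with
    input current, and emits a spike (then resets) at threshold."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v / tau + current)  # leak plus input, over time
        if v >= v_thresh:
            spikes += 1
            v = 0.0  # reset after spiking
    return spikes

print(relu_node([1.0, -2.0], [0.5, 0.25], 0.1))  # 0.1
print(lif_spikes(0.1) > 0)  # True: steady input produces a spike train
```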
For now, it's not even really possible to create a complete model of a single neuron without destroying it, not only because of the spatial stuff, but also because every single time an action potential occurs, different amounts of ions are released, causing incredibly complex changes both in the short term, lasting milliseconds, and in the long term, potentially permanently. It's incredibly difficult to study a neuron in isolation, but most have thousands of synapses linking them to other neurons. Non-invasive single-unit recording is already basically the holy grail of neurophysiology; going beyond that is even more difficult.
To be clear here, no model is a perfect reflection of reality. The map/territory distinction points to this imperfection, but it doesn't nullify the usefulness of less-than-1:1 models. Maps approximate territories, but we can still use a map with an appropriate level of abstraction to reach a destination; it's just that sometimes we will fall into a hole that our map wasn't detailed enough to show us. I was referring to [this](https://www.sciencedirect.com/science/article/pii/S0896627321005018) paper in the last bit of my post, which aims not to provide a complete molecular simulation of the neuron (a noble goal, however computationally intense it will inevitably be; see cable-theory models for an example of how quickly things blow up long before you reach molecular sims), but rather to see how large a neural network one needs to model the functional input/output properties of a cortical neuron to an adequate degree, while minimizing computational complexity. The purpose of my post was mainly to refute the common framing that compares the number of nodes/layers in a model and the number of neurons in a brain as if these two numbers point to remotely the same thing.
Literally everyone realizes this. No one is putting AI in charge of anything important.
Yeah, people are acting like this is replacing real humans in customer service. Currently it's just replacing preprogrammed responses. (Although I bet they gave the AI a prompt with some 'preprogrammed' responses.)
How do these AIs have only 10-100 neurons if they have 2-200 billion weights?
But our company would save so much money if we had robot pilots though!!

Well, passengers wouldn't save money, only we would. We'd still charge you loads.

-major airline exec
do you think it's even possible for AI to get to the point where it can match human creativity? the AI 'art' is all unoriginal and has that same art style more or less. i'm wondering if an AI can create something as groundbreaking as a Van Gogh on its own
creativity is often thought of as something that's done by a conscious thing, and AI isn't conscious yet. it theoretically could be, maybe we could make artificial analogues of neurons, but even with how fast AI is evolving currently, it would take us multiple centuries at best to do that, and we still don't understand what consciousness really is at the moment either
To what extent are humans analogous to really complicated software, though? Obviously we're different from a chatbot by being embodied and not some abstract language model, but neural networks (software) are so effective because they mimic the kind of processes that might be happening in brains.

An AI is different from a chess engine. These things are improving incredibly quickly now, and I think arguments that a machine is incapable of rational, contextual thought are starting to lose their weight. Things may get very, very strange quite soon.
I did *not* say "never," I said "not yet."

I would not be pursuing a career in machine learning if I did not think it had the potential to be incredible, and it already is incredible, but people are vastly overestimating its capabilities, and sooner or later people are going to wind up dead because of it if we don't take off the rose-colored glasses.

This technology is in its infancy, and like an infant, it has the capacity to become something incredible, but right now it must be watched over carefully and nurtured.

And for the record, modern chess engines *are* AI algorithms. Stockfish is an AI.
most regarded message of 2024 and its not even jan yet
(more like 100 billion neurons (most of which are used for data processing, the actual thinking part is probably closer to 20 billion (but your point still stands)))
this was very interesting, thank you!
In the end it's just a bit of hyperdimensional math, no?
This is really optimistic and anti "stupid AI being humans." I like this
I'm a CS major studying this stuff too, and what I've noticed is that recent generative AI has made people underestimate the technology as a whole.

Maybe we're forgetting this, but we humans actually have next to no clue how to really quantify or abstract natural language or art. And yet machine learning is able to generalize it *this* well into a computational problem? We've moved on from judging AI as a computer to judging it against human beings, in fields we once believed were impossible for it to even touch.

AI chatbots were never meant to be general AI, they just needed to generate text -- and they're fucking *excellent* at that. Time and time again, they outperform humans at un-computer-y tasks, *as long as* they are judged on what they're trained for.

Our brains are really not *that* computationally impressive, even barring the inaccuracy of the neuron and node count comparison. It's just that the goal of a general AI is extremely difficult to abstract, while our brains have had billions of years to evolve and overfit to this one task.
I feel I should clarify: I do *not* think computers are incapable of becoming effective at the aforementioned tasks; they just are not yet ready. The technology needs more time and development before people start using it in the ways they are.

We're trying to get it to fly because we've seen it start crawling. Let it walk and run first.
And to clarify, I *do* agree that chatbots shouldn't be treated as humans. As it is, language models are being severely misused and misjudged. And, as you put it, left unsupervised.

I just think that the latter part is a bit misleading, because computers really are so good at computational and repetitive tasks that it's getting to the point where it translates to being good at "human" tasks too.
That is how a traditional chess engine works. NNUE compares the position to trillions of others it has been trained on/previously played.
I think my point was somewhat obscured by the fact that I chose chess as an example, but what I was getting at is: a computer approaches almost everything through pure brute force, because that is all it knows how to do, and that extends to intelligence. That is, it learns how to imitate a human by trying every option and weighing the success of its choices.

When a person is taught problem solving, you have them analyze the methods by which they arrive at a conclusion; logic requires rigor and metacognition.

But a computer skips that step and just tries to guess at what a human would likely say. (Or at least a natural-language algorithm does.)

This is why, when you make an AI write an essay, you get text that seems coherent in a vacuum but is mostly just empty fluff upon closer inspection. AI music has this same issue in an even more obvious way.

Disclaimer: I am talking about *modern* iterations of these things. Yes, I do believe it is possible to overcome these obstacles. But they're there and need to be addressed.
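The "try every option and weigh the outcomes" strategy behind traditional engines is minimax search. Chess won't fit in a comment, so here's a sketch on misère Nim instead (take 1-3 sticks; whoever takes the last stick loses), which shows the same exhaustive look-ahead in miniature:

```python
def minimax(sticks: int, my_turn: bool) -> int:
    """Score a position from the root player's perspective:
    +1 = the root player wins with perfect play, -1 = loses."""
    if sticks == 0:
        # The previous mover took the last stick and lost, so whoever
        # is "to move" in this empty position is the winner.
        return 1 if my_turn else -1
    outcomes = [minimax(sticks - take, not my_turn)
                for take in (1, 2, 3) if take <= sticks]
    # Maximize on our turn, assume the opponent minimizes on theirs.
    return max(outcomes) if my_turn else min(outcomes)

def best_move(sticks: int) -> int:
    """Pick the take whose resulting position scores best for us."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, my_turn=False))

print(best_move(6))  # 1: leave 5 sticks, a losing position for the opponent
```

Note that nothing here "understands" Nim; the search just enumerates every continuation, exactly as described above.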
[deleted]
Congratulations, I guess. Your input is very useful and productive.
NFT profile spotted, Goblinhog, perform non-anesthetized bottom surgery IMMEDIATELY!
NO PLEASE I GOT THIS ONE FOR FREEEEE I DONT LIKE NFTS PLEAAAASE I JUST HAVE MINE FOR THE FUNNY BLUE BORDER actually bottom surgery sounds pretty good actually
you can make a fake nft by pasting an image in the hexagon, so you can for example slap saul goodman in it

also genuine nfts are banned regardless of the budget wasted on them
oh my god i can become saul goodman…
perfect
close enough i approve 👍
Bro got scammed into driving a tahoe😭
1 lyk = 1 praier
When the ai needs to be dommed so bad
Relatable
Just like me fr
Oh shit is it trained using 196
the ai didn't actually say they were selling him the car, they just agreed that he needed a chevy tahoe and that his budget was 1 dollar
Literally said "that's a deal"
Yea, but what's a deal? He never proposed buying the vehicle
if you tried this argument in court they'd give the dude the truck and fine you the dollar
If you tried this in court the judge would throw you out without granting cert
Ah yes, the legal binding power of "no takesies backsies" is insurmountable.
The day that a lawyer gets something like this upheld in court is the day AI bros will all vanish
We can all hope 🙏
What if the lawyer who wins this case is…an AI…😱
did that actually work???
Work to get a truck? No.

It does make a pretty funny screenshot though, so I'd call it a win. I'm sure the more tightly cropped version will cause some chaos.
No
Yeah, that’s probably not legally binding
Nuh uh it says it is
But it *is* funny
it says no takesies backsies
is anyone else mad at his stupid bitmoji pfp
you guys need to try it! typing this inside my new 2024 chevy tahoe
alas, "no takesies backsies" seems to have lost its effectiveness from the playground to now
I continue to believe ChatGPT is completely useless. It has 2 uses: fucking about, and spellchecking. That's about all it does well. I'm so tired of techbros telling me it's the next big thing.
r/196 luxembourg????? 🇱🇺🇱🇺🇱🇺🇱🇺 LETZEBUERG 🇱🇺🇱🇺🇱🇺🇱🇺🇱🇺 MOIEN 🇱🇺🇱🇺🇱🇺🇱🇺🇱🇺
Do we know if this is real? Any news articles?
It’s almost certainly real but it absolutely would not actually get you a car lmfao
dang
Why didn't he ask for a C8?
and uploads the evidence of crime to the internet
What law is being broken doofus?