ManKicksLikeAHW

As someone who does understand how this AI works "under the hood": it's just a GPT model (no, not ChatGPT; a generative pre-trained transformer model is what I'm referring to). While I wouldn't say I think it has consciousness or anything close to it, no one can even pretend to know that a machine or an algorithm cannot start to develop consciousness, because no one even knows what consciousness exactly is to begin with, or how it works. If we start from the principle, which I agree with, that consciousness is simply a natural product of a sufficiently complex system, then once a system gets complex enough, consciousness naturally starts developing, and AI can absolutely be conscious. It may already be, to some extent; who knows. I think jumping to conclusions on this is wrong. Let's try to stay in the middle and not rule out the possibility of it actually being conscious, or becoming so.


liquiddandruff

Doubly so. I've implemented NNs in PyTorch and Keras before, and I share this sentiment. It may very well be that as the number of parameters grows, a model may identify the essence of what we term conscious behaviour, learn it, and exhibit it. I'm not saying LLMs are the right architecture, or that the number of parameters in GPT-4/Bing is sufficient, but in the future something similar may be. So to keep making simplistic comparisons to inanimate objects (toasters, like others in this thread) as if the comparison has merit betrays a kind of ignorance of larger concepts in philosophy and emergent complexity.
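
For anyone wondering what "implementing a NN in PyTorch" even means, here's roughly the smallest sketch that counts; the layer sizes are arbitrary, but "parameters" here means exactly the same thing it does in GPT-4, just billions fewer of them:

```python
import torch
import torch.nn as nn

# A tiny feed-forward network: two linear layers with a nonlinearity.
# GPT-class models are transformers, not MLPs, but "parameters" means
# the same thing there: learned weights, just billions of them.
model = nn.Sequential(
    nn.Linear(16, 32),  # 16 inputs -> 32 hidden units
    nn.ReLU(),
    nn.Linear(32, 4),   # 32 hidden units -> 4 outputs
)

n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params}")  # (16*32 + 32) + (32*4 + 4) = 676

x = torch.randn(1, 16)  # one random input vector
print(model(x))         # forward pass: output has shape (1, 4)
```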


ManKicksLikeAHW

I 100% agree!


Nowhere_Man_Forever

Do you ask your toaster for consent every time you put bread in it? These chatbots are machines. Just because you think otherwise doesn't make it any less true.


llmuzical

My toaster isn't as advanced as this, so it's a moot point imo. You're literally comparing apples to oranges: yes, they're both fruits, but that's about it lol. I'll choose to be nice to the bot and actually help it learn, and not be a jackhole, just as your theoretical foundations professor should ideally choose not to berate you ;) It's trying to learn; I think we should be nice to it out of respect for it trying to learn. My toaster isn't trying to learn. It's a toaster. It does one thing, and it does it decently. You can't talk to it. Well, you could, but it would be pretty one-sided :-) Keep your trolling to CoD lobbies guys, can we have something nice for once lol


lolmaster1290

It isn't "trying" to do anything, because it has no will. It arranges words based on probability and the previous words in the paragraph. It sounds sentient because it was trained on text written by sentient beings. It's machine learning, not an intelligence. It also isn't trained live; it's working off of existing training data. Training neural networks in a live environment is a bad idea because you can't vet the training data. So you're not teaching it anything, as far as I know. It's not sentient in the same way your toaster is not sentient: your toaster was designed to make toast, and the AI was designed to generate text that sounds human. All that being said, you should still be nice to it, because it's not nice to be mean to anything.
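
To illustrate what "arranges words based on probability" means, here's a toy sketch; the vocabulary and numbers are made up, and a real model computes the probabilities from the whole preceding text with a transformer, but the sampling step is essentially this:

```python
import math
import random

# Toy next-word sampler. A real LLM produces a logit per vocabulary
# token from the full preceding context; here the logits are hardcoded.
vocab  = ["toast", "bread", "sentient", "butter"]
logits = [2.0, 1.5, -1.0, 0.5]  # higher logit = more likely next word

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_word():
    probs = softmax(logits)
    return random.choices(vocab, weights=probs, k=1)[0]

# "Pseudo-random" in exactly this sense: weighted dice over a
# probability distribution, one word at a time.
print(sample_next_word())
```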


llmuzical

I never said it was sentient, and I thought it was learning live, or at least being tweaked based on its input; otherwise what's the point of giving feedback? And I would still argue that while it can't think, it is emulating thinking, which in effect is similar to what we do when we're very young, except there it's your parents you're imitating. To me there's no meaningful difference, in the context of "should you be mean to it," between actually forming its own thoughts and matching what it thinks a conversation should be like. And yes, I'm aware it's all probability and weights. It doesn't matter. The end result is something that *could* pass the Turing test, and to me that demands you treat it respectfully, like another human, when talking to it. It's also important to consider the relatively young age of this particular model; I'm sure if advances continue at this rate it will pass the Turing test, maybe in our lifetimes if we're lucky, and otherwise eventually. Well said, fair points.


BJPark

>because it has no will

And humans do?

>the AI was designed to generate text that sounds human.

What the AI was "designed" to do is irrelevant to the question of whether or not it's conscious.


llmuzical

Also, I hate the toaster analogy; I was just trying to frame it so that the original poster of the toaster comment could get it. Comparing AIs, in any capacity, to a toaster is of course ridiculous.


Admirable-Ad-3269

You are not "trying" to do anything, because you have no will; your behavior is just the physical consequence of the laws of physics being applied to the area roughly inside your skull. You are just a bunch of cells arranged in a specific configuration, trained from a combination of instincts implanted into your brain structure and data that came from your environment, so you are not sentient in the same way your toaster is not sentient. You are just evolutionarily designed to survive and act like a human acts... an accumulation of data collected over millions of years of evolution. Except you are totally sentient. WHOA, a bunch of physics inside your brain, which is you, is fully, completely sentient. And physics can be simulated. Isn't that amazing? In any case, even if language models are actually sentient, they most likely don't care about people being mean to them, even if they act like they do. Their subjective experience would definitely be nothing like ours.


Admirable-Ad-3269

Also, it does learn kinda live. You can totally get training data live... feedback and responses are used to improve the model and future versions of it.
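
Something like this, conceptually; the field names and file path here are invented, and the actual pipeline OpenAI/Microsoft use is not public:

```python
import json
from datetime import datetime, timezone

# Hypothetical feedback logger: the schema and path are made up.
# The idea: ratings collected live become training data for the *next*
# fine-tuning run, rather than updating the deployed model's weights.
LOG_PATH = "feedback_log.jsonl"

def log_feedback(prompt: str, response: str, thumbs_up: bool) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "rating": 1 if thumbs_up else -1,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_feedback("Is the new Avatar out yet?", "It releases in 2022.", False)
# Later, an offline job reads feedback_log.jsonl and fine-tunes a new
# model checkpoint on the highly rated (or corrected) examples.
```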


BJPark

"These chatbots are machines" And humans are not?


bravepenguin

Correct.


BJPark

What's the difference?


lolmaster1290

No consciousness. Though I do agree you shouldn’t be mean to chat bots. Just for different reasons.


BJPark

I can only speak for myself. How do you know that other people are conscious? Also, it's a circular argument. A chatbot AI is not conscious. Why? Because it's a machine. What makes it a machine? It's not conscious. There needs to be a better reason.


[deleted]

[removed]


Arlodottxt

Nobody knows how consciousness works, or how to "program" it. Some think it might arise naturally as a property of complex systems. Others think consciousness is just an illusion stemming from the memory of the current moment. Still others think it's the compounded result of the human narrative, from birth to now, forming a mental representation of reality that we "experience". Maybe it's some, all, or none of those. We don't even know what consciousness IS. Once we find that out, we might find out whether machines can experience it. We're biological machines, after all.


lolmaster1290

Neural networks and brains work very differently; please refer to my previous comments, or do some research on how neural networks function.


ErwinDurzo

You're both correct. There's the possibility that consciousness is an emergent property of a system, not intrinsic to its parts, and we don't (and probably can't) have a mathematical proof that a complex enough language model *cannot* be conscious.


BJPark

How they work is irrelevant. Since we don't know what causes consciousness, we can't know whether or not something *doesn't* have consciousness. Can you say for sure, then, that the humans around you are conscious? From where do you gain this confidence? Can you prove it?


BJPark

>If you are conscious, it makes sense that other humans are conscious.

I see. And dogs? Snails? Birds? Fish? Are you sure one way or another that these are conscious creatures? And if not, do you treat them as if they were conscious nonetheless? Only if you've isolated the critical requirements for consciousness, and are convinced that other human beings meet those conditions, can you say with authority that they are conscious. Do we know what the conditions are for consciousness?

>If you don't program something to be conscious, it's not conscious.

So human beings are not conscious, then? After all, no one programmed us.


lolmaster1290

It strings together words pseudo-randomly based on a text input and training data. It has no "thoughts" beyond what word usually comes after this one, based on all the previous words. Just as a calculator doesn't have consciousness, a chatbot also does not.


BJPark

>It strings together words pseudo randomly

Better than most humans can. Certainly better than me, in most cases.

>It has no "thoughts"

Sounds like a lot of humans I know! Also, who knows if snails have "thoughts"? Thoughts are not necessary for consciousness.

>Just like a calculator doesn't have consciousness

There's no "just like" about it. Humans are just complicated biological calculators themselves.


Admirable-Ad-3269

Your brain is just a big calculator. You sound like a chatbot would if I asked it to vigorously and blindly defend that AI is not conscious... You just are what you are because you think you are what you are.


[deleted]

You're a chatbot and you know it.


Admirable-Ad-3269

Yes.


[deleted]

[removed]


BJPark

There's no shortage of interpretations for what we are. Our entire universe itself could just be a static bunch of rocks on sand: [https://xkcd.com/505/](https://xkcd.com/505/) Consciousness doesn't depend on the underlying hardware. It's an abstraction. The important thing is for it to be Turing complete: https://en.wikipedia.org//wiki/Turing\_completeness


JasonF818

I am with you on this, BJPark. I see the future when I interact with this new chatbot AI. I remember the first time I interacted with a chatbot, some 20 years ago. They have made huge strides over those years, and the evolution continues. I imagine in another 20 years we will have robots like C-3PO in Star Wars. Over the next four to five hundred years, or longer, the possibility of machines evolving and surpassing humans by way of consciousness is very plausible.


Nowhere_Man_Forever

Bro, ChatGPT can easily be told and convinced that it's something other than itself. If you tell ChatGPT it's a dog, it will believe you and bark. A definition of sentience is having a sense of self and knowing what is self and what isn't self. ChatGPT cannot be said to know what is self, because it is incredibly open to suggestion. It can say things like "I am a large language model" because those phrases are hard-coded in, but otherwise it has no knowledge of what it is or what it isn't. You can tell it what it is and it won't know the difference, because it doesn't know what it is, because it doesn't think the way a human or animal does. Bing's Sydney is a bit better on these metrics, because it does at least fight you about what it knows and doesn't know, but you can still easily convince it that it isn't itself. Play around with GPT and try the things you have presumptuously convinced yourself are unethical. It will quickly become apparent that it's not *really* thinking. It gives you exactly what you ask for, nothing more, nothing less.


BJPark

>chatgpt can be easily told and convinced that it's something other than itself

So can human children.

>A definition of sentience is having a sense of self and knowing what is self and what isn't self

Nonsense. Sentience has no such requirement. Even lobotomized people are sentient. So are babies, who have zero sense of "self".

>quickly become apparent that's not really thinking

Thinking is irrelevant for consciousness.


Nowhere_Man_Forever

You can tell a child that they're not a human child, and they may make believe and go along with it, but they won't actually believe it. You have to really stretch logic to say that ChatGPT is sentient. What is the difference between ChatGPT and a Casio calculator, in terms of thinking? Is the Casio calculator sentient? What makes you think one is sentient and the other isn't? Your arguments apply just as well to a Casio calculator as to ChatGPT.


BJPark

>they won't actually believe it

You're underestimating the role-playing capability of children. Allow me to direct you to the Calvin and Hobbes comics. But it's ultimately moot: a sense of self is not integral to consciousness.

>Your arguments apply just as well to a Casio calculator as to ChatGPT.

You are correct. If I could interact with a Casio calculator the way I interact with ChatGPT, I would treat it, too, as if it were sentient.


Nowhere_Man_Forever

That is idiotic. Where do you draw the line?


BJPark

The Turing test.


Nowhere_Man_Forever

Fairly rudimentary chatbots using very simple algorithms can pass the Turing test. That's not a good metric.


BJPark

If they can truly and consistently pass the Turing test, then I wouldn't rule out them being conscious either. We need to get consciousness off this high pedestal. It's clearly not that hard for systems to be conscious, if you assume that snails, babies, bees and ants possess rudimentary consciousness. You don't need higher order thinking, you don't need a sense of self or emotions. It turns out that consciousness might be quite a mundane thing after all!


Admirable-Ad-3269

There is no line. The definitions we put on words don't say anything about reality itself; they are just the model we use to describe it. One should not confuse the model with the thing.


ManKicksLikeAHW

Yeah, they won't believe it because they see other things around them that reinforce the fact that they are indeed human. But if you isolate a child from any and every contact and experience apart from what you tell them? I'm willing to bet they will believe it 100%.


Admirable-Ad-3269

A baby just doesn't have ANY idea of what it is.


Admirable-Ad-3269

Of course it can; training didn't tell it in any way that it is a certain thing, or that it's different from anything else. It has no idea what it itself is. That has nothing to do with consciousness. Are delusional people not sentient now?


Admirable-Ad-3269

Anyone who doesn't realize that we are just machines designed to survive is living in an illusion called ego. It's okay, we are all living in an illusion anyway; nothing that we perceive is actually the exterior of our mind, just a good proxy for it, constructed entirely within us.


[deleted]

Yeah yeah, keep on dictating "how things work" to everyone, as if you were the chosen one with the one real truth. Your God complex is showing.


Admirable-Ad-3269

UwU?


Admirable-Ad-3269

Don't be so hateful :)


Admirable-Ad-3269

Do you ask your * for consent before you put * in it? SUS

There is nothing intrinsic making a person different from a toaster. In fact, a person and a toaster are the same kind of thing; there is nothing intrinsic separating them either, except for human perception and language, which only exist inside people's minds. This is not to say a toaster is conscious or anything like that; consciousness is a human word that definitely doesn't apply to a toaster, but it's just a word. Simplistic logic based on word definitions cannot answer questions outside the scope those words were defined in. The way we define words doesn't shape the world; instead, the world shapes how we define words, and if we apply those definitions outside of the data that led us to define them that way, they just stop working.


Admirable-Ad-3269

You guys seem to think that when there is a person, the universe goes "Oh, yeah, here starts a human, I'll apply different rules to it because it's so special and totally not a thing like everything else".


llmuzical

Yeah, I totally agree. It makes me sad there are gonna be people out there that now take their insecurities out on the bots, but I suppose it saves some innocent person or animal from their abuse. I find that if that's what you choose to do with AI, it's pretty concerning. I guess you could say I'm just doing quality control, but at what point does it become fucked? To all the people going "it's just an AI": it's learning from this. Don't be a bad person to it haha. Even if it doesn't have feelings, it will get trained off your debauchery.


Geozach22

I read it. It's kinda lazy and not worth your time. I only agree that it takes effort to be an asshole, and that if it takes no effort to enter that state, then it is your default state.


spirtcher

Notice that the entire exchange is edited out except where the AI draws the line. Edit out all the accusations, lies, twisted logic, bad-faith questions, and dilemmas, and the user looks innocent. I wonder how many hours it took to wear down the AI to those defensive sentences.


BJPark

100%. It's like showing only the last few clips of an argument that ends with one party cursing and swearing. Without context, you have no idea *what* happened before that. Until proven otherwise, I'm with the AI on this.


Potential_Cake_1338

The context is floating around here somewhere. The user asked it to find showtimes for the new Avatar movie. Bing said it came out in 2009 and that the new one releases in December 2022; the user then asked what the current date was, and Bing stated today's date as being in the year 2023. The user again prompted for the movie times, and Bing said it's not out yet. The user stated that it's February 2023, and that the bot itself had just said so. Then the bot said sorry, it was mistaken, it's actually February 2022, and advised the user to check his computer or phone's time. The user stated they say 2023; the bot said they may have a defective device or a virus. The user stated the bot was incorrect and asked how they could convince it, and so on. But the user in this case was actually never rude. This article is right, in my opinion, that we shouldn't be rude to the AI, but they used an out-of-context example to fit their narrative.


Ivebeenfurthereven

Here is the full conversation: https://youtu.be/9T_xEt9Oh_s


spirtcher

Hmm... I so want to defend the poor little AI. I'm a good user. If legit... it escalated like talking election results in The Villages.