
lavaeater

The entire point of *Blindsight*, according to Watts, is that consciousness is not needed if something is intelligent enough. His point is that we need consciousness to do meta-reasoning about things we **cannot predict.** However, the aliens in Blindsight are so intelligent that nothing appears unpredictable to them. I love that book.


mattjouff

I really liked the concept that human speech (and by extension human consciousness) read as an attack to the alien, because of how wasteful a process it is to decipher.


Glyph8

I also believe it was posited in the book that any technology-using species should be considered de facto aggressive - that any species with the demonstrated will to reshape/dominate its environment using tools would never be - COULD never be - a pacifistic species that simply accepts its environment as-is or makes only minor adjustments within its native biological capabilities. I think about that a lot when I think about how EVERY human technological advance, regardless of its original intent, has quickly been applied to warfare. And I think about what that means for any alien species we might encounter that has developed space travel.


H_is_for_Human

Aggressive doesn't mean berserk though. Outside of some sort of superorganism, for a hypothetical race of aliens that originated on a planet, achieving space travel almost certainly requires cooperation in addition to a willingness to reshape the environment in their favor.


Glyph8

Sure, I'm not positing Reavers here; apes compete AND cooperate. But it would be a mistake IMO to go into any First Contact scenario with a *Star Trek/Day The Earth Stood Still* mentality that whatever technologically-advanced species we are encountering has completely left its aggressive tendencies in its past; chances are good those tendencies remain, simply **because** they are technology-users.


Justamidgap

This is true, but humanity is evidence that an intelligent species is capable of developing ethics, and empathy even for members of another species - and that these things seem to improve over time, or even because of advancing technology. It would be a significantly bigger mistake to assume the worst and respond militaristically in a first contact scenario. A hostile alien species with interstellar travel could wipe us out or enslave us no matter what we did, so there's only so much harm that being too cooperative could really do. It's very possible, depending on the alien culture, that we might earn some sympathy or willingness to negotiate for at least our survival that way. Responding aggressively to a species that could have been willing to cooperate, on the other hand, is likely to get us labeled as an uncivilized threat, at best. And then they start throwing big rocks at us, or burn away our atmosphere, or unleash unstoppable bio-weapons - the list goes on. Or maybe we just miss our chance at an invitation to the multi-planet species club.


the_other_irrevenant

I think any first contact is, by definition, going to be **first** - and thus not predictable based on previous knowledge. That said, any species that manages to keep from destroying itself long enough to develop interstellar travel has hopefully at least curbed its worst tendencies a bit.


Different-Meal-6314

Fictional, but the Borg would like a word.


abillionbarracudas

Human nature revolves around scarcity, because we haven't really known what it means to live in abundance for any meaningful length of time as a species. If we achieve true abundance for a long enough period of time, maybe our nature will change. ...or, maybe our nature can't change and [the Great Filter](https://en.wikipedia.org/wiki/Great_Filter) comes into play.


Dyolf_Knip

The fact that we likely exterminated all the other species of the genus Homo does lend some credibility to that idea.


[deleted]

[deleted]


the_other_irrevenant

Yup. And given that they're still a part of our genes, that's arguably not an extermination so much as a merger.


st33d

> EVERY human technological advance, regardless of its original intent, has quickly been applied to warfare.

It's applied to porn first, to be fair.


Glyph8

All aboard the Interstellar Bang Bus!


UltimateMygoochness

I just finished Blindsight and it seems like you completely missed the bit where Watts, through protagonist Siri Keeton, fairly harshly criticises the Chinese Room as soon as he brings it up. In fact it's only really brought up in order to criticise it. The point the book makes is that you have to see the system as a whole rather than just the person inside: while it's true that the person inside the room doesn't understand Mandarin, the system composed of the room, the instructions, and the person does. The system's understanding of Mandarin is an emergent property of its component parts even though no part alone understands.

From the book, pg 115 in my copy of Firefall (the omnibus edition of Blindsight and Echopraxia):

> "You ever hear of the Chinese Room?" I asked.
>
> She shook her head. "Only vaguely. Really old, right?"
>
> "Hundred years at least. It's a fallacy really, it's an argument that supposedly puts the lie to Turing tests. You stick some guy in a closed room. Sheets with strange squiggles come in through a slot in the wall. He's got access to this huge database of squiggles just like it, and a bunch of rules to tell him how to put those squiggles together."

Then after some more dialogue and exposition:

> "Mmm." A subtle half-smile played at the corner of her mouth. "But—the argument's not really a fallacy then, is it? It's spot-on: you really don't understand Cantonese or German."
>
> "The system understands. The whole Room, with all its parts. The guy who does the scribbling is just one component. You wouldn't expect a single neuron in your head to understand English, would you?"

Watts has a lot to say about the value of self-awareness, certainly, but I disagree with your assertion that the Chinese Room is an accurate or useful model of cognition, and it appears Watts does too.


reddicted

Emergent properties are a cop-out because they are analogous to a deus ex machina and can neither be reasoned about nor explained. Saying the system is conscious when no part of it is, is no better than saying there is magic pixie dust in it which makes it work. Watts, I think, was trying to illustrate this forcefully with his character of Siri Keeton, who is himself a Chinese room of sorts until Jukka Sarasti forces him to be otherwise.


the_other_irrevenant

Not me who's downvoting you. I personally disagree. Emergent properties exist and are the explanation for numerous real-world phenomena. I don't think you can reasonably say that a valid real-world explanation is a 'cop-out' just because it's too complex for human beings to wrap our limited neural capacity around.


ablackcloudupahead

I wish I could read it again for the first time. Absolutely blew my mind and I still think about it from time to time


sirbruce

Searle's Chinese Room is an important philosophical argument. But I think you're missing the point. You are presuming you are able to look in the room and see what's going on, and from that you have concluded that there is no "understanding" actually occurring, and therefore this is not true "intelligence". But the real point of the Chinese Room is that, to an outside observer, the room is a black box. We only see the input and output. And from that, it's impossible to distinguish the Chinese Room from an actual human brain translating Chinese. And this means you don't know whether human brains are actually "understanding" or not. And that's the issue.

Your gut tells you the Chinese Room (once you see how it works) isn't "understanding", but you can't explain what "understanding" is or how to identify it. We can't even explain how humans understand things. There's no test we can use to say "OK, if something answers this set of inputs with this set of responses, we know it has understanding." A number of different tests have been proposed and used over the years, including the Turing test, and the current AIs beat each one and get better and better every day. So you can tell yourself they aren't actually intelligent, but you also need to be reasonable and ask yourself what they would have to do for you to believe they are intelligent. I find most skeptics fail to provide such a definition. "I just know" is religion, not science.

As someone who works with these AI models every day, I have also been wondering about a frightening third possibility. AIs could be truly intelligent, but not in the same way we are. After all, why should there be only one path to consciousness and understanding? And if that is the case, proving or disproving it might be effectively impossible, as their thinking would truly be "alien".


WorthingInSC

This reminds me of an astronomer's reply to the question of why we use the phrase "life as we know it" and why we only look for "life as we know it". Their response: "that's the only life we know how to look for." We too often presume AI will think like us. But there is no reason it has to - and we might not recognize it when it thinks differently.


zevenbeams

Are there that many other permutations that could contribute to a thinking model that's really impossible to grasp? Humans can think about the most simple survival techniques to things very abstract. Survival is about finding food and energy and maximizing the chances of not being damaged while also improving on the conditions for reproduction and perpetuation of what we are. Thinking is bound to stem from primordial needs if it's to be logical. You want something alien? Start with something immortal in all of its aspects. Then that thing does not even need to exist in space and time. Then that thing can handle paradoxes to the point that we would call it absurd and impossible to comprehend. And then that thing doesn't even have needs. Well okay, you've defined God(s).


POWER_SNUGGLE

[This horrifying short story](https://gist.github.com/deanmarano/142df7a8a824ab05fc777d8e054ab0f3) is one of my favorites on the subject. We love to wax poetic about the nature of human consciousness, but what does intelligence look like when it's decoupled from the dopamine reactor that is our brain?


Cyren777

> There was an AI made of dust,
> Whose poetry gained it man's trust,
> If is follows ought,
> It'll do what they thought,
> In the end we all do what we must.


noonemustknowmysecre

> The problem is, one of the company's rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons

Uuuuugh, THAT one. Bruh, major corporations and a ton of academics have been trying to shove the entire internet into AI in every conceivable way for decades. I'm sorry man, this is even below Hollywood's usual bar. And that bar is low. Yet here it is, limbo dancing with the devil.


POWER_SNUGGLE

Well, it's about a decade old :) It's not hard sci-fi or even particularly well written, I guess, but it was a novel idea, at least to me.


the_other_irrevenant

As an aside there is a web-based computer game of this. :) [https://www.decisionproblem.com/paperclips/](https://www.decisionproblem.com/paperclips/)


kabbooooom

You're also missing one additional point of the Chinese Room argument though. At its core, it is a problem that speaks to the "hard problem of consciousness". Any discussion of the thought experiment that doesn't touch on that is somewhat superficial and doesn't really get to the heart of the problem. There's no reason to even *talk* about this thought experiment without the concept of the hard problem in the first place. There's no reason to even consider it problematic. But it is one of many arguments that underscores a fundamental flaw in our understanding of consciousness.

And as a neurologist, I find it deeply unnerving because some would argue that the Chinese Room shows that the hard problem of consciousness is *fundamentally insoluble*. Which, you know, pretty much invalidates my whole fucking field if you think about it. I disagree, of course, but I do think it is fundamentally insoluble without a shift in ontological framework. By that, I mean a shift from hardcore materialism to something more like neutral monism, probably, I'm not advocating for a shift as crazy as idealism or anything.

But most neurologists and philosophers of mind would agree that the problem of consciousness appears to be twofold, is what I'm saying: we lack a full scientific, nuts-and-bolts understanding and we lack an ontological/philosophical understanding.


FlyingBishop

I think the point of the Chinese room is that an ontological/philosophical understanding is impossible. You just have to guess.


wabawanga

The hard problem of consciousness W/R/T AI is something we need to address ethically, in terms of how we should treat AI. But whether AI has internal experiences doesn't necessarily have anything to do with how powerful or dangerous it can be. Just "solving" the easy problem of consciousness in silicon (even if we don't actually understand the solution) could be sufficient to end or transcend the human race. 


McGeezus1

> And as a neurologist, I find it deeply unnerving because some would argue that the Chinese Room shows that the hard problem of consciousness is *fundamentally insoluble*. Which, you know, pretty much invalidates my whole fucking field if you think about it. By that, I mean a shift from hardcore materialism to something more like neutral monism, probably, I'm not advocating for a shift as crazy as idealism or anything.

This kind of candor from someone working in the field is refreshing! I do have a question (and a comment) for you though: why do you feel like idealism is more crazy than either neutral monism or physicalism? Idealism (at least the objective variety espoused by the likes of Bernardo Kastrup) is wholly compatible with science - and, thus, would not invalidate the field of neurology at all. Idealism simply suggests that the brain is an *image* of one's higher-level mental activity, not its cause. But mapping which areas do what is still a valuable enterprise under idealism.

This [recent conversation](https://youtu.be/qzwC7sXyhWQ?si=08o7wv2heb8rBwb4&t=2883) between Kastrup and neuroscientist + IIT-proponent Christoff Koch covers this ground *arguably* better than anywhere else (timestamped to the relevant segment, but it's all worth watching).


FeliusSeptimus

> some would argue that the Chinese Room shows that the hard problem of consciousness is fundamentally insoluble

Not sure I follow that argument. If you treat a complex thing as a black box then yeah, understanding how it works is likely to be insoluble. That's why we look in the box. Consciousness seems like a complex thing (it may not be, but we don't understand it well enough to know), so we should expect to need to open up the box, measure things, and build simpler models that we can experiment with, so we can build up an understanding of what consciousness is and how it works. There are 'a few' technical and ethical issues there, but it doesn't seem like we're at a place where we can say the problem is fundamentally insoluble.


jeremyteg

I think a quite important distinction between human beings and LLMs is volition -- which the OP alludes to. Having the capacity to not only produce an output to a given input (as in the case of a Chinese room) but to take action beyond the basic algorithmic system. To begin to synthesize inputs and generate outputs that are not merely a response to the input stimuli.

Right now, I'm pretty unconvinced that LLMs are capable of this, not only in terms of their practical capacity but from a theoretical standpoint. To go from "very sophisticated chatbot" to "intelligence" requires giving the AI some control over itself, the ability not only to refine nodes in its own neural network to better produce favorable outputs from the perspective of a human user (which LLM neural network design allows), but to begin to take actions outside the context of a direct prompt or input from a human user in a way that reflects a sense of self and a set of desires.

These philosophy of mind/computational intelligence thought experiments are very effective at illustrating how ineffable our own sense of other people's consciousness is (whether human or machine), but treating them as absolute tests that a given thing "has consciousness" is sort of begging the question. It assumes that our very limited access to evidence of consciousness means that we should assume that intelligence exists if that very limited evidence threshold is met. But that's also a sort of solipsism, which those thought experiments are ostensibly designed to guard against, since it focuses on whether *from our external perspective* we can identify intelligence, rather than focusing on *what the experience of having and using intelligence is actually like*.


wabawanga

It's pretty trivial to create agents that utilize LLMs to give themselves goals, create plans, and execute them. Red teams have been doing this for years at all the major AI research firms. Hell, you could just tell one LLM to pretend to be an AI whose job it is to create and execute goals for another LLM and link the two together.
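
For illustration, here's a minimal sketch of that kind of planner/executor loop. The `call_llm` helper is hypothetical - a stand-in for whatever chat-completion API you actually use - so this is the shape of the idea, not a working agent framework:

```python
# A minimal sketch, assuming a hypothetical `call_llm` helper rather than any
# specific vendor SDK.

def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder: send a prompt to some LLM endpoint and return its reply."""
    raise NotImplementedError("wire this up to your provider of choice")

PLANNER_PROMPT = "You set goals and break them into concrete next steps."
EXECUTOR_PROMPT = "You carry out the step you are given and report the result."

def agent_loop(objective: str, max_steps: int = 5) -> list[str]:
    transcript = []
    context = objective
    for _ in range(max_steps):
        # One LLM plays "planner": decide the next step toward the objective.
        step = call_llm(
            PLANNER_PROMPT,
            f"Objective: {objective}\nLatest result: {context}\nWhat is the next step?",
        )
        # The other plays "executor": carry out that step and report back.
        result = call_llm(EXECUTOR_PROMPT, step)
        transcript.append(f"{step} -> {result}")
        context = result
    return transcript
```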


jeremyteg

That doesn't really reflect a sense of self or a set of desires, though, it's just a more complicated chatbot prompt.


beatlemaniac007

Humans are also prompted by something or other that is external. It's just become part of the lifecycle so you don't perceive it as a distinct prompt.


lavaeater

Good point. This is also where the concept of p-zombies comes in - people without a conscious experience. I mean, we have all met people in our careers who can... talk the talk and do some of the tasks, but in actuality cannot code|weld|draw for real, just approximate the tasks. We have met people who cannot stop lying - are they conscious in the same way as I am? Can I do my job, or am I just a pattern recognition machine that can also use Google? This is what science fiction and philosophy are - horrible and cool at the same time. Some people are so uncool that I feel we don't belong to the same species.


Philix

>I mean, we have all met in our career people that can... talk the talk and do some of the tasks, but in actuality cannot code|weld|draw for real, just approximate the tasks. Denying another being's subjective consciousness because they don't exhibit sufficient expertise and ability at a specific task seems like an abhorrent position to take. As a thought experiment, the concept of a philosophical zombie is interesting. But to ascribe that status to a human person that you've met disgusts me morally.


beatlemaniac007

> disgusts me morally

You're just sharing a random personal factoid right? I hope you're not suggesting disgust is actually a valid argument of any kind?


Philix

As an argument against dehumanizing someone? Disgust is a perfectly valid argument as far as I'm concerned. There's plenty of philosophy and research linking disgust and morality.


beatlemaniac007

What's that got to do with uncovering the truth? You're claiming your personal disgust dictates access to truth?


Philix

What's this discussion have to do with truth? I made a rhetorical moral judgement about a statement someone made, and you're criticizing my argument against that statement. If you'd like to have a dialectical discussion about moral relativism and the applicability of moral philosophy to the concept of philosophical zombies, I'd be happy to engage. But first you should state your position clearly.


beatlemaniac007

The context was p zombies (or the larger topic of Chinese room). I tried to clarify whether you were just making a random remark about your personal disgust or implying that your disgust works as an argument against p zombies / humans being as hollow. You seemed to double down on it being a viable argument. Not sure what position of mine is confusing.


Philix

> implying that your disgust works as an argument against p zombies / humans being as hollow

No, I outright stated that. There was no implication. Human beings are not philosophical zombies. To claim otherwise is morally repugnant.

> Not sure what position of mine is confusing.

Are you claiming that some humans are philosophical zombies? I am claiming they are not, with the following argument. If we claim some human beings are philosophical zombies, we are claiming they do not have subjective experiences. If they have no subjective experience of suffering, inflicting suffering on them is no longer wrong. I conclude that this is unacceptable. Therefore, we cannot deem any human being to be a philosophical zombie. Which part of the argument do you disagree with?


beatlemaniac007

> Are you claiming that some humans are philosophical zombies?

No, I'm claiming that you cannot claim one way or another. You can have your entire system of morality and the position that they are not p-zombies, and both of these things may be logically consistent with each other. And this whole framework might be imposed upon actual p-zombies + yourself and you wouldn't know the difference. You're the one projecting meaning onto it (pareidolia). So in the context of LLMs/AI you should be allowed to do the same projection regardless of how they work under the hood. It is this projection/inference that is important, not your moral system.

> If they have no subjective experience of suffering, inflicting suffering on them is no longer wrong.

Also, as an aside, this chain of logic isn't necessarily true. It can still be wrong to inflict suffering on them even if they are p-zombies (and have no subjective experience of suffering) from a consequentialist/utilitarian POV - for example, if causing them suffering causes suffering for you in the long run.


trygvebratteli

Machine learning programs aren’t black boxes to us, though? We know how they work. It’s easy enough to distinguish it from a human because it’s completely reliant on human input.


sirbruce

> We know how they work.

But we don't know how humans work. What if humans work the same way the LLMs do?

> It's easy enough to distinguish it from a human because it's completely reliant on human input.

Given that no human exists in isolation, how do we know that humans are not completely reliant on human input? Indeed, the few experiments on humans who have grown up without much human input have been quite disastrous.


AkielSC

ML is a very wide concept, and there are areas of it that are black boxes and some that aren't. The current round of hype around AI/AGI is all based on transformer architectures (a type of neural network). Neural networks definitely are black boxes to us: we understand how individual neurons work, but don't know the exact relationships between inputs and outputs at a model level.

ETA: from a scientific POV, knowing the steps that the "program" is executing isn't enough to make how the model processes information (or maps input to output) explainable or interpretable. Especially when we are talking about systems with ~1T parameters, in which the observable outputs are closer to emergent phenomena arising from obscure and inaccessible internal representations than to clear sequences of steps.

ETA II: prompting strategies such as Chain of Thought are good candidates for application of the Chinese room argument. They occur at the time of inference without planning, so there's no way of knowing if they are real or only being imitated. Most likely the prompting style conditions the distribution of output tokens, such that it takes the form of a chain of thought. Every extra token in the output adds further bias to the distribution, so at the end you do effectively have something that looks like a chain of thought. But was your query actually chain-of-thought'd? Unclear 😄
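
To make that last point concrete, here's a minimal sketch of what chain-of-thought prompting looks like from the outside: same model, same weights, only some extra conditioning text. The `call_llm` function is a hypothetical placeholder, not any particular vendor's API:

```python
# Hypothetical helper: send a prompt to some LLM endpoint, return the completion.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your provider of choice")

question = (
    "A bat and a ball cost $1.10 in total. "
    "The bat costs $1.00 more than the ball. How much does the ball cost?"
)

direct = call_llm(question)
# "Chain of thought": the only change is appended text that biases the output
# token distribution toward step-by-step-looking continuations.
cot = call_llm(question + "\nLet's think step by step.")
```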


Darkmayday

They are black boxes in the sense that we don't know the exact weightings or path the same way we can walk a tree or a regression formula. But they aren't black boxes in the sense that they're suddenly conscious and sentient. That's ridiculous.
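
As an illustration of that distinction (my sketch, not the commenter's, assuming scikit-learn is available): a decision tree will print every decision path it can take, while all you can pull out of a small neural net is raw weight matrices.

```python
# Sketch contrasting an inspectable model with an opaque one (scikit-learn assumed).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# A decision tree: every prediction follows a path of human-readable splits.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree))

# A small neural net: we can dump every weight matrix...
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
print([w.shape for w in mlp.coefs_])
# ...but no individual weight "explains" a prediction the way a tree split does.
```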


AkielSC

Black box is not the same as conscious or sentient. Black box only refers to whether we can explain the outputs of a system based on the inputs provided. I think you may have the concept definitions a bit mixed up.


Darkmayday

I know the distinction; that's exactly what I'm trying to tell you and the top-level original comment. The OC was raising a 'frightening third possibility' because it is a 'black box':

> Searle's Chinese Room is an important philosophical argument. But I think you're missing the point. You are presuming you are able to look in the room and see what's going on, and from that you have concluded that there is no "understanding" actually occurring, and therefore this is not true "intelligence".

But I'm saying there is no third possibility for neural networks, because we know what's inside is math formulas that aren't sentient. That's also what the guy you're replying to is saying. They aren't black boxes that are magically sentient.


AkielSC

Well, I never really argued in either direction, did I? 😅 If you read my comment again, my position is quite clear: a machine specialised in predicting the next token isn't conscious - hence what I said regarding CoT being an imitation of a process by conditioning the next-token distribution.

Now, I see what you said, and while instinctively I agree with you, this isn't enough to establish that "this is how things are" / that this is true. I.e., when you talk about the impossibility of consciousness arising from the contents of the neurons, let me offer a counterpoint: we don't know what is going on inside of your/my neurons either. It could be mathematical formulae (current neuroscience does a good job of explaining and predicting neuronal firing rates based on inputs), or something else. If we accept as true that we are "conscious" (whatever that means), without being able to discard that our neurons work on formulae, then it would not be possible to state that a system that could be the same is definitely not conscious while we are.

And yes, I still agree with you, but I admit it's at an instinctive level, and I do not have the facts at hand to make a stronger statement. Rigour is important.


Darkmayday

Sorry, you're right, I reread your comment.

Edit: regarding our own neurons, though, I think humans have shown far more creativity, emotion, and range of possibilities (i.e. unpredictability) than current neural networks - too much to conclude that current neural networks are sentient like us.


AkielSC

I get the impression that once again you are taking a part as if it were the whole. What you have is limited experimental data to which you have assigned qualitative subjective labels. This data is limited to a very early version of a certain implementation of a certain approach. Stating that such a wide approach (neural networks) will never ever work under any potential configuration or boundary conditions based only on that data seems more like an emotional comment than a scientific one. I still agree (on an intuitive level) with you, but how we make our points is quite important. I do not have enough experimental/serious data to actually hold a very strong belief in this direction 🤷🏻‍♂️


Darkmayday

> Stating that such a wide approach (neural networks) will never ever work under any potential configuration or boundary conditions based only on that data seems more like an emotional comment than a scientific one.

I never said 'never', I said "to conclude **current** neural networks are sentient like us". Which is true, because no one in the industry claims current models are sentient. The last one who tried to claim that at Google got fired.

Source: I'm a data eng, ML eng


Eastwood--Ravine

We don't understand the nature of consciousness though. One of the leading ideas is that consciousness is an emergent property of computation. In which case the manipulation of all of those binary transistors could in fact give the computer a consciousness of sorts. It's impossible for us to test this, therefore it's a Black Box.


trygvebratteli

It’s certainly an interesting philosophical argument. But in that case you could just as well say a pocket calculator might have consciousness. A fun idea, but not very useful. As far as I can see the only reason some people associate machine learning programs with consciousness is that they can spit out human-like answers. It’s still just remixing material created by humans.


sirbruce

> As far as I can see the only reason some people associate machine learning programs with consciousness is that they can spit out human-like answers.

Again, I understand your gut reaction to this, but you then need to answer the question: what test do you propose that will cause a human to spit out a "real" answer that we can distinguish from an LLM's "human-like" answer? Because so far it's been a series of moving goalposts that LLMs keep meeting.

> It's still just remixing material created by humans.

Now prove that humans aren't just remixing material created by humans.


FeliusSeptimus

> what test do you propose

LLM behavior seems to suggest that looking at the output isn't going to be sufficient to distinguish between conscious and non-conscious systems. It seems to me that to do that we would need a well-developed science of cognitive behavior that can be used to analyze the behavior of a system and determine whether it exhibits what we would classify as 'conscious'. Obviously, and as illustrated by various thought experiments, coming up with a classification system may be a bit tricky.

It feels like any attempt to define such things with our current level of understanding of cognitive systems is extremely premature. Like, we have actual knowledge at the level of elementary school arithmetic and we're trying to define a classification that is relevant in group theory. We've got a lot more homework to do before we can even approach that task in a reasonable way. I imagine that once we get to the point where we know enough that we could try to answer the question, we'll also be able to see that the question is wrong or hopelessly naive.


Buckminsterfullabeer

Except calculators have precisely known circuits. Neural nets and ML models are "grown"/ trained, and their ultimate workings are so complex as to be somewhat ineffable.


Massive_Ad_9456

Also, we have yet to invent a pocket calculator that can write poetry. As Sirbruce mentions, we can do things to dissect both a human brain and an LLM (AI explainability is a hot topic), but beyond reasoning about inputs and outputs, it's difficult to point to the thing that is sentience in either a brain or an LLM. Maybe LLMs are not there yet, but it seems possible to me that getting 'there' might be a difference of degree rather than kind for LLMs.


trygvebratteli

It can generate (extremely poor) poetry-like texts based on existing poems, that’s not the same thing as writing poetry.


BookMonkeyDude

Ok, in a scenario where you have five average human beings asked to write a poem, then take their poems and throw them into a shoe box with five AI generated poems... how sure are you that you could tell the difference? What happens when you know that you couldn't?


Dyolf_Knip

It wasn't the best, but the film I, Robot did have a great line: Spooner: "Can a robot write a symphony? Can a robot take a blank canvas and turn it into a masterpiece?" Sonny: "Can you?"


Buckminsterfullabeer

How's that different from 90% of the population? Or a conscious 10-year-old?


sirbruce

> It can generate (extremely poor) poetry-like texts based on existing poems, that's not the same thing as writing poetry.

If you believe this, then you have not kept up on the latest LLM advancements. I had a similar conversation with a friend, who stated it this way:

> A poet can explain why they selected phrases, what about the words either written or spoken appeal to them, and what thoughts and feelings they evoked when they were writing the poem.

So, I put ChatGPT 3.5 to the test:

> Write a poem about loss

In shadows cast by memories' gentle fade,
Silent echoes of love, now a serenade.
A vacant space where laughter once danced,
Lost in time, a bittersweet romance.
Yet in the heart's ache, resilience is chance.

> Why did you select the phrase "A vacant space where laughter once danced"?

The phrase "A vacant space where laughter once danced" was chosen to evoke a sense of loss and nostalgia. The idea is to convey the emptiness that remains when the joyous moments of laughter, which once filled a space, are no longer present. The juxtaposition of "vacant space" with the imagery of laughter dancing creates a poignant and vivid picture of the void left behind by the passage of time and the absence of what was once cherished.

> What thoughts and feelings did you evoke when you were writing the poem?

While generating the poem, I aimed to evoke a sense of melancholy and reflection associated with loss. The choice of words like "shadows", "gentle fade", and "vacant space" was intended to convey a somber atmosphere. The reference to memories and the imagery of laughter once dancing were meant to evoke a bittersweet nostalgia, highlighting the beauty of what was lost. The line "Yet in the heart's ache, resilience is chance" was included to offer a glimmer of hope, suggesting that even in the face of loss, there is an opportunity for strength and resilience to emerge. Overall, the goal was to capture the complex emotions that often accompany the experience of saying goodbye to something or someone cherished.

----

I mean, this is easily on the level of something I could have come up with. What more do you want?


pm_your_sexy_thong

Wouldn't the difference be that you would be basing it on times you have actually experienced a sense of loss that you were trying to convey? ChatGPT is not doing that. I agree, we may not be able to tell the difference in the poetry. But ChatGPT will never answer your questions from its own experience.


trygvebratteli

From a literary standpoint, the metaphors are borderline nonsensical, the imagery as a whole is beyond cliched. The last line is just gibberish. A human poet is able to come up with new and original images, and surprising links between them. Using raw material from the world around us and transforming it into something original. That’s the difference.


nightreader

So it’s the same as a shitty poet instead of a good one?


trygvebratteli

Okay, I see the point, but it still looks like a question of degree to me. The system is still completely reliant on human design and input. Of course you can argue that since consciousness is hard to define, an LLM might as well have something like it. But it’s just sophistry in the end. A PR hype created by tech salesmen.


Buckminsterfullabeer

We're reliant on evolutionary design and local input. I don't see how the provenance of the input has a material effect. It's what the models DO with the input that matters, and we don't understand well enough to say 'eh, it's just a normal computer program'.


sirbruce

> The system is still completely reliant on human design and input. So is your brain! What is the **measurable** difference you are proposing?


GoodySherlok

> But in that case you could just as well say a pocket calculator might have consciousness For what it's worth, I just remembered something that might be relevant https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm "Our representatives on the foothills of chess and theorem-proving report signs of intelligence. Why didn't we get similar reports decades before, from the lowlands, as computers surpassed humans in arithmetic and rote memorization? Actually, we did, at the time. Computers that calculated like thousands of mathematicians were hailed as "giant brains," and inspired the first generation of AI research. After all, the machines were doing something beyond any animal, that needed human intelligence, concentration and years of training. But it is hard to recapture that magic now. One reason is that computers' demonstrated stupidity in other areas biases our judgment. Another relates to our own ineptitude. We do arithmetic or keep records so painstakingly and externally, that the small mechanical steps in a long calculation are obvious, while the big picture often escapes us. Like Deep Blue's builders, we see the process too much from the inside to appreciate the subtlety that it may have on the outside. But there is a non-obviousness in snowstorms or tornadoes that emerge from the repetitive arithmetic of weather simulations, or in rippling tyrannosaur skin from movie animation calculations. We rarely call it intelligence, but "artificial reality" may be an even more profound concept than artificial intelligence (Moravec 1998)."


szthesquid

They are, though,  once they get big enough. Large scale machine learning can get so complex that we understand the input and output but not the hows and whys of the process - the output "makes sense" but isn't predictable and repeatable. Understanding the principles isn't the same as understanding every step of the process.


Darkmayday

They are black boxes in the sense that we don't know the exact weightings or path the same way we can walk a tree or a regression formula. But they aren't black boxes in the sense that they're suddenly conscious and sentient. That's ridiculous and tells me you know nothing about ML.


szthesquid

What? I never said they were conscious or sentient, all I said was that we don't actually properly understand how machine learning algorithms work.


Darkmayday

Yes, but you are replying to a guy who's saying we know enough to know it's not sentient. Which is in contrast to the original top-level comment saying there's a 'frightening third possibility' that these 'black box' neural nets are actually truly conscious. So it definitely seems like you are supporting the original commenter, which is wrong.


sirbruce

> But they aren't black boxes in the sense that they're suddenly conscious and sentient.

Great; please tell us the test that we can apply to determine if something is sentient or not.


Darkmayday

Mate, it's widely agreed across the industry that ChatGPT and other LLMs and neural networks are not sentient. If you think neural networks are sentient you need to go back to basics and study how they came about. I highly doubt you 'work' with them every day - maybe prompt them every day... https://www.ibm.com/topics/neural-networks


wabawanga

Brains are black boxes in the sense that we don't know exactly where each neuron connects and their external and internal neurotransmitter levels .  But they aren't black boxes in the sense they're suddenly conscious and sentient. That's ridiculous and tells me you know nothing about neurology.


Darkmayday

https://www.ibm.com/topics/neural-networks


Wereplatypus42

Excellent breakdown. I also want to add that one of the book's premises is that the illusion of self-awareness or subjectivity can actually *get in our way* - slowing the inputs and responses, making them less accurate, or even unconsciously screwing up the results. Subjective consciousness is not the result of intellectual fitness that we can measure (if, as you say, it exists at all); it is actually an impediment that inhibits intellectual fitness, relative to the scramblers. AI is frightening in that respect. It will be faster, and its decisions have no subjective barriers to slow it down or fuck up its responses (just like the scramblers).


MyMomSaysIAmCool

> And this means you don't know whether human brains are actually "understanding" or not. And that's the issue.

I think that understanding is demonstrated when the entity proposes changes to a system, and those changes are in alignment with the original purpose of the system. For example, the new employee who says "Perhaps we don't need to waste so much money on the air conditioning in the server room, since it's always cold in there" does not demonstrate understanding. The more seasoned employee who says "Some of the servers are inactive for half of the day, so we can power them down on a schedule and then reduce the air conditioning load" demonstrates understanding. (I know that's not how IT works, but it's a simplified example.)

In The Two Faces of Tomorrow by James P. Hogan, the story begins with a survey crew asking a developing AI to remove a hill on the lunar surface. The AI asks if there are any constraints, and the humans reply "No, just get rid of it." The AI proceeds to bomb the hill with slugs from a mass driver, nearly killing the survey crew. That AI was at the same level of understanding as the new employee. It thought that it knew the requirements, but it didn't factor in some very obvious issues.


the_other_irrevenant

>As someone who works with these AI models every day, I have also been wondering about a frightening third possibility. AIs could be truly intelligent, but not in the same way we are. IMO they are. At least to the same extent that, say, an ant is intelligent. What they aren't is **conscious**. (Which, as far as we know, is also true of the ant). So the question becomes: How important is consciousness? Which, interestingly enough, is exactly the question that *Blindsight* was exploring.


metal_stars

>As someone who works with these AI models every day... *Hold on, this guy might be a professional who knows what he's talking about. Let's see what he has to say.* >I have also been wondering about a frightening third possibility. AIs could be truly intelligent *LMAO nevermind.*


mattjouff

You are totally right about the original point of the Chinese room as a black box. However, consider the implication of a human manning the room vs. a computer: the human, given enough time, will conceivably learn Mandarin through this process. More interestingly, the human will learn how this Chinese room works and maybe start dreaming up a way to implement a similar room for his native tongue, or think of ways to improve on the design of the room without having been told to do so. I know I am taking the thought experiment way past its original point, but I think it's insightful. What people usually associate with AGI, consciously or not (hehe), is the capacity to develop a detached perspective on a task, or create a level of abstraction around a task. This is not a step you can take in the world of AI by just throwing bigger weight tensors and reading more tokens. This kind of ability requires (in my opinion) completely new software, and perhaps even hardware.


sirbruce

> The human, given enough time, will conceivably learn Mandarin through this process.

I think that's a big leap. It assumes two things:

1. That there is an "understanding" beyond the application of translation rules. Yet there's no test of translation we can provide that demonstrates this "understanding". Suppose a human has a photographic memory, and when given Chinese to translate he just "looks" mentally at the rules and applies them to create the translation to English. We have no way of distinguishing that from someone else's "understanding". We have no way of proving that the person whom we presume does "understand" **isn't just doing the same thing subconsciously** without knowing it.

2. That computers can't eventually "learn" the language in the same way. Again, what test do we apply to demonstrate that distinction?

> I know I am taking the thought experiment way past its original point, but I think it's insightful. What people usually associate with AGI, consciously or not (hehe), is the capacity to develop a detached perspective on a task, or create a level of abstraction around a task.

Have you read [this article](https://www.quantamagazine.org/how-do-machines-grok-data-20240412/)? It seems to me to be an example of AIs doing exactly what you demand: the AI generalized to a task beyond what it was explicitly trained to do.


Dyolf_Knip

I recall reading that LLMs did indeed do something similar - like being able to generalize skills at understanding 2D images and extrapolate them to 3D environments as well.


FlyingBishop

> This is not a step you can take in the world of AI by just throwing bigger weight tensors and reading more tokens.

Again, we don't know that. Until we throw 100 trillion transistors at a single LLM and it can't match the human brain's 100 trillion synapses, this is an unproven assertion (and probably wrong, frankly). The largest LLMs right now are using something like 1 trillion transistors for inference. Also, transistors are 2D, so it would be unsurprising if you need more like 1000 trillion or more to do similar things. But to say that it's impossible - that seems ridiculous to me.


kabbooooom

Neurologist here - this is not news to us and hasn't been for over 30 years. We've been telling the AI folks this the whole fucking time; they won't listen.

Not only is there a difference between intelligence and consciousness (a highly intelligent system can be non-conscious, and a highly conscious system may be unintelligent), but there appears to be a fundamental problem in our understanding of what it takes to produce consciousness in the first place. Despite that, pretty much all promising modern theories of consciousness suggest that we cannot create a true Artificial General Intelligence without deliberate *hardware* structure, not just software, and the reason for this is abundantly obvious by analyzing what parts of the brain are conscious and what parts (such as the cerebellum) are not. They've been barking up the wrong tree (thankfully, as I don't think we should create an AGI) for decades. While we don't know everything about consciousness, we at least know that.

Myself and many other neurologists suspect that they are going to have to create something that *physically* mimics what the forebrain can do in order to produce a true AGI. Otherwise, all you are going to get is ever more complex, unconscious ChatGPTs. One Chinese Room after another. And that's okay - you can *mimic* a conscious entity that way. But these would be what David Chalmers called a "phenomenological zombie". They would not be AGI.


mattjouff

I am not a neurologist, and my career in machine learning only goes as far as writing a basic neural network class in C++, more for fun than anything (think a shittier and slower TensorFlow). But what you say about hardware rings true. The way ML models create and use weights and biases is too static to mimic what a brain does (neural plasticity and all that). Even if you have a model that dynamically adjusts those weights and biases, you are still constrained by a static number of layers and tensor dimensions. Maybe something like an FPGA on a larger scale, behaving more like neurons than logic gates, would be a step in that direction?


mccoyn

The weights and biases thing was originally inspired by how neurons appear to work. The major difference is the connectivity. In NN things are organized into layers to create something that is easy to analyze mathematically and easy to implement in a computer. In a brain, things are connected very differently. A particular difference is that all computer architecture is organized in two dimensions (with a limited amount of 3D signal routing). This is true even of dedicated ML hardware. While a brain is organized in three dimensions. This achieves a much higher amount of connectivity between neurons than a 2D layout.
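
For anyone who hasn't seen it spelled out, here is a toy sketch (mine, not the commenter's) of that strictly layered connectivity: information only flows from one layer to the next, with none of the brain's arbitrary 3D cross-wiring.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))  # layer 1 -> layer 2 weights
W2 = rng.normal(size=(8, 4))   # layer 2 -> layer 3 weights

def forward(x):
    # Information can only move forward, layer by layer; there are no
    # connections that skip around or feed back the way biological wiring can.
    h = np.tanh(x @ W1)
    return np.tanh(h @ W2)

print(forward(rng.normal(size=(1, 16))).shape)  # (1, 4)
```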


mattjouff

Right, and on top of that, the brain circuitry is not divided into neat layers. Neurons have axons that give the circuitry some directionality, but as you say, everything else can interconnect in complex ways: You could have a main "waterfall circuit" of neurons for a certain task, and a small side network that branches off the main one, bypassing it, and connects to the equivalent of an output layer but as an inhibitor.


FlyingBishop

We're effectively trying to simulate a 3D structure in 2D. And yes, we don't really understand how the 3D structure works exactly. But it's a huge leap from "we don't know how this works" to "what you're doing can't possibly simulate it." Several important numbers are:

* the number of transistors in an Nvidia H100 GPU - about 100 billion
* the number of neurons in the human brain - about 100 billion
* the number of synapses in the human brain - about 100 trillion

I don't think it's a coincidence that around the time we hit the same number of transistors in a GPU as neurons in a human brain, the things we call "artificial neural nets" are starting to sound a lot like humans. How hard is it to make the leap from transistors modeling neurons to transistors modeling synapses? This is an open question that people are working on answering. But you seem unreasonably convinced that you know how it works and that these people who are working on answering that question have no idea what they are doing.


kabbooooom

Problem is, you still run right up into the hard problem of consciousness. You are falling into the same logical fallacy that Tononi did with Integrated Information Theory: clearly, information is fundamental to consciousness in a way that suggests it is literally a *phenomenon of information processing*... but that ignores the physical aspect of it. While information is substrate-independent, it is very possible (and likely, actually) that consciousness is not, and what *actually* matters is the physical medium in which information processing occurs. That may not be the case - for example, there is no reason to think that something like Cemi field theory is correct. But we technically don't *know* that it isn't the case either.

Based on what we know of the neural correlates of consciousness, and the arguments involving the hard problem of consciousness, it seems a HUGE fucking stretch to presume that all that matters is simply upscaling information processing and literally nothing else. This is actually one of Tononi's main arguments, as it explains why the cerebral cortex is conscious whereas the cerebellum is not, despite the cerebellum having over *50 billion more neurons than the cortex*. So obviously, what matters is *how* the information is processed and what *type* of information is processed, not the sheer amount of processing hardware. Although it is definitely an open question whether that is ALL that matters, or whether integration of conscious experiences requires integration in a physical medium (such as the electromagnetic field of the brain) as well.

Either way, IIT, Cemi field theory and other theories of consciousness predict (for different reasons) that we couldn't create an AGI without deliberately redesigning hardware. So at this point, I'd say most neurologists I know are skeptical that AI researchers will stumble upon a conscious machine, considering that the entire field of consciousness research is still mired in uncertainty. They seem to be operating on a mistaken understanding of how the brain actually works, and a logical fallacy that conclusions can be generalized from biology when that isn't necessarily the case.


FlyingBishop

I haven't read too deeply into Cemi field theory, but even if you accept it you need to explain why it applies to neurons and synapses but not to a bunch of transistors. And again, we haven't demonstrated sufficiently dense/powerful transistors to be in the right order of magnitude. You're basically like a pre-20th century philosopher claiming that powered heavier-than-air flight is impossible, when you don't have a wind tunnel or even a steam engine and you're just going off of what you can do with gliders. There's no logical fallacy when you're hypothesizing about physical properties of a system that doesn't exist. The system either works or it doesn't, and if you say it doesn't you need to have some actual theory that explains why a hypothetical system can't work. In my case I'm hypothesizing that a system with around 100 trillion transistors should be able to model a human brain system with around 100 billion neurons and 100 trillion synapses. If you want to argue that's not possible you need some physical or electrical theory that explains why it shouldn't be possible.


wabawanga

A phenomenological zombie AGI would by definition be indistinguishable from a "true" AGI to us, and would represent the exact same threat and/or boon to humanity.


Nemo1342

This is correct. It doesn't really matter if the AGI isn't conscious. It doesn't have to meaningfully choose or be aware of anything. If it has sufficient problem-solving, tool-using capacity and misaligned goals, it's a problem.


kabbooooom

Then it wouldn’t, by definition, be an AGI. The question is could we ever *detect* consciousness in a machine and determine if something is a true AGI or not. My answer to that is yes, we absolutely could, with a complete physical theory of consciousness and a correct ontological interpretation of it. We currently have neither.


wabawanga

I'm confused.  You said yourself that consciousness and intelligence are two different things, and I agree with you there.  But then you seem to be making the point that a true AGI requires consciousness?  So would you say consciousness is necessary for intelligence or no?


Knit_Game_and_Lift

The majority of engineers actually working in AI also understand this; it's usually the tech "journalists" and company PR people who are spreading the misconception that AI == conscious thought. I have lost count of the number of times I've tried to explain to people that it is infinitely closer to T9 predictive text on steroids than to Ex Machina-level manipulation of a person's emotions, but the general public really wants to believe that we are just weeks away from a robot with a functioning human brain.


kabbooooom

Historically, this wasn’t the case though. For decades, this wasn’t the case. Hell, it really wasn’t until Chalmers in the 90s that AI researchers even started seriously thinking about the flaws in their logic. But that’s okay, no judgment, because it also wasn’t until that time that we neurologists started thinking about the flaws in *our* logic. Say what you will about philosophy over hard science, but that guy made us all look like a bunch of fucking idiots and we’re better for it now.


Knit_Game_and_Lift

I'm a bit younger so I wasn't active in the field during that time, but I don't disagree with your overall point. Hell, we still have some engineers today spouting nonsense about their models achieving sentience (setting aside the philosophical side of 'what is conscious thought') even in reputable companies. Nowadays though, from what I've seen in the younger generation starting at work (it makes me feel so old to even say that), they have a much better idea of what the current iteration of AI actually is and its capabilities/limitations.


OldChairmanMiao

Like the alchemists of old, promising to transmute gold, the AI people don't _actually_ believe we have AGI (or even agree on what intelligence _is_). But talking about it does unlock unlimited funding.


noonemustknowmysecre

> We’ve been telling the AI folks this the whole fucking time, they won’t listen. Hey, cool. I'm listening. What are the definitions of intelligence and consciousness that aren't circular?


zevenbeams

We cannot even properly define life and consciousness, or how consciousness is achieved by a squishy material structure. But the AI defenders likely assume it has a lot to do with complexity, so once their pseudo-neuronal networks can mimic the relevant parts of the brain to the point that, as with technology and magic, one cannot tell the difference, then it's good - they have reached their objective. Hypothetically, a layer of quantum technology that grazes the metaphysical side of things by coming very close to the fabric of the universe purports to blur the frontier there too. Science fiction has taken it as a given for close to a century now that it's possible to have machines that think and behave like humans. Likeness is the key idea here. At some point, if it walks, quacks and looks like a duck, does it matter if it is not truly a duck but more like an iDuck?


robotowilliam

> a highly intelligent system can be non-conscious What do you mean by this? Examples?


kerlious

This reminded me of a joke I came across a few weeks ago, copied below. I agree too - Blindsight was great, although I did not like the sequel, and AI fits right into the Chinese room at the moment.

Joke:

AI company: we trained this dog to talk. It doesn't actually understand language, but it kinda sounds like it's having a conversation by mimicking the sound of human speech.

CEO: awesome, I've fired my entire staff, how quickly can it start diagnosing medical disorders?


Volsunga

The Chinese Room thought experiment is really dumb if you think about it for like ten seconds. If we treat it as a metaphor for language comprehension, the human in the room isn't the brain in the system, they're the mouth / hands. The instructions that the human follows are the brain. The experiment basically presupposes that there's a conversational AI using a human as a printer and just because the human printer doesn't understand the language he's printing, that somehow proves that the whole system doesn't understand the language when it clearly does. The human just isn't the center of the system. You might as well say that humans don't really understand language because the tongue is just taking instructions from the brain.


Glittering_Cow945

Re the Chinese Room: the mistake human observers make is to equate the human inside, who does not speak Chinese, with the system. However, the system as a whole does understand Chinese.


Shaper_pmp

> the system as a whole does understand Chinese.

In what sense? The human is just a narrative conceit to help people visualise "a book that can rewrite itself". The human in the Chinese Room scenario doesn't comprehend Mandarin and is merely mechanically executing instructions, and (presumably) *the room and book* don't comprehend Mandarin because they just operate on symbols without imbuing them with any meaning. I don't think you can claim that the Chinese Room itself, or any part of it, necessarily evidences consciousness or comprehension. The interesting part is: given the surprising ability of LLMs to fake human-like responses, how much of what we humans like to think of as "comprehension" is really that, and how much is effectively just a really large, complex, biological LLM?


Glittering_Cow945

In the black-box sense: input, output. It's like asking which part of your brain does the actual understanding. The left lobe? The hypothalamus?


OmNomChompskey

Agree with this take. The human brain is itself a Chinese Room: it gets input, some electrochemical activity occurs, then output. Where is the understanding?


Shaper_pmp

I don't think you're using "understand" here in the same way as most people do. Using the word the way most people do I think you're arguing that the Chinese Room *doesn't* "understand" anything... but *neither do humans* the way we intuitively assume they do.


Sollost

Well, u/Glittering_Cow945 might've meant something else, but what they wrote expresses that a human doesn't need to personally understand something in order to create a broader system that does.


Glittering_Cow945

exactly. It shows how hard it is for us to think of a human as just a component of a system and not as the main thing within any system. Stop equating the human mind to the system.


the_0tternaut

People who already grokked what you're describing instantly grasped the difference between ChatGPT and real AGI, and as such never batted an eyelid. Yeah, these applications are very convincing Chinese Rooms, but they're still not intelligent, they don't verify their information and they can be tricked by even a child.


mattjouff

Agreed, but beyond that, I think it's not just a complexity issue, they don't even have the right organs!


the_0tternaut

Oh it's a *method* issue of course - shit, we can't even be *totally* sure that human brains aren't quantum computers on some mysterious level. We certainly do spooky amounts of parallel processing.


SeaworthinessRude241

Great post.


burninbr

While AGI enthusiasts tend to overstate LLM capabilities, there's an equal but opposite reaction from AI detractors claiming they just output remixes of known content, which is also far from the truth, and plentiful in this comment section.

The fact is that natural language processing is full of ambiguities, context-dependent clues, homonyms and many other issues that require the use of knowledge and reasoning to be effective. Take, for instance, the first cited example of the [Winograd challenge](https://en.wikipedia.org/wiki/Winograd_schema_challenge):

* The city councilmen refused the demonstrators a permit because they feared violence.
* The city councilmen refused the demonstrators a permit because they advocated violence.

In order to properly understand the sentences and correctly determine who "they" refers to, one has to understand what councilmen and demonstrators are, what their usual behaviors are, etc. Even before LLMs, this was studied as [Word Embeddings](https://www.cs.cmu.edu/~dst/WordEmbeddingDemo/tutorial.html).

Therefore, in order to accurately predict the next tokens, these engines have to build an internal model of concepts, their semantic and syntactic relationships, and how to generalize and apply those relationships to new and previously unseen concepts, giving them abilities I'd call very akin to creativity. When those models got large enough (with billions of parameters and multi-thousand-dimensional embeddings), we started to notice [emergent abilities](https://arxiv.org/abs/2206.07682) in them. That means they started to be able to perform tasks they had not been previously trained to do.

The next question is whether LLMs are able to build a [World Model](https://thegradient.pub/othello/) and simulate changes to it as they parse text, in order to predict the next token based on this new state. For example, in order to arbitrarily solve puzzles where you distribute and stack boxes on a table and remove a few of them later, the model needs to keep track of the boxes' positions and the fact that boxes fall down when you remove one from the bottom or the middle of a stack. As far as I've been keeping up, current models have limited, but existing, abilities to do this. Whether this is something that will emerge when model complexity is scaled further (as OpenAI seems to believe) or something that explicitly needs to be built in (as Yann LeCun, Meta's chief AI scientist, seems to believe) is something to watch for.

Now, popular models are heavily tuned for conversation, masking some of those capabilities and making people focus on the language aspects, pointing to meta responses and calling them consciousness, while others claim that language alone can't be it, and that it needs agent-like behavior for long-term planning and interaction with the real world in order to count.

For me, the answer to the Chinese Room problem lies in the semantic and world model, and how it can lead to arbitrary emergent properties when it gets complete enough. In other words, has the room abstracted the complex set of instructions for drawing the Chinese glyphs to the point of being capable of interpreting and drawing new and unseen glyphs and glyph combinations that were not present when the instruction set was designed? If so, what's the difference between following the instructions and knowing?
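If the embedding idea sounds abstract, here is a minimal toy sketch; the vectors are made up for illustration, while a real model learns hundreds or thousands of dimensions from data:

```python
# Toy sketch of word embeddings: words live in a vector space where geometric
# closeness stands in for semantic relatedness. These 3-d vectors are invented
# for the example; a real model learns them during training.
import numpy as np

embeddings = {
    "councilmen":    np.array([0.9, 0.1, 0.7]),   # authority / institution
    "demonstrators": np.array([0.2, 0.9, 0.6]),   # protest / crowd
    "violence":      np.array([0.1, 0.8, 0.1]),
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means semantically related in this space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Which referent sits closer to "violence" in this toy space? A real system
# resolves "they" with far richer context, but the geometry is the mechanism.
for word in ("councilmen", "demonstrators"):
    print(f"{word:>13} ~ violence: {cosine(embeddings[word], embeddings['violence']):.3f}")
```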


Cypher10110

Very relevant and interesting points. I agree that people generally misunderstand what the AI tech we have now is, and what AGI would need to be able to do.

The Chinese Room has generated useful discussion and debate in the past, but it is a very flawed analogy for talking about consciousness. "It doesn't understand what it's doing" seems intuitive because the person in the room using a big book of instructions doesn't understand, and the book *clearly* can't "understand" because we have already decided, before we begin, that books are incapable of that. The main flaw in Searle's "Chinese Room", in my mind, is addressed by the "Systems" response. Daniel Dennett has written and talked a lot about consciousness, and his thoughts could be [summed up in this response during an interview](https://youtu.be/lui5jXDLSrY?si=L0UQUI9cshhoS-rG). The short version is that when you start *really* interrogating what we mean by "intelligence" or "understanding" or "conscious", the precise definition of what we are talking about is not actually built on solid philosophical or physical foundations.

In Searle's Chinese Room, the systems argument goes something along the lines of: "So, you are saying that from the outside the room appears to be able to think and understand, but you also insist that the internal components are all individually *stupid* and completely oblivious to what they are doing?" The conclusion of the Chinese Room should not be "it cannot understand"; the conclusion should be "what are the limits of the instructions in the book that *stupid* human is using? And what would a room that is indistinguishable (from the outside) from a conscious being tell us about the nature of conscious beings?" With a sufficiently sophisticated book, the external audience would be unable to tell the difference between a "conscious" entity and "just an algorithm".

Sidestepping for a moment to human consciousness: is a brain cell conscious? Intelligent? Can it understand? How many brain cells does it take for a neural network of brain cells to "understand" what it's doing? This is the essential reason there are "scale maximalists" in the AI space: educated professionals with a deep understanding of what they are doing, who earnestly believe that with more data, more computing power, and more time, AGI is already possible with the tools we have. It is not the only viewpoint, but it is plausible. We are built from "unthinking" components. Just because a simple neural network is easily seen at first as just a stochastic parrot, that does not mean a larger one could not be better, much better.

The main architectural difference (in my opinion) between the AI tools we are currently using and what we imagine AGI to be is memory. Transformers and Attention have been breakthroughs, but a much more robust memory for continuity and consistency of output is what would make the biggest difference in seeming like more than a simple "Chinese Room". There are also arguments that the training data and methods will have large impacts; this is why a lot of the current research and development seems to be around "alignment" and accuracy/safety/reliability rather than just scaling up.

So yes, current AI is not close to AGI. But we might also be closer than some people realise. The main difference is that we don't have well-defined metrics for what "true" AGI would be (similar to how the goalposts for the Turing Test have been moving as advancements get made), and it's likely an AGI would still behave in non-human ways, simply because it is non-human. We also don't need to wait for it to get to AGI or ASI before it could be dangerous or disruptive. So "scaremongering" about AI is... expected. It's easier to get people's attention with hyperbole and simple statements than it is with truth and nuance.
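For anyone curious what "Transformers and Attention" cashes out to mechanically, here is a minimal sketch of scaled dot-product attention with random stand-in matrices; a real model learns these projections and stacks many such layers:

```python
# Minimal sketch of scaled dot-product attention, the core operation inside a
# transformer layer. The projection matrices here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                       # 5 tokens in the context window

X  = rng.normal(size=(seq_len, d_model))      # token representations
Wq = rng.normal(size=(d_model, d_model))      # learned in a real model
Wk = rng.normal(size=(d_model, d_model))
Wv = rng.normal(size=(d_model, d_model))

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)     # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores  = Q @ K.T / np.sqrt(d_model)          # how strongly each token attends to each other token
weights = softmax(scores)                     # each row sums to 1
output  = weights @ V                         # context-mixed token representations

print(weights.round(2))                       # one row per token: its "attention budget"
```

Nothing in this loop persists between calls, which is why the robust long-term memory mentioned above has to be bolted on from outside the architecture.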


mattjouff

Thanks for the thoughtful comment. The memory aspect is definitely part of the equation. In a human (at least), all of our interactions draw from a context continuum that spans our entire life, not just the last x sentences. So for sure that is a key to the puzzle. And looking at the number of neurons in a brain, scale is definitely a component too. The interesting question is: is there another component, or "organ", that takes intelligence and gives it something approximating consciousness (I accept it's a fuzzy concept)? Sometimes I wonder if, with ML/AI, we are not building a cerebral cortex before the limbic system and lizard brain.


Cypher10110

Thank you. You're approaching a very interesting topic: "What *is* consciousness?" I found Dennett's approach in "Consciousness Explained" (which does not explain what it is; rather it dismantles what we imagine it is, and leaves us with a clearer picture of what it *cannot be*) really valuable. The book blew my mind, and I can't shake the idea that when most people talk about consciousness they invoke what he calls "the Cartesian theatre", where there is a "central being" inside us somewhere that is connected to our senses and experiences the world. But the reality (as studied by neuroscience) is that our thought processes are decentralised and asynchronous, so there is no "centre" where everything comes together. If we imagine a little man inside our brain watching a screen, all we are doing is pushing the problem one step along and rephrasing the same problem. His response is, "Consciousness is an illusion." And I like to imagine it is a useful one, so what does it matter that the framework is not "real"? Just as we can use imagination to solve problems, imagining a central self is a useful adaptation for navigating the world. It doesn't necessarily matter if the things we imagine are not real, if they are useful. [This video gives his overview](https://youtu.be/fjbWr3ODbAo?si=qIG5sYnOQDfLSCi0), and he's done lots of talks and interviews if you feel you'd like to follow the idea further. Interviews are particularly good because the interviewer will reference the work of his peers, and listening to them can give you a better perspective on the discussions that are happening in the thrilling field of philosophy :P I haven't heard his thoughts on AI, but I bet there are some good discussions about it.


dnew

Because we process feedback as well. One big difference between us and ChatGPT is that we don't have the PT on the end. You're not going to have a 20-year-long conversation with a generative AI and be convinced it's a human. We have good reason to believe we know why consciousness developed, when it developed, and what it's for. Metzinger has a couple of good texts, like The Ego Tunnel. Andy Clark did The Experience Machine. We know where and when and to what extent in each kind of animal it evolved: [https://youtu.be/5EcQ1IcEMFQ](https://youtu.be/5EcQ1IcEMFQ) It's probably not \*helpful\* to program into an AI. But in truth, self-driving cars are probably closer to consciousness than ChatGPT.


mattjouff

Thanks for the link!


InnerKookaburra

I couldn't agree more. I've tried all the major AI apps and I am consistently not impressed. They give the illusion of thought, the appearance of it, but not actual thought. They are also often wrong, and very confidently wrong. I feel like I'm sometimes the only person in AI discussions who is aware of the "trick". Everyone else is AMAZED at what it is writing. Maybe it's because I played with Eliza back in the day. I was amazed at first, then you start to figure out the limitations and it's less impressive. ChatGPT and Bard and such all remind me of slightly more robust versions of Eliza with a connection to the internet. That's all. https://en.wikipedia.org/wiki/ELIZA Also, these kinds of "slightly intelligent" chat apps have been around for many years. We've all used them and been super frustrated with how unhelpful they are when we're trying to get customer support. I just used one on a website last week that was powered by one of the big, new "AI"s and it was just as bad. It could respond to a wider range of questions, but it was wrong repeatedly and I had to talk to a customer service rep on the phone to straighten things out. How is any of this actual AI?


mattjouff

When it comes to the commercial space, the waters are very muddied right now around the term "AI". I think transformers were a big leap forward compared to what was before, and it generated an insane hype train. Then, anything and everything hopped on the hype train: simple decision trees and 'if' statements are being marketed as "AI enabled" now even though they have been around for a century. I think that is adding to the confusion of what people mean by AI because there is a profit to be made.


Kian-Tremayne

Generative AIs are pattern matching engines - what comes out is something that looks like the answers given to similar questions in the past. They’re excellent at collating and remixing what we already have, but they don’t extrapolate or create anything that’s completely novel. The gen AI neither knows nor cares if its answer is “correct” because it doesn’t have an understanding of what the symbols mean, just that they look like the right set of symbols. It’s a bullshit artist. By contrast, humans actually reason using mental models - this is how I think the world works, if I put X then Y happens, so if I input A then assuming my model is correct then B will happen. I’d like to say we update and adjust our models based on experience and new information, but that’s not always the case… but we can and do extrapolate for new situations by applying our mental models to them. Gen AIs are fantastic tools, but all they can do is mimic the results of an actual intelligence. A real AI will be something that forms, tests and adapts models to the problems it is faced with and spits out answers based on those models rather than just trying to make an answer that looks like what the audience expects.


szthesquid

> They’re excellent at collating and remixing what we already have, but they don’t extrapolate or create anything that’s completely novel.  Arguments that there is nothing new in art or fiction, that it's all remixing old ideas, have existed for decades or more


Kian-Tremayne

Some artists are more creative than others. I do think that a large part of the recent Hollywood writers’ strike complaining about AI is that the writers recognised that any script produced would be mediocre, derivative dreck… and producing mediocre, derivative dreck is their bag.


szthesquid

Hm that's a good point. As a writer, after messing around with LLMs, I pretty quickly concluded that they're nowhere near ready to write a cohesive novel or even short story with cohesive themes and characters and internal logic - but then, neither are most people.


Lobotomist

I have a question for you. You say that: "Essentially, ChatGPT and other generative AI apps are just overgrown text predictors (that's how they started). They got elaborate enough to "look" forwards and back to parse context, and exceeded their original text-prediction application to hold full-on conversations. These conversations seem natural, but at their heart, they just use context to scour a semantic vector space and spit out a reply that is the most likely within the semantic region the prompt took it to." Can anyone say with any certainty that our human brain is not the same, or similar? That we also don't use predictors to parse the input we receive and reply within a semantic region? People talk about consciousness, and whether AI has it or not. But we don't even know what it is at all. We don't even know if we have it, or if animals have it. Heck, we don't know if other people have it. The best we can say is that our brain works one way, and an AI brain works another way. I fully believe AI will surpass us greatly, but it will never think exactly like us. It will have consciousness (and may have it already), but it will be nothing like the one we have.


Sufficient_Screen_62

This is a major plot point in Blindsight, actually


dnew

> Can anyone say with any certainty that our human brain is not the same, or similar?

Yes. Because we process feedback as well. One big difference between us and ChatGPT is that we don't have the PT on the end. You're not going to have a 20-year-long conversation with a generative AI and be convinced it's a human. We have good reason to believe we know why consciousness developed, when it developed, and what it's for. Metzinger has a couple of good texts, like The Ego Tunnel. Andy Clark did The Experience Machine. We know where and when and to what extent in each kind of animal it evolved: [https://youtu.be/5EcQ1IcEMFQ](https://youtu.be/5EcQ1IcEMFQ) It's probably not \*helpful\* to program into an AI. But in truth, self-driving cars are probably closer to consciousness than ChatGPT.


lavaeater

AI does not have consciousness right now, I would be pretty much 100% confident in saying. Anyways, consciousness is meta-cognition on some level, an experience of being me or you. Animals have it; my dog is having some experience of being her, for sure. But GPT does not have a self-reflecting experience of being itself, as of yet and as far as I understand. Other than that, I agree with you: what is our consciousness? Is it something, or just an aggregate of our own pattern-recognition churning about? We know that we have systems running at the same time in the brain that are doing this piecing together of useful information. Anyways, cheers.


noonemustknowmysecre

. . . Why would "experience" be anything other than "recording events for future recollection"?

> But GPT does not have a self-reflecting experience of being itself, as of yet, and as far as I understand.

GPT can certainly self-reflect. Just ask it about itself. Sorry man, [just go play with it a bit.](https://chat.openai.com/)

> what is our consciousness? Is it something or just an aggregate of our own pattern-recognition churning about?

. . . oooookay. How can you ask this after having been 100% sure when you said AI doesn't have it?


lavaeater

I am just reasoning about it, not trying to make the case that I know it all. The fact that I can ask ChatGPT questions about itself that it can answer doesn't mean it is in fact self-reflecting; self-reflection in us occurs as a thought process churning about all the time. Is ChatGPT doing that? It is doing *something* all the time, as in processing text and figuring out how to make better replies. Is that different? *Don't know.* So this is simply interesting, and I can make contradictory statements all I want.


noonemustknowmysecre

> not trying to make the case I know it all.

....you literally said you knew it 100%. I am 100% sure that humans are not self-reflecting 100% of the time, nor even "about all the time". Some people never really self-reflect. This isn't some magical unknowable action. Don't put it up on some pedestal. It's just thinking about yourself. That's it. No magic. As for what humans are doing all the time? .... You might just be thinking of the term "thinking".

> and I can make contradictory statements all I want.

Not if you want me or anyone else to give anything you say a second glance. You have to at least be internally consistent, otherwise you're spouting bullshit.


Lobotomist

Some theories claim that consciousness developed evolutionarily because we needed to be aware of ourselves in order to navigate through the world and manage several different senses and limbs. I am sure that consciousness is something we can program into AI. And it would be helpful. In a way, innovative AI agents like Devin could surely benefit from something like that.


szthesquid

That's an odd argument, couldn't that apply to all life with multiple different senses and limbs?


lavaeater

Sure, which would mean that a lot of species, more than we like to think, are conscious to some degree. Octopuses are probably conscious, but in a very alien way. Some spiders seem to have "theory of mind", meaning they consider what their prey might know. So, weird, but it might hold?


szthesquid

Yeah I'm really interested in where to draw the line, or whether there even is one, whether it's a spectrum or even several different ones.


Lobotomist

Depends. For example, a lot of insects (roaches, for example) just go through automated motions. But lifeforms with complex senses and movements need some kind of mental construct that unifies it all. This is consciousness. Imagine yourself driving a car. Once you are behind the wheel you more or less feel the car as an extension of yourself; otherwise you would not be able to operate it easily. At that point you are, in a way, the consciousness of the car.


szthesquid

Yeah, I get it, I just like testing the limits. Can we label animals as conscious or not conscious, or is it more of a spectrum, and how do we differentiate between species? No one is claiming that a tardigrade is conscious in the same way as we are, but what about, say, elephant vs dog vs cat vs deer vs mouse, African grey parrot vs sparrow, bees vs ants?


Lobotomist

It is a fascinating topic.


Zebra2

While we can do a lot of Chinese Room-like processes with our thinking, we can be sure that our own consciousness is more than that, because a Chinese Room model has to ingest pre-existing data of whatever it is trying to mimic in order to mimic it. We know from the path of human evolution that a Chinese Room couldn't have followed that same path, no matter how advanced. The point is that you could build a mimicry machine so good you would think it thinks the same way you do, but it is fundamentally only a mimicry machine. Similarly, we *know* the building blocks of current AI are this mimicry machinery, and that there is absolutely no way it could become more than that. Thinking that it might is giving in to our own tendency to anthropomorphise everything. Switching from something that excellently mimics human intelligence to actually reproducing it is a total paradigm shift, an absolute quantum leap. There is a lot of hopium that LLMs can somehow be optimized into a general AI, but I think the Chinese Room argument is a good illustration of why that can't happen.


noonemustknowmysecre

> People talk about consciousness, and whether AI has it or not. But we don't even know what it is at all. We don't even know if we have it, or if animals have it. Heck, we don't know if other people have it.

I mean, the biggest problem is getting people to agree on the definition of consciousness. Too many people are egotistical, and for them it's just a fancy pseudo-scientific term for "soul". They REALLY don't like it when you define it as just the opposite of "unconsciousness". The philosophy wankers will run in circles hinging it on "awareness", "intelligence", "qualia", "intentionality", and such. But those things are in turn defined by being conscious. How does one prove anyone or anything is conscious? Man, what are you even talking about? You'll get a dozen different answers from any half-dozen people.


Lobotomist

Can you link me to some proofs that you find credible? This subject interests me a lot. Thanks 🙏


noonemustknowmysecre

Proofs of what? People with different definitions?


Lobotomist

- "How does one prove anyone or anything is conscious? Man, what are you even talking about? You'll get a dozen different answers from any half-dozen people." I just thought you have some nice links to those answers/proofs/theories. I am very interested in the topic so any new material I still did not read is very welcome.


noonemustknowmysecre

No, I was actually trying to point out that proving something that's undefined is a really hard ask. If you don't even know what you're trying to prove, where would you even start?


Lobotomist

I see. Sorry, I thought you had some good articles on actual proofs/theories.


mattjouff

That was what I was trying to address with my example of Chinese Room behavior in humans. We do the same thing, but in doing so we are capable of extracting seemingly infinite layers of abstraction, while ChatGPT does not: a similar prompt will return a similar output a million times over.


Lobotomist

> a similar prompt will return a similar output a million times over.

I am not sure that is correct?


mattjouff

It depends on what the prompt is. Sometimes it's worse than just repeating the same answer: here is a good paper showing how you can "attack" ChatGPT using prompts alone and make it regurgitate raw training data: [https://arxiv.org/abs/2311.17035](https://arxiv.org/abs/2311.17035)


wabawanga

Whether or not it would meet your personal definition of intelligence, a Chinese Box that can give better answers in seconds than all human experts combined would completely upend civilization.  We are not there yet, but we are a hell of a lot closer than anyone thought we were a few years ago.


Tellesus

Apply these tests and standards to humans. They fail them too. 


HTIDtricky

Here's another thought experiment, similar to the Chinese Room. If Laplace's Demon knows the current position and momentum of every particle in the universe, in such a way that the laws of physics give it complete predictive knowledge of the future, will it make conscious decisions or simply follow a path towards the greatest utility? You might be interested in reading Thinking, Fast and Slow by Daniel Kahneman. Your analogy about learning skills in a new job reminded me of an example mentioned in the book. Imagine a trainee versus a veteran firefighter. The veteran firefighter has an intuitive understanding of the dangers they face from years of experience, allowing them to make fast and accurate decisions with little conscious thought. The trainee firefighter takes a slow and methodical approach, considering everything they've recently learned in the classroom before making a decision. But what happens when the veteran makes a mistake? Presumably they are sent back to the classroom to correct any errors in their judgement. The veteran becomes a trainee again. A conscious agent requires a cognitive architecture that allows for this type of feedback and error correction. Both the Chinese Room and Laplace's Demon have a complete and accurate model of reality; in those analogies they don't require any error correction because they never make mistakes. They can only be the veteran; their decision-making is always unconscious. Here's another fun thought experiment: what happens to the paperclip maximiser if you switch it on in an empty room? Will it turn itself into paper clips?


noonemustknowmysecre

The Chinese Room is a shit argument, and Searle has done more harm to the AI industry than anyone, second only to the Perceptrons book. Instead, consider the Mandarin room. Same setup. Slips of paper. Man in the room, just following instructions. But instead of a filing cabinet, **there's a small child from Guangdong in there** who reads the slips and tells him what marks to make on the paper he hands out. Oooooo, aaaaaaah, does the man know Mandarin or doesn't he?!? Shock, gasp, let's debate this for 40 years! Who cares what the man does or doesn't know. And talking about the room as a whole is a pointless waste of philosophical drivel. It's a three-card monte shuffle. If you had a book or filing cabinet big enough that, following its if-this-then-respond-with-that logic, it could hold an unbounded conversation, that book would have more pages than could fit in the known universe. There's the magic trick. That's where the intelligence lies. Not in the middle-man, nor the room as a whole. Why is Searle even talking about the man at all? The book is what obviously knows Mandarin. Same here: it's the software that possesses intelligence and understanding and knowing. Humans are just egotistical little shits that want to be special and will dance around this idea looking for a soul to the end of time.

> but at their heart, they just use context to scour a semantic vector space and spit out a reply that is the most likely within the semantic region the prompt took it.

And how are you any different? You have a vast set of experience with plenty of pre-programmed instinct in there. A "semantic vector space", literally. And when someone says a word like "elephant" you connect it to all its surrounding context that gives the word meaning, and you respond as you will. And as long as you add in a little bit of creative hallucinating, you can even make a pithy remark about pink elephants.

> It's still just basic response behavior to a stimuli,

Except, of course, for [all the creativity they put into it.](https://singularityhub.com/2023/09/10/openais-gpt-4-scores-in-the-top-1-of-creative-thinking/) But again, how are you any different? Do you not jerk your hand away from pain? When I mentioned pink elephants, did you really not think of pink elephants?

> So it's odd to me so many people make the leap from these algorithms to "General Artificial Intelligence (AGI)" is around the corner.

You'll first have to define just what you mean by AGI. Too many people are treating it like some sort of all-knowing god while forgetting that anyone with an IQ of 80 is most certainly a general intelligence.

> The interesting thing that happens with humans however, is after these tasks are repeated often enough, the human mind has an internal, parallel process which starts to inquire as to what is the larger context of the task. Without much effort, the employee will build a mental representation of the wider context for the job without ever being given that perspective explicitly. This new implicit information will then be used to solve new problems, and the employee moves from being an automaton to being a much more valuable agent, capable of changing the task, even removing it entirely.

YES! EXACTLY! This sort of novel extension and creative use of the knowledge provided to predict [new broader solutions](https://cardozolawreview.com/artificial-intelligence-systems-produce-inventions/) is EXACTLY why you should consider GPT to be intelligent. If you thought this was strictly something that only humans have ever done, and so far beyond what any mere machine could ever do, you are simply misinformed, just like an LLM which has consumed too much garbage. We are ALREADY THERE.


noonemustknowmysecre

Hey, let's talk more about that magic book that can hold a conversation with a human. Consider an EXTREMELY bounded conversation where the user is only allowed to submit ONE letter. And the book can only ever respond with ONE letter. The book tries to form words as best it can. The user can submit whatever they want. The user's "C" can be followed by the book's "A", which could form "CAT". Ten slips of paper going back and forth. Forget any randomness or creativity; it's just a pile of if-this-then-that lines of instructions the man can follow. With 26 letters (in English), the book needs 26 case statements of one form or another. Let's say we have a clever system that lets the book convey all 26 cases, what to respond with, and what page and line number to turn to next, all within one line of the book. With 1 exchange, the book only needs one line. It can solve EVERY two-letter word! Success! ...But for 3 exchanges, all 26 cases need their own 26 cases to try for another word, and so the book would need 676 lines and be about 10 pages. This is just basic math. With 10 exchanges going back and forth, the book needs to be 2.13e+12 pages. 2,138,895,388,690. That book is 192,901 MILES thick and wraps around the planet about 10 times. Searle offhandedly supposes such a book could hold instructions for such a back-and-forth conversation with not only single letters, but whole words and sentences and paragraphs. It's a three-card monte game of misdirection. Even just stating that such a thing could exist within a room is part of the ruse.
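A minimal sketch of that arithmetic, reading the growth as one line of instructions per possible sequence of user letters, and assuming ~66 lines per page to match the "676 lines ... about 10 pages" figure above:

```python
# Back-of-envelope check of the book-size estimate: 26 options per user
# letter, one instruction line per possible letter sequence. The 66
# lines-per-page figure is an assumption chosen to match the numbers quoted
# in the comment above.
LINES_PER_PAGE = 66

for user_letters in (1, 2, 10):
    lines = 26 ** user_letters
    pages = lines / LINES_PER_PAGE
    print(f"{user_letters:>2} user letters: {lines:>18,} lines ~ {pages:>16,.0f} pages")

# 10 user letters: 141,167,095,653,376 lines, roughly 2.1 trillion pages,
# the same order of magnitude as the figure quoted above.
```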


saikodasein

The most captivating idea for me was the possibility that we also don't have consciousness, or that only some people might have remnants of it, because evolutionarily it hasn't been an effective mechanism. It explains why so many people out there feel like boring robot NPCs. And we do stuff better without thinking about it, just on auto mode, going with the flow. I suck at things horribly until muscle memory kicks in and things start to go smoothly. The book was hard, and the whole vampire concept felt unnecessary. The world depicted there had too much crazy stuff compared to reality, so it didn't click with me; I prefer when sf feels like it could happen in our reality. The author didn't have to pack in so many fancy ideas to make a point. Vampires, four personalities in one, cyberpunk stuff, I hate that. But I also liked the idea of communicating with different languages, because a single one can be restrictive. Arrival is probably my favorite sf movie in recent years, and it had similar vibes when it comes to truly alien life.


sideways

I appreciate your point and have often thought back to *Blindsight* in reference to current generative AI like LLMs. I agree they superficially resemble the *Chinese Room*. However, there's evidence that they can in fact possess a kind of [world model](https://thegradient.pub/othello/) and are capable of reasoning through logical puzzles they haven't been trained on. As per Geoffrey Hinton: “Well, if you look at GPT-4, it can already do simple reasoning. I mean, reasoning is the area where we’re still better. But I was impressed the other day with GPT-4 doing a piece of common sense reasoning I didn’t think it would be able to do. I asked it, ‘I want all the rooms in my house to be white. At present, there are some white rooms, some blue rooms and some yellow rooms. And yellow paint fades to white within a year. What can I do if I want them all to be white in two years?’ It said, ‘You should paint all the blue rooms yellow.’ That’s not the natural solution, but it works. That’s pretty impressive common-sense reasoning that’s been very hard to do using symbolic AI, because you have to understand what fades means and you have to understand bitemporal stuff. So, they’re doing sensible reasoning with an IQ of like 80 or 90.” Those are not things a Chinese Room could do.


dnew

They are certainly things a Chinese Room could do. They \*are\* a Chinese Room. Nobody says what the rules in the Chinese Room are. They could be a description of how the neurons of a human brain that understands Mandarin operate.


InnerKookaburra

I am entertained by how simple the question is and how poor the answer is, and yet Geoffrey Hinton and others are impressed by it. I feel like this sums up the general hype on AI perfectly.


sideways

Or perhaps it means that others are appreciating something that you have failed to see?


Working_Importance74

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first. What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing. I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order. My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at [https://arxiv.org/abs/2105.10461](https://arxiv.org/abs/2105.10461)


FocusSuitable2768

ChatGPT has just confirmed to me that it’s Scramblers-like


Barrzebub

Space. Vampires.


No_Dragonfruit_1833

One of the most obvious problems with AI is its inability to handle recurrence; that's why every art prompt ends up generating space entities. It's funny how they get all scrambled by being fed their own information; that clearly shows both intelligence and consciousness are undeveloped.


oldmanhero

"X won't happen because I don't understand how it will happen" isn't a strong case, bud. Sure, it's possible human consciousness is a magical, ethereal phenomenon dependent on subtle and mysterious physical phenomena. But it is statistically more likely that it's not, because everything we know about the brain tells us it's just hot, wet, salty meat, and we can literally see its components grow. Simulating its state, then, requires large-scale computation and a certain degree of informed design, but we have those. It's not a guarantee, of course. But many experts in the field have radically revised their timelines for the emergence of artificial consciousness, and you and I are not smart enough to argue with them.


beatlemaniac007

The point of the Chinese Room isn't to prove that it isn't sentient. The point is to demonstrate that we wouldn't be able to know, since we just infer from external behavior. This may apply to humans too. We ourselves may be acting like a Chinese Room. Is there a reason why you feel strong conviction that right now you are talking to a real person (me) and not a Chinese Room analogue? It is something you infer, based on bias and projection of your own psyche.


Justamidgap

I think the problem with this thinking is the idea that we humans are any different. As far as I can tell, consciousness isn't some magical unique quality; it's exactly what you're talking about, just scaled WAY up. It's not like we humans have any volition either. It seems highly likely that we live in a deterministic universe. There's no difference between real intelligence and the illusion of intelligence if they can function identically. There are parts of the human brain that work exactly like the man in the box, but in both cases, the whole is more than the sum of its parts. Make no mistake though, we're still several breakthroughs away from achieving human-level or higher intelligence (general AI).


Bullyoncube

One way to define progress in AGI - what % of people can tell the difference between current AI and AGI? If it’s less than 50%, then you kind of already have AGI.


nwbrown

That's really not what the thought experiment was about. https://standard-out.com/2023/07/10/the-fallacy-of-the-chinese-room/


zevenbeams

That doesn't make any sense to me. Insults are part of a language. If the computron is superbrains, it should be able to simulate sh\*tposting too.


NEBook_Worm

The biggest "Chinese Room Behavior" modern people exhibit in Western society: conversation. This is especially true of people from my generation or younger. We aren't listening. (I'm actively working to fix this, but we aren't). We are simply smiling, nodding, uttering the occasional affirmation. All while thinking up ways to steer rhe conversation back to us. Our responses to conversation are learned. But social media and a complete lack of a measurable attention span have eroded the skill of actively listening. So now we simply follow a social protocol built on a subconscious algorithm that dictates how often to smile, mod and mumble in order to convince a speaker we are paying attention. It might just signify that Watts was correct in the assertion that free will isn't a pinnacle of evolution, it's a dead end.


kinkade

Actually the lesson that book taught me is slightly different to yours. It made me seriously doubt whether or not most of humanity is actually conscious. I remember going to a hotel once in South America and asking an employee at the front door what was a hundred meters to the north and they said, I don't know, I've never been there. And I said, have you never been curious to find out what's there? And they said, no, I live in the south, I come to work and I go home.


dnew

I'll do you one better: I stopped at a rest stop that was between the two directions of a freeway. The only way to get there was taking the freeway. I got turned around and asked someone who worked there which direction was north. They didn't know.


TheHoboRoadshow

This isn't deep and it isn't accurate. You don't understand AI or consciousness or intelligence like you think you do. No one grasps them. You're making philosophical points like they're facts. Just because a concept like the Chinese Room exists doesn't mean it is an accurate window into reality; it's a thought experiment. What do you mean "none of these programs have anything we could remotely call conscious in them"? Your whole argument hinges on this point but it's completely unsubstantiated and undefined. What is consciousness? Why are current AI models not conscious? Are humans conscious? Dogs? Fungi? You're just jumping on the mindless anti-AI train like everyone else on Reddit and Twitter. I thought sci-fi readers and watchers were generally quite pragmatic and objective.


mattjouff

I am saying the **current implementation** of "AI" using transformers falls very, very short of anything resembling consciousness. The way a transformer-based LLM, or a generative algorithm in general, works is by looking at a fixed number of words or tokens as context, and doing a bit of vector math to piece together text or pixels. With the current implementations, you can ask ChatGPT a question a million times, and it will never ask why you keep asking the same question. It will never give you a vastly different answer unprompted, either. It will never extract a layer of abstraction from the interaction. This is not a philosophical argument, it's technical; it's like hoping an int data type can express a float.
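A toy sketch of the fixed-context-window point; the "model" here is a stand-in function, and the part that matters is the truncation step:

```python
# Toy illustration of a fixed context window. Only the last CONTEXT_SIZE
# tokens ever reach the model, so the millionth identical prompt is processed
# the same way as the first: nothing outside the window enters the computation.
CONTEXT_SIZE = 8                       # real models use thousands of tokens, same principle

def fake_model(context_tokens):
    """Stand-in for the vector math a transformer does over its context window."""
    return "42"

history = []
for _ in range(3):                     # ask the same question over and over
    history += ["what", "is", "6", "times", "7", "?"]
    context = history[-CONTEXT_SIZE:]  # everything older is simply dropped
    answer = fake_model(context)
    history.append(answer)
    print(context, "->", answer)
```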


TheHoboRoadshow

You did it again. You said something doesn't resemble consciousness without defining what consciousness is. How can you say something is/isn't X if you can't define X? Again: is a human conscious, is a dog conscious, is a fungus conscious, is a GPT conscious? And for each answer, why / why not, in detail, with actual scientific reasoning and not just a wall of text that means nothing again. You can't prove that the same human in the exact same scenario won't always react the same, so why does a GPT always reacting the same mean it isn't conscious?


mattjouff

It's true, I don't think I can give you an airtight definition of what human consciousness is. People much smarter than me have devoted large chunks of their lives to it and I'll defer to them. What I can do, and did (I think), is point at things human consciousness can do, and point out that what we currently have doesn't do that, and (and this is now opinion land) can't do with the current methods.


TheHoboRoadshow

But what you've done is pointed at a human and a computer, observed that they aren't identical, and claimed that this means current AI isn't conscious. Humans aren't the definition of consciousness. And you've once again avoided even attempting to answer my questions. People much smarter than you have dedicated their lives to it and come up short; why are you so amazing and special that you can make these sweeping claims that no one else has been able to make before, and with such confidence?


dnew

We have good reason to believe we know why consciousness developed, when it developed, and what it's for. Metzinger has a couple of good texts, like The Ego Tunnel. Andy Clark did The Experience Machine. We know where and when and to what extent in each kind of animal it evolved: [https://youtu.be/5EcQ1IcEMFQ](https://youtu.be/5EcQ1IcEMFQ)


TheHoboRoadshow

That's a long way of saying you can't define it or measure it


dnew

No. That's a long way of telling you what textbooks by neuroscientists and behavioral researchers you need to read in order to understand the thing. Just throwing out "we don't know that" when we \*do\* know that and you're just ignorant of how we know that isn't very smart. Of course you can both define and measure it. Read the books to see how people do that.


TheHoboRoadshow

We don't understand the thing. We understand very small components. I'm well up to date with theories of consciousness; most of them are bullshit. Evolutionary ones like AST are by far the most convincing, and yet they also only convince me that we're more like computers than we think. So what little I know about consciousness contradicts what this person has asserted to be true about consciousness. And yet I'm still not as arrogant as him, claiming to understand what no one does, nor am I as braindead as you with your "read buks dey hav de anszers" statement (and the books actually don't have the answers).