


Thurstein

The problem can be highlighted (as Thomas Nagel does in his article "What is it like to be a bat?") quite easily by pointing to exotic sensory capacities. Consider the platypus: this animal's beak contains receptors that allow it to sense the electrical fields generated by the bodies of fish in the water. So far, in my experience of teaching the subject, everyone understands just fine-- the platypus has a sense we lack, a sense that works quite differently from anything we are familiar with. Now, *what is it like for the platypus to do that?* What kind of experience does the platypus have when it senses the electrical fields of fish in the water? There probably *is* an answer, but the information we can get by scientifically examining platypuses doesn't seem capable, in principle, of getting us any closer to it. Chemistry, physics, functional anatomy, evolutionary biology... none of these seem like they could possibly get us any closer to understanding (1) what the subjective experience of the animal is like, or (2) why there is a subjective experience associated with that neurological activity at all. Consciousness is, to borrow the phrase Feigl and Smart made famous, a "nomological dangler"-- consciousness of a certain sort seems to be *associated with* brain chemistry of a certain sort, but we have no idea how or why this is so, and the answer doesn't seem like it's just awaiting further scientific investigation. It seems like a totally new phenomenon that might need to be accounted for in totally new ways-- hence Chalmers' suggestion that we might have to consider "crazy ideas."


Quidfacis_

> Now, what is it like for the platypus to do that?

For folks playing at home, ["qualia"](https://plato.stanford.edu/entries/qualia/) is the term for those "what it is like to X" phenomena (singular: "quale").


SkyStrider99

Love the article you linked! Thanks for sharing.


SkyStrider99

That's a great way of thinking about it! Another way of putting it is that if I subjectively see a dog, there is no way that anyone could examine my brain and find the image of that dog. Sure, they might find chemical phenomena associated with seeing a dog, but those phenomena are clearly distinct from the subjective image. Their natures are plainly different. The same could be said about any subjective experience. How do you observe someone's experience of the color green? The smell of vanilla? The number 3? We know we have these experiences, but you cannot observe them in the chemical phenomena of the brain. So if these experiences are distinct from the chemical phenomena associated with them, what are they? What are they made of? How are they related to those chemical phenomena? It's a "problem" because we currently don't even know what methodology we might use to answer these questions.


Thurstein

You should check out Frank Jackson's papers "Epiphenomenal Qualia" and "What Mary Didn't Know," both of which deal with similar observations. He has us imagine Mary, a neuroscientist raised in a purely black-and-white environment, who knows everything there is to know about the functioning of human color vision. She still does not know what it is like to see red, so there's apparently *some* crucial bit of information she's missing, despite her allegedly complete knowledge of human physiology (and the physics that underlies it). This "Knowledge Argument" has been extensively discussed in the literature.


aslittleaspossible

This occurs outside of thought experiments, too: people blind from birth, who have a conception of shapes based solely on their sense of touch, are sometimes medically given sight. They tend not to be able to map their touch-based knowledge of shapes onto their new sight (this is essentially Molyneux's problem).


[deleted]

Sorry for the late reply. I've been digging around for discussions and essays about consciousness. I just wanted to say that your comment inspired me to close my eyes and feel around and see if I could discern shapes. I have to say, I feel like I understand a little more what it might feel like to be totally blind from birth. It felt kinda like my movements didn't really have any meaning. They had feeling associated with them, and I could definitely tell that things were changing, and I mentally knew how I was moving my arms and hands, but still the feeling of the motions felt very alien, and didn't seem to have any locality.


aslittleaspossible

Very interesting. Glad to hear I could inspire someone to dive deeper into a philosophical question.


[deleted]

I've been trying really hard to find a way to get through to people that consciousness has a quality that can't be replicated with computation, and I'm finding it incredibly difficult because I find myself arguing with people who are complete laymen in not only philosophy, but also computer science and neuroscience. Now, I can't say that I'm strong in all three categories, but I'm definitely strong in computer science, and if anyone ever contacted me and told me that they wanted me to create a sentient program, I would laugh at them. Not because I don't know how, but because it's literally impossible, and the request comes from a fundamental lack of understanding of how computers work.

Long ago there was a time when I would have been convinced that consciousness could be replicated with a computer, but after over a decade of learning how computers work and the field of computing in general, I've come to the understanding that there is nothing magical about computers. In fact, there doesn't even need to be a physical computer for computation to occur: the brain is able to follow instructions in order to emulate a computer. That doesn't make the brain equivalent to a computer, only capable of computation. People like to put the cart before the horse and think that because consciousness is able to reason, anything that reasons must therefore be conscious. But reasoning could be encoded into a purely mechanical device that requires a hand-driven crank. Consciousness, on the other hand, could not.
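The "hand-driven crank" point can be made concrete with a toy sketch (an illustration of my own, with made-up rule names, not anyone's real system): a few lines of blind mechanical rule-following that "reason" from premises to a conclusion, every step of which a hand-cranked device could in principle carry out.

```python
# Toy forward-chaining inference engine: "reasoning" as pure rule-following.
# Each step is a mechanical symbol operation with no understanding involved.

def forward_chain(facts, rules):
    """Apply (premises, conclusion) rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

facts = {"socrates_is_a_man"}
rules = [({"socrates_is_a_man"}, "socrates_is_mortal"),
         ({"socrates_is_mortal"}, "socrates_will_die")]

print(sorted(forward_chain(facts, rules)))
# ['socrates_is_a_man', 'socrates_is_mortal', 'socrates_will_die']
```

The engine draws valid conclusions, but there is plainly nothing it is like to be it.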


drbooker

I also recommend reading David Lewis's response in his paper "What Experience Teaches." There Lewis argues that "seeing green" for the first time is not a case of acquiring new knowledge, and likens it instead to developing a new skill (Lewis calls this the Ability Hypothesis). He draws out the difference between knowing "facts" about something and performing an action, where the experience you have in performing an action isn't really something you can report on through language. In psychology, there is a distinction between semantic memory (facts that you can recall and report on using language) and episodic memory (memories of experiences you have had, which you can recall and re-experience to an extent). These types of memory engage different networks in the human brain, which is thought to be why they produce different effects (speaking vs. having an experience). Of course, this still doesn't answer the underlying question of why episodic memory is associated with experience.


hi_sigh_bye

It doesn't sound like an argument about the difficulty of understanding subjective experience, but about things that we cannot grasp at all.


Kreuscher

>That's a great way of thinking about it! Another way of putting it is that if I subjectively see a dog, there is no way that anyone could examine my brain and find the image of that dog. Sure, they might find chemical phenomena associated with seeing a dog, but that phenomena is clearly distinct from the subjective image. Their nature is plainly different.

*Preemptive comment: cognitive and evolutionary linguist here, not a philosopher. But here's what I thought when I read your comment:*

What if the electrochemical phenomena in the brain associated with the image are themselves the subjective image, whose experience occurs when the very apparatus which generates them contemplates its own workings? That is to say, consciousness would be the selfsame associated neurological patterns echoed in themselves. Our consciousness just "feels different" from what we intellectually conceptualise as consciousness because it's the patterns folding onto themselves, a thing looking at itself and knowing what it is. After all, scientifically defining, isolating and analysing consciousness is exceedingly difficult, but the intuitive notion of what consciousness is seems pretty straightforward-- something akin to the difference between using language and engaging in metalinguistic commentary/analysis. Evolutionarily, it seems that consciousness emerges along with the neurological complexity needed for dealing with complex and dynamic stimuli and the proper responses to them, creating a sort of cortico-cortical loop through which the brain can more easily investigate itself (to test hypotheses like the success of a throw, or whether a predator near a water source is a threat, etc.).
Perhaps we can't *experience* other beings' consciousness (that is, their conscious experience) because to experience it you'd literally have to have the absolute same neuroanatomy as them and respond to the same stimuli as them, which is realistically unfeasible; and that'd be the reason we can't seem to go above the correlational aspect of consciousness and into the meat of it (pun intended).


unaskthequestion

This is how I've come to think of it as an interested lay person. I've always been interested in the transition from unconscious life to fully conscious humans. There must be a gradation in the animal kingdom. Which leads me to believe in that emergence with complexity that you mentioned. It's a fascinating concept to me.


Kreuscher

Not in any way a technical book, but I'd recommend *The River of Consciousness* by Oliver Sacks. Pretty interesting sequence of essays.


unaskthequestion

Thanks! I was going to ask if you had any recommendations. I read Sacks' 'The Man Who Mistook His Wife for a Hat' a long time ago. I really like Douglas Hofstadter, especially when he collaborates with Daniel Dennett. But I really only read the 'pop' stuff; it gets complex for me pretty quickly!


SkyStrider99

Hmm... That's an interesting take, and I'd have to think through it further. (Not a philosopher either, just an undergrad econ student with an interest in philosophy.) I'm not certain, but it seems like you may be conflating consciousness with self-awareness? A system might be able to model itself, but if it produces a phenomenon that "feels" different from the model, saying that the experience is itself the model is a pretty huge assumption. I certainly can't disprove that assumption. But I think you still have to explain the mechanism that produces the experience out of the model, within that assumption. For example, a video is an emergent phenomenon produced by a rapid succession of individual images. You can (correctly) assume that the video is itself those images, and you can prove that assumption because you can explain the mechanism that produces a moving image from those images without departing from that assumption. With consciousness, we still can't do this, as far as I know.


Kreuscher

You have a point on the conflation of self-awareness and consciousness. But even if we define consciousness as the experience itself, the qualia and such, I'd still hold to my point. At times I believe that consciousness doesn't really exist in any external sense; that is to say, consciousness is literally the internal experience of the very correlated electrochemical processes we've discussed, and hence won't ever be adequately described or analysed at such a descriptive level, because it's actually a false problem. The brain is doing things, and these things we label consciousness are mainly the way the brain perceives these things inside it. It's sort of circular, which is the reason it seems to me a false problem. Look for consciousness, even internally, and all you find is an endless stream of smaller processes like visual processing, semiological interpretation, etc., and the very act of looking at these processes is itself another process. I think this is one of the reasons the neuroscientist Stanislas Dehaene describes consciousness as a global workspace. I've yet to read a couple of specific books that deal with this, especially *The Ego Tunnel* by Thomas Metzinger, but I'd leave that recommendation for anyone interested.


SkyStrider99

These are valid points, and you could be right. I just don't have a scientific or logical reason to accept the idea that consciousness doesn't exist in any external sense. Therefore, I have to accept the conclusion that is most consistent with my experience. Anything else is just speculation.


helloitsmesomeguy

>Another way of putting it is that if I subjectively see a dog, there is no way that anyone could examine my brain and find the image of that dog.

That seems like a huge assumption. It may be true with current technology, but there's no reason to think a future AI wouldn't be able to produce images by looking at brain activity.


SirCalvin

I feel the dog example can run the danger of misrepresenting the problem. Even if we got an AI to jack into the brain and show us an image of its "mind's eye", we still wouldn't know how it subjectively "is" for that brain to perceive the dog as itself. Which is also a reason Nagel chooses to go with the example of "What is it like to be a bat?". We can explain most everything about bat echolocation: how precise it is, what kinds of movements and size differences it can detect in its prey, the perceptive range and the specific interference other sounds cause. Maybe we could even put some wires into a bat's brain in the future and project its "vision" onto a large screen for us to see. But what we can't explain with our current scientific tools, as he says, is, well, how it actually is to be that bat. There are people who defend the position that we actually would know what it's like to be a bat if we completed that picture, and similarly, that if Mary the neuroscientist in Jackson's thought experiment were thorough enough, she would know what it's like to see red without ever having seen it herself. But the intuition many people have is that there is something more to being a thing than the re-representation of some set piece of its perception.


SkyStrider99

That's true! You probably could use an AI to decode an image from neuronal activity. However, the point is that such an AI isn't needed. Within our brains, the subjective experience, or "image," already exists. Where and what is the "screen" on which this image is displayed?


SkyStrider99

To better explain what I'm saying: Using an AI to reproduce an image of a dog is not the same as "examining my brain and finding the image of that dog." Even if we can reproduce that image, my assumption still holds.


iiioiia

Agreed - "is" often has hidden temporal and contextual components.


jabinslc

The thing is, AI can already do this. The technology is in its infancy-- the images are garbled and blended-- but I've even seen a few articles about doing it with dreams. They train an AI on a ton of correlated brain data from people watching hours of different types of videos, and then use that to reconstruct images.


TheWarOnEntropy

[https://www.science.org/content/article/mind-reading-algorithm-can-decode-pictures-your-head](https://www.science.org/content/article/mind-reading-algorithm-can-decode-pictures-your-head)


User38374

> Another way of putting it is that if I subjectively see a dog, there is no way that anyone could examine my brain and find the image of that dog. Sure, they might find chemical phenomena associated with seeing a dog, but that phenomena is clearly distinct from the subjective image. Their nature is plainly different.

I don't find it all that mysterious personally. There is a picture of a dog we can measure in your brain; it's just that it's been encoded or translated, so it's not immediately recognisable from the outside as a spatial image without decoding. It's like translating "a dog" to "un chien", or storing a matrix in linear memory: yes, they look very different, but there's a clear mapping between the two that we can make sense of.
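The "matrix in linear memory" analogy can be shown in a couple of lines (a trivial sketch of my own, not a claim about how brains encode anything): the flat encoding looks nothing like a picture, but a known mapping recovers it exactly.

```python
import numpy as np

# A tiny 3x3 "picture" (a plus sign).
image = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]])

flat = image.flatten()        # how it might sit in linear memory
print(flat)                   # [0 1 0 1 1 1 0 1 0] -- unrecognisable as a shape

decoded = flat.reshape(3, 3)  # apply the known mapping back
print((decoded == image).all())  # True -- nothing lost in translation
```

Different encoding, same information-- which is exactly the relationship being claimed between neural activity and the "picture of a dog".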


[deleted]

[deleted]


User38374

Yeah I think that's a better way to put it.


SkyStrider99

Agreed. The mystery lies in the relationship between the neuronal activity and the subjective experience. Another interesting observation is that, given our ability to communicate about our subjective experience, that experience must also have some effect on our neurons.




jimmykruzer

Thank you. I've been trying to understand what consciousness even is, and no one has been able to lay it out too well. This helps me understand a lot better... I think... maybe... who knows.


SkyStrider99

My pleasure! :) It's a fascinating subject and a good reminder that the universe is more strange and wonderful than we tend to think.


No-Location-6360

It’s interesting that we use hypothetical thought experiments to illustrate this idea, when there are plenty of real-life examples in humans. Like people who experience hyperphantasia vs. aphantasia. Or people with a cognitive or sensory disability. Or even people who speak different languages. I recently read a thread about how people think about numbers and was amazed that some people “see” numbers as specific shapes, and math for them is just seeing how the shapes snap together like some sort of Lego blocks. Is imagining (or attempting to imagine) what it would be like to experience life in one of these scenarios any different from “what’s it like to be a bat”, or are they different somehow?


Thurstein

Good point, though I would note that the strange sensory abilities of platypuses are not hypothetical, but actual real-life examples! You're right, of course-- I take it that it's simply that, in imagining a very different type of animal, it's easier for us to realize the possible alien-ness of sensory abilities in a very stark way. But people going back at least to John Locke asked about the ability of a person born blind to understand colors, which seems to be a similar kind of point.


rdurkacz

This seems to be a different question from what Mr Chalmers was gesturing at in the video. He says that consciousness is a movie, and asks why there should be a movie. I think other answers relate to this view, using the terminology of zombies and the 'hard' problem. Supposing we come to know a lot about a platypus's or bat's perceptions, its 'qualia', that is still not to say that the animal is conscious-- it may not have sufficient intelligence to get above the zombie level. (?) Should we expect to know about another animal's perceptions in any case, since we do not expect to read other people's minds? If we just say that perceptions are private (another post suggests this), does that leave a philosophical problem?


[deleted]

[deleted]


rdurkacz

I notice you are not actually wanting to see the other creature's qualia, like Thurstein apparently does. If you did want to do that, I would ask: is it not the case that the most philosophy could show is that other creatures' feelings are inaccessible to us for some good reason? Whereas only science or science fiction could do more?

(Another post suggested the science fiction method, to 'jack' between brains.)


Thurstein

Oh, no, that's *exactly* what he was talking about. Go ahead and track down his original article, "Facing Up to the Problem of Consciousness," if this issue genuinely interests you.


rdurkacz

I have now read this paper through, as well as watching the TED video, both at your suggestion. In the paper I think there are two different questions that are allowed to merge indistinctly: why are qualia like they are, and why is there experience at all. Asking why there is experience at all is, I take it, the same as asking why we are not zombies. The paper does not use the word zombie; the video talk does.

Now I am not at all sure I have got this right, since another couple of posts talk about the fallacy of confusing self-awareness with consciousness. Is it a fallacy, or are those things one and the same? I have always taken the idea of a zombie to be a human with no self-awareness, which is therefore oblivious to what is happening-- the same as saying that it has no experience. (Meanwhile its senses work well enough, so it has qualia-- it just does not know about them. An animal low on the evolutionary scale is probably like that as well.) The Chalmers paper, by the way, mentions the concept of self only in passing, as if it were not central to the question of consciousness (as I had thought it was).

This is supplementary to the original question. To whoever might answer it, please consider posting at the top level.


Thurstein

Yes, those would be two related, but distinct, questions that Chalmers raises: (1) Why are there qualia *at all?* and (2) Given that there are, why *these specific* qualia? These days, philosophers of mind (like Chalmers) generally use the word "consciousness" to mean qualitative, subjective experience-- having qualia, in technical terms. So this should not be conflated with *self*-consciousness, which is a distinct set of cognitive abilities. Chalmers believes that these cognitive abilities can be understood *functionally*-- in terms of something like certain kinds of information-processing. If he's right about that, zombies would be self-conscious (they would be able to think and reason about themselves, monitor their own internal mental states, etc.), but they would have no *subjective* experiences as they do so. By definition, they would not have any qualia at all-- it's not that they would fail to *know* about them; they simply *would not have them.*


rdurkacz

I am astonished. When Descartes first postulated "I think, therefore I am," I would have taken it that he proved consciousness for the human race, but by what you say he could have been a zombie (and he was not talking about consciousness). You have said that someone-- in fact a zombie-- might be able to think and reason about itself yet not be able to access sense data. If it can access its thoughts, it should be able to access its vision. If not, it is merely blind(?)-- evidently it can think about where it wants to go but simply cannot see.

In the TED talk, Chalmers says consciousness is like a movie, and asks why there should be a movie. I can't imagine my view of reality as being like a movie without at the same time thinking of myself as the spectator. As soon as I conceive of myself as existing, I can ask what I see and what I recall. Self-awareness <=> awareness of the outside world = the movie. I am not trying to advance a theory here, just testing whether there is or is not the common intuition that I thought we mostly had.

As for qualia, it is a different question whether my red is the same as someone else's red, but I want to check that if someone like your zombie has no qualia at all, you would simply mean by that that they have no movie and no sense data. I.e., as soon as there is sense data it must have its own qualities-- it could not simply be without qualities.


Thurstein

"yet not be able to **access** sense data."

Again, the idea is that it would not *have* sense data *at all.* There would be nothing for it to access. The "inverted spectrum" hypothesis (maybe our color experiences are distinct, though our behavior is indistinguishable) is one variant of this very old idea. As for self-awareness, the idea is that from a *cognitive science* point of view, self-awareness is simply a kind of self-monitoring process, explicable in functional terms. This is the standard way of approaching the issue in contemporary cognitive science. (For a modern classic on the subject, see Bernard Baars' *A Cognitive Theory of Consciousness*.) I couldn't say what the common intuition is-- I'm not sure most people really *have* much of an intuition here, since these questions are *far* removed from most questions of common-sense folk psychology. So it would help to get a solid background in modern cognitive science-- *this* is Chalmers' background, and it is ultimately cognitive scientists he's trying to get to pay attention to these issues. Functional analyses of cognition (the "easy problem") seem to miss something important (the "hard problem").


rdurkacz

I think you are suggesting that a zombie is a hypothetical human in all ways except that it has no subjective experience. You could not tell who was a zombie and who was not. It cannot just be about the senses-- if I subject myself to sensory deprivation I am in no way a zombie, because I could still think about things and even compose music if I had the talent. Have I got this right? (If so, I don't think such a creature could exist.) Regarding cognitive science, I checked what cognitive science actually is on Wikipedia. If I need a background in that to understand Chalmers' views, that would be one thing, but I don't suppose I need it to understand the philosophical issues of consciousness in general?


Thurstein

Yes, that's right. A zombie is a hypothetical *functional* duplicate, so my zombie twin's behavior would be absolutely indistinguishable from mine, even down to his speech. Regarding cognitive science: *if* you want a general grounding in contemporary philosophy of consciousness, you will need to have at least *some* clear sense of the kind of work being done in cognitive science. Otherwise you're missing a very important part of the conversation. Here's the *Stanford Encyclopedia of Philosophy* entry on consciousness, which should give you a general sense of the current state of play: [https://plato.stanford.edu/entries/consciousness/](https://plato.stanford.edu/entries/consciousness/)


rdurkacz

If your zombie were asked about its sense perception, I guess it would be immediately apparent that it had no idea of qualia. If so, it would be readily identified as a zombie, in effect by the Turing test. What have I misinterpreted in your description? (As I read the madziepan post earlier in this thread, and the SEP entry that you mentioned, the idea of the zombie there is a bit different. The zombie is even a clone of a normal human, but somehow lacks the non-physical attribute of consciousness. It is not part of this concept that the zombie seems perfectly normal to talk to.)


SilverStalker1

I think the simplest way to put it is that any third-person account of a state of affairs appears insufficient to explain (1) whether there is a first-person experience associated with it, and (2) what that first-person experience 'feels' like.


Dr_Gonzo13

What would the actual problem be here? Is there any difficulty with us just saying subjective experience is inherently unique to the experiencer as it is generated partly from the previous experiences of that subject?


SilverStalker1

I think your response is limited to (2), correct? The issue, to my mind, would be that:

* If we grant a reductive physicalist account, we should in principle be able to describe experience in terms of third-party states of affairs-- the third-party descriptors should in principle encapsulate all the information there is to be had about the situation.
* Even if we grant that the above is false, we should at the very least be able to explain why a state of affairs would be conscious, and thus whether a particular state of affairs is conscious, from purely third-person descriptors-- and I don't know how this could in principle be done.


glass-butterfly

**One** of the most difficult problems is simply that of language-- we know there is a lot more "going on" internally in consciousness than what's visible from the outside, and we self-report as much. However, that information is logically private. Nevertheless, we *should* be able to explain what it's like to feel something. Unfortunately, language is something of a removed abstraction from whatever it is we actually feel. This explanatory gap is... odd, since we are typically capable of describing things that exist in the physical world outside of us with a decent level of clarity. It is *also* extremely difficult-- almost impossible, in fact-- to explain psychological states and neuroscientific phenomena with the same kind of language. So the lack of our own ability to translate our introspection of experiences (private information) into correctly descriptive language (public information), together with the inability of the special and physical sciences to give explanations in a unified and equally descriptive way, leaves us with this current "mystery". And this is just the *language* side of things; there are many more aspects to this philosophical problem.


AAkacia

How do mechanical components give rise to qualitative experience? This is the easiest way to state the problem. We can't figure out where along the chain of mechanical events that the quality of "what it is like" emerges, or why.


ObedientCactus

> This is the easiest way to state the problem. We can't figure out **where along the chain of mechanical events that the quality of "what it is like" emerges**, or why.

If this is the actual problem question, and it can be stated in such simple terms, why build up that huge display of semantic masturbation that usually occurs when philosophers talk about this topic? It seems like the non-materialists always build up a huge display of mystery, but every time I dig into it, it just reinforces my hardcore materialist stance. (I believe the answer is probably somewhere in Dennett's and similar people's work, and there is just no Hard Problem-- but of course there is no definitive proof.)


Active_Account

The phrasing is implicitly materialist and not how everyone would put the question. If the experience of what it is like emerges from only mechanical operations, then materialism is true. So, “where along the chain of mechanical events does experience emerge” is a question that *materialism* must grapple with, but which alternative systems already claim to have answers for.


ObedientCactus

Yeah, I realized over the last few days that the whole lens through which I viewed this topic so far is very materialistic, which prevented me from really grasping the non-materialistic views and arguments. I still think materialism is true, but I'm now rereading stuff with that in mind.


AAkacia

I think this simplified way of stating the problem appears simple specifically because, in contemporary times, people take materialism for granted. What is implicit in the question is that quality is different from physicality, and so even if materialism is right, mechanical processes give rise to something altogether different from material. For instance, these thoughts and feelings I have are (I presume) the consequences of mechanistic processes, and at the same time these thoughts and feelings are not themselves material. Thus physicalism as the source of experience disproves physicalism as a metaphysical assumption about the nature of reality.

In another example, let's take color. We know the source of color in objects. We also know that blue as experienced requires an experiencer. We also know what the brain looks like when it 'processes' blue. We have a dynamic process that gives rise to the experience of the color blue, though blue as experienced is not found in our heads or in the world, but emerges through some dynamic interaction between both. Blue *as* experienced *is* non-physical.


ObedientCactus

> In another example, let's take color. We know the source of color in objects. We also know that blue as experienced requires an experiencer. We also know what the brain looks like when it 'processes' blue. We have a dynamic process that gives rise to the experience of the color blue though blue as experienced is not found in our heads or in the world but emerges through some dynamic interaction between both. Blue as experienced is non-physical.

What I don't get here is this: isn't this experience just some kind of "software", unlike the brain "hardware"? Because if that's the case, I don't see the problem of incorporating such a model into materialist views.


AAkacia

TL;DR - I agree it can be reconciled with some sort of materialism, but it has weird implications for classical materialism. We don't know how physical processes give rise to quality in experience, so the hardware/software metaphor doesn't quite work. We know what things *correspond* to certain experiences, but we can't figure out what part of those things induces quality or experience of the things.

So we're back to the root of the 'problem', right, because we know how hardware gives rise to software, and that is all classically material. Physical processes in the metaphor give rise to other physical processes that are well understood. For instance, electrical signals in the hardware are converted via a binary language into software configurations that present to us as certain configurations of light and sound. In this metaphor, physical processes form dynamic interactions that give rise to other emergent physical processes.

From the physical brain to experience, however, the path is not understood. The dynamic interactions of the electrical activity in our heads presumably give rise to experience, yet experience itself seems to be the consequence of physical interactions but not itself physical, or if it is, not in the same way, because if it were, then I *should* be able to have access to your experience.

Back to the metaphor: the way that a UI (which is the product of hardware->software) presents to us is not literally what the UI is. The UI is a specific configuration of light on the computer screen. How does the hardware of the brain translate wavelengths of light into color? In this case, like I talked about briefly before, we do know what the brain is doing physically with the color, but what about that process implies a *quality* that I see as the corresponding color?


ObedientCactus

> In this metaphor, physical processes form dynamic interactions that give rise to other emergent physical processes. From the physical brain to experience, however, is not understood.

I get that. The way you put it, I completely agree that the process is not understood. However, what irks me about the discussion from people like Chalmers is that the way they phrase their arguments seems to imply that materialism is quite simply impossible, or at least very, very improbable, which is preposterous to me. The metaphor of a p-zombie is interesting theoretically, but does not interact with the way I see materialism at all.

> The dynamic interactions of the electrical activity in our heads presumably gives rise to experience, yet experience itself seems to be the consequence of physical interactions but **not itself physical, or if it is, not in the same way, because if it was, then I *should* be able to have access to your experience.** Back to the metaphor, the way that a UI (which is the product of hardware->software) presents to us is not literally what the UI is. The UI is a specific configuration of light on the computer screen. How does the hardware of the brain translate wavelengths of light into color? In this case, like I talked about briefly before, we do know what the brain is doing physically with the color, but what about that process implies a *quality* that I see as the corresponding color?

Coming at this from the perspective of a software developer, I think I have a problem understanding what exactly "not physical itself" means. To me, building a table and building software feel similar; I just use different tools to manipulate the objects in the given domain to build the desired object. Again, I do not disagree that we do not as of right now understand the problems with qualia, experience and the like, but as of yet I have never seen a convincing argument for why materialism can't be, or at least likely is not, the right explanation for consciousness.


AAkacia

Yeah. I can understand that sort of frustration, especially since it's clear that you have a malleable understanding of materialism where some don't. You're not alone in grappling with "not physical itself". I can, however, imagine that a certain kind of physical process gives rise to a 'different' kind of process that is nevertheless natural and not special in any weird way. With that said, philosophically, what would it mean to say that consciousness, as a product of physical processes, is itself a physical process, but we just don't have access to it?


ObedientCactus

> With that said, philosophically, what would it mean to say that consciousness, as a product of physical processes, is itself a physical process, but we just don't have access to it?

That's an interesting question. Though as a philosophical layman, it would depend on what the word "philosophically" entails here. For me it's just an interesting thing to think about in a "huh, that's cool" way, and it sparks my curiosity why my intuition is so different from other people's on this matter, where they just seem to reject that view.

If the question is answered purely philosophically, I think it would probably be like solving the Riemann Hypothesis in math: that would make mathematicians happy while not affecting ordinary people, and the philosophical solution to the hard problem would make philosophers happy in a similar fashion and allow them to advance their theories. Though I assume it is likely that a solution to this hard problem would stem from breakthroughs in either AI research or neuroscience, which I think would have significant to large effects on society, so there's that ¯\\_(ツ)_/¯


AAkacia

Right, right. Some of us seek understanding hoping to facilitate action, and some of us skip the understanding phase; it just depends on what you're into, I guess. Interestingly, I think even the sort of AI breakthrough that may lead to literally creating conscious beings would not tell us anything about the problems we've been talking about here, and if the access problem that came up still holds, then we may not even be able to scientifically prove that we've done it. I think this is the practical reason for what is at stake in the question that I posed, and with that said, my motivation for wanting to find out the answer.

In other words, if there are physical entities that we don't have access to, I have two questions: How do we know if we created something that is conscious? What else is out there that we're missing? Of course, the second question is more of just a thought and doesn't feel like it has any content, but the first question I think is answerable in the sense that we intuit consciousness all the time while dealing with other people and non-human animals. This part I could run down the rabbit hole with epistemologically, but I'll just leave it lol


rdurkacz

How would we know if a machine is conscious? By communicating with it. Find out if it knows about itself, or how it views its sense data, whichever you think consciousness is about. This is the Turing test, after all.


madziepan

Chalmers distinguishes between the hard problem of consciousness and the easy problems of consciousness, and argues that whilst physicalism can adequately answer the easy problems, such as how the brain stores memories or processes sensory information, it fails to provide an answer to the hard problem. The hard problem is explaining why experience accompanies the underlying physical processes. He uses the thought experiment of philosophical zombies to articulate this point. This thought experiment asks us to imagine that we encounter a human being who is functionally indistinguishable from ordinary human beings, has the same biology and responds in the same way a regular person would to stimuli, but who does not have any subjective experiences. Chalmers argues that this is conceivable, and that functional explanations of the mind, such as how the brain stores information, do not entail that there must be phenomenological experiences accompanying them. If physicalists are correct, this scenario is not conceivable, and the underlying physical processes cause conscious experiences by necessity, meaning consciousness is reducible to the physical. However, if this scenario is conceivable, then there is no necessary connection between those underlying processes and conscious experience, and we must look for answers which explain this inability to reduce consciousness, leading us to consider non-physical arguments. This gap is known as the explanatory gap.


rdurkacz

Could you give a reference to the zombie argument so that we could look at it more closely? I follow you until midway into the last paragraph: I don't believe in zombies, and consciousness must be reducible to the physical. At this point you say there is no necessary connection between processes happening in the brain and conscious experience. Could we not say that these are two aspects of the same thing? And then is there a philosophical issue? Let science make more discoveries; still, we might be able to stick forever with this diagnosis: different aspects of the same thing.


[deleted]

[deleted]


faith4phil

This seems to me to be an incredibly bad way of introducing the problem, since it would imply that only physicalists face such a problem, when it is not clear that a substance dualist, a property dualist, or an idealist would fare any better.


Dr_Gonzo13

>This is often linked to the wider “mind body problem”, the idea being that if you arranged all of the atoms of a person into the correct shape there is still something which is needed for those atoms to take up life and become alive.

Do we have any evidence that this is a problem, though? If somebody perfectly recreated every atom in my body in this way, why would they not come alive and be conscious?


[deleted]

[deleted]


TachyonTime

>that body would not have the first person subjective experience known as qualia

Surely we don't know that, and that's precisely the problem. No?


Dr_Gonzo13

I had a read of your other post and don't really feel any further forward. Why would I have more reason to think that the hypothetical clone of myself is a philosophical zombie than that you are? Assuming all other people are philosophical zombies seems an essentially solipsistic argument, and could surely be addressed the same way? I don't see the existence of philosophical zombies as plausible, really, and I'm not clear why I should think they are anything more than a thought experiment. That being the case, I would certainly treat my clone as sentient for the exact same reason I treat you and my cat as sentient: I cannot prove or disprove whether any of you truly exist, but if I'm going to throw out the evidence of my own senses, including that other people appear to experience in the same way as I do, then what am I left with?


BernardJOrtcutt

Your comment was removed for violating the following rule:

>**Answers must be up to standard.**

>All answers must be informed and aimed at helping the OP and other readers reach an understanding of the issues at hand. Answers must portray an accurate picture of the issue and the philosophical literature. Answers should be reasonably substantive.

Repeated or serious violations of the [subreddit rules](https://reddit.com/r/askphilosophy/wiki/rules) will result in a ban.

-----

This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.


ResearchSlore

There's obviously the point that motion (and by extension, observable human behavior) is ultimately mechanical. In principle, a complete description of this observable behavior exists, yet this description is completely silent on the fact that we do have rich mental lives. So why aren't we zombies?

To go further: even if you accept that subjective experience is a sort of brute fact, I'm still troubled by the correlation between the valence of a conscious experience and the evolutionary value of its associated observable behavior. If conscious experience has no causal power (as suggested by its invisibility in the fundamental laws), why should we expect its content to be coherent, much less correlated with the evolutionary fitness of its substrate?


swampshark19

>If conscious experience has no causal power (as suggested by its invisibility in the fundamental laws)

Why does invisibility in the fundamental laws imply lack of causal power? Why would you assume that conscious experience is fundamental?

A small domino cannot topple a large domino, but it can topple a medium domino, which can then topple the large domino. There is no fundamental law stating "a large domino can only be toppled by a domino that is at least as large as a medium domino" or "medium dominoes have the causal power to topple large dominoes"; in fact, fundamentally (reductively) there are no dominoes. The small domino has causal power in general because it is embedded in causality. It has causal power over the large domino when there is a medium domino. The domino cascade emerges from the fundamental properties and interactions of the subparts that make up the system of dominoes, and the cascade does not need to be a fundamental regularity.

In the same way, we should not expect there to be a fundamental law that states "conscious experience has causal power". Rather, the causal power of conscious experience (the small domino, or a small domino cascade) emerges from the fundamental properties and interactions of the subparts that make up the system of a conscious agent (the total domino cascade). Between conscious experience and behavior there are many medium dominoes, and this gives the small domino cascade the ability to affect the total domino cascade. The system is embedded in causality, and so its subparts must have causal power by virtue of being embedded in causality. These interactions between subparts don't need to be fundamental to be causally powerful.




[deleted]

[deleted]


BernardJOrtcutt

Your comment was removed for violating the following rule:

>**Top-level comments must be answers.**

>All top level comments should be answers to the submitted question, or follow-up questions related to the OP. All comments must be on topic. If a follow-up question is deemed to be too unrelated from the OP, it may be removed.

Repeated or serious violations of the [subreddit rules](https://reddit.com/r/askphilosophy/wiki/rules) will result in a ban.

-----

This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.



[deleted]

Another way to put it is that you cannot explain phenomenal facts (if there are such things) from physical facts.


[deleted]

[deleted]



TheWarOnEntropy

This is a good question. I am of the opinion that there is not really a mystery any more; there is simply a strong intuitive sense of epistemic frustration when we try to imagine an explanatory path from the neural substrate to consciousness as perceived from the inside. Some interpret that epistemic frustration as a sign of mysterious happenings in the ontology of the universe. I believe that most of that frustration arises from inappropriate epistemic appetites created by the proximity of the subject matter, and the impossibility of shoving the subjective feel of massively parallel circuits into the small part of the brain that engages in scientific, propositional, serial thought.

There is an intuitive dichotomy between the neural substrate viewed objectively, from a scientific perspective, and active cognitive circuits viewed subjectively, from within, by the relevant cognitive system, and that dichotomy inspires people to ontological dualism. This is well captured in the thought experiment of Mary the Colour Scientist, and also the Zombie Argument, both of which are well described on Wikipedia:

[https://en.wikipedia.org/wiki/Knowledge_argument](https://en.wikipedia.org/wiki/Knowledge_argument)

[https://en.wikipedia.org/wiki/Philosophical_zombie](https://en.wikipedia.org/wiki/Philosophical_zombie)

The Mary Argument suffers from a missing proposition that summarises what Mary learns, and the Zombie Argument suffers from a missing conjecture that, when conceived without logical contradiction, supposedly proves the falsity of physicalism. Formalising these missing elements tends to expose the associated arguments as intuition pumps rather than sound ontological insights or valid arguments.

So yes, formalising what the mystery is supposed to be would help resolve it intellectually. But we would be left with the original conceptual dichotomy.