In the philosophy of mind, one topic still widely debated is the “Hard Problem of Consciousness” proposed by David Chalmers [1]. This problem is closely related to qualia, phenomenal consciousness, and philosophical zombies. When I first heard of these ideas, I considered myself a supporter of the hard problem. However, the more thought I devoted to it, the more I found myself shifting to the opposing side. In this article, I will explain why I think the hard problem (and all of the related concepts mentioned above) stems from a way of thinking that detaches us from our experiences, and why simply walking away from the problem is probably the best solution.
. . .
The Easy Problems and the Hard Problem
Chalmers divides the problem of consciousness into the “easy problems” and the “hard problem.” Questions such as how a person perceives, remembers, thinks, and reports their feelings, how these functions are interrelated, and how the brain, as a physical system, implements these functions all fall under the category of easy problems. These concern external behaviors manifested by a system. Therefore, in principle, the relations between these behaviors can be discovered and understood given sufficient research resources.
Following these easy problems, Chalmers introduces the “hard problem.” Intuitively, it asks: “I can see that there is a brain here performing the perceiving, memorizing, thinking, and information processing. I can also understand how the brain, as a physical system, implements all these processes. However, why should the performance of these functions give rise to experience?”
Although Chalmers himself does not emphasize this, I would say there are two parts to the hard problem. The first part is when each of us subjectively feels something, say the redness of a red cloth, and tries to explain why some physical process in our brain leads to that sensation. The second part is when each of us looks at other people, animals, machines, or anything else, and tries to explain why (or whether) in some of these physical systems there is “what it’s like to be the system,” or in other words, subjective experience as the system. For most of this article, I will be dealing with the second part of the problem. If you are only interested in the first part, you might want to skip to the section titled “The Explanatory Gap of Everything.”
When dealing with the second part of the hard problem, that is, explaining whether there is something it’s like to be another person, we find ourselves recalling another classic problem in philosophy: the problem of other minds. The common issue is that when we look at other people, animals, or machines, we do not access their feelings the way we access our own. They seem to degenerate into physical systems, machines operating according to some rules. What could possibly convince us that this person, or an extremely intelligent artificial intelligence, also has “subjective experiences” like us, rather than being a “philosophical zombie” without them?
Perspectives
I believe the key to solving this problem is to recognize a basic principle about knowledge: “All of our knowledge is about the relations between phenomena that we experience from a particular perspective.” Perspectives are asymmetric; the things each of us has access to and perceives are not the same. Instead of trying to overcome the limitations of perspectives and asking, “What do others feel?” we should be content that, as limited beings, we must observe and describe the world from our own perspective. Any attempt to ignore this limitation should be seen as impractical.
Most discussions of the hard problem say that what makes the easy problems easy is that they are about “functions,” which scientists are good at explaining. However, I would say this characterization misses the point. What makes the easy problems easy is that they are about the causal relations between phenomena that we do experience from our own perspectives. That is, we can really see, touch, measure, collect information, and gather evidence about these phenomena. In principle, if these causal relations exist, they should be open to our understanding.
On the contrary, the second part of the hard problem concerns things that are not experienced. If someone tells me that he is confident there is something it’s like to be another person, I must ask him: “Where in your experience do you find this ‘something it’s like to be another person’?” Of course, there is no such thing; otherwise, there wouldn’t be a problem of other minds. Whenever we look into any corner of our own experiences, we find only ourselves in “what it’s like to be me.” Thus we wonder, “Where is the ‘subjective experience of this person’? How can I prove or predict its existence?” and never get a satisfying answer. I do not experience other people’s subjective experiences, so it is impossible for me to have knowledge of them. What I can experience are this person’s bodily movements, verbal reports, neural activity, and so on. My knowledge about other people can only be built on these phenomena. The second part of the hard problem is hard exactly because the target to be explained is not experienced through perception but imagined, or, one might say, linguistically constructed. Attempting to study unexperienced phenomena, as shown here, should be considered a faulty way of thinking.
Conceivability
One argument supporting the existence of philosophical zombies is the “conceivability argument.” It states that if the concept of a philosophical zombie can be conceived without logical contradiction, then it is possible for such a being to exist.
I argue that this simply shows a problematic tradition in philosophy: placing too much faith in a priori reasoning. As in mathematics, you can define your own set of axioms and get a consistent formal game. Still, these arbitrarily defined mathematical structures do not necessarily help us describe and understand the world we experience. The concept of philosophical zombies seems to be another instance of these logic games, in which an unobservable property can make two people who appear identical actually “different.” One person mysteriously has an extra “subjective experience,” while the other magically lacks it and becomes a “zombie.” Faced with this confusing logic game, one should be able to step outside the framework and judge from a meta-perspective: Do I want to think within this logic game? Does this model help me understand the world? If two people appear identical to me under all possible observations, do I have any justification for modeling them as different?
I think only when there are observable differences between two individuals is it reasonable to introduce a new property into my model of a person. The problem of philosophical zombies exists precisely because we created a flawed model of the world in the first place, which in turn confuses us.
Not all logic games are to be taken seriously. Just like the choice of tools, the selection of logic games and thinking frameworks should be determined by experience. When thinking about the “subjective experience of others” has given us nothing but confusion, the best thing to do is simply abandon this way of thinking rather than cling to these unsolvable problems. A problem that results from a flawed framework is not worth solving.
Inverted Qualia
In discussions surrounding consciousness, another common concept related to the hard problem is “qualia.” Philosophers use this term to refer to the subjective experience of perceiving, such as the raw sensation of seeing the color red when looking at a red cloth or the saltiness of eating salt. There seems to be little problem when we use the term “qualia” to refer to our own experiences, but once we try to discuss “what qualia others possess” (which is something we should avoid thinking about, as I explained earlier), some strange problems arise. One of them is the problem of inverted qualia.
The problem of inverted qualia is not difficult to understand. Put simply, it asks: Although we all agree that a tomato is “red” and has a “green” stem, and we use the same linguistic symbols, do we really have the same feeling (qualia)? Is it possible that the red that I feel and the red that you feel are different?
The crux of this issue is similar to the one mentioned earlier: we are attempting to compare the experiences of different people. However, whenever I experience something, isn’t it necessarily “my” experience? Does it even make sense for me to use the concept of “others’ qualia”? If I have never experienced someone else’s qualia, how can I compare them to my own? Some philosophers seem to think that simply by assigning a symbol (such as \(Q\)) to the qualia of others and another symbol (\(Q'\)) to my own qualia, they can construct a rigorous argument demonstrating the inverted qualia problem. However, if we take a step back (to a meta-perspective) and examine how we are using our language, we will again find ourselves trapped in a logic game. Logic is only useful when the concepts used model some observable phenomena, but the \(Q\) in this game does not correspond to anything in our experience.
I’m not saying that we cannot talk about things that are not directly observable. If we want to talk about something that we cannot directly observe, it still needs to be “related” to our experience. For example, in quantum mechanics, we can use a wave function to predict the probability distribution of a particle’s position. We cannot directly observe this wave function, but only the measurement results that match the derived probability distribution. Nevertheless, scientists accept the existence of such unobservables because they provide predictions that can be measured and verified through our experiences.
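To make the example concrete, the Born rule is what ties the unobservable wave function \(\psi\) to observable statistics: the probability density of finding the particle at position \(x\) is

\[
P(x) = |\psi(x)|^2.
\]

The wave function itself is never seen; it earns its place in the theory only through verifiable predictions like this.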
In our daily lives, we also say things like, “This person is angry.” This seems to describe a person’s internal state. However, when I say this, I am not claiming to know the subjective experience “as” that person. “Angry” is a term, a latent variable in our model of a person, that we use to describe the behavior observed at the moment. Although it is not something that can be easily quantified, it manifests in tense muscles, an accelerated heart rate, and a raised voice. Descriptions of this type, while abstract, are based on phenomena that we can actually observe.
Qualia, however, are private to a perspective and by definition cannot be accessed by others. No matter how closely I observe a person’s behavior, I will never observe their qualia. Therefore, I cannot have knowledge about others’ qualia. Any theory that claims to predict the existence of qualia or to describe qualia will remain forever unverifiable.
Phenomenal Consciousness
Related to the hard problem and qualia is Thomas Nagel’s question, “What is it like to be a bat?” [2] By his definition, if there is “something it is like to be the bat,” we say that bats are phenomenally conscious. The core issue is the same as in the previous discussions. They are not asking, “How can I use all the means I have to observe, understand, and describe bats?” (which counts as an easy problem), but rather, “What is experienced from the bat’s perspective?” They are all trying to gain knowledge by escaping from their own perspective. In the end, this is simply impossible.
The fact is, most of us would intuitively claim that others are phenomenally conscious. Nagel also assumes that bats have phenomenal consciousness. Just take some time to reflect: What makes us think that others or other things have phenomenal consciousness? The answer is clear: it is because we observe others exhibiting certain patterns of behavior; it is because Nagel observed certain behavior patterns in bats, not because he actually experienced what it is like to be a bat. Understanding things through our own perspective and experience is the only thing we can do and the only thing we need to do. Philosophers created the concept of phenomenal consciousness by defining it as “there being something it’s like to be X.” However, when asked to decide whether X has phenomenal consciousness, they find that there is no way to answer without secretly retreating to their experiences and observations of X. In the end, the concept of phenomenal consciousness, if it is not to degenerate into a totally useless concept, must still be used in a way that is tied to the behaviors observable from our own perspectives. Another way to say this is that it has to be used in a functional sense. Discussing an idea in isolation from our experiences is simply impossible. The concept of phenomenal consciousness itself is a mistake.
. . .
If the true experiences “as” others are impossible to know, what can we discuss, study, and understand about our own and others’ bodies and minds? Let us dive further into these topics. In the next two sections, I will finally deal with the first part of the hard problem. Then, we will return to discussions of other minds.
Understanding Ourselves
In abstract terms, “I” (whatever that is) experiences various phenomena every second I am awake. Phenomena refer to anything I “experience,” including what we commonly call senses - vision, hearing, taste, touch, etc. - as well as ideas built upon these senses, such as tables or chairs. Furthermore, phenomena include our awareness of our inner states and emotions, as well as the emergence of abstract thoughts and ideas. Between certain phenomena, we can find relations, and knowledge is about these relations between phenomena.
In this context, what I mean by “experience” is very general and includes what we call “physical” measurements. Many philosophers seem to treat physical descriptions as something totally separate from subjective experiences and thus talk about them in a disembodied way. However, aren’t physical phenomena just a subset of our experiences? We must understand that when I use an instrument to measure a physical quantity (such as the frequency of light), I have not escaped my own perspective and suddenly observed the world from “The View from Nowhere” [3]. I still have to observe the results of the measurement with my eyes and interpret them. At that moment, the measuring instrument becomes an extension of my senses and a part of my “perspective.” Through the establishment of standard units and the calibration of instruments, science does not provide “The View from Nowhere” but rather a shared sense organ. The objectivity of science is built on this sharing. Since physical phenomena are just a subset of what I experience, we should be able to treat both “subjective experiences” and “physical phenomena” in a unified way.
Whether it is the “external world” or “my internal mental activities,” both are “phenomena” that I can experience from my perspective. If they can be experienced, they should be open to my understanding. Therefore, I can understand the relations between the physical states of my body, my mental states, emotions, and thoughts (although not easily). By using appropriate instruments to stimulate specific brain regions and feeling the resulting sensations, I should also be able to identify the relations between specific sensations and specific brain regions, forming knowledge about the correlation between these two kinds of phenomena. Such knowledge is exclusive to me, however. Others who observe me in the third person know only my reports (and various measurements) and their relation to the activation of my neurons. They should not try to study “what it is like to be me.”
The Explanatory Gap of Everything
Some people may be unsatisfied with “just finding correlations” and insist on asking, “Why does a certain physical process give rise to a certain experience?” This is the so-called “explanatory gap” in the philosophy of mind. It states that no matter how well we understand the physical description of something, such as the physiological activity associated with sadness, we will still not be able to infer what the subjective experience of sadness feels like, or whether there is anything it’s like to subjectively experience it. This may seem like a question about the mind, but it is actually a question about all knowledge.
My answer is that this gap is not unique at all, and there is nothing to be solved. This is because knowledge is about the relations between phenomena, and understanding is simply the process of discovering and accepting such relations. For facts that have already been discovered, we should not ask “why” they are the way they are. Instead, the best we can do is describe in more detail how phenomena are related to each other. If you examine our current knowledge carefully, you will find that it always tells us “how” things are related instead of “why.”
“Why do apples fall?”
“Because gravity pulls them down.”
“Why does gravity pull things down?”
“… The law of gravitation is described by formula X.”
“But why is the law of gravitation in this form?”
“… There’s no ‘why’. It just is.”
Even if we have a more microscopic theory that derives the macroscopic law of gravity, or develop a multiverse theory explaining that ours is just one of many parallel universes with different laws of gravity (regardless of whether such theories are still considered scientific), we will find that they describe “how” gravity operates at a more microscopic scale, or “how” multiple universes exist. As for why gravity is like this at a microscopic level or why there is a multiverse, these are questions that no theory will ever answer, and questions that we need not ask. There is a certain point in our inquiry at which we must stop and accept what is given as brute fact. This applies not only to scientific knowledge but to knowledge in general.
Therefore, the correspondences between firing neurons and my sensations are relations that we simply discover and accept. Even if there seems to be a gap in connecting the two phenomena, it need not worry us. Regular computer users, for example, only need to know that when they move the mouse, the cursor on the screen moves correspondingly. They don’t need to know all the underlying details as long as the computer functions properly.
Let us test this idea with more examples. You know that light of a specific wavelength will give you a specific color sensation, but why should this wavelength correspond to this sensation? Changing magnetic fields are always accompanied by electric fields, and changing electric fields are always accompanied by magnetic fields, but why should the two fields have such a relation? Such gaps exist in all our knowledge, yet they do not affect our understanding, prediction, and control of these phenomena. What matters is that we can use this interface to understand how these phenomena are related, rather than ask why they are related.
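Faraday’s law, for example, records exactly this relation between the fields without saying why it holds:

\[
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}.
\]

It tells us how a changing magnetic field is accompanied by a circulating electric field; the question of why nature takes this form receives no answer from the theory, and physics proceeds untroubled.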
To discover the relations between phenomena, the phenomena to be related must first be experienced. Therefore, if I only measure physical phenomena (such as the wavelength of light), I of course cannot know what the light looks like when viewed with the eyes (which is another form of measurement). I must use scientific instruments to measure numerical values and use my eyes simultaneously in order to grasp the relation between these two phenomena. If I only measure the numerical values without observing with my eyes, I will not be able to understand how the two correspond. This principle applies not only to subjective sensations but also to objective physical measurements. If I only measure the electric field and not the corresponding magnetic field, I will not be able to understand or verify the relation between them. Therefore, the impossibility of predicting subjective experiences from physical descriptions alone is not a special problem. A relation cannot be understood if one chooses to keep one’s eyes closed forever.
In summary, the seemingly unbridgeable gap between physical descriptions and subjective experiences is actually ubiquitous. It exists in all the problems we regard as “explained.” The relations between the body and the mind may not be as difficult to understand as we think. It is just that the existence of such gaps is selectively ignored in most other cases.
. . .
I have discussed the issues that might occur when we investigate the phenomena that are usually called “ours.” Next, let us shift our focus to “other minds.”
Understanding Other Minds
There is a subset of the phenomena we experience that we call “others.” To understand others, as with any other phenomenon, we should base our knowledge on the experiences we have. To understand someone’s emotions, I must base it on the “measurement” of their facial expressions, verbal reports, body movements, behavioral patterns, and bodily states. Therefore, when I say someone is happy, it must be related to specific facial expressions and behavioral patterns that I observe. Of course, we don’t need to consciously consider these details when we infer other people’s emotions in daily life; the word “measurement” here includes our unconscious, fast inference of other people’s emotions. Based on these experiences, I can establish a psychological model of others and understand how they will react to specific events, what emotional states they will enter, and how they will subsequently behave. The “emotions” here should be understood as latent variables in our model: not directly observable, but inferable from the observables, as the toy sketch below illustrates.
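To make the phrase “latent variable” concrete, here is a minimal toy sketch in Python of the kind of model I have in mind; the cues, the probabilities, and the naive independence assumption are all made up for illustration:

```python
# A toy latent-variable model: the hidden state "angry" is never observed
# directly; it is inferred from observable cues via Bayes' rule.
# All numbers and the independence assumption are illustrative only.

prior_angry = 0.1                      # P(angry) before observing anything
likelihoods = {                        # cue: (P(cue | angry), P(cue | not angry))
    "raised_voice":  (0.8, 0.1),
    "tense_posture": (0.7, 0.2),
}

def posterior_angry(observed_cues):
    """Return P(angry | cues), treating cues as conditionally independent."""
    p_angry, p_calm = prior_angry, 1.0 - prior_angry
    for cue in observed_cues:
        like_angry, like_calm = likelihoods[cue]
        p_angry *= like_angry
        p_calm *= like_calm
    return p_angry / (p_angry + p_calm)

print(posterior_angry(["raised_voice", "tense_posture"]))  # ~0.76
```

Nothing in this model refers to “what it is like” to be angry; the latent variable is justified purely by how well it explains and predicts the observable cues.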
If we insist on questions like, “Yes, the person appears angry and appears to be aware that he is angry, but is the person really subjectively experiencing anger?” the concept of “experiencing anger” is rendered useless, since we have detached it from all of our experiences and observations. In this case, we would be making the same mistake as in the second part of the hard problem.
If there is anything that allows me to say, “I understand someone’s feelings,” it is that my models, which contain latent variables called “the person’s subjective experience,” successfully explain and predict that person’s behavior. It is not that I know what it is like to be them. Notice that the moment I use it to describe someone’s behavior, the term “subjective experience” becomes a functional term.
Forget about Qualia
In the preceding text, the main idea I have aimed to establish is simple: we must understand others in terms of how we experience them from our own perspectives, instead of misleading ourselves into discussing things experienced from others’ perspectives. If I were to say, “The puppy subjectively experiences the heat of the flame,” it would only correspond to the phenomena I observe: the puppy, as a biological system, perceives the heat signal and avoids the heat source through reflexive behavior, or decides to run away after some cognitive processing. The phrase “subjectively experience” is fine only when used as a tool to explain what we observe. Claiming that there is an unobservable “what it’s like to be the dog” that needs to be described would be a faulty way of thinking.
Once we cease discussing “the subjective experiences from the perspective of others,” the notion of qualia becomes unnecessary, and the problem of inverted qualia vanishes. What remains are concrete, observable, and understandable “differences in representation.” Here, by representation, I am also referring to the concept of “encoding.”
It is evident that different machines can have different internal encodings. The large language models that have recently attracted so much discussion, such as ChatGPT, can serve as an example. These models operate roughly as follows: sentences are first tokenized and mapped to vectors of real numbers; on top of these vector representations, a series of mathematical operations (such as dot products, matrix multiplications, and nonlinear functions) is applied, finally generating the output sentences. In theory, if we have a model A, we can create a new model B by applying a linear transformation to its vector representations while applying the corresponding inverse transformation to the network weights that process these vectors. The result is two models with identical input-output relations but different internal representations for each word. We can let these two language models converse and observe how they use each word: they use every word in identical ways even though they encode the sentences differently.
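Here is a minimal numpy sketch of this basis-change argument for a single linear layer (real transformer layers add nonlinearities and normalization, so the full construction is more involved, but the idea for the linear parts is the same): transform the embeddings by an invertible matrix and fold its inverse into the weights, and the input-output behavior is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 4, 10

E = rng.normal(size=(vocab, d))   # model A: one embedding vector per token
W = rng.normal(size=(d, vocab))   # model A: linear layer producing output scores

M = rng.normal(size=(d, d))       # a change of basis (a random matrix is
                                  # invertible with probability 1)
E_b = E @ M                       # model B: transformed internal representation
W_b = np.linalg.inv(M) @ W        # model B: weights absorb the inverse transform

token = 3
print(np.allclose(E[token] @ W, E_b[token] @ W_b))  # True: identical behavior
```

The two models assign a different vector to every token, yet no input-output experiment can tell them apart.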
Likewise, we can imagine two robots with identical light sensors but different internal encodings for storing color information, each mapping its internal color encoding to language differently. In this way, they can both agree that an apple is red, even though they store the light information internally in different encodings. Humans, as complex machines, each possess a unique genetic makeup and undergo different developmental processes. Even with deep learning models, slight differences in the random seed may result in drastically different models. Therefore, it is not surprising that our eyes differ to varying degrees, that we receive different visual information, and that we adopt different internal neural encodings. Nevertheless, synchronization through communication allows us to use words like “red” and “green” in a roughly similar manner.
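A toy sketch of the two robots, with entirely arbitrary encodings and thresholds chosen for illustration: each stores the same sensor reading on a different internal scale, yet both name the colors identically.

```python
# Two hypothetical robots with identical light sensors but different
# internal color encodings; all scales and thresholds are arbitrary.

def robot_a_encode(wavelength_nm: float) -> float:
    return wavelength_nm / 700.0              # stores a normalized value

def robot_a_name(code: float) -> str:
    return "red" if code > 600.0 / 700.0 else "green"

def robot_b_encode(wavelength_nm: float) -> float:
    return 700.0 - wavelength_nm              # stores an inverted scale

def robot_b_name(code: float) -> str:
    return "red" if code < 100.0 else "green"

for wl in (620.0, 530.0):                     # light from an apple and its stem
    print(wl, robot_a_name(robot_a_encode(wl)), robot_b_name(robot_b_encode(wl)))
    # the two robots always agree on the word, despite different internal codes
```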
In the descriptions above, there is simply no room for qualia. Even if we observe these differences in perception and representation in humans, animals, or machines, we cannot claim any differences in qualia, because I have never measured or even experienced “another person’s qualia” at any moment! Predictions regarding qualia cannot be verified and provide no benefit to our understanding of a person’s behavior. When our goal is to understand this world, it seems better to discard the concept of qualia.
. . .
Conclusions
In recent years, Chalmers has also started discussing the so-called “meta-problem of consciousness.” Simply put, instead of solving the aforementioned hard problem of consciousness, he wants to shift the question to “Why do people think there is a hard problem of consciousness?” However, he does not seem to believe that discussing the meta-problem will solve the original hard problem.
The idea I have proposed can also be considered a kind of meta-solution based on meta-thinking. By meta-thinking, I mean that instead of directly adopting our habitual ways of thinking, we step back and observe our thinking process, treating it as a tool and evaluating its usefulness. The habitual ways of thinking that should be critically examined are:
- Abusing the word “why” because one refuses to accept brute facts. This is what caused the first part of the hard problem, in which one tries to understand the relation between one’s subjective experiences and the related physical phenomena (which are really just another part of one’s experiences).
- Trying to describe or predict others’ qualia or subjective experiences, which we cannot access. This is what caused the second part of the hard problem.
I believe that once we adopt the meta-thinking proposed here, we will find that the ways of thinking listed above do not lead to fruitful knowledge. Once we stop thinking in these ways, the hard problem of consciousness naturally becomes an unreasonable question, and refusing to answer an unreasonable question is the true solution.
In this article, I have analyzed common concepts in discussions of consciousness, including the hard problem, philosophical zombies, phenomenal consciousness, and qualia. Assessing the usefulness of these ideas and of the ways of thinking that lead to them, I find them impractical: they do not lead to useful knowledge. Thus, the best thing to do is simply to stop thinking in such ways. In doing so, there will be no more hard problems.
. . .
References
- Chalmers, David (1995). “Facing up to the problem of consciousness”. Journal of Consciousness Studies. 2 (3): 200–219.
- Nagel, Thomas (1974). “What Is It Like to Be a Bat?”. The Philosophical Review. 83 (4): 435–450.
- Reiss, Julian and Sprenger, Jan (2020). “Scientific Objectivity”. The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), Edward N. Zalta (ed.).