There are thoughts, which include beliefs and desires. We think:  «There is a tree here.» | «I want chocolate.» | «I believe there is a tree here.» | «Natural numbers are well-ordered.»

There are qualia. We experience: A good feeling. | A strong red image. | The image of a tree. | The feeling of wanting chocolate. | The feeling of certainty about a belief. | The feeling of thinking about mathematics.

There are also thoughts about qualia. We think: «I’m feeling something.» | «The quale I’m feeling is good.» | «I’m seeing a red-hot color.» | «I’m seeing a tree.» | «I feel a desire for chocolate.» | «I feel certainty about this belief.» | «I’m having the experience of thinking about mathematics.»

I. Functional Thoughts, Qualitative Thoughts.
There are thoughts. Some of these thoughts are accompanied by feelings of having them, which we may call the phenomenal experience of thinking. There are also feelings. And there are thoughts about all kinds of feelings: when we feel something, we can think about it. These processes may stack: we may feel that we are thinking about a belief we have about something we felt. In that case, we can think about this feeling too, and then we can feel that we are thinking this. This nesting can be iterated as often as we want, it seems.

The relationship between thoughts about feelings and the feelings themselves is interesting, and I want to explore it.

We may call the experience of thinking, believing, desiring, and whatnot qualitative thinking, qualitative believing, and so on. These are mysterious; like all qualia, we know not what they are. Contrariwise, we do know what thoughts are. We know what it is to think about mathematics, and to think about our qualia, and to think about trees. It’s physical processes in the physical brain, some sort of mechanism that may be analogous to computation. Then, we have assumption one: thinking is a physical mechanism in the brain.

Thinking itself, the «processing of information», the discriminating and the concluding, the comparing and the evaluating, we shall call functional thought. Much functional thought occurs unconsciously, while some is accompanied by qualitative thought. The same goes for believing. Beliefs themselves are «information-states» which are taken in as input when our brain processes language, planning, inference, and other kinds of functional thought, and we may call those functional beliefs. We can also characterize functional desires and others.

I want to know a few things. (1) What is the relation in the brain between that which does the thinking and that which does the feeling? — (2) Particularly, I want to know what «access» the thinking (and speaking, believing, acting) parts of our brains have to the feelings that occur inside our craniums. — (3) One idea I want to explore is that qualia are identical to certain thinking processes, but this will only appear in section VI.

II. Definitions and Caveats.
Let us call the parts of us which do the thinking, the talking, the planning, and so on, the Thinking Parts (TPs). To be more explicit, ‘TP’ is an umbrella term that includes all our faculties of judging, analyzing, discriminating, believing, reporting, questioning, articulating, comparing, remembering, entertaining, planning, acting, reflecting, introspecting, evaluating, appreciating, and so on. For ease of exposition, we should call all such processes “thinking.” By assumption one, they are all processes in the brain, and by definition we can only think what our Thinking Parts think.

Let us call the parts of the physical brain which do the feeling the Feeling Parts (FPs). Something in the physical brain must have information about qualitative states (i.e. access to qualia), if qualia exist and if epiphenomenalism is false, and these we call the Feeling Parts. Otherwise, we have no Feeling Parts.

This access can occur in two fashions. First, somehow, qualitative states may be identical to certain physical aspects of the brain, like activation patterns or neurobiological states. In these cases, it is no mystery how we can think about qualia: just feed our TPs information about the state of our own brains. The mystery: how it is that something neurobiological or informational can be qualitative at the same time? (This will come back in section VI.)

Second, qualitative states may be distinct from any part of our brains, and somehow our Feeling Parts perceive these qualia. Under some causal theory of perception, qualia would cause state-changes in our FPs, like external objects cause state-changes in our visual cortices. In this case, it is no mystery how we can think about qualia: juggle information around until our TPs have them. The mystery: how it is that such causal connection (such “access”) occurs between neural systems and qualitative states? (This mystery I will not deal with.)

What I will say in sections III and IV holds independently of these two possibilities. I want to examine the relation between thought and qualia, because I want to see what this relation must be if we are to have unerring introspective access to our qualitative states. This is why I am characterizing parts of our brain that think, and parts of our brain that access qualia. Throughout the text, since I have as an axiom in my mind that our access to qualia is infallible, I will assume the Feeling Parts of our brain have unerring access to our qualitative states. Then, I will investigate (i) what must be the relation between our Thinking Parts and our Feeling Parts for introspection about qualia to be unerring, and (ii) what must be the relation between our Feeling Parts and our qualitative states for introspection about qualia to be unerring.

I will begin by considering the two possible relations between our TPs and our FPs. Maybe they are different, maybe they are the same. In section III I examine what happens when we have Feeling Parts distinct from our Thinking Parts, which we may call the disjunctive hypothesis. In the latter case, some TPs may be FPs while others are not, but each FP is identical to some TP. We shall not deal with this identity hypothesis until section VI.

I think that considering these two hypotheses carves something in the conceptual space of the philosophy of mind at its joints. We must have something that has access to qualitative states, and we need to consider whether this something is different from the parts of us that introspect and think about qualia.

(Side-note: This is not important here, but I want to say Feeling Parts may not be centralized. Perhaps qualitative states and their perception occur at multiple points. The visual cortex might have its very own Feeling Part, and the auditory cortex another, and the introspective parts of our cortex yet another one too, each feeling a different kind of thing at a different location and time. One mystery, which I shall not deal with, is how the unity of consciousness can occur with such decentralization in place.)

III. Introspection and the Disjunctive Hypothesis.
If the parts of us that think (especially about qualia) are distinct from the parts of us that have information about qualia, then thoughts about what is being felt can only be accomplished when some TP receives, from some FP, information about what is being felt. A schema for what is going on is this: Qualia → Feeling Part → Thinking Part. Later on, in section VI, we’ll try to connect qualia and thinking directly.

For example, a red quale occurs, and some of our TPs are duly informed of this. As a result, they produce functional thoughts of the kind: «I am feeling red.» | «Red is being felt.» | «I am feeling something.» | «I have qualia.» | «I want to look more at this.» — There may also be subjective reports about the redness being observed, along with commentaries on the beauty of the red, and the like.

But what are our TPs being informed of? They are blind to qualitativeness. They must be fed descriptions of what is being felt. So they can only tell whether a qualitative state is vivid or warm or comforting by analyzing the descriptions they were fed. The descriptions themselves cannot contain such information explicitly: under the disjunctive hypothesis, our Feeling Parts do not judge or evaluate. They are as dumb as a nut, and their task is translating from one format (qualitativeness) to another (description).

This description must be pretty bare and uninterpreted: qualia must leave uninterpreted marks in our FPs somewhat like feet leave uninterpreted marks on dirt — footprints. All the interpreting our FPs can have done, then, is translating their input into some format readable by our TPs. I suppose it must be encoded informationally, in activation patterns and circuit structures. Perhaps an FP could describe the quale of a tree-image with a bitmap, coding for hue, saturation, and brightness at each point in a grid. There are two possibilities here.

(A) Qualia are not activation patterns or similar stuff. Presumably, this entails one cannot capture qualia with descriptions, much like one cannot capture a painting with a description of it. We already know our Thinking Parts are incapable of grasping and thinking about red qualia in their full qualitative glory, since they are not Feeling Parts. In this scenario A, our TPs won’t have equivalent non-qualitative surrogates either. They’ll have to make do with our FPs’ best descriptive efforts, which are imperfect and non-equivalent to the original due to the translation between two wholly different formats: from qualitativeness to description. Any intrinsic warmness, vividness, painfulness, etc. the qualitative state may have had will be lost in translation.

(B) Qualitative states are activation patterns (or similar stuff) in the FPs. In this case, I’m not sure what would happen. If qualia were activation patterns, then our TPs could get perfect descriptions of qualia. Would this mean our TPs would be feeling something too? Ex hypothesi, this could not be. There are two ways to avoid the collapse of our hypothetical disjunction: Either (i) our FPs do not describe their own activation patterns perfectly, so that our TPs do not implement activation patterns which are qualitative, or (ii) the same activation patterns may occur without being qualitative; perhaps they must be implemented in the neurobiological matter of our Feeling Parts to be qualitative. — Both of these are weird, the second one more so, but that’s scenario B under the disjunctive hypothesis!

What I will say from now on holds for both scenarios, A and B.

FP and TP
We begin with an image (a quale). This was clearly generated by our brain, as our knowledge of brain-mind dependency assures us. This quale may or may not be different from some state of the Feeling Parts of the brain. → Now, the FPs encode this image into data-arrays (information, descriptions) the TPs can read. → Finally, the TPs decode and analyze these data-arrays, outputting beliefs, judgments, reactions, subjective reports, statements, evaluations, and the like.

Here’s the crucial point: where there is communication going on, there can be miscommunication. In fact, if a system’s TPs and FPs are distinct, then there can be systematic miscommunication between them. The TPs may be getting systematically distorted information from the FPs about what is being felt — and the FPs’ reports are the only source of information our TPs have about phenomenal experience. Here are some examples of what could happen.

In one case, the FPs might tell the TPs the opposite of what is happening. For instance, a red-hot quale may be felt, and some TP may be misinformed that a blue-ice quale is occurring. To pick another example, perhaps when the system’s eyes are directed at the sunset, the quale of a garish image pops up in its mind’s eye (that is, in one of its FPs), but nevertheless the thinking brain (the TPs) is informed of a description of a wholly different image, whose color composition is much more harmonious, and it erroneously concludes: «I am seeing a beautiful sunset, how wonderful.»
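The kind of systematic inversion just described can be pictured with a toy sketch. To be loudly clear: this is purely illustrative, not a model of the brain; the names `feeling_part_report`, `thinking_part_belief`, and the color labels are my own inventions. The point the sketch makes is only this: the TP’s belief is a function of the description alone, so a systematically inverted translation table inside the FP is invisible to the TP.

```python
# Toy illustration of FP-to-TP miscommunication (all names hypothetical).

# The quale that actually occurs. The Thinking Part never sees this directly.
felt_quale = "red-hot"

# A systematically inverted translation table inside the Feeling Part:
# every report it emits describes the opposite of what occurred.
INVERTED_ENCODING = {"red-hot": "blue-ice", "blue-ice": "red-hot"}

def feeling_part_report(quale: str) -> str:
    """The FP's only job: translate qualitativeness into a description."""
    return INVERTED_ENCODING[quale]

def thinking_part_belief(description: str) -> str:
    """The TP forms beliefs from descriptions alone; it is blind to qualia."""
    return f"I am feeling {description}."

description = feeling_part_report(felt_quale)
belief = thinking_part_belief(description)

print(belief)  # prints "I am feeling blue-ice." -- while red-hot is what occurred
# Nothing available to the TP distinguishes this report from a faithful one:
# the description is its sole evidence about the phenomenal world.
```

Notice that `thinking_part_belief` would produce exactly the same output whether the encoding table were inverted or faithful; that is the sense in which the distortion is undetectable from the TP’s side.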

More radically, in another case, a bunch of things are being felt, but the FPs fail to inform any TP of what is happening, and simply remain silent. For example, strong pain may be felt, but no information on this is communicated. As a result, the pain goes unnoticed and unthought of by the TPs. Perhaps one of the TPs, noticing the idleness of the FPs, even forms the erroneous belief «All’s well» or «There is no pain».

(Here are some questions, which may or may not be important to what I’m thinking through. Is any brain system in the above scenario noticing the ongoing sustained, sharp pain, and perhaps getting distressed over it? Our TPs are surely not noticing; could our FPs notice what is happening? Wouldn’t that require powers of discrimination and remembrance? — The point is, what happens if there is a pain that cannot affect the Thinking Parts of the brain? Is it just felt by some FP, silently, unthought of, unprocessed, unremembered from one instant to the next?)

Moving on, other things may happen. We have considered the FPs telling the TPs distorted versions of what is happening, and the FPs failing to tell the TPs about things that are happening. There is also the case in which the FPs are telling the TPs about a bunch of sensations when no sensations occurred in the first place.

For instance, the FPs may detect no pain, but they nevertheless signal that pain is being felt. In this case, the TPs mistakenly generate the thoughts «Pain is being felt» and «I feel pain». Consider the following strange situation. A system that is (1) functionally convinced that it is in pain, because its FPs are signaling pain to its TPs; (2) qualitatively convinced that it is in pain, a conviction generated by its functional conviction of being in pain; and (3) functionally in pain, since its TPs are receiving pain-signals (e.g. the system is shrieking, worrying, focusing all its attention on discovering how to stop the pain) — but the system is, nevertheless, NOT (4) qualitatively in pain.

This is an absent qualia scenario: a functional state (3) which corresponds to a quale, but this quale is absent (4). If one tinkers with the set-up above and replaces the absence of pain with the presence of something else, like a mild discomfort or the feeling of being caressed, the result is an inverted qualia scenario. — It is one of my contentions that such functional-qualitative mismatches can only occur if the disjunctive hypothesis is true. (See section VI for more details and a small correction to what I have just said.)

What is weirdest about absent qualia and inverted qualia scenarios is not that there is a mismatch between what the system functionally thinks itself to be feeling and what is really felt. It is that in some of these scenarios one would not be able to notice or think about these mismatches, since the noticing and thinking are done by the TPs, and they would not be informed of any mismatch. Right now you could be feeling the worst toothache in existence, while being entirely qualitatively and functionally convinced that there is no pain. Nothing in your introspective realm could reveal that mismatch, for introspection is a thinking, searching, retrieving, and analyzing process carried out by the sorely misinformed TPs.

Ponder on this fact, for it is mind-boggling, difficult, and important. Introspection is carried out by the TPs. If your Thinking Parts are misinformed to some degree, the results of your introspection will be misguided to that same degree. — Furthermore, nothing available to your Thinking Parts distinguishes between fraudulent and genuine reports from your Feeling Parts. Informationally and evidentially, these two kinds of reports are the same to any Thinking Part, and a fortiori indistinguishable by any introspective process.

(To be clear, if one has a qualitative belief that one is conscious, then ipso facto one is conscious. Illusion of consciousness can happen only with functional beliefs.)

IV. Consequences for Epiphenomenalism.
Now I can say why I think epiphenomenalists are hopelessly muddled when they attempt to explain how we know ourselves to be conscious. I am not claiming epiphenomenalism is a consequence of the disjunctive hypothesis; it obviously is not. It merely has many similar features, and this is (a) why I can address epiphenomenalism with results from my thinking of the consequences of the disjunctive hypothesis and, as we shall see, (b) why I can shed new light on what is happening under the disjunctive hypothesis by thinking about epiphenomenalism.

I have two interconnected reasons for thinking that under epiphenomenalism we have no knowledge of qualitative states. (1) Under epiphenomenalism, human brains have no Feeling Parts, and thus our Thinking Parts have no source of information that discriminates between states in which qualia exist and states in which they don’t, or between states where quale A exists and states where quale B exists. The evidence and information before our TPs are the same as those before the TPs of a philosophical zombie: nil.

(2) People who consider epiphenomenalism usually are distressed by the possibility that they, these “qualitative I’s” which are having certain experiences right now, are mere appendages to the world. Such experiences are consistent with epiphenomenalism: there sure can be qualitative experiences of wondering «whether epiphenomenalism is true», and experiences of «despairing over such possibility.»

However, if these experiences are truly ours, then they must be a product of our thinking — and our thinking, ex hypothesi, goes on in our TPs. That was my initial assumption: thinking goes on in the brain. All qualitative thoughts follow our functional thoughts. So if property dualists are right and there is more to each of us than our brains, the conscious part of us would not be able to think of anything our brains did not think. As a consequence, we could not have qualitative thoughts being influenced by what happens in our conscious life!

Since our functional thoughts are products of the inferential and evidential paths available to our Thinking Parts, these paths would take no notice of the existence of qualia if epiphenomenalism were true. They’d have neither evidence nor indication of what, if anything, is happening in our phenomenal world. If our thoughts about our conscious lives actually matched what went on in our conscious lives, we could be nothing but lucky. This is because no mechanism could “read off” phenomenal events and inform our brains.

As a result, no one could be justified in saying: «I’m sure of this feeling which I’m having right now, because I’m having it!» One could not have any evidence for such a feeling, nor could one’s sureness be caused by the existence of such an experience. In addition, there could be no justification for saying this: «I am these qualitative states, and I am a causally inert appendage to the world, a mere epiphenomenon, and this makes me despair.»

The “I” who’s thinking, mulling over, worrying about, despairing, and talking is our «functional self», the Thinking Parts of our brain. The whole train of thought of our qualitative selves depends on the musing of our functional self.

As someone has said in an online forum: “Once you see the collision between the general rule that consciousness has no effect, to the specific implication that consciousness has no effect on how you think about consciousness (in any way that affects your internal narrative that you could choose to say out loud), zombie-ism stops being intuitive.”

Outside epiphenomenalism, this is not worrying at all. If the connection between our TPs and our FPs is well-functioning, then our thoughts are felt and our feelings are thought of. What happens to our qualitative selves is thought about by our functional selves, and these thoughts resurface as states in our qualitative selves. So one can think one is feeling such-and-such, and one can feel that one is thinking about such feels.

In this case our TPs could worry whether the qualitative states they are thinking about are mere appendages, which would lead to qualitative states of distress over epiphenomenalism. But the very fact that one can worry about the qualitative states that are happening now means one has access to such qualitative states, in which case epiphenomenalism is incorrect.

So if you are pondering over your present experiences, and because of that worrying whether epiphenomenalism is true, — and those are things which are done by your functional self, ex hypothesi —, then you can be sure epiphenomenalism is wrong.

Small detour: I want to say that, if you’re sure of your own consciousness, then epiphenomenalism is false too. If epiphenomenalism were right, we’d all be deluded in thinking we have any reason to speak of consciousness. It would be like talking about omega waves or some other piece of fiction. David Chalmers has an objection to that, which he offered as a defense of p-zombie-ism (which he explicitly says does not entail epiphenomenalism):

“It may be tempting to object that if my belief lies in the physical realm, its justification must lie in the physical realm; but this is a nonsequitur. From the fact that there is no justification in the physical realm, one might conclude that the *physical* portion of me (my brain, say) is not justified in its belief. But the question is whether *I* am justified in the belief, not whether my *brain* is justified in the belief, and if property dualism is correct then there is more to me than my brain. (…) To say that the experience makes no difference to my psychological functioning is to say that the experience makes no difference to *me*. (…)  Even when it is objected that my zombie twin would believe the same things that I would, this does nothing to make plausible the first-person skeptical hypothesis that I might be a zombie. Underlying this sort of objection may be the implicit assumption that the beliefs themselves are the primary determinants of my epistemic situation; so if there is a situation in which I believe exactly the same things that I do now, it is a situation that is evidentially equivalent to my current one. But of course this is false. The evidence for my beliefs about experiences is much more primitive than the beliefs themselves. It is experience itself that is primary; the beliefs are largely a secondary phenomenon.” (The Conscious Mind, pages 198-199.)

This is a fine answer by Mr. Chalmers, and one might accept that one’s qualitative “half” is justified in its qualitative belief that it’s conscious; but this comes at a cost. One cannot accept, under the hypothesis that underpins my whole essay, that Chalmers’s qualitative self could think about that which he is experiencing (except by miracle or sheer luck), since thinking goes on in the brain. The only exit for Chalmers here is postulating that some thinking goes on independently in our conscious minds.

V. Consequences for the Disjunctive Hypothesis.
That the Thinking Parts and the Feeling Parts are not the same opens up the possibility of their miscommunication. In fact, it opens up the possibility of a cognitive system S, sufficiently like mine and yours, in which they systematically miscommunicate: perhaps S never felt anything, but its TPs strongly believe it has; or perhaps feelings abound in S, but its TPs believe S to be a philosophical zombie. (It could even experience conviction about being a philosophical zombie. Should this be possible?)

We also have that S’s TPs have no way to discriminate between different qualitative states, even if its FP-to-TP connections are well functioning.

I think it will be profitable to consider what a conversation between two such systems would be like. This little thought experiment will heighten our understanding of what is doing the thinking when we are discussing epiphenomenalism, philosophical zombies, inverted qualia, and absent qualia. And it will make clear why all these theses lead us into perfect skepticism regarding the character (or the mere existence) of our qualitative states.

Having a conversation with a system means conversing with the parts of it that think and speak. When you and a colleague engage in conversation, what we have are two systems of Thinking Parts exchanging information. This is because the thing that will interpret, understand, think about, and evaluate what you say, and afterwards articulate a response, will be your partner’s TPs — and your own system of TPs will be the one dealing with what your partner says: understanding, thinking, and responding. The conversation, then, is between two systems of TPs. (Even when I speak to myself, the truth is that my TPs are exchanging information.)

IF these two systems do not have direct access to qualitative states, and are either (α) being systematically misled by their only informants, the FPs, or are (β) causally closed-off from qualia, as in epiphenomenalism, THEN what we have is a conversation that is going on independently of anything that happens in the phenomenal/qualitative world. The two systems of TPs would be shooting in the dark when they speculate about the character of their present qualitative states, or about whether there are any qualitative states at all.

TP-to-TP communication, a closed circuit. Person A functionally thinks something, articulates it, and says it. Person B functionally hears it, functionally understands it, and functionally thinks about it. — Diagram: low-level processes create qualia, which are felt by the FP. The FP then tells the TP what was felt. — Must qualia enter the picture for this conversation to take place effectively? Can’t unconsciously parsed language just go to the TP, without becoming qualitatively heard sentences? Can such information traverse the gray path and bypass the phenomenal world?

Now here I should repeat my point. Even if our TPs were never misled by our FPs, how could our TPs have any evidence of that? All any TP-system in human existence ever got to see were data-arrays which said «this and this was felt» — none of them ever accessed the feelings themselves. Informationally and evidentially, our TPs do not discriminate between the existence and the non-existence of qualitative states. No TPs would ever have sufficient information to tell whether scenario α above, that of systematic miscommunication, is the present scenario.

(Minor digression: It should not be difficult to fake data-arrays which talk about fictional feelings; at least, not more difficult than generating qualia and then describing them accurately. The example I will give now is due to Dennett (1995). Suppose you have two systems, system A and system B, and you want to get functional visual information from A to B. A bad strategy would be displaying this information in a monitor attached to A, and then attaching a TV-camera to system B, so that it can look at the monitor and proceed to decode what it sees into data-arrays system B can actually read. — Why not bypass the whole ordeal and just transmit information from system A (which is analogous to our visual cortex) to system B (which is analogous to our TPs)? Why the monitor (qualia) and the TV-camera (our FPs)?

In fact, faking phenomenal experience would be the perfect evolutionary strategy: convince your system that it has a bunch of feelings — that is, that it has a bunch of states which it really dislikes or really likes, like pain and pleasure — and then it will act as if it had pain and pleasure: it will flee, search for food, defend itself, have sex, change strategies after a failed plan, protect its children, functionally love its neighbors, and so on. Why go through all the trouble of making these data-array reports of qualia genuine, if their genuineness does not matter? All that matters is the behavior the system of TPs outputs. Fool them, and you’ve got an adapted creature.)

I said the above was a minor digression because the fact that the faking strategy is possible and seemingly advantageous does not prove that it is implemented. Evolutionary processes sometimes generate cumbersome systems. Plus, I am quite sure that I am not being misled about my qualitative states. So let us not dwell on this point, and instead bask in the glory of our result: the disjunctive hypothesis leads to skepticism about qualia, because evidentially there is nothing about qualia before our reasoning faculties.

(I suppose I should say it now: I am analyzing these matters from an internalist, evidentialist, and causal-theory-of-perceptionist point of view. In future essays I will pursue what follows from different views about justification and knowledge.)

VI. The Identity Hypothesis: Two Possible Corollaries.
I reject eliminative materialism because consciousness is undeniable, and I will reject the disjunctive hypothesis and the very similar thesis of epiphenomenalism on the same grounds. That I, the one who thinks, have incontrovertible knowledge of my qualitative states is obvious. This knowledge stops being incontrovertible if my Thinking Parts do not have direct access to consciousness.

The alternative hypothesis, that of identity, has no trouble granting us such unerring access. What is difficult is understanding what, exactly, the truth of this hypothesis amounts to. Recall that earlier we were neutral as regards the Feeling Parts’ relation to qualitative states. Such states could be different from the processes in our FPs, while being perceived by these processes in some way; or they could be identical to such processes. All we said is valid for both these cases, for in both of them the information-holding FP is distinct from the information-receiving TP. Now, these two possibilities will be explored.

To be clear, the identity hypothesis does not merely state that (some of) our Thinking Parts are identical to our Feeling Parts. If we opened up a TP black box and found two distinct sub-parts, one that thought and one that felt, we would come back immediately to the problem of miscommunication. The identity hypothesis holds, instead, that every qualitative state is one of these: (I) Perceived by its corresponding functional state. For example, the qualitative state of hunger is perceived by functional states regarding what is being felt now. (II) Identical to its corresponding functional state. For example, the qualitative state of a beautiful tree-image is identical to the functional judgment that a beautiful tree is being visually experienced.

The ‘correspondence’ qualification above is present so there can be no mismatch between what is thought to be felt and what is felt. So introspection about our phenomenal experience cannot fail. Here are these two possibilities in more detail:

Option I. Perhaps our qualitative states differentially cause something to occur in our functional states, depending on their qualitative aspects. Our faculty of thinking and judging could be tuned at many points to respond appropriately to these qualitative states (much like our visual cortex responds appropriately to the state of the outside world), which presumably exist somewhere, in some form, inside our craniums, with causal efficacy on our neural circuits — be they physical or not.

I am left wondering why this extra step exists in our cognition, and how it is that qualitative states are differentially created by either brain states or functional states, and how they differentially affect (depending on their qualitative character) either brain states or functional states. Perhaps this is something a new theory of physical reality could answer.

I also wonder whether some of the above evidential problems surface here. For example, it is possible that our eyes and visual cortex could be badly tuned to respond to the characteristics of the external world, leading us to see trees where there were none. We know this isn’t so by indirect methods: what we see matches what we taste, feel, and hear (coherence), and the actions we take based on our visual beliefs lead to what we want and expect (success). We have independent measures of accuracy for our visual information. In fact, this is how skepticism about our knowledge of the external world is defeated.

Similarly, our TP-systems could be badly tuned to discriminate qualitative states, and we’d staunchly believe we were feeling good and living happily ever after, even though a bunch of horrendously painful qualitative states were going on. Could there be an independent measure of accuracy here? Presently, I do not see how. Thus, to avoid any possibility of mistake, let us turn to option two.

Option II. In this case, our functional desire to stretch our fingers is identical to our phenomenal experience of desiring just that. | Our functional belief that we are in pain is identical to our qualitative state of pain. The characteristics of this pain will be the set of judgments the system has about what is being felt. This needn’t be verbal; dogs may have pains just as we do, perhaps with some different characteristics, due to the different structure of their cognitive system. | Our functional discrimination of a duck while seeing a duck-rabbit drawing leads to the functional belief that we are seeing a duck, and this is identical to our qualitative visual awareness of the duck, the gestalt of seeing the drawing as a duck.

The details of our functional state when we see red are the phenomenal experience of seeing red. All those details lead us to describe our color experience as “hot”, and to judge it more similar to orange than to blue and to be the same color as an apple; they lead us to say that there is a “valiant” aspect to this red image, and that it is more “lovely” than the red of the U.S. flag. Anything else we might try to say and think when we experience a bright red qualitative state is a product of that functional state.

I am not claiming that pain is a certain set of dispositions, as Gilbert Ryle would. I think pain is an aspect of a certain information-state or activation pattern operating in our brain circuits. This information-state has certain characteristics, and these will be the characteristics of the quale of pain. These characteristics may lead us to shriek or say «I am in pain», or they may not; contra Ryle, I think the resultant behavior does not matter. What matters are the judgments that occur in our brain about what is occurring inside us.

To give yet another example, this time clearly related to the bogus inverted qualia thought experiment: if the leaf-shuffling sound I’m hearing right now had a different qualitative aspect, this would manifest in my thinking about it. Not only would I notice it, but I would be less comfortable with it, or perhaps remember it less fondly, or perhaps describe it as ‘prickly’ instead of ‘smooth’. I would imitate it differently too. — All variations in qualitative state are variations in functional state, and this is reflected in how I like it, how I describe it, how I compare it to other auditory feels, how I compare it to other feels in general, and so on. Thought and qualia are inseparable.

There is a strong objection that must be met, however. Phenomenologists like Hubert Dreyfus and Sean Kelly (in Heterophenomenology: Heavy-handed sleight-of-hand) argue that, when we are running, focused on catching a bus, we don’t really have a belief about what we are doing — but we are conscious nevertheless! Thus, they conclude, locating qualitative states as aspects of functional states like belief, desire, and other judgments is misguided. Dreyfus and Kelly are right in pointing out that these states are not judgments of a verbal or verbalizable kind, not beliefs, and not qualitative thoughts. (We are not thinking «I am feeling anxiety and physical exertion» while we run; that’s their point. We’re just having those experiences of running without reflecting upon them.)

However, we want to preserve the infallibility of introspection about qualia. We can only do so if our introspective states — the judgments about what’s going on in us, qualitatively — are identical to what’s going on in us, qualitatively. Thus, we must accept that even when running to catch a bus there are many judgments occurring, just not of the kinds listed in the previous paragraph.

I believe Dan Dennett would call these judgments «events of content-fixation», though I have not read his book The Intentional Stance, so I cannot say precisely what he means by that. Perhaps events of content-fixation are representations. I am not sure, and I am not even sure what counts as a representation. Perhaps they must be a more specific kind of representation: representations about what is happening inside the system itself.

Anyhow, determining which functional judgments are identical to which qualitative states is a task for further inquiry. The great mystery is explaining how a functional state/process is identical with a qualitative state/process, and we must solve it while preserving infallibility.

I want to tackle this in the future, but presently I cannot claim to make sense of all that, metaphysically: the functional/qualitative duality, and the criteria for representations having qualitativeness. However strange and ungraspable this all may seem, it does not seem metaphysically impossible. Furthermore, it is the only way for our knowledge of qualia to be incorrigible. So we should accept it, even if we don’t understand it, much like we accept that quantum states exist without understanding what they are. We accept quantum mechanics because we have strong evidence for it; we must also accept functional-qualitative identity because we have incontrovertible evidence for it.

This means that inverted qualia and absent qualia thought experiments are wholly wrong-headed, because they postulate we could invert or remove qualitative states from a system without changing any of its functional states. However, qualitative states are (certain) functional states.

(Repeating my earlier qualification: I use the word ‘functional state’ for lack of a better word. I am not a functionalist in the ordinary sense, for I don’t believe qualia are sets of behavioral dispositions or causal relations. Instead, I believe they are certain (perhaps dynamic) activation patterns in the brain. I should add that perhaps qualia are activation patterns in the brain specifically, so that the same activation pattern in a Chinese Gym or in an IBM computer would not be qualitative. I just don’t know.)

Post Scriptum.
One. Was I feeling my feet a few seconds ago, before I started paying attention to them? I am not sure there can be ‘unattended’ conscious experiences: something which I’m feeling, but about which I’m not reflecting in the least while I feel it. The reflection needn’t be anything complicated; a dog could do it. The system just needs some attention focused on that experience.

Two. Perhaps even the phenomenal experience of time, and the order of events in our conscious experience, are judgments in our brain that something extended in time is occurring, and that things are happening in a certain order. Conscious-time may not reflect real time. This is a difficult idea I read Dan Dennett (1995) discuss, and I do not claim to understand it.

Three. This essay can readily be seen as wholly Dennettian; everything is explained in functional terms (as the conjunction of the mechanicality of thought and the indubitability of our introspective knowledge about qualia forced us to conclude), except for that very special ingredient: qualia themselves. I will not fall into Dennett’s mistake of supposing that we could never understand this special little bit in functional terms without making it less special.