The Rise of Consciousness

Contents

The Rise of Function
The Power of Modeling and Entailment
Qualia and Thoughts
Color
Emotions
Thoughts
The Self
The Stream of Consciousness
The Hard Problem of Consciousness
Our Concept of Self

The Rise of Function

I’ve established what function is and suggested ways we can study it objectively, but before I get into doing that I would like to review how function arose on Earth. It started with life, which created a need for behavior and its regulation, which then created value in learning, and which was followed at last by the rise of consciousness. We take information and function for granted now, but they are highly derived constructs that have continuously evolved over the past 4.28 billion years or so. We can never wholly separate them from their origins, as the acts of feedback that created them help define what they are. However, conceptual generalizations about what functions they perform can be fairly accurate for many purposes, so we don’t have to be intimately familiar with everything that ever happened to understand them. This is good, because the exact sequence of events that sparked life is still unknown, along with most of the details since. The fossil record and genomes of living things are enough to support a comprehensive overview and also give us access to many of the details.

We know that living, metabolizing organisms invariably consist of cells that envelop a customized chemical stew. We also know that all organisms have a way to replicate themselves, although viruses do it by hijacking the cells of other organisms and so cannot live independently. But all life has mutual dependencies on all other life, either very narrowly through symbiosis or more broadly by sharing resources in the same ecological niche or on the same planet. Successful competition within or between species is rewarded with a larger population and a larger share of the resources. The chemical stew in each cell is maintained by a set of blueprint molecules called DNA (though the earliest genetic material is thought to have been RNA) which contain recipes for all the chemicals and regulatory mechanisms the cell needs to maintain itself. Specifically, genes are DNA segments that either encode proteins or regulate when protein-coding genes turn on or off. Every cell capable of replication has at least one complete set of DNA1. DNA replicates itself as a double helix that unwinds like a zipper while simultaneously “healing” each strand to form two complete double helices. While genes are not directly functional, proteins have direct functions and even more indirect ones. Proteins can act as enzymes to catalyze chemical reactions (e.g. replicating DNA or breaking down starch), as structural components (e.g. muscle fibers), or can perform many other metabolic functions. Not all the information of life is in the DNA; some is in the cell membranes and the chemical stew. The proteins can maintain the cells but can’t create them from scratch. Cell membranes probably arose spontaneously as lipid bubbles, but through eons of refinement, their structure has been completely transformed to serve the specific needs of each organism and tissue. The symbiosis of cells and DNA was the core physical pathway that brought function into the world.

A stream has no purpose; water just flows downhill. But a blood vessel is built specifically to deliver resources to tissues and to remove waste. This may not be the only purpose it serves, but it is definitely one of them. All genes and tissues have specific functions which we can identify, and usually one that seems primary. Additional functions can and often do arise because having multiple applications is the most convenient way for evolution to solve problems given proteins and tissue that are already there. Making high-level generalizations about the functions of genes and tissues is the best way to understand them, provided we recognize the limitations of generalizations. Furthermore, studying the form without considering the function is not very productive: physicalism must take a back seat to functionalism in areas driven by function.

Lifeforms have many specialized mechanisms to perform specific functions, many of which happen simultaneously. However, an animal that moves about can’t do everything at once because it can only be in one place at a time. A mobile animal must, therefore, prioritize where it will go and what it will do. This functional requirement led to the evolution of animal brains, which collect information about their environment through senses and analyze it to develop strategies to control the body. Although top-level control happens exclusively in the brain, the nervous and endocrine systems work in tandem as a holobrain (whole brain) to control the body. Nerves transmit specialized or generalized messages electrochemically while hormones transmit specialized messages chemically. While instinctive behavior has evolved for as many fixed functions as feasible, circumstances change all the time, and nearly all lifeforms consequently have some capacity to learn, whether they have brains or not. Learning was recently demonstrated quite conclusively in plants by Monica Gagliano2. While non-neural learning mechanisms are not yet well understood, it seems safe to say that both plants and animals will habituate behaviors using low-level and high-level mechanisms because the value of habituation is so great. While we also can’t claim full knowledge about how neural learning works, we know that it stores information dynamically for later use.

My particular focus here, though, is not on every way brains learn (using neurons or possibly hormonal or other chemical means), but on how they learn and apply knowledge using minds. “Mind” is something of an ambiguous word: does it mean the conscious mind, the subconscious mind, or both? English doesn’t have distinct words for each, but the word “mind” mostly refers to the conscious mind with the understanding that the subconscious mind is an important silent partner. When further clarity is needed, I will say “whole mind” to refer to both and “conscious mind” or “subconscious mind” to refer to each separately. Consciousness is always relevant for any sense of the word “mind” and never of particular relevance when using the word “brain” (except when used very informally, which I won’t do). Freud distinguished the unconscious mind as a deeper or repressed part of the subconscious mind, but I won’t make that distinction here as we don’t know enough about the subconscious to subdivide it into parts. While subconscious capabilities are critical, we mostly associate the mind with conscious capabilities, namely four primary kinds: awareness, attention, feelings, and thoughts. Under feelings, I include sensations and emotions. Thoughts include ideas and beliefs, which we form using many common sense thinking skills we take for granted, like intuition, language, and spatial thinking. Feelings all have a special experiential quality independent of our thoughts about them, while thoughts reference other things independent of our feelings about them. Beliefs are an exception; they are thoughts because they reference other things, but we have a special feeling of commitment toward them. We have many thoughts about our feelings, but emotions and beliefs are feelings about our thoughts. I’ll discuss them in detail below after I have laid some more groundwork. Awareness refers to our overall grasp of current thoughts and feelings, while attention refers to our ability to focus on select thoughts and feelings. All four capabilities — awareness, attention, feelings, and thoughts — help the conscious mind control the body, which it operates through motor skills that work subconsciously. Habituation lets us delegate fairly complex behaviors to the subconscious, which can execute them with little or no conscious direction. This may make it seem like the subconscious mind acts independently, because it initiates some reactions before we are consciously aware of having reacted. It is more efficient and effective to leave routine matters to the subconscious as much as possible, but we can quickly override or retrain it as needed.

The Power of Modeling and Entailment

The role of consciousness is to promote effective top-level decision making in animals. While good decisions can be made without consciousness, as some computer programs demonstrate, consciousness is probably the most effective way for animals to make decisions, and in any case, it is the path that evolution chose. Consciousness is best because it solves the problem of TMI: too much information. Gigabytes of raw sensory information flow into the brain every second. A top-level decision, on the other hand, commits the whole body to just one task. How can all that information be processed to yield one decision at a time? Two fundamentally different information management techniques might be used to do this, which I generically call data-driven and model-driven. Data-driven approaches essentially catalog lots of possibilities and pick the one that seems best. Model-driven approaches break situations down into more manageable pieces that follow given rules. Data-driven techniques are holistic and integrate diverse data, while model-driven techniques are atomistic and differentiate data. The subconscious principally uses data-driven methods, while the conscious mind principally uses model-driven methods, though they can leverage results from each other. The reason is that data-driven methods need parallel processing while model-driven methods require single-stream processing (at least at the top level). The conscious mind is a single-stream process while the rest of the mind, the subconscious, is free to process in parallel and most likely is entirely parallel. This difference is not a coincidence, and I will argue more later that the sole reason consciousness exists is so that our decision making can leverage model-driven methods.3 The results of subconscious thinking like recognition, recollection, and intuition just spring into our conscious minds after the subconscious has holistically scanned its stored data to find matches for given search criteria. While we know these searches are massively parallel, the serial conscious mind has no feel for that and only sees the result. The drawback of data-driven techniques is that while they can solve any problem whose solution can be looked up, the world is open-ended and most real-world problems haven’t been posed yet, much less solved and recorded. Will a data-driven approach suffice for self-driving cars? Yes, probably, since the problem space is “small” enough that the millions of hours of experience logged by self-driving cars can let them equal or exceed what humans can do with just hundreds to thousands of hours. Many other human occupations can also be largely automated by brute-force data-driven approaches, all without introducing consciousness to robots.
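
To make the contrast concrete, here is a minimal Python sketch of the two strategies. Everything in it is hypothetical and invented for illustration; it is not a claim about how brains (or self-driving cars) actually implement either approach, only a toy showing lookup versus entailment.

# Illustrative sketch only: a toy contrast between a data-driven lookup
# and a model-driven derivation. All names and rules here are hypothetical.

# Data-driven: catalog past situations and pick the best recorded response.
experience_log = {
    ("rustling grass", "dusk"): "freeze",
    ("rustling grass", "noon"): "ignore",
    ("sudden shadow", "dusk"):  "flee",
}

def data_driven_decision(situation):
    """Look the situation up; fall back to a default if it was never seen."""
    return experience_log.get(situation, "explore cautiously")

# Model-driven: break the situation into parts and apply entailment rules.
def model_driven_decision(predator_nearby, food_visible, energy_level):
    """Derive a decision from a simplified causal model of the situation."""
    if predator_nearby:                      # danger entails avoidance
        return "flee"
    if food_visible and energy_level < 0.3:  # hunger plus food entails eating
        return "eat"
    return "forage"

print(data_driven_decision(("rustling grass", "dusk")))  # freeze
print(model_driven_decision(False, True, 0.2))           # eat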

The more interesting things humans do involve consciously setting up models in our minds that simplify more complex situations. Information summarizes what things are about — phenomena describing noumena — and so is always a simplification or model of what it represents. But models are piecewise instead of holistic; they explicitly attempt to break down complex situations into simpler parts. The purpose of this dissection is to let us consider logical relationships between the simplified parts and derive entailment (cause and effect) relationships. The power of this approach is that conclusions reached about models will also work for the more complex situations they represent. They never work perfectly, but it is uncanny how well they work most of the time. Data-driven approaches just don’t do this; they may discriminate parts but don’t contemplate entailment. Instead, they look solutions up from their repertoire, which must be very large to be worthwhile. While physical models are composed of parts and pieces, conceptual models are built out of concepts, which I will also sometimes call objects. Concepts or objects are functionally delineated things that are often spatially distinct as well. An object (as in object-oriented programming) is not the thing itself, but what we know about it. What we can know about an object is what it refers to (is about) and its salient properties, where salience is a measure of how useful a property is likely to be. Because a model is simpler than reality, the function of the concepts and properties that comprise it can be precisely defined, which can lead to certainty (or at least considerable confidence) in matters of cause and effect within the model. Put into the language of possible-worlds logic, whatever the premises that define a possible world entail is necessarily true in that world. Knowing that something will necessarily happen is perfect foreknowledge, and some of our models apply so reliably to the real world that we feel great confidence that many things will happen just the way we expect, even though we know that in the real world extraneous factors occasionally prevent our simple models from being perfect fits. We also use many models that are only slightly better than blind guessing (e.g. weather forecasting), but any measure of confidence better than guessing provides a huge advantage.
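
Since the text borrows the word “object” from object-oriented programming, a short Python sketch may help. The class and every property and salience value below are hypothetical, chosen only to mirror the idea that a concept is what we know about a referent rather than the referent itself.

# Illustrative sketch only: a concept modeled as what we know about its
# referent, with salience marking how useful each property is likely to be.
from dataclasses import dataclass, field

@dataclass
class Concept:
    referent: str                                    # what the concept is about
    properties: dict = field(default_factory=dict)   # property -> salience (0..1)

    def salient(self, threshold=0.5):
        """Return the properties worth attending to in a generic model."""
        return {p: s for p, s in self.properties.items() if s >= threshold}

apple = Concept(
    referent="the apple itself, which we never know directly",
    properties={"edible": 0.9, "roughly round": 0.8, "red": 0.6, "has a sticker": 0.1},
)
print(apple.salient())   # keeps edible, roughly round, red; drops the sticker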

Though knowledge is imperfect, we must learn so we can act confidently in the world. Our two primary motivations to learn are consequences and curiosity. Consequences inspire learning through positive and negative feedback. Direct positive consequences provide an immediate reward for using a skill correctly. Direct negative consequences, aka the school of hard knocks, let us know what it feels like to do something wrong. Indirect positive or negative consequences such as praise, punishment, candy, grades, etc., guide us when direct feedback is lacking. The carrot-and-stick effect of direct and indirect consequences pulls us along, but we mostly need to push. We can’t afford to wait and see what lessons the world has for us; we need to explore and figure it out for ourselves, and for this we have curiosity. Curiosity is an innate, subconscious motivating force that gives us a rewarding feeling for acquiring knowledge about useless things. Ok, that’s a joke; we are most curious about things we think will be helpful, but we do often find ourselves fascinated by minor details. But curiosity drives us to pursue mastery of skills assiduously. Since the dawn of civilization, people have needed a wide and ever-changing array of specialized skills to survive. We don’t really know what knowledge will be valuable until we use it, so our fascination with learning for its own sake is critical to our survival. We do try to guess what kind of knowledge will benefit people generally and we try to teach it to them at home and in school, but we are just guessing. Parenting never had a curriculum and only emerged as a verb in the 1960s. Formal education traditionally stuck to history, language, and math, presumably because they are incontrovertibly useful. But picking safe subjects and formally instructing them is not the same as devising a good education. The Montessori method addresses a number of the shortcomings of traditional early education, largely by putting more emphasis on curiosity than consequences. In any case, evolution will favor those with a stronger drive to explore and learn over the less curious, up until the point where it distracts them from goals more directly necessary for survival. So curiosity is balanced against other drives but is increasingly helpful in species that are more capable of applying esoteric knowledge, which made it a key component of the positive feedback loop that drove up human intelligence. Because it is so fundamental to survival, curiosity is both a drive and an emotion; I’ll clarify the distinction a bit further down.

To summarize, consciousness operates as a discrete top-level decision-making process in the brain by applying model-driven methods while simultaneously considering data-driven subconscious inputs. We compartmentalize the world into concepts and into models within which those concepts operate according to rules of cause and effect. Emotional rewards, including curiosity, continually motivate us to strive to improve our models to be more successful. Data-driven approaches can produce good decisions in many situations, especially where ample experience is available, but they are ultimately simplistic, “mindless” uses of pattern recognition which can’t address many novel problems well. So the brain needs the features that consciousness provides — awareness, attention, feeling, and thinking — to achieve model-based results. But now for the big question: why is consciousness “experienced”? Why do we exist as entities that believe we exist? Couldn’t the brain go about making top-level decisions effectively without bothering to create self-possessed entities (“selves”) that believe they are something special, something with independent value above and beyond the value of the body or the demands of evolution? Maybe; it is not my intention to disprove that possibility. But I do intend to prove that first-person experience serves a vital role and has a legitimate claim to existence, namely functionality, which I have elaborated on at length already but which takes on new meanings in the context of the human mind.

Qualia and Thoughts

The context the brain finds itself in naturally drives it toward experiencing the world in the first person. The brain must conduct two activities during its control of the body. First, it must directly control the inputs and outputs, i.e. receive sensory information and move about in the world. And second, it must develop plans and strategies telling it what to do. We can call these sets of information self and not-self, or self and other, or subject and object (before invoking any concept of first-personness). It is useful to keep these two sets of information quite distinct from each other and to have specialized mechanisms for processing each. My hypothesis of consciousness is that it is a subprocess within the brain that manages top-level decisions, and that the (subconscious) brain creates an experience for it that makes available only the information relevant to top-level decisions. In particular, it uses specialized mechanisms to create a very different experience for self-information than for not-self-information. Self-information is experienced consciously as awareness, attention, feelings, and thoughts, but by thoughts, here, I mean just experiencing the thoughts without regard to their content. These things constitute our primary sense of self. Not-self-information is the content of the thoughts, i.e. what we are thinking about. Put another way, the primary self is the information an agent grasps about itself automatically (without having to think about it), and the not-self comprises the things it sees and acts upon as a result of thinking about them. It is the responsibility of the consciousness subprocess to make all self-information appear in our minds experientially without having to be about anything. The customized feel of this information can be contrasted with the representational “feel” of not-self-information. Not-self-information doesn’t “feel” like anything at all; it just tells us things about other things, representing them, describing them and referencing them. Those things themselves don’t need to really exist; they exist for us by virtue of our ability to think about them.

We know what our first-person experience feels like, but we can’t describe it objectively. Or rather, we can describe anything objectively, but not in a way that will adequately convey what that experience feels like to somebody who can’t feel it. The qualities of the custom feelings we experience are collectively called qualia. We distinguish red from green as very different qualia which we know from intimate experience, but we could never characterize in any useful way how they feel different to a person who has red-green color blindness. Each quale (pronounced kwol-ee, singular of qualia) has a special feeling created for our conscious minds by untold subconscious processing. It is very real to our conscious minds, and it has an objective basis in the fact that some very specialized subconscious processing is making certain information feel a certain way for our conscious benefit. Awareness and attention themselves are fundamental sorts of qualia that conduct all other qualia. And thoughts feel present in our minds as we have them, but thoughts don’t “feel” like anything because they are general-purpose ways of establishing relationships about things (beliefs are an exception, discussed below, that carry a feeling of “truth”). So we most commonly use the word qualia to describe feelings, which each have a very distinct customized feel that awareness, attention, and thoughts lack. In other words, all awareness, attention, and thoughts feel the same, but every kind of feeling feels different. Our qualia for feelings divide into sensory perceptions, drives, and emotions. Sensory perceptions come either from sense organs like eyes, ears, nose, tongue, and skin, or from body senses like awareness of body parts (proprioception) or hunger. Drives and emotions arise from internal (subconscious) information management mechanisms which I will describe further down. Our qualia, then, are the essence of what makes our subjective experience exist as its own perspective, namely the first-person perspective. They can be described in terms of what information they impart, but not how they feel (except tautologically in terms of other feelings).

We create a third-person world from our not-self-information. This is the contents of all our thoughts about things. Where qualia result from innate subconscious data processing algorithms, thoughts develop to describe relationships about things encountered during experience. Thoughts can characterize these relationships by representing them, describing them, or referencing them. This description makes it sound like some “thing” must exist to which thoughts refer, but actually thoughts, like all information, are entities of function: information separates from noise only to the extent that it provides predictive power, aka function. It can often be useful when describing information to call out conceptual boundaries separating functional entities of representation, description or reference, but much information (especially subconscious information) is a much more abstract product of data analysis. But in any case, thoughts are data about data, patterns found in the streams of information that flow into our brains. We can consequently have thoughts about awareness, attention, feelings, and other thoughts without those thoughts being the awareness, attention, feelings, and thoughts themselves. These thoughts about our conscious processes form our secondary sense of self. We know humans have a strong secondary sense of self because we are very good at thinking, and so we suspect other animals have a much weaker secondary sense of self because they are not as good at thinking, though they do all have such a sense because all animals with brains can analyze and learn from experiential data, which includes data about consciousness.

This logical separation of self and not-self-information does not in itself imply the need for qualia, i.e. first-person experience. The reason feeling is vital to consciousness has to do with how self-information is integrated. Subconsciously, we process sensory information into qualia so we can monitor all our senses simultaneously and yet be able to tell them apart. It is important that senses work this way, as the continuous, uninterrupted, and clearly distinguished flow of information from each sense helps us stay alive. But it is how we tell them apart that gives each quale its distinctive feel. Ultimately, information is a pattern that can be transmitted as a signal, and viewed this way each quale is much like another because patterns don’t have a feel in and of themselves. But each quale reaches our conscious mind through its own data channel (logically speaking; the physical connection is still unknown) that brings not only the signal but a custom feel. What we perceive as the custom feel of each quale is really just “subconscious sugar” to help us conveniently distinguish qualia from each other. The distinction between red and green is just a convenient fiction created by our subconscious to facilitate differentiation, but they feel different because the subconscious has the power to create feelings and the conscious mind must accept the reality fed to it. We can think whatever conscious thoughts we like, but qualia are somehow made for us outside conscious control. While the principal role of qualia is to distinguish incoming information for further analysis, they can also trigger preferences, emotions, and memories. Taste and smell closely associate with craving and disgust. Color and sound have an inherent calming or alerting effect. These associations help to further differentiate qualia in a secondary way. To some extent, we can feel qualia not currently stimulated by the senses by remembering them, but the memory of a quale is not as vivid or convincing as it felt firsthand, though it can seem that way when dreaming or under hypnosis.

Color

How we tell red and green apart is ineffable; we just can. We see different colors in the rainbow as a wide variety of distinctive qualities and not just as shades of (say) gray. All shades of gray share the same color quale and only vary in brightness. We are dependent on ambient lighting even to tell them apart. Not so with red and green, whose qualia feel completely different. This facility stems from how red, green, and blue cone cells in the eye separate colors into independent qualia. Beyond this, we see every combination of red, green, and blue as a distinct color, up to about ten million of them. While we interpret many of these combinations as colors in the spectrum, three color values in combination define a space, not a line, so we see many colors not in the rainbow. Significantly, we interpret the combination of red and blue without green as pink, and all three together as white. Brown is another non-spectral color. In fact, we can only distinguish hundreds to (at the very most) thousands of distinct colors along the visible band of the electromagnetic spectrum, which means that nearly all colors we distinguish are non-spectral. Although distinguishing colors is presumably the primary function of color vision, this doesn’t explain why colors are “pretty”, i.e. colorful. First, note that if we take consciousness as a process that is only fed simple intensity signals for red, green and blue, then it could distinguish them but they wouldn’t feel like anything. I propose, but can’t prove, that the qualia we feel for colors, which I called subconscious sugar above, result from considerable additional subconscious processing that extends a simple intensity signal into something the conscious mind registers as distinctive far more readily than it would if the signals appeared as, say, a bank of gauges. While qualia are ultimately just information and not qualities, the way we consciously feel the world is entirely a product of the information our subconscious feeds us, so we shouldn’t think of our conscious perception of the world as a reflection of it; we should think of it as a complex creation of the subconscious that gives a deep informational structure to each kind of sensory input. Qualia are like built-in gauges which we don’t have to read; we can just feel the values they carry using awareness alone. Since the role of consciousness is to evaluate available information to make decisions quickly and continuously, anything the subconscious mind can do to make different kinds of information distinctively appear in our awareness helps. We can distinguish many colors, but nowhere near all the light information objects give off. Our sense of a three-dimensional object feels to us like we know the noumenon of the object itself and are not just representing it in our minds. To accomplish this, we subconsciously break a scene down into a set of physically discrete objects, automatically building an approximate inventory of objects in sight. Our list of objects and the features of each, like their color, form an informational picture. That picture is not the objects themselves but is just a compendium of facts we consider useful. Qualia innately convert a vast stream of information into a seamless recreation of the outside world in our head. Our first-person lives are just an efficient way to allow animals to focus their decision-making processing on highly-condensed summaries of incoming information, solving the problem of TMI (too much information).

But why does red specifically feel red as we know it, and is everyone’s experience of red the same? The specific qualities of the qualia are at the heart of what David Chalmers has famously called the hard problem of consciousness. This problem asks why we experience qualia at all, and specifically why a given quale feels the specific way it does. We think of qualia as being as real as anything we know since all of our reality is mediated through them. However, we must admit that they are imaginary informational constructs our brain puts together for us subconsciously and presents to our conscious mind with the mandate that we believe they are real. So, objectively, we realize our brains are creating these experiences for us. It is in our brain’s best interests that the information the qualia provide us be as consistent with the outside world as possible so that we will have full confidence to act. When we lose a vital sense, e.g. when we are plunged into darkness, our confidence and reactions are severely compromised. But even knowing that a quale’s feel is imaginary doesn’t explain why it has the characteristic feel that it has. To explain this, I would suggest that we recall that the mind exists to serve a function, not just to exist as physical noumena do. The nature of the qualia, then, is intimately and entirely a product of their function: they feel like what they inspire us to do. While the primary role of the feel of qualia is to let us feel many channels of information simultaneously while keeping them distinct, their exact feeling is designed to persuade us to consider them appropriately. Green is not just distinct from red, it is calming where red is provocative. Colors are pretty, yes, but my contention is that their attractiveness actually derives from their emotional undertones. Grays don’t carry this extra emotional coloring, so we feel neutral about them. There is typically no reason to be interested in gray. From an evolutionary standpoint, the more helpful it is to consciously notice and distinguish a quale when it is perceived, the stronger the need for it to have a strong and distinctive custom feeling. Hunger, thirst, and sex drives can be very compelling. Damaging heat or cold or injuries are painful. Dangerous substances often smell or taste bad. We shy away from the unpleasant and seek the comfortable to the exact degree that the custom feelings of the qualia involved inspire us. Qualia are the way the subconscious inspires the conscious mind to behave well. To develop this further, let’s consider how we perceive color.

We sense colored light using three types of cone cells in the retina. A second stage of vision processing done by retinal ganglion cells creates two signals, one indicating either yellow or blue (y/b) and the other indicating either red or green (r/g). Because these two signals can never produce both yellow and blue or both red and green, it is psychologically impossible for something to be reddish green or yellowish blue (note that mixing red and green paint may make brown, and yellow and blue may make green, but that is not the same thing as seeing something as reddish green or yellowish blue). These four psychological primary colors are then blended in a third stage of vision processing to make all the colors we see. The blended colors form a color wheel from blue to green, green to yellow, yellow to red, and red to blue. These follow the familiar spectral colors until one reaches red to blue, at which point they pass through the non-spectral purples, including magenta. If one blends toward white in the center of the color wheel one creates pastel colors, or one can blend toward any shade of gray or black in the center for additional colors. This gives the full range of ten million colors we can see, of which only a few hundred on the outer rim are spectral colors. Instead of thinking of color as a frequency of light, it is more accurate to think of it as a three-dimensional space using y/b, r/g and brightness dimensions that is built with measurements from the four kinds of photoreceptor proteins (the three cone photopsins plus the rods’ rhodopsin) in the eye. More accurately still, color goes through a fourth stage in which the colors surrounding an object are taken into consideration. The brain will automatically adjust the color actually sensed to the color the object would most likely have if properly illuminated under white light by reversing the effects of shadowing and colored lighting. For example, contextual interpretation can make gray look like blue in a yellow context or like yellow in a blue context. The brain can’t defeat this illusion until the surrounding context is removed.4
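
A minimal numeric sketch of the opponent-process idea, in Python. The linear weights below are invented for illustration; real retinal ganglion cells combine cone signals in far more elaborate and nonlinear ways, so treat this only as a picture of why r/g and y/b are single signed channels.

# Illustrative sketch only: toy opponent-process coding with made-up weights.

def opponent_channels(L, M, S):
    """Map long/medium/short cone responses (0..1) to toy opponent signals.

    Returns (rg, yb, brightness):
      rg > 0 leans red,    rg < 0 leans green
      yb > 0 leans yellow, yb < 0 leans blue
    Each is a single signed channel, which is why nothing can look
    reddish green or yellowish blue.
    """
    rg = L - M                     # red vs. green opponency
    yb = (L + M) / 2 - S           # yellow vs. blue opponency
    brightness = (L + M + S) / 3   # crude luminance stand-in
    return rg, yb, brightness

print(opponent_channels(0.9, 0.7, 0.1))  # warm, yellowish, fairly bright
print(opponent_channels(0.2, 0.3, 0.9))  # cool, bluish, dimmer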

One way to explore the meaning of qualia is by inverting them5. John Locke first proposed an inverted spectrum scenario in which one person’s blue was another person’s yellow and concluded that because they could still distinguish the colors as effectively and we could not see into their minds, completely “different” but equivalent ideas would result. Locke’s choice of yellow and blue was prescient, as we now know the retina sends a y/b signal to the brain which could theoretically be flipped with surgery, producing the exact effect he suggests6. Or we could design a pair of special glasses that flipped colors along the y/b axis, which would produce a very similar effect. (Let’s ignore some asymmetries in how well we discriminate colors in different parts of the color wheel.)7 Locke’s scenario presumes a condition present from birth and concludes that while an inverted person’s ideas would be different, their behavior would be the same. As a functionalist, I disagree. I would argue that whether the condition existed from birth or was the result of wearing glasses, the inverted person would see yellow the same as the normal person. This hinges entirely on what we mean by “different” and “same”; after all, no two people have remotely similar neural processes at a low level. By “same”, I mean what you would expect: if we could find a way to let one person peek into another person’s mind (and I believe such mind sharing is possible in principle and happens to some degree with some conjoined twins), then they would see colors the way that person did and would find that the experience was like their own. What I mean by this stance is that our experience of yellow and blue is not created by the eye but by the visual cortex, which interprets the signals to serve functional goals. But wait: surely if one put on the glasses, yellow would become blue and vice versa right away. Yes, that is true, but how would our minds accommodate the change over time? Consider the upside-down vision experiment conducted by George Stratton in 1896 and again by Theodor Erismann and Ivo Kohler in the 1950s. Wearing glasses that inverted the perceived image from top to bottom and left to right, just as a camera flips an image, was disorienting at first, but after a week to ten days the world effectively appeared normal and one could even ride a motorcycle with no problem. The information available had been mapped to produce the same function as before. I believe an inverted y/b signal would produce the same result, with the colors returning to the way the mind used to see them after some days or weeks. Put another way, many of the associations that make yellow appear yellow are functional, so for the brain to restore functionality effectively it would subconsciously notice that the best solution is to change the way we interpret the incoming signals to realign them with their functions. For colors to function correctly, yellow needs to stand out more provocatively to our attention process than blue, and blue needs to be darker and calmer. We would remember how things used to be colored and how they felt, and our brains would not be happy with the new arrangement, in which yellow things now seem calm and blue things stand out, and would start to pick up on the mismatch. As our feelings toward the colors changed, our subconscious would become more inclined to map them back to the way they were. And if it were a condition we were born with, we would just wire physical yellow to psychological yellow in the first place.
I don’t know if adult minds would necessarily be plastic enough for this effect to be perfect, but there is no reason to think they are any less capable of reversing a color flip than an orientation flip, though it would probably take longer as the feedback is much more subtle. Our brains probably have the plasticity needed to make these kinds of adjustments because we do continually adjust our interpretation of sensory information, for example to different levels of brightness. Our brains are built to interpret sensory data effectively. I am not saying that yellow has no real feel to it and that we just make it up as we go; quite the opposite. I am saying that yellow and all our qualia have a substantial computational existence at a high (but subconscious) level in the cortex which is flexibly connected to the incoming sensory signals, and that this flexibility is not only lucky but necessary. Knee-jerk-type reflexes are hardwired, but it is much more practical and adaptable for many fixed subconscious skills (like sensory interpretation) to develop in the brain adaptively to fulfill a role rather than through rigid neural connections. This kind of development has the additional advantage that it can be rewired or refined later for special circumstances, for example to help the remaining senses compensate when one sense is lost (e.g. through blindness).

Emotions

We have two kinds of qualia, sensory and dispositional. Sensory qualia bring information from outside the brain into it using sensory nerves, while dispositional qualia bring us information from inside our brain that tells us how we feel about ourselves. We experience both consciously, but both kinds of experiences are created for us subconsciously. Dispositional qualia come in two forms, drives and emotions. Each drive and emotion has a complex subconscious mechanism that generates it when triggered by appropriate conditions. Drives arise without conscious involvement, while emotions depend on how we consciously interpret what is happening to us. The hunger drive is triggered by a need for energy, thirst by a need for water, sleep for rest, and sex for bonding and reproduction. Emotions are subconscious reactions to conscious assessments: a stick on the ground prompts no emotional reaction, but once we recognize it as a snake we might feel fear. When an event fails to live up to our expectations, we may feel sad, angry, or confused, but the feeling is based on our conscious assessment of the situation. We can’t choose to suppress an emotion because the subconscious mind “reads” our best conscious estimation of the truth and creates the appropriate reaction. But we can learn to control our emotional reactions better by viewing our conscious interpretations from more perspectives, which is another way of saying we can be more mature. Both drives and emotions have been shaped by evolution to steer our behavior in common situations, and are ultimately the only forces that motivate us to do anything. The subconscious mind can’t generate emotions independent of our conscious assessments because only the conscious mind understands the nuances involved, especially with interpersonal interactions. And the rational, conscious mind needs subconsciously-generated emotional reactions because rational thinking needs to be directed towards problems worth solving, which is the feedback that emotions provide.

We have more emotions than we have qualia for emotions, which means many emotions overlap in how they feel. The analysis of facial expressions suggests there are just four basic emotions: happiness, sadness, fear, and anger.8 I disagree with that, but these are certainly four of the most significant emotional qualia. There are no doubt good evolutionary reasons why emotions share qualia, but the most basic reason, it seems to me, is that qualia help motivate us to react in certain ways, and we need fewer ways to react than we need subconscious ways to interpret conscious beliefs (i.e. emotions). So satisfaction, amusement, joy, awe, admiration, adoration, and appreciation are distinct emotions, but share an uplifting happy feeling that makes us want to do more of the same. We distinguish them consciously, so if the quale for them feels about the same it doesn’t impair our ability to keep them distinct. Aggressive emotions (like happiness and anger) should spur us to participate more, while submissive emotions (like sadness and fear) should spur us to back off. We don’t need to telegraph all our emotional qualia through facial expressions; sexual desire and romance are examples that have their own distinct qualia (and sex, like curiosity, is backed by both drives and emotions). We feel safe emotions (like happiness and sadness) when we are not threatened, and unsafe emotions (like anger and fear) when we are. In other words, emotions feel “like” what action they inspire us to take. Wikipedia lists seventy or so emotions, while the Greater Good Science Center identifies twenty-seven9. But just as we can see millions of colors with three qualia, we can probably distinguish a nearly unlimited range of emotional feelings by combining four to perhaps a dozen emotional qualia which correspond to a nearly unlimited set of circumstances. Sadness, grief, despair, sorrow, and regret principally trigger a sadness quale in different degrees, and probably also bring in some pain, anger, surprise, confusion, nostalgia, etc. Embarrassment, shyness, and shame may principally trigger awkwardness, tinged with sadness, anxiety, and fear. Similarly to sensory qualia, emotional responses recalled from memory tend not to be quite as vivid or convincing as they originally were. When we remember an emotion, we feed the circumstances to our subconscious, which evaluates how strongly we believe the situation calls for an emotional response. Remembered emotional reactions are muted by the passage of time, during which memories fade and lose their relevance and connection to now, and because memory only stores a small fraction of the original sensory qualia experienced.
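
The color analogy in the previous paragraph can be made concrete with a toy Python sketch. The list of basic qualia and every weight below are invented purely for illustration; they are not empirical claims about which qualia exist or how real emotions decompose.

# Illustrative sketch only: many named emotions sharing a handful of qualia,
# the way millions of colors share a few color channels. All values invented.

QUALIA = ("happiness", "sadness", "fear", "anger", "awkwardness")

emotion_blends = {
    "joy":           {"happiness": 1.0},
    "grief":         {"sadness": 0.9, "anger": 0.2},
    "regret":        {"sadness": 0.6, "awkwardness": 0.3},
    "embarrassment": {"awkwardness": 0.8, "sadness": 0.2, "fear": 0.1},
    "dread":         {"fear": 0.8, "sadness": 0.3},
}

def felt_quale(emotion, intensity=1.0):
    """Return the blended 'feel' of an emotion as weights over basic qualia."""
    blend = emotion_blends[emotion]
    return {q: intensity * blend.get(q, 0.0) for q in QUALIA}

print(felt_quale("embarrassment", intensity=0.5))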

When I was young, I tried to derive a purely rational reason for living, but the best I could come up with was that continuing to live is more interesting than dying. Unfortunately, this reason hinges on interest or curiosity, which I did not realize was an emotion. The truth is that, as much as it irks committed rationalists, there is no rational reason to live or to do anything. Our reason for living, and indeed all our motivation, comes entirely from drives and emotions. The purpose of the brain is to control an animal, and the purpose of the conscious mind within it is to make top-level decisions well. The brain is free to pursue any course that furthers its overall function, and it does, but the conscious mind, being organized around the first-person perspective, must believe that the top-level decisions it makes are the free product of its thought processes, i.e. it must believe in its own free will. Humans have an unlimited capacity to pursue general-purpose thought processes using a number of approaches (which I have not yet described), and there is nothing intrinsic to general-purpose thoughts that would direct them along one path in preference to any other. In other words, we can contemplate our navels indefinitely. But we don’t; we are still physical creatures who must struggle to survive, so our minds must have internal mechanisms that will ensure we apply our minds to survive and flourish. If our first-person experience consisted only of awareness, attention, sensory qualia and thoughts, we would not prioritize survival and would soon die. Drives and emotions fill the gap through dispositional qualia. These qualia alter our mood, affecting what we feel inclined to do. They create and maintain our passion for survival and nudge us toward behaviors that ensure it. They don’t mandate immediate action the way reflexes do, because that would be too inflexible, but they apply enough “mental pressure” to “convince” us to do their bidding eventually. Drives press on us independent of any conscious thought, but emotions “read” our thoughts and react to them. So while our rational thinking does spur us toward goals, those goals ultimately come from drives and emotions. The purpose of life, from the perspective of consciousness, is to satisfy all our drives and emotions as best we can. We must check all the boxes to feel satisfied with the result. We sometimes oversimplify this mission by saying happiness is the goal of life, but the positive emotional feeling of joy is just one of many dispositional qualia contributing to our overall sense of fulfillment of purpose. People with very difficult lives can feel pretty satisfied with how well they have done despite a complete absence of joy.

Thoughts

Let’s take a closer look now at thoughts. Thoughts are the product of reasoning, which refers loosely to all the ways we consciously manage descriptive or referential information, i.e. information that is about something else. Thoughts are constructed using models that combine concepts and subconcepts, which are in turn based on sensory information. Although reasoning is conscious, it draws on subconscious sources like feelings, recollection, and intuition for information and inspiration. The subconscious mind provides these things either unbidden or in response to conscious queries. Consequently, many of our ideas and the models that support them arise entirely from intuitive, subconscious sources and appear in our minds as hunches about the way things are. We then employ these in a variety of more familiar conscious ways to form our thoughts about things. This is a highly iterative process that over a lifetime leads to what seems to be a clear idea of how the world works, even though it is really a very loose amalgamation of subconceptual and conceptual frameworks (mental models). Consciousness directs and makes top-level decisions, but is heavily influenced by qualia, memory, and intuition.

Unlike awareness, attention, and feelings, which are innate, reactive information management systems, thinking is the proactive management of custom information that an animal gathers from experience through its single stream of consciousness and multiple paths of subconsciousness. Our ability to think is innate, but how we think about things both specifically and generally is unlimited and unpredictable because how it develops depends on what we experience. We remember things we think about as thoughts, which are configurations of subconceptual and conceptual details. Subconsciously, we derive patterns from our experiences and store them away as subconcepts. Subconcepts group similar things with each other without labeling them as specific kinds. Consciously, we label frequently-seen groupings as kinds or concepts, and we associate details with them that apply to all, most, or at least some instances we encounter. Once we have filed concepts away in our memory, we can access them subconsciously, so our subconscious minds work with both subconcepts and concepts. Consciously, subconcepts all bubble up from memory and usually feel very familiar, though sometimes they only feel like vague hunches. Much of subconceptual memory imitates life in that we can imagine feeling something firsthand even though we are just imagining doing so. Our sense of intuition also springs from familiarity, as it is really just an effort to recall explanations for situations similar to the one we currently find ourselves in. We can reason using both subconcepts and concepts, essentially pitting intuition against rationality. In fact, we always look for intuitive support of rational thinking, except for the most formalized rational thinking in strict formal systems. We also perceive concepts as they bubble up through memory. Concepts can be thought of as a subset of subconcepts which have been labeled and clarified to a higher degree. As we think about concepts and subconcepts pulled from memory we continually form new concepts and subconcepts as needed, which we then automatically commit to memory. How well we commit something to memory and how well we recall it later is largely a function of how intently we rehearse and use it.

We have a natural facility to form mental models and concepts, and with practice we can develop self-contained logical models where the relationships are explicit and consistently applied. Logical reasoning leverages these conceptual relationships to develop entailments. Rigorous logical systems like mathematics can prove long chains of necessary conclusions, but we have to remember that all models are subconceptual at the lowest levels because concepts are built out of subconcepts. The axiomatic premises of formal logical models arguably need no subconceptual foundation, but if we ever hope to apply such models to real-world situations then we need subconceptual support to map those premises to physical correlates. From a practical standpoint, we let our intuition (meaning the whole of our subconscious and subconceptual support system) guide us to the concepts and purposes that matter most, and then we use logical reasoning to formalize mental models and develop strong implications.

Logical reasoning itself is a process, but concepts are functional entities that can represent either physical entities or other functional entities. Concepts can be arbitrarily abstract since function itself is unbounded, but functional concepts typically refer to procedures or strategies for dealing with certain states of affairs within certain mental models. The same concept can be applied in different ways across an arbitrarily large range of mental models whose premises vary. Consequently, concepts invariably specify only high-level relational aspects, often with built-in degrees of freedom. Apples don’t have to be red, but are almost certainly red, yellow, green, or a combination of them, though in very exceptional cases might be colored differently still. Concepts are said to have prototypical properties that are more likely to apply in generic mental models than more specific ones. As a functional entity, a concept or thought has its own noumenon, and because it is necessarily about something else (its referent), that referent also has its own noumenon. We think about the thought itself via phenomena or reflections on its noumenon, and additionally we only know of the referent’s noumenon through phenomena about it. Our awareness of our knowledge is hence a phenomenon, even though noumena underlie it (including functional and not just physical noumena). Just as with physical noumena, we can prove the existence of functional noumena (thoughts, in this case) by performing repeated observations that demonstrate their persistence. That is, we can think the same sorts of thoughts about the same things in many ways. Persistence is arguably the primary attribute of existence, so the more we observe something and see consistency, the more it can be said to exist. Things exist functionally because we say they do, but physical existence is unprovable and is only inferred from phenomena. While physical persistence means persistence in spacetime, functional persistence means persistence of entailment: the same causes yield the same effects. Put another way, logic is inherently persistent, so any logical configuration necessarily exists functionally (independent of time and space).

All thoughts fall into one of two camps, theoretical or applied. Thoughts are composed of information, and while information is always a functional construct, its function is latent in theoretical information and active in applied information. We process theory and application quite differently because theory is mostly conceptual while application is mostly subconceptual. Subconceptual thought is one level deep; we look things up by “recognizing” them as appropriate. So subconceptual “theories” consist of direct, “one-step” consequences: using recall we can search all the conceptual models we know to see if we can match a starting condition to an outcome without thinking through any steps one might need to actually employ the model. For example, if we need to go to work, we will recall that our car is appropriate without any thought as to why or how we drive it. But if we need to compare the car to going on foot or by bike, motorcycle, bus, or car service, for example, we would use logical models that spell out the pros and cons of each. Logical models, aka conceptual theories, decompose problems into explanatory parts. Application, on the other hand, is mostly a matching operation, which is why we usually do it subconceptually rather than devising a conceptual way to do it. We have a native capacity to align our models to circumstances and to keep them aligned by monitoring sense data as we go. Similarly, the easiest way to teach something is to demonstrate it, which taps into this native matching capacity. Monkey-see-monkey-do has been demonstrated at the neuron level via mirror neurons, which fire both when we perform an action and when we watch someone else perform it. Alternately, one could verbally teach applied information via a conceptual theory. For example, why a cyclist must countersteer to the left to make a right turn can be explained via physical theory. But few cyclists ever learn the theory. Like most applied knowledge, it is easily done but not easily explained. This exemplifies why there is no substitute for experience; conceptual theories are just the tip of the iceberg of all the hands-on knowledge and experience required to do most jobs well. Consequently, most of our basic knowledge comes from doing, not studying, which means development of subconceptual rather than conceptual knowledge. As we do things, our subconscious innate heuristics will detect subconceptual patterns which we can then recall later.
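
The commuting example can be sketched in Python to show the difference between one-step recall and a conceptual comparison. The options, attributes, and weights are all hypothetical, chosen only to illustrate the two styles of “theory” described above.

# Illustrative sketch only: one-step habitual recall versus an explicit
# pros-and-cons model. All options, attributes, and weights are invented.

# Subconceptual "theory": match the need directly to a remembered solution.
habits = {"get to work": "drive the car"}

def recall(need):
    return habits.get(need)

# Conceptual theory: decompose each option into parts and weigh them.
options = {
    "car":     {"time": 0.9, "cost": 0.3, "exercise": 0.0},
    "bike":    {"time": 0.5, "cost": 0.9, "exercise": 0.9},
    "bus":     {"time": 0.6, "cost": 0.7, "exercise": 0.1},
    "on foot": {"time": 0.1, "cost": 1.0, "exercise": 1.0},
}
weights = {"time": 0.5, "cost": 0.2, "exercise": 0.3}

def compare(options, weights):
    score = lambda attrs: sum(weights[k] * v for k, v in attrs.items())
    return max(options, key=lambda name: score(options[name]))

print(recall("get to work"))      # "drive the car", with no reasoning about why
print(compare(options, weights))  # the best option under this explicit model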

In principle, theory is indifferent to application. We can develop arbitrarily abstract theories which may have no conceivable application. In practice, our time is limited and we need to be productive in life, so most of our theories are designed with possible applications in mind. We can certainly develop theories purely for fun, but then we are amused, which also serves a function. We are unlikely to pursue theories we don’t find interesting, because satisfying curiosity is, as noted, a basic drive and emotion. But my point is just that theory can proceed in any direction and stand on its own without regard to application. Application, on the other hand, cannot proceed in any direction but must make accommodations to the situation at hand. Application done purely subconceptually makes a series of best-fit matches from stored knowledge to current conditions. Application done with the support of theories, which are conceptual, will also make a series of best-fit matches. First, we subconsciously evaluate how well the current circumstances fit the models we know in order to pick the best model. We further evaluate the ways that model does and does not fit to establish rough probabilities that the model will correctly predict the outcome. Although theories are open-ended, application requires commitment to a single decision at a time. Given everything that can go wrong picking a model, fitting it to the situation, and extrapolating the outcomes, how can we quickly and continuously make decisions and act on them? The answer is belief. Beliefs must be general enough to support quick, confident decisions for almost any situation where a belief would be helpful, but specific enough to apply to real situations correctly. I said above that thoughts include ideas and beliefs: uncommitted thoughts are ideas, and committed ones are beliefs. Belief is also called commitment, opinion, faith, confidence, and conviction. The critical distinction between ideas and beliefs is that belief, like emotions, is a subconscious reaction to conscious knowledge. We feel commitment as a quale similarly to happiness and sadness, but it makes us feel determined, even to the point of stubbornness. We won’t act without belief, and conversely, belief makes us feel like acting, and acting confidently at that. Because belief has to pass through two evaluations, one conscious and rational and the other subconscious (which we then experience consciously via the feeling of commitment), the rational and subconscious components can get out of sync when more information becomes available. It sounds redundant, but we have to believe in our beliefs; that is, we have to rationally endorse what we feel. Our subconscious mind will resist changing beliefs because the whole value of beliefs comes from being tightly held. Trust, and trust in beliefs, must be earned over time. We are consequently prone to rationalizing, which is the creation of false reasoning that deflects new knowledge while supporting emotional loyalties. Ironically, rationalizing (in this sense) is an irrational abuse of rational powers. While this loyalty to beliefs is a handy survival mechanism, it also makes us very susceptible to propaganda and advertising, which seek to monopolize our attention with biased information to control our minds.10
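
A small Python sketch of belief as committed, sticky confidence. The threshold, inertia factor, and update rule are invented to mirror the description above, in which commitment licenses action and resists revision; they are not a model of how the subconscious actually weighs evidence.

# Illustrative sketch only: belief as confidence that licenses action and
# resists revision. All numbers and rules here are invented for illustration.

class Belief:
    def __init__(self, claim, confidence):
        self.claim = claim
        self.confidence = confidence   # 0..1, felt as commitment

    def ready_to_act(self, threshold=0.7):
        """We won't act without belief; commitment above threshold licenses action."""
        return self.confidence >= threshold

    def consider_evidence(self, evidence_strength, inertia=0.8):
        """New evidence moves confidence, but loyalty to the belief damps the shift."""
        self.confidence += (1 - inertia) * (evidence_strength - self.confidence)

route = Belief("the usual road is the fastest way home", confidence=0.9)
print(route.ready_to_act())        # True: act confidently
route.consider_evidence(0.2)       # strong contrary evidence (the road is closed)
print(round(route.confidence, 2))  # commitment drops only a little at first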

The effectiveness of theories and their degree of applicability are matters of probability, but belief creates a feeling of certainty. This raises the question of what certainty and truth really are. Logically, certainty and truth are defined as necessities of entailment; that which is true can be proven to be necessarily true from the premises and the rules of logic. One could argue that logical truths are tautologies and are true by definition and so are not telling us anything we didn’t know would follow from the premises. Usually, though, when we think about truth we are not concerned so much with the logical truths of theory, which may spell things out perfectly, but with the practical truths of applied knowledge. We can’t prove anything about the physical world beyond all doubt or know anything about the true nature of noumena, because knowledge is entirely a phenomenal construct that can only imperfectly describe the physical world. However, with that said, we also know that an overwhelming amount of evidence now supports the uniformity of nature. The Standard Model of particle physics holds that any two subatomic particles of the same kind are identical for all predictive purposes except for occupying a different location in spacetime.11 This uniformity translates very well for aggregate materials at human scale, leading to very reliable sciences for materials, chemistry, electricity, and so on. Models from physical science make quite reliable predictions so long as nothing happens to muck the model up. If something unexpected does happen, we can usually identify new physical causes and adjust or extend the models to restore our ability to predict reliably. Circumstances with too many variables or that involve chaotic conditions are less amenable to prediction, but even in these cases models can be developed that do much better than chance. If we believe something physical will happen, it means we are sufficiently convinced that the model we have in mind is applicable and will work. So the purpose of belief is to enable us to act quickly, which we feel subjectively as confidence. Because the goal is to be able to apply the model for all intents and purposes that are reasonably anticipated, our standard for truth is pragmatic rather than logical and so can admit exceptions, which can then be dealt with. The scope of what is reasonably anticipated is usually more of a hunch than something we reason out. This is a good strategy most of the time because our hunches, whose scope includes the full range of our subconscious intuition, are quite trustworthy for most matters. Most of what we have to do, after all, is just moving about and interacting with things, and our vast experience doing this gives us very reliable intuitions about what we should believe about our capabilities. Logical reasoning, and consciousness in general, steps in when the autopilot of intuition doesn’t have the answer.

Many of the considerations we manage as minds concern purposes, which have no physical corollary. In particular, whether we should pursue a given purpose is an ethical consideration, so we need to understand what drives ethics. We have preferences for some things over others due to dispositional qualia, which as noted above include drives and emotions. Like belief, ethics are a subconscious reaction to conscious knowledge, and so critically depend on both subconscious and conscious factors. But disposition itself is not rational, so ethics are ultimately innate. Considerable evidence now exists supporting the idea that ethical inclinations are innate12, and it stands to reason that behavioral preferences as fundamental as ethics would be influenced by genes because reason alone can’t create preferences. To understand ethical truth, then, we need only understand these inclinations. Note that inclinations don’t lead to universal ethics; ethics need to be flexible enough to adapt to different circumstances. We don’t yet have a sufficient understanding of the neurochemistry of the mind to unravel the physical basis of any qualia, let alone one as complex as ethics, but evolutionary psychology suggests some possibilities. Our understanding of evolution suggests that we should feel an ethical responsibility to protect, with decreasing priority, ourselves, our family, our tribe, our habitat, our species, and our planet. I think we do feel those responsibilities in just that decreasing order, which I can check by weighing them against each other to see how I would prioritize them. While these ethical inclinations are innate, we build our beliefs about ethics from ideas we learn consciously, which we then accept subconsciously and consequently feel consciously. So, as with any belief, our feelings can get out of sync with our thoughts. Many people believe things that contradict their ethical responsibilities without realizing it because they have either not learned enough to know better or have been taught or accepted false information. So adequate and accurate information is essential to making good ethical decisions.

The Self

I’ve spoken of self and not-self information, but not about self-awareness. Is self-awareness unavoidable, or can a conscious agent get by in the world without noticing itself? First, consider that all animals with brains exhibit behavior characteristic of awareness, but this doesn’t imply they all have the conscious experience of awareness. Ants, for example, act as if they were aware, but with such small brains, it may seem more plausible that their behavior is simply automatic. And yet ants are self-aware: “ants marked with a blue dot on their forehead attempted to clean themselves after looking in the mirror while ants with brown dots did not.”13,14 You can’t fake this: the ants knew exactly where their own bodies were and could map the reversed information in a mirror back onto their own bodies. From a behavioral standpoint, they are self-aware. But this still doesn’t imply they experience this awareness. Without anthropomorphizing, I would say that an aware (and self-aware) agent experiences things if its brain uses an intermediate layer between sensation and decision (essentially a theater of consciousness) that makes decisions based on a simplified representation of the external world rather than on all data at its disposal. This “experiencing” layer would necessarily acquire a first-person perspective in order to interpret that simplified world and keep it aligned with the external world. We can’t actually tell from behavior whether the ant brain has this additional layer, but I would argue that it does not. The reason is that arbitrarily complex behavior can be encoded as instinct, and in very small animals this strategy is the most effective route. They do detect foreign dots on their heads and interact with them, so their brains have advanced visual and motor skills, but they do this only as a consequence of instincts that help them preserve body integrity. Only a handful of more advanced animals (e.g. apes, elephants, dolphins, corvids) can pass the mirror test to identify themselves, which they do not as a direct consequence of instinct but because they have an agent layer that experiences self-awareness. Probably all vertebrates and some invertebrates have some degree of consciousness including awareness, attention, and some qualia, and probably all mammals and birds have some measure of emotions and thoughts. Those that can pass the mirror test have sufficient agency to model their physical selves and probably their mental selves as well. Still, though some animals have abilities that can match ours, and many have senses and abilities that surpass ours, something special about human consciousness sets us apart. That something, as I previously noted, is our greater capacity for abstraction, the ability to decouple information from physical referents, which lets us think logically independent of physical reality. And when we think about self-awareness, we are more concerned about this abstract introspection into our “real” selves, which is our set of feelings, desires, beliefs, and thoughts, than with our body awareness.

The idea that the consciousness subprocess is a theater that acts as an intermediate layer between sensation and decision raises the question of who watches that theater. The homunculus argument suggests that a tiny person (homunculus) in the brain watches it, which humorously implies an infinite regress of tinier and tinier homunculi. We don’t feel that someone else is inside our self, and so on ad infinitum, but we do feel that our self is inside us watching that theater. Explaining it away as an illusion is pointless because we already know that the way our minds interpret the world is just a representation of reality and not reality itself. It only counts as an illusion if the intent is to deceive or mislead us, and, of course, the opposite is the case: our senses are designed to give us reliable information. What is happening is that the qualia fed into the consciousness process represent the outside world using an internal representation that simplifies the world down to what matters to us, i.e. down into functional terms. The internal representation bears no resemblance to the external reality. How could it, after all? One is physical and the other is functional (information). But for consciousness to work effectively to bring all those disparate and incomplete sources of information together, it must create the lusion (to coin a word opposite to illusion) that all this information accurately represents the external world. To be completely accurate, it provides this information in two forms, spatial and projected. We feel our bodies themselves spatially through thousands of nerves carrying information to our brains from actual points in space. We interpret our bodies as if we had complete spatial knowledge of them, although these nerves actually convey rather limited information. This lusion seems real, though, because we have many body-sense qualia keeping us updated continuously, and we feel they give us seamless, complete awareness of our bodies in 3-D.

We have no such spatial sense beyond our body, but we have projected senses. Sight and hearing give our eyes and ears information about distant objects. To interpret it, we have a head-centric agent that builds a projection-based internal model of the external world. Smell and touch provide additional information from airflow and vibrations. Just as the body senses work together to create a seamless spatial model of the body, our projected senses work together to create a seamless projected model of the world. Eyes collect visual information the same way cameras do, so it should come as no surprise that vision interprets 2-D projections as a window into a 3-D world. Furthermore, binocular vision can in theory, and does in practice, achieve stereoscopic sight. The signal from a monocular or binocular video camera itself does nothing to facilitate the interpretation of images. We interpret that data using a highly bespoke process that cherry-picks the information most likely to be relevant (e.g. lines of sharp contrast, i.e. boundaries) and applies real-time recognition to create the lusion that one has correctly identified what one is looking at and can consequently interact with it confidently. The goal is entirely functional (i.e. to give us the competence to act), and our feeling that the outside world has “actually” been brought into our minds happens only because the consciousness process is “instructed” to believe what it sees. The resulting lusion is a fair and complete description of reality given “normal” senses, though we are abundantly biased about what counts as normal. Scientific instruments can extend our perception of the world down to the microscopic level (for example), but not by giving us new qualia. Rather, instruments just map information into the range of our existing qualia, which can create a new kind of fair and complete lusion when done well. Our sensory capacity remains constrained by the qualia our subconscious feeds our conscious mind. Also, we consciously commit to a single interpretation of a sensory input at a time15, demonstrating that sensory “belief” happens below the level of consciousness. Consciously, we go along with sensory belief so long as it is not contradicted, but if our recognition triggers any other match, as when a harmless stick starts to move like a snake, we will flip instantly to the new match. In practice, surprises are rare and we feel like we continuously and effortlessly understand what we are seeing. This is a pretty surprising result considering how complex a process real-time recognition is, but it is not so surprising once we appreciate the contribution of memory.

We are convinced by the lusion our senses present to us because it is integrated so closely with our memory. In fact, understanding is really a product of memory and not senses; our senses only confirm what we already (think we) know. We don’t have to examine everything about us closely because we have seen it all before (or stuff similar enough to it) and we are comfortable that further sensory analysis would only confirm what we already know. We do inevitably reexamine the objects we interact with in order to use them, but never more closely than is necessary to achieve the functions we have in mind, because knowledge is a functional construct. If we do take the time to study an ordinary object just for fun or to pass time, this dedication of attention has still surpassed all others in that moment to become the one action that has the most potential to serve our overall functional objectives. In other words, we can’t escape our functional imperatives. If our senses don’t align with any memory (e.g. consider the inverted glasses example, or being unexpectedly swallowed by a whale), we will be disoriented until we can connect senses to memory somehow. Our confidence in the seamless continuity of what we see is a function of the mind’s real world, which is the mental model (in our memory) of the current state of the physical world. Our sensory inputs don’t create that model, they only confirm it. The attention subprocess of consciousness (which is itself subconscious) stays alert for differences between what it expects the mind’s real world to be and what the senses provide. These differences are resolved subconsciously in real time to prevent one image from being simultaneously interpreted as multiple objects, despite the fact that any image is potentially ambiguous. The subconscious mind actively crafts the lusion we perceive, even though it can be tricked into seeing an illusion. The important thing is that the match between lusion and reality is generally very reliable, meaning we can act on it with confidence. Our whole suite of qualia continuously confirms that the mind’s real world is the actual world by making it “feel like” it is. As I work by the window, cars travel up and down my street and I hear them before I see them. I know roughly what they will look like before I see them, and I generally only see them peripherally, but I am confident in my seamless mental model despite being surprisingly short on detailed information.
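
Here is a toy sketch of this predict-then-confirm arrangement: a persistent internal model (the mind's real world) is trusted by default, and full recognition is redone only when the senses disagree with it by more than some margin. The objects, features, mismatch measure, and threshold are all hypothetical; the point is only the division of labor between the standing model and the senses.

```python
# Toy predict-then-confirm loop: the internal model is trusted by default;
# attention only triggers re-recognition when the senses contradict it.
# Everything here (objects, features, threshold) is illustrative.

def mismatch(expected, observed):
    """Fraction of expected features the senses fail to confirm."""
    missing = expected["features"] - observed
    return len(missing) / len(expected["features"])

def recognize(observed, known_objects):
    """Fall back to full recognition: find the best-matching known object."""
    return min(known_objects, key=lambda obj: mismatch(obj, observed))

def perceive(model, observed, known_objects, threshold=0.3):
    """Keep the current interpretation unless the senses disagree too much."""
    if mismatch(model, observed) <= threshold:
        return model                               # lusion confirmed; nothing to re-examine
    return recognize(observed, known_objects)      # surprise: flip to a new match

stick = {"name": "stick", "features": {"long", "thin", "still"}}
snake = {"name": "snake", "features": {"long", "thin", "moving"}}

current = stick
current = perceive(current, {"long", "thin", "still"}, [stick, snake])
print(current["name"])     # "stick": senses merely confirm the model
current = perceive(current, {"long", "thin", "moving"}, [stick, snake])
print(current["name"])     # "snake": mismatch exceeded the threshold, so we re-recognized
```

The asymmetry is the point: confirmation is cheap and constant, while full recognition is the exception triggered by surprise.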

Back to the original question, is there a homunculus viewing these spatial and projected views of the world? Yes, there definitely is, and it is the consciousness subprocess. This subprocess is technically quite distinct from a small person because it is just one subprocess in a person’s brain and not a whole person. The confusion comes because we identify our conscious minds with the bodies that use them to preserve the lusion. But we don’t need to extrapolate another body for the mind; we know minds are disembodied functional entities with no physical substance. So there is no regress; consciousness was always a disembodied agent. It is only awkward for us to conceive of ourselves as having both physical and functional components if we are militantly physicalist. Using common sense, we have had no trouble with this dichotomy, probably for thousands and even millions of years. It is not a regress to say that consciousness is a subprocess of the brain with its own internal model of the world; it is just a statement of fact. Consciousness is designed to see itself as an agent in the world rather than as a collector and processor of information, and the subconscious is designed to spoon-feed consciousness information in the forms that support that lusion. The result is that consciousness has a lusion that mirrors the outside world, and interacting with the lusion makes the body perform in the real world, much like pulling on puppet strings.

The Stream of Consciousness

We’ve taken a closer look at some of the key components, but haven’t yet hit some of the bigger questions. I have pointed out how consciousness separates self and not-self information. I described why qualia need to be distinguishable from each other and also how stronger custom feelings inspire stronger reactions. I reviewed how emotions and thoughts work. And I described how the self is an informational entity that is fed a simulation (a lusion) that lets us engage in virtual interactions that power physical interactions in the external world. But I still haven’t tied together just why consciousness uses awareness, attention, feelings, and thoughts to achieve its goal of controlling the body. It comes down to a simple fact: there is only one body to control. The consequence is that whatever algorithm is used to control the body must settle on just one action at a time (if one takes the direction of all bodily parts as a single, coordinated action). For this reason, I call this core part of consciousness that does the deciding the SSSS, for single-stream step selector. The SSSS must be organized to facilitate taking the most advantageous step in every circumstance it encounters. This is not the kind of problem modern-day procedural computer programs can solve because it must simultaneously identify and address many goals whose attainment covers many time scales. The evolutionary goal of survival, which requires sustenance and reproduction (at least), is the only long-term goal, but it must be subdivided into many sub-strategies to outperform many competitors.
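
One way to picture the SSSS in code, under heavy simplification, is as a loop in which many goals propose candidate actions in parallel but the selector emits exactly one action per tick. The goals, urgencies, and payoffs below are invented, and nothing about the sketch is meant to suggest the brain literally computes a product of two numbers.

```python
# A heavily simplified sketch of a single-stream step selector (SSSS):
# many goals run "in parallel" proposing actions, but exactly one action
# is chosen per tick, because there is only one body. All values are invented.

from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    urgency: float      # how pressing the underlying need feels right now
    payoff: float       # expected long-term benefit of acting on it

def gather_proposals(state):
    """Each goal inspects the current state and proposes an action."""
    proposals = []
    if state["hunger"] > 0.6:
        proposals.append(Proposal("find food", urgency=state["hunger"], payoff=0.8))
    if state["threat"] > 0.2:
        proposals.append(Proposal("move to safety", urgency=state["threat"] * 2, payoff=0.9))
    proposals.append(Proposal("explore", urgency=0.1, payoff=0.5))   # curiosity, always available
    return proposals

def select_step(state):
    """The single-stream constraint: exactly one action comes out per tick."""
    proposals = gather_proposals(state)
    return max(proposals, key=lambda p: p.urgency * p.payoff).action

print(select_step({"hunger": 0.7, "threat": 0.0}))   # "find food"
print(select_step({"hunger": 0.7, "threat": 0.9}))   # "move to safety"
```

The design point is only the shape of the interface: proposals can be generated in parallel and weighted however the subconscious likes, but the output is always a single step.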

From a logical standpoint, before we consider the role of consciousness, let’s look at Maslow’s Hierarchy of Needs, which Abraham Maslow proposed in 1943. He outlined five levels of needs which must be satisfied for a person to thrive: physiological, safety, belongingness, esteem, and self-actualization. All of these needs follow necessarily and logically from the single evolutionary need to survive, but it is not immediately apparent why and how. I would put his first two needs at the same level: physiological needs, such as food, shelter, and sex, are positive goals, while their negative counterparts, which are defensive or protective measures, are Maslow’s safety needs. All animals must achieve these positive and negative goals to survive. The next two needs are belongingness and esteem, which are only relevant for social species. Individuals must work together in a social species to maximize the fitness of the group, so whatever algorithm controls the body must incorporate drives for socialization. Belongingness refers to the willingness to engage with others, while esteem refers to the effectiveness of those engagements. Addressing physiological and safety needs benefits survival directly, but the benefits of one socialization strategy over another are indirect and can take generations to demonstrate their value. This value has been captured by instincts that make us inclined to favor socialization behaviors that have been successful in the past. We may feel that our social behavior is mostly rational, but it is mostly driven by emotions, which are complex instinctive socialization mechanisms. We must attend to these first four needs to prevent problems, so they are called deficiency needs. The last need, self-actualization, is called a growth need because it inspires action beyond covering deficiencies. For maximum competitiveness, all animals must both avoid deficiencies and desire gains for their own sake, which I described above as consequences and curiosity. But self-actualization goes beyond curiosity to provide what Kurt Goldstein originally called in 1934 “the driving force that maximizes and determines the path of an individual”16. We don’t really have any concrete evidence to support a self-actualization drive, but it does stand to reason that instincts would evolve to push us both to cover deficits and to seize opportunities to flourish and grow, and the latter should provide more of a competitive edge than the former. Such a drive would inspire people to achieve overall purposes in life, i.e. to seek meaning in life through certain kinds of activities, and all people feel a pull to satisfy such purposes. It is safe to say that the reasons we imagine drive us toward those purposes are rationalizations, meaning that we devise the reasons to explain the behavior rather than the other way around. But it doesn’t matter whether we know this; we are still inspired to lead purpose-driven lives that go above and beyond apparent survival benefit because the self-actualization drive compels us to excel and not just live.

Consciousness may not be the only solution that can satisfy these needs effectively, but it is the one nature has chosen. I’m going to list the main reasons consciousness is well suited to meet these needs, roughly from most to least important.

  1. Divide and conquer. Some decisions are more important than others, so any control algorithm needs to be able to devote adequate resources and focus to important decisions and less to mundane ones without becoming confused. Consciousness solves this in many ways. First and most significantly, it separates matters worthy of top-level consideration from those that are not by cordoning the latter off in the subconscious, outside conscious awareness. Second, it uses qualia with different levels of motivating power to focus more attention on pressing needs. And third, it provides a medium for conceptual analysis, which divides the world up into generalized groups about which one has useful predictive knowledge.

  2. Awareness and attention. Important information can arrive at any moment, so any control algorithm should stay alert to any and all information the senses provide. At the same time, computing resources are finite, so senses should specialize in just the kinds of information that have proven the most useful. Consciousness achieves this goal admirably with awareness and attention. Awareness keeps all qualia running at all times, while attention leverages subconscious algorithms to notice unusual inputs and also lets us direct conscious thoughts along the most promising pathways. Sleep is a notable exception and so it must provide enough benefits to warrant the cost.

  3. Incorporating feedback. Let’s look first at the selection part of single-stream step selection. The algorithm must pick one action instead of others, and it has to live with the consequences immediately. This is equivalent to saying it is responsible for its decisions. Consciousness creates a feeling of responsibility, not because we controlled what we did but because future behavior builds on past decisions. This side-steps the question of free will for now; I’ll get back to that. The important point is that consciousness feels like it caused decisions because this feeling of responsibility is such a great way to incorporate feedback.

  4. Single-stream. Now let’s think about single-stream. It is not a coincidence that our decisions must happen one at a time and we also have a single stream of consciousness. We know the subconscious does many things in parallel, but we can’t consciously think in parallel, and we must, in fact, resolve concepts down to one thing at a time to think further with them. The reason is that it is a strong adaptive advantage to be of one mind when it comes time to make a decision. If we have two or more competing lines of thought which simultaneously have our attention, then when we come to the moment of decision we will need to narrow them down to one since we have only one body. But the problem is, every moment is a possible moment of decision. If evolved creatures could make decisions on their own timetable, then it would be faster to think through many scenarios simultaneously and then pick the best. But not only do we not get to choose those moments, we actually make decisions every moment. We are always doing something with our bodies, and while much of that is not consuming much of our conscious attention because we have it on subconscious “autopilot,” we are committing to actions continuously, and it wouldn’t do to be unsure. Consequently, it is better for our conscious perspective to be one which perceives a single train of thought making the best decisions it can with the available information. This means that alternative strategies must be simulated serially and then compared. While this is slower than thinking in parallel, our subconscious helps us reap some benefits of parallel thinking by giving us a good feel for alternatives and by facilitating switching between different lines of thought.

  5. Qualia help us prioritize. Qualia keep our many goals straight in our minds simultaneously. If the primary purpose of consciousness is to make decisions, but it needs to prioritize many constantly competing goals, it needs an easy way to rank them without bogging down the reasoning process. Qualia are perfect for that because they force our attention to the needs our subconscious considers most pressing. It is not a good idea to leave the decisions about what to think about next entirely up to the rational mind, because it has no intrinsic motivation to focus on important matters.
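
To make point 5 concrete, here is a minimal sketch of qualia intensities acting as a subconscious priority signal that decides what reaches conscious attention, so that the reasoning process never has to rank every concern itself. The concerns, intensities, and awareness threshold are invented for illustration.

```python
# Sketch of point 5: qualia intensities as a subconscious priority signal
# deciding what reaches conscious attention, so reasoning doesn't have to
# rank every concern itself. The concerns and intensities are invented.

import heapq

class AttentionQueue:
    """Concerns wait with a felt intensity; only the strongest interrupts thought."""

    def __init__(self, awareness_threshold=0.5):
        self.threshold = awareness_threshold
        self._heap = []                      # max-heap via negated intensity

    def feel(self, concern, intensity):
        heapq.heappush(self._heap, (-intensity, concern))

    def next_conscious_focus(self):
        """Return the most intense concern, or None if nothing feels pressing."""
        if self._heap and -self._heap[0][0] >= self.threshold:
            return heapq.heappop(self._heap)[1]
        return None                          # stay on the current train of thought

attention = AttentionQueue()
attention.feel("mild itch", 0.2)
attention.feel("smell of smoke", 0.95)
attention.feel("vague hunger", 0.4)
print(attention.next_conscious_focus())     # "smell of smoke"
print(attention.next_conscious_focus())     # None: nothing else crosses the threshold
```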

The Hard Problem of Consciousness

Does the above resolve the “hard” problem? The problem is only hard if we aren’t willing to think of consciousness as an experience created by the subconscious. That’s odd, because we know it has to be. There is clearly no physical substance to the conscious mind other than the neurochemistry supporting it. It must therefore be a consequence of processes in the brain. We know that our senses (including complex processing like 3-D vision), recognition, recollection, language support, and so on require a lot of computation of which we have no conscious awareness, so we have to conclude the subconscious does the work and feeds it to us in the form of our first-person perspective. What separates this first-person perspective from what a zombie or toaster would experience (i.e. nothing), and what ultimately gives our qualia and other experiences meaning and “feel,” is their function. Experience is information, which means it enables us to predict likelihoods and take actions. The whole feel and excitement of experience relate to the potential value of the information. If it were white noise, we wouldn’t care and all of experience would dissolve into nothingness. So the subconscious doesn’t just provide us with pretty accurate information through many qualia, it also compels us to care about that information (via drives, emotions, and beliefs) roughly in proportion to how much it matters to our continued survival. As I said, qualia feel like what they inspire us to do. The feel of qualia is just the feel of the survival drive itself broken down to a more granular level.

So my contention is that information is the secret sauce that makes experience possible, specifically because it makes function possible. Chalmers is open to information being the missing link:

Information seems to be a simple and straightforward construct that is well suited for this sort of connection, and which may hold the promise of yielding a set of laws that are simple and comprehensive. If such a set of laws could be achieved, then we might truly have a fundamental theory of consciousness.

It may just be…that there is a way of seeing information itself as fundamental.17

Information is fundamental. While it is ultimately physical systems that collect and manage information in a physical world, one can’t explain the capabilities of these systems in physical terms because they leverage feedback loops to create informational models. The connection from the information to its application just becomes too indirect and abstract to track physically. But these systems can be explained in functional terms. Our first-person experience of consciousness is a kind of program running in the brain that interprets awareness, attention, feelings, and thoughts as the components of a unified self. We are predisposed by evolution to think of ourselves as a consistent whole, despite the self really being a mishmash of disparate information. Millions of years of tweaks bring it all together to make a convincing lusion of a functional being acting as an agent in the world.

The first-person perspective of consciousness is a good and possibly ideal solution for controlling animal bodies. The reason is that it links function to form using effectiveness as the driving force. Specifically, it continuously aligns representations in the brain to external reality by providing conscious rewards for behavior that lines up with survival needs. Survival is a hard job and sounds onerous, but we enjoy doing it when we do it well because of these rewards. The other side of the coin, of course, is that we dislike it when we do it badly, which gives us an incentive to up our game. It isn’t the kind of control system one could build with if-x-happens-then-do-y logic. Rather, it is self-balancing and autopoietic. Autopoiesis refers to an organism’s ability to reproduce and maintain itself, but in the case of the brain’s top-level control system it more generally refers to its ability to manage many simultaneous sources of information and many goals (polytely) gracefully. Subjectively, we feel different priorities fighting for our attention in different ways, leading us to prioritize them as needed. I don’t know if first-person subjectivity is the only way to solve this problem gracefully and well, but I suspect so. In any case, it evolved and works and we only need to explain how. Subjectively, consciousness has its own functional, high-level way of seeing the world that interprets things in terms of their perceived functions, dwells on what it knows, and issues orders to the body to act.

We can consciously think only a single stream, but it seems likely that all the algorithms of the subconscious are parallel. Thousands to millions of parallel paths are vastly more powerful than a single path, but the single path of consciousness draws on many subconscious algorithms to become much more than a single logical path could be on its own. Consciousness must logically resolve into a single stream so we can come to one decision at a time to guide one action at a time. Theoretically, a brain could maintain multiple streams of consciousness before the moment of decision, but mother nature has apparently found that this creates a counterproductive internal conflict, because we find we can feel or think just one non-conflicting thing at a time. Note that severing the hemispheres via corpus callosotomy possibly forces such a split in some ways, at least temporarily. But split-brain patients report feeling the same as before the split, and can still use either hemisphere to sense objects presented only to one (e.g. to the left visual field, which is processed by the right hemisphere). It is now theorized that the thalamus, which is not separated by this operation, mediates communication between the hemispheres18. It is also possible that neural plasticity can either regrow cortical pathways or repurpose subcortical (e.g. thalamic) pathways to maximize interhemispheric integration19. But I would just emphasize that the stream of consciousness is linked to the SSSS (single-stream step selector), so having more than one stream of consciousness would make coordinated action impossible, or would at the very least require one stream to dominate the other.

Multiple personality disorder, now called dissociative identity disorder (DID), arguably creates serial rather than parallel streams of consciousness. This strategy usually arises as an adaptation for dealing with severe childhood abuse. As a protective mechanism, DID patients’ identities fragment into imaginary roles as an escape from their true circumstances. Although their situation has driven them to invest an unrealistic level of belief in these alter personas, which in turn suppresses belief in their real persona, I don’t consider this condition to represent true multiple serial streams of consciousness, but rather just a single, confused stream. All of us live in worlds we construct with our mental models, and that includes hopes and dreams not at all apparent from our physical circumstances but constructed from our interpretation of the fabric of social reality. When we embrace personality traits as our own, we are trying on a role to see how it fits. The way we behave then comes to define us, both from our own perspective and from others’. But it isn’t all we are; we always have the potential to change, and, in any case, our past behavior may be indicative of our future behavior but does not constrain it. Physical events don’t actually repeat themselves; only patterns repeat, and patterns have boundaries that are always subject to interpretation. Given this inherent fluidity of function, is it meaningful to speak of a unified self?

Our Concept of Self

I have spoken of how our knowledge divides into knowledge about the self and not-self, and of how consciousness creates a perspective of an agent in the world, which is the self. But I haven’t spoken about what we know about our own self, i.e. about self-knowledge. Self-knowledge is clearly an exercise in abstract thinking because while all higher animals have some capacity to experience simulations of themselves acting in the world, knowledge of self goes further to attribute situation-independent qualities to that self. Self-knowledge arises mostly from the same submerged iceberg of subconscious knowledge that underlies all our knowledge and then leverages the same kind of generalization skills we use to process all conceptual knowledge. But it has the important distinction of being directed at ourselves, the machine doing the processing. Having the capacity to reflect on ourselves is a liability from an evolutionary standpoint because it presents evolution with the additional challenge of ensuring that we will be happy with what we see. Since we have been designed incrementally, this problem has been solved seamlessly by expanding our drives and emotions sufficiently over time to keep our rational minds from wandering excessively. Humans are such emotional creatures, relative to other animals, because we have to be persuaded to dedicate our mental powers sufficiently toward survival and all its attendant needs. This persuasion doesn’t stop with short-term effects, but influences how we model the world by leading us to adopt beliefs and convictions for which we will strive diligently. Those beliefs may be rationally supported, but they are more usually emotionally supported, often based solely on social pressure, because we are adapted to put more stock in our drives, emotions, and the opinions of others than in our own ability to think.

So we can see ourselves, but we are biased in our perspective. When well-adjusted, we will accept the conflicting messages from our emotional, social, and rational natures to see a unified and balanced entity. While it is not hard to see why it is adaptive for us to feel like unified agents, doesn’t our ability to think rationally highlight all the diverse pieces of our minds, which could alternately support a fractured view of ourselves? Yes, it does. We see many perspectives and are often torn over which ones to back. We can doubt whether the choices we make even accurately represent what we are, or are just compromises we have to make because we don’t have enough time or information to discover our true preferences. But our persistent sense of self is generally not shaken by such conflicts. Our feeling of continuity with our past creates for us at any given moment a feeling of great stability: we may not be able to picture all the qualities we associate with ourselves, as we probably have not given any of them much active thought recently, but we know they are there. Themes about ourselves float in our heads in such a way that we can sense that they are there without thinking about them. This “floating”, which applies to any kind of memory and not just thoughts about ourselves, feels like (and is) thousands of parallel subconscious recollections helping to back up our conscious thoughts without our having to focus directly on them. Subconscious thoughts not brought into focus contribute to our conscious thought by giving us more confidence in the applicability of any intuition or concept to the matters at hand. We know they are there because we can and often do explore such peripheral thoughts consciously, which reinforces our sense that our whole minds go much deeper than the thoughts we are executing at any given moment in time. Note that this is both because the subconscious does so much for us and because thoughts are informational structures in their own right — functional entities — and not just steps in a process, and so have a timeless quality.

So do we know ourselves? Knowledge is always phenomenal, not noumenal, and so presents a perspective or representation of something without explaining the whole thing. We know many things about ourselves. Intuitively, we know ourselves through drives, emotions, our responses to social situations, and even our habitual thinking patterns. Rationally, we know ourselves from impartial consideration of the above and of our abilities. It isn’t a complete picture — nothing ever is — but it is enough for most of us to go on. We see ourselves both as entities with certain capabilities and potential and as accomplishers of a variety of deeds. We know many things strongly or for sure; many more things we only suspect or know little about; and of everything else, which is surely a lot, we know nothing. But it is enough. We always know enough to do something, because time marches on and we must always act. Is it wrong that we don’t spend more time in self-contemplation to ferret out our inner nature? Aside from the obvious point that what we do with our minds can’t be either right or wrong but simply is what it is, we can’t break with our nature anyway. Our genomes have a mission which has been reinforced over countless generations, and it makes us strongly inclined to pursue certain objectives. “I yam what I yam and tha’s all what I yam.”20 So while I would argue that we can direct our thoughts in any direction, we can’t fundamentally alter the nature of our self, which is determined by nature and nurture, though we can alter it somewhat over time by nurturing it in new directions.

Knowing now that we look at ourselves from a combination of intuitive and rational perspectives, what do we see? Mostly, we see an agent directing its body, both in practice and as the lead character in stories we tell ourselves. The self is a fiction just as all knowledge is a fiction, but that doesn’t make knowledge or the self unreal; both are real as functional entities: we are real because we can think of ourselves as real. As I said above, self and not-self information is a fundamental distinction in the brain, so far from being illusory, the self is profoundly “lusory”. Our concept of self develops from turning our subjective attention inward, making our subjective capacity the object of study: I as subject study me as object. Some philosophers claim that the self can’t understand the self because self-referential analysis is inherently circular, but this is not true. The study of anything creates another layer, a description of the phenomenon that is not the object (noumenon) under study itself. But if what we know about the self or mind is created by the mind, is it really knowledge? Can we escape our inherent subjectivity to achieve objectivity? Yes, because objective knowledge is never absolute; it is functional — it is knowledge if it makes good predictions. We don’t actually have to know anything that is incontrovertibly true about the mind; we only need to make generalizations that stand up consistently and well. So it doesn’t matter if the theories we cook up are far-fetched or bizarre from some perspectives, as that can be said of all scientific theories. Theories about our subjective lives attain objectivity if they test well. This doesn’t mean we should throw every theory we can concoct at the wall to see if it sticks; we should try to devise theories that are consistent with all available knowledge using both subjective and objective sources. We can’t afford to ignore our subjective knowledge about the mind because almost everything we know about it emanates from our subjective awareness of it.

  1. The number of sets is the ploidy. Cells with one set are haploid; cells with two sets are diploid. Our sperm and eggs are haploid while our somatic cells are diploid. Many plants, amphibians, reptiles, and insects are tetraploid.
  2. Gagliano, M. et al., Learning by Association in Plants, Sci. Rep. 6, 38427; doi: 10.1038/srep38427 (2016).
  3. The implication is that any animal capable of doing real-time modeling (what we think of as “thinking”) must be conscious, and any incapable of it must not be conscious. All animals must execute a single stream of decisions and so must have information processing capabilities which reduce any parallel processing to a single stream of decisions, but this doesn’t imply they use real-time modeling. Insects are an interesting test case, and I would guess that they are not conscious to any degree. While their behavior shows they have awareness and attention, and we know they can learn to spot and leverage patterns they encounter, these things can be done using data-driven methods with parallel processing. Until we can demonstrate that they can draw conclusions using entailment, there is no reason to suppose they are modeling their environment to generalize from parts to wholes. Also, there is no reason for their brains to include such elaborate machinery, which I believe only exists in advanced animals. Consequently, they have no feelings or thoughts and there is no “they” from a first-person perspective.
  4. Purves and Lotto’s Illusion
  5. Alex Byrne, Inverted Qualia, Stanford Encyclopedia of Philosophy, 2015
  6. One could also invert the r/g signal and luminosity if one liked to invert the whole range of colors (which is, as noted, much larger than the spectrum), but let’s consider just the y/b signal.
  7. We have the technology now to build color inversion goggles using a camera and computer; I can’t help but wonder why nobody has.
  8. New Research Says There Are Only Four Emotions, Julie Beck, The Atlantic, 2014
  9. Yasmin Anwar, How Many Different Human Emotions Are There?, Greater Good Science Center, Sep 8, 2017
  10. The most popular internet sites harvest as much of our attention as possible, and some sites, like Buzzfeed, were created to grab attention for its own sake, much like gossip does.
  11. 37 particles (counting antiparticles) are considered confirmed (12 quarks, 12 leptons, 13 bosons) and dozens more are hypothesized. See List of particles, Wikipedia
  12. Reality about Human Morality, The Brights’ “Morality Project”, thebrights.net
  13. Self-Awareness: The Animals That Know Themselves
  14. Mirror Test
  15. We don’t experience both interpretations of a bistable image (e.g. a Necker cube) at once; perception commits to one interpretation at a time and flips between them.
  16. The Organism: A Holistic Approach to Biology Derived from Pathological Data in Man, Kurt Goldstein, 1934
  17. The Conscious Mind: In Search of a Fundamental Theory, David J. Chalmers, p 287
  18. Consciousness post corpus callosotomy, Brain, A Journal of Neurology, Volume 140, Issue 7, 1 July 2017, Pages e38
  19. Split Brain, Undivided Consciousness?, Discover, Jan 13, 2017
  20. Popeye
