Introduction: The Mind Explained in One Page or Less

My goal here is to develop an explanation of how the mind works starting from first principles, but at the same time embracing the full range of current scientific understanding. Scientific progress in many other fields has been clear and definitive, but science has reached no consensus on the mind and seems reluctant to grant that the mind as such even exists. But if we start from the beginning and are careful to draw on only the best-supported scientific findings, we should find that what we already know about the mind can be formulated into a scientific theory, just by adjusting the way we think about it. It took me years turning these ideas over in my head to get to the bottom of it all, and it will take quite a few pages to lay them out so that anyone can follow them. But I don’t want to hold you in suspense for hundreds of pages until I get to the point, so I am going to explain how the mind works right here on the first page. And then I’m going to do it again in a bit more detail over a few pages, and then across a few chapters, and then in the rest of the book. Each iteration will go into more detail, will be better supported, and will expand the theory further. But it should all seem both intuitive and scientific along the way.

From a high level, it is easy to understand what the mind does. But you have to understand evolution first. Fortunately, evolution is even easier to understand. Evolution works by induction, which means trial and error. It keeps trying. It makes errors. It detects the errors and uses that feedback to try another way that will avoid the earlier mistakes. It is much the same approach machine learning takes, using feedback to improve future responses. The mind evolved as a high-level control center of the brain, which is the control center of the body. Unlike evolution, which gathers information slowly from natural selection, brains and minds gather real-time information from experience. Their basic strategy for doing that is also inductive trial and error. But minds, especially human minds, also use deduction. Where induction works from the bottom up (from specifics to generalities), deduction works from the top down (from generalities to specifics). Understanding and knowledge come from joining the two together. Most of the brain’s work is inductive and outside conscious control, producing senses, feelings, common sense, and intuition, while deduction happens under conscious control, along with more induction and a blending of the two. Consciousness is just the product of connecting inductive and deductive frameworks together to construct an imaginary but practical inner realm. We think of our perspective as being ineffable, but it is only the logical consequence of applying logical models to target circumstances. A computer program with the same sort of inputs and meld of inductive and deductive logic would “feel” conscious as well. Our inner world is not magic; it is computed. But, just to be clear, such programs are not even on the horizon; it is not for nothing that evolution needed billions of years to pull this off. So that is the mind in a nutshell.

Part 1: The Duality of Mind

“Normal science, the activity in which most scientists inevitably spend almost all their time, is predicated on the assumption that the scientific community knows what the world is like”
― Thomas S. Kuhn, The Structure of Scientific Revolutions

The mind exists to control the body from the top level. Control is the use of feedback to regulate a device. Historically, science was not directly concerned with control and left its development to engineers. The first feedback control device is thought to be the water clock of Ktesibios in Alexandria, Egypt, around the third century B.C. It kept time by holding the water level in a supply vessel constant, which in turn kept the flow from that vessel constant.1
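Ktesibios’s principle, adjusting an input in response to a measured error, is the essence of feedback control. Here is a minimal sketch of such a loop as a simulation; the proportional rule and all the numbers are invented for illustration and are not a model of the actual water clock:

```python
# Minimal sketch of feedback control: hold a water level near a setpoint
# by adjusting inflow in proportion to the measured error.
# (Hypothetical proportional controller; the gains and flows are invented.)

def simulate(setpoint=10.0, level=0.0, outflow=0.5, gain=0.4, steps=100):
    for _ in range(steps):
        error = setpoint - level         # feedback: measure the deviation
        inflow = max(0.0, gain * error)  # act in proportion to the error
        level += inflow - outflow        # net flow changes the level
    return level

# The loop settles where inflow balances outflow:
# gain * (setpoint - level) == outflow, i.e. level == setpoint - outflow/gain
print(round(simulate(), 2))  # settles at 8.75 for these numbers
```

Increasing the gain moves the resting level closer to the setpoint, which is the same trade-off a float valve makes mechanically.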

I’m a software engineer; my whole life has been spent devising control algorithms. So naturally, when I think of the mind, I see a problem in control. Traditional software design doesn’t usually have to manage feedback dynamically. Instead, objectives are imagined and strategies for achieving them are implemented algorithmically. In use, human feedback usually triggers the appropriate algorithms, but the programmer still has to anticipate what control to give the human user. And any concerns beneath the level of human feedback have to be anticipated and adequately regulated to make sure the program functions as desired.

In this section, I am going to develop the idea that science has overlooked the fundamental nature of control in the study of certain systems in which control plays a dominant role, specifically in life and the mind. By reframing our scientific perspective, we can dispense with unproductive lines of thought and get straight to the heart of the matter.

Approaching the Mind Scientifically

“You unlock this door with the key of imagination. Beyond it is another dimension: a dimension of sound, a dimension of sight, a dimension of mind. You’re moving into a land of both shadow and substance, of things and ideas. You’ve just crossed over into… the Twilight Zone.” — Rod Serling

Many others before me have attempted to explain the functional basis of the mind. They’ve gotten a lot of the details right, but taken as a whole, no theory presented to date adequately explains the mind as we experience it. Our deeply held intuitions about the special character of the mind are quite true, but science has found no purchase to get at them. My premise is that nearly everything we think we know about the mind is true and that science needs to catch up with common sense. The conventional approach of science is to write off all our intuitions as fantasy, the biased illusions of wishful thinking that evolved to lead us down adaptive garden paths rather than to understand the mind. But this is misguided; minds are not designed to be deluded. They are designed to collect useful knowledge, and we each already have encyclopedic knowledge about how our mind works. This is not to say minds are immune to delusion or bias, which can happen for many practical reasons. But with a considered approach we can get past such gullibility.

Our most considered approach to figuring things out is science. The cornerstone quality of science is objectivity. What exactly “objective” means is a topic I will discuss in greater detail later, but from a high level it means a perspective that is independent of each of our subjective perspectives. Knowledge that can be shown to be outside of us is the same for everyone and can count as scientific truth. For this reason, I am developing a scientific perspective here, not a spiritual or fantastic one. But is it even possible to develop an objective way of studying the mind, which seems to be an entirely subjective phenomenon? We only know about minds because we have them and can think about what they are up to. We can’t see what they are doing using instruments. Or, rather, we can see through brain scans that areas of the brain are active when our minds are active, and we can even approximately tell what areas of the brain are related to what aspects of the mind by correlating what people report is happening in their mind to what is happening in their brain. But science seems to be inadequately equipped to make sense of mental states in a way that makes sense to us. I’m going to dig into the philosophy of science to sort this out. We will have to consider more closely the nature of the object under study and what we are expecting science to accomplish. And we will find that deriving an appropriate philosophy of science is closely related to understanding the mind because both search for the nature of knowledge and truth.
As we move into this twilight zone, we should remain cognizant of Richard Feynman’s injunction against cargo cult science, which he said could only be avoided by “scientific integrity,” which he described as “a kind of leaning over backwards” to make sure scientists do not fool themselves or others. What I am proposing here is especially at risk of this because I am playing with the very structure of science and its implications at the highest levels. I’ve tried to review everything I have written to ferret out any overreach, but work in this area inevitably has many subjective qualities. Still, I believe that a coherent, unified theory is now possible, and I hope my approach helps pave the way.

Science has been fighting some pitched philosophical debates in recent decades which have reached a standstill and left it on pretty shaky ground. These skirmishes don’t affect most scientific fields because they can make local progress without a perfect overall view, but the science of mind can go nowhere without a firm foundation. So I’m going to establish that first and then start to draw out the logical implications, decomposing the mind from the top down in a general way. I am going to have to make some guesses. A scientific hypothesis is a guess which gets promoted to a theory once it has the backing of rigorous experimentation and evidence. I’m not doing field research here, and my investigation will encompass many fields, so I will mostly look to well-established theories to support my hypothesis. Where theory has not been adequately established, I will have to hypothesize, but I will also cite some credible published hypotheses. I will adjust and consolidate these theories and hypotheses to form a unified hypothesis. Because I am using an iterative approach to present my ideas in increasing detail, I have to ask you to bear with me. Each iteration can only go so deep, but I will try to get to all the issues and provide sufficient support. If you accept the underlying theories, then you should find my conclusions to be well-supported and relatively non-controversial. That said, the fields of science involved are works in progress, so recent thinking is inherently unsettled and controversial. But my goal is to stay within the bounds of the conclusions of science and common sense, even though I will be reframing our conception of the scope of both.

First, the major theories from which I plan to draw support:

  1. Physicalism, the idea that only physical entities composed of matter and energy exist. Under the predominant physicalist paradigm, these entities’ behavior is governed by four fundamental forces, namely gravity, the electromagnetic force, and the strong and weak nuclear forces. The latter three are nicely wrapped up in the Standard Model of particle physics, while gravity is described by general relativity. So far, a unified theory that joins these two frameworks remains elusive. Physicalists acknowledge that their theories cannot now or ever be proven correct or reveal why the universe behaves as it does. Rather, they stand as deductive models that map with a high degree of confidence to inductive evidence.

  2. Evolution, the idea that inanimate matter became animate over time through a succession of heritable changes. The paradigm Darwin introduced in 1859 itself evolved during the first half of the 20th century into the Modern Synthesis, which incorporated genetic traits, rules of recombination, and population genetics. Watson and Crick’s 1953 discovery of the structure of DNA, the source of the genetic code, provided the molecular basis for this theory. Since that time, however, our knowledge of molecular mechanisms has exploded, undermining much of that paradigm. In 2009, the evolutionary biologist Eugene Koonin stated that while “the edifice of the [early 20th century] Modern Synthesis has crumbled, apparently, beyond repair”, a new 21st-century synthesis could be glimpsed.1 Most notably, we see a bigger picture in which the biochemistry and evolutionary mechanisms shared by all existing organisms took perhaps 0.5 to 1 billion years to evolve, with probably another billion years of refinements before the eukaryotes (organisms whose cells have a nucleus) appeared about 2 billion years ago. The central dogma of molecular biology, stated by Francis Crick in 1957, holds that “DNA or RNA can make protein but protein can never make DNA, RNA, or protein”, capturing the idea that information flow is one-way, from nucleic acids to proteins. While this still holds, it does not follow that all information in DNA is translated to proteins. As it turns out, nearly all DNA does get translated to proteins in bacteria, but only a very small amount of it does in human beings. The Human Genome Project revealed that only about 1% of our DNA does that, using about 20,687 protein-coding genes2, of which about 60% are expressed in all tissues. Some genes can create a few functional variants, so the total number of proteins could be significantly higher (estimates are 100,000 to 1 million).3
The ENCODE project revealed that 80.4% of the remaining DNA is transcribed to RNA and likely performs some active function, while the rest may either be used only under rare circumstances or represent a reserve of latent unexpressed functionality that may require genetic modification to activate.45 The Registry of Candidate Regulatory Elements contains 1,310,152 human candidate Regulatory Elements (cREs), any or all of which may be regulatory genes.6 The term “gene” is now being more widely applied to any sequence of DNA or RNA that has a function, so I will distinguish protein-coding genes specifically from genes in the non-coding DNA, which presumably usually achieve functionality through RNA transcribed from DNA. I will use the term “gene” to include both. Many of the 18,441 RNA fragments identified so far are thought to be regulator genes.7 Further, there are pseudogenes (11,224 found so far8), which are apparently inactive genes but are perhaps more likely genes-in-waiting. There may be some way cells can tap into this reserve.9 One proposed use of some pseudogenes is as gene-editing genes that construct new genes in sperm or somatic cells based on stress triggers. This class is somewhat hypothetical, but I will investigate this theory later because it closes some big holes in established evolutionary theory.10

  3. Information theory, the idea that something nonphysical called information exists and can be manipulated. The study of information is almost exclusively restricted to the manipulation of information and not to its nature, because manipulation has great practical value while the nature of information is seen as a point of only philosophical interest. However, understanding the nature of information is critical to understanding how life and the mind work, so I will be primarily concerned in this work with its nature rather than its manipulation. Because the nature of information has been almost completely marginalized in the study of information theory, existing science on its nature doesn’t go very far, and I have mostly had to derive my own theory of the nature of information from first principles, building on the available evidence.

  4. The Computational Theory of Mind, the idea that the human mind is an information processing system and that both cognition and consciousness result from this computation. According to this theory, computation is generalized to be a transformation of inputs and internal states using rules to produce outputs. Where mechanical computers use symbolic states stored in digital memory and manipulated electronically, neural computers use neurochemical inputs, states, outputs, and rules. This theory, more than any other, has guided my thinking in this book. It is considered by many, including me, to be the only scientific theory that appears capable of providing a natural explanation for much if not all of the mind’s capabilities. However, I largely reject the ideas of the representational theory of mind and especially the language of thought, as they unnecessarily and incorrectly go too far in proposing a rigid algorithmic approach when a more generalized solution is needed. Note that whenever I use the word “process” in this book, I mean a computational information process, unless I preface it with a differentiating adjective, e.g. a biological process.
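This generalized notion of computation, rules that map inputs and internal state to outputs and a new state, can be sketched in a few lines. The rule, threshold, and input stream below are invented purely for illustration:

```python
# Generalized computation: a rule maps (internal state, input) to
# (output, new state). The threshold rule here is an arbitrary example.

def step(state, inp):
    """One computational step: rule(state, input) -> (output, next state)."""
    new_state = state + inp                      # state accumulates inputs
    output = "high" if new_state > 5 else "low"  # rule produces an output
    return output, new_state

state = 0
outputs = []
for inp in [1, 2, 4, 1]:      # a stream of inputs arriving over time
    out, state = step(state, inp)
    outputs.append(out)

print(outputs)  # the same rule yields different outputs as state changes
```

Nothing here depends on digital symbols; the same scheme describes a neuron whose "state" is a membrane potential and whose "rule" is a firing threshold.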

While the scientific community would broadly agree that these are the leading paradigms in their respective areas, it would not agree on any one version of each theory. They are still evolving and in some cases have parallel, contradictory lines of development. I will cite appropriate sources that are representative of these theories as needed.

Information is Fundamental

Physical scientists have become increasingly committed to physicalism over the past four centuries. Physicalism is intentionally a closed-minded philosophy: it says that only physical things exist, where physical includes matter and energy in spacetime. It seems, at first glance, to be obviously true given our modern perspective: there are no ghosts, and if there were, we should reasonably expect to see some physical evidence of them. Therefore, all that is left is physical. But this attitude is woefully blind; it completely misses the better part of our existence, the world of ideas. Of course, physicalism has an answer for that — thought is physical. But are we really supposed to believe that concepts like three, red, golf, pride, and concept are physical? They aren’t. But the physicalists are not deterred. They simply say that while we may find it convenient to talk about things in a free-floating, hypothetical sense, that doesn’t constitute existence in any real sense and so will ultimately prove to be irrelevant. From their perspective, all that is “really” happening is that neurons are firing in the brain, analogously to a CPU running in a computer, and our first-person perspective of the mind, with its thoughts and feelings, is just the product of that purely physical process.

Now, it is certainly true that the physicalist perspective has been amazingly successful for studying many physical things, including everything unrelated to life. However, once life enters the picture, philosophical quandaries arise around three problems:

(a) the origin of life,
(b) the mind-body problem and
(c) the explanatory gap.

In 1859, Charles Darwin proposed an apparent solution to (a) the origin of life in On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. His answer was that life evolved naturally through small incremental changes made possible by competitive natural selection between individuals. The idea of evolution is now nearly universally endorsed by the scientific community because a vast and ever-growing body of evidence supports it while no convincing evidence refutes it. But just how these small incremental changes were individually selected was not understood in Darwin’s time, and even today’s models are somewhat superficial because so many intermediate steps are unknown. Two big unresolved problems in Darwin’s time were the inadequate upper limit of 100 million years for the age of the Earth and the great similarity of animals from different continents. It was nearly a century before the Earth was found to be 4.5 billion years old (with life originating at least 4 billion years ago) and plate tectonics explained the separation of the continents. By the mid-20th century, evolutionary theory had developed into a paradigm known as the Modern Synthesis that standardized notions of how variants of discrete traits are inherited. This now classical view holds that each organism has a fixed number of inherited traits called genes, that random mutations lead to gene variants called alleles, and that each parent contributes one gene at random for each trait from the two it inherited from its parents to create offspring with a random mixture of traits. Offspring compete by natural selection, which allows more adaptive traits to increase in numbers over time. While the tenets of the Modern Synthesis are still considered to be broadly true, what we have learned in the past seventy or so years has greatly expanded the repertoire of evolutionary mechanisms, substantially undermining the Modern Synthesis in the process.
I will discuss some of that new knowledge later on, but for now, it is sufficient to recognize that life and the mind evolved over billions of years from incremental changes.
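The power of incremental change under selection is easy to see numerically. The sketch below is a deliberately minimal model, a haploid population with a single favored variant and an invented 5% fitness advantage, not a description of any real organism:

```python
# Minimal sketch of natural selection: a variant with a small fitness
# advantage rises in frequency over generations. Haploid, deterministic,
# with an invented selection coefficient s.

def select(p, s=0.05, generations=200):
    """Frequency of the favored variant after repeated rounds of selection."""
    for _ in range(generations):
        # the fitter variant over-reproduces by a factor of (1 + s)
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
    return p

print(round(select(0.01), 3))  # a rare favored variant approaches fixation
```

Small advantages compound: even at 5% per generation, a variant present in 1% of the population comes to dominate within a few hundred generations.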

Science still draws a blank trying to solve (b) the mind-body problem. In 1637, René Descartes, after thinking about his own thoughts, concluded “that I, who was thinking them, had to be something; and observing this truth, I am thinking therefore I exist”1, which is popularly shortened to Cogito ergo sum or I think, therefore I am. Now, we still know that “three” is something that exists that is persistent and can be shared among us regardless of the myriad ways we might use our brain’s neurochemistry to hold it as an idea, so intuitively we know that Descartes was right. But officially, under the inflexible auspices of physicalism, three doesn’t exist at all. Descartes saw that ideas were a wholly different kind of thing than physical objects and that somehow the two “interacted” in the brain. The idea that two kinds of things exist at a fundamental level and that they can interact is called interactionist dualism. And I will demonstrate that interactionist dualism is the correct ontology of the natural world (an ontology is a philosophy itemizing what kinds of things exist), but not, as it turns out, the brand that Descartes devised. Descartes famously, but incorrectly, proposed that a special mental substance existed that interacted with the physical substance of the brain in the pineal gland. He presumed his mental substance occupied a realm of existence independent from our physical world which had some kind of extent in time and possibly its own kind of space, which made it similar to physical substance. We call his dualism substance dualism. We know now substance dualism is incorrect because the substance of our brains alone is sufficient to create thought.

Physicalism is an ontological monism that says only one kind of thing, physical things, exist. But what is existence? Something that exists can be discriminated on some basis or another as being distinct from other things that exist and is able to interact with them in various ways. Physical things certainly qualify, but I am claiming that concepts also qualify. They can certainly be discriminated and have their own logic of interactions. This doesn’t quite get us down to their fundamental nature, but bear with me and I will get there soon. Physicalism sees the mind as an activity of the brain, and activities are physical events in spacetime, so it is just another way of talking about the same thing. At a low level, the mind/brain consists of neurons connected in some kind of web. Physicalism endorses the idea that one can model higher levels as convenient, aggregated ways of describing lower levels with fewer words. In principle, though, higher levels can always be “reduced” to lower levels incrementally by breaking them down in enough detail. So we may see cells and organs and thoughts as conveniences of higher-level perspectives which arise from purely physical forms. I am going to demonstrate that this is false and that cells, organs, and thoughts do not fully reduce to physical existence. The physicalists are partly right. The mind is a computational process of the brain like digestion is a biological process of the gastrointestinal system. Just as computers bundle data into variables, thinking bundles data into thoughts and concepts which may be stored as memories in neurons. Computers are clearly physical machines, so physicalists conclude that brains are also just physical machines with a “mind” process that is set up to “experience” things. This view misses the forest for the trees because neither computers nor brains are just physical machines… something more is going on that physical laws alone don’t explain.

This brings us to the third problem, (c) the explanatory gap. The explanatory gap is “the difficulty that physicalist theories have in explaining how physical properties give rise to the way things feel when they are experienced.” In the prototypical example, Joseph Levine said, “Pain is the firing of C fibers”, which provides the neurological basis but doesn’t explain the feeling of pain. Of course, we know, independently of how it feels, that the function of pain is to inform the brain that something is happening that threatens the body’s physical integrity. That the brain should have feedback loops that can assist with the maintenance of health sounds analogous to physical feedback mechanisms, so a physical explanation seems sufficient to explain pain. But why things feel the way they do, or why we should have any subjective experience at all, does not seem to follow from physical laws. Bridging this gap is called the hard problem of consciousness because no physical solution seems possible. However, once we recognize that certain nonphysical things exist as well, this problem will go away.

We can resolve these three philosophical quandaries by correcting the underlying mistake of physicalism. That mistake is in assuming that information is physical, or, alternatively, that it doesn’t exist. But information is not physical and it does exist. Ideas are information, but information is much more than just ideas. The ontology of science needs to be reframed to define and encompass information. Before I can do that, I am going to take a very hard look at what information is, and what it is not. I am going to approach the subject from several directions to build my case, but I’m going to start with how life, and later minds, expanded the playing field of conventional physics. During the billions of years before life came along, particles behaved in ways that could be considered very direct consequences of the Standard Model of particle physics and general relativity. This is not to say these theories are in their final form, but one could apply the four fundamental forces to any bunch of matter or energy and be able to predict pretty well what would happen next. But when life came along, complex structure started to develop with intricate biochemistries that seemed to go far beyond what the basic laws of physics would have predicted. This is because living things are information processing systems, or information processors (IPs) for short, and information can make things happen that would be extremely unlikely to happen otherwise. Organisms today manage heritable information using DNA (or RNA for some viruses) as their information repository. While the information resides in the DNA, its meaning is only revealed when it is translated into biological functions via biochemical processes. Most famously, the genetic code of DNA uses four nucleotide letters to spell 64 three-letter words that map to twenty amino acids (plus start and stop). Technically, DNA is always transcribed first into RNA and from RNA into protein.
A string of amino acids forms a protein, and proteins do most of the heavy lifting of cell maintenance. But only two percent of the DNA in humans codes for proteins. Much of the rest regulates when proteins get translated, which most critically controls cell differentiation and specialization in multicellular organisms. Later on I will discuss some other hypothesized additional functions of non-coding DNA that substantially impact how new adaptations arise. But, as regards the storage of information, we know that DNA or RNA store the information and it is translated one-way only to make proteins. The stored information in no way “summarizes” the function the DNA, RNA or proteins can perform; only careful study of their effects can reveal what they do. Consequently, knowing the sequence of the human genome tells us nothing about what it does; we have to figure out what pieces of DNA and RNA are active and (where relevant) what proteins they create, and then connect their activities back to the source.
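The genetic code just described is small enough to write out in full: four letters, 64 three-letter codons, twenty amino acids plus stop. Here is a sketch of translation applied directly to a DNA coding sequence, skipping the RNA intermediate for brevity; the codon assignments are the standard table, but the example sequence is made up:

```python
# The standard genetic code: 64 codons map to 20 amino acids plus stop (*).
# Codons are ordered with the first base varying slowest, using T, C, A, G.
bases = "TCAG"
amino = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = {a + b + c: amino[16 * i + 4 * j + k]
               for i, a in enumerate(bases)
               for j, b in enumerate(bases)
               for k, c in enumerate(bases)}

def translate(dna):
    """Translate a DNA coding sequence, stopping at the first stop codon."""
    protein = []
    for n in range(0, len(dna) - 2, 3):
        aa = codon_table[dna[n:n + 3]]
        if aa == "*":            # stop codon ends translation
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGCATTAA"))  # ATG -> M, CAT -> H, TAA -> stop: prints MH
```

The table itself is pure information: the mapping from codon to amino acid is arbitrary as far as chemistry goes, and its meaning only appears when the cell’s translation machinery acts on it.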

Animals take information a step further by processing and storing real-time information using neurochemistry in brains2. While other multicellular organisms, like plants, fungi, and algae, react to their environments, they do so very slowly from our perspective. Sessile animals like sponges, corals, and anemones also seem plantlike and lack coordinated behavior. Mobile animals encounter a wide variety of situations for which they need a coordinated response, so they evolved brains to assess, prioritize, and select behaviors appropriate to their current circumstances. Many and perhaps all animals with brains go further still by using agent-centric processes called minds within their brains that represent the external world to them through sensory information that is felt or experienced in a subjective, first-person way. This first-person thinking then contributes to top-level decisions.

While nobody disputes that organisms and brains use information, it is not at all obvious why this makes them fundamentally different from, say, simple machines that don’t use information. To see why they are fundamentally different, we have to think harder about what information really is and not just how it is used by life and brains. Colloquially, information is facts (as opposed to opinions) that provide reliable details about things. More formally, information is “something that exists that provides the answer to a question of some kind or resolves uncertainty.” But provides answers to whom? The answer must be to an information processor. Unless the information informs “someone” about something, it isn’t information. But this doesn’t mean information must be used to be information; it only has to provide answers that could be used. Information is a potential or capacity that can remain latent, but it must be usable in principle by some information processor. So what is fundamentally different about organisms and brains from the rest of the comparatively inert universe is that they are IPs, and only IPs can create or use information.
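The “resolves uncertainty” phrasing is the one part of this definition that the engineering tradition has made fully precise. Shannon’s standard measure counts the bits of uncertainty a question carries; I show it here only to pin down that phrase, not as an account of information’s nature:

```python
# Shannon entropy: the uncertainty (in bits) across the possible answers
# to a question, which is exactly how much a definite answer can resolve.
import math

def entropy(probs):
    """Bits of uncertainty in a distribution over possible answers."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # a fair yes/no question: 1.0 bit
print(entropy([0.25] * 4))   # four equally likely answers: 2.0 bits
```

Note that the measure says nothing about what the answers mean to anyone; that semantic side is precisely what the surrounding discussion argues requires an information processor.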

But wait, you are thinking, isn’t the universe full of physical information? Isn’t that what science has been recording with instruments about every observable aspect of the world around us in ways that are quite objectively independent of our minds’ IPs? If we have one gram of pure water at 40 degrees Fahrenheit at sea level at 41°20’N 70°0’W (which is in Nantucket Harbor), then this information tells us everything knowable by our science about that gram of matter, and so could be used to answer any question or resolve any uncertainty we might have about it. Of course, the universe doesn’t represent that gram of water using the above sentence; it uses molecules, of which there are sextillions in that gram. One might think this would produce astronomically complex behavior, but the prevailing paradigm of physics claims a uniformity of nature in which all water molecules behave the same. Chemistry and materials science then provide many macroscopic properties that work with great uniformity as well. Materials science reduces to chemistry, and chemistry to physics, so higher-level properties are conveniences of description that can be reduced to lower-level properties and so are not fundamentally different. In principle, then, physical laws can be used to predict the behavior of anything. Once you know the structure, quantity, temperature, pressure, and location of anything, the laws of the universe presumably take care of the rest. Our knowledge of physical laws is still a bit incomplete, but it is good enough that we can make quite accurate predictions about all the things we are familiar with.

Physical information clearly qualifies as information once we have taken it into our minds as knowledge, which is information within our minds’ awareness. But if we are thinking objectively about physical information outside the context of what our minds are doing, that means we are thinking of this information as being present in the structure of matter itself. But is that information really in the matter itself? Matter can clearly have different structures and group in many patterns. First, substances can differ in which subatomic particles compose them, as there are quite a variety of such particles. Next, how these particles have combined into larger particles and then atoms and then molecules can vary tremendously. And finally, the configurations into which molecules can be assembled into solids are nearly endless. Information can describe all these structural details, and also the local conditions the substance is under, which chiefly include quantity, temperature, pressure, and location (though gravity and the other fundamental forces work at a distance, which makes each spot in the universe somewhat unique). But while we can use information to describe these things, is it meaningful to say the information is there even if we don’t measure and describe it? Wouldn’t it be fair to say that information is latent in physical things as a potential or capacity which can be extracted by us as needed? After all, I did say that information is a potential that doesn’t have to be used to exist.

The answer is no: physical things contain no information. Physical information is created by our minds when we describe physical things, but the physical things themselves don’t have it. Their complex structure is maintained physically, and that is all. The laws of the universe then operate on all their subatomic particles/waves uniformly. The universe doesn’t care about how the particles are situated with regard to each other, or about anything else we would consider relevant information. It just does what it does. Now, how close particles get to each other affects what atoms, molecules and aggregate substances form, and can create stars and black holes at high densities. But all this happens based on physical laws without any information. While there are patterns in nature that arise from natural processes, e.g. in stars, planets, crystals, and rivers, these patterns just represent the rather direct consequences of the laws of physics and are not information in and of themselves. They only become information at the point where an IP creates information about them. So let’s look at what life does to create information where none existed before.

Living things are complicated because they have microstructure down to the molecular level. Cells are pretty small but still big enough to contain trillions of molecules, all potentially doing different things, which is a lot of complexity. We aren’t currently able to collect all that information and project what each molecule will do using either physics or chemistry alone. But we have found many important biochemical reactions that illuminate considerably how living things collect and use energy and matter. And physicalism maintains that given a complete enough picture of such reactions we can completely understand how life works. But this isn’t true. Perfect knowledge of the biochemistry involved would still leave us unable to predict much of anything about how a living thing will behave. Physical laws alone provide essentially no insight. Our understanding of biological systems depends mostly on theories of macroscopic properties that don’t reduce to physical laws. We are just used to thinking in terms of biological functions so we don’t realize how irreducible they are. Even at a low level, we take for granted that living things maintain their bodies by taking in energy and materials for growth and eliminating waste. But rocks and lakes don’t do that, and nothing in the laws of physics suggests complex matter should organize itself to preserve such fragile, complex, energy-consuming structures. Darwin was the first to suggest a plausible physical mechanism: incremental change steered by natural selection. This continues to be the only idea on the table, and it is still thought to be correct. But what is still not well appreciated is how this process creates information.

At the heart of the theory of evolution is the idea of conducting a long series of trials in which two mechanisms compete and the fitter one vanquishes the less fit and gets to survive longer as its reward. In practice, the competition is not head-to-head this way and fitness is defined not by the features of competing traits but by the probability that an organism will replicate. Genetic recombination provided by sexual reproduction means that the fitness of an organism also measures the fitness of each of its traits. No one trait may make a life-or-death difference, but over time, the traits that support survival better will outcompete and displace less capable traits. Finally, note that there must also exist mechanisms by which new or changed traits can arise. If you look over this short summary of evolution, you can see the places where I implicitly departed from classical physics and invoked something new by using the words “traits” and “probability”. These words are generalizations whose meaning relative to evolution is lost the moment we think about them as physical specifics. Biological information is created at the moment that feedback from one or more situations is taken as evidence that can inform a future situation, which is to say that it can give us better than random odds of being able to predict something about that future situation. This concept of information is entirely nonphysical; it is only about similarities of features, where features themselves are informational constructs that depend on being recognizable with better than random odds. Two distinct physical things can be exactly alike except for their position in time and space, but we can never prove it. All that we can know is that two physical things have observable features which can be categorized as the same or different based on some criteria. These criteria of categorization, and the concept of generalized categories, are the essence of information.
For now, let’s focus only on biological information captured by living organisms in DNA and not on mental information managed by brains. Natural selection implies that biological information is created by inductive logic, which consists of generalizations about specifics whose truth is a matter of probability rather than logical certainty. Logic produces generalities, which are not physical things one can point to. And the inductive trial-and-error of evolution creates and preserves traits that carry information, but it doesn’t describe what any of those traits are. Furthermore, any attempt to describe them will itself necessarily be an approximate generalization because the real definition of the information is tied to its measure of fitness, not to any specific effects it creates.
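The trial-and-error logic just described can be caricatured in a few lines of code. This is a toy sketch under invented assumptions (two variants with made-up replication probabilities of 0.6 and 0.5), not a biological model; the point is that the fitter variant displaces the other even though no description of why it is fitter is ever recorded anywhere.

```python
import random

random.seed(1)

# Toy population: each individual carries one of two heritable variants.
# The fitness numbers (replication probabilities) are invented for illustration.
fitness = {"A": 0.60, "B": 0.50}
population = ["A"] * 50 + ["B"] * 50

for generation in range(200):
    # A trial: each individual survives with probability equal to its fitness.
    survivors = [ind for ind in population if random.random() < fitness[ind]]
    if not survivors:
        break
    # Survivors replicate at random to refill the population.
    population = [random.choice(survivors) for _ in range(100)]

share_a = population.count("A") / len(population)
print(f"share of variant A after selection: {share_a:.2f}")
```

Note that nothing in the simulation describes what trait A does; its "information" exists only as a statistical tendency to replicate.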

We know that evolution works (we are here as evidence), but why did processes that collected biological information form and progress so as to create all the diverse life on earth? The reason is what I call the functional ratchet. A ratchet is a mechanical device that allows motion in only one direction, as with a cogged wheel with backward-angled teeth. Let’s define the fitness advantage of a given trait as its function. The word function implies we know a specific use case for the trait, but I am using it in a more abstract sense to include the full range of generalized advantages that contributed to the creation of the trait (which, it must be noted, includes to some degree information from every inductive trial the organism and all its forebears experienced, back to the dawn of life). In any case, genes and proteins are not limited to one function or one way of interpreting their function. Cases of protein moonlighting, in which the same protein performs unrelated functions, are now well-documented. In the best-known case, different sequences in the DNA for crystallins code either for enzyme function or transparency (as the protein is used to make lenses). Regardless, competition tends to create biological information in a functional ratchet because more generally capable functions will continuously displace less capable ones over time. This does not imply that what we think of as higher forms of life like vertebrates or humans are necessary or inevitable, but it does put life on a trajectory toward higher function, where function is always relative to current possibilities in current ecological niches. Later, I will discuss how a cognitive ratchet managed to evolve human brains in the evolutionary blink of an eye.

Although we should hesitate to draw narrow conclusions about the uses of biological functions, the truth is we can make approximate guesses about biological functions that are highly reliable. All knowledge about the world is based on inductive information, and induction is always approximate; so if we can achieve high reliability about uses, and if this helps us understand functions better, then we should not hesitate to do so. I will be justifying and detailing this aspect of knowledge in much more detail later, but for now I’d like to flip our abstract view of information and function from being independent of utility to focusing on the nature of that utility. This flipped view is, of course, our native perspective. We see the uses of things everywhere. That life is based on the struggle for survival and replication has been abundantly obvious since ancient times, and we have always looked at living traits as having been designed to fulfill specific functions. Limbs are for locomotion, eyes are for seeing, hearts are for pumping blood, gullets are for taking in food, etc., etc. These functions, which clearly contribute to overall survival and fitness, are approximations, and detailed study always reveals many more subtle subsidiary functions, but they are nearly perfectly accurate for almost any intent or purpose. Of course, we know that evolution didn’t “design” anything because it used trial and error rather than intelligence, but our personal experience in life leads us to believe that function only arises from intent, where intent is an effect or goal reached by a deductive cause. Goals reached by intent are purposes, and strategies that accomplish purposes are designs. I call actions with planning or intent maneuvers.
Intent, goal, purpose, deduction, strategy, design, planning, and maneuver are all roughly synonymous mental constructs whose substructure I will be investigating in much more detail further on, but none of them have any place or corollary in evolved processes outside of minds. Cause and effect may be appropriate terms for evolved biological functions; I will discuss this idea further in the next chapter. My point, for now, is only that functions can evolve from inductive trial-and-error processes without intent. Provided we keep in mind that our characterizations of functions in terms of their uses are imperfect, we are well-justified in so characterizing them. So we can conclude that biology can freely cite functional contributions to survival as a foundational principle. To the extent biological functions are described using intent-based terminology, it is understood that this is not meant to be taken literally. But this doesn’t mean that information and function are not present; they are very much present in all living structures, and this information and function are not physical at all. Physicalists, however, go a step too far when they discount our descriptions of function and suppose that the information and function themselves are physical. The statistical advantage of information has no physical corollary and cannot be reduced to something physical.

Although it is possible to create, collect, and use information in a natural universe, it is decidedly nontrivial, as the complexity of living things demonstrates. Beyond the already complex task of creating it with new traits, recombination, and natural selection, living things need to have a physical way of recording and transcribing information so that it can be deployed as needed going forward. I have said how DNA and RNA do this for life on earth. Because of this, we can see the information of life captured in discrete packages called genes. DNA and RNA are physical structures, and the processes that replicate and translate them are physical, but as units of function, genes are not physical. Their physical components should be viewed as a means to an end, where the end is the function. It is not a designed end but an inductively-shaped one. The physical shapes of living structures are cajoled into forms that would have been entirely unpredictable based on forward-looking design goals, but which patient trial and error demonstrated to be better than the alternatives.

Beyond biological information, animals have brains that collect and use mental information in real time that is stored neurologically. And beyond that, humans can encode mental information as linguistic information or representational information. Linguistic information can either be in a natural language or a formal language. Natural languages assume a human mind as the IP, while formal languages declare the permissible terms and rules, which is most useful for logic, mathematics, and computers. Representational information simulates visual, audio or other sensory experience in any medium, but most notably nowadays in digital formats. And finally, humans create artificial information, which is information created by computer algorithms, most notably using machine learning. All of these forms of information, like biological information, answer questions or resolve uncertainties to inform a future situation. They do this by generalizing and applying nonphysical categorical criteria capable of distinguishing differences and similarities. Some of this information is inductive like biological information, but, as we will see, some of it is deductive, which expands the logical power of information.

We have become accustomed to focusing mostly on encoded information because it can be readily shared, but all encodings presume the existence of an IP capable of using them. For organisms, the whole body processes biological information. Brains (or technically, the whole nervous and endocrine systems) are the IP of mental information in animals. Computers can act as the IPs for formal languages, formalized representations, and artificial information, but can’t process natural languages or natural representational information. However, artificial information processing can simulate natural information processing adequately for many applications, such as voice recognition and self-driving cars. My point here is that encoded information is only an incremental portion of any function; it requires an IP to be realized as function. We can take the underlying IPs for granted for any purpose except understanding how the IP itself works, which is the point of this book. While we have perfect knowledge of how electronic IPs work, we have only a vague idea of how biological or mental information processors work.

Consider the following incremental piece of biological information. Bees can see ultraviolet light and we can’t. This fact builds on prevailing biological paradigms, e.g. that bees and people see light with eyes. This presumes bees and people are IPs for which living and seeing are axiomatic underlying functions. The new incremental fact tells us that certain animals, namely bees, see ultraviolet as well. This fact extends what we knew, which seems simple enough. A child who knows only that animals can see and bees are small flying animals that like flowers can now understand how bees see things in flowers that we can’t. A biologist working on bee vision needs no more complex paradigm than the child; living and seeing can be taken for granted axiomatically. She can focus on the ultraviolet part without worrying about why bees are alive or why they see. But if our goal is to explain bees or minds in general, we have to think about these things.

Our biological paradigm needs to define what animals and sight are, but the three philosophical quandaries of life cited above stand in the way of a detailed answer. Physicalists would say that lifeforms are just like clocks but more intricate. That is true; they are intricate machines, but, like clocks, an explanation of all their pieces, interconnections, and enabling physical forces says nothing about why they have the form they do. Living things, unlike glaciers, are shaped by feedback processes that gradually make them a better fit for what they are doing. Everything that happened to them back to their earliest ancestors about four billion years ago has contributed. A long series of feedback events created biological information by leveraging inductive logic rather than the laws of physics alone. Yes, biological IPs leverage physical laws, but they add something important for which the physical mechanisms are just the means to an end. The result is complex creations that have essentially a zero probability of arising by physical mechanisms alone.

How, exactly, do these feedback processes that created life create this new kind of entity called information and what is information made out of? The answer to both questions is actually the same definition given for information above: the reduction of uncertainty, which can also be phrased as an ability to predict the future with better odds than random chance. Information is made out of what it can do, so we are what we can do. We can do things with a pretty fair expectation that the outcome will align with our expectations of the outcome. It isn’t really predicting in a physical sense because we see nothing about the actual future and any number of things could always go wrong with our predictions. We could only know the future in advance with certainty if we had perfect knowledge of the present and a perfectly deterministic universe. But we can never get perfect knowledge because we can’t measure everything and because quantum uncertainty limits how much we can know about how things will behave. But biological information isn’t based on perfect predictions, only approximate ones. A prediction that is right more than it is wrong can arise in a physical system if it can use feedback from a set of situations to make generalized guesses about future situations that can be deemed similar. That similarity, measured any way you like, carries predictive information by exploiting the uniformity of nature, which usually causes situations that are sufficiently similar to behave similarly. It’s not magic, but it seems like magic relative to conventional laws of physics, which have no framework for measuring similarity or saying anything about the future. A physical system with this capacity is exceptionally nontrivial — living systems took billions of years to evolve into impressive IPs that now centrally manage their heritable information using DNA. Animals then spent hundreds of millions of years evolving minds that manage real-time information using neurochemistry. 
Finally, humans have built IPs that can manage information using either standardized practices (e.g. by institutions) or computers. But in each case the functional ratchet has acted to strongly conserve more effective functions, pulling evolution in the direction of greater functionality. It has often been said that evolution is “directionless”, because it seems to pull toward simplicity as much as toward complexity. As Christie Wilcox put it in Scientific American, “Evolution only leads to increases in complexity when complexity is beneficial to survival and reproduction. … the more simple you are, the faster you can reproduce, and thus the more offspring you can have. … it may instead be the lack of complexity, not the rise of it, that is most intriguing.”3 It is true, evolution is not about increasing complexity; it is about increasing functionality. Inductive trial and error always chooses more functionality over less, provided you define “more” as what induction did. In other words, it is a statistical amalgamation of successful performances where the criteria for success were each situation-specific.
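The earlier claim that similarity-based feedback yields better-than-random prediction can be sketched minimally. The “dark”/“light” feature and the 0.8 regularity below are invented for illustration; the learner merely tallies outcomes per category during a feedback phase and then predicts the majority outcome for each category, which is enough to beat chance wherever nature is uniform.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy world with a hidden regularity: "dark" situations usually lead to
# outcome 1, "light" ones to outcome 0. The 0.8 figure is invented.
def situation():
    feature = random.choice(["dark", "light"])
    usual = 1 if feature == "dark" else 0
    outcome = usual if random.random() < 0.8 else 1 - usual
    return feature, outcome

# Inductive learner: tally outcomes per feature category (the "criteria
# of categorization"), then predict the majority outcome for the category.
tallies = defaultdict(lambda: [0, 0])
for _ in range(1000):                       # feedback phase (trial and error)
    feature, outcome = situation()
    tallies[feature][outcome] += 1

def predict(feature):
    zeros, ones = tallies[feature]
    return 1 if ones > zeros else 0

hits = sum(predict(f) == o for f, o in (situation() for _ in range(1000)))
print(f"accuracy: {hits / 1000:.2f} (random guessing would give about 0.50)")
```

The learner never represents the regularity itself; its information exists only as better-than-random odds on future situations deemed similar.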

A functional entity has the capacity to do something useful, where useful means able to act so as to cause outcomes substantially similar to outcomes seen previously. To be able to do this, one must also be able to do many things that one does not actually do, which is to say one must be prepared for a range of circumstances for which appropriate responses are possible. Physical matter and energy are comprised of a vast number of small pieces whose behavior is relatively well-understood using physical laws. Functional entities are comprised of capacities and generalized responses based on those capacities. Both are natural phenomena. Until information processing came along through life, function (being generalized capacity and response) did not exist on earth (or perhaps anywhere). But now life has introduced an uncountable number of functions in the form of biological traits. As Eugene Koonin of the National Center for Biotechnology Information puts it, “The biologically relevant concept of information has to do with ‘meaning’, i.e. encoding various biological functions with various degrees of evolutionary conservation.”4 The mechanism behind each trait is itself purely physical, but the fact that the trait works across a certain range of circumstances is because “works” and “range” generalize abstract capacities, which one could call the reasons for the trait. The traits don’t know why they work, because knowledge is a function of minds, but their utility across a generalized range of situations is what causes them to form. That is why information is not a physical property of the DNA, it is a functional property.

Function starts to arise independent of physical existence at the moment a mechanism arises that can abstract from a token to a type, and, going the other way, from a type to a token. A token is a specific situation and a type is a generalization of that token which potentially includes other tokens that are deemed sufficiently similar according to some criteria. This act of abstraction is also called indirection and is used all the time in computers, for example to let variables hold quantities not known in advance. Just as we can build physical computers that can use indirection, biological mechanisms can implement indirection as well. I am not suggesting that types are representational; that is too strong a position. Information is necessarily “about” something else, but only in the sense that its collection and application must move between generalities and specifics. Inductive trial-and-error information doesn’t know it employs types because only minds can know things, but it does divide the world up this way. When we explain inductive information using knowledge, we are simplifying what is happening through analogy to cause-and-effect models even though they really use trial-and-error models. Cells have generalized approaches for moving materials across cell membranes which we can classify as taking resources in and expelling wastes, but the cells themselves don’t realize they have membranes. Sunlight is important to plants, so sunlight is a category plants understand, which is to say they are organized so as to gather sunlight well, e.g. many plants turn their leaves to the sun.
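The token-to-type indirection described above can be illustrated with a short sketch. The threshold, type names, and responses here are invented; the point is only that the generalized response is keyed to the type, never to any particular token, so endless distinct specifics can receive the same treatment.

```python
# A minimal sketch of token-to-type indirection. Each token is a specific
# reading; the type is a generality defined only by a criterion function.
def type_of(token_lumens):
    """Criterion of categorization: map a specific token to a general type."""
    return "sunlit" if token_lumens > 500 else "shaded"

# Generalized response keyed by type, not by any particular token.
response = {"sunlit": "turn leaves toward light", "shaded": "grow taller"}

for token in [120.0, 880.5, 619.2]:           # three distinct physical specifics
    t = type_of(token)                        # token -> type (abstraction)
    print(token, "->", t, "->", response[t])  # type -> treatment of the token
```

Nothing in the dictionary refers to any specific token; the type is the only handle, which is what makes it a generality rather than a physical thing.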

To clarify further, we can now see that function is entirely about generalities and their applications, while physical things are entirely about specifics. We can break a generality down into increasingly specific subcategories, but they will still always be generalities because they are still categories that could potentially refer to multiple physical things. Even proper nouns are still generalities. A given quark detected in a particle accelerator, or my left foot, or Paris refer to specific physical things, but do so in general ways and remain references to those things and not the things themselves.

Natural selection allows small functional changes to spread in a population, and these changes are accompanied by small DNA changes that caused them. The physical change to the DNA caused the functional change, but it is really that functional change that brought about the DNA change. Usually, if not always, a deductive cause-and-effect model can be found that accounts for most of the value of an inductive trial-and-error functional feature. For example, hearts pump blood because bodies need circulation. The form and function line up very closely in an obvious way. We can pretty confidently expect that all animals with hearts will continue to have them in future designs to fulfill their need for circulation. While I don’t know what genes build the circulatory system, it is likely that most of them have contributed in straightforward ways for millions of years.

Sex, on the other hand, is not as stable a trait. Sometimes populations benefit from parity between the sexes and sometimes from disproportionately more females. Having more females is beneficial during times of great stability, and having more males during times of change. I will discuss why this is later, but the fact that this pressure can change makes it advantageous sometimes for a new mechanism of sex determination to spring up. For example, all placental mammals used to use the Y chromosome to determine the sex of the offspring. Only males have it, but males also have an X chromosome. With completely random recombination, this means that offspring have a 50% chance of inheriting their father’s Y chromosome and being male. However, two species of mole voles, small rodents of Asia, have no Y chromosome, so males have XX chromosomes like females. We don’t know what trigger creates male mole voles, but a mechanism that could produce more than 50% females would be quite helpful to the propagation of polygamous mole vole populations, as some are, because there would be more reproducing (i.e. female) offspring.567 The exact reason a change in sex determination was more adaptive is not relevant; all that matters is that it was, and the old physical mechanism was simply abandoned. A physical mechanism is necessary, and so only possible physical mechanisms can be employed, but the selection between physical mechanisms is not based on their physical merits but only on their functional contribution. As we move into the area of mental functions, the link between physical mechanisms and mental functions becomes increasingly abstract, effectively making the prediction of animal behavior based on physical knowledge alone impossible. To understand functional systems we have to focus on what capacities the functions bring to the table, not on the physical means they employ.

Although we can’t define or distinguish a piece of information with perfect precision because it is a generality, many generalities hold nearly all of the time, which often makes it feasible to take informational facts as truths. It is not that we are confusing them with absolute truths; it is just that they hold with such high probability that they are certain for nearly all intents and purposes. For example, among biological facts, we know that bee eyes have cone cells with a photopigment that is most sensitive to 344nm light in the ultraviolet range and that human eyes do not. We know bees need to be good at discriminating flowers, which reflect many frequencies in the ultraviolet range. We can thus further generalize that bees see ultraviolet well so they can distinguish flowers better and be pretty sure we are right.89 Again, physical mechanisms are limiting factors, but not nearly so limiting as the constraint by natural selection to useful functions.

I have introduced the idea that information and the function it brings are the keys to resolving the three philosophical quandaries created by life. In the next chapter I will develop it into a comprehensive ontology that is up to the task of supporting the scientific study of all manner of things.

Dualism and the Five Levels of Existence

To review, an information processor or IP is a physical construction that manages (creates and uses) certain kinds of information. Information or function consists of nonphysical generalizations about physical or functional things that have been abstracted away from them using indirection, which creates the capacity to predict what will happen with better than random odds. This can also be called answering questions or resolving uncertainties. Information exists because it can be distinctly discriminated from other things that exist and can interact with them in various ways. Physical things exist specifically and functional things exist generally, but both can be distinguished and can form interactions. Consequently, we can conclude that interactionist dualism is true after all. The idea that something’s existence can be defined in terms of the value it produces is called functionalism. For this reason, I call my brand of interactionist dualism form and function dualism, in which physical substance is “form” and information is “function”. As an interactionist, I hold that form and function somehow interact in the mind. To understand how that works, I’m going to further distinguish five levels of understanding we can have for each of the two kinds of existence, only the first two of which apply to physical things:

Noumenon – the thing-in-itself. Keeps to itself.

Phenomenon – that which can be observed about a noumenon. Reaches out to others.

Perception – first-order information created by an information processor (IP) using inductive reasoning on phenomena received. Notices others.

Comprehension – second-order information created with deductive reasoning, usually by building on percepts. Understands others.

Metacognition – third-order information or “thoughts about thoughts”. Understands self.

We believe our senses tell us that the world around us exists. We know our senses can fool us, but by accumulating multiple observations using multiple senses, we build a very strong inductive case that physical things are persistent and hence exist. Science has increased this certainty many fold with instruments that are both immune to many kinds of bias and can observe things beyond our sensory range. Still, though, no matter how much evidence accumulates, we can’t know for sure that the world exists because it is out there and we are in here. But we can always suppose that something exists, and we call the existence of something independent of our awareness of it a noumenon, or thing-in-itself (what Kant called das Ding an sich). The only way we can ever come to know anything about these noumena is through phenomena, which are emanations from or interactions with a noumenon. Features of noumena that exhibit no phenomena are completely unknowable from a scientific standpoint, because science builds on verifiable interactions. Example phenomena include light or sound bouncing off an object, but can also include matter and energy interactions like touch, smell, and temperature.

Perception is the receipt of a phenomenon by a sensor and adequate accompanying information processing to create information about it. Physical things have noumena that radiate phenomena, but they never have perception since perception is information. A single percept is never created entirely from a single phenomenon; the capacity for perception must be built over billions of inductive trial-and-error interactions, as life has done. We notice a camera flash as a percept, but only because our brain evolved the capacity over millions of years to convert data into information. So if a tree falls in the forest and there was nobody to hear it, there was a phenomenon but no perception. Because IPs exploit the uniformity of nature, our perceptions can very accurately characterize both the phenomena we observe and the underlying noumena from which they emanate, even if complete certainty is impossible.

Perception includes everything we consciously experience without conscious effort, and includes sensory information about our bodies and the world, and also emotions, common sense, and intuition, which somehow bubble up into our awareness as needed. Experience and learning can improve our perception, extending the fixed natural part with a variable nurtured part that can help us get more out of our innate gifts. All information created by perception is first-order information because it is based on induction, which is the first kind of information one can extract from data. Inductive reasoning or “bottom-up logic” generalizes conclusions from multiple experiences based on similarities, a trial-and-error approach. Because all inherited genetic information is created inductively, I will use the term “percept” or “perception” to refer to any biological or mental information created inductively. Evolutionary processes don’t think, but they can be said to perceive benefits of inductive strategies and to save those percepts genetically. I have to generalize perception this way because the perceptive information of the mind combines mechanisms that slowly evolved with sensory information collected in real time to make the composite information we experience as perception. Our ability to see red is entirely genetic, but our experience of seeing red is real-time. Also, our ability to build a vast mental library of red things we have seen and to draw associations and intuitions about new red things we see by connecting them to red things we have seen before is entirely genetic, but we can also extend that learning past direct genetic consequences via comprehension and metacognition.

Comprehension goes beyond perception by using deductive reasoning to establish causes and effects, which gives us a deeper ability to see what will happen. I will decompose deductive reasoning more as the book unfolds, but for now, it is sufficient to say that deduction builds information from the other direction using “top-down logic” on abstract entities that can be mapped to bottom-up perceptions. We don’t comprehend our inductive faculties (senses, emotions, common sense, and intuitions) because we can just use them without concerning ourselves with what causes and effects might tell us. But we can build deductive explanations for them if we are interested in explaining how they work, which is, of course, a chief object of this book. Already, by characterizing them as inductive and breaking them down into categories, I have started to present a deductive model of them. The value that deduction brings to the table that induction lacks is implication or entailment, known formally as logical consequence. Provided you stick to the rules of the deductive system you set up, logical consequence tells you exactly what will happen with perfect foresight. And because causes and effects can be chained together, deduction can take you with certainty many steps further than induction, which can basically only reach probable one-step conclusions. I call the causes and effects of deduction second-order information because these kinds of models give us the sense that we understand or comprehend something we know, as opposed to the merely comfortable or familiar feeling of inductive first-order information. The limitation of deduction is that the physical world is not a theoretical model, so we have to find ways to line our models up with actual circumstances, which always involves some shoehorning (fitting something that does not easily fit), which we do with both inductive and deductive heuristics.
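The chaining power of logical consequence can be pictured with a trivial sketch (my own invented example, not a formal logic engine): given rules mapping causes to effects, conclusions follow with certainty for as many steps as the rules allow:

```python
# Toy deductive chain: each rule maps a cause to its effect, and logical
# consequence lets us follow the chain many steps with certainty.
rules = {
    "rain": "wet ground",
    "wet ground": "slippery paths",
    "slippery paths": "slow walking",
}

def chain(fact, rules):
    # Follow cause-and-effect links until no rule applies
    steps = [fact]
    while steps[-1] in rules:
        steps.append(rules[steps[-1]])
    return steps

assert chain("rain", rules) == ["rain", "wet ground", "slippery paths", "slow walking"]
```

Within the model, every step is certain; the shoehorning only begins when we ask whether real rain and real ground actually behave like their abstract stand-ins.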

Metacognition is thoughts about thoughts, or, more specifically, deductive reasoning about thoughts. Comprehension is a first-order use of deductive reasoning in which the premises can be mapped to inductive entities, while metacognition is a higher-order use of deductive reasoning in which the premises are abstractions based on conclusions reached by lower-order uses of deductive reasoning. Most significantly for us as people, when we posit the existence of our own self, or any of our interests or thoughts as the starting points of further deduction, we have moved into the realm of metacognition. We can then start to deduce cause and effect relationships and logical chains of reasoning about these abstract entities. Metacognition thus expands our realm of comprehension from matters of immediate relevance to matters abstracted one or more levels away. It extends our reach from physical reality to unlimited imagination. I call the abstractions of metacognition third-order information because this move to arbitrary degrees of indirection unlocks new kinds of explanatory power. While second- and third-order information can increase the functional range of an animal, it also creates new challenges for evolution to keep this information focused on evolutionary needs. To meet these challenges, humans evolved a much wider range of emotional responses to channel mental powers productively. Both comprehension and metacognition heavily leverage our innate talents, so I am not suggesting they operate independently of them, but what is interesting about them is that they can do things that perception can’t.

Noumena and phenomena just happen. That is, they are not functional in that they do nothing to influence future events. While physical things are necessarily noumenal and phenomenal, functional things can take on these roles as well. Viewed externally, a functional thing like a thought or a concept has a noumenon — which is the function the IP derives from it. The IP may or may not know the exact extent of that function, but another IP external to it can only learn about that noumenon through its phenomena. The phenomena of a functional noumenon are not physical observations, of course, because it has no physical substance; instead, they are observations of its functions. For example, a calculator is an IP with a multiplication button. Not knowing anything about how the calculator works, you can observe what it does. You may develop a theory based on using it many times that it does repeated additions. In this case, we know that you would be right and that you will have discovered the actual noumenon of the button, but you still can’t prove it because it may, for example, be programmed to do division instead in leap years and you wouldn’t know that until the next leap year.
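The calculator example can be put in code (a hypothetical toy, not any real device): the button’s noumenon is repeated addition, but an outside observer only ever sees its phenomena, the input/output pairs:

```python
class Calculator:
    """A toy IP whose multiply "button" is secretly repeated addition."""
    def multiply(self, a, b):
        # The hidden noumenon: repeated addition
        # (assumes b is a non-negative integer, for simplicity)
        total = 0
        for _ in range(b):
            total += a
        return total

calc = Calculator()
# Phenomena: all an external observer can collect are input/output pairs.
observations = {(a, b): calc.multiply(a, b) for a in range(5) for b in range(5)}
# Every observation supports the theory "it multiplies", but the observations
# alone can never prove what the internal mechanism actually is.
assert all(result == a * b for (a, b), result in observations.items())
```

No number of observations distinguishes this implementation from one that, say, switches to a different mechanism under conditions not yet tested, which is exactly the leap-year caveat above.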

When our concepts are based on formal models like mathematics, we have access to their actual noumena because we defined them. In this case, all their logical implications are also noumenal by definition. But the implications of many formal models can be too complex for us to reason out (i.e. prove), so we may instead opt to gather information about them by induction. If we can run the model on a computer, we can do this by running millions of simulations and analyzing the results for patterns. If we run logical models in our minds, this often includes applying approximate or intuitive reasoning because we have not worked out, or don’t want to work out, the details of the model to the nth degree. In practice, then, much of our knowledge of purely deductive models comes from phenomenal perception of them, which means it is suspected to be true but not known to be true.
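Here is a small sketch of that inductive route into a formal model (my own toy example): the behavior of two dice is fully deducible, but we can also learn about it phenomenally, by running many simulated trials:

```python
import random

random.seed(0)  # make the sketch repeatable

# Formal model: the sum of two fair dice. Its exact behavior is provable
# by deduction, but here we study it inductively, by simulation.
def trial():
    return random.randint(1, 6) + random.randint(1, 6)

N = 100_000
estimate = sum(trial() == 7 for _ in range(N)) / N  # inductive: suspected to be true
exact = 6 / 36                                      # deductive: known to be true
assert abs(estimate - exact) < 0.01
```

The estimate converges on the deduced value but never attains its certainty, which is the sense in which simulated knowledge of a formal model remains perceptual rather than proven.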

Evolved functions can also be said to have noumena that blend all the inductive trials that went into forming them. Although any attempt to capture all that in a deductive cause-and-effect model is necessarily a simplification, we can use such simplifications to better understand the noumena, with the understanding that we are only characterizing the underlying function and not fully grasping it. Daniel Dennett calls explanations of evolved noumena “free-floating rationales”.1

This is a great way of putting it, because it emphasizes that the underlying logic is not dependent on anything physical, which is essential to understanding the nature of function. All functional noumena are necessarily free-floating in the sense that they don’t have to be implemented to exist; they embody logical relationships whether anyone knows it or not. But we focus nearly all our attention on function that is also implemented, meaning it has a physical manifestation through an IP, because information processing can’t be conducted in the abstract. Function that is expressed physically is pragmatic, an idea I will get back to later.

Perception, comprehension, and metacognition, on the other hand, don’t “just happen”, but are the outputs of information processing by IPs that have either evolved or been designed to process information in specific ways. Those that evolved rather than being designed used perceptive feedback loops over many cycles to refine the functional qualities of the information. The information tends to become ever more “informative” over time because feedback preferentially conserves function as it goes, creating a functional ratchet. It does not matter that function is sometimes lost, either inadvertently due to accident or drift, or because the functional needs of the IP’s niche shift and a function that was useful before is no longer needed. What matters is that function is always useful to IPs and they will always act to increase the function they need at any given point in time, whether that increase calls for a more complex mechanism or a more streamlined or simple one. There is no evidence that inherited information processing (i.e. information in DNA) has any comprehension or metacognition. I am fairly sure nobody has proposed a plausible mechanism whereby deductive models could be leveraged by evolutionary mechanisms, let alone self-reflective ones. I think that idea goes too far, but I do think that inductive methods are capable of much more “clever” behavior than current evolutionary theory imagines, and I will discuss this later on.
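The functional ratchet can be sketched as the simplest possible feedback loop (an illustrative hill-climber with an invented fitness function, not a model of natural selection): blind variation plus feedback that conserves whatever works at least as well as before:

```python
import random

random.seed(1)  # make the sketch repeatable

def fitness(x):
    # An arbitrary "niche": function peaks at x = 3.0
    return -(x - 3.0) ** 2

best = 0.0
for _ in range(10_000):
    candidate = best + random.uniform(-0.1, 0.1)  # blind trial
    if fitness(candidate) >= fitness(best):       # error detection (feedback)
        best = candidate                          # the ratchet: conserve the gain
assert abs(best - 3.0) < 0.05
```

No step in the loop comprehends anything; the accumulation of function comes entirely from the asymmetry of keeping gains and discarding losses.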

Perception, comprehension, and metacognition are purely functional modes of existence, so function can be thought of as being three levels deeper than physical existence. While this doesn’t make it “better” than physical existence, the concept of “better” (and all concepts for that matter) is strictly functional. Since our entire concept of the world depends on only these three upper levels, we don’t actually need the physical world. We could happily live our lives entirely within a computer simulation if it did a good enough job. Scientific experiments within the simulation would arguably expose differences between physical and simulated reality, but it would not be necessary to lie to simulants or prevent all access to the physical world. Simulation could just be used as a more affordable way to extend reality. Though minds can potentially run on different platforms, information processing will always require IPs implemented in the physical world.

Let’s review our three quandaries in the light of form and function dualism. First, the origin of life. Outside of life, phenomena naturally occur and explanations of them comprise the laws of physics, chemistry, materials science and all the physical sciences. These sciences work out rules that describe the interactions of matter and energy. They essentially define matter and energy in terms of their interactions without really concerning themselves with their noumenal nature. As deductive explanations, they are based in the functional world of comprehension and draw their evidence from our perception of phenomena. While the target is ultimately the true noumena of nature, we realize that models are functional and not physical, and also only approximations, even if nearly perfect in their accuracy. With the arrival of life, a new kind of existence, a functional existence, arose when the feedback loops of natural selection developed perception to find patterns in nature that could be exploited in “useful” ways. The use that concerns life is survival, or the propagation of function for its own sake, and that use is sufficient to drive functional change. But perception forms its own rules transcendent to physical laws because it uses patterns to learn new patterns. The growth of patterns is directed toward ever greater function because of the functional ratchet. It exploits the fact that appropriately-configured natural systems are shaped by functional objectives to replicate similar patterns, and not just by physical laws indifferent to similarity.

Next, let’s consider the mind-body problem. The essence of this problem is the feeling that what is happening in the mind is of an entirely different quality than the physical events of the external world. Form and function dualism tells us that this feeling reflects actual underlying natural entities, some of which are physical and some functional. Specifically, the mind is entirely concerned with functional entities and the external physical world is entirely concerned with physical entities, except that other living things are themselves both functional and physical. This division is not reducible, contrary to what physicalists would have us believe, because function is concerned with and defined by what is possible, and the realm of the possible is entirely outside the scope of mere physical things. While function doesn’t reduce to the physical, it does depend on the brain. The mind is a natural entity comprised of a complex of functional capacities implemented using the physical machinery of the brain. So the mind can be said to have both a functional aspect and a physical aspect. Since the mind is the subset of brain processing that we associate with consciousness and experience, which is arguably a small subset of the large amount of neural processing that happens outside our conscious awareness, it is quite relevant to discuss the physical nature of the mind in terms of the subset of brain functions directly associated with consciousness. But one can also talk about the functional aspect in isolation from the IP that physically supports it. Although this aspect is just an abstraction in the sense that it needs the brain to support it, as an abstraction it can be thought of as entirely immaterial, our “soul”, if you like.
This view of the soul is not supernatural; it just distinguishes function from form, and, more to the point, higher-level functions from lower-level functions, the latter being the essential activities of survival, like sleeping and eating, of which we have conscious awareness but which don’t define our long-term objectives.

Finally, let’s look at the explanatory gap, which is about explaining with physical laws why our senses and emotions feel the way they do. I said this gap would evaporate with an expanded ontology. By recognizing functional existence as real, we can see that it opens up a vastly richer space than physical existence because it means anything can be related to anything in any number of ways. The world of imagination is unbounded, while the physical world is closely ruled by rather rigid laws. The creation of IPs that can first generalize inductively (via evolution of life and minds) and then later deductively and metacognitively (via further evolution of minds) gave them increasing degrees of access to this unbounded world. The functional part alone is powerless in the physical world; it needs the physical manifestation of the IP and its limbs (manipulative extremities) to impact physical things; there is nothing spectral going on here. Physical circumstances are always finite and so IPs are finite, but their capacities are potentially unlimited because capacities are general and not constrained to handle only specific circumstances. So to close the explanatory gap and explain what it means to feel something, we should first recognize that the scope of feeling, experience, and understanding was never itself physical; it was a functional effect within an IP. So what happens in the IP to create feelings?

I’m just going to say the answer here and develop and support it in more detail later on. The role of the brain is to control the body in a coordinated way, and as a practical matter it solves this using a combination of bottom-up and top-down information processing. These two styles, which have to meet somewhere in the middle, are known most colloquially as unconscious and conscious. The role of consciousness is to focus specifically on top-level problems that the unconscious can’t handle by itself, which is actually a very small fraction of the matters that come under consideration. The way the brain presents information to the conscious mind so that it can do this job is by creating a theater of consciousness, which, like a movie theater, is a highly produced and streamlined version of all the information processed by the unconscious mind. What we think of as pain or any experienced feeling is really just part of the user interface between the unconscious and conscious processes in the brain. The essence of conscious experience is an awareness of the world and the passage of time, a sense of our bodies and the external world, and emotions about our conscious states. These things are functions of consciousness that result from its role as the top-level controller. We only see consciousness and the experiential feelings unique to it as separate from the physical world because they are functional and function is separate from physical. We focus on them above all other brain functions because we, as conscious beings, only have privileged access to a small subset of the functions in the brain, and this interface between unconscious and conscious is at the crux of that. While not magic, the interface is set up to create an equivalence between perception and physical reality that is as seamless as possible, which is the role of the magician as well, so it seems like magic.
There are reasons why things feel precisely the way they do which I will explore later on.

To summarize my initial defense of dualism, I have proposed that form and function, also called physical and functional existence, encompass the totality of possible existence. We have evidence of physical things in our natural universe. We could potentially someday acquire evidence of other kinds of physical things from other universes, and they would still be physical, but they may produce different measurements that suggest an entirely different set of physical laws. Functional existence needs no time or space, but for physical creatures to benefit from it, there must be a way for functional existence to manifest in a natural universe. Fortunately, the feedback loops necessary for that to happen are physically possible and have arisen through evolution, and have then gone further to develop minds which can not only perceive, but can also comprehend and reflect on themselves. Note that this naturalistic view is entirely scientific, provided one expands the ontology of science to include functional things (which I will do in more detail later), and yet it is entirely consistent with both common sense and conventional wisdom, which hold that “life force” is something fundamentally lacking in inanimate matter. We also see evidence of that “life force” in human artifacts because what we really sense is order that suggests a functional origin. Life isn’t magic, but some of its noumenal mystery is intrinsically beyond any full understanding. But our understanding of life and the mind through closer and closer approximation from deductive cause and effect models will continue to grow, and its certainty will eventually rival our understanding of non-living or prebiotic matter using the physical sciences.

Hey, Science, We’re Over Here

Between the physical view that the mind is a machine subject entirely to physical laws and the ideal view that the mind is a transcendent entity unto itself that exists independently of the body, science has come down firmly in the former camp. This is understandable considering we have unequivocally established that processes in the brain create the mind, although just how this happens is still not known. The latter camp, idealism, in its most extreme form is called solipsism, the idea that only one’s own mind exists and everything else is a figment of it. Most idealists don’t go quite that far and will acknowledge physical existence but still claim that our mental states like capacities, desires, beliefs, goals, and principles are more fundamental than concrete reality. So idealists are either mental monists or mental/physical dualists. Our intuition and language strongly support the idea of our mental existence independent of physical existence, so we consequently take mental existence for granted in our minds and in discourse. The social sciences also start from the assumption that minds exist and go from there to draw out implications in many directions. But none of this holds much sway with physicalists, who, taking the success of physical theories as proof that physical laws are sufficient, have found a number of creative ways to discount mental existence. Some hold that there are no mental states, just brain states (eliminativism), while others acknowledge mental states, but say they can be viewed as or reduced to brain states (reductionism). Eliminativism, also called eliminative materialism (though it would be more accurate to call it eliminative physicalism to include energy and spacetime), holds that physical causes work from the bottom up to explain all higher-level causes, which will ultimately demonstrate that our common-sense or folk psychology understanding of the mind is false. Reductionism seems to hold outside of life.
We can predict subatomic and atomic interactions using physics, and molecular interactions using chemistry. Linus Pauling’s 1931 paper “On the Nature of the Chemical Bond” showed that chemistry could in principle be reduced to physics12. But reductionism with regard to the mind holds that although our common wisdom about the mind is essentially valid, like chemical bonds it can ultimately be reworded in terms of underlying physical causes. Both eliminativism and reductionism build on the incorrect assumption that information and its byproducts don’t fundamentally alter the range of natural possibility. This assumption, formally called the Causal Closure of the Physical, states that physical effects can only have physical causes. Stated correctly, it would say that physical effects can only have natural causes, and it would recognize that information can be created and have effects in a natural world.

The ontology I am proposing is form and function dualism. This is the idea that the physical world exists exactly as the physicalists have described, but also that life and the mind have capacities or functions, which are entirely natural but cause things that would not otherwise happen. It is easy to get confused when talking about function because we often use the same kind of language to describe specific events and general capacities. Nearly all discussions on the subject inadvertently conflate the two, which blocks understanding. The following table helps distinguish terms for characterizing physical and functional concepts:

form                      function, capacity, information
specific, particular      general, similar
concrete                  abstract
token, instance           type, category, classification
action                    performance, behavior, maneuver
Although a single application of function in any specific physical situation can be interpreted as being entirely physical, the capacity for the function is an abstract property that exists independent of any physical uses. This kind of capacity exists whether lifeforms or minds exploit it or not, but when life and minds do exploit it, it opens up a functional realm with very physical implications. The idea that nonphysical information can cause a physical impact is called downward causation34, an idea that directly rejects reductionism and asserts emergence5. Downward causation is a functional phenomenon whose underlying physical mechanism is statistical. When information is applied, which results in downward causation, the effect is specific, but it is also a special case of a general rule characterizing things that happen in similar situations. Events happen that are generally similar to prior events, but physically all that is happening is a set of specific events because similarity means nothing from a physical standpoint. Functionally, it looks like things that have happened “before” are happening “again”, but nothing in the universe ever happens twice. Something entirely new has happened that has similarities to something that happened before. We must not conflate things happening in general with things happening specifically. As soon as we even speak of things happening in general, we have admitted the existence of function. Our minds and our language are very function-oriented, so this comes so naturally to us, it can be hard to separate functional ideas from non-functional ones, but it is always possible. And aside from the fact that we have words for concrete, physical things and words for functional, abstract things, nouns of any kind may be specific or general in that they can refer to a specific thing or a class of things. Wording and context help differentiate these cases. 
For example, “I have a green car” refers to a specific physical object, while “I am going to get a green car” refers to a categorical, functional thing. Of course, I used a categorical reference to refer to a single car, and all words, even proper nouns, are functional references, so the difference between specific and general is whether the referent is a single physical object. This distinction between specific and general is at the heart of function because the way function works is to “extract” order from the uniformity of nature by identifying and then capitalizing on generalities. Whenever information processors (IPs) collect or apply information, they are contributing to the emergence of function in a physical world. The word “emergence” suggests that something comes out of nothing, which is just what happens when information is created; that something is function, or information creation. It is not magic, it is not inexplicable, and it is not a consequence of complexity; it is just the result of feedback loops exploiting the uniformity of nature. As Bob Doyle puts it, “Some biologists (e.g., Ernst Mayr) have argued that biology is not reducible to physics and chemistry, although it is completely consistent with the laws of physics. … biological systems…process information at a very fine (atomic/molecular) level. Information is neither matter nor energy, but it needs matter for its embodiment and energy for its communication.”6

Downward causation can be called an “interaction” between life and body or between mind and body because life and mind affect the body and vice versa. The physical still remains entirely physical and the functional entirely nonphysical; it is just that natural systems can use feedback loops to create functional effects in a natural universe. One could arguably just say that function is a subset of what is possible in the natural universe and that it is hence a legitimate extension of what we should call “physical”, but if so then this is an extension that doesn’t use matter or energy but just information. The same information can exist in multiple forms (called multiple realizability) and is inherently abstracted from physical forms because it consists of generalizations and not any physical form it might leverage. While matter and energy are conserved in that they can neither be created nor destroyed (though quantum effects challenge this a bit), information is inherently infinite. The amount of information captured by IPs in the universe is finite but can grow over time without bound.7

Although physicalism specifically rejects the possibility of anything existing beyond the physical world as characterized by the laws of physics, in so doing it overlooks unexpected consequences of feedback. The real spirit of physicalist philosophy is naturalism, which says that everything arises from natural causes rather than supernatural ones. My point is that naturalism is best supported not by a monism of form but by a dualism of form and function. Physicalists are not blind and see the functional effects of life and the mind (and, not incidentally, make all their arguments using their minds), but they conflate biological information with complex physical structure, and they are not the same. Just because we can collect all sorts of physical, chemical, and structural information about both nonliving and living matter does not mean they are the same sort of thing. As I said earlier, rocks have a complex physical structure but contain no information. We collect information about them using physical laws and measurements that help us describe and understand them, but their structure by itself is information-free. Weather patterns are even more complex and also chaotic, but they too contain no information, just complex physical structure. We have devised models based on physical laws that are pretty helpful at predicting the weather. But the weather and all other nonliving systems don’t control their own behavior; they are reactive and not proactive. Living things introduce functions or capabilities built from information generated from many feedback experiments and stored in DNA. This is a fundamental, insurmountable, and irreducible difference between the two. Living things are still entirely physical objects that follow physical laws, but when they use stored or newly created information they are essentially triggering physical events through feedback that would not happen without that feedback.
An infinite door of indirection opens that leverages the almost infinite variety of biochemical mechanisms that carbon chemistry make possible. If carbon had slightly different chemical properties, the biochemical complexity necessary to allow feedback to create self-replication would probably not exist. But it does, and so life and the mind are able to implement highly customized IPs that have created a huge pool of functionality.

What I am calling form corresponds to what Plato and Aristotle called the material cause, which is its physical substance, and what I am calling function corresponds to their final cause, or telos, which is its end, goal, or purpose. They understood that while material causes were always present and hence necessary, they were not sufficient to explain why many things were the way they were. The idea that one must invoke a final cause or purpose to fully explain why things happen is called teleology. Aristotle expanded on this to identify four kinds of causes that resolve different kinds of questions about why changes happen in the world. Of these, material, formal, efficient, and final, I have discussed the first and last. The formal cause is based on Plato’s closely-held notion of universals, the idea that general qualities or characteristics of things are somehow inherent in them, e.g. that females have “femaleness”, chairs have “chairness”, and beautiful things have “beauty”. While the Greeks clung to the idea that universals were intrinsic, William of Ockham put metaphysics on a firmer footing in the 13th century by advocating nominalism, the view that universals are extrinsic, i.e. that they have no existence except as classifications we create in our minds. While classifications are a critical function of the mind, I think everyone would now agree that we can safely say formal causes are descriptive but not causative. The efficient cause is what we usually mean by cause today, i.e. cause and effect. The laws of physicalism all start with matter and energy (no longer considered causative, but which simply exist) and then provide efficient causes to explain how they interact to bring about change. A table is thus caused to exist because wood is cut from trees and tools are used in a sequence of events that results in a table.

Although Aristotle could see that these steps had to happen to create a table, it doesn’t explain why the table was built. The telos or purpose of the table, and the whole reason it was built, is so that people can use it to support things at a convenient height. Physicalists reject this final, teleological cause because they see no mechanism — how can one put purpose into physical terms? For example, physical objects sink to lower places because of gravity, not because it is their purpose or final cause. This logic is sound enough for explaining gravity, but it doesn’t work at all for tables, and, as I have mentioned, it doesn’t work for anything involving life and the mind in general. So was it really reasonable to dispense with the final cause just because it wasn’t understood? How did such a non-explanatory stance come to be the default perspective of science? To see why, we have to go back to William of Ockham. 1650 years after Aristotle, William of Ockham laid the groundwork for the Scientific Revolution, which would still need another 300 years to get significantly underway. With his recognition that universals were not intrinsic properties but extrinsic classifications, Ockham eliminated a mystical property that was impeding understanding of the prebiotic physical world. But he did much more than identify the mind as the source of formal causes; he explained how the mind worked. Ockham held that knowledge and thought were functions of the mind which could be divided into two categories, intuitive and abstractive.8 Intuitive cognition is the process of deriving knowledge about objects from our perceptions of them, which our minds can do without conscious effort. Abstractive cognition derives knowledge from positing abstract or independent properties about things and drawing conclusions about them. Intuitive knowledge depends on the physical existence of things, while abstractive knowledge does not, but can operate on suppositions and hypotheses.
Ockham further asserted that intuitive knowledge precedes abstractive knowledge, which means all knowledge derives from intuitive knowledge. Since intuitive knowledge is fundamental, and it must necessarily be based on actual experience, we must look first to experience for knowledge and not to abstract speculation. Ockham can thus be credited with introducing the now ubiquitous notion that empiricism — the reliance on observation and experiment in the natural sciences — is the foundation of scientific knowledge. He recognized the value of mathematics (i.e. formal sciences) as useful tools to interpret observation and experiment, but cautioned that they are abstract and so can’t be sources of knowledge of the physical world in their own right.

Francis Bacon formally established the paramountcy of empiricism and the scientific method in his 1620 work, Novum Organum. Bacon repeatedly emphasized that only observation with the senses can be trusted to generate truth about the natural world. His Aphorism 19, in particular, dismisses ungrounded, top-down philosophizing and endorses grounded, bottom-up empiricism:

“There are and can only be two ways of investigating and discovering truth. The one rushes up from the sense and particulars to axioms of the highest generality and, from these principles and their indubitable truth, goes on to infer and discover middle axioms; and this is the way in current use. The other way draws axioms from the sense and particulars by climbing steadily and by degrees so that it reaches the ones of highest generality last of all; and this is the true but still untrodden way.”

Bacon built on Ockham’s point that words alone could be misleading by citing a number of biases or logical fallacies that can easily permeate top-down thinking and obscure what is really happening. Specifically, he cited innate bias, personal bias, and rhetorical biases (among which one could include traditional logical fallacies like ad hominem, appeal to authority, begging the question, etc.).

Bacon didn’t dispense with Aristotle’s four causes but repartitioned them into two sets. He felt that physics should deal with material and efficient causes while metaphysics should deal with formal and final causes.9 He then laid out the basic form of the scientific method. The objective is to find and prove physical laws, which are formal causes that are universal. While Ockham had rejected such abstractions, Bacon accepted them, but deemed legitimate only those that were demonstrable by his method. Using the example of heat as a formal cause, he recommended collecting evidence for and against, i.e. listing things with heat, things without, and things where heat varies. Comparative analysis of the cases should then lead to a hypothesis of the formal cause of heat. Bacon could see that further cycles of observation and analysis could inductively demonstrate universal natural laws, and he attempted to formalize the process, but never quite finished that work. Even now people would disagree that the scientific method has a precise form, but would agree that it depends on iterative observation and analysis. Bacon had little to say about the final cause because it was the least helpful to his inductive method, and in any case could easily be perverted by bias to lead away from the discovery of the efficient causes that underlie formal causes. Ultimately, the success of the inductive, physicalist approach since Bacon and the inability of detractors to refute its universal scope have led to the outright rejection of teleology as an appeal to mysticism when physical laws seem to be sufficient.
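Bacon’s comparative tables amount to a small eliminative-induction procedure, and they can be sketched in a few lines of code. The instances and features below are toy data chosen purely for illustration, not Bacon’s actual tables:

```python
# Toy sketch of Bacon's comparative method: keep only candidate "formal
# causes" that appear in every instance that has heat and in no instance
# that lacks it. The instance data is illustrative, not historical.
present = [
    {"motion", "light"},       # flame
    {"motion", "friction"},    # rubbed hands
    {"motion", "pressure"},    # hammered metal
]
absent = [
    {"light"},                 # moonlight: bright but cold
    {"pressure"},              # still water under pressure
]

candidates = set.intersection(*present)   # common to all hot things
for features in absent:
    candidates -= features                # eliminated by the cold things

print(candidates)
```

On this toy data only motion survives the elimination, which happens to match Bacon’s own conclusion that heat is a species of motion.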

We are now quite comfortable with the idea that all our knowledge of the physical world must derive only from observations of it and not suppositions about it. And we concur with Ockham that our primary knowledge of the physical world is inductive, but that secondary abstractive knowledge can group that knowledge into classifications and rules which provide us with causative explanatory power. We recognize that our explanations are extrinsic to the fabric of reality but are nevertheless very effective. However, this shift away from the more magical thinking of the ancients (not to mention the Christian idea that God designed everything) blinded us to something surprising that happens in biological systems and even more significantly in brains: the creation of function. Function is in many ways a subtle phenomenon, and this is why it has been overlooked or underappreciated. Function is nothing specific you can point to; it is entirely about generality.

In supposing that knowledge must originate inductively, Ockham and Bacon inadvertently put a spotlight on direct natural phenomena. How could they have known, how could anyone know, that indirect natural phenomena would play a critical role in the development of life and then the brain? Darwin, of course, figured it out by process of elimination, but that is not the same thing as recognizing the source of the power of indirection. Aristotle had already pointed out that every phenomenon had an efficient cause, so of course some sequence of events must have caused life to arise, and Darwin put the pieces together to propose a basic strategy for it to start from nothing and end up where it is now. The events that power natural selection are, taken individually, entirely physical, and so it seems natural to assume that the whole of the process is entirely physical. But this assumption is a fundamental mistake, because natural selection is only superficially physical. The specific selection events of evolution don’t matter; what matters is how they are interpreted or applied in a general way so as to influence future similar events. What natural selection really does, by collecting evidence of a mechanism’s value across a series of events, is justify the conclusion that the mechanism holds an indirect or general power across an abstract range of situations.

Rene Descartes tried to unravel function, but, coming long before Darwin, he could see no physical source and resorted to conjecture. As I mentioned before, he proposed a mental substance that interacted with the physical substance of the brain in the pineal gland. This is a wildly inaccurate conclusion which has only served to accentuate the value of experimental research over philosophy, but it is still true that knowledge is a nonphysical capacity of the brain whose functional character physical science has not yet attempted to explain. But Descartes’ mistaken assumptions and the rise of monism have led to a concomitant fall in the popularity of all stripes of dualism, even to the point where many consider it a proven dead end. Gilbert Ryle famously put the nail in the coffin of Cartesian dualism in The Concept of Mind10 in 1949. We know (and knew then) that Descartes’ mental “thinking substance” does not exist as a physical substance, but Ryle felt it still had tacit if not explicit “official” support. He felt we officially or implicitly accepted two independent arenas in which we live our lives, one of “inner” mental happenings and one of “outer” physical happenings. This view goes all the way down to the structure of language, which has a distinct vocabulary for mental things (using abstract nouns which denote ideas or qualities) and physical things (using concrete nouns which connect to the physical world through senses). As Ryle put it, we have “assumed that there are two different kinds of existence or status. What exists or happens may have the status of physical existence, or it may have the status of mental existence.” He disagreed with this view, contending that the mind is not a “ghost in the machine,” something independent from the brain that happens to interact with it. 
To explain why, he introduced the term “category mistake” to describe a situation where one inadvertently assumes something to be of one category when it actually belongs to another. His examples focused on parts not being the same sort of thing as wholes, e.g. someone expecting to see a forest but being shown some trees might ask, “But where is the forest?”. In this sort of example, he identified the mistake as arising from a failure to understand that forest has a different scope than tree.11 He then contended that the way we isolate our mental existence from our physical existence was just a much larger category mistake which happens because we speak and think of the physical and the mental with two non-intersecting vocabularies and conceptual frameworks, yet we assume it makes sense to compare them with each other. As he put it, “The belief that there is a polar opposition between Mind and Matter is the belief that they are terms of the same logical type.” Ryle advocated the eliminativist stance: if we understood neurochemistry well enough, we could describe the mechanical processes by which the mind operates instead of using mental words like “think” and “feel”.

But Ryle was more mistaken than Descartes. His mistake was in thinking that the whole problem was a category mistake, when actually only a superficial aspect of it was. Yes, it is true, the mechanics of what happens mentally can be explained in physical terms because the brain is a physical mechanism like a clock. So his reductionist plan can get us that far. But that is not the whole problem, and it is not the part that interested Descartes or that interests us, because saying how the clock works is not really the interesting part. The interesting part is the purpose of the clock: to tell time. Why the brain does what it does cannot be explained physically because function is not physical. The brain and the mind exist to control the body, but that function is not a physical feature. One can tell that nerves from the brain animate the hands, but one must invoke the concept of function to see why. As Aristotle would say, material and efficient causes are necessary but not sufficient, which is why we need to know their function. Ryle saw the superficial category mistake (forgetting that the brain is a machine) but missed the significant categorical difference (that function is not form). So, ironically, his argument falls apart due to a category mistake, a term that he coined.

Function can never be reduced to form because it is not built from subatomic particles; it is built from logic to characterize similarities and implications. It is true that function can only exist in a natural universe by leveraging physical mechanisms, but this dependency doesn’t mean it doesn’t exist. All it means is that nature supports both generalized and specific kinds of existence. We know the mind is the product of processes running in the brain, just as software is the product of signals in semiconductors, but that doesn’t tell us what either is for. Why we think and why we use software are both questions the physical mechanisms are not qualified to answer. Ryle concluded, “It is perfectly proper to say, in one logical tone of voice, that there exist minds and to say, in another logical tone of voice, that there exist bodies. But these expressions do not indicate two different types of existence, for ‘existence’ is not a generic word like ‘colored’ or ‘sexed’.” But he was wrong because there are two different kinds of existence, and living things exhibit both. Information processors have a physical mechanism for storing and manipulating information and use it to deliver functionality. For thinking, the brain, along with the whole nervous and endocrine systems, is the physical part and the mind is the functional part. For living things, the whole metabolism is the physical part and behavior is the functional part. This is the kind of dualistic distinction Descartes was grasping for. While Descartes overstepped by providing an incorrect physical explanation, we can be more careful. The true explanation is that functional things are not physical and their existence is not dependent on space or time, but they can have physical implementations, and they must for function to impact the physical world.

The path of scientific progress has understandably influenced our perspective. The scientific method was designed to unravel mysteries of the natural world, and was created on the assumption that fixed natural laws govern all natural activity. Despite his advocacy of dualism, Descartes promoted the idea of a universal mechanism behind the universe and living things, and his insistence that matter should be measured and studied mathematically as an extension of what we now call spacetime helped found modern physics: “I should like you to consider that these functions (including passion, memory, and imagination) follow from the mere arrangement of the machine’s organs every bit as naturally as the movements of a clock or other automaton follow from the arrangement of its counter-weights and wheels.” 12 He only invoked mental substance to bridge the explanatory gap of mental experience. If we instead identify the missing piece of the puzzle as function, then we can see that nature, through life, can “learn things about itself” using feedback to organize activities in a functional way we call behavior. Behavior guides actions through indirect assessments instead of direct interactions, which changes the rules of the game sufficiently to call it a different kind of existence.

Darwin described how indirect assessments could use feedback to shape physical mechanisms, but he didn’t call out functional existence specifically, and, in the 150 years since, I don’t think anyone else has either. But if this implies, as I am suggesting, that the underlying metaphysics of biology has been lacking all this time, then we have to ask ourselves what foundation it has been built on instead. The short answer is a physicalist one. Both before and after Darwin, traits were assumed to have a physical explanation, and they are still mostly thought to be physical today. And because function does always leverage a physical mechanism, this is true, but, as Aristotle said in the first place, it is not sufficient to tell us why. But if biologists honestly thought only in terms of physical mechanisms, they would have made very little progress. After all, we still have no idea, except by gross analogies to simple machines like levers, pipes, and circuits, how bodies work, let alone minds. Biology, as practiced, makes observations of functioning biological mechanisms and attempts to “reverse engineer” an explanation of them to create a natural history. Much of what is to be explained is provided by the result that is to be explained.13 We assume certain functions, like energy production or consumption, and work out biochemical details based on them, but we couldn’t build anything like a homeostatic, self-replicating living creature if our lives depended on it because we only understand superficial aspects. Biology is thus building on an unspoken foundation of some heretofore ineffable consequence of natural selection which I have now called out as biological function or information. Darwin gave biologists a license to identify function on the grounds that it is “adaptive”, and they have been doing that ever since, though not overtly as a new kind of existence but covertly as “phenomena” to be explained, presumably with physical laws. 
I am saying that these phenomena are functional, not physical, and so their explanations must be based on functional principles, not physical ones.

But what of teleology? Do hearts pump blood because it is their purpose or final cause? We can certainly explain how hearts work using purposeful language, but that is just an artifact of our description. Evolved functionality gets there by inductive trial and error, while purpose must “put forth” a reason or goal to be attained. Evolution never looks forward because induction doesn’t work that way, so we can’t correctly use the word purpose or teleology to describe information created by inductive means. But we can use the word functional, because biological information is functional by generalizing on past results even though it is not forward-looking. And we can talk about biological causes and effects, because information is used to cause general kinds of outcomes. Biological causes and effects are never certainties in the way physical laws are, because information is always generalizing to best fits. Physical effects can also be said to have causes, but we should keep in mind that the causality models behind physical laws are for our benefit and not part of nature themselves. They are deductive models that make generalizations about kinds of things which we then inductively map onto physical objects to “predict” what will happen to them, which will give us a good idea of the kind of things that will most likely happen.

With our minds, however, through the use of abstraction used with deductive models we can “look forward” in the sense that we can run simulations on general types which we know could be mapped to potential real future situations. We can label elements of these forward-looking models as goals or purposes, because bringing reality into alignment with a desired simulation is another way of saying we attain goals. So we really can say that the purpose of a table is to support things at a convenient height for people. But tables are not pulled toward this purpose; they may also serve no purpose or be used for other purposes. Aristotle claimed that an acorn’s intrinsic telos is to become a fully grown oak tree.14 Biological functions can be said to be pulled inexorably toward fulfillment by metabolic processes. The difference is actually semantic. Biological processes can be said to run continuously until death, but again, it only looks like things that have happened “before” are happening “again” when really nothing ever happens twice. Similar biological processes run continuously, but each “instance” of such a process is over in an instant, so we are accustomed to using general and not specific terminology to describe biological functions. These processes have no purpose, per se, because none was put forth, but they do behave similarly to ways that have been effective in the past for reasons that we can call causes and effects. Many of the words we use to describe causes and effects imply intent and purpose, so it is natural for us to use such language, but we should keep in mind it is only metaphorical. Tables, on the other hand, are not used continuously and have no homeostatic regulation ensuring that people keep using them, so they may or may not be used for their intended purpose. 
Designers don’t always convey intended purposes to users, and users sometimes find unintended uses which become purposes for them, and both can be influenced by inductive or deductive approaches, so it is hard to speak with certainty about the purpose of anything. But it is definitely true that we sometimes have purposes and intentionally act until we consider them to be generally fulfilled, so minds can be teleological.

Part 2: The Rise of Function

I’ve outlined what function is and how it came to be, but to understand the detailed kinds of function we see in life and the mind, we need to back up to the start and consider the selection pressures at work. Humans have taken slightly over four billion years to evolve. Of that, the last 600 or so million years (about 15%) has been as animals with minds, the last 4 million years (about 0.1%) as human-like primates, the last 10,000 or so years (about 0.00025%) as civilized, and only the last 100 or so years in a state we like to think of as technologically advanced (about 0.0000025%). The rate of change has been accelerating, and we know that our descendants will soon think of us as shockingly primitive (and some already do!). An explanation of the mind should account for what happened in each of these five periods and why recent changes have been so dramatic.

Life: 4 billion to 600 million years ago
Minds: 600 million to 4 million years ago
Humans: 4 million to 10,000 years ago
Civilization: 10,000 to 100 years ago
High Tech: 100 years ago to now
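The era percentages above are simple ratios, and a few lines of arithmetic (using the round figures quoted in the text) reproduce them:

```python
# Each era's share of the ~4 billion years since life arose,
# using the round figures from the text.
TOTAL_YEARS = 4_000_000_000

eras = {
    "Minds": 600_000_000,
    "Humans": 4_000_000,
    "Civilization": 10_000,
    "High Tech": 100,
}

shares = {name: 100 * years / TOTAL_YEARS for name, years in eras.items()}
for name, pct in shares.items():
    print(f"{name}: {pct:.7f}%")   # 15, 0.1, 0.00025, and 0.0000025 percent
```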

Life: 4 billion to 600 million years ago

While we don’t know many of the details of how life emerged, the latest theories connect a few more dots than we could before. Deep-sea hydrothermal vents1 may have provided at least these four necessary precursors for early life to arise around four billion years ago:

(a) a way for hydrogen to react directly with carbon dioxide to create organic compounds (called carbon fixation),

(b) an electrochemical gradient to power the biochemical reactions that established ATP (adenosine triphosphate) as the store of energy for biochemistry,

(c) formation of the “RNA world” within iron-sulfur bubbles, in which RNA could replicate itself and catalyze reactions,

(d) the chance enclosure of these bubbles within lipid bubbles, and the preferential selection of proteins that would increase the integrity of these outer bubbles, which eventually led to the first cells

This scenario is at least a plausible way for the precursors of life to congregate in one place and have opportunities for feedback loops to develop which could start to capture function and then ratchet it up. Many steps are missing here, and much of the early feedback probably depended more on chance than on mechanisms that actually capture and leverage it as information. Alexander Rich first proposed the concept of the RNA world in 1962 because RNA can both store information and catalyze reactions, and thus do both the tasks that DNA and proteins later specialized in. But whatever the exact order was, let’s just assume that in the first few hundred million years life arose.

(e) expansion of biochemical processes, including organized cell division, the use of proteins and DNA,

(f) the last universal common ancestor, or LUCA, about 3.5 billion years ago

Early life must have been very bad at even basic cell functions compared to modern forms, so the adaptive pressure in the early days must have mostly focused on improving the core mechanisms of metabolism, replication, and adaptation. As life became more robust, it became less dependent on the vents and was gradually able to move away from them. We know that all living cells on earth must descend from a single common ancestor that lived roughly 250 to 750 million years after life first arose, though this does not include viruses. While there are several theories of the origin of viruses, I believe all viral lines are remnants of pre-LUCA strategies that evolved before the LUCA line was firmly established.

The central mechanics of life on earth evolved during these early years. Although the central mandate of evolution is survival over time, we can roughly prioritize the set of component skills that needed to evolve to make it happen. As each of these skills improved over time, organisms that could do them better would squeeze out those that could not:

1. Metabolism is, of course, the fundamental function as life must be able to maintain itself. A source of energy was critical to this, which is why hydrothermal vents are such a likely starting point.

2. Reproduction was the next most critical function, as any kind of organism that could produce more like itself would quickly squeeze out those that could not. This is where RNA comes in. RNA is too complex to have been the first approach used to replicate functionality, but one can imagine a functional ratchet that used simpler but less effective molecules first.

3. Natural selection at the level of traits is the next most critical function needed because it would make possible the piecewise improvement of organisms. Bacteria developed a mechanism called conjugation that lets two bacterial cells connect and copy a piece of genetic material called a plasmid from one to the other. Most plasmids ensure that the recipient cell doesn’t already have a similar plasmid, which protects against counterproductive changes. There are so many bacteria that a good strategy for them is to try out everything and see what works.

4. Proactive gene creation. Directed mutation is currently a controversial theory, but I think it will turn out that nearly all genetic change is pretty carefully coordinated and that the mechanisms that make it possible evolved in these early years. I am talking about ways a cell can assemble new genes by combining snippets of DNA called transposable elements that are then put back into chromosomes where their functional effects could be inherited by daughter cells. If these changes are done in germ cells they will affect all future generations. If organisms were able to evolve ways to do this in the early years, they could have easily outcompeted other organisms. I think much of the genetic arms race in the beginning focused on better ways to direct change, not because the result of such tinkering was known in advance but because organisms that tinkered with their own DNA when under stress survived better in the long run. Such directed mutation capacities probably started out by directly impacting the next generation so that they could be selected for right away, but over time were refined into strategies that could take many generations to produce new genes or even be held in reserve indefinitely until environmental stress indicated that change was needed.2

The next big step was:

(g) the arrival of eukaryotes

Eukaryotes are now widely thought to have arisen by symbiogenesis, which is the absorption of certain single-cell creatures by others that resulted in one living inside the other permanently. Two organelles common to all eukaryotes have double membranes, which would be expected to occur if one membrane originated from the cell membrane of the endosymbiont while the other originated in the host vesicle which enclosed it.3 The first is the cell nucleus and the other is the mitochondrion. Algae and plants also contain plastids, which likewise have double membranes. Mitochondria and plastids reproduce with their own DNA, while cell nuclei seem to have become the repository for the host DNA. While eukaryotes need these organelles, their key evolutionary enhancement was sexual reproduction, which combines genes from two parents to create a new combination of genes in every offspring. Sexual reproduction is a nearly universal feature of eukaryotic organisms4 and the basic mechanisms are believed to have been fully established in the last eukaryotic common ancestor (LECA) about 2.2 billion years ago. In the short term, sex has a high cost but few benefits. However, in the long term it provides enough advantages that eukaryotes almost always use it. Asexual reproduction, which is used by prokaryotes (non-eukaryotes, including the bacteria and archaea) and by somatic (non-sex) cells of eukaryotes, is done by a cell division process called mitosis. During mitosis, a double strand of DNA is separated and each single strand is then used as a template to create two new double strands. When the cell divides into two, each daughter cell ends up with one set of DNA. Sexual reproduction uses a modified cell division process called meiosis and a cell fusion process called fertilization. Cells that undergo meiosis contain a complete set of genes from each of two parents. 
They first replicate the DNA, making four sets of DNA in all, and then randomly shuffle genes between parent strands in a process called crossing over. The cell then divides twice to make four gametes, each with a unique combination of parental genes. Gametes from different parents then fuse during fertilization to create a new organism with a complete set of genes from each of its two parents, where each set is a random mixture from each parent’s parents.
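The shuffle just described can be sketched as a toy model. The gene labels and the single-cut crossover are simplifications for illustration, not real genetics:

```python
import random

random.seed(7)

# Toy model of meiotic shuffling: a cell carries one gene set from each
# parent (hypothetical labels m1..m6 and p1..p6). After replication,
# crossing over swaps genes between parental strands at a random cut
# point, and two divisions yield four gametes, each a parental mix.
maternal = ["m1", "m2", "m3", "m4", "m5", "m6"]
paternal = ["p1", "p2", "p3", "p4", "p5", "p6"]

def cross_over(a, b):
    """Swap everything past a random cut point between two strands."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

# Replication produces two copies of each strand; each pair recombines,
# and the two cell divisions distribute the four strands into gametes.
g1, g2 = cross_over(maternal, paternal)
g3, g4 = cross_over(maternal, paternal)
gametes = [g1, g2, g3, g4]
```

Each gamete still carries exactly one gene per position, but the combination of maternal and paternal alleles differs from strand to strand, which is what lets selection act on traits independently.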

Sexual reproduction is clearly a much more complex and seemingly unlikely process compared to asexual reproduction, but I will show why sex is probably a necessary development in the functional ratchet of life. The underlying reason for sex is that it facilitates points 3 and 4 above, namely natural selection at the level of traits and proactive gene creation. Because mechanisms evolved to do both 3 and 4 well, prokaryotes evolved in just two billion years instead of two trillion or quadrillion. Of course, I can only guess about time frames this large, but in my estimation evolution would have made almost no progress at all without refining these two mechanisms, so any organisms that could improve on them would have a huge advantage over those that did them less well. We know that conjugation is not the only mechanism prokaryotes use to transfer genetic material between cells; all such mechanisms outside of sexual reproduction are called horizontal gene transfer (HGT), and also include transformation and transduction, the latter of which is the incorporation of DNA from viruses. Any mechanism that can share genetic information at the gene or function level with other organisms creates opportunities for new combinations of genes to compete, which makes it possible for individually advantageous functions to spread preferentially to less capable organisms. HGT has been sufficient for the evolution of two large groups of single-celled organisms, bacteria and archaea, and so is no doubt deployed in many strategic ways we can still only guess at, but the outcome is fundamentally pretty haphazard, which makes it inadequate to support multicellular life. On the one hand, it allows many new genetic combinations to be tried at a fairly low cost since the number of single-cell organisms is very high. But on the other hand, it lacks many mathematical advantages that sex brings to the table. 
I will assume “that the protoeukaryote → LECA era featured numerous sexual experiments, most of which failed but some of which were incorporated, integrated, and modified,”5 and that consequently a great many intermediate forms that arose before LECA are no longer extant to give us insight into the incremental stages of evolution.6

What benefits does sex provide that led to its evolution? John Maynard Smith famously pointed out that in a male-female sexual population, a mutation causing asexual reproduction (i.e. parthenogenesis, which does sometimes arise naturally, allowing females to reproduce as clones without males) should rapidly spread, because asexual reproduction has a “twofold” advantage: none of the brood is “wasted” on males. It is true that when resources allow unlimited growth, asexual reproduction can thus spread faster, but this rarely happens. Usually, populations are constrained by resources to a roughly stable size, and in these situations achieving the fastest reproduction cycle is not the critical factor in long-term success; it is actually rather irrelevant. In any case, eukaryotic populations probably could and would have evolved ways to switch between sexual and asexual reproduction depending on which is more beneficial at the time, and very few ever opt for asexual reproduction. This strongly suggests that sexual reproduction nearly always confers more advantages than asexual reproduction. We are aware of a number of such advantages, but I think the critical ones are better solutions to my points 3 and 4 above. Sexual reproduction is set up to create an almost unlimited number of genomes with different combinations of genes, while all asexual reproduction can do is accumulate genes (although prokaryotic genomes stay pretty small, so they must also have ways of knocking genes out). And sex pits each trait against its direct competitors so that natural selection can operate on each independently. Beneficial traits can spread through a population, “surgically” knocking out less effective alleles, something asexual reproduction can’t do. 
Sex gives a species vastly more capacity to adapt to changing environments because variants of every gene can remain in the gene pool waiting to spread when conditions make them more desirable.7 Asexual creatures can’t keep genes around for long that aren’t useful right now, because they can’t generate new combinations. Horizontal gene transfer is apparently sufficient to allow prokaryotes to adapt, but obligate parthenogenesis in multicellular species leaves them with essentially no prospects for further adaptation and so represents a dead end. This includes about 80 species of unisex reptiles, amphibians, and fishes. All major vertebrate groups except mammals have species that can sometimes reproduce parthenogenetically.8 We can conclude that Maynard Smith was right that asexual reproduction provides a “quick win”, but because it is a poor long-term strategy its use is limited in multicellular life. Overall, I would estimate that eukaryotes are roughly 10 to 100 times “better” at evolution than prokaryotes, mostly because of sex, but their improved technologies really start to shine in multicellular organisms, because their ability to pinpoint the focus of natural selection allows complex organisms to arise.
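Maynard Smith’s arithmetic is easy to reproduce. Under the illustrative assumptions that every female leaves the same number of offspring and the population is capped, the fraction of asexual females grows quickly because none of their brood is “wasted” on males:

```python
# Sketch of Maynard Smith's "twofold cost of males" with illustrative
# numbers: a sexual female's brood is half males, while a parthenogenetic
# female's brood is all daughters. Track the fraction of asexual females
# in a resource-capped population where brood sizes are equal.
p = 0.01  # initial fraction of asexual females
for generation in range(10):
    sexual_daughters = (1 - p) * 0.5   # half of each sexual brood is male
    asexual_daughters = p * 1.0        # clones are all daughters
    p = asexual_daughters / (asexual_daughters + sexual_daughters)

print(f"asexual fraction after 10 generations: {p:.3f}")
```

Starting from one percent, the asexual mutation takes over most of the population within ten generations, which is the “quick win” the text describes; the argument above is that it nevertheless loses over longer horizons.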

(h) complex multicellularity, meaning organisms with specialized cell types.

Multicellular life has arisen independently dozens of times, starting about 1 billion years ago, and even some prokaryotes have achieved it, but only six lineages independently achieved complex multicellularity: animals, two kinds of fungi, green algae (including land plants), red algae, and brown algae. The relatively new science of evo-devo (evolutionary developmental biology) focuses largely on cell differentiation in complex multicellular (eukaryotic) organisms. Simplistically, the cells of the body achieve such dramatically different forms by first dividing and then turning on regulatory genes that usually then stay on permanently. Regulatory genes don't code for structural proteins, but they do determine what other regulatory genes will do and ultimately what proteins will be transcribed. Consequently, as an embryo grows, each area can become specialized to perform specific tasks based on what proteins its cells produce. The most dramatic power of this process is the ease with which radial or bilateral symmetry can be triggered. Most animals (the bilateria) have near-perfect bilateral symmetry because the same regulatory strategy is deployed on each side, which means that so long as growth conditions are maintained equally on both sides, a perfect (but mirror-image) "clone" will form on each side. Evo-devo has revealed that the eyes of insects, vertebrates, and cephalopods (and probably all bilateral animals) evolved from a common ancestor, contrary to earlier theory. Homeoboxes are parts of regulatory genes, shared widely across eukaryotic species, that regulate what organs develop where. As evo-devo uncovers the functions of regulatory genes, the new science of genomics is working out the specific evolutionary origins of individual genes. Knowing when a gene started to be used and roughly what it does will go a long way toward building a comprehensive understanding of development.

Before I move on, I should note that all complex multicellular eukaryotes live symbiotically with countless single-cell bacteria, archaea, fungi, and protists, and also with viruses, which don’t have cell membranes at all. Evolution has built on its successes in surprisingly deep ways which we are only beginning to appreciate through scientific models.

Minds: 600 million to 4 million years ago

Animals form my second major chapter in the history of life after the eukaryotes established the dominant evolutionary mechanisms of complex life. Plants and fungi have evolved some highly specialized features of their own, but animals are unique among multicellular life forms in having to meet the challenges of mobility. Being able to move through the environment introduces many more opportunities and risks than sessile forms face. While sessile lifeforms can largely just sit tight and focus on molecular-level environmental interactions, animals must continuously decide what to do next. By "decide," I only mean that they must commit their bodies to being in one place at a time and must coordinate their extremities appropriately as well. Bodies and limbs are, of course, much larger structures than cells, so they shift the whole scope of the control problem to a much higher level. Top-level animal control mechanisms must address the survival needs of the animal from the top down. To do this, animals developed brains, which collect information about the body and the environment from the bottom up but then select actions for the body to take based on top-down considerations, because they must decide, first and foremost, where they are going to be, which then has logical implications for all their other activities. So mobility creates, more than for any other kind of organism, a need to solve logic problems. Logically, a body necessarily acts as an agent in the world whose activities must be subdivided into discrete functions if the animal is to be able to prioritize them so that they can be completed despite competing for the same resources (most notably, the one body). No matter the size of the brain or the animal, it must have a means of distinguishing its possible activities by function or it will oscillate between functions ineffectively like Dr. Doolittle's pushmi-pullyu.
This in no way mandates conscious awareness of such logical distinctions; I will address consciousness separately later. Also, before we get into reasoning out how the brain works, let’s look closer at what kinds of animals evolved.

The last common ancestor of all animals is called the urmetazoan and is thought to have been a flagellate marine creature. The urmetazoan is important because, like the LUCA and LECA before it, an unknown but perhaps significant amount of animal evolution went into making it, and an unknown but perhaps significant number of competing multicellular mobile forms were squeezed out by the metazoans (aka animals). Although we now see only what got through this bottleneck, animals have differentiated into many branches with a wide variety of forms, so I will climb up through the animal family tree.

Sponges are the most primitive animals from a control standpoint, having no neurons or indeed any organs or specialized cells. But they have animal-like immune systems and some capacity for movement in distress.1 Cnidarians (like jellyfish, anemones, and corals) come next and feature diffuse nervous systems, with nerve cells distributed throughout the body and no central brain, often organized into a nerve net that coordinates movements of a radially symmetric body. Although jellyfish move with more energy efficiency than any other animal, a radial body design provides limited movement options, which may explain why all higher animals are bilateral (though some, like sea stars and sea urchins, have bilateral larvae but radial adults). Nearly all creatures that seem noticeably "animal-like" to us do so because of their bilateral design, which features forward eyes and a mouth. This group is so important that we have to think about the features of the urbilaterian, the first bilateral animal, which lived about 570-600 million years ago. As I mentioned above, we now have evidence that the urbilaterian did have eyes. While the exact order in which the features of animals first appeared is still unknown, a functional principle that developed in many bilateral animals was a centralized control center that can make high-level decisions leveraging a variety of sensory information.

A few exceptions to centralized control exist among the invertebrates, most notably the octopus (a mollusk), which has a brain for each arm and a central brain that loosely administers them. Having independent eight-way control of its arms comes in handy for an octopus because the arms can often usefully pursue independent tasks. Octopus arms are vastly more capable than those of any other animals, and they use them in amazingly coordinated ways, including to “bounce-walk” across the sea floor and to jump out of the water’s edge to capture crabs.

Why, then, don't animals all have separate brains for each limb and organ? The way function evolves is always a compromise between logical need and physical mechanism. To some degree, historical accident has undoubtedly shaped and constrained evolution, but, on the other hand, where logical needs exist, nature often finds a way, which sometimes results in convergent evolution of the same trait through completely different mechanisms. In the case of control, it seems likely that it was physically feasible for animals to either localize or centralize control according to which strategy was more effective. An example of decentralized control in the human body is the enteric nervous system, or "gut brain," which lines the gut with more than 100 million nerve cells. This is about 0.1% of the 100 billion or so neurons in the human brain. Its main role is controlling digestion, which is largely an internal affair that doesn't require overall control from the brain.2 However, the brain and gut brain do communicate in both directions, and the gut brain has "advice" for the brain in the form of gut feelings. Much of the information sent from the gut to the brain is now thought to arise from our microbiota. The microbes in our gut can weigh several pounds and comprise hundreds of times more genes than our own genome. So gut feelings are probably a show of "no digestion without representation" that works to both parties' benefit.34 The key point in terms of distributed control is that if the gut has information relevant to the control of the whole animal, it needs to convey that information in a form that can impact top-level control, and it does this through feelings and not thoughts.

So let's consider how control of the body is accomplished in the other two families of highly mobile, complex animals, namely the arthropods and vertebrates. The control system of these animals is most broadly called the neuroendocrine system, as the nervous and endocrine systems are complementary control systems that work together. The endocrine system sends chemical messages using hormones traveling in the blood, while the nervous system sends electrochemical messages through axons, which are long, slender projections of nerve cells (aka neurons), and then between neurons through specialized connections called synapses. Endocrine signals are generally slower to begin than nervous signals and tend to last longer. Both arthropods and vertebrates have endocrine glands in the brain and about the body, including the ovaries and testes. Hormones regulate both physiology and behavior across bodily functions like digestion, metabolism, respiration, tissue function, sensory perception, sleep, excretion, lactation, stress, growth and development, movement, and reproduction. Hormones also affect our conscious mood, which encompasses a range of slowly changing subjective states that can influence our behavior.

While the endocrine system focuses on control of specific functions, the nervous system provides overall control of the body, which includes communication to and from the endocrine system. In addition to the enteric nervous system (gut brain) discussed before, the body has two other peripheral systems called the somatic and autonomic nervous systems that control movement and regulation of the body somewhat independently from the brain. The central nervous system (CNS) comprises the spinal cord and the brain itself. Nerve cells divide into sensory or afferent neurons that send information from the body to the CNS, motor or efferent neurons that send information from the brain to the body, and interneurons which comprise the brain itself.

The functional capabilities of brains have developed quite differently in arthropods and vertebrates, but, for my purposes, even more significantly in certain kinds of vertebrates. Moving through the vertebrates on the way to Homo sapiens, first

fish branch off, then
amphibians, and then
amniotes, which enclose embryos with an amniotic sac that provides a protective environment and makes it possible to lay eggs on land. Amniotes divide into
reptiles, from which derive
birds (by way of dinosaurs), and
mammals. And mammals then divide into
monotremes (like the duck-billed platypus), then
marsupials, and then
placentals, which gestate their young internally to a relatively late stage of development. There are eighteen orders of placentals, one of which is
primates, to which
humans belong.

Evolution did not reach its apotheosis in Homo sapiens, even though it seems that way to us. We know it is technically incorrect to say some species are "more evolved" than others because all species have had the same amount of time to evolve. However, brain power unquestionably tends to increase as one moves through the above branches toward humans, and new brain structures that help account for that increase in power appear along the way. Why would this happen if evolution is not directed? It is because evolution is directed: not toward more complexity but toward greater functionality. In animals, that functionality is most critically driven by the power of the top-level control center, which is the brain. When one reviews an evolutionary tree from the perspective of the organisms with the greatest range of functionality, humans in this case, one will usually see members of earlier branches that, despite having had just as much time to evolve in new directions, seem functionally unchanged in the fossil record since the time of the branching. This is because the ecological niches they filled then often still need to be filled, and they are still the best equipped to fill them. Fish are a prime example; all the other lines above moved to land (and a few, like cetaceans, later moved back to water). There are no doubt varieties of fish that have evolved some impressive new functions since that branching, but no fish became as smart as mammals. This is mostly because evolution is inclined to keep things the same so long as they are working, so relatively unchanging aquatic environments have led fish to keep their basic body plan, lifestyle, and brain.

But it is also worth noting that fish are cold-blooded. Warm-blooded animals need much more food but can engage in a more active lifestyle. This gives them some advantages over their less active peers, but not enough to unseat the dinosaurs during the Mesozoic era from 250 to 66 million years ago. An ingenious hypothesis from Arturo Casadevall is that warm-bloodedness evolved as a near-perfect protection against fungal diseases, which plague cold-blooded animals. The protection this offers may not have been the driving reason, but it definitely helps. Casadevall further speculated that an asteroid strike 66 million years ago amplified this advantage: “Deforestation and proliferation of fungal spores at cretaceous-tertiary boundary suggests that fungal diseases could have contributed to the demise of dinosaurs and the flourishing of mammalian species.”56 Still, most evidence suggests dinosaurs were themselves warm-blooded or close to it78, so this may not have been the advantage that led to the Age of Mammals (the Cenozoic).

My point here is that although physical differences can be suggestive of function, they are not conclusive. All the changes that distinguish fish brains from ours don't prove that fish don't have deep thoughts just like us. However, evolutionary theory, physiological evidence, and behavioral studies all consistently and strongly suggest that brain complexity is necessary, if not sufficient, for more complex brain functions. Arthropods do engage in specific functionally distinct tasks, as I said they logically must as agents, but the way they perform them appears to be entirely instinctive, neither taught to them nor learned from experience. That said, this is not entirely black and white: bees, for example, learn much from their environment and can adapt their behavior accordingly.9 But one part of the brain in particular, the vertebrate cerebrum, is thought to be the principal control center of more complex behavior. Let's consider what some measures of brain and cerebral differences among the animals suggest.

  • By total number of neurons, humans rank near the top at about 86 billion, though elephants (about 250 billion) and whales likely have several times more as well.10
  • By total number of cerebral cortex neurons, humans have the most (about 16 billion) except for some whales, which have the same number or more; but, then, whales are much larger animals. Elephants have about 6 billion, a count exceeded only by primates and sea mammals.11
  • By brain-to-body mass ratio, ants lead at about 1:7, then shrews at 1:10, then small birds at 1:12, then mice and humans at 1:40, with other larger birds and mammals having smaller ratios.12
  • By Encephalization Quotient (EQ). Because some brain control functions seem to be independent of body size, larger animals need comparatively smaller brains relative to their bodies. The EQ attempts to correct for size, and it also gives primates a numerical advantage (which arguably takes their larger cerebrums into account). By this measure, humans score 66% higher than dolphins and three times higher than chimps and ravens, with all other animals further down the list.1314
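
As a concrete illustration of the EQ measure, one common formulation (due to Jerison) takes the expected brain mass of a mammal with body mass P grams to be roughly 0.12 × P^(2/3) and divides the actual brain mass by that expectation. The masses below are rough textbook figures, used only to show the shape of the calculation:

```python
# Sketch of Jerison's Encephalization Quotient: actual brain mass divided
# by the brain mass "expected" for a mammal of that body mass.
# Illustrative round-number masses in grams, not precise measurements.

def eq(brain_g: float, body_g: float) -> float:
    expected_brain_g = 0.12 * body_g ** (2 / 3)
    return brain_g / expected_brain_g

print(f"human:   {eq(1350, 65_000):.1f}")   # about 7
print(f"dolphin: {eq(1600, 160_000):.1f}")  # about 4.5
print(f"chimp:   {eq(400, 45_000):.1f}")    # about 2.6
```

Note how the ratios between these rough values roughly match the comparisons in the list above.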

The above suggests that larger brains with more cortical neurons are generally more capable, but of course gross sizes are just the tip of the iceberg of genetic differences between species, so this tells us nothing specific.

It turns out that the mind as we think of it arose because of the logical problems the brain faces. Animals (and all organisms) have the long-term function of propagating their gene line. This is often called the "selfish gene" perspective, but it is really a cooperative and ratcheting effort by the functions in the gene line to preserve themselves and hence to survive. While individuals may have their own (possibly selfish) reasons for behaving as they do, a subject I will broach later, we can definitely say that evolution statistically protects the gene line. The life cycle of animals requires that they eat, mate, and maintain their bodies in other appropriate ways, including breathing, sleeping, cleaning, etc. These activities usually depend on a number of subsidiary activities which also need to be done appropriately. For example, animals must search for food and must sometimes also take steps to prepare it for consumption. Mating does not happen often and so usually has a variety of formalized subtasks to ensure the suitability of a mate.

All of these sorts of activities must be performed to completion to deliver functionality, and the steps to complete them need to plan their use of resources, most importantly what the body will be doing. Since this is fundamentally a logic problem, the brain needs to treat these activities, and the subtasks they break down into, as logical units or references, and it needs ways of manipulating them logically. This doesn't mean the logic needs to be overt or general-purpose the way we think of deductive logic. It can also be managed by fixed instinctive approaches or by inductive trial-and-error learning. But however it is done, on some appropriate level the activities must be mapped to logical references. I'm not saying how the animal should do it, just that the problems exist and must be solved.

Living organisms are homeostatic, meaning they must maintain their internal conditions in a working state at all times. Animals therefore had to evolve a homeostatic control system, one able to adjust its supervision on a continuous basis. But it still needs to fulfill tasks smoothly and efficiently, not in a herky-jerky panic. Karl Friston was the first to characterize these requirements of a homeostatic control system through his free energy principle, which could more accurately be called the variational bounding principle, as it does not really relate to energy or anything physical at all.15 This principle says that a homeostatic control system must minimize its surprise, meaning that it should proceed calmly with its ongoing actions so long as all incoming information falls within the expected range. Any unexpected information is a surprise, which should be bumped up in priority and dealt with until it can be brought back into an expected range itself. To follow this principle, the control system must know what to expect, but, more than that, it must act so as to minimize the chance that those inputs will go outside the expected range. Animals must follow this principle simply because it is maladaptive not to. Unlike human machines, which are not homeostatic or homeostatically controlled, animals must have a holistic reaction strategy that can deal with control issues fractally, that is, at potentially any level of concern.
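
As a caricature of this expected-range idea (my own toy code, not Friston's formalism), a controller can ignore every channel whose reading falls inside its tolerated band and prioritize the rest by how far out of band they are:

```python
# Toy expected-range monitor: readings inside their band are ignored;
# readings outside it are "surprises," returned worst-deviation first.

def surprises(readings: dict, expected: dict) -> list:
    """List channels outside their (lo, hi) range, largest deviation first."""
    out_of_range = []
    for channel, value in readings.items():
        lo, hi = expected[channel]
        if not lo <= value <= hi:
            deviation = max(lo - value, value - hi)
            out_of_range.append((deviation, channel))
    return [channel for _, channel in sorted(out_of_range, reverse=True)]

bands = {"temperature": (36.0, 38.0), "glucose": (70.0, 140.0)}
print(surprises({"temperature": 39.5, "glucose": 100.0}, bands))
# -> ['temperature']  (glucose is in range, so it stays ignored)
```

The point of the sketch is only that nothing mystical is needed to "proceed calmly" until something falls out of range: in-band inputs generate no work at all.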

Simple animals have simple expectations. Even a single-cell creature, like yeast, can sense its environment in some ways and respond to it. Simple creatures evolve a very limited range of expectations and fixed responses to them, but animals developed a broader range of senses, which made it possible to develop a broader range of expectations and responses. In a control arms race, animals have ratcheted up their capacity to respond to an increasing array of sensory information to develop ever more functional responses. But it all starts with the idea of real-time information, which is, of course, the specialty of sensory neurons. These neurons bring signals to the brain, but what the brain needs to know about each sense has to be converted logically into an expected range. Information within the range is irrelevant and can be ignored. Information outside the range requires a response. This requirement to translate the knowledge into a form usable for top-level control created the mind as we know it.

From a logical standpoint, here is what the brain does. First, it monitors its internal and external environment using a large number of sensory neurons, which are bundled into specific functional channels. The brain reprocesses each channel using a logical transformation and feeds it to a subprocess called the mind, which maintains an "awareness" state over each channel while ignoring it. The channels are kept open because a secondary process in the brain, an "attention" process, evaluates each one to see whether it falls outside its expected range. When a channel does, the attention process focuses on it, moving the mind subprocess from an aware (but ignoring) state to a focused (attentive) state. The purpose of the mind subprocess is to collect incoming information, converted into a logical form relevant to the tasks at hand, so that it can prioritize and act to minimize future surprise. Of course, its more complex reactions complete necessary functions, and that is its "objective" if we view the problem deductively, but the brain doesn't have to operate deductively or understand that it has objectives. All it needs to be able to do is convert sensory information into expected ranges and have ways of keeping them there.
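
The awareness/attention split just described can be sketched as follows (class and state names are mine, purely illustrative): every channel is continuously monitored in an "aware" state, and an attention pass promotes any channel that leaves its expected range to a "focused" state:

```python
# Minimal sketch of a mind subprocess: all channels stay "aware"
# (monitored but ignored); out-of-range channels become "focused".

class MindSketch:
    def __init__(self, expected):
        self.expected = expected                     # channel -> (lo, hi)
        self.state = {c: "aware" for c in expected}  # start by ignoring all

    def attend(self, readings):
        """Attention pass: refocus on any channel outside its range."""
        for channel, value in readings.items():
            lo, hi = self.expected[channel]
            in_range = lo <= value <= hi
            self.state[channel] = "aware" if in_range else "focused"
        return [c for c, s in self.state.items() if s == "focused"]

mind = MindSketch({"sound_dB": (0, 60), "light_lux": (0, 500)})
print(mind.attend({"sound_dB": 85, "light_lux": 300}))  # -> ['sound_dB']
```

When the loud sound subsides, the next pass returns the channel to its aware (but ignoring) state, which is the code-level analog of calmly resuming what one was doing.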

The relatively simple brains of animals like insects use entirely instinctive strategies to make this happen. But you can still tell from observing them that, from a logical standpoint, they are operating with both awareness and attention. This alone doesn't make their mind subprocess comparable to ours in any way we can intuitively identify with, but it does mean that they have a mind subprocess. They are very capable at shifting their focus when inputs fall outside expected ranges, and they then select new behaviors to deal with the situation. Do they "go back" to what they were doing once a minor problem has been stabilized? The free energy principle doesn't answer questions like that directly, but it does indirectly: once a crisis has been averted, the next most useful thing the animal can do to avoid a big surprise in its future is to return to what it was doing before. But for very simple animals it may be sufficiently competitive to just continually evaluate current conditions to decide what to do next rather than to devise longer-term plans. After all, current conditions can include desires for food or sex, which can then be prioritized to devise a new plan on a continuous basis. Insects have very complex instinctive strategies for getting food which often depend on monitoring and remembering environmental features. So even though their mind subprocess is simple compared to ours, it must be capable of bringing things to attention, considering remembered data, and consulting known strategies to prioritize its actions and choose an effective logical sequence of steps to take.

People usually consider the ability to feel pain the most significant hallmark of consciousness. Insects appear to lack nociceptors, the sensory neurons that transmit pain to the brain, so they presumably don't suffer when their legs are pulled off. It is just not sufficiently helpful or functional for insects to feel pain. More complex animals make a larger investment in each individual and need them to be able to recover from injuries, and pain provides its own signal, which is interpreted within an expected range to let the mind subprocess know whether it should ignore or act. Every sensory nerve (a bundle of sensory neurons) creates its own discrete and simultaneous channel of awareness in the mind. If you have followed my argument so far, you can see that what we think of as our first-person awareness or experience of the world is just the mind subprocess doing its job. Minds don't have to be aware of themselves, or have comprehension or metacognition, to feel things. Feelings are, at their lowest level, just the way these nerve channels are processed for the mind subprocess. Feelings in this sense of being experienced sensations are called qualia. We distinguish red from green as very different qualia, but we could never describe the difference to a person with red-green color blindness. The feelings themselves are indescribable experiences; words can only list associations we may have with them.

We don't count each sensory nerve as its own quale (pronounced kwol-ee, the singular of qualia), even though we can tell it apart from all others. Instead, the brain groups the sensory nerves functionally into a fixed number of categories, and the feeling of each quale as we experience it is exactly the same regardless of which nerve triggered it. Red looks the same to me no matter which optic nerve fiber sends it. The red we experience is a function of perception and is not part of nature itself, which deals only in wavelengths, so our experience seems like magic, as if it were supernatural. But it isn't really magic, because there is a natural explanation: outside of our conscious awareness in the mind subprocess, the brain has done some information processing and presented a data channel to the mind in the form of a quale. The most important requirement of each sensory nerve is that we can distinguish it from all others; the second most important is that we can concurrently categorize it into a functional group, its quale. The third is that we monitor each channel for being what we expect, so that unexpected signals demand our attention. These requirements of qualia must all hold from ant minds to human minds and, in an analogous way, for senses in yeast. But the detection and response range in yeast is much simpler than in ants, and that in ants is much simpler than in people. As we will see, the differences that arise are not just quantitative but also qualitative, as they bring new kinds of function to the table.
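
These requirements can be rendered as a toy model (entirely my own, with the receptor names as illustrative stand-ins for the three human cone types): each fiber keeps a distinguishing identity while mapping onto one of a fixed set of qualia, and the quale is identical no matter which fiber fired.

```python
# Toy model of qualia grouping: many distinguishable fibers, a fixed set
# of functional categories, identical "feel" per category.

QUALE_OF = {"cone_L": "red", "cone_M": "green", "cone_S": "blue"}

def perceive(fiber_id: int, receptor_type: str) -> tuple:
    """A signal keeps its source identity but is experienced as its quale."""
    return (fiber_id, QUALE_OF[receptor_type])

# Two different fibers of the same type produce the very same quale:
print(perceive(101, "cone_L"))  # -> (101, 'red')
print(perceive(202, "cone_L"))  # -> (202, 'red')
```

The design choice worth noticing is the many-to-few mapping: discriminability lives in the fiber identity, while the experienced category is a small fixed vocabulary.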

The Rise of Consciousness


The Rise of Function
The Power of Modeling and Entailment
Qualia and Thoughts
The Self
The Stream of Consciousness
The Hard Problem of Consciousness
Our Concept of Self

The Rise of Function

I've established what function is and suggested ways we can study it objectively, but before I get into doing that I would like to review how function arose on Earth. It started with life, which created a need for behavior and its regulation, which in turn created value in learning, which was followed at last by the rise of consciousness. We take information and function for granted now, but they are highly derived constructs that have continuously evolved over the past 4.28 billion years or so. We can never wholly separate them from their origins, as the acts of feedback that created them help define what they are. However, conceptual generalizations about what functions they perform can be fairly accurate for many purposes, so we don't have to be intimately familiar with everything that ever happened to understand them. This is good, because the exact sequence of events that sparked life is still unknown, along with most of the details since. The fossil record and the genomes of living things are enough to support a comprehensive overview and also give us access to many of the details.

We know that living, metabolizing organisms invariably consist of cells that envelop a customized chemical stew. We also know that all organisms have a way to replicate themselves, although viruses do it by hijacking the cells of other organisms and so cannot live independently. But all life has mutual dependencies on all other life, either very narrowly through symbiosis or more broadly by sharing resources in the same ecological niche or on the same planet. Competition between or across species is rewarded with a larger population and share of the resources. The chemical stew in each cell is maintained by a set of blueprint molecules called DNA (though the first genetic material is thought to have been RNA), which contain recipes for all the chemicals and regulatory mechanisms the cell needs to maintain itself. Specifically, genes are DNA segments that are either transcribed into proteins or regulate when protein-coding genes turn on or off. Every cell capable of replication has at least one complete set of DNA.1 DNA replicates itself as a double helix that unwinds like a zipper while simultaneously "healing" each strand to form two complete double helices. While genes are not directly functional, proteins have direct functions and even more indirect ones. Proteins can act as enzymes to catalyze chemical reactions (e.g. replicating DNA or breaking down starch), as structural components (e.g. muscle fibers), or can perform many other metabolic functions. Not all the information of life is in the DNA; some is in the cell membranes and the chemical stew. The proteins can maintain cells but can't create them from scratch. Cell membranes probably arose spontaneously as lipid bubbles, but through eons of refinement their structure has been completely transformed to serve the specific needs of each organism and tissue. The symbiosis of cells and DNA was the core physical pathway that brought function into the world.

A stream has no purpose; water just flows downhill. But a blood vessel is built specifically to deliver resources to tissues and to remove waste. This may not be the only purpose it serves, but it is definitely one of them. All genes and tissues have specific functions which we can identify, and usually one that seems primary. Additional functions can and often do arise because having multiple applications is the most convenient way for evolution to solve problems given proteins and tissue that are already there. Making high-level generalizations about the functions of genes and tissues is the best way to understand them, provided we recognize the limitations of generalizations. Furthermore, studying the form without considering the function is not very productive: physicalism must take a back seat to functionalism in areas driven by function.

Lifeforms have many specialized mechanisms to perform specific functions, many of which happen simultaneously. However, an animal that moves about can't do everything at once because it can only be in one place at a time. A mobile animal must, therefore, prioritize where it will go and what it will do. This functional requirement led to the evolution of animal brains, which collect information about the environment through senses and analyze it to develop strategies to control the body. Although top-level control happens exclusively in the brain, the nervous and endocrine systems work in tandem as a holobrain (whole brain) to control the body. Nerves transmit specialized or generalized messages electrochemically while hormones transmit specialized messages chemically. While instinctive behavior has evolved for as many fixed functions as has been feasible, circumstances change all the time, and nearly all lifeforms consequently have some capacity to learn, whether they have brains or not. Learning was recently demonstrated quite conclusively in plants by Monica Gagliano.2 While non-neural learning mechanisms are not yet understood, it seems safe to say that both plants and animals habituate behaviors using low-level and high-level mechanisms, because the value of habituation is so great. While we also can't claim full knowledge about how neural learning works, we know that it stores information dynamically for later use.

My particular focus here, though, is not on every way brains learn (using neurons or possibly hormonal or other chemical means), but on how they learn and apply knowledge using minds. “Mind” is something of an ambiguous word: does it mean the conscious mind, the subconscious mind, or both? English doesn’t have distinct words for each, but the word “mind” mostly refers to the conscious mind with the understanding that the subconscious mind is an important silent partner. When further clarity is needed, I will say “whole mind” to refer to both and “conscious mind” or “subconscious mind” to refer to each separately. Consciousness is always relevant for any sense of the word “mind” and never of particular relevance when using the word “brain” (except when used very informally, which I won’t do). Freud distinguished the unconscious mind as a deeper or repressed part of the subconscious mind, but I won’t make that distinction here as we don’t know enough about the subconscious to subdivide it into parts. While subconscious capabilities are critical, we mostly associate the mind with conscious capabilities, namely four primary kinds: awareness, attention, feelings, and thoughts. Under feelings, I include sensations and emotions. Thoughts include ideas and beliefs, which we form using many common sense thinking skills we take for granted, like intuition, language, and spatial thinking. Feelings all have a special experiential quality independent of our thoughts about them, while thoughts reference other things independent of our feelings about them. Beliefs are an exception; they are thoughts because they reference other things, but we have a special feeling of commitment toward them. We have many thoughts about our feelings, but emotions and beliefs are feelings about our thoughts. I’ll discuss them in detail below after I have laid some more groundwork. 
Awareness refers to our overall grasp of current thoughts and feelings, while attention refers to our ability to focus on select thoughts and feelings. All four capabilities — awareness, attention, feelings, and thoughts — help the conscious mind control the body, which it operates through motor skills that work subconsciously. Habituation lets us delegate fairly complex behaviors to the subconscious, which can execute them with little or no conscious direction. This may make it seem like the subconscious mind acts independently because it initiates some reactions before we are consciously aware we reacted. It is more efficient and effective to leave routine matters to the subconscious as much as possible, but we can quickly override or retrain it as needed.

The Power of Modeling and Entailment

The role of consciousness is to promote effective top-level decision making in animals. While good decisions can be made without consciousness, as some computer programs demonstrate, consciousness is probably the most effective way for animals to make decisions, and in any case, that is the path evolution chose. Consciousness is best because it solves the problem of TMI: too much information. Gigabytes of raw sensory information flow into the brain every second. A top-level decision, on the other hand, commits the whole body to just one task. How can all that information be processed to yield one decision at a time? Two fundamentally different information management techniques might be used to do this, which I generically call data-driven and model-driven. Data-driven approaches essentially catalog lots of possibilities and pick the one that seems best. Model-driven approaches break situations down into more manageable pieces that follow given rules. Data-driven techniques are holistic and integrate diverse data, while model-driven techniques are atomistic and differentiate data. The subconscious principally uses data-driven methods, while the conscious mind principally uses model-driven methods, though they can leverage results from each other. The reason is that data-driven methods need parallel processing while model-driven methods require single-stream processing (at least, they require it at the top level). The conscious mind is a single-stream process while the rest of the mind, the subconscious, is free to process in parallel and most likely is entirely parallel.
This difference is not a coincidence, and I will argue more later that the sole reason consciousness exists is so that our decision making can leverage model-driven methods.3 The results of subconscious thinking like recognition, recollection, and intuition just spring into our conscious minds after the subconscious has holistically scanned its stored data to find matches for given search criteria. While we know these searches are massively parallel, the serial conscious mind has no feel for that and only sees the result. The drawback of data-driven techniques is that while they can solve any problem whose solution can be looked up, the world is open-ended and most real-world problems haven’t been posed yet, much less solved and recorded. Will a data-driven approach suffice for self-driving cars? Yes, probably, since the problem space is “small” enough that the millions of hours of experience logged by self-driving cars are enough for them to equal or exceed what humans can do with just hundreds to thousands of hours. Many other human occupations can also be largely automated by brute-force data-driven approaches, all without introducing consciousness to robots.
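The contrast between the two techniques can be sketched in code. This is a toy illustration only: the weather scenario, the catalog entries, and the rules are all invented here, not drawn from the text.

```python
# Data-driven: a catalog of previously seen situations and their outcomes.
catalog = {
    ("rain", "cold"): "stay inside",
    ("sun", "warm"): "go outside",
}

def data_driven_decide(situation):
    """Look the answer up from stored cases (here: exact match only)."""
    return catalog.get(situation, "no stored answer")

# Model-driven: decompose the situation into parts and apply
# cause-and-effect rules to the parts.
def model_driven_decide(situation):
    weather, temperature = situation          # break into simpler pieces
    if weather == "rain":                     # rule: rain entails getting wet
        return "stay inside"
    if temperature == "warm":                 # rule: warmth entails comfort
        return "go outside"
    return "go outside briefly"               # novel cases still get an answer

# A case never recorded defeats the lookup, but the rules still apply.
print(data_driven_decide(("snow", "cold")))   # no stored answer
print(model_driven_decide(("snow", "cold")))  # go outside briefly
```

The point of the sketch is the asymmetry: the catalog must be very large to be worthwhile, while a few rules generalize to situations never seen before.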

The more interesting things humans do involve consciously setting up models in our minds that simplify more complex situations. Information summarizes what things are about — phenomena describing noumena — and so is always a simplification or model of what it represents. But models are piecewise instead of holistic; they explicitly attempt to break down complex situations into simpler parts. The purpose of this dissection is that one can then consider logical relationships between the simplified parts and derive entailment (cause and effect) relationships. The power of this approach is that conclusions reached about models will also work for the more complex situations they represent. They never work perfectly, but it is uncanny how well they work most of the time. Data-driven approaches just don’t do this; they may discriminate parts but don’t contemplate entailment. Instead, they look solutions up from their repertoire, which must be very large to be worthwhile. While physical models are composed of parts and pieces, conceptual models are built out of concepts, which I will also sometimes call objects. Concepts or objects are functionally delineated things that are often spatially distinct as well. An object (as in object-oriented programming) is not the thing itself, but what we know about it. What we can know about an object is what it refers to (is about) and its salient properties, where salience is a measure of how useful a property is likely to be. Because a model is simpler than reality, the function of the concepts and properties that comprise it can be precisely defined, which can lead to certainty (or at least considerable confidence) in matters of cause and effect within the model. Put into the language of possible-worlds logic, we say that what follows from the rules of a possible world is necessarily true within that world.
Knowing that something will necessarily happen is perfect foreknowledge, and some of our models apply so reliably to the real world that we feel great confidence that many things will happen just the way we expect, even though we know that in the real world extraneous factors occasionally prevent our simple models from being perfect fits. We also use many models that are only slightly better than blind guessing (e.g. weather forecasting), but any measure of confidence better than guessing provides a huge advantage.
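The idea that a concept is not the thing itself but “what we know about it” — a referent plus salient properties — can be sketched in the object-oriented style the text alludes to. The ball, its properties, and the rolling rule below are invented purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A concept: what it refers to, plus only its salient properties."""
    refers_to: str
    properties: dict = field(default_factory=dict)

def will_roll(c: Concept) -> bool:
    # An entailment rule that holds with certainty *inside* the model:
    # round things on slopes roll. The real world may add extraneous
    # factors (mud, a wall), but within the model this is necessary.
    return (c.properties.get("shape") == "round"
            and c.properties.get("on_slope", False))

ball = Concept("the ball in the yard", {"shape": "round", "on_slope": True})
brick = Concept("a brick", {"shape": "square", "on_slope": True})
print(will_roll(ball))   # True: necessarily so, within the model
print(will_roll(brick))  # False
```

The dataclass holds far less than the real ball does, and that is the point: the simplification is what makes the cause-and-effect rule cheap to apply and reliable within its scope.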

Though knowledge is imperfect, we must learn so we can act confidently in the world. Our two primary motivations to learn are consequences and curiosity. Consequences inspire learning through positive and negative feedback. Direct positive consequences provide an immediate reward for using a skill correctly. Direct negative consequences, aka the school of hard knocks, let us know what it feels like to do something wrong. Indirect positive or negative consequences such as praise, punishment, candy, grades, etc., guide us when direct feedback is lacking. The carrot-and-stick effect of direct and indirect consequences pulls us along, but we mostly need to push. We can’t afford to wait and see what lessons the world has for us; we need to explore and figure things out for ourselves, and for this we have curiosity. Curiosity is an innate, subconscious motivating force that gives us a rewarding feeling for acquiring knowledge about useless things. Ok, that’s a joke; we are most curious about things we think will be helpful, but we do often find ourselves fascinated by minor details. But curiosity drives us to pursue mastery of skills assiduously. Since the dawn of civilization people have needed a wide and ever-changing array of specialized skills to survive. We don’t really know what knowledge will be valuable until we use it, so our fascination with learning for its own sake is critical to our survival. We do try to guess what kind of knowledge will benefit people generally and we try to teach it to them at home and in school, but we are just guessing. Parenting never had a curriculum and only emerged as a verb in the 1960s. Formal education traditionally stuck to history, language, and math, presumably because they are incontrovertibly useful. But picking safe subjects and formally instructing them is not the same as devising a good education.
The Montessori method addresses a number of the shortcomings of traditional early education, largely by putting more emphasis on curiosity than consequences. In any case, evolution will favor those with a stronger drive to explore and learn over the less curious, up to the point where it distracts them from goals more directly necessary for survival. Curiosity is thus balanced against other drives but becomes increasingly helpful in species more capable of applying esoteric knowledge, and it was a key component of the positive feedback loop that drove up human intelligence. Because it is so fundamental to survival, curiosity is both a drive and an emotion; I’ll clarify the distinction a bit further down.

To summarize, consciousness operates as a discrete top-level decision-making process in the brain by applying model-driven methods while simultaneously considering data-driven subconscious inputs. We compartmentalize the world into concepts and models in which they operate according to rules of cause and effect. Emotional rewards, including curiosity, continually motivate us to strive to improve our models to be more successful. Data-driven approaches can produce good decisions in many situations, especially where ample experience is available, but they are ultimately simplistic, “mindless” uses of pattern recognition which can’t address many novel problems well. So the brain needs the features that consciousness provides — awareness, attention, feeling, and thinking — to achieve model-based results. But now for the big question: why is consciousness “experienced”? Why do we exist as entities that believe we exist? Couldn’t the brain go about making top-level decisions effectively without bothering to create self-possessed entities (“selves”) that believe they are something special, something with independent value above and beyond the value of the body or the demands of evolution? Maybe; it is not my intention to disprove that possibility. But I do intend to prove that first-person experience serves a vital role and has a legitimate claim to existence, namely functionality, which I have elaborated on at length already but which takes on new meanings in the context of the human mind.

Qualia and Thoughts

The context the brain finds itself in naturally drives it toward experiencing the world in the first person. The brain must conduct two activities during its control of the body. First, it must directly control the inputs and outputs, i.e. receive sensory information and move about in the world. And second, it must develop plans and strategies telling it what to do. We can call these sets of information self and not-self, or self and other, or subject and object (before invoking any concept of first-personness). It is useful to keep these two sets of information quite distinct from each other and to have specialized mechanisms for processing each. My hypothesis of consciousness is that it is a subprocess within the brain that manages top-level decisions and that the (subconscious) brain creates an experience for it that only makes information relevant to top-level decisions available. In particular, it uses specialized mechanisms to create a very different experience for self-information than for not-self-information. Self-information is experienced consciously as awareness, attention, feelings, and thoughts, but by thoughts, here, I mean just experiencing the thoughts without regard to their content. These things constitute our primary sense of self. Not-self-information is the content of the thoughts, i.e. what we are thinking about. Put another way, the primary self is the information an agent grasps about itself automatically (without having to think about it), and not-self are the things it sees and acts upon as a result of thinking about them. It is the responsibility of the consciousness subprocess to make all self-information appear in our minds experientially without having to be about anything. The customized feel of this information can be contrasted with the representational “feel” of not-self-information. 
Not-self-information doesn’t “feel” like anything at all; it just tells us things about other things, representing them, describing them and referencing them. Those things themselves don’t need to really exist; they exist for us by virtue of our ability to think about them.

We know what our first-person experience feels like, but we can’t describe it objectively. Or rather, we can describe anything objectively, but not in a way that will adequately convey what that experience feels like to somebody who can’t feel it. The qualities of the custom feelings we experience are collectively called qualia. We distinguish red from green as very different qualia which we know from intimate experience, but we could never characterize in any useful way how they feel different to a person who has red-green color blindness. Each quale (pronounced kwol-ee, singular of qualia) has a special feeling created for our conscious minds by untold subconscious processing. It is very real to our conscious minds, and has the objective reality that some very specialized subconscious processing is making certain information feel a certain way for our conscious benefit. Awareness and attention themselves are fundamental sorts of qualia that conduct all other qualia. And thoughts feel present in our minds as we have them, but thoughts don’t “feel” like anything because they are general-purpose ways of establishing relationships about things (beliefs are an exception discussed below that have a feeling of “truth”). So we most commonly use the word qualia to describe feelings, which each have a very distinct customized feel that awareness, attention, and thoughts lack. In other words, all awareness, attention, and thoughts feel the same, but every kind of feeling feels different. Our qualia for feelings divide into sensory perceptions, drives, and emotions. Sensory perceptions come either from sense organs like eyes, ears, nose, tongue, and skin, or from body senses like awareness of body parts (proprioception) or hunger. Drives and emotions arise from internal (subconscious) information management mechanisms which I will describe more further down. 
Our qualia, then, are the essence of what makes our subjective experience exist as its own perspective, namely the first-person perspective. They can be described in terms of what information they impart, but not how they feel (except tautologically in terms of other feelings).

We create a third-person world from our not-self-information. This is the contents of all our thoughts about things. Where qualia result from innate subconscious data processing algorithms, thoughts develop to describe relationships about things encountered during experience. Thoughts can characterize these relationships by representing them, describing them, or referencing them. This description makes it sound like some “thing” must exist to which thoughts refer, but actually thoughts, like all information, are entities of function: information separates from noise only to the extent that it provides predictive power, aka function. It can often be useful when describing information to call out conceptual boundaries separating functional entities of representation, description or reference, but much information (especially subconscious information) is a much more abstract product of data analysis. But in any case, thoughts are data about data, patterns found in the streams of information that flow into our brains. We can consequently have thoughts about awareness, attention, feelings, and other thoughts without those thoughts being the awareness, attention, feelings, and thoughts themselves. These thoughts about our conscious processes form our secondary sense of self. We know humans have a strong secondary sense of self because we are very good at thinking, and so we suspect other animals have a much weaker secondary sense of self because they are not as good at thinking, though they do all have such a sense because all animals with brains can analyze and learn from experiential data, which includes data about consciousness.

This logical separation of self and not-self-information does not in itself imply the need for qualia, i.e. first-person experience. The reason feeling is vital to consciousness has to do with how self-information is integrated. Subconsciously, we process sensory information into qualia so we can monitor all our senses simultaneously and yet be able to tell them apart. It is important that senses work this way as the continuous, uninterrupted and distinguished flow of information from each sense helps us stay alive. But it is how we tell them apart that gives each quale its distinctive feel. Ultimately, information is a pattern that can be transmitted as a signal, and viewed this way each quale is much like another because patterns don’t have a feel in and of themselves. But each quale reaches our conscious mind through its own data channel (logically, the physical connection is still unknown) that brings not only the signal but a custom feel. What we perceive as the custom feel of each quale is really just “subconscious sugar” to help us conveniently distinguish qualia from each other. The distinction between red and green is just a convenient fiction created by our subconscious to facilitate differentiation, but they feel different because the subconscious has the power to create feelings and the conscious mind must accept the reality fed to it. We can think whatever conscious thoughts we like, but qualia are somehow made for us outside conscious control. While the principal role of qualia is to distinguish incoming information for further analysis, they can also trigger preferences, emotions, and memories. Taste and smell closely associate with craving and disgust. Color and sound have an inherent calming or alerting effect. These associations help to further differentiate qualia in a secondary way. 
To some extent, we can feel qualia not currently stimulated by the senses by remembering them, but the memory of a quale is not as vivid or convincing as it felt at first-hand, though it can seem that way when dreaming or under hypnosis.
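The “subconscious sugar” idea can be pictured at the signal level: two qualia can be identical as raw patterns, distinguished only by the labeled channel each arrives on and the dispositional associations attached to that channel. The channel names, values, and associations below are invented for illustration.

```python
# Two qualia carrying the same raw signal value. As bare patterns they are
# indistinguishable; what separates them is the channel each arrives on and
# the associations ("subconscious sugar") bundled with that channel.
channels = {
    "red":   {"signal": 0.8, "associations": ["alerting", "provocative"]},
    "green": {"signal": 0.8, "associations": ["calming", "foliage"]},
}

def same_as_patterns(a, b):
    """True when the raw signals are identical, ignoring channel identity."""
    return channels[a]["signal"] == channels[b]["signal"]

# Identical as signals, yet never confusable by the conscious consumer,
# because each is delivered pre-labeled with its own feel.
print(same_as_patterns("red", "green"))  # True
print(channels["red"]["associations"] == channels["green"]["associations"])  # False
```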

How we tell red and green apart is ineffable; we just can. We see different colors in the rainbow as a wide variety of distinctive qualities and not just as shades of (say) gray. All shades of gray share the same color quale and only vary in brightness. We are dependent on ambient lighting even to tell them apart. Not so with red and green, whose qualia feel completely different. This facility stems from how red, green, and blue cone cells in the eye separate colors into independent qualia. Beyond this, we see every combination of red, green, and blue as a distinct color, up to about ten million hues. While we interpret many of these combined hues as colors in the spectrum, three color values in combination define a plane, not a line, so we see many colors not in the rainbow. Significantly, we interpret the combination of red and blue without green as pink, and all three together as white. Brown is another. In fact, we can only distinguish hundreds to (at the very most) thousands of distinct colors along the visible band of the electromagnetic spectrum, which means that nearly all the colors we distinguish are non-spectral. Although distinguishing colors is the primary function of color vision, this doesn’t explain why colors are “pretty”, i.e. colorful. First, note that if we take consciousness as a process that is only fed simple intensity signals for red, green and blue, then it could distinguish them but they wouldn’t feel like anything. I propose, but can’t prove, that the qualia we feel for colors, which I called subconscious sugar above, result from considerable additional subconscious processing which extends a simple intensity signal into something which feels much more readily unique to the conscious mind than it would if, say, the signals appeared as a number of gauges.
While qualia are ultimately just information and not qualities, the way we consciously feel the world is entirely a product of the information our subconscious feeds us, so we shouldn’t think of our conscious perception of the world as a reflection of it, we should think of it as a complex creation of the subconscious that gives a deep informational structure to each kind of sensory input. Qualia are like built-in gauges which we don’t have to read; we can just feel the value they register using awareness alone. Since the role of consciousness is to evaluate available information to make decisions quickly and continuously, anything the subconscious mind can do to make different kinds of information distinctively appear in our awareness helps. We can distinguish many colors, but nowhere near all the light information objects give off. Our sense of a three-dimensional object feels to us like we know the noumenon of the object itself and are not just representing it in our minds. To accomplish this, we subconsciously break a scene down into a set of physically discrete objects, automatically building an approximate inventory of objects in sight. Our list of objects and features of each, like their color, form an informational picture. That picture is not the objects themselves but is just a compendium of facts we consider useful. Qualia innately convert a vast stream of information into a seamless recreation of the outside world in our head. Our first-person lives are just an efficient way to allow animals to focus their decision-making processing on highly-condensed summaries of incoming information, solving the problem of TMI (too much information).

But why does red specifically feel red as we know it, and is everyone’s experience of red the same? The specific qualities of the qualia are at the heart of what David Chalmers has famously called the hard problem of consciousness. This problem asks why we experience qualia at all, and specifically why a given quale feels the way it does. We think of qualia as being as real as anything we know since all of our reality is mediated through them. However, we must admit that they are imaginary informational constructs our brain puts together for us subconsciously and presents to our conscious mind with the mandate that we believe they are real. So, objectively, then, we realize our brains are creating these experiences for us. It is in our brain’s best interests that the information the qualia provide us be as consistent with the outside world as possible so that we will have full confidence to act. When we lose a vital sense, e.g. when we are plunged into darkness, our confidence and reactions are severely compromised. But even knowing that a quale’s feel is imaginary doesn’t explain why it has the characteristic feel that it has. To explain this, I would suggest that we recall that the mind exists to serve a function, not just to exist as physical noumena do. The nature of the qualia, then, is intimately and entirely a product of their function: they feel like what they inspire us to do. While the primary role of the feel of qualia is to let us feel many channels of information simultaneously while keeping them distinct, their exact feeling is designed to persuade us to consider them appropriately. Green is not just distinct from red, it is calming where red is provocative. Colors are pretty, yes, but my contention is that their attractiveness actually derives from their emotional undertones. Grays don’t carry this extra emotional coloring, so we feel neutral about them. There is typically no reason to be interested in gray.
From an evolutionary standpoint, the more helpful it is to consciously notice and distinguish a quale when it is perceived, the stronger the need for it to have a strong and distinctive custom feeling. Hunger, thirst, and sex drives can be very compelling. Damaging heat or cold or injuries are painful. Dangerous substances often smell or taste bad. We shy away from the unpleasant and seek the comfortable to the exact degree the custom feeling of the qualia involved inspire us. Qualia are the way the subconscious inspires the conscious mind to behave well. To develop this further, let’s consider how we perceive color.

We sense colored light using three types of cone cells in the retina. A second stage of vision processing done by retinal ganglion cells creates two signals, one indicating either yellow or blue (y/b) and the other indicating either red or green (r/g). Because these two signals can never produce both yellow and blue or both red and green, it is psychologically impossible for something to be reddish green or yellowish blue (note that mixing red and green paint may make brown, and yellow and blue may make green, but that is not the same thing as making something reddish or yellowish). These four psychological primary colors are then blended in a third stage of vision processing to make all the colors we see. The blended colors form a color wheel from blue to green, green to yellow, yellow to red, and red to blue. These follow the familiar spectral colors until one reaches red to blue, at which point they pass through the non-spectral purples, including magenta. If one blends toward white in the center of the color wheel one creates pastel colors, or one can blend toward any shade of gray or black in the center for additional colors. This gives the full range of ten million colors we can see, of which only a few hundred on the outer rim are spectral colors. Instead of thinking of color as a frequency of light, it is more accurate to think of it as a three-dimensional space using y/b, r/g and brightness dimensions that is built with measurements from the four kinds of photoreceptor proteins in the eye (the three cone photopsins and the rods’ rhodopsin). More accurately still, color goes through a fourth stage in which the colors surrounding an object are taken into consideration. The brain will automatically adjust the color actually seen to the color it would most likely take if properly illuminated under white light by reversing the effects of shadowing and colored lighting. For example, contextual interpretation can make gray look like blue in yellow context or like yellow in blue context.
The brain can’t defeat this illusion until the surrounding context is removed.4
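The opponent-process stages described above can be sketched as a toy computation. The weighting formulas below are illustrative only, not physiologically calibrated: cone responses L (long/reddish), M (medium/greenish), and S (short/bluish) are combined into an r/g signal, a y/b signal, and brightness.

```python
def opponent_channels(L, M, S):
    """Toy second-stage combination of cone responses into opponent signals."""
    rg = L - M             # positive reads reddish, negative reads greenish
    yb = (L + M) / 2 - S   # positive reads yellowish, negative reads bluish
    brightness = (L + M + S) / 3
    return rg, yb, brightness

# Because one signal carries red-vs-green and a separate signal carries
# yellow-vs-blue, no stimulus can register as reddish green or yellowish blue.
print(opponent_channels(1.0, 0.2, 0.1))  # reddish and yellowish: reads as red
print(opponent_channels(0.9, 0.9, 0.1))  # balanced r/g, yellowish: reads as yellow
```

Note how the sketch makes the text’s three-dimensional claim concrete: the color seen is a point in (r/g, y/b, brightness) space, not a single light frequency.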

One way to explore the meaning of qualia is by inverting them5. John Locke first proposed an inverted spectrum scenario in which one person’s blue was another person’s yellow and concluded that because they could still distinguish the colors as effectively and we could not see into their minds, completely “different” but equivalent ideas would result. Locke’s choice of yellow and blue was prescient, as we now know the retina sends a y/b signal to the brain which could theoretically be flipped with surgery, producing the exact effect he suggests6. Or we could design a pair of special glasses that flipped colors along the y/b axis, which would produce a very similar effect. (Let’s ignore some asymmetries in how well we discriminate colors in different parts of the color wheel.)7 Locke’s scenario presumes a condition present from birth and concludes that while an inverted person’s ideas would be different, their behavior would be the same. As a functionalist, I disagree. I would argue that whether the condition existed from birth or was the result of wearing glasses, the inverted person would see yellow the same as the normal person. This hinges entirely on what we mean by “different” and “same”; after all, no two people have remotely similar neural processes at a low level. By “same”, I mean what you would expect: if we could find a way to let one person peek into another person’s mind (and I believe such mind sharing is possible in principle and happens to some degree with some conjoined twins), then they would see colors the way that person did and would find that the experience was like their own. What I mean by this stance is that our experience of yellow and blue are not created by the eye but by the visual cortex, which interprets the signals to serve functional goals. But wait: surely if one put on the glasses, yellow would become blue and vice versa right away. Yes, that is true, but how would our minds accommodate the change over time?
Consider the upside-down vision experiment conducted by George Stratton in 1896 and again by Theodor Erismann and Ivo Kohler in 1955. Wearing glasses that inverted the perceived image from top to bottom and left to right, just as a camera flips an image, was disorienting at first, but after a week to ten days the world effectively appeared normal and one could even ride a motorcycle with no problem. The information available had been mapped to produce the same function as before. I believe an inverted y/b signal would produce the same result, with the colors returning to the way the mind used to see them after some days or weeks. Put another way, many of the associations that make yellow appear yellow are functional, so for the brain to restore functionality effectively it would subconsciously notice that the best solution is to change the way we interpret the incoming signals to realign them with their functions. For colors to function correctly, yellow needs to stand out more provocatively to our attention process than blue, and blue needs to be darker and calmer. We would remember how things used to be colored and how they felt; our brains would not be happy with the new arrangement, in which yellow things seem calm and blue things stand out, and would start to pick up on the mismatch. As our feelings toward the colors changed, our subconscious would become more inclined to map them back to the way they were. And if it were a condition we were born with, we would just wire physical yellow to psychological yellow in the first place. I don’t know if adult minds would necessarily be plastic enough for this effect to be perfect, but there is no reason to think they are any less capable of reversing a color flip than an orientation flip, though it would probably take longer as the feedback is much more subtle.
Our brains probably have the plasticity needed to make these kinds of adjustments because we do continually adjust our interpretation of sensory information, for example to different levels of brightness. Our brains are built to interpret sensory data effectively. I am not saying that yellow has no real feel to it and that we just make it up as we go; quite the opposite. I am saying that yellow and all our qualia have a substantial computational existence at a high (but subconscious) level in the cortex which is flexibly connected to the incoming sensory signals, and that this flexibility is not only lucky but necessary. Knee-jerk-type reflexes are hardwired, but it is much more practical and adaptable for many fixed subconscious skills (like sensory interpretation) to develop in the brain adaptively to fulfill a role rather than using rigid neural connections. This kind of development has the additional advantage that it can be rewired or refined later for special circumstances, for example to help the remaining senses compensate when one sense is lost (e.g. through blindness).
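The inverted-spectrum thought experiment can be sketched at the signal level. Everything here is an invented illustration: “glasses” that flip the y/b opponent signal, and a later interpretive stage that, once adapted, undoes the flip to restore the signals’ original functional roles.

```python
def invert_yb(rg, yb, brightness):
    """The glasses: flip the yellow/blue signal, leaving the rest alone."""
    return rg, -yb, brightness

def adapted_interpretation(rg, yb, brightness, flip_learned):
    # The flexible cortical stage: after adaptation it re-inverts y/b,
    # reconnecting incoming signals to their original functional roles.
    return (rg, -yb, brightness) if flip_learned else (rg, yb, brightness)

original = (0.0, 0.7, 0.5)              # a yellowish signal
through_glasses = invert_yb(*original)  # now reads bluish
readapted = adapted_interpretation(*through_glasses, flip_learned=True)
print(readapted == original)  # True: function restored by remapping
```

The design point mirrors the argument in the text: because interpretation is a separate, flexible stage rather than a hardwired consequence of the input, a flipped input can be functionally undone downstream.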


We have two kinds of qualia, sensory and dispositional. Sensory qualia bring information from outside the brain into it using sensory nerves, while dispositional qualia bring us information from inside our brain that tells us how we feel about ourselves. We experience both consciously, but both kinds of experience are created for us subconsciously. Dispositional qualia come in two forms, drives and emotions. Each drive and emotion has a complex subconscious mechanism that generates it when triggered by appropriate conditions. Drives arise without conscious involvement, while emotions depend on how we consciously interpret what is happening to us. The hunger drive is triggered by a need for energy, thirst by a need for water, sleep for rest, and sex for bonding and reproduction. Emotions are subconscious reactions to conscious assessments: a stick on the ground prompts no emotional reaction, but once we recognize it as a snake we might feel fear. When an event fails to live up to our expectations, we may feel sad, angry, or confused, but the feeling is based on our conscious assessment of the situation. We can’t choose to suppress an emotion because the subconscious mind “reads” our best conscious estimation of the truth and creates the appropriate reaction. But we can learn to control our emotional reactions better by viewing our conscious interpretations from more perspectives, which is another way of saying we can be more mature. Both drives and emotions have been shaped by evolution to steer our behavior in common situations, and are ultimately the only forces that motivate us to do anything. The subconscious mind can’t generate emotions independent of our conscious assessments because only the conscious mind understands the nuances involved, especially with interpersonal interactions.
And the rational, conscious mind needs subconsciously-generated emotional reactions because rational thinking needs to be directed towards problems worth solving, which is the feedback that emotions provide.

We have more emotions than we have qualia for emotions, which means many emotions overlap in how they feel. The analysis of facial expressions suggests there are just four basic emotions: happiness, sadness, fear, and anger.8 I disagree with that, but these are certainly four of the most significant emotional qualia. There are no doubt good evolutionary reasons why emotions share qualia, but the most basic one, it seems to me, is that qualia help motivate us to react in certain ways, and we need fewer ways to react than we need subconscious ways to interpret conscious beliefs (i.e. emotions). So satisfaction, amusement, joy, awe, admiration, adoration, and appreciation are distinct emotions, but share an uplifting happy feeling that makes us want to do more of the same. We distinguish them consciously, so if the quale for them feels about the same it doesn’t impair our ability to keep them distinct. Aggressive emotions (like happiness and anger) should spur us to participate more, while submissive emotions (like sadness and fear) should spur us to back off. We don’t need to telegraph all our emotional qualia through facial expressions; sexual desire and romance are examples that have their own distinct qualia (and sex, like curiosity, is backed by both drives and emotions). We feel safe emotions (like happiness and sadness) when we are not threatened, and unsafe emotions (like anger and fear) when we are. In other words, emotions feel “like” what action they inspire us to take. Wikipedia lists seventy or so emotions, while the Greater Good Science Center identifies twenty-seven9. But just as we can see millions of colors with three qualia, we can probably distinguish a nearly unlimited range of emotional feelings by combining four to perhaps a dozen emotional qualia which correspond to a nearly unlimited set of circumstances.
Sadness, grief, despair, sorrow, and regret principally trigger a sadness quale in different degrees, and probably also bring in some pain, anger, surprise, confusion, nostalgia, etc. Embarrassment, shyness, and shame may principally trigger awkwardness, tinged with sadness, anxiety, and fear. Similarly to sensory qualia, emotional responses recalled from memory tend not to be quite as vivid or convincing as they originally were. When we remember an emotion, we feed the circumstances to our subconscious, which evaluates how strongly we believe the situation calls for an emotional response. Remembered emotional reactions are muted by the passage of time, during which memories fade and lose their relevance and connection to now, and because memory only stores a small fraction of the original sensory qualia experienced.
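The color analogy invites a sketch: if each named emotion is a blend over a few basic feeling channels, then distinct emotions can share nearly the same quale while remaining consciously distinct. The basis and the weights below are invented for illustration only; they are not taken from any study.

```python
# A toy sketch of the "few qualia, many emotions" idea: each named
# emotion maps to a blend over a small basis of feeling channels.
# The channels and all weights are illustrative assumptions.

QUALIA = ("happy", "sad", "fear", "anger")

EMOTIONS = {
    # name: blend over (happy, sad, fear, anger)
    "joy":          (1.0, 0.0, 0.0, 0.0),
    "satisfaction": (0.8, 0.0, 0.0, 0.0),
    "awe":          (0.7, 0.0, 0.2, 0.0),
    "grief":        (0.0, 1.0, 0.1, 0.1),
    "regret":       (0.0, 0.7, 0.1, 0.2),
    "shame":        (0.0, 0.5, 0.3, 0.0),
}

def felt_distance(a, b):
    """How differently two emotions feel, channel by channel."""
    return sum(abs(x - y) for x, y in zip(EMOTIONS[a], EMOTIONS[b]))

# Distinct emotions can feel nearly alike...
print(felt_distance("joy", "satisfaction"))   # small: both feel "uplifting"
# ...while others feel clearly different.
print(felt_distance("joy", "grief"))          # large
```

Just as three color channels span millions of colors, a handful of channels like these could span an effectively unlimited range of felt blends.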

When I was young, I tried to derive a purely rational reason for living, but the best I could come up with is that continuing to live is more interesting than dying. Unfortunately, this reason hinges on interest or curiosity, which I did not realize was an emotion. And, as much as it irks committed rationalists, there is no rational reason to live or to do anything. Our reason for living, and indeed all our motivation, comes entirely from drives and emotions. The purpose of the brain is to control an animal, and the purpose of the conscious mind within it is to make top-level decisions well. The brain is free to pursue any course that furthers its overall function, and it does, but the conscious mind, being organized around the first-person perspective, must believe that the top-level decisions it makes are the free product of its thought processes, i.e. it must believe in its own free will. Humans have an unlimited capacity to pursue general-purpose thought processes using a number of approaches (which I have not yet described), and there is nothing intrinsic to general-purpose thoughts that would direct them along one path in preference to any other. In other words, we can contemplate our navels indefinitely. But we don’t; we are still physical creatures who must struggle to survive, so our minds must have internal mechanisms that will ensure we apply our minds to survive and flourish. If our first-person experience consisted only of awareness, attention, sensory qualia and thoughts, we would not prioritize survival and would soon die. Drives and emotions fill the gap through dispositional qualia. These qualia alter our mood, affecting what we feel inclined to do. They create and maintain our passion for survival and nudge us toward behaviors that ensure it. They don’t mandate immediate action the way reflexes do, because that would be too inflexible, but they apply enough “mental pressure” to “convince” us to do their bidding eventually.
Drives act on us independent of any conscious thought, but emotions “read” our thoughts and react to them. So while our rational thinking does spur us toward goals, those goals ultimately come from drives and emotions. The purpose of life, from the perspective of consciousness, is to satisfy all our drives and emotions as best we can. We must check all the boxes to feel satisfied with the result. We sometimes oversimplify this mission by saying happiness is the goal of life, but the positive emotional feeling of joy is just one of many dispositional qualia contributing to our overall sense of fulfillment of purpose. People with very difficult lives can feel pretty satisfied with how well they have done despite a complete absence of joy.


Let’s take a closer look now at thoughts. Thoughts are the product of reasoning, which refers loosely to all the ways we consciously manage descriptive or referential information, i.e. information that is about something else. Thoughts are constructed using models that combine concepts and subconcepts, which are in turn based on sensory information. Although reasoning is conscious, it draws on subconscious sources like feelings, recollection, and intuition for information and inspiration. The subconscious mind provides these things either unbidden or in response to conscious queries. Consequently, many of our ideas and the models that support them arise entirely from intuitive, subconscious sources and appear in our minds as hunches about the way things are. We then employ these in a variety of more familiar conscious ways to form our thoughts about things. This is a highly iterative process that over a lifetime leads to what seems to be a clear idea of how the world works, even though it is really a very loose amalgamation of subconceptual and conceptual frameworks (mental models). Consciousness directs and makes top-level decisions, but is heavily influenced by qualia, memory, and intuition.

Unlike awareness, attention, and feelings, which are innate, reactive information management systems, thinking is the proactive management of custom information that an animal gathers from experience through its single stream of consciousness and multiple paths of subconsciousness. Our ability to think is innate, but how we think about things both specifically and generally is unlimited and unpredictable because how it develops depends on what we experience. We remember things we think about as thoughts, which are configurations of subconceptual and conceptual details. Subconsciously, we derive patterns from our experiences and store them away as subconcepts. Subconcepts group similar things with each other without labeling them as specific kinds. Consciously, we label frequently-seen groupings as kinds or concepts, and we associate details with them that apply to all, most, or at least some instances we encounter. Once we have filed concepts away in our memory, we can access them subconsciously, so our subconscious minds work with both subconcepts and concepts. Consciously, subconcepts all bubble up from memory and usually feel very familiar, though sometimes they only feel like vague hunches. Much of subconceptual memory imitates life in that we can seem to feel something first hand even though we are only imagining it. Our sense of intuition also springs from familiarity, as it is really just an effort to recall explanations for situations similar to the one we currently find ourselves in. We can reason using both subconcepts and concepts, essentially pitting intuition against rationality. In fact, we always look for intuitive support of rational thinking, except for the most formalized rational thinking in strict formal systems. We also perceive concepts as they bubble up through memory. Concepts can be thought of as a subset of subconcepts which have been labeled and clarified to a higher degree.
As we think about concepts and subconcepts pulled from memory we continually form new concepts and subconcepts as needed, which we then automatically commit to memory. How well we commit something to memory and how well we recall it later is largely a function of how intently we rehearse and use it.

We have a natural facility to form mental models and concepts, and with practice we can develop self-contained logical models where the relationships are explicit and consistently applied. Logical reasoning leverages these conceptual relationships to develop entailments. Rigorous logical systems like mathematics can prove long chains of necessary conclusions, but we have to remember that all models are subconceptual at the lowest levels because concepts are built out of subconcepts. The axiomatic premises of formal logical models arguably need no subconceptual foundation, but if we ever hope to apply such models to real-world situations then we need subconceptual support to map those premises to physical correlates. From a practical standpoint, we let our intuition (meaning the whole of our subconscious and subconceptual support system) guide us to the concepts and purposes that matter most, and then we use logical reasoning to formalize mental models and develop strong implications.

Logical reasoning itself is a process, but concepts are functional entities that can represent either physical entities or other functional entities. Concepts can be arbitrarily abstract since function itself is unbounded, but functional concepts typically refer to procedures or strategies for dealing with certain states of affairs within certain mental models. The same concept can be applied in different ways across an arbitrarily large range of mental models whose premises vary. Consequently, concepts invariably only specify high-level relational aspects, and often with built-in degrees of freedom. Apples don’t have to be red, but are almost certainly red, yellow, green or a combination of them, though in very exceptional cases might be colored differently still. Concepts are said to have prototypical properties that are more likely to apply in generic mental models than more specific ones. As a functional entity, a concept or thought has its own noumenon, and because it is necessarily about something else (its referent), that referent also has its own noumenon. We think about the thought itself via phenomena or reflections on its noumenon, and additionally we only know of the referent’s noumenon through phenomena about it. Our awareness of our knowledge is hence a phenomenon, even though noumena underlie it (including functional and not just physical noumena). Just as with physical noumena, we can prove the existence of functional noumena (thoughts, in this case) by performing repeated observations that demonstrate their persistence. That is, we can think the same sorts of thoughts about the same things in many ways. Persistence is arguably the primary attribute of existence, so the more we observe and see consistency, the more it can be said to exist. Things exist functionally because we say they do, but physical existence is unprovable and is only inferred from phenomena.
While physical persistence means persistence in spacetime, functional persistence means persistence of entailment: the same causes yield the same effects. Put another way, logic is inherently persistent, so any logical configuration necessarily exists functionally (independent of time and space).

All thoughts fall into one of two camps, theoretical or applied. Thoughts are comprised of information, and while information is always a functional construct, its function is latent in theoretical information and active in applied information. We process theory and application quite differently because theory is mostly conceptual while application is mostly subconceptual. Subconceptual thought is one level deep; we look things up by “recognizing” them as appropriate. So subconceptual “theories” consist of direct, “one-step” consequences: using recall we can search all the conceptual models we know to see if we can match a starting condition to an outcome without thinking through any steps one might need to actually employ the model. For example, if we need to go to work, we will recall that our car is appropriate without any thought as to why or how we drive it. But if we need to compare the car to going by foot, bike, motorcycle, bus, or car service, for example, we would use logical models that spelled out the pros and cons of each. Logical models, aka conceptual theories, decompose problems into explanatory parts. Application, on the other hand, is mostly a matching operation, which is why we usually do it subconceptually rather than devising a conceptual way to do it. We have a native capacity to align our models to circumstances and to keep them aligned by monitoring sense data as we go. Similarly, the easiest way to teach something is to demonstrate it, which taps into this native matching capacity. Monkey-see-monkey-do has been demonstrated at the neuron level via mirror neurons, which fire both when we perform an action and when we see someone else perform it. Alternately, one could verbally teach applied information via a conceptual theory. For example, why a cyclist must countersteer to the left to make a right turn can be explained via physical theory. But few cyclists ever learn the theory. Like most applied knowledge, it is easily done but not easily explained.
This exemplifies why there is no substitute for experience; conceptual theories are just the tip of the iceberg of the hands-on knowledge and experience required to do most jobs well. Consequently, most of our basic knowledge comes from doing, not studying, which means development of subconceptual rather than conceptual knowledge. As we do things, our subconscious innate heuristics will detect subconceptual patterns which we can then recall later.
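The difference between the two modes can be caricatured in code, using the commute example. The options, scores, and weights below are made up; the point is only the shape of the two computations, a one-step recognition versus an explicit weighing of explanatory parts.

```python
# A sketch of the two modes from the text, with invented data.

# Subconceptual recall: a one-step match from situation to answer,
# with no reasoning about why the answer works.
RECALL = {"go to work": "car", "get milk": "walk"}

def intuit(situation):
    return RECALL.get(situation)  # recognize, don't reason

# Conceptual theory: decompose the problem into explanatory parts
# (cost, time, effort) and derive the answer from them.
OPTIONS = {
    # option: (cost, minutes, effort); lower is better for each
    "car":  (5.0, 20, 1),
    "bike": (0.0, 45, 4),
    "bus":  (2.5, 40, 2),
}

def reason(weights=(1.0, 0.1, 1.0)):
    def score(attrs):
        return sum(w * a for w, a in zip(weights, attrs))
    return min(OPTIONS, key=lambda o: score(OPTIONS[o]))

print(intuit("go to work"))  # instant, unexplained
print(reason())              # weighs pros and cons explicitly
```

Note that the conceptual route can be re-run under different premises (say, when money matters more), while the subconceptual lookup can only answer what it has already seen.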

In principle, theory is indifferent to application. We can develop arbitrarily abstract theories which may have no conceivable application. In practice, our time is limited and we need to be productive in life, so most of our theories are designed with possible applications in mind. We can certainly develop theories purely for fun, but then we are amused, which also serves a function. We are unlikely to pursue theories we don’t find interesting, because satisfying curiosity is, as noted, a basic drive and emotion. But my point is just that theory can proceed in any direction and stand on its own without regard to application. Application, on the other hand, cannot proceed in any direction but must make accommodations to the situation at hand. Application done purely subconceptually makes a series of best-fit matches from stored knowledge to current conditions. Application done with the support of theories, which are conceptual, will also make a series of best-fit matches. First, we subconsciously evaluate how well the current circumstances fit the models we know to pick the best model. We further evaluate the ways that model fits and doesn’t fit to establish rough probabilities that the model will correctly predict the outcome. Although theories are open-ended, application requires commitment to a single decision at a time. Given everything that can go wrong picking a model, fitting it to the situation, and extrapolating the outcomes, how can we quickly and continuously make decisions and act on them? The answer is belief. Beliefs must be general enough to support quick, confident decisions for almost any situation where a belief would be helpful, but specific enough to apply to real situations correctly. I said above that thoughts are comprised of ideas and beliefs: uncommitted thoughts are ideas, and committed ones are beliefs. Belief is also called commitment, opinion, faith, confidence, and conviction.
The critical distinction between ideas and beliefs is that belief, like emotions, is a subconscious reaction to conscious knowledge. We feel commitment as a quale similarly to happiness and sadness, but it makes us feel determined, even to the point of stubbornness. We won’t act without belief, and conversely, belief makes us feel like acting, and acting confidently at that. Because belief has to pass through two evaluations, one conscious and rational and the other subconscious (which we then experience consciously via the feeling of commitment), the rational and subconscious components can get out of sync when more information becomes available. It sounds redundant, but we have to believe in our beliefs; that is, we have to rationally endorse what we feel. Our subconscious mind will resist changing beliefs because the whole value of beliefs comes from being tightly held. Trust, and trust in beliefs, must be earned over time. We are consequently prone to rationalizing, which is the creation of false reasoning that deflects new knowledge while supporting emotional loyalties. Ironically, rationalizing (in this sense) is an irrational abuse of rational powers. While this loyalty to beliefs is a handy survival mechanism, it also makes us very susceptible to propaganda and advertising, which seek to monopolize our attention with biased information to control our minds.10
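The commitment dynamic can be sketched as a threshold with hysteresis. The thresholds and the update rule below are invented for illustration; they capture only that belief locks in once evidence is strong enough, and that a tightly held belief shrugs off isolated contrary data.

```python
# A toy sketch of belief as a commitment threshold with hysteresis.
# All numbers are illustrative assumptions, not measurements.

class Belief:
    ADOPT = 0.8    # confidence needed to commit
    ABANDON = 0.3  # confidence must fall this far before we let go

    def __init__(self):
        self.confidence = 0.5
        self.committed = False

    def observe(self, evidence, weight=0.2):
        """Blend new evidence (0..1) into our running confidence."""
        self.confidence += weight * (evidence - self.confidence)
        if not self.committed and self.confidence >= self.ADOPT:
            self.committed = True    # belief "locks in"
        elif self.committed and self.confidence < self.ABANDON:
            self.committed = False   # only sustained contrary evidence unseats it

b = Belief()
for _ in range(12):
    b.observe(1.0)    # consistent support leads to commitment
assert b.committed

b.observe(0.0)        # one contrary datum is shrugged off
assert b.committed    # still committed: beliefs are tightly held
```

The gap between the adopt and abandon thresholds is the point of the sketch: it is what makes belief stable enough to act on, and also what makes us slow to revise it.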

The effectiveness of theories and their degree of applicability are matters of probability, but belief creates a feeling of certainty. This raises the question of what certainty and truth really are. Logically, certainty and truth are defined as necessities of entailment; that which is true can be proven to be necessarily true from the premises and the rules of logic. One could argue that logical truths are tautologies and are true by definition and so are not telling us anything we didn’t know would follow from the premises. Usually, though, when we think about truth we are not concerned so much with the logical truths of theory, which may spell things out perfectly, but with the practical truths of applied knowledge. We can’t prove anything about the physical world beyond all doubt or know anything about the true nature of noumena, because knowledge is entirely a phenomenal construct that can only imperfectly describe the physical world. However, with that said, we also know that an overwhelming amount of evidence now supports the uniformity of nature. The Standard Model of particle physics professes that any two subatomic particles of the same kind are identical for all predictive purposes except for occupying a different location in spacetime.11 This uniformity translates very well for aggregate materials at human scale, leading to very reliable sciences for materials, chemistry, electricity, and so on. Models from physical science make quite reliable predictions so long as nothing happens to muck the model up. If something unexpected does happen, we can usually identify new physical causes and adjust or extend the models to restore our ability to predict reliably. Circumstances with too many variables or that involve chaotic conditions are less amenable to prediction, but even in these cases models can be developed that do much better than chance.
If we believe something physical will happen, it means we are sufficiently convinced that the model we have in mind is applicable and will work. So the purpose of belief is to enable us to act quickly, which we feel subjectively as confidence. Because the goal is to be able to apply the model for all intents and purposes that are reasonably anticipated, our standard for truth is pragmatic rather than logical and so can admit exceptions, which can then be dealt with. The scope of what is reasonably anticipated is usually more of a hunch than something we reason out. This is a good strategy most of the time because our hunches, whose scope includes the full range of our subconscious intuition, are quite trustworthy for most matters. Most of what we have to do, after all, is just moving about and interacting with things, and our vast experience doing this gives us very reliable intuitions about what we should believe about our capabilities. Logical reasoning, and consciousness in general, steps in when the autopilot of intuition doesn’t have the answer.

Many of the considerations we manage as minds concern purposes, which have no physical corollary. In particular, whether we should pursue a given purpose is an ethical consideration, so we need to understand what drives ethics. We have preferences for some things over others due to dispositional qualia, which as noted above include drives and emotions. Like belief, ethics are a subconscious reaction to conscious knowledge, and so critically depend on both subconscious and conscious factors. But disposition itself is not rational, so ethics are ultimately innate. Considerable evidence now exists supporting the idea that ethical inclinations are innate12, and it stands to reason that behavioral preferences as fundamental as ethics would be influenced by genes because reason alone can’t create preferences. To understand ethical truth, then, we need only understand these inclinations. Note that inclinations don’t lead to universal ethics; ethics need to be flexible enough to adapt to different circumstances. We don’t yet have a sufficient understanding of the neurochemistry of the mind to unravel the physical basis of any qualia, let alone one as complex as ethics, but evolutionary psychology suggests some things. Our understanding of evolution suggests that we should feel an ethical responsibility to protect, with decreasing priority, ourselves, our family, our tribe, our habitat, our species, and our planet. I think we do feel those responsibilities in just that decreasing order, which I can tell by weighing them against each other to see how I would prioritize them. While these ethical inclinations are innate, we build our beliefs about ethics from ideas we learn consciously, which we then accept subconsciously and consequently feel consciously. So, as with any belief, our feelings can get out of sync with our thoughts.
Many people believe things that contradict their ethical responsibilities without realizing it because they have either not learned enough to know better or have been taught or accepted false information. So adequate and accurate information is essential to making good ethical decisions.

The Self

I’ve spoken of self and not-self information, but not about self-awareness. Is self-awareness unavoidable or can a conscious agent get by in the world without noticing themselves? First, consider that all animals with brains exhibit behavior characteristic of awareness, but this doesn’t imply they all have the conscious experience of awareness. Ants, for example, act as if they were aware, but with such small brains, it may seem more plausible that their behavior is simply automatic. And yet ants are self-aware: “ants marked with a blue dot on their forehead attempted to clean themselves after looking in the mirror while ants with brown dots did not.”13,14 You can’t fake this: the ants knew exactly where their own bodies were and could reverse information about their bodies from a mirror. From a behavioral standpoint, they are self-aware. But this still doesn’t imply they experience this awareness. Without anthropomorphizing, I would say that an aware (and self-aware) agent experiences things if its brain uses an intermediate layer between sensation and decision (essentially a theater of consciousness) that makes decisions based on a simplified representation of the external world rather than on all data at its disposal. This “experiencing” layer would necessarily acquire a first-person perspective in order to interpret that simplified world and keep it aligned with the external world. We can’t actually tell from behavior whether the ant brain has this additional layer, but I would argue that it does not. The reason is that arbitrarily complex behavior can be encoded as instinct, and in very small animals this strategy is the most effective route. They do detect foreign dots on their heads and interact with them, so their brains have advanced visual and motor skills, but they do this only as a consequence of instincts that help them preserve body integrity. Only a handful of more advanced animals (e.g.
apes, elephants, dolphins, Corvus) can pass the mirror test to identify themselves, which they do not as a direct consequence of instinct but because they have an agent layer that experiences self-awareness. Probably all vertebrates and some invertebrates have some degree of consciousness including awareness, attention, and some qualia, and probably all mammals and birds have some measure of emotions and thoughts. Those that can pass the mirror test have sufficient agency to model their physical selves and probably their mental selves as well. Still, though some animals have abilities that can match ours, and many have senses and abilities that surpass ours, something special about human consciousness sets us apart. That something, as I previously noted, is our greater capacity for abstraction, the ability to decouple information from physical referents, which lets us think logically independent of physical reality. And when we think about self-awareness, we are more concerned about this abstract introspection into our “real” selves, which is our set of feelings, desires, beliefs, and thoughts, than with our body awareness.

The idea that the consciousness subprocess is a theater that acts as an intermediate layer between sensation and decision raises the question of who watches that theater. The homunculus argument suggests that a tiny person (homunculus) in the brain watches it, which humorously implies an infinite regress of tinier and tinier homunculi. Though we don’t feel someone else is inside our self, and so on ad infinitum, we do feel like our self is inside us watching that theater. Explaining it away as an illusion is pointless because we already know that the way our minds interpret the world is just a representation of reality and not reality itself. It only counts as an illusion if the intent is to deceive or mislead us, and, of course, the opposite is the case: our senses are designed to give us reliable information. What is happening is that the qualia fed into the consciousness process represent the outside world using an internal representation that simplifies the world down to what matters to us, i.e. down into functional terms. The internal representation bears no resemblance to the external reality. How could it, after all? One is physical and the other is functional (information). But for consciousness to work effectively to bring all those disparate and incomplete sources of information together, it must create the lusion (to coin a word opposite to illusion) that all this information accurately represents the external world. To be completely accurate, it provides this information in two forms, spatial and projected. We feel our bodies themselves spatially through thousands of nerves carrying information to our brains from actual points in space. We interpret our bodies in a spatially complete way, although these nerves actually convey rather limited information. This lusion seems real, though, because we have many body-sense qualia keeping us updated continuously and we feel it gives us seamless, complete awareness of our bodies in 3-D.

We have no such spatial sense beyond our body, but we have projected senses. Sight and hearing give our eyes and ears information about distant objects. To interpret it, we have a head-centric agent that builds a projection-based internal model of the external world. Smell and touch provide additional information from airflow and vibrations. As with body senses that work together to create a seamless spatial model of the body, our projected senses work together to create a seamless projected model of the world. Eyes collect visual information the same way cameras do, so it should come as no surprise that vision interprets 2-D projections as a window into a 3-D world. Furthermore, binocular vision can in principle, and does in practice, achieve stereoscopic sight. The signal from a monocular or binocular video camera itself does nothing to facilitate the interpretation of images. We interpret that data using a highly bespoke process that cherry-picks the information most likely to be relevant (e.g. lines of sharp contrast (boundaries)) and applies real-time recognition to create the lusion that one has correctly identified what one is looking at and can consequently interact with it confidently. The goal is entirely functional (i.e. to give us the competence to act), and our feeling that the outside world has “actually” been brought into our minds happens only because the consciousness process is “instructed” to believe what it sees. The resulting lusion is a fair and complete description of reality given “normal” senses, though we are abundantly biased about what counts as normal. Scientific instruments can extend our perception of the world down to the microscopic level (for example), but not by giving us new qualia. Rather, instruments just map information into the range of our existing qualia, which can create a new kind of fair and complete lusion when done well. Our sensory capacity remains constrained by the qualia our subconscious feeds our conscious.
Also, we consciously commit to a single interpretation of a sensory input at a time15, demonstrating that sensory “belief” happens below the level of consciousness. Consciously, we go along with sensory belief so long as it is not contradicted, but if our recognition triggers any other match, as when a harmless stick starts to move like a snake, we will flip instantly to the new match. In practice, surprises are rare and we feel like we continuously and effortlessly understand what we are seeing. This is a pretty surprising result considering how complex a process real-time recognition is, but it is not so surprising once we appreciate the contribution of memory.

We are convinced by the lusion our senses present to us because it is integrated so closely with our memory. In fact, understanding is really a product of memory and not senses; our senses only confirm what we already (think we) know. We don’t have to examine everything about us closely because we have seen it all before (or stuff similar enough to it) and we are comfortable that further sensory analysis would only confirm what we already know. We do inevitably reexamine the objects we interact with in order to use them, but never more closely than is necessary to achieve the functions we have in mind, because knowledge is a functional construct. If we do take the time to study an ordinary object just for fun or to pass time, this dedication of attention has still surpassed all others in that moment to become the one action that has the most potential to serve our overall functional objectives. In other words, we can’t escape our functional imperatives. If our senses don’t align with any memory (e.g. consider the inverted glasses example, or being unexpectedly swallowed by a whale), we will be disoriented until we can connect senses to memory somehow. Our confidence in the seamless continuity of what we see is a function of the mind’s real world, which is the mental model (in our memory) of the current state of the physical world. Our sensory inputs don’t create that model, they only confirm it. The attention subprocess of consciousness (which is itself subconscious) stays alert for differences between what it expects the mind’s real world to be and what the senses provide. These differences are resolved subconsciously in real time to prevent one image from being simultaneously interpreted as multiple objects, despite the fact that any image is potentially ambiguous. The subconscious mind actively crafts the lusion we perceive, even though it can be tricked into seeing an illusion.
The important thing is that the match between lusion and reality is generally very reliable, meaning we can act on it with confidence. Our whole suite of qualia continuously confirms that the mind’s real world is the actual world by making it “feel like” it is. As I work by the window, cars travel up and down my street and I hear them before I see them. I know roughly what they will look like before I see them, and I generally only see them peripherally, but I am confident in my seamless mental model despite being surprisingly short on detailed information.

Back to the original question: is there a homunculus viewing these spatial and projected views of the world? Yes, there definitely is, and it is the consciousness subprocess. This subprocess is quite distinct from a small person because it is just one subprocess in a person’s brain and not a whole person. The confusion comes because we identify our conscious minds with the bodies that use them to preserve the lusion. But we don’t need to extrapolate another body for the mind; we know minds are disembodied functional entities with no physical substance. So there is no regress; consciousness was always a disembodied agent. It is only awkward to conceive of ourselves as having both physical and functional components if we are militantly physicalist. Using common sense, we have had no trouble with this dichotomy, probably for thousands and even millions of years. It is not a regress to say that consciousness is a subprocess of the brain with its own internal model of the world; it is just a statement of fact. Consciousness is designed to see itself as an agent in the world rather than as a collector and processor of information, and the subconscious is designed to spoon-feed consciousness information in the forms that support that lusion. The result is that consciousness has a lusion that mirrors the outside world, and interacting with the lusion makes the body perform in the real world, much like pulling on puppet strings.

The Stream of Consciousness

We’ve taken a closer look at some of the key components, but haven’t yet hit some of the bigger questions. I have pointed out how consciousness separates self and not-self information. I described why qualia need to be distinguishable from each other and also how stronger custom feelings inspire stronger reactions. I reviewed how emotions and thoughts work. And I described how the self is an informational entity that is fed a simulation (a lusion) that lets us engage in virtual interactions that power physical interactions in the external world. But I still haven’t tied together just why consciousness uses awareness, attention, feelings, and thoughts to achieve its goal of controlling the body. It comes down to a simple fact: there is only one body to control. The consequence is that whatever algorithm controls the body must settle on just one action at a time (if one takes the direction of all bodily parts as a single, coordinated action). For this reason, I call the core part of consciousness that does the deciding the SSSS, for single-stream step selector. The SSSS must be organized to facilitate taking the most advantageous step in every circumstance it encounters. This is not the kind of problem modern-day procedural computer programs can solve because it must simultaneously identify and address many goals whose attainment covers many time scales. The evolutionary goal of survival, which requires sustenance and reproduction (at least), is the only long-term goal, but it must be subdivided into many sub-strategies to outperform many competitors.
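The core constraint on the SSSS can be sketched in code. This is only an illustrative thought experiment, not anything the book specifies: the names (Candidate, pick_step) and the urgency-times-payoff scoring are my own assumptions. The one property the sketch does capture faithfully is that however many goals compete, across however many time scales, exactly one action is emitted at each moment, because there is only one body.

```python
# Hypothetical sketch of a single-stream step selector (SSSS).
# The scoring scheme is an illustrative assumption, not a specification.

from dataclasses import dataclass

@dataclass
class Candidate:
    action: str       # a coordinated whole-body action
    urgency: float    # how pressing the underlying need feels right now
    payoff: float     # estimated longer-term benefit if the action is taken

def pick_step(candidates: list[Candidate]) -> str:
    # Many goals are weighed simultaneously, but the selector must
    # still resolve them to a single action for the single body.
    best = max(candidates, key=lambda c: c.urgency * c.payoff)
    return best.action

step = pick_step([
    Candidate("eat", urgency=0.9, payoff=0.5),
    Candidate("write report", urgency=0.3, payoff=0.8),
    Candidate("flee", urgency=0.1, payoff=1.0),
])  # one and only one action comes out
```

A real control system would, as the text argues, be far less procedural than this if-x-then-y toy, but the funnel shape is the point: many inputs, one output per moment.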

From a logical standpoint, before we consider the role of consciousness, let’s look at Maslow’s Hierarchy of Needs, which Abraham Maslow proposed in 1943. He outlined five levels of needs which must be satisfied in order for a person to thrive: physiological, safety, belongingness, esteem, and self-actualization. All of these needs follow necessarily and logically from the single evolutionary need to survive, but it is not immediately apparent why and how. I would put his first two needs at the same level. Physiological needs, such as food, shelter, and sex, are positive goals and their corresponding negative goals, which are defensive or protective measures, are Maslow’s safety needs. All animals must achieve these positive and negative goals to survive. The next two needs are belongingness and esteem, which are only relevant for social species. Individuals must work together in a social species to maximize fitness of the group, so whatever algorithm controls the body must incorporate drives for socialization. Belongingness refers to the willingness to engage with others, while esteem refers to the effectiveness of those engagements. Addressing physiological and safety needs benefits survival directly, but the benefits of one socialization strategy over another are indirect and can take generations to demonstrate their value. This value has been captured by instincts that make us inclined to favor socialization behaviors that have been successful in the past. We may feel that our social behavior is mostly rational, but it is mostly driven by emotions, which are complex instinctive socialization mechanisms. We must attend to these first four needs to prevent problems, so they are called deficiency needs. The last need, self-actualization, is called a growth need because it inspires action beyond a deficiency need. 
For maximum competitiveness, all animals must both avoid deficiencies and desire gains for their own sake, which I described above as consequences and curiosity. But self-actualization goes beyond curiosity to provide what Kurt Goldstein originally called in 1934 “the driving force that maximizes and determines the path of an individual”16. We don’t really have any concrete evidence to support a self-actualization drive, but it does stand to reason that instincts would evolve to push us both to cover deficits and seize opportunities to flourish and grow, and the latter should provide more competitive edge than the former. Such a drive would inspire people to achieve overall purposes in life, i.e. to seek meaning in life through certain kinds of activities, and all people feel a pull to satisfy such purposes. It is safe to say that the reasons we imagine drive us toward those purposes are rationalizations, meaning that we devise the reasons to explain the behavior rather than the other way around. But it doesn’t matter whether we know this; we are still inspired to lead purpose-driven lives that go above and beyond apparent survival benefit because the self-actualization drive compels us to excel and not just live.

Consciousness may not be the only solution that can satisfy these needs effectively, but it is the one nature has chosen. I’m going to list the main reasons consciousness is well suited to meet these needs, roughly from most to least important.

  1. Divide and conquer. Some decisions are more important than others, so any control algorithm needs to be able to devote adequate resources and focus to important decisions and less to mundane ones without becoming confused. Consciousness solves this in many ways. Most significantly, it bifurcates matters worthy of top-level consideration from those that are not by cordoning the latter group off in the subconscious, outside conscious awareness. Second, it uses qualia of different levels of motivating power to focus more attention on pressing needs. Third, it provides a medium for conceptual analysis, which divides the world up into generalized groups about which one has useful predictive knowledge.

  2. Awareness and attention. Important information can arrive at any moment, so any control algorithm should stay alert to any and all information the senses provide. At the same time, computing resources are finite, so senses should specialize in just the kinds of information that have proven the most useful. Consciousness achieves this goal admirably with awareness and attention. Awareness keeps all qualia running at all times, while attention leverages subconscious algorithms to notice unusual inputs and also lets us direct conscious thoughts along the most promising pathways. Sleep is a notable exception and so it must provide enough benefits to warrant the cost.

  3. Incorporating feedback. Let’s look first at the selection part of single-stream step selection. The algorithm must pick one action instead of others, and it has to live with the consequences immediately. This is equivalent to saying it is responsible for its decisions. Consciousness creates a feeling of responsibility, not because we controlled what we did but because future behavior builds on past decisions. This side-steps the question of free will for now; I’ll get back to that. The important point is that consciousness feels like it caused decisions because this feeling of responsibility is such a great way to incorporate feedback.

  4. Single-stream. Now let’s think about single-stream. It is not a coincidence that our decisions must happen one at a time and we also have a single stream of consciousness. We know the subconscious does many things in parallel, but we can’t consciously think in parallel, and we must, in fact, resolve concepts down to one thing at a time to think further with them. The reason is that it is a strong adaptive advantage to be of one mind when it comes time to make a decision. If we have two or more competing lines of thought which simultaneously have our attention, then when we come to the moment of decision we will need to narrow them down to one since we have only one body. But the problem is, every moment is a possible moment of decision. If evolved creatures could make decisions on their own timetable, then it would be faster to think through many scenarios simultaneously and then pick the best. But not only do we not get to choose those moments, we actually make decisions every moment. We are always doing something with our bodies, and while much of that is not consuming much of our conscious attention because we have it on subconscious “autopilot,” we are committing to actions continuously, and it wouldn’t do to be unsure. Consequently, it is better for our conscious perspective to be one which perceives a single train of thought making the best decisions it can with the available information. This means that alternative strategies must be simulated serially and then compared. While this is slower than thinking in parallel, our subconscious helps us reap some benefits of parallel thinking by giving us a good feel for alternatives and by facilitating switching between different lines of thought.

  5. Qualia help us prioritize. Qualia help us keep our many goals straight in our minds simultaneously. If the primary purpose of consciousness is to make decisions, but it needs to prioritize many constantly competing goals, it needs an easy way to prioritize them without bogging down the reasoning process. Qualia are perfect at that because they force our attention to the needs our subconscious considers most pressing. It is not a good idea to leave the decisions about what to think about next entirely up to the rational mind, because it has no motivation to focus on important matters.
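Points 4 and 5 above can be combined into a small sketch. Again, every name here (simulate, deliberate, the weight values) is a hypothetical illustration of mine, not the author's model: the sketch only shows the two structural claims of the text, namely that alternatives are simulated strictly one at a time rather than in parallel, and that qualia-like subconscious weights decide which alternative gets conscious attention first.

```python
# Illustrative sketch of serial deliberation with qualia-weighted attention.
# All names and values are hypothetical assumptions for demonstration.

def simulate(strategy: str) -> float:
    # Stand-in for consciously imagining a scenario and scoring its outcome.
    scores = {"approach": 0.7, "wait": 0.4, "retreat": 0.2}
    return scores.get(strategy, 0.0)

def deliberate(strategies: list[str], weights: dict[str, float]) -> str:
    # Subconscious "qualia" weights order the queue of things to think about;
    # conscious simulation then runs strictly serially, one train of thought
    # at a time, keeping the best result seen so far.
    queue = sorted(strategies, key=lambda s: weights.get(s, 0.0), reverse=True)
    best, best_score = None, float("-inf")
    for s in queue:
        score = simulate(s)
        if score > best_score:
            best, best_score = s, score
    return best

choice = deliberate(["wait", "retreat", "approach"],
                    weights={"approach": 0.9, "wait": 0.5, "retreat": 0.1})
```

Note that the serial loop is slower than evaluating all strategies at once, which matches the text's claim that the subconscious compensates by pre-ordering and pre-scoring the alternatives rather than by true conscious parallelism.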

The Hard Problem of Consciousness

Does the above resolve the “hard” problem? The problem is only hard if we aren’t willing to think of consciousness as an experience created by the subconscious. That’s odd, because we know it has to be. There is clearly no physical substance to the conscious mind other than the neurochemistry supporting it. It must therefore be a consequence of processes in the brain. We know that our senses (including complex processing like 3-D vision), recognition, recollection, language support, and so on require a lot of computation of which we have no conscious awareness, so we have to conclude the subconscious does the work and feeds it to us in the form of our first-person perspective. What separates this first-person perspective from what a zombie or toaster would experience (i.e. nothing), and what ultimately gives our qualia and other experiences meaning and “feel,” is their function. Experience is information, which means it enables us to predict likelihoods and take actions. The whole feel and excitement of experience relate to the potential value of the information. If it were white noise, we wouldn’t care and all of experience would dissolve into nothingness. So the subconscious doesn’t just provide us with pretty accurate information through many qualia, it also compels us to care about that information (via drives, emotions, and beliefs) roughly in proportion to how much it matters to our continued survival. As I said, qualia feel like what they inspire us to do. The feel of qualia is just the feel of the survival drive itself broken down to a more granular level.

So my contention is that information is the secret sauce that makes experience possible, specifically because it makes function possible. Chalmers is open to information being the missing link:

Information seems to be a simple and straightforward construct that is well suited for this sort of connection, and which may hold the promise of yielding a set of laws that are simple and comprehensive. If such a set of laws could be achieved, then we might truly have a fundamental theory of consciousness.

It may just be…that there is a way of seeing information itself as fundamental.17

Information is fundamental. While it is ultimately physical systems that collect and manage information in a physical world, one can’t explain the capabilities of these systems in physical terms because they leverage feedback loops to create informational models. The connection from information to application becomes too indirect and abstract to track physically. But these systems can be explained in functional terms. Our first-person experience of consciousness is a kind of program running in the brain that interprets awareness, attention, feelings, and thoughts as the components of a unified self. We are predisposed by evolution to think of ourselves as a consistent whole, despite the self really being a mishmash of disparate information. Millions of years of tweaks bring it all together to make a convincing lusion of a functional being acting as an agent in the world.

The first-person perspective of consciousness is a good and possibly ideal solution for controlling animal bodies. The reason is that it links function to form using effectiveness as the driving force. Specifically, it continuously aligns representations in the brain to external reality by providing conscious rewards for behavior that lines up with survival needs. Survival is a hard job and sounds onerous, but we enjoy doing it when we do it well because of these rewards. The other side of the coin, of course, is that we dislike it when we do it badly, which gives us an incentive to up our game. It isn’t the kind of control system one could build with if-x-happens-then-do-y logic. Rather, it is self-balancing and autopoietic. Autopoiesis refers to an organism’s ability to reproduce and maintain itself, but in the case of the brain’s top-level control system it more generally refers to its ability to manage many simultaneous sources of information and many goals (polytely) gracefully. Subjectively, we feel different priorities fighting for our attention in different ways, leading us to prioritize them as needed. I don’t know if first-person subjectivity is the only way to solve this problem gracefully and well, but I suspect so. In any case, it evolved and works and we only need to explain how. Subjectively, consciousness has its own functional, high-level way of seeing the world that interprets things in terms of their perceived functions, dwells on what it knows, and issues orders to the body to act.

We can only consciously think a single stream, but it seems likely that all the algorithms of the subconscious are parallel. Thousands to millions of parallel paths are vastly more powerful than a single path, but the single path of consciousness draws on many subconscious algorithms to become much more than a single logical path could. Consciousness must logically resolve into a single stream so we can come to one decision at a time to guide one action at a time. Theoretically, a brain could maintain multiple streams of consciousness before the moment of decision, but mother nature has apparently found that this creates a counterproductive internal conflict, because we find we can feel or think just one non-conflicting thing at a time. Note that severing the hemispheres via corpus callosotomy possibly forces such a split in some ways, at least temporarily. But split-brain patients report feeling the same as before the split, and can still use either hemisphere to sense objects presented only to one side (e.g. to the left visual field, which is processed by the right hemisphere). It is now theorized that the thalamus, which is not separated by this operation, mediates communication between the hemispheres18. It is also possible that neural plasticity can either regrow cortical pathways or repurpose subcortical (e.g. thalamic) pathways to maximize interhemispheric integration19. But I would just emphasize that the stream of consciousness is linked to the SSSS (single-stream step selector), so having more than one stream of consciousness would make coordinated action impossible, or would at the very least require one stream to dominate the other.

Multiple personality, now called dissociative identity disorder (DID), arguably creates serial rather than parallel streams of consciousness. This strategy usually arises as an adaptation for dealing with severe childhood abuse. As a protective mechanism, DID patients’ identities fragment into imaginary roles as an escape from their true circumstances. Although their situation has driven them to invest an unrealistic level of belief in these alter personas, which in turn suppresses belief in their real persona, I don’t consider this condition to represent true multiple serial streams of consciousness, but rather just a single, confused stream. All of us live in worlds we construct with our mental models, and that includes hopes and dreams not at all apparent from our physical circumstances but constructed from our interpretation of the fabric of social reality. When we embrace personality traits as our own, we are trying on a role to see how it fits. The way we behave then comes to define us, both from our own perspective and from others’. But it isn’t all we are; we always have the potential to change, and, in any case, our past behavior may be indicative of but does not constrain our future behavior. Physical events don’t actually repeat themselves; only patterns repeat, and patterns have boundaries that are always subject to interpretation. Given this inherent fluidity of function, is it meaningful to speak of a unified self?

Our Concept of Self

I have spoken of how our knowledge divides into knowledge about the self and not-self, and of how consciousness creates a perspective of an agent in the world, which is the self. But I haven’t spoken about what we know about our own self, i.e. about self-knowledge. Self-knowledge is clearly an exercise in abstract thinking because while all higher animals have some capacity to experience simulations of themselves acting in the world, knowledge of self goes further to attribute situation-independent qualities to that self. Self-knowledge arises mostly from the same submerged iceberg of subconscious knowledge that underlies all our knowledge and then leverages the same kind of generalization skills we use to process all conceptual knowledge. But it has the important distinction of being directed at ourselves, the machine doing the processing. Having the capacity to reflect on ourselves is a liability from an evolutionary standpoint because it presents evolution with the additional challenge of ensuring that we will be happy with what we see. Since we have been designed incrementally, this problem has been solved seamlessly by expanding our drives and emotions sufficiently over time to keep our rational minds from wandering excessively. Humans are such emotional creatures, relative to other animals, because we have to be persuaded to dedicate our mental powers sufficiently toward survival and all its attendant needs. This persuasion doesn’t stop with short-term effects, but influences how we model the world by leading us to adopt beliefs and convictions for which we will strive diligently. Those beliefs may be rationally supported, but they are more usually emotionally supported, often based solely on social pressure, because we are adapted to put more stock in our drives, emotions, and the opinions of others than our own ability to think.

So we can see ourselves, but we are biased in our perspective. When well-adjusted, we will accept the conflicting messages from our emotional, social, and rational natures to see a unified and balanced entity. While it is not hard to see why it is adaptive for us to feel like unified agents, doesn’t our ability to think rationally highlight all the diverse pieces of our minds, which could instead support a fractured view of ourselves? Yes, it does. We see many perspectives and are often torn over which ones to back. We can doubt whether the choices we make even accurately represent what we are, or are just compromises we have to make because we don’t have enough time or information to discover our true preferences. But our persistent sense of self is generally not shaken by such conflicts. Our feeling of continuity with our past creates for us at any given moment a feeling of great stability: we may not be able to picture all the qualities we associate with ourselves, as we probably have not given any of them much active thought recently, but we know they are there. Themes about ourselves float in our heads in such a way that we can sense that they are there without thinking about them. This “floating”, which applies to any kind of memory and not just thoughts about ourselves, feels like (and is) thousands of parallel subconscious recollections helping to back up our conscious thoughts without our having to focus directly on them. Subconscious thoughts not brought into focus contribute to our conscious thought by giving us more confidence in the applicability of any intuition or concept to the matters at hand. We know they are there because we can and often do explore such peripheral thoughts consciously, which reinforces our sense that our whole minds go much deeper than the thoughts we are executing at any given moment in time.
Note that this is both because the subconscious does so much for us and because thoughts are informational structures in their own right — functional entities — and not just steps in a process, and so have a timeless quality.

So do we know ourself? Knowledge is always phenomenal, not noumenal, and so presents a perspective or representation of something without explaining the whole thing. We know many things about ourselves. Intuitively, we know ourselves through drives, emotions, our responses to social situations, and even our habitual thinking patterns. Rationally, we know ourselves from impartial considerations of the above and our abilities. It isn’t a complete picture — nothing ever is — but it is enough for most of us to go on. We see ourselves both as entities with certain capabilities and potential and as accomplishers of a variety of deeds. We know many things strongly or for sure, many more things still we only suspect or know little about, and of everything else, which is surely a lot, we know nothing. But it is enough. We always know enough to do something, because time marches on and we must always act. Is it wrong that we don’t spend more time in self-contemplation to ferret out our inner nature? Aside from the obvious point that what we do with our minds can’t be either right or wrong but simply is what it is, we can’t break with our nature anyway. Our genomes have a mission which has been reinforced over countless generations, and it makes us strongly inclined to pursue certain objectives. “I yam what I yam and tha’s all what I yam.”20 So while I would argue that we can direct our thoughts in any direction, we can’t fundamentally alter the nature of our self, which is determined by nature and nurture, though we can alter it somewhat over time by nurturing it in new directions.

Knowing now that we look at ourselves from a combination of intuitive and rational perspectives, what do we see? Mostly, we see an agent directing its body, both in practice and as the lead character in stories we tell ourselves. The self is a fiction just as all knowledge is a fiction, but that doesn’t make knowledge or the self unreal; both are real as functional entities: we are real because we can think of ourselves as real. As I said above, self and not-self information is a fundamental distinction in the brain, so far from being illusory, the self is profoundly “lusory”. Our concept of self develops from turning our subjective attention inward, making our subjective capacity the object of study: I as subject study me as object. Some philosophers claim that the self can’t understand the self because self-referential analysis is inherently circular, but this is not true. The study of anything creates another layer, a description of the phenomenon that is not the object (noumenon) under study itself. But if what we know about the self or mind is created by the mind, is it really knowledge? Can we escape our inherent subjectivity to achieve objectivity? Yes, because objective knowledge is never absolute; it is functional: it is knowledge if it makes good predictions. We don’t actually have to know anything that is incontrovertibly true about the mind; we only need to make generalizations that stand up consistently and well. So it doesn’t matter if the theories we cook up seem far-fetched or bizarre from some perspectives, as that can be said of all scientific theories. Theories about our subjective lives attain objectivity if they test well. This doesn’t mean we should throw every theory we can concoct at the wall to see if it sticks; we should try to devise theories that are consistent with all available knowledge, using both subjective and objective sources.
We can’t afford to ignore our subjective knowledge about the mind because almost everything we know about it emanates from our subjective awareness of it.