The Story of the Mind: A New Scientific Perspective on Our Essential Nature

We all have minds. We use them continuously every waking moment of every day. We take what they can do for granted. Why we have them and how they work have no direct bearing on our lives, so we mostly just carry on and don’t worry about it. It’s remarkable that all understanding, scientific or otherwise, depends critically on our ability to use our minds, yet we don’t have to understand them to use them, and thus far we have failed to do so. Similarly, we can use software without having read its source code, much less having written it ourselves. The “software” of our minds, usually called wetware, was “written” to meet our needs in the ancestral state, not in the rapidly changing and increasingly artificial environment we have created. We can’t afford to remain mere users; we have to understand what makes us tick and even tweak or upgrade our programming if we want to survive in the long run. To get started, we have to find a way to make minds and ideas into objects of study themselves.

But how should we go about it? The Greeks started with the psyche, which is analogous to what we would call the soul. Aristotle wrote in Peri psyche that the psyche is that which makes the body alive and able to perform its characteristic functions. He divided its powers into vegetative powers, concerned with nutrition and growth; sensory powers (that is, vision, hearing, taste, smell, and touch, as well as the internal senses of imagination and memory); and intellectual powers (understanding, assertion, and discursive thinking).1 From my perspective, this is pretty close; closer than anyone has come since. The theory I will develop here will corroborate his view.

What Aristotle had that has been in short supply lately is a broad mandate. Science did not yet exist, so he created it, substantially filling in the major branches. As the tree of science has grown, it has become less fashionable and feasible to address the big picture with fresh eyes the way he did. Science has trended toward specialization, not generalization. There are perfectly good reasons for this, which I will address later, but suppose we take it as a challenge. What if our understanding of the mind has been held back by the way science has branched, leading to detailed study in specialized areas while missing the forest for the trees? What if I took on the broad mandate to explain the mind from first principles, rethinking the structure of science and what it means in relation to the mind?

That challenge has become a raison d’être for me. I’ve always been just a bit obsessed with examining my own thought processes to get to the bottom of it all. We all have thoughts about our thoughts but don’t expect to make a career of it, so I was not surprised to find no obvious path forward in college. I started out focusing on genetics, but it was all lab work in those days and I am more of a theorist than an experimentalist. I turned my attention to computers but saw no promise in the artificial intelligence of the time, which was based entirely on representation and logic. I put my thoughts on the matter aside as something to get back to in the future and settled into a career in systems and application programming, which kept me off the streets. But the problem kept bothering me, so in 1996 I started writing a book to explain the mind. With a family and a full-time job, I was only able to make sporadic progress until I retired in 2016, at which time I decided to fulfill the quest. I’ve rewritten the first few chapters dozens of times as my ideas have evolved.

Many other people have also been thinking about the problem. Unraveling the mind has become something of an international obsession over the past fifty years. But I don’t think many have looked at it with a broad mandate and fresh eyes. It’s all dividing and no conquering, because this is not a problem that can be solved with specialization. We need to step back to a state of maximal generalization and from there start to focus in. I am not here to refute any of the findings of science; I am here to embrace them. But our scientific knowledge that bears on the mind is scattered and does not speak to the nature of mind confidently from the top down. Different schools of thought have evolved to cover different aspects of the problem but have culminated only in a welter of conflicting views. I’m going to try to develop a firm foundation for a comprehensive view that integrates our scientific knowledge into one framework.

My approach is scientific, but to call it that we first have to agree on what it means to be scientific. For starters, I will take on the philosophy of science itself, both defining meaning in science and providing an expanded framework of what science should be. Science is founded on educated guesswork, by which I mean proposing hypotheses to explain phenomena. One then tests the hypotheses, which either confirms them or highlights the need for new ones. All practicing scientists are expected to conduct original scientific research, which includes both new hypotheses and new experiments to test them. I am not an experimentalist; I am a synthesist. My goal is not to make new scientific discoveries but to reorganize existing scientific knowledge into a more explanatory framework. Consequently, I will only be proposing hypotheses that are already supported by abundant evidence. My claims, as I state them initially, may not seem adequately supported, but as the book proceeds I will fill in the gaps. It is not my intention to be contentious or even controversial, as I am only seeking to form a larger accord in scientific thought, which is necessary to propose and advance theories of the mind. Keep your eyes open for any claims that contradict settled science and feel free to call me out on them.

The Mind Matters: The Scientific Case for Our Existence

Scientists don’t know quite what to make of the mind. They are torn between two extremes: the physical view, that the brain is a machine and the mind is a process running in it, and the ideal view, that the mind is non-physical or trans-physical despite operating in a physical brain. The first camp, called eliminative materialism (or just eliminativism, materialism, or reductionism), holds that only the physical exists, as supported by the hard scientific theories and evidence of physics, chemistry, and biology. At the far end of the latter camp are the solipsists, who hold that only one’s own mind exists, and a larger group of idealists, who hold that one or more minds exist but the physical world does not. Most members of the second camp, however, acknowledge physical existence but think something about our mental life can never be reduced to purely physical terms, though they can’t quite put their finger on what it is. Social science and our own intuition both assume that mental existence is something beyond physical existence. A materialist will hold either that (a) there are no mental states, just brain states (eliminativism), or that (b) there are mental states, but they can be viewed as (reduced to) brain states (reductionism). An idealist will hold that mental states can’t be reduced, which means they have some kind of existence beyond the physical, and that the physical world might exist, but if so, its existence can never be conclusively proven.

I propose to resolve all these issues by reintroducing dualism, the idea of a separate kind of existence for mind and matter, but I will do so in a way that is compatible with materialist science. The eliminativist idea that everything that exists is physical, aka physical monism, is true so long as one is referring to physical things, including matter, energy, and spacetime. But another kind of existence entirely, which I will hereafter call functional existence, concerns the capabilities inherent in some systems. These can be capabilities of physical or fictional systems; their physical existence is independent of their functional existence and only relevant to it in limited ways. A capability is the power to do something, so functionality can be called a behavioral condition, which is a different thing from any underlying physical mechanism that might make it possible. What the mind does, as opposed to what the brain does, is to support the appropriate function of the body; the term “mind” refers to functionality while “brain” indicates the physical mechanism. Scientific explanations themselves (even those of materialists) are entirely functional and speak to capabilities; they are not at all about the physical form an explanation might take in our brains. But these theories don’t have to be about functional things, and those of physics and chemistry never are.

Though all functional things must take a physical form in a physical universe such as the one we are in, this doesn’t mean function “reduces” to the physical. The quest for reductionism is misguided and has been holding science back for entirely too long. We have to get past it before we can make meaningful progress in subjects where functional existence is paramount. To do that, let’s take a closer look at where and how function arises in a physical universe. So far as we know, everything in the universe except life is devoid of function. Physics and chemistry help us predict the behavior of nonliving things with equations. Physical laws are quite effective for describing simple physical systems but quite helpless with complex physical systems, where complexity refers to chaotic, complex, or functional factors, or a combination of them. Chaos arises when small changes in starting conditions have butterfly effects that eventually change the whole system. Complex factors are those whose intricate interactions exceed the predictive range of our available models, which necessarily simplify. We see both chaos and complexity in weather patterns, and yet we have devised models that are pretty helpful at predicting them. But the weather and all other nonliving systems are not capable of controlling their own behavior; they are reactive, not proactive. Capability arises in living things because they are complex adaptive systems (CAS) that exploit positive feedback to perpetuate changes in DNA and in the bodies that replicate it. I can’t prove that functionality could only arise in a physical universe through complex adaptive systems, but any system lacking cycles of both positive and negative feedback would probably never get there. Over time, a CAS creates an abstract quantity called information, which is a pattern that has predictive power over the system. We can’t actually predict the future using information, but we can identify patterns that recur more often than chance would allow, and this power is equivalent to an ability to foretell, just less certain.
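
To make chaotic sensitivity concrete, here is a minimal sketch in Python; the logistic map and all its parameters are illustrative choices of mine, not anything drawn from the theory itself:

```python
# Minimal illustration of chaos: iterate the logistic map x -> r*x*(1-x).
# Two trajectories that start almost identically soon diverge completely,
# which is why long-range prediction of chaotic systems fails.

def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # a one-in-a-million nudge

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
# By around step 30 the two trajectories bear no resemblance to each other.
```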

Functional systems, which I will also refer to as information management systems, must be physical in a physical universe. But because the mechanisms that control them are organized around what they can do (their capability) instead of how they do it (their physical mechanism), we must speak of their functional existence in addition to their physical existence. By leveraging feedback, these systems acquire a capacity to refer to something else, to be about something other than themselves that is not directly connected to them. This indirection is the point of detachment where functional existence arises and (in a sense) leaves physical existence behind. At this point, the specific link is broken and a general “link” is established. Such an indirect link refers only to the fact that the information can be applied appropriately, not that any link is stored in any way. Just how a functional system uses information about something else to influence it can be implemented physically in many ways, but understanding those ways is not relevant to understanding the information or the function it makes possible. At the point the information detaches, it gains existential independence; it is about something, and how it accomplishes that hardly matters. It has a physical basis, but that won’t help us explain its functional capabilities at all. While every brain and computer has different processing powers (memory, speed, I/O channels, etc.), in principle they can manage the same information in completely different ways because the measure of information is the function it makes possible, not how it is implemented.
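
A small sketch may make this implementation independence concrete. The example is entirely hypothetical (a toy piece of “which foods are safe” information), but it shows how the same information can be measured by the function it enables rather than by its physical realization:

```python
# Sketch: two implementations that embody the *same* information (which
# foods are safe) by different mechanisms. Measured by the function they
# make possible, they are interchangeable; nothing in the caller depends
# on *how* the answer is produced.

class TableMemory:
    """Stores the information as an explicit lookup table."""
    SAFE = {"apple", "bread", "rice"}

    def is_safe(self, food: str) -> bool:
        return food in self.SAFE

class RuleMemory:
    """Encodes the same information procedurally, as a hardcoded rule."""

    def is_safe(self, food: str) -> bool:
        return food in ("apple", "bread", "rice")  # different mechanism

for memory in (TableMemory(), RuleMemory()):
    assert memory.is_safe("apple") and not memory.is_safe("toadstool")
print("both mechanisms carry the same information")
```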

In summary, I am proposing a new kind of dualism, which I call form and function dualism, which says that everything is physical, but also that information management systems create additional entities that have a functional existence. Further, an infinity of functional entities can be said to exist hypothetically even if no physical system is implementing them. Information management systems that exist include:

  • biological organisms, which store information in genes using DNA,
  • minds, which store information in brains neurochemically,
  • civilizations, which store information in institutions (rule-based practices) and artifacts (e.g. books), and
  • software, which stores information in computer memory devices.

Organisms, minds, civilizations, and software can be said to have functions, and it is meaningful to discuss such functions independently of the underlying physical systems they run on.

Socrates and Plato recognized that function stands qualitatively apart from the material world and explained it using teleology. Teleology is the idea that in addition to a form or material cause, things also have a function or purpose, their final cause. They understood that while material causes were always present and hence necessary, they were not sufficient or final to explain why many things were the way they were. Purposes humans imposed on objects like forks were called extrinsic, while purposes inherent to objects, like an acorn’s purpose to become a tree, were called intrinsic. Aristotle listed four causes to explain change in the world, not just two: material, formal, efficient, and final, but the latter three are just aspects of function (akin to what, who, and why) and so are teleological. He used the word cause more broadly than we do today; cause, as in cause and effect, now refers only to the efficient cause, “who” caused what. The formal cause refers to the lines we draw to distinguish wholes from their parts, i.e. our system of classification. To the Greeks, these lines seemed mostly intrinsic, but we see them today more as artificial constructs we impose on the world for our convenience.

Although Aristotle had put science on a firm footing by recognizing the need for teleological causes, he failed to recognize the source of purpose in natural systems. It is my contention that information management systems are the source; they accomplish purposes and functions whenever they apply information to guide their actions. The Scientific Revolution picked up four hundred years ago where Aristotle left off, but information as such would not be discovered until 19481, which then led to systems theory2, also called cybernetics, in the following decades. Complex adaptive systems are complex systems that evolve, and living organisms are complex adaptive systems with autopoiesis, the ability to maintain and replicate themselves. Minds are dynamic information management systems built in animal brains that create information in real time. Civilizations and software are human-designed information management systems that depend on people or computers to run them.

Gilbert Ryle famously put the nail in the coffin of Cartesian dualism in The Concept of Mind3. While we know (and knew then) that the mental “thinking substance” Descartes proposed, which interacted with the brain in the pineal gland, does not exist as a physical substance, Ryle felt it still had tacit if not explicit “official” support in 1949. While we know our lives metaphorically bifurcate into two streams, one of ‘inner’ mental happenings and one of ‘outer’ physical happenings, each with a distinct vocabulary, he felt we went further philosophically: “It is assumed that there are two different kinds of existence or status. What exists or happens may have the status of physical existence, or it may have the status of mental existence.” He disagreed, contending that the mind is not a “ghost in the machine”, something independent from the brain that happens to interact with it. To explain why, he introduced the term “category mistake”, a situation in which one inadvertently assumes something to be a member of one category when it is actually of a different sort of category. His examples focused on parts not being the same sort of thing as wholes, e.g. someone expecting to see a forest but being shown some trees might ask, “But where is the forest?” In this sort of example, the mistake arises from a failure to understand that forest has a different scope than tree.4 He then contended that the way we isolate our mental existence from our physical existence was just a much larger category mistake, which happens because we speak and think of the physical and the mental with two non-intersecting vocabularies and conceptual frameworks, yet we assume it makes sense to compare them with each other. As he put it, “The belief that there is a polar opposition between Mind and Matter is the belief that they are terms of the same logical type.” Ryle advocated the eliminativist stance: if we understood neurochemistry well enough, we could describe the mechanical processes by which the mind operates instead of saying things like think and feel.

Ironically, Ryle made a bigger mistake with categories than Descartes did. His mistake was in thinking that the whole problem arose from a category mistake, when actually only a superficial aspect of it did. Yes, it is true that the mechanics of what happens mentally can be explained in physical terms, because the brain is a physical mechanism like a clock. But that is not the whole problem, and it is not the part that interested Descartes or that interests us: the function of what happens mentally cannot be explained in physical terms, because while the brain runs the mind (akin to software), it doesn’t know its purpose. The mistake is in seeing the superficial category mistake but missing the legitimate categorization. Function is not form and can never be reduced to it, even though it can only happen in a physical system by leveraging form. When we talk about the mind, we now know and appreciate that it is the product of processes running in the brain, but that doesn’t mean those processes are the subject matter of our mental vocabulary. Those processes are a minor aspect; what we are really talking about is what we can do, i.e. how we can use information to change the future. Ryle concluded, “It is perfectly proper to say, in one logical tone of voice, that there exist minds and to say, in another logical tone of voice, that there exist bodies. But these expressions do not indicate two different types of existence, for ‘existence’ is not a generic word like ‘colored’ or ‘sexed’.” But he was wrong; the way the mind manifests physically is not a different type of existence, but the way it manifests functionally is, and that is what really matters here. This is the kind of dualism Descartes was grasping for, but he overstepped his knowledge by attempting to provide the physical explanation. The true explanation is that functional things can have physical implementations, and they must for function to impact the physical world, but function and information are fundamentally not physical and their existence is not dependent on space or time; they are pure expressions of hypothetical relationships and possibilities.

The path of scientific progress has influenced our perspective. The scientific method, which used observation, measurement, and experiment to validate hypotheses about the natural world, split the empirical sciences from the formal sciences like mathematics, which deal in immaterial abstractions. The empirical sciences then divided into natural sciences and social sciences because progress in the latter was only possible by making some irreducible assumptions about human nature, chiefly that we have minds and know how to use them. These assumptions implicitly acknowledge the existence of function in the life of the mind without having to spell it out as such. Darwin’s discovery of evolution then split the natural sciences into physical and biological sciences. Until that point, scientists considered living organisms to be complex machines operating under physical laws, but now they could only be explained through the general-purpose advantages of inherited traits. This shift from specific to general is the foundation of information and is what distinguishes it from the physical. So both the biological and social sciences tacitly build on a foundation of the teleological existence of function, but they are reluctant to admit it because the exhaustive reach of material existence has come to be synonymous with the triumph of science over mysticism. But physical science alone can’t give us a complete picture of nature, because function, which begins in physical processes, can acquire persistence and hence existence in the physical world through information management systems.

The social sciences presume the existence of states of mind which we understand subjectively but which objectively arise from neural activity. The idea that mental states are not entirely reducible to brain activity is called emergentism. An emergent behavior is one for which the whole is somehow more than the parts. Emergence is real, but what is actually “emerging” in information management systems is functionality. Understanding the underlying physical system can’t explain behavior because the link between them is indirect, which as noted above detaches the physical from the functional. Also, unlike digital computers, which are perfectly predictable given starting conditions, minds have chaotic and complex factors that impede prediction. We can conclude that emergence is a valid philosophical position that describes the creation of information, though it is a misleading word because it suggests the underlying physical system causes the functional system. Cause in a feedback-based system is more of a two-way street in which many interactions between before-events and after-events yield functional relationships which the underlying physical system leverages. The physical system is somewhat irrelevant to the capabilities of a functional system, which is in many ways independent of it. Ironically, the functional system can thus equally claim the physical system emerges from it, which is the claim of idealism. All of language, this discussion included, and everything else the mind does are functional constructs realized with the assistance of physical mechanisms, but they do not “emerge” from those mechanisms so much as from information and information management processes. A job does not emerge from a tool, but through feedback a tool can come to be designed to perform the job better.

Before life came along, the world was entirely physical; particles hit particles following natural physical and chemical rules. While we don’t know how the feedback loops necessary for life were first bootstrapped, we know conditions must have existed that allowed feedback and competition to take hold. I will discuss a scenario for this later, but the upshot is that DNA became an information store for all the chemicals of life, and it became embedded in single-celled and later multicellular organisms that could replicate themselves. According to form and function dualism, the information in DNA is a nonphysical aspect that relates to the capabilities it confers on the organism. We characterize those capabilities in biology as genetic traits that confer adaptive advantages or disadvantages relative to alternative versions (alleles) of the same traits. Chemically, a gene either codes for a protein or regulates other genes. It doesn’t matter to the feedback loops of natural selection what a given gene does chemically, just whether the organism survives. Survival or death is the way the function of a gene is measured. Chemistry is necessary for function, but survival is indifferent to chemistry. In many cases, the chemical basis of a genetic advantage seems clear, while in others the “purpose” of the gene can be hard to identify. Also, knowing the chemistry will not reveal all the kinds of circumstances in which it might help or hurt. Any model of causes and kinds of circumstances we develop will be a gross simplification of what really happens, even though it might work well most of the time. In other words, the traits we describe are only generalizations about the purpose of the gene. Its real purpose is an amalgamated sum of every selection event back to the dawn of life. The functionality is real, but with a very deep complexity that can’t be summarized without loss of information. The functional information wrapped up in genes is a gestalt that cannot be decomposed into parts, though an approximation of that function through generalized traits works well enough in most cases. Although evolution is not directed, competition preferentially selects effective mechanisms, which is such a strong pressure that it tends to make genes very good at what they do. Mutations create opportunities for new functionality, but can also disable genes and the traits they confer when niches change. To recap, genes collect information using DNA from feedback that predicts that a competitive advantage for a forebear will yield a competitive advantage for a descendant. It is a slow way to collect information, but evolution has had plenty of time, and it has proven effective.
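
As a toy illustration of that feedback loop, here is a minimal selection simulation in Python. The parameters (population size, survival rates, generation count) are invented; the point is only that repeated selection feedback accumulates information, visible here as a rising allele frequency:

```python
import random

# Toy natural selection: allele 1 confers a small survival advantage, so
# feedback from survival slowly raises its frequency. The rising frequency
# is information accumulated by selection, not by design.

def generation(pop, advantage=0.05):
    """Apply one round of survival and repopulation."""
    survivors = [g for g in pop
                 if random.random() < 0.5 + (advantage if g else 0.0)]
    # Repopulate to a fixed size by sampling survivors with replacement.
    return [random.choice(survivors) for _ in range(len(pop))]

pop = [0] * 950 + [1] * 50          # the advantageous allele starts at 5%
for _ in range(200):
    pop = generation(pop)
print(f"allele frequency after 200 generations: {sum(pop) / len(pop):.0%}")
```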

Beyond the information managed by the genes that form the blueprint of the body, organisms need to manage some information in real time, and instinct is a tool they all possess to do this. The mechanisms that regulate the flow of metabolites in plants via xylem and phloem are not called instincts because this feedback is applied directly without information processing (which only happens in animals via brains). Instinct covers all behavior based on information processing that doesn’t leverage experience or reasoning. Without experience or reasoning, an instinct will work the same “hardwired” way from birth to death. Instincts arise subconsciously but sometimes present themselves consciously as urges, covering all our hardwired inclinations for things like eating, mating, emotions, and drives. Instinct also includes sensory processing, such as smell, touch, body sense, and most notably vision, which creates high-fidelity 2D images and transforms them into representations of 3D objects, which are then recognized either as specific objects or as types of objects.

Instincts take ages to evolve and solve only the most frequently encountered problems, but creatures face challenges every day where novel solutions would help. A real-time system that could tailor the solution to the problem at hand would provide a tremendous competitive advantage. Two approaches evolved to do this, and each makes use of experience and reasoning. I call the first conceptual thinking, i.e. thinking with concepts, which most notably includes logical reasoning. Logical reasoning starts with premises, which are statements (predicates about subjects) taken to be true, and draws consequences from them. Subjects, predicates, and premises are concepts viewed from a logical perspective. The second approach is subconceptual thinking, which is a kitchen sink of data analysis capabilities. Unlike instincts, whose reactions are fixed, subconceptual thinking does not produce fixed responses. Subconceptual thinking includes common sense, pattern recognition, and intuition, but also much of our facility for math, language, music, and other talents that are largely innate but, unlike instincts, not fixed. Much of what we learn from experience is subconceptual in that it is not dependent on conceptualizing or logical reasoning. Conditioning, for example, with or without reinforcement, is subconceptual. Much, or even most, of the data our brains gather about the world is subconceptual and is there to help us despite the lack of a conceptual understanding. When conceptual and subconceptual thinking are done consciously, we call it reasoning. Reasoning is the conscious capacity to “make sense” of things, which means to produce useful information. What we are conscious of is organizing, weighing, and otherwise assessing all the factors relevant to a situation. It doesn’t need to involve concepts or logic, and mostly it doesn’t. We lack conscious awareness of many aspects of both conceptual and subconceptual thinking, so these are said to be subconscious. For example, we recognize and recall things without knowing how, we can tell when sentences are properly formed, and we have hunches about the best way to do things that just come to us by intuition.

Subconceptual thinking uses subconceptual data (“subconcepts”) while conceptual thinking uses concepts. Subconcepts are drawn from the senses and subconceptual processes to create a large pool akin to big data in computers. Subconcepts and big data are both collected without knowing the data’s purpose; they are the sorts of data that have been helpful in the past, so they are likely to be useful again. Over time we develop algorithms that mine subconcepts or big data to find useful patterns that lead to helpful actions, still without having a clear idea about what the data “means.” We don’t have to understand common sense, intuition, or music to be talented at them. Concepts, on the other hand, are akin to structured data in computers. A concept is an idealization of a pattern found in subconcepts into a generalized element with specific associated properties. While the patterns are primarily subconceptual, a network of relationships to other concepts also forms. A concept is a pure abstraction, i.e. without a correlate in the physical world, that is defined by its subconceptual properties and its relationships to other concepts. The patterns are frequently chosen so that the concept can be reliably correlated to a generalized class of entities in the physical world, but this connection is indirect and does not make the concept itself physical. Some kinds of reasoning can be done subconceptually by pattern analysis, specifically recognition, intuition, induction (weight of evidence), and abduction (finding the simplest explanation). But deduction (aka entailment or cause and effect) cannot be done subconceptually, because by construction entailment requires discrete premises, not diffuse, unstructured data. The other kinds of reasoning can also leverage concepts, but deduction specifically requires them. Also, so far as we know, deduction can’t be done subconsciously. Logical reasoning principally means deduction, though it arguably also includes logical treatments of induction and abduction, but I will use the term logical reasoning to refer specifically to our conscious conceptual thinking capacity. Note that while all reasoning, being the top level or final approval of our decision-making process, is strictly conscious, many other kinds of conceptual and subconceptual thinking happen subconsciously.
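
Here is a minimal sketch of that contrast, with invented data. The first half mines diffuse observations for a pattern without any premises (induction); the second half can only run at all once discrete premises exist (deduction):

```python
# Subconceptual style: diffuse observations mined for a pattern. There are
# no premises, just a tendency extracted from unstructured experience.
observations = [("red", "round", "sweet"), ("red", "round", "sweet"),
                ("green", "round", "tart"), ("red", "round", "tart"),
                ("red", "round", "sweet")]
red = [o for o in observations if "red" in o]
sweet_given_red = sum("sweet" in o for o in red) / len(red)
print(f"induction: red things were sweet {sweet_given_red:.0%} of the time")

# Conceptual style: deduction needs discrete premises before it can run.
# Each premise is an explicit (subject, category) pair.
premises = {("apple", "fruit"), ("fruit", "food")}

def deduce(facts):
    """If A is a B and B is a C, conclude A is a C (repeat to closure)."""
    facts = set(facts)
    while True:
        new = {(a, d) for (a, b) in facts for (c, d) in facts if b == c}
        if new <= facts:
            return facts
        facts |= new

print(deduce(premises))  # adds ('apple', 'food') to the premises
```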

Relationships that bind subconcepts and concepts together form mental models, which constitute our sense for how things work or behave. Mental models have strong subconscious support that lets them appear in our heads with little conscious effort. The subconceptual aspects of these models give them a very “real,” sensory feel to us, while the conceptual aspects that overlay them connect things at a higher level of meaning. Although subconceptual thinking supports much of what we need to do with these models (akin to an autopilot), conceptual thinking organizes things better for higher purposes, and logical reasoning can be much more effective for solving problems than our more limited conceptual and subconceptual thinking processes. While conceptual and subconceptual data analysis is quite powerful, it can’t readily solve novel problems. Logical reasoning, however, gives us an open-ended capacity to chain causes and effects in real time. As we mature we build a vast catalog of mental models to help us navigate the world. We remember the specific times they were applied, but mostly the general sense of how to use them.

The physical world lives in our minds via mental models. Our minds hold an overall model of the current state of the physical world that I call the mind’s real world. Whatever the physical world might actually be, we only know it consciously through the mind’s real world. The mind’s real world leverages the countless mental models that have helped us understand everything we have ever seen. These models don’t have to be right or mutually exclusive; whatever models help provide us with our most accurate view of physical reality comprise our conception of it. The mind’s real world “feels” real to us, although it is purely a mental construct, because the mind is inclined to interpret its sensory connections to the physical world that way instinctively, subconceptually, and conceptually. But we don’t just live in the here and now. Because the mind’s primary task (and the whole role of information and function) is to predict the future, mental models flexibly apply to a range of circumstances. We call the alternative ways things could have been or might yet be possible worlds. In principle, the mind’s real world is a single possible world, but in practice our knowledge of the physical world is imperfect, so our model of it in the past, present, and future is always a constellation of possible worlds.

In summary, all behavior results from instinct, subconceptual thinking, and conceptual thinking. Our mental models combine these approaches to leverage the strengths of each. Genetic data is a first-order bearer of information that is collected and refined on an evolutionary timescale. Instincts (senses, drives, and emotions) are second-order bearers of information that process patterns in real time whose utility has been predetermined by evolution. Subconcepts are third-order bearers of information in which the exact utility of the patterns has not been predetermined by evolution, but which do tend to turn out to be valuable in general ways. Finally, concepts are fourth-order bearers of information that are fundamentally symbolic; a concept is a pure abstraction that represents a block of related information distilled from patterns in the feedback. Some subconscious thought processes (e.g. vision and language processing) manipulate concepts in customized ways without applying general-purpose logical reasoning, which can only be done consciously. Logic finds reasons, i.e. rules, that work reliably or even perfectly in mental models. The utility of logical reasoning ultimately depends on correlating models back to the real world, and for this we depend on mostly subconscious but conceptual reverse recognition mechanisms that fit our models back to reality. Recognition and reverse recognition are complex problems requiring massive parallel computation, for which present-day computers have only recently developed some facility, but for us they just happen with no conscious effort. This not only lets us think about more important things, it makes our simplified, almost cartoon-like representation of the world through concepts feasible.

Our four real-time thinking talents (instinct, subconceptual thinking, conceptual thinking, and logical reasoning, the last being a kind of conceptual thinking) are distinct but can be very hard to separate cleanly. We know instinct influences much of our behavior, but we are quite unsure where instinct leaves off and tailored information management begins because they integrate very well. And even complex behavior, most notably mating, can be driven by instincts, so we can’t be too sure instinct isn’t behind any given action. While subconceptual and conceptual thinking can be readily separated based on the presence of concepts, it can be difficult or impossible to say at exactly what point a concept has coalesced from subconcepts. In theory, though, I believe there must be a logical and physical point at which a concept comes to exist, the moment that a set of information is referenced as a collective. This suggests that conceptual processes differ from subconceptual ones because they involve objectification of data by reference. Logical reasoning refines conceptual thinking by introducing the logical form, which abstracts logical operations from their content, making it possible to devise internally consistent logical models within which everything is necessarily true. Reasoning, and especially logical reasoning, can only be done consciously. Reasoning is considered a conscious activity even though some parts of it, e.g. intuition, happen subconsciously, because we consciously decide whether to act. Or do we? Habitual or snap decisions are sometimes made on a “preapproved” basis where we act entirely on subconscious reasoning which we then only observe consciously. We do always have the conscious prerogative to override “automated” behavior, though it may take us some time to decide whether to do so. The truth is, at one level of granularity or another, all our activity is driven subconsciously by muscle memory or procedural memory, which needs some conscious approval to proceed, but that approval can be implied by circumstances, effectively taking it out of the hands of consciousness. I posit that logical reasoning of any complexity only happens consciously as well, because only consciousness is equipped to pursue a chain of reasoning. We can still reason logically when we dream and daydream, as this chaining capacity is not otherwise occupied then, with greater free association that can lead to more creativity, though with less rigor.

Dam building is a complex behavior in beavers that seems like it needed to be reasoned out and taught from generation to generation, and yet “young beavers, who had never seen or built a dam before, built a similar dam to the adult beavers on their first try.”5 So it is entirely instinctive. We know language acquisition is similarly innate in humans because humans with no language will create one6. But we know that all the artifacts of civilization (except perhaps the hand axe, which may have been innate), including all its formal institutions, are primarily the products of thinking, both subconceptual and conceptual, and of the experience that thinking created. Our experience of our own minds is both natural (i.e. instinctive) and artificial (i.e. extended by thinking and experience), but these aspects are so intertwined in our perspective that they can be difficult or impossible to distinguish in many cases. For example, we know our sense of color is innate, and yet our reactions to colors are heavily influenced by culture. Or, we sometimes have the impression we have made a logically reasoned argument when all we have done is rationalize an instinctive impulse or an intuitive hunch. But although the three capacities always work together and overlap in their coverage, I believe they arise from fundamentally different kinds of cognitive processes that can be examined separately.

While nearly all animals learn from experience, demonstrating subconceptual thought, not all can think conceptually. Birds and mammals (let’s call them advanced animals for short) demonstrate problem-solving behavior, including novel planning and tool use, that indicates the conceptual use of cause and effect, which goes beyond what instinct and subconceptual thinking could achieve. Other animals do not, and I suspect all others lack even a rudimentary conceptual capacity. We only know we are conscious and that our logical reasoning is conscious from introspection, so we can’t prove it in advanced animals, but observations and shared evolution make it very likely for mammals and pretty likely for birds as well. Still, we know humans are “smarter,” but what is it that distinguishes us? It is our greater capacity for abstract logical reasoning. Abstraction is the ability to decouple information from physical referents, to think in terms of concepts and mental models in logical terms independent of physical reality. We consequently don’t need to constrain our thoughts to the here and now; we can dream in any direction. This greater facility and impetus to abstraction has coevolved with a better ability to think spatially, temporally, logically, and especially linguistically than other animals. Loosening this tether back to reality began with small changes in our minds, but these changes opened a floodgate of increased abstraction because it provided greater adaptive power. Though we must ultimately connect generalities back to specifics, most words are generic rather than specific, meaning that language is based more on possible worlds than on the mind’s real world specifically. I call our ability to control our thoughts in any direction we choose directed abstract thinking, and I maintain animals can’t do it. Advanced animals can logically reason, focus, imitate, wonder, remember, and dream, but their behavior suggests they can’t pursue abstract chains of thought very far or at will. Perhaps the ecological niches into which they evolved did not present them with enough situations where directed abstract thinking would benefit them to justify the additional costs such abilities bring. But why is it useful to be able to decouple information from the world to such a degree? The greater facility a mind has for abstraction, the more creatively it can develop causal chains that can outperform instinct and subconceptual thinking.

Abstraction opens the door to an unlimited range of possibilities, but evolution has kept this facility practical. The range of what is functionally possible is the domain of philosophy, and the range of what is functionally actual (and hence practical) is the domain of psychology, the study of mental life. My explanation of the mind starts by enlarging our philosophical framework to include functional existence and moves from there to explain our psychology. Psychology spans neuropsychology, behavioral psychology, evolutionary psychology, cognitive psychology, psychoanalysis, humanistic psychology, introspection, and a portion of cognitive science, and so brings a lot of perspectives to bear on the problem. They each constrain the possible to the actual in a different way depending on their functional objectives.

Before I look at what psychology tells us, I’d like to develop a philosophy of information management. I described above how information is managed in the mind through instinct and thinking. Instinct imposes ingrained behaviors while thinking customizes behaviors in novel circumstances. With ingrained behaviors, the ends (essentially) justify the means, which makes the means a gestalt indivisible into explanatory parts. Explanation is irrelevant to the feedback loops that create instinct, which produce supporting feedback based on overall benefit to survival. Subconceptual thinking is also a gestalt approach that applies innate algorithms to subconcepts (big data) and uses feedback to collect useful patterns. Conceptual thinking (logical reasoning) creates the criteria it uses for feedback. A criterion is a functional entity, a “standard, ideal, rule or test by which something may be judged.” What this implies is that reasoning depends both on representation (which brings that “something” into functional existence) and entailment (so rules can be applied). Philosophically, reasoning can never work in a gestalt way; it requires that the pool of data be broken down into generalized elements called concepts that interact according to logical rules. Logical reasoning operates in self-contained logical models, which lets it be perfectly objective (repeatable), whereas subconceptual thinking is a subjective gestalt and hence may not be repeatable. Objective, repeatable models can build on each other endlessly, creating ever more powerful explanatory frameworks, while subjective models can’t. There may be other ways to manage information in real time beyond instinct and thinking, but I believe these are sufficient to explain minds.

To summarize, functional existence arises in some complex physical systems through feedback loops to create information, which is a pattern that has predictive power over the system. The feedback loops of instinct use natural selection over millennia to create gestalt mechanisms that “work because they work” and not because we can explain how they work. The feedback loops of thinking use neural processing over seconds to minutes. Subconceptual thinking works because life is repetitive, so we have developed general-purpose skills to find certain kinds of patterns. Conceptual thinking adds more power because self-contained logical models are internally true by design and can build on each other to explain the world better.

I’ve made a case for the existence of functional things, which can either be holistic, in the case of genetic traits and subconceptual thinking, or differentiated, in the case of the elements of reason. But let’s consider physical things, whose existence we take for granted. Do physical entities also have a valid claim to existence? It may be that we can only be sure our own minds exist, but our minds cling pretty strongly to the idea of a physical world. Sensory feedback, together with accurate scientific measurement and experimentation, establishes almost beyond doubt that that world exists independent of our imagination. So we have adequate reason to grant the status of existence to physical things, but we have to keep in mind that our knowledge of the physical world is indirect and our understanding of it is mediated through thoughts and concepts. Ironically, considering it has a secondary claim to existence, physical science has made much more definitive, precise, and arguably useful claims about the world than biology and the social sciences. And even worse for the cause of the empirical functional sciences is that the existence of function has (inadvertently) been discredited. Once an idea, like phlogiston or a flat earth, has been cast out of the pantheon of scientific respectability, it is very hard to bring it back. So it is that dualism, the idea of a separate existence of mind and matter, has acquired the taint of the mystical or even the supernatural. But dualism is correct, scientific, and not at all mystical when one formulates function as a distinct kind of existence, one that becomes possible in a physical universe given enough feedback.

The laws of physical science provide reliable explanations for physical phenomena. But even though living systems obey physical laws, those laws can’t explain functionality. Brains employ very complex networks of neural connections. From a purely physical standpoint, we could never guess what they were up to no matter how perfectly we understood the neurochemistry. And yet, the mind arises as a consequence of brain activity, which is to say it is a process in the brain. The success of physical science coupled with the physical nature of the brain has led many to jump to the conclusion that the mind is physical, but the mind is just the functional aspect of the brain and not physical itself per se. Pursuing an eliminativist stance, the neurophilosopher Paul Churchland says the activities of the mind are just the “dynamical features of a massively recurrent neural network”7. From a physical perspective, this is entirely true, provided one takes the phrase “massively recurrent neural network” as a simplified representation of the brain’s overall architecture. The problem lies in the word “features,” which is an inherently non-physical concept. Features are ideas, packets, or groupings of abstract relationships about other ideas, which, as I have been saying, are the very essence of non-physical, mental existence. These features are not part of the mechanism of the neural network; they are signals or information that travel through it. This “traveling” is a consequence of complex feedback loops in the brain that capture patterns as information to guide future behavior. Any one feature can be thought about in different ways at different times by different people but will still fundamentally refer to the same feature, which means the functions it makes possible.
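
A toy network can show why the features are not visible in the mechanism. In this sketch the weights were chosen by hand for illustration; the point is that nothing about the raw numbers announces that the function being computed is exclusive-or. The function shows up only at the behavioral level:

```python
# A tiny fixed-weight neural network. The numbers below are the entire
# "neurochemistry"; the XOR function they implement is nowhere stated in
# them and must be discovered by observing behavior.

def step(x):
    """Threshold activation: fire (1) if the input exceeds zero."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)      # hidden unit: fires if a OR b
    h2 = step(a + b - 1.5)      # hidden unit: fires if a AND b
    return step(h1 - h2 - 0.5)  # output: fires if OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {xor_net(a, b)}")
```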

Some have posited additional categories of being beyond form and function, but I think any additional categories can be reduced to these two. Aristotle’s Organon enumerated all ten possible kinds of things that can be the subject or the predicate of a proposition, the first of which is substance. Substance is equivalent to what I call form. Aristotle essentially defined it as that which is not function, by saying substance cannot be “predicated of anything” or be said to “be in anything”, which are relational or functional aspects. The other nine categories are functional aspects and as such are inherently indirect and not the substance themselves, namely quantity, quality, relationship, where, when, posture, condition, action, and recipient of action. I would say that the location of a physical object in time and space is part of its physical existence, but how we describe it relative to other things is not. As Immanuel Kant would have put it, a physical thing is a noumenon, or thing-in-itself, while our description of it is a phenomenon, or thing-as-sensed. We have no direct knowledge of the noumena of the physical world, but we talk about them as phenomena all the time. Noumena are strictly physical and phenomena strictly functional. I see no reason for any additional categories. Quantity and quality may accurately describe traits of a noumenon, but they are still descriptions and so are functional; the noumenon itself just exists without regard to how it might be characterized for some purpose extrinsic to itself. Ironically, this means that science is entirely a functional pursuit, even though its greatest successes so far are about the physical world. “About” is the key word; science studies phenomena, not noumena. We are curious about the noumena, but we can never know them as we only see their phenomena. This is not due to limitations of measurement but to limitations of understanding. Any noumenon can be understood through an infinite variety of phenomena and ways of interpreting those phenomena that model it but are never the same as it. They consequently describe some features, perhaps very accurately, but miss others, and in any event, what we think of as a feature is a generalization that makes sense to us functionally but means nothing physically.

Because information management systems physically exist, one can easily overlook that something more is happening. Let’s take a closer look at the features of functional existence. It is not that the functional exists as a separate substance in the brain as Descartes proposed, or that it is even anywhere in the brain, because only physical things have a location. Instead, any thought I might have simultaneously exists in two ways, physically and functionally. The physical form of a thought is the set of neurons thinking the thought, including the neurochemical processes they employ. While we can pinpoint certain neurons or brain areas that are more active when thinking certain thoughts, we also know that the whole brain (and all the control systems of the body) participates in forming the thought because everything is connected. Everything is connected because instinct and subconceptual thinking are gestalts that draw on all our knowledge (including concepts), and logical reasoning uses closed models based on concepts, which are in turn built on instincts and subconcepts. The functional form of a thought is the role or purpose it serves. When we reflect on this purpose logically, we form a concept of it that can be the subject or the predicate of a proposition with features that relate it to all other concepts. Functionality in minds has a practical purpose (even function in mathematics must be practical for some context, but here I refer to evolutionary practicality). A thought’s purpose is to assist the brain in its overall function, which is to control the body in ways that stand to further the survival and reproduction of the gene line. “Control” broadly refers to predictive strategies that can be used to guide actions, hopefully achieving desired outcomes more often than chance alone would. And even seemingly purposeless thoughts participate in maintaining the overall performance of the mind and can be thought of as practice thoughts if nothing else. For a predictive strategy to work better than chance, it needs to gather information, i.e. to correlate past situations to the current situation based on similarities. The relationship between information and the circumstances in which it can be employed is inherently indirect because information is indirect; it is necessarily about something else. So we can conclude that function acts in our physical world to control the actions of living things, or, by extension, in information management systems we create, to control whatever we like.

The Science of Function

Discussing function as an entity can lead to some confusion. As with physical things, we can name functional things, such as 3D vision or justice. The names are not the things themselves, but if we understand the reference then we know the function being discussed. As with physical things, sometimes instead of explicitly naming a function we will describe it, and the description then acts as a reference to it, still without being the thing itself. What gets confusing is that a name or a description is itself functional. Naming something is a way of declaring it a concept, and describing it is a way of attaching traits to the concept, whether named or not. Concepts and traits are intrinsically functional, where the function of concepts is to help us group similar entities under one name in a general way. So while the name or description of a function, like 3D vision or justice, is not, as I said, the function itself, it serves its own function, namely to name or describe another function. So Boyle’s Law and descriptions of it are not the function itself, but they let us discuss it.

I mention this because our descriptions of things are always sketchy. First, they always presuppose a great deal of context which is presumably understood and agreeable. Then, they focus only on the most salient aspects in the hope that the rest will be implied. For physical things, one can presumably see the object and make other physical observations that provide deeper understanding far beyond the verbal description. Most functional things, such as 3D vision and justice, also have instinctive and subconceptual functional support. So while we can’t see them in the physical world, we can experience them as functions in our mental world to understand them beyond our verbal descriptions. Physical objects can be objectively described through measurement by instruments (which at least push subjective factors back one level), but it is harder to measure functional things objectively. Still, techniques are possible, such as the observation of the effects of function, e.g. of behavior.

Herein I am only describing the mind, that is, its overall and composite functions. I will, therefore, be sketchy, but I am hoping that a great deal of context is understood and agreeable, which should bring what the mind is into much better focus than before. Especially considering that viewing the mind as a functional entity is a somewhat new perspective, I will start from the top down with the most salient aspects and fill in more detail as I go. My explanations will appeal to and depend on our experience of mental function, which is to say how we subjectively experience our own minds. This approach is introspective, which poses a challenge to objectivity. I will address that challenge in more detail later, but in short I will look to introspection to stimulate hypotheses, not to test them. The resulting descriptions of mind I develop will constitute a theory to be tested. Like all theories, it is not intended to have the same function as the mind or to be complete, only to be internally consistent and supported by the evidence. I intend to show that it is consistent with prevailing scientific perspectives once those perspectives are interpreted in the framework of form and function dualism.

All scientific theories are descriptions, and hence sketchy representations of reality. Boyle’s Law describes the relationship between volume and pressure of a gas at constant temperature. It is “sketchy” even though it seems to work perfectly because volume and pressure are approximate measures of reality based on instruments and not fundamental properties of spacetime. But we can measure volume and pressure almost perfectly most of the time, and in these cases the law has always worked to our knowledge and so we can feel pretty comfortable that it always will. Though unprovable and sketchy, we can label it “true” and depend on it without fear. Of course, to use it we have to match our logical model, which treats gases as collections of independently moving particles with volume and pressure, to a real-world circumstance involving gas, which depends on observations and measurements to provide a high probability of a good fit of theory to practice. This modeling, matching, observing, and measuring involves some subjective elements which have some uncertainty and vagueness themselves, so all objectivity has limits and caveats. But we can accurately estimate these uncertainties and establish a probability of success that is very close to 100%.
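
To ground this in a worked example, Boyle’s Law says that P1V1 = P2V2 at constant temperature. Here is a minimal sketch in Python applying it; the numbers are invented for illustration.

```python
# Boyle's Law at constant temperature: P1 * V1 = P2 * V2.
# Predict the new pressure of a gas after compressing it.

p1 = 100.0  # initial pressure in kPa (illustrative value)
v1 = 2.0    # initial volume in liters
v2 = 0.5    # volume after compression, in liters

p2 = p1 * v1 / v2  # the law rearranged to solve for the new pressure
print(p2)  # 400.0: quartering the volume quadruples the pressure
```

The law itself is the functional content; the instruments, the units, and the judgment that a real container of gas fits the model well enough are where the subjective elements enter.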

In principle, theories of function can also be almost certainly true. We have to start with what we are most certain about and be careful about introducing less supportable hypotheses. The existence of function itself is the first thing about which we need certainty. We have more direct access to function than to form (“I think therefore I am”), but our scientific explanations of form have so much coherence, i.e. corroborating evidence, logical consistency and explanatory power, that they set a very high bar. We probably can’t achieve a level of certainty or objectivity about function that is as high as we can for form, but we can still do pretty well because we have a number of objective and scientific sources of information about function. It is not all opinion.

So in what sciences is the subject matter functional? The physical sciences strictly study form because nonliving systems don’t have function. The formal sciences strictly study function, because they are not physical and function is all that is left. Though they don’t study physical form, they are named after formal systems, where “form” means well-defined. The formal sciences are indirectly functional; they specify and follow rules, and following rules is a good way to achieve functions, though not the only way. The biological and social sciences study the functions of living systems. So three of the four branches of science study function. The physical and formal sciences have achieved significantly more definitive results than the biological and social sciences. Physical laws can be tested in many ways with great precision. It doesn’t mean physics is solved; general relativity improved on Newton’s law of universal gravitation, and MOG (MOdified Gravity) may improve on general relativity. But we can predict physical phenomena with great accuracy and reliability, which has led to very empowering technologies. The formal sciences have been instrumental to that success, as physical sciences and technology depend heavily on mathematics and computer science. Significantly, our objectivity in the formal sciences is in many ways perfect. Knowledge within a well-defined formal system can be known for certain and provably so. While some define objectivity as being based only on facts and evidence, which is an empirical standard, for functional matters the broader definition of being without bias, judgment, or prejudice is more appropriate. Any formal system in and of itself is necessarily objective by this standard, but the motivations for creating the formal system that way in the first place are necessarily subject to some degree of subjectivity, as are the methods used to apply a formal system to a given problem. So we can conclude that the study of formal function has met with great success.

Living systems, however, quickly introduce complexities that make the levels of certainty seen in the formal and physical sciences seem very distant indeed. Function in living systems derives from iterative mechanisms that refine knowledge and strategies over time. By construction, then, it is dynamic and adaptable rather than fixed and repeatable. However, many creatures in Earth’s history remained nearly unchanged for millions of years, suggesting that their function was accurately and reliably tuned to the needs of their niche. Human tools, too, sometimes remain unchanged for generations, like the dagger and paper clip. So it is not that elements of living functional systems can’t be fixed and repeatable, it is just that the playing field is much larger. The universe has just one physical system, one set of rules, but every lifeform is designed differently, to its own set of rules, on a playing field where the design specifications often change. Much more than that, most physical systems work with little or no feedback, but most functional systems adapt their reactions continuously from feedback. So nothing about the design, and not much about the behavior, of functional systems can be explained or predicted by equations. We consequently call the physical sciences hard science and call biology and the social sciences soft science, which makes the latter sound inherently wishy-washy. This is unfair; they are trying to explain much more complex phenomena. But they do need a firmer foundation: they need to acknowledge that they are built on function, not form. They also need to develop stronger ideas about what function means in living systems. We can never achieve the same precision with the life sciences that we can with formal and physical sciences, but precision is not really the goal. Function is all about capacity, not precision. If we understand what is functionally possible and why, we will be able to say what we can do and should do.

This raises the question, what is the best scientific method for studying the functional systems of living things, or, more broadly, all non-formal functional sciences? It is definitely not the same method that has been refined for the physical sciences. The physical sciences produce fixed laws of action while functional sciences produce models of capacity. Physical laws can be tested with precision, but models of capacity are less tractable. Yes, we know exactly what capacities math and computer science bring to the table, but living systems were designed not to solve idealized problems but to survive in a complex, interconnected environment. Unlike physical laws, which remain unchanged across billions of years, the functions of living things are interpreted within biological niches that are potentially changing all the time. Positive and negative feedback in living systems have spurred the evolution of systems almost infinitely more complex than nonfunctional physical systems. Rather than hard and soft, the division in the sciences should be characterized as fixed and variable. The fixed sciences are necessarily more tractable and knowable than the variable sciences, but the variable sciences can yield almost infinitely more designs capable of almost infinitely more functions.

Does this mean we need to revamp the scientific methods used in the soft sciences? Not exactly. The methods have evolved to right about where they should be despite the lack of a firm philosophical footing. But I will shore them up, and doing so will highlight some shortcomings. Lacking a clear vision, the soft sciences have more or less adopted the same scientific method used by the hard sciences. (Note that I am not using the line between social and natural sciences because biology is a functional science and hence soft, despite the information captured by genes being more fixed than the information of minds and societies). The basic elements of the method are: observe, hypothesize, predict and test. Peer review and anti-biasing techniques (like preregistering research) have been added to control human factors. The steps are iterated as often as needed to refine the match of model (hypothesis) to reality (as observed). Feedback loops like this are the source of all function and information, but the iterations of the scientific method put a premium on truth and not just utility. Scientific truth is the quest for a single, formal model that accurately describes certain aspects of reality, whereas useful information doesn’t have to be modeled (via experience or intuition), and when it is modeled (via logical reasoning) it can include a variety of competing models. Most of our “common sense” knowledge consists of likelihoods of this sort rather than very specific models that attempt to make exacting predictions, often according to precise rules of cause and effect. Although we can’t prove that a scientific model is correct because our knowledge of the physical world is limited to sampling, all particles of a given type seem to behave identically, so we can effectively prove correctness in the physical sciences. The exact rules needed are still a bit too complex for us to nail down completely, but what we have devised so far works well enough that we can take it as true for all intents and purposes in the range of covered circumstances.
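
The iterative character of the method can be sketched in a few lines of code. This is only an illustration, assuming an invented data set and the simplest possible hypothesis family (y = k·x); the point is the shape of the loop: predict, compare to observation, refine, repeat.

```python
# The observe-hypothesize-predict-test loop as iterative refinement.
# We hypothesize that y = k * x and nudge k until predictions match
# observations. The data and the model family are invented examples.

observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, measured y)

k = 1.0  # initial hypothesis
for _ in range(1000):
    # Predict each y, compare it to what was observed, and adjust
    # the hypothesis in the direction that reduces the total error.
    error = sum((k * x - y) * x for x, y in observations)
    k -= 0.01 * error

print(round(k, 2))  # ~2.04: the refined hypothesis that best fits the data
```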

But how did science arrive at a consensus on general relativity and quantum mechanics as the theories to back? Their preeminence as the most correct theories available right now stems from science’s commitment to objectivity. Although our minds are inherently subjective, both by definition and simply because we must filter all our knowledge through our mental lens, science has given us a deep appreciation of the value of objectivity. Science, and more importantly the technology it makes possible, has completely transformed our lives, giving us power over our circumstances that lets us live as gods relative to our forebears. It owes this success to its ability to distinguish well-supported hypotheses from poorly-supported ones. This begins by instilling in students a belief in scientific truth, which is a bit ironic considering science is all theory and no truth. But the essentially perfect ability of the prevailing theories of physical science to predict the future seems about as reliable as mathematical truth, which is completely certain, so we take them as ever-so-slightly-qualified truths. Next, when practicing the scientific method, scientists know that they have to formulate any hypothesis in such a way that it is supported by all the relevant prevailing paradigms. If you want to flout a paradigm you had better be prepared to provide a better paradigm, which is a much higher hurdle to clear than just refining existing paradigms. It is for this reason that Thomas Kuhn felt that paradigm shift was the hardest but most critical step behind scientific revolutions. Well-formed and tested hypotheses are then scrutinized through peer review, which happens formally at publication time and also informally through the individual and collective opinion of the scientific community. While not perfect, these efforts to increase objectivity do usually result in the endorsement of scientific laws and principles that are more functional, i.e. carry greater predictive power by modeling more situations with greater accuracy than their alternatives. Our personal attempts to gather information in the world are nowhere near as exacting or detailed, though in practice they work pretty well considering that we deal with many situations of greater functional complexity than those addressed by science and we have to act with less study.

But the soft sciences are of more interest to us here. Do we even have prevailing paradigms, and if so, have they even been fleshed out enough for their boundaries to be well understood? Let’s look first at the sub-mental biological sciences, as anything about minds must build on them first. By now evolution is well-established as the shaper of all lifeforms, even if many details of its exact mechanisms are still uncertain. Most notably, the evidence and mathematical models support both punctuated equilibrium and gradualism as valid mechanisms, so we can’t say why evolution is sometimes very fast and other times very slow. But millions if not billions of observations now support evolution and no observations discredit it. Even so, the paradigms of biology don’t quite fully embrace evolution. The drawback of evolution is that it comes with too many caveats. Scientific models work best when they are simple and closed, making clear predictions in known circumstances. Evolution, on the other hand, is messy. Every individual is unique, every gene comes in many variants, and no gene can be said to have just one function that is crystal clear. It is just more practical to put evolution on the back burner, as it were, and teach biology with simpler models, focusing on the predominant purposes of cells, tissues, organs, and organisms. This approach, while not as foolproof as physical science, is amazingly effective for many purposes and enhances biology’s credibility. However, to some degree it sweeps biology’s dirty little secret, that it is a functional, aka soft, science, under the rug. But biologists should be proud of this and shore up their metaphysical foundation by declaring that all function in the universe starts with and depends on biology, the first science of function.

Now, as much as I hold biology near and dear to my heart, to the point where my first ambition was to be a geneticist, my interest here is with the second science of function, the mind. From there I will eventually move into the tertiary sciences of function, the social sciences, which are secondary ramifications of minds. (Note that I haven’t forgotten formal science, including math and computer science, which I call the zeroth science of function as it needs no empirical evidence. But our access to it is only through our minds, so it is derivative from my perspective.) The question is whether we have a prevailing paradigm of the mind. Interestingly, it was the absence of paradigms in the social sciences that led Kuhn to recognize their existence in the natural sciences and then write his seminal book. And to this day, the social sciences are not bound by any paradigms that could act to prevent the “controversies over fundamentals” Kuhn noticed1. Each subfield of each social science is instead built on a school of thought which has been developed taking certain assumptions for granted that support its theories. So long as their theories are not overtly incompatible with science at large, their foundation is taken to be adequate. Despite the absence of a firm foundation, social sciences can still claim to be scientific if they can reliably predict outcomes with better success than chance alone. Many theories can hit this bar given even a very vague or biased perspective because high-level patterns appear in all systems despite ignorance of their deeper structure. The social sciences detect patterns without having to explain all the forces that cause them. But although the social sciences are practiced without paradigms today, I believe we can gradually change that by developing cognitive science.

This brings me back to the question, “Do we have a prevailing paradigm of the mind?” We have nothing as concrete or broadly accepted as is found in the natural sciences, but there is much, taken primarily from our knowledge of biology, that is nearly settled. So then, whether it is a paradigm or not, there are at least some strongly prevailing views, which I will now present without further ado. The mind is a process of the brain that proceeds neurochemically, collecting information using senses and producing actions that control our bodies. We have a single train of thought or stream of consciousness running internally that sometimes appears to us subjectively like a movie running at live speed. We can rather easily swap between trains of thought we have set aside, giving us some facility for multitasking. Without conscious effort, memories and hunches come to us from stimulation of our senses or relevance to our thoughts. We have preferences and emotions that motivate us, i.e. influence our decisions. We know quite a bit about the specific range of our senses and physical capabilities. Humans also have some highly-developed mental skills, most notably language, which we know are largely genetic. These skills also include thinking, imagination, creativity, recognition, appreciation, sense of agency, theory of mind (the ability to see agency in others), and more. We acknowledge these skills and what we can do with them, but we can’t say how we came by them or how they work. We find ourselves in the curious circumstance of being able to use our minds but not knowing how they work. It sounds like a problem warranting immediate attention, and yet we have gotten by just fine without knowing. But there is undoubtedly value in knowing. As technology expands the scope of our physical activities, a better understanding of how and why we act is probably our best tool for preserving ourselves.

While the above views are widely accepted by scientists based on a huge amount of evidence across many fields, they lack the kind of firm foundation that one would expect of a paradigm. My sole ambition here is to elevate these prevailing views to the status of an actual paradigm for thinking about the mind. Nearly everything posited above about the mind came from introspection. We struggle to develop more objective explanations for consciousness, memory, thinking, language, etc., but without our subjective awareness of these things and our having observed them in others, we would have no idea that they even exist. So if it just comes down to hearsay, how can we speak scientifically about their existence? Is there any way to see the man behind the curtain? I am saying that there is and that it starts with our ability to see non-physical things, namely functions, for what they are. By addressing function as the elephant in the room we can start to develop scientific conversations about mental states, whose physical basis still eludes us. Once again, keep in mind that the physical basis of thought will not prove sufficiently illuminating once we have figured it out because the physical can’t explain the functional. Function leverages but is not caused by its underlying physical mechanism; it is caused by the reinforcement of negative and positive feedback to produce a functional outcome. So the study of function in natural systems has to be based on just what kinds of functions feedback will create.
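
What kinds of functions will feedback create? A toy sketch can at least show the principle: blind variation plus a selective test, with no designer anywhere, reliably produces a structure that passes the test. The bit-string “organism” and the fitness measure below are invented stand-ins for environmental feedback.

```python
import random

# Function from feedback alone: random variation plus a selective
# test gradually shapes a "functional" bit string. The target stands
# in for whatever the environment happens to reward.

random.seed(1)
target = [1, 0, 1, 1, 0, 0, 1, 0]
genome = [random.randint(0, 1) for _ in target]

def fitness(g):
    return sum(1 for a, b in zip(g, target) if a == b)

while fitness(genome) < len(target):
    mutant = list(genome)
    mutant[random.randrange(len(mutant))] ^= 1  # blind variation
    if fitness(mutant) >= fitness(genome):      # feedback: keep what works
        genome = mutant

print(genome)  # matches the target, though nothing "knew" the target
```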

Minds are specifically designed (taking this feedback cycle as the implicit designer) to control bodies, but not so much at the cellular level as at the aggregate level, which in higher animals has come to mean devising and prioritizing a single stream of coordinated actions. In principle, we can reverse engineer how minds work using evolutionary psychology. As it is currently practiced, however, evolutionary psychology has drawn a conservative boundary that includes just instinctive traits. Recall that instinct covers all information-based behavior except experience or reasoning. Our capacities to learn from experience and to reason also evolved and are instinctive, but they are general-purpose mechanisms that base behavior on stored information and not just on current sensory inputs. It is much harder to predict what a general-purpose mechanism might do, so it is a lot harder to study scientifically. Science traditionally derives its value from helping us predict, so this is not even an area that appears to be of value. Contrast the value of understanding learning and reasoning with the value of helping us learn or reason better, which supports the education industry. We spend a lot on education, so we also invest quite a bit in educational theory and techniques.

By establishing a new all-encompassing paradigm for science based on form and function dualism instead of having form-based paradigms for the physical sciences and no paradigms for the rest, I think we can develop a new-found appreciation for the study of function. We already recognize the value of studying function abstractly through the formal sciences. Mathematics and computer science yield algorithms that give us much greater control over the world than we had before. We can’t always predict what general purpose algorithms will do, but we don’t need to if we understand their function and know that they will relentlessly act to accomplish their function. They become predictable on a functional basis rather than in terms of their detailed actions, which are just the means to an end. We also recognize the value of functional descriptions of the universe, considering this has been the triumph of physical science. The physical world is not functional, but how we group objects and their behavior using cause and effect is. Our minds, as functional entities, can then leverage scientific laws for functional ends. And we recognize the value of functional descriptions of life, seeing genes as bearers of all the traits that make life work. Finally, we recognize the value of function in society and founded the social sciences to tease out patterns in the ways we have applied ourselves. Function is all about capacity, and our greatest and least understood capacity is our general-purpose intelligence. We have to get our heads out of the sand and start identifying the functional building blocks of the mind if we want to have control of our own future.
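
This functional kind of predictability is easy to demonstrate. In the sketch below (a randomized sort, chosen only as a familiar example), the detailed actions differ on every run because the pivots are picked at random, yet the function performed never varies.

```python
import random

# A randomized quicksort: the exact sequence of comparisons varies
# from run to run (unpredictable means), but the function performed
# (sorting) is completely predictable (predictable ends).

def qsort(items):
    if len(items) <= 1:
        return items
    pivot = random.choice(items)  # the detailed actions vary...
    left = [x for x in items if x < pivot]
    middle = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]
    return qsort(left) + middle + qsort(right)

print(qsort([5, 2, 9, 1, 5, 6]))  # ...but the outcome never does: [1, 2, 5, 5, 6, 9]
```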

It’s not that we haven’t been making steps in the right direction, but more course correction is needed. As recently as the 1930’s, psychologists hoped that behaviorism might explain all2. But by the 1950’s general-purpose thinking could no longer be ignored and the cognitive revolution arose to address higher mental function head on. By the 1970’s cognitive science had become a formal discipline and today most universities have departments for it. Unlike other disciplines where one or at most a couple of paradigms back up all research, cognitive science has avoided firm paradigms and has instead embraced a cross-disciplinary approach where paradigms from different fields support different aspects. While it is wiser to admit the limits of knowledge and work with what you know, it isn’t helping to establish a consensus paradigm. Practicing scientists can’t work outside accepted paradigms, so they end up working on details and hoping the big picture just emerges at some point. Cognition is recognized as a process; the mind as an entity remains ineffable. But if we just say the mind is a functional entity we can start to study functionality as a kind of existence instead of looking only at physicality and processes. Furthermore, all the functional constructs of the mind, from instincts to subconcepts to concepts to mental models are functional entities as well, whose physical implementations are secondary and not primary causes. The failure to address function as the underlying entity to be studied has created confusion and splintered study into dozens of subdisciplines. For example, many of these fall under the umbrella of postcognitivism, which holds that “algorithmic” cognitive approaches are short-sighted and one must consider a variety of more embedded mechanisms. That’s great, but I would argue that cognition as a process was never the point; function is the point of the mind, whether it is achieved through genetic, instinctive, subconceptual or conceptual information management. Furthermore, until function is acknowledged as one of the two fundamental categories of existence, science will proceed on the tacit assumption that eliminative materialism will eventually reduce the social sciences to physical explanations. That isn’t possible, and even leaving it on the table as a possibility creates a blind spot that hampers scientific progress.

Minds not Brains: Introducing Theoretical Cognitive Science

I’m going to make a big deal about the difference between the mind and the brain. We know what minds are from long experience and take the concept for granted, despite an almost complete absence of a scientific explanation. Conventionally, the mind is “our ability to feel and reason through a first-person awareness of the world”. This definition raises the question of what “feel”, “reason” and “first-person awareness” might be, since we can’t just define the mind by using terms that are only meaningful to the owner of one. While we can safely say they are techniques that help the brain perform its primary function, which is to control the body, we will have to dig deeper to figure out how they work. Our experience of mind links it strongly to our bodies, and scientists have long said it resides in the nervous system and the brain in particular. Steven Pinker says that “The mind is what the brain does.”1 This is only superficially right, because it is not what, but why. It is not the mechanism or form of the mind that matters as much as its purpose or function. But how can we embark on the scientific study of the mind from the perspective of its function? As currently practiced, the natural sciences don’t see function as a thing itself, but more as a side effect of mechanical processes. The social sciences start with the assumption that the mind exists but take no steps to connect it back to the brain. Finally, the formal sciences study theoretical, abstract systems, including logic, mathematics, statistics, theoretical computer science, information theory, game theory, systems theory, decision theory, and theoretical linguistics, but leave it to natural and social scientists to apply them to natural phenomena like brains and minds. What is the best scientific standpoint to study the mind? Cognitive science was created in 1971 to fill this gap, which it does by encouraging collaboration between the sciences. I think we need to go beyond collaboration and admit that the existing three branches have practical and metaphysical constraints that limit their reach into the study of the mind. We need to lift these constraints and develop a unified and expanded scientific framework that can cleanly address both mental and physical phenomena.

Viewed most abstractly, science divides into two branches, the formal and experimental sciences, with the formal being entirely theoretical, and the experimental being a collaboration between theory and testing. Experimental science further divides into fundamental physics, which studies irreducible fields and/or particles, and special sciences (all other natural and social sciences), which are presumed to be reducible to fundamental physics, at least in principle. Experimental science is studied using the scientific method, which is a loop in which one proposes a hypothesis, then tests it, and then refines and tests it again ad infinitum. Hypotheses are purely functional while testing is purely physical. That is, hypotheses are ideas with no physical existence, though we think about and discuss them through physical means, while testing tries to evaluate the physical world as directly as possible. Of course, we use theory to perform and interpret the tests, so it can’t escape some dependence on function. The scientific method tacitly acknowledges and leverages both functional and physical existence, even though it does not overtly explain what functional existence might be or attempt to explain how the mind works. That’s fine (science works), but we can no longer take functional existence and its implications for granted as we start to study the mind. It’s remarkable, really, that all scientific understanding, and everything we do for that matter, depend critically on our ability to use our minds, yet require no understanding of how they work or what they are doing. But we have to find a way to make minds and ideas into objects of study themselves to understand what they are.

The special sciences are broken down further into the natural and social sciences. The natural sciences include everything in nature except minds, and the social sciences study minds and their implications. The social sciences start with the assumption that people, and hence their minds, exist. They draw on our perspectives about ourselves, our behavior patterns, and what we think we are doing to explain what we are and help us manage our lives better. Natural scientists (aka hard scientists) call the social sciences “soft sciences” because they are not based on physical processes bound by mathematical laws of nature; nothing about minds has so far yielded that kind of precision. Our only direct knowledge of the mind is our subjective viewpoint, and our only indirect knowledge comes from behavioral studies, evolutionary psychology and outright speculation into the functions our minds appear to perform. The study of behavior finds patterns in the ways brains make bodies behave and may support the idea of mental states but doesn’t prove they exist. Evolutionary psychology also suggests how mental states could explain behavior, but can’t prove they exist. Studying the functions the mind performs by just guessing about them sounds crazy at first, but is actually the way all scientific hypotheses are formed: take a guess and see if it holds up. It too can’t prove mental states exist, but we need to remember that science isn’t about proving, it is about developing useful explanations.

The differences in approach between hard and soft sciences have opened up a gap that currently can’t be bridged, but we have to bridge it to develop a complete explanation of the mind. This schism between our subjective and objective viewpoints is sometimes called the explanatory gap. The gap is that we don’t know how physical properties alone could cause a subjective perspective (and its associated feelings) to arise. I closed this gap in The Mind Matters, but not rigorously. In brief, I said that the mind is a process in the brain that experiences things the way it does because creating a process that behaves like an agent and sees itself as an agent is the most effective way to get the job done. More to the point, it feels like an agent because it has to have some way of thinking about its senses and that way needs to keep them all distinct from each other. So perceptions are just the way our brains process information and “present” it to the process of mind. It is not a side effect; much of the wiring of the brain was designed to make this illusion happen exactly the way it does.

Natural science currently operates on the assumption that natural phenomena can be readily modeled by hypotheses which can be tested in a reproducible way. This works well enough for simple systems, i.e. those which can be modeled using a handful of components and rules. The mind, however, is not a simple system for three reasons: complexity, function, and control. Living tissues are complex systems with many interacting components, so while muscle tissue can be modeled as a set of fibers working together as a simple machine, like any complex system its behavior will become chaotic outside normal operating parameters. Next, the mind (and muscles) have a different metaphysical nature than nonliving things. Unlike rocks and streams, muscles and nerves are organized to perform a function rather than employ a specific physical form. And most significantly, the mind is not organized to perform functions itself but to control how the body will perform functions, and so could be called metafunctional. These three complicating factors make developing and testing hypotheses about the mind vastly more complicated than doing it for rocks and streams, so paradigms based only on natural laws won’t work. Yet the attitude among natural scientists is that the mind is just an elaborate cuckoo clock and so understanding it reduces to knowing its brain chemistry. That will indeed reveal the physical mechanisms, but it won’t reveal the reasons for the design, any more than understanding the clock explains why we want to know what time it is. When we study complex systems, like the weather, we have to accept that chaos and unpredictability are around every corner. When we study functional systems, like living things, we have to accept that functional explanations — and all explanations are functional — need to acknowledge the existence of function. And when we study control systems, like brains and minds, we have to accept that direct cause and effect is supplanted by indirect cause and effect through information processing. Natural sciences study complexity and function in living systems, but not the control aspect of minds. Control is addressed by a number of the formal sciences, but since the formal sciences are not concerned with natural phenomena like minds, the study of control by minds has been left high and dry. It falls under the purview of cognitive science, but we need to completely revamp our concept of what scientific method is appropriate to study function and control. We will need theories that seek to explain how control is managed from a functional perspective, that is, using information processing, and we will need ways to test them that are less direct than tests of natural laws.
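
The indirection of control deserves a small illustration. In this sketch of a toy thermostat (all values invented), nothing links cause to effect directly: a measurement becomes information, the information is compared to a goal, and only the resulting decision drives the action.

```python
# A toy control loop: cause and effect routed through information.
# A sensor reading is compared to a goal, and the decision, not the
# temperature itself, switches the heater. All values are invented.

goal = 20.0
temperature = 15.0

for step in range(10):
    reading = temperature                       # sense: world -> information
    heater_on = reading < goal                  # decide: information vs. goal
    temperature += 1.0 if heater_on else -0.5   # act: decision -> effect
    print(step, round(temperature, 1), heater_on)
```

The temperature settles into a small oscillation around the goal, a behavior explained by the loop’s function rather than by any single physical law of the room.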

Nearly all our knowledge of our mind comes from using it, not understanding it. We are experts at using our minds. Our facility develops naturally and is helped along by nurture. Then we spend decades at schools to further develop our ability to use our mind. But despite all this attention on using it, we think little about what it is and how it works. Just as we don’t need to understand how any machine works to use it, we don’t need to know how our mind works to use it. And we can no more intuit how it works than we can intuit how a car or TV works. We consequently take it for granted and even develop a blindness about the subject because of its irrelevance. But it is the relevant subject here, so we have to overcome this innate bias. We can’t paint a picture of a scene we won’t look at. While we have no natural understanding of it, we do know it is a construct of information managed by the brain. Understanding the physical mechanisms of the brain won’t explain the mind any more than taking a TV apart would explain TV shows, because for both the mind and TV shows the hardware is just a starting point from which information management constructs highly complex products. So the mind is less what the brain does than why it does it. It is not about how it physically accomplishes things so much as what it is trying to accomplish. This is the non-physical, functional existence I have argued for. In fact, for us, functional existence is primary to physical existence, because knowledge itself is information or function, so we only know of physical existence as mediated through functional existence, i.e. from observations we make with our minds (i.e. “I think therefore I am”).

Knowing that functional existence is real and being able to talk about it still doesn’t explain how it works. We take understanding to be axiomatic. We use words to explain it, but they are words defined in terms of each other without any underlying explanation. For example, to understand is to know the meaning of something, to know is to have information about, information is facts or what a representation conveys, facts are things that are known, convey is to make something known to someone, meaning is a worthwhile quality or purpose, purpose is a reason for doing something, reason is a cause for an event, and cause is to induce, give rise, bring about, or make happen. If anything, causality seems like it should reduce to something physical and not mental, yet it doesn’t. But the language of the mind is not intended to explain how understanding or the mind works, just to let us use understanding and our minds. If we are to explain how understanding and other mental processes work we will need to develop an objective frame of reference that can break mental states down into causes and effects or we will remain trapped in a relativistic bubble.

Let’s consider which sciences study the mind directly. Neuroscience studies the brain and nervous system, but this is not direct, for the same reason that studying computer hardware says little or nothing about what computer software does. On the other hand, psychology and cognitive science are dedicated to the study of the mind. Psychology studies the mind as we perceive it, our experience of mind, while cognitive science studies how it works. One could say psychology studies the subjective side and cognitive science studies the objective side. Psychology divides into a variety of subdisciplines, including neuropsychology, behavioral psychology, evolutionary psychology, cognitive psychology, psychoanalysis, and humanistic psychology. They each draw on a different objective source of information. Neuropsychology studies the brain for effects on behavior and cognition. Behavioral psychology studies behavior. Evolutionary psychology studies the impact of evolution. Cognitive psychology studies mental processes like perception, attention, reasoning, thinking, problem-solving, memory, learning, language, and emotion. Psychoanalysis studies experience (but with a medical goal). Humanistic psychology studies uniquely human issues, such as free will, personal growth, self-actualization, self-identity, death, aloneness, freedom, and meaning. Cognitive science focuses on the processes that support and create the mind. Most cognitive scientists, including me, are functionalists, maintaining that the mind should be explained in terms of what it does. But science continues to be almost completely dominated by a physicalist tradition, which suggests and even claims that studying the brain will ultimately explain the mind. I have adamantly argued that function does not reduce to form, even though it needs form. And it is true that knowing the form provides many clues to the function, and it is also true that form is our only hard evidence. But we are still a long way from unraveling all the mechanics of neurochemistry, though rapid progress is being made. In the meantime, without any more information than we already have at hand there is much that we can say about the brain’s function, that is, about the mind, by taking a functional perspective on what it is doing. So cognitive science should not be an interdisciplinary collaboration, but should reboot science from scratch by establishing a scientific approach to studying function that can achieve a level of objectivity comparable to our paradigm for studying form. I have, so far, proposed that all of science be refounded on the ontology of form and function dualism. The prevailing paradigm, which derives as I have noted from the Deductive Nomological Model, uses function to study form, while I propose to use function to study both form and function.

One other discipline formally studies the mind: philosophy. Practiced as an independent field, general philosophy studies fundamental questions, such as the nature of knowledge, reality, and existence. But because they don’t establish an objective basis for their claims, philosophers ultimately depend on the subjective, intuitive appeal of their perspectives. For example, universality is the notion that universal facts can be discovered and is therefore understood as being in opposition to relativism. Universality and relativism assume the concepts of facts, discovery, understanding, and perception, but these assumptions are at best loosely defined and really depend on a common knowledge of what they are. Philosophy builds on common-knowledge ideas without attempting to establish an objective basis. What principally distinguishes science is the effort to establish objectivity, and the way it does this is itself studied, unscientifically, as the philosophy of science. It is an ironic situation that the solid foundation upon which science has presumably been built is itself unclear and ultimately pretty subjective. George Bernard Shaw said, “Those who can, do; those who can’t, teach,” and this is a theme I have been repeating. We are designed to do things but not to understand how we do them or, much less, to teach how they are done. But understanding and teaching are important to get us to the next level so we can leverage what we know in new ways. We have long been perfectly capable of practicing science without dwelling too much on its philosophical basis, but that was before we started to study the mind. We desperately need an objective basis for objectivity itself, and for how to apply it to the study of both form and function, in order to proceed. Philosophers have asked the questions and laid out the issues, but scientists now have to step up and answer them.

Philosophy of science and philosophy of mind have detailed the issues at hand from a number of directions, and, characteristically of philosophy, have failed to indicate an objective path forward. I believe we can derive the objective philosophy we need by reasoning it out from scratch using the common and scientific knowledge of which we are most confident, which I will do in the next chapter. But a brief summary of the fields is a good starting point to provide some orientation. Science was a well-established practice long before efforts were made to describe its philosophy. Auguste Comte proposed in 1848 that science proceeds through three stages, the theological, the metaphysical, and the positive. The theological stage is prescientific and cites supernatural causes. In the metaphysical stage people used reason to derive abstract but natural forces such as gravity or nature. Finally, in the positive or scientific stage, we abandon the search for absolutes and embrace an ever-progressing refinement of facts based on empirical observations. So every theory must be guided by observed facts, which in turn can only be observed under the guidance of some theory. Thus arises the hypothesis-testing loop of the scientific method and the widely accepted view that science continually refines our knowledge of nature. Comte’s third stage developed further in the 1920’s into logical positivism, the theory that only knowledge verified empirically (by observation) was meaningful. More specifically, logical positivism says that the meaning of logically defined symbols could mirror or capture the lawful relationship between an effect and its cause2. Every term or symbol in a theory must correspond to an observed phenomenon, which then provides a rigorous way to describe nature mathematically. It was a bold assertion because it says that science derives the actual laws of nature, even though we know any given evidence can be used to support any number of theories, even if the simplest theory (i.e. by Occam’s razor) seems more compelling. In the middle of the 20th century, cracks began to appear in logical positivism (and its apotheosis in the DN Model, see above) as the sense of certainty promised by modernism began to be replaced by a postmodern feeling of uncertainty and continuous change. In the sciences, Thomas Kuhn published The Structure of Scientific Revolutions in 1962, which is remembered popularly for introducing the idea of paradigm shifts (though Kuhn did not coin that phrase specifically). Though Kuhn’s goal was to help science by unmasking the forces behind scientific revolutions, he inadvertently opened a door he couldn’t shut, forever ending dreams of absolutism and a complete understanding of nature and replacing them with a relativism in which potentially all truth is socially constructed. In the 1990s, postmodernists claimed all of science was a social construction in the so-called science wars. Because this seems to be true in many ways, science formally lost this battle against relativism and has continued full steam without clarifying its philosophical foundations. Again, while this is good enough to do science that studies form, it is not enough to do science that studies function. Arguably, we could and very well might develop a scientific tradition for studying function that lets us get the job done without a firm philosophical foundation either. After all, we need news you can use regardless of why it works. Maybe it will happen that way, but I personally consider the why to be the more interesting question, and function is so much more self-referential than form that I think studying it will turn out to require understanding what it means to study it.

The philosophy of mind is studied as a survey of topics including existence (the mind-body problem), theories of mental phenomena, consciousness/qualia/self/will, and thoughts/concepts/meaning. My goal, as noted, is to establish an objectively supportable stance on these topics and on objectivity itself, which I will then use to launch an investigation into the workings of the mind. It will take some time to do all this, but as a preview I will lay out where I will land on some fundamental questions:

I endorse physicalism (i.e. minimal or supervenience physicalism), which says the mind has a physical basis, or, as philosophers sometimes say, the mental supervenes on the physical. This means that a physical duplicate of the world would also duplicate our minds. While true duplication is impossible, my point here is just that the mind draws its power entirely from physical materials. Physicalism rejects the idea of an immortal soul and Descartes’ substance dualism in which mind and body are distinct substances. Physicalism is often taken to simultaneously reject any other kind of existence, making it a physical monism, but that rejection is unnecessary. At its core physicalism just says that physical things are physical. That one might also interpret something physical from another perspective is irrelevant to physicalism.

I endorse non-reductive physicalism, which is just a fancy way of saying that things that are not physical are not physical, and in particular, that function is not form or reducible to it. More accurately, mental explanations cannot be reduced solely to physical explanations. That doesn’t mean that physical things like brains, that can carry out functions, are not physical, because they are entirely physical from a physical perspective. But if you look at brains from the perspective of what they are doing you create an auxiliary kind of explanation, a functional one. And because explanatory perspectives are abstract, there are an unlimited number of functional perspectives (or existences) about everything. The brain is still physical, the explanations of it are not. To the extent the word “mind” is taken to be a functional perspective of what the brain is doing, it is really the union of all the explanatory perspectives the brain uses when going about its business. These functional perspectives are not mystical, they are relational, tying information to other information using math, logic or correlation. A given thought has a form as an absolute, physical particular in a brain, but its meaning is relative, being a generalization or idealization that might refer to any number of things. Thus, “three” and “above” are not physical particulars. A thought is a functional tool that may be employed in a specific physical example but exists as an abstraction independent of the physical.

I endorse functionalism, which is the theory that mental states are more profitably viewed from the perspective of what they do rather than what they are made of, that is, in terms of their function, not their form. In my ontology of form & function dualism mental states have both kinds of existence, with many possible takes as to what their function is, but they evolved to satisfy the control function for the body, and so our efforts to understand them should take this perspective first.

I endorse the idea that consciousness is a subprocess of the brain that is designed to create a subjective theater from which centralized control of the body by the brain can be performed efficiently. All the familiar aspects of consciousness such as qualia, self, and the will are just states managed by this subprocess. As a special spoiler, I will reveal that I endorse free will, even if the universe is deterministic, which to the best of our knowledge it is not.

Finally, I endorse the idea that thoughts, concepts, and meaning are information management techniques that have both conscious and subconscious aspects, where subconscious refers to subprocesses of the brain that are supportive of consciousness, which is the most supervisory subprocess.

While this says much about where I am going, it doesn’t say how I will get there or how a properly unified philosophy of science and mind implies these things.

Deriving an Appropriate Scientific Perspective for Studying the Mind

I have made the case for developing a unified and expanded scientific framework that can cleanly address both mental and physical phenomena. I am going to focus first on deriving an appropriate scientific perspective for studying the mind, which also bears on science at large. I will follow these five steps:

1. The common-knowledge perspective of how the mind works
2. Form & function dualism: things and ideas exist
3. The nature of knowledge: pragmatism, rationalism and empiricism
4. What makes knowledge objective?
5. Orienting science (esp. cognitive science) with form & function dualism and pragmatism

1. The common-knowledge perspective of how the mind works

Before we get all sciency, we should reflect on what we know about the mind from common knowledge. Common knowledge has much of the reliability of science in practice, so we should not discount its value. Much of it is uncontroversial and does not depend on explanatory theories or schools of thought, including our knowledge of language and many basic aspects of our existence. So what about the mind can we say is common knowledge? This brief summary just characterizes the subject and is not intended to be exhaustive. While some of the things I will assume from common knowledge are perhaps debatable, my larger argument will not depend on them.

First and foremost, having a mind means being conscious. Consciousness is our first-person (subjective) awareness of our surroundings through our senses and our ability to think and control our bodies. We implicitly trust our sensory connection to the world, but we also know that our senses can fool us, so we’re always re-sensing and reassessing. Our sensations, formally called qualia, are subjective mental states like redness, warmth, and roughness, or emotions like anger, fear, and happiness. Qualia have a persistent feel that occurs in direct response to stimuli. When not actually sensing we can imagine we are sensing, which stimulates the memory of what qualia felt like. It is less vivid than actual sensation, though dreams and hallucinations can seem pretty real. While our sensory qualia inform us of physical properties (form), our emotional qualia inform us of mental properties (function). Fear, desire, love, revulsion, etc., feel as real to us as sight and sound, though mature humans also recognize them as abstract constructions of the mind. As with sensory qualia, we can recall emotions, but again the feeling is less vivid.

Even more than our senses, we identify our conscious selves with our ability to think. We can tell that our thoughts are happening inside our heads, and not, say, in our hearts. It is common knowledge that our brains are in our heads and brains think1, so this impression is a well-supported fact, but why do we feel it? Let’s call this awareness of our brains “encephaloception,” a subset of proprioception (our sense of where the parts of our body are), but also including other somatosenses like pain, touch, and pressure. The main reason our encephaloception pinpoints our thoughts in our heads is that senses work best when they provide consistent and accurate information, and the truth is we are thinking with our brains. As with other internal organs, it helps us to be aware of pain, motion, impact, balance, etc. in the head and brain, as these can affect our ability to think, so having sufficient sensory awareness of our brain just makes sense. It is not just a side effect, say, of having vision or hearing in the head that we assume our thoughts originate there; it is the consistent integration of all the sensory information we have available.

But what is thinking? Loosely speaking it is the union of everything we feel happening in our heads, but more specifically we think of it as a continuous train of thought which connects what is happening in our minds from moment to moment in a purposeful way. This can happen through a variety of modalities, but the primary one is the simulation of current events. As our bodies participate in events, the mind simultaneously simulates those events to create an internal “movie” that represents them as well as we understand them. We accept that our understanding is limited to our experience and so tends to focus on levels of detail and salient features that have been relevant to us in the past. The other modalities arise from emphasizing the use of specific qualia and/or learned skills. Painting and sculpting emphasize vision and pattern/object recognition, music emphasizes hearing and musical pattern recognition, and communication usually emphasizes language. Trains of thought using these modalities feel different from our default “movie” modality but have in common that our mind is stepping through time trying to connect the dots so things “make sense.” Making sense is all about achieving pleasing patterns and our conscious role in spotting them.

And even above our ability to think, we consciously identify with our ability to control our bodies and, indirectly through them, the world. Though much of our talent for thought is innate, we believe the most important part is learned, the result of years of experience in the school of hard knocks. We believe in our free will to take what our senses, emotions, and memory can offer us to select the actions that will serve us best. At every waking moment, we are consciously considering and choosing our upcoming actions. Sometimes those actions are moments away, sometimes years. Once we have selected a course of action, we will, as much as possible, execute it on “autopilot,” which is to say we leverage conditioned behavior to reduce the burden on our conscious mind by letting our subconscious handle it. So we recognize that we have a conscious mind that is just that part that is actively considering our qualia and memories to select next actions and a subconscious mind that is processing our qualia and memories and performing a variety of control functions that don’t require conscious control. All of this is common knowledge from common sense, and it is also well-established scientifically.

But what is thinking? What does it mean to consider and decide? Thinking seems like such an ineffable process, but we know a lot about it from common knowledge. We know that concepts are critical building blocks of thought, and we know that concepts are generalizations gleaned from grouping similar experiences together into a unit. Language itself functions by using words to invoke concepts. We each make strong associations between each word we know and a variety of concepts that word has been used to represent. Our ability to use language to communicate hinges on the idea that the same word will trigger very similar concepts in other people. Our concepts are all connected to each other through a web of relationships which reveal how the concepts will affect each other under different circumstances. This web thus reveals the function of the concept and constitutes its meaning, so its meaning and hence its existence is entirely functional and not physical. Its neural physical manifestation is only indirectly related and hence incidental, as the meaning could in principle be realized in different people or by another intelligent being or even just written down. Although every physical brain contemplating any given concept will have some subtle and deep differences in its understanding of it, because the concept is fundamentally a generalization, subtle and deep characteristics are necessarily of less significance than the overall thrust.
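
A toy model makes the relational picture clearer. In the sketch below, a concept’s “meaning” is nothing but its web of relations to other concepts; every concept, relation, and entry is invented for illustration.

```python
# A toy concept web: each concept is defined only by its relations
# to other concepts, so its meaning is purely relational. The whole
# web is invented for illustration.

web = {
    "dog":    {"is_a": "animal", "can": "bark", "has": "fur"},
    "cat":    {"is_a": "animal", "can": "meow", "has": "fur"},
    "animal": {"is_a": "living thing", "can": "move"},
}

def describe(concept):
    """Read a concept's meaning off its web of relationships."""
    return [f"{concept} {relation.replace('_', ' ')} {other}"
            for relation, other in web.get(concept, {}).items()]

print(describe("dog"))  # ['dog is a animal', 'dog can bark', 'dog has fur']
```

Nothing here depends on how the table is physically stored; the same relations could be realized in neurons, on paper, or in another program, which is the sense in which a concept’s existence is functional rather than physical.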

The crux of thinking, though, is what we do with concepts: we reason with them. Basically, reasoning means carefully laying out a set of related concepts and the relevant relationships that bind them and drawing logical implications. To be useful, the concepts and implications have to be correlated to a situation for which one wants to develop a purposeful strategy. In other words, when we face a situation we don’t know how to handle, it creates a problem we have to solve. We try to identify the most relevant factors of the problem by correlating the situation to all the solutions we have reasoned out in the past, which lets us narrow it down to a few key concepts and relationships. To reason, we consider just these concepts and our rules about them in a kind of cartoon of reality, and then we hope that conclusions we drew about these generalized concepts will apply to the real situation we are addressing. In practice, it usually works so well that we think of our concepts as being identical to the things they represent, even though they are really just loose descriptive generalizations that are nothing like what they represent and, in fact, only capture a small slice of abstract functional properties about those things. But they tend to be exactly what we need to know. “Thinking outside the box” refers to the idea of contemplating uses for concepts beyond the ones most familiar to us. An infinite variety of possible alternate uses for any thing or concept always exists, and it is a good idea to consider some of them when a problem arises, but most of the time we can solve most problems well enough by just recombining our familiar concepts in familiar ways.
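
Here is a minimal sketch of that “cartoon of reality”: a few generalized concepts matched to the situation, general rules relating them, and a loop that draws whatever implications follow. The facts and rules are invented examples.

```python
# Reasoning over a cartoon of reality: start from the concepts that
# match the situation, then apply general rules until no new
# implications follow. The facts and rules are invented examples.

facts = {"raining", "outdoors"}
rules = [
    ({"raining", "outdoors"}, "getting wet"),
    ({"getting wet"}, "seek shelter"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # draw the implication
            changed = True

print(facts)  # now includes "getting wet" and "seek shelter"
```

The hope, as with all reasoning, is that conclusions drawn inside the closed model carry back to the open-ended situation that prompted it.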

This much has arguably been common knowledge for thousands of years, even if not articulated as such, and so can even be subsumed under the broader heading of common sense, which includes everything intuitively obvious to normal people.2 But can civilization and culture be said to have generated trustworthy common knowledge that goes beyond what we can intuit for ourselves using common sense just by growing up? I am not referring to the common knowledge of details, e.g. historical facts, but to the common knowledge of generalities, i.e. the way things work. Here I would divide such generalities into two camps: those that have scientific support and hence can be clearly explained and demonstrated, and those that don’t but still enjoy broad enough acceptance to be considered common knowledge. I will consider these two camps in turn.

Our scientific common knowledge expands dramatically with each generation. We take much for granted today from physics, chemistry, and biology that was unknown a few hundred years ago. Even if we are weak on the details, we are all familiar with the scope of physical and chemical discoveries from artifacts we use every day. We know natural selection is the prime mover in evolution, causally linking biological traits to the benefits they provide. Relative to the mind specifically, we have familiarity with discoveries from neuroscience, computer science, psychology, sociology, and more that expand our insight into what the brain is up to. Although we recognize there is still much more unknown than known, we are pretty confident about a number of things. We know the mind is produced by the brain and not an ethereal force independent of the brain or body. This is scientific knowledge, as thoroughly proven by innumerable scientific experiments as gravity or evolution, and is accepted as common knowledge by those who recognize science’s capacity to increase our predictive power over the world. Those who reject science or who employ unscientific methods should read no further, as I believe the alternatives are smoke and mirrors and should not be trusted as the basis for guiding decisions.

Beyond being powered by the brain, we also now know from common knowledge that the mind traffics solely in information. We don’t need to have any idea how it manages this to see that everything happening in our subjective sphere is relational, just a big description of things in terms of other things. It is a large pool of information that we gather in real time and integrate both with information stored from a lifetime of experience and with instinctive intuitions collected over millions of years of evolution. The advent of computers has given us a more general conception of information than our parents and grandparents had. We know it can all be encoded as 0’s and 1’s, and we have now seen so many kinds of information encoded digitally that we have a common-knowledge intuition about information that didn’t exist 30 to 60 years ago.

It is also common knowledge that there is something about understanding the brain and/or mind that makes it a hard problem. While everything else in the known universe can be explained with well-defined (if not perfectly fleshed-out) laws of physics and chemistry, biology has introduced incredible complexity. How has it accomplished that, and how can we understand it? The ability of living things to use feedback from natural selection, i.e. evolution, is the first piece of the puzzle. Complexity can be managed over countless generations to develop traits that exploit almost any energy source to support life better. But although this can create some very complex and interdependent systems, we have been pretty successful in breaking them down into genetic traits with pros and cons. We basically understand plants, for example, which don’t have brains per se. The control systems of plants are less complex than animal brains, but there is much we still don’t understand, including how they communicate with each other through mycorrhizal networks to manage the health of whole forests. But while we know the role brains serve and how they are wired to do it with neurons, we have only a vague idea how the neurons do it. We know that even a complete understanding of how the hundred or so neurotransmitters work isn’t going to explain it.

We know now from common knowledge that we have to confront head-on the question of what brains are doing with information to tackle the problem. And the elephant in the room is that science doesn’t recognize the existence of information. There are protons and photons, but no informatons or cogitons. What the brain is up to is still viewed strictly through a physical lens as a process reducible to particles and waves. This has always run counter to our intuitions about the mind, and now that we understand information it runs counter to our common-knowledge understanding of what the mind is really doing. So we have a gap between the tools and methods science brings to the table and the problem that needs to be solved. The solution is not to introduce informatons and cogitons to the physical bestiary, but to see information and thought in a way that makes them explainable as phenomena.

So when we think to ourselves that we “know what we know” and that it is not just reducible to neural impulses, we are on to something. That knowledge can be related verbally and so “jump” between people is proof that it is fundamentally nonphysical, although we need a physical brain to reflect on it. All ideas are abstractions that indirectly characterize real or imagined things. Our minds themselves, using the physical mechanisms of the brain, are organized and oriented so as to leverage the power this abstraction brings. We know all this — better today than ever before — but we find ourselves stymied to address the matter scientifically because abstraction has no scientific pedigree. But I am not going to ignore common sense and common knowledge, as science is wont to do, as I unravel this problem.

2. Form & Function Dualism: things and ideas exist

We can’t study anything without a subject to study. What we need first is an ontology, a doctrine about what kinds of things exist. We are all familiar with the notion of physical existence, and so to the extent we are referring to things in time and space that can be seen and measured, we share the well-known physicalist ontology. Physicalism is an ontological monism, which means it says just one kind of thing exists, namely physical things. But is physicalism a sufficient ontology to explain the mind? Die-hard natural scientists insist it is and must be, and that anything else is new-age nonsense. I am sympathetic to that view, as mysticism is not explanatory and consequently has no place in discussions about explanations. And we can certainly agree from common knowledge that there is a physical aspect, being the body of each person and the world around us. But knowing that seems to give us little ability to explain our subjective experience, which is so much more complex than the observed physical properties of the brain would seem to suggest. Can we extend science’s reach with another kind of existence that is not supernatural?

We are intimately familiar with the notion of mental existence, as in Descartes’ “I think therefore I am.” Feeling and thinking (as states of mind) seem to us to exist in a distinct way from physical things, as they lack extent in space or time. Idealism is the monistic ontology that asserts that only mental things exist, and that what we think of as physical things are really just mental representations. In other words, we dream up reality any way we like. But science and our own experience have provided overwhelming evidence of a persistent physical reality that doesn’t fluctuate in accord with our imagination, and this makes idealism rather untenable. If we join the two together, though, we can imagine a dualism between mind and matter in which both the mental and the physical exist without either being reducible to the other. All religions have seized on this idea, stipulating a soul (or equivalent) that is quite distinct from the body. But no scientific evidence has been found supporting the idea that the mind can physically exist independent of the body or is in any way supernatural. If we can extend science beyond physicalism, however, we might find a natural basis for the mind that could lift religion out of this metaphysical quicksand. Descartes also promoted dualism, but he got into trouble identifying the mechanism: he supposed the brain had a special mental substance that did the thinking, a substance that could in principle be separated from the body. Descartes imagined the two substances somehow interacted in the pineal gland. But no such substance was ever found, and the pineal gland’s primary role is to make melatonin, which helps regulate sleep.

If the brain just operates under the normal rules of spacetime, as the evidence suggests, we need an explanation of the mind bound by that constraint. While Descartes’ substance dualism doesn’t deliver, two other forms of dualism have been proposed. Property dualism tries to separate mind from matter by asserting that mental states are nonphysical properties of physical substances (namely brains). This misses the mark too because it suggests a direct or inherent relationship between mental states and the physical substance that holds the state (the brain), and as we will see it is precisely the point that this relationship is not direct. It is like saying software is a non-physical property of hardware; while software runs on hardware, the hardware reveals nothing about what the software is meant to do.

Finally, predicate dualism proposes that predicates, being any subjects of conversation, are not reducible to physical explanations and so constitute a separate kind of existence. I will demonstrate that this is true and so hold that predicate dualism is the correct ontology science needs, but I am rebranding it as form and function dualism (just why is explained below). Sean Carroll writes,3 “Does baseball exist? It’s nowhere to be found in the Standard Model of particle physics. But any definition of “exist” that can’t find room for baseball seems overly narrow to me.” Me too. Baseball encompasses everything from an abstract set of rules to a national pastime to specific events featuring two baseball teams. Some parts have a physical correlate and some don’t, but the physical part isn’t the point. A game is an abstraction about possible outcomes when two sides compete under a set of rules. “Apple” and “water” are (seemingly) physical predicates, while “three”, “red”, and “happy” are not. Three is an abstraction of quantity, red of color, happy of emotion. Quantity is an abstraction of groups; color of light frequency, brightness, and context; and emotion of experienced mental states. Apple and water are also abstractions; apples are fruits from certain varieties of trees, and water is the liquid state of H2O, but both words are usually used generically and not to refer to a specific apple or portion of water.4 Any physical example of apple or water will fall short of any ideal definition in some ways, but this doesn’t matter because function is never the same as form; it is intentionally an abstract characterization.

I prefer form and function dualism to predicate dualism because it is both clearer and more technically correct. It is clearer because it names both kinds of things that exist. It is more correct because function is bigger than predicates. I divide function into active and passive forms. Active function uses reference, logical reasoning, and intelligence. The word “predicate” emphasizes a subject, being something that refers to something else, either specifically (definite “the”) or generally (indefinite “a”) through the ascription of certain qualities. Predicates are the subjects (and objects) of logical reasoning. Passive function, which is employed by evolution, instinct, and conditioned responses, uses mechanisms and behaviors that were previously established to be effective in similar situations. Evolution established that fins, legs, and wings could be useful for locomotion. Animals don’t need to know the details so long as they work, but the selection pressures are on the function, not the form. We can actively reason out the passive function of wings to derive principles that help us build planes. Some behaviors originally established with reason, like tying shoelaces, are executed passively (on autopilot) without active use of predicates or reasoning.

Function can only be achieved in physical systems by identifying and applying information, which, as I have previously noted, is the basic unit of function. Life is the only kind of physical system that has developed positive feedback mechanisms capable of capturing and using information. These mechanisms evolved because they let life compete more effectively: predicting the future beats blind guessing. Evolution captures information using genes, which apply it either directly through gene expression (to regulate or code proteins) or indirectly through instinct (to influence the mind). Minds capture information using memory, a partially understood neural process, and then apply it through recall or recognition, which subconsciously identify appropriate memories through triggering features. But if information is captured using physical genes or neurons, what trick makes it nonphysical? That is the power of abstraction: it allows stored patterns to stand as indefinite generalities that can be correlated later to new situations to provide a predictive edge. Information is created actively by using concepts to represent general situations and passively via pattern matching. Genes create proteins that do chemical pattern matching, while instinct and conditioned response leverage subconscious neural pattern matching.
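
The active/passive split lends itself to a sketch as well. Below (everything invented), passive function replays a stored stimulus-response pairing established earlier, while active function derives a response from explicit predicates for a case the lookup table has never seen.

```python
# Passive function: a conditioned lookup, analogous to instinct or habit.
conditioned = {("hot", "stove"): "withdraw hand"}

def passive(stimulus):
    # No reasoning here; just "this pattern worked before."
    return conditioned.get(stimulus)

# Active function: explicit predicates and a rule, applied to a novel case.
def active(temperature_c, flammable):
    if temperature_c > 60 and flammable:
        return "move object away"  # derived on the spot, not recalled
    return "no action"

print(passive(("hot", "stove")))    # withdraw hand  (replayed)
print(active(200, flammable=True))  # move object away  (reasoned)
```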

This diagram shows how form and function dualism compares to substance dualism and several monisms. These two perspectives, form and function, are not just different ways of viewing a subject; they define different kinds of existences. Physical things have form, e.g. in spacetime, or potentially in any dimensional state in which they can have an extent. Physical systems that leverage information have both form and function, but to the extent we are discussing the function we can ignore or set aside considerations of the form, because it just provides a means to an end. Function has no extent but is instead measured in terms of its predictive power. Pattern-matching techniques and algorithms implement functionality passively through brute force, while reasoning creates information actively by laying out concepts and rules that connect them. In a physical world, form makes function possible, so they coexist, but form and function can’t be reduced to each other. This is why I show them in the diagram as independent dimensions that intersect but generally do their own thing.

Technically, function emerges from form, meaning that interactions of forms cause function to “spring” into existence with new properties not present in the forms. But it has nothing to do with magic; it is just a consequence of abstraction decoupling information from what it refers to. The information systems are still physical, but the function they manage is not. Function can be said to exist in an abstract, timeless, nonphysical sense independent of whether it is ever implemented. This is true because an idea is not made possible because we think it; it is “out there” waiting to be thought whether we think it or not. However, as physical creatures, our access to function and the ideal realm is limited by the physical mechanisms our brains use to implement abstraction. We could, in principle, build a better mind, or perhaps a computer, that can do more, but any physical system will always be physically constrained and so limit our access to the infinite domain of possible ideas. Idealism is the reigning ontology across this hypothetical space of ideas, but it can’t stand alone in our physical space. And though we can’t think all ideas, we can steer our thoughts in any direction, so given enough time we could potentially conceive anything.

So the problem with physicalism as it is generally presented is that form is not the only thing a physical universe can create; it can create form and function, and function can’t be explained with the same kind of laws that apply to form but instead needs its own set of rules. If physicalism had just included rules for both direct and abstract existence in the first place, we would not need to have this discussion. But instead, it was (inadvertently) conceived to exclude an important part of the natural world, the part whose power stems from the fact that it is abstracted away from the natural world. This is ironic, considering that scientific explanation (and all explanation) is itself immaterial function and not form. How can science see both the forest and the trees if it won’t acknowledge the act of looking?

[Image: Magritte’s painting of a pipe, captioned “Ceci n’est pas une pipe.”]

A thought about something is not the thing itself. “Ceci n’est pas une pipe,” as Magritte said.5 The phenomenon is not the noumenon, as Kant would have put it: the thing-as-sensed is not the thing-in-itself. If it is not the thing itself, what is it? Its whole existence is wrapped up in its potential to predict the future; that is it. However, to us, as mental beings, it is very hard to distinguish phenomena from noumena, because we can’t know the noumena directly. Knowledge is only about representations, and isn’t and can’t be the physical things themselves. The only physical world the mind knows is actually a mental model of the physical world. So while Magritte’s picture of a pipe is not a pipe, the image in our minds of an actual pipe is not a pipe either: both are representations. And what they represent is a pipe you can smoke. What this critically tells us is that we don’t care about the pipe; we only care about what the pipe can do for us, i.e. what we can predict about it. Our knowledge was never about the noumenon of the pipe; it was only about the phenomena that the pipe could enter into. In other words, knowledge is about function and only cares about form to the extent it affects function. We know that physical things have a provable physical existence — that the noumena are real — it is just that our knowledge of them is always mediated through phenomena. Our minds experience phenomena as a combination of passive and active information, where the passive work is done for us subconsciously, finding patterns in everything, and the active work is our conscious train of thought applying abstracted concepts to whatever situations seem to be good matches for them.

Given the foundation of form and function dualism, what can we now say distinguishes the mind from the brain? I will argue that the mind is a process in the brain viewed from its role of performing the active function of controlling the body. That’s a mouthful, so let me break it down. First, the mind is not the brain but a process in the brain. Technically, a process is any series of events that follows some kind of rules or patterns, but in this case I am referring specifically to the information-managing capabilities of the brain as mediated by neurons. We don’t know quite how they do it, but we can draw an analogy to a computer process that uses inputs and memory to produce outputs. But, as argued before, we are not so concerned with how this brain process works technically as with what function it performs, because we now see the value of distinguishing functional from physical existence. Next, I said the mind is about active function. To be clear, we only have one word for mind, but it might refer to several things. Let’s call the “whole mind” the set of all processes in the brain taken from a functional perspective. Most of that is subconscious, and we don’t necessarily know much about it consciously. When I talk about the mind, I generally mean just the conscious mind, which consists only of the processes that create our subjective experience. That experience has items under direct, focused attention and also items under peripheral attention. It includes information we construct actively and also provides us access to much information that was constructed passively (e.g. via senses, instinct, intuition, and recollection). The conscious mind exists as a distinct process from the whole mind because it is an effective way for animals to make the kinds of decisions they need to make on a continuous basis.
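
The computer-process analogy suggests a control loop. This sketch is only a schematic of that analogy, with hypothetical stand-ins for subconscious pattern matching and conscious selection; it claims nothing about neural mechanism.

```python
# Schematic control loop: inputs + memory -> candidate actions -> selection.
def subconscious(percepts, memory):
    # Passive processing: pattern-match percepts against stored experience.
    return [memory[p] for p in percepts if p in memory]

def conscious(candidates):
    # Active processing: consider the packaged candidates and pick one
    # (by an arbitrary toy heuristic; the point is the division of labor).
    return max(candidates, key=len, default="explore")

memory = {"smoke": "leave building", "bell": "lunch time"}
percepts = ["smoke", "birdsong"]
print(conscious(subconscious(percepts, memory)))  # leave building
```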

3. The nature of knowledge: pragmatism, rationalism and empiricism

Given that we agree to break entities down into form and function, things and ideas, physical and mental, we next need to consider what we can know about them, and what it even means to know something. A theory about the nature of knowledge is called an epistemology. I described the mental world as being the product of information, which consists of patterns that can be used to predict the future. What if we propose that knowledge and information are the same thing? Charles Sanders Peirce called this epistemology pragmatism: the idea that knowledge consists of access to patterns that help predict the future for practical uses. As he put it, pragmatism is the idea that our conception of the practical effects of the objects of our conception constitutes our whole conception of them. So “practical” here doesn’t mean useful; it means usable for prediction, e.g. for statistical or logical entailment. Practical effects are the function as opposed to the form. It is just another way of saying that information and knowledge differ from noise to the extent they can be used for prediction. Being able to predict well doesn’t confer certainty the way mathematical proofs do; it improves one’s chances but proves nothing.

Pragmatism gets a bad rap because it carries a connotation of compromise: the pragmatist has given up on theory and has “settled” for the “merely” practical. But the whole point of theory is to explain what will really happen and not simply to be elegant. It is not the burden of life to live up to theory, but of theory to live up to life. When an accepted scientific theory doesn’t exactly match experimental evidence, it is because the experimental conditions are more complex than the theory’s ideal model. After all, the real world is full of imperfections that the simple equations of ideal models don’t take into account. We can potentially model secondary and tertiary effects with additional ideal models and then combine the models and theories to get a more accurate overall picture. However, in real-world situations it is often impractical to build this more perfect overall model, both because the information is not available and because most situations we face include human factors, for which physical theories don’t apply and social theories are imprecise. In these situations pragmatism shines. The pragmatist, whose goal is to achieve the best prediction given real-world constraints, will combine all available information and approaches to do it. This doesn’t mean giving up on theory; on the contrary, a pragmatist will use well-supported theory to the limit of practicality, then supplement it with experience, their pragmatic record of what worked best in the past, and merge the two to reach a plan of action.

Recall that information is the product of both a causative (reasoned) approach and a pattern-analysis (e.g. intuitive) approach. Both kinds of information can be used to build the axioms and rules of a theoretical model. We aspire to causative rules for science because they lead to necessary conclusions, but in their absence we will leverage statistical correlations. We associate subconscious thinking with the pattern-analysis approach, but it also leverages concepts established explicitly with a causative approach. Both our informal and our formal thinking combine causation and pattern analysis at many levels. Because our conscious and subconscious minds work together in a way that appears seamless to us, we are inclined to believe that reasoned arguments are correct and not dependent on subjective (biased) intuition and experience. But we are strongly wired to think in biased ways, not because we are fundamentally irrational creatures but because biased thinking is often a more effective strategy than unbiased reason. We are both irrational and rational because both help in different ways, but we have to spot and overcome irrational biases or we will make decisions that conflict with our own goals.

All of our top-level decisions have to strike a balance between intuition/experience-based (conservative) thinking and reasoned (progressive) thinking. Conservative methods let us act quickly and confidently so we can focus our attention on other problems. Progressive methods slow us down by casting doubt, but they reveal better solutions. It is the principal role of consciousness to provide the progressive element, to make the call between a tried-and-true or a novel approach to any situation. These calls are always themselves pragmatic, but if in the process we spot new causal links, then we may develop new ad hoc or even formal theories, and we will remember these theories along with the amount of supporting evidence they seem to have. Over time our library of theories and their support will grow, and we will draw on them for rational support as needed.

Although pragmatism is necessary at the top level of our decision-making process, where experience and reason come together to effect changes in the physical world, it is not a part of the theories themselves, which exist independently as constructs of the mental (i.e. functional) world. We do have to be pragmatic about what theories we develop and about how we apply them, but since theories represent idealized functional solutions independent of practical concerns, the knowledge they represent is based on a narrower epistemology than pragmatism. But what is this narrower epistemology? After all, it is still the case that theories help predict the future for practical benefits. And Peirce’s definition, that our conception of the practical effects of the objects of our conception constitutes our whole conception of them, is also still true. What is different about theory is that it doesn’t speak to our whole conception of effects, inclusive of our experience, but focuses on causes and effects in idealized systems using a set of rules. Though technically a subset of pragmatism, rule-based systems literally have their own rules and can be completely divorced from all practical concerns, so for all practical purposes they have a wholly independent epistemology based on rules instead of effects. This theory of knowledge is called rationalism, which holds that reason (i.e. logic) is the chief source of knowledge. Put another way, where pragmatism uses both causative and pattern-analysis approaches to create information, reason uses only the logical, causative approach, though it leverages axioms derived from both causative and pattern-based knowledge. A third epistemology is empiricism, which holds that knowledge comes only or primarily from sensory experience. Empiricism is also a subset of pragmatism; it differs in that it pushes where pragmatism pulls. In other words, empiricism says that knowledge is created as stimuli come in, while pragmatism says it arises as actions and effects go out. The actions and effects do ultimately depend on the inputs, and so pragmatism subsumes empiricism, which is not prescriptive about how the inputs (evidence) might be used. In science, the word empiricism is taken to mean rationalism + empiricism, i.e. scientific theory and the evidence that supports it, so one can say that rationalism is the epistemology of theoretical science and empiricism is the epistemology of applied science.

Mathematics and highly mathematical physical theories are often studied on an entirely theoretical basis, with considerations as to their applicability left for others to contemplate. The study of algorithms is mostly theoretical as well because their objectives are established artificially, so they can’t be faulted for inapplicability to real-world situations. Developing algorithms can’t, in and of itself, explain the mind, because even if the mind does employ an algorithm (or constellation of algorithms), the applicability of those algorithms to the real-world problems the mind solves must be established. But iteratively we can propose algorithms and tune them so that they do align with problems the mind seems to solve. Guessing at algorithms will never reveal the exact algorithm the mind or brain uses, but that’s ok. Scientists never discover the exact laws of nature; they only find rules that work in all or most observed situations. What we end up calling an understanding or explanation of nature is really just a framework of generalizations that helps us predict certain kinds of things. Arguably, laws of nature reveal nothing about the “true” nature of the universe. So it doesn’t matter whether the algorithms we develop to explain the mind have anything to do with what the mind is “actually” doing; to the extent they help us predict what the mind will do they will provide us with a greater understanding of it, which is to say an explanation of it.

Because proposing algorithms, or outlines of potential algorithms, and then testing them against empirical evidence is entirely consistent with the way science is practiced (i.e. empiricism), this is how I will proceed. But we can’t just propose algorithms at random; we will need a basis for establishing appropriate artificial objectives, and that basis has to be related to what it is we think minds are up to. This is exactly the feedback loop of the scientific method: propose a hypothesis, test it, and refine it ad infinitum. The available evidence informs our choice of solution, and the effectiveness of the solution informs how we refine or revise it. From the high level at which I approach this subject in this book, I won’t need to be very precise in saying just how the algorithms work because that would be premature. All we can do at this stage is provide a general outline for what kinds of skills and considerations are going into different aspects of the thought process. Once we have come to a general agreement on that, we can start to sweat the details.
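
Reduced to code, that loop might look like the sketch below. The “hypothesis” is a single threshold and the observations are invented, but the shape is the point: propose, test against evidence, keep whatever predicts more reliably.

```python
# Propose-test-refine: reliability, not truth, is the running score.
observations = [(2.0, False), (4.5, True), (5.0, True), (3.9, False), (6.1, True)]

def reliability(threshold):
    hits = sum((x > threshold) == label for x, label in observations)
    return hits / len(observations)

threshold = 1.0
for _ in range(10):                       # refine "ad infinitum" (ten passes here)
    revision = threshold + 0.5            # propose a small change
    if reliability(revision) >= reliability(threshold):
        threshold = revision              # keep revisions that predict better
print(threshold, reliability(threshold))  # 4.0 1.0
```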

While my approach to the subject will be scientifically empirical, we need to remember that the mind itself is primarily pragmatic and only secondarily capable of reason (or intuition) to support that pragmatism. So my perspective for studying the mind is not itself the way the mind principally works. This isn’t a problem so long as we keep it in mind: we are using a reasoned approach to study something that itself uses a highly integrated combination of reason and intuition (basically causation and pattern). It would be disingenuous to suggest that I have freed myself of all possible biases in this quest and that my conclusions are perfectly objective; even established science can never be completely free of biases. But over time science can achieve ever more effective predictive models, which speaks to the ultimate standard for objectivity: can results be duplicated? The hallmark of objectivity, though, is not its measure but its methods: logic and reason. The conclusions one reaches through logic using a system of rules built on postulates can be provably true, contingent on the truth of the postulates, which makes it a very powerful tool. Although postulates are true by definition from the perspective of the logical model that employs them, they have no absolute truth in the physical world, because our direct knowledge of the physical world is always based on evidence from individual instances and not on generalities across similar instances. So truth in the physical world (as we see it from the mental world) is always a matter of degree, the degree to which we can correlate a given generality to a group of phenomena. That degree depends both on the clarity of the generalization and on the quality of the evidence, and so is always approximate at best, but can often be close enough to a perfect correlation to be taken as truth (for practical purposes). Exceptions to such truths are often seen more as “shortcomings of reality” than as shortcomings of the truth, since truth (like all concepts) exists more in a functional sense than in the sense of having a perfect correlation to reality.

But how can we empirically approach the study of the mind? If we accept the idea that the mind is principally a functional entity, it is largely pointless to look for physical evidence of its existence beyond establishing the physical mechanism (the brain) that supports it. This is because physical systems can make information management possible but can’t explain all the uses to which the information can be put, just as understanding the hardware of the internet doesn’t say anything about the information flowing through it. We must instead look at the functional “evidence.” We can never get direct evidence, being facts or physical signs, of function (because function has no form), so we either need to look at physical side effects or develop a way to see “evidence” of function directly, independent of the physical. Behavior provides the clearest physical evidence of mental activity, but our more interesting behavior results from complex chains of thought and can’t be linked directly to stimulus and response. Next, we have personal evidence of our own mind from our own experience of it. This evidence is much more direct than behavioral evidence but has some notable shortcomings as well.

Introspection has a checkered past as a tool for studying the mind. Early hopes that introspection might be able to qualitatively and quantitatively describe all conscious phenomena were overly optimistic, largely because they misunderstood the nature of the tool. Our conscious minds have access to information based both on causation and on pattern analysis, but our conscious awareness of this information is filtered through an interpretive layer that generalizes the information into conceptual buckets. So these generalized interpretations are not direct evidence but, like behavior, are downstream effects of information processing. Even so, our interpretations can provide useful clues even if they can’t be trusted outright. Freud was too quick to attach significance to noise in his interpretation of dreams, as we have no reason to assume that the content of dreams serves any function. Many activities of the mind do serve a function, however, so we can study them from the perspective of those functions. As the conscious mind makes a high-level decision, it will access functionally relevant information packaged in a form that the conscious subprocess can handle, which is at least partially in the form of concepts or generalizations. These concepts are the basis of reason (i.e. rationality), so to the extent our thinking is rational, our interpretation of how we think is arguably exactly how we think (because we are conscious of it). But that extent is never exact or complete, because our concepts draw on a vast pool of subconscious information which heavily colors how we use them, and because we also use subconscious data-analysis algorithms (most notably memory and recognition). For both of these reasons, any conscious interpretation will only be approximate and may cause us to overlook or completely misinterpret our actual motivations (which we may have other motivations to suppress).

While both behavior and introspection can provide evidence that can suggest or support models of the mind, they are pretty indirect and can’t provide very firm support for those models. But another way to study function is to speculate about what function is being performed. Functionalism holds that the defining characteristics of mental states are the functions they bring about, quite independent of what we think about those functions (introspectively) or whether we act on them (behaviorally). This is the “direct” study of function independent of the physical to which I alluded. Speculating about function, aka the study of causes and effects, is an exercise of logic. It depends on setting up an idealized model with generalized components that describes a problem. These components don’t exist physically but are exemplars that embody only the properties of their underlying physical referents that are relevant to the situation. Given the existence of these exemplars (including their associated properties) as postulates, we can then reason about what behavior we can expect from them. Within such a model, function can be understood very well or even perfectly, but it is never our expectation that these models will align perfectly with real-world situations. What we hope for is that they will match well enough that predictions made using the model will come true in the real world. Our models of the functions of mental states won’t exactly describe the true functions of those mental states (if we could ever discover them), but they will still be good explanations of the mind if they are good at predicting the functions our minds perform.

Folk explanations differ from scientific explanations in the breadth and reliability of their predictive power. While there are unlimited folk perspectives we can concoct to explain how the mind works, all of which will have some value in some situations, scientific perspectives (theories) seek a higher standard. Ideally, science can make perfect predictions, and in many physical situations it nearly does. Less ideally, science should at least be able to make predictions with odds better than chance. The social sciences usually have to settle for such a reduced level of certainty because people, and the circumstances in which they become involved, are too complex for any idealized model to describe. So how, then, can we distinguish bona fide scientific efforts in matters involving minds from pseudoscience? I will investigate this question next.

4. What Makes Knowledge Objective?

It is easier to define subjective knowledge than objective knowledge. Subjective knowledge is anything we think we know, and it counts as knowledge as long as we think it does. We set our own standard. It starts with our memory; a memory of something is knowledge of it. Our minds don’t record the past for its own sake but for its potential to help us in the future. From past experience we have a sense of what kinds of things we will need to remember, and these are the details we are most likely to commit to memory. This bias aside, our memory of events and experiences is fairly automatic and has considerable fidelity. The next level of memory is of our reflections: thoughts we have had about our experiences, memories, and other thoughts. I call these two levels of memory and knowledge detailed and summary. There is no exact line separating the two, but details are kept as raw and factual as possible, while summaries are higher-order interpretations that derive uses for the details. It takes some initial analysis, mostly subconscious, to study our sensory data so we can even represent details in a way that we can remember. Summaries are a subsidiary analysis of details and other summary information performed using both conscious (reasoned) and subconscious (intuitive) methods. These details and summaries are what we know subjectively.

We are designed to gather and use knowledge subjectively, so where does objectivity come in? Objectivity creates knowledge that is more reliable and broadly applicable than subjective knowledge. Taken together, reliability and broad applicability account for science’s explanatory power. After all, to be powerful, knowledge must both fit the problem and do so dependably. Objective approaches let us create both physical and social technologies to manage both goods and services to high standards. How can we create objective knowledge that can do these things? As I noted above, it’s all about the methods. Not all methods of gathering information are equally effective. Throughout our lives, we discover better ways of doing things, and we will often use these better ways again. Science makes more of an effort to identify and leverage methods that produce better information, i.e. information with reliability and broad applicability. These methods are collectively called the “scientific method.” It isn’t one method but an evolving set of best practices. These practices are only intended to bring some order to the pursuit and do not presume to cover everything; in particular, they say nothing about the creative process, nor do they seek to constrain the flow of ideas. The scientific method is a technology of the mind, a set of heuristics to help us achieve more objective knowledge.

The philosophy of science is the conviction that an objective world independent of our perceptions exists and that we can gain an understanding of it that is also independent of our perceptions. Though it is popularly thought that science reveals the “true” nature of reality, it has been and must always be a level removed from reality. An explanation or understanding of the world will always be just one of many possible descriptions of reality and never reality itself. But science doesn’t seek a multitude of explanations. When more than one explanation exists, science looks for common ground between them and tries to express them as varying perspectives on the same underlying thing. For example, wave-particle duality allows particles to be described both as particles and as waves. Both descriptions work and provide explanatory power, even though we can’t imagine macroscopic objects being both at the same time. We are left with little intuitive feel for the nature of reality, which serves to remind us that the goal of objectivity is not to see what is actually there but to gain the most explanatory power over it that we can. The canon of generally accepted scientific knowledge at any point in time will be considered charming, primitive, and not terribly powerful when looked back on a century or two later, but this doesn’t diminish its objectivity or claim to success.

That said, the word “objectivity” hints at certainty. While subjectivity acknowledges the unique perspective of each subject, objectivity is ostensibly entirely about the object itself, its reality independent of the mind. If an object actually did exist, any direct knowledge we had of it would then remain true no matter which subject viewed it. This goal, knowledge independent of the viewer, is admirable but unattainable. Any information we gather about an object must always ultimately depend on observations of it, either with our own senses or using instruments we devise. And no matter how reliable that information becomes, it is still just information, which is not the object itself but only a characterization of traits with which we ultimately predict behavior. So despite its etymology, we must never confuse objectivity with “actual” knowledge of an object, which is not possible. Objectivity only characterizes the reliability of knowledge based on the methods used to acquire it.

With those caveats out of the way, a closer look at the methods of science will show how they work to reduce the likelihood of personal opinion and maximize the likelihood of reliable reproduction of results. Below I list the principal components of the scientific method, from most to least helpful (approximately) in establishing its mission of objectivity.

    1. The refinement of hypotheses. This cornerstone of the scientific method is the idea that one can propose a rule describing how kinds of phenomena will occur, and that one can test this rule and refine it to make it more reliable. While it is popularly thought that scientific hypotheses are true until proven otherwise (i.e. falsified, as Karl Popper put it), we need to remember that the product of objective methods, including science, is not truth but reliability.6 It is not so much that laws are true or can be proven false as that they can be relied on to predict outcomes in similar situations. The Standard Model of particle physics purports (with considerable success) that any two subatomic particles of the same kind are identical for all predictive purposes except for occupying a different location in spacetime.7 Maybe they are identical (despite this being impossible to prove), and this helps account for the many consistencies we observe in nature. But location in spacetime is a big wrinkle. The three-body problem remains insoluble in the general case, and solving for the movements of all astronomical bodies in the solar system is considerably more so. Predictive models of how large groups of particles will behave (e.g. for climate) will always just be models for which reliability is the measure and falsifiability is irrelevant. Also, in most real-world situations many factors limit the exact alignment of scientific theory to circumstances, e.g. impurities, the ability to acquire accurate data, and subsidiary effects beyond the primary theory being applied. Even so, by controlling the conditions adequately, we can build many things that work very reliably under normal operating conditions. Some aspects of mental function will prove to be highly predictable while others will be more chaotic, but our standard for scientific value should still be explanatory power.
    2. Scientific techniques. This most notably includes measurement via instrumentation rather than use of the senses. Instruments are inherently objective in that they can’t have a bias or opinion regarding the outcome, at least to the extent they are mechanical and don’t employ computer programs into which biases may have been unintentionally embedded. However, they are not completely free from biases or errors in how they are used, and there are limits to the reliability of any instrument, especially at the edges of its operating specifications. Scientific techniques also include a wide variety of practices that have been demonstrated to be effective and are written up into standard protocols in all scientific disciplines to increase the chances that results can be replicated by others, which is ultimately the goal of science.
    3. Critical thinking. I will define critical thinking here without defense, as that requires a more detailed understanding of the mind than I have yet provided. Critical thinking is an effort to employ objective methods of thought with proven reliability while excluding subjective methods known to be more susceptible to bias. Next, I distinguish five of the most significant components of critical thinking:

3a. Rationality. Rationality is, in my theory of the mind, the subset of thinking concerned with applying causality to concepts, aka reasoning. As I noted in The Mind Matters, thinking and the information that is thought about divide into two camps: reason, which manages information derived using a causative approach, and intuition, which manages information derived using a pattern-analysis approach. Both approaches are used to some degree for almost every thought we have, but it is often useful to focus on one of them as the sole or predominant one for the purpose of analysis. The value of the rational approach over the intuitive is in its reproducibility, which is the primary objective of science and the knowledge it seeks to create. Because rational techniques can be written down to characterize both starting conditions and all the rules and conclusions they imply, they have the potential to be very reliable.

3b. Inductive reasoning. Inductive reasoning extrapolates patterns from evidence. While science seeks causative links, it will settle for statistical correlations if it has to. Newton used inductive reasoning to posit gravity, which was later given a cause by Einstein’s theory of general relativity as a deformation of space-time geometry.

3c. Abductive reasoning. Abductive reasoning seeks the simplest and most likely explanations; it is a pattern-matching heuristic that favors the kinds of matches that tend to work out best. Occam’s Razor is an example often used in science: “Among competing hypotheses, the one with the fewest assumptions should be selected.”

3d. Open-mindedness. Closed-mindedness means having a fixed strategy to deal with any situation. It enables a confident response in any circumstance, but works badly if one tries to use it beyond the conditions those strategies were designed to handle. Open-mindedness is an acceptance of the limitations of one’s knowledge along with a curiosity about exploring those limitations to discover better strategies. While everyone must be open-minded in situations where ignorance is unavoidable, one hopes to develop sufficient mastery over most of the situations one encounters to be able to act confidently, in a closed-minded way, without fear of making a mistake. While this is often possible, scientists must always remember that perfect knowledge is unattainable and must stay alert for possible cracks in their knowledge. These cracks should be explored with objective methods to discover more reliable knowledge and strategies than one might already possess. By acknowledging the limits and fallibility of its approaches and conclusions, science can criticize, correct, and improve itself. Thus, more than just a bag of tricks to move knowledge forward, science is characterized by a willingness to admit to being wrong.

3e. Countering cognitive biases. More than just prejudice or closed-mindedness, cognitive biases are subconscious pattern analysis algorithms that usually work well for us but are less reliable than objective methods. The insidiousness of cognitive biases was first exposed by Tversky and Kahneman in their 1971 paper, “Belief in the law of small numbers” (simulated in the sketch after this list).8,9 Cognitive biases use pattern analysis to lead us to conclusions based on correlations and associations rather than causative links. They are not simply inferior to objective methods, because they can account for indirect influences that objective methods can overlook. But robust causative explanations are always more reliable than associative explanations, and in practice they tend to be right where biases are wrong (where “right” and “wrong” are taken not as absolutes but as expressions of very high and low reliability).

    4. Peer review. Peer review is the evaluation of a scientific work by one or more people of similar competence to assess whether it was conducted using appropriate scientific standards.
    5. Credentials. Academic credentials attest to the completion of specific education programs. Titular credentials, publication history, and reputation add to a researcher’s credibility. While no guarantee, credentials help establish an author’s scientific reliability.
    6. Pre-registration. A recently added best practice is pre-registration, which clears a study for publication before it has been conducted. This ensures that the decision to publish is not contingent on the results, which would be biased.10
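
As promised under item 3e, here is a small simulation of the “law of small numbers”: tiny samples routinely look far more extreme than the process that generated them, which is exactly what intuition forgets. The coin and the 1-in-16 figure are elementary probability, not data from the paper.

```python
# How often does a fair coin look perfectly biased in a sample of five flips?
import random

random.seed(1)
trials, extreme = 10_000, 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(5))  # five-flip sample
    if heads in (0, 5):          # "all heads" or "all tails"
        extreme += 1
print(extreme / trials)          # ~0.0625, i.e. 2 * 0.5**5: about 1 sample in 16
```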

The physical world is not itself a rational place, because reason has a functional existence, not a physical one. So rational understanding, and consequently what we think of as truth about the physical world, depends on the degree to which we can correlate a given generality to a group of phenomena. But how can we expect a generality (i.e. a hypothesis) that worked for some situations to work for all similar situations? Recall the Standard Model example above: even if all subatomic particles of a kind are interchangeable for predictive purposes, location in spacetime is a big wrinkle, and predictive models of large groups of particles will always be judged by reliability rather than falsifiability. Yet particles are not simply free-moving; they clump into atoms and molecules in pretty strict accordance with laws of physics and chemistry that have been elaborated rather well. And macroscopic objects, whether natural or manufactured to serve specific purposes, obey many rules with considerably more fidelity than free-moving weather systems, a fact upon which our whole technological civilization depends. The question I am going to explore in this book is whether scientific, rational thought can be successfully applied to function and not just form, and specifically to the mental function comprising our minds. Are some aspects highly predictable while others remain chaotic?

We have to keep in mind just how much we take the correlation of theory to reality for granted when we move above the realm of subatomic particles. No two apples are alike, nor any two gun parts, though Eli Whitney’s success with interchangeable parts has led us to think of them as being so. They are interchangeable once we slot them into a model or hypothesis, but in reality any two macroscopic objects have many differences between them. A rational view of the world breaks down when the boundaries between objects become unclear as imperfections mount. Is a blemished or rotten apple still an apple? What about a wax apple or a picture of an apple? Is a gun part still a gun part if it doesn’t fit? A hypothesis that is completely logical and certain will still have imperfect applicability to any real-world situation because the objects that comprise it are idealized, and the world is not ideal. But still, in many situations this uncertainty is small, often vanishingly small, which allows us to build guns and many other things that work very reliably under normal operating conditions.

How can we mitigate subjectivity and increase objectivity? More observations from more people help, preferably with instruments, which are much more accurate and bias-free than senses. This addresses evidence collection, but it not so easy to increase objectivity over strategizing and decision-making. These are functional tasks, not matters of form, and so are fundamentally outside the physical realm and so not subject to observation. Luckily, formal systems follow internal rules and not subjective whims, so to the degree we use logic we retain our objectivity. But this can only get us so far because we still have to agree on the models we are going to use in advance, and our preference of one model over another ultimately has subjective aspects. To the degree we use statistical reasoning we can improve our objectivity by using computers rather than innate or learned skills. Statistical algorithms exist that are quite immune to preference, bias, and fallacy (though again, deciding what algorithm to use involves some subjectivity). But we can’t yet program a computer to do logical reasoning on a par with humans. So we need to examine how we reason in order to find ways to be more objective about it so we can be objective when we start to study it. It’s a catch-22. We have to understand the mind first before we figure out how to understand it. If we rush in without establishing a basis for objectivity, then everything we do will be a matter of opinion. While there is no perfect formal escape from this problem, we informally overcome this bootstrapping problem with every thought through the power of assumption. An assumption, logically called a proposition, is an unsupported statement which, if taken to be true, can support other statements. All models are built using assumptions. While the model will ultimately only work if the assumptions are true, we can build the model and start to use it on the hope that the assumptions will hold up. So can I use a model of how the mind works built on the assumption that I was being objective to then establish the objectivity I need to build the model? Yes. The approach is a bit circular, but that isn’t the whole story. Bootstrapping is superficially impossible, but in practice is just a way of building up a more complicated process through a series of simpler processes: “at each stage a smaller, simpler program loads and then executes the larger, more complicated program of the next stage”. In our case, we need to use our minds to figure out our minds, which means we need to start with some broad generalizations about what we are doing and then start using those, then move to a more detailed but still agreeable model and start using that, and so on. So yes, we can only start filling in the details, even regarding our approach to studying the subject, by establishing models and then running them. While there is no guarantee it will work, we can be guaranteed it won’t work if we don’t go down this path. While not provably correct, nothing in nature can be proven. All we can do is develop hypotheses and test them. By iterating on the hypotheses and expanding them with each pass, we bootstrap them to greater explanatory power. Looking back, I have already done the first (highest level) iteration of bootstrapping by endorsing form & function dualism and the idea that the mind consists of processes that manage information. 
For the next iteration, I will propose an explanation for how the mind reasons, which I will then use to support arguments for achieving objectivity.
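To make this iterative bootstrapping concrete, here is a minimal sketch in Python (my illustration, not a claim about neural machinery). The toy “model” at each stage is just a numeric estimate, and each stage uses the previous stage’s output to build a better one; the data, the tolerance of 2.0, and the three rounds are all hypothetical choices.

    # Stage 1: a crude model (a plain average over noisy data with an outlier).
    data = [2.0, 2.1, 1.9, 9.5, 2.05]
    estimate = sum(data) / len(data)
    # Later stages run the current model and use its output to build the next:
    for stage in range(3):
        kept = [x for x in data if abs(x - estimate) < 2.0]  # trust the old model
        estimate = sum(kept) / len(kept)                     # refine it
    print(round(estimate, 2))  # settles near 2.01 once the outlier is excluded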

So then, from a high level, how does reasoning work? I presume a mind that starts out with some innate information processing capabilities and a memory bank into which experience can record learned information and capabilities. The mind is free of memories (a blank slate) when it first forms but is hardwired with many ways to process information (e.g. senses and emotions). Because our new knowledge and skills (stored in memory) build on what came before, we are essentially continually bootstrapping ourselves into more capable versions of ourselves. I mention all this because it means that the framework with which we reason is already highly evolved even from the very first time we start making conscious decisions. Our theory of reasoning has to take into account the influence of every event in our past that changed our memory. Every event that even had a short-term impact on our memory has the potential for long-term effects because long-term memories continually form and affect our overall impressions even if we can’t recall them specifically.

One could view the mind as a morass of interconnected information that links every experience or thought to every other. That view won’t get us very far because it gives us nothing to manipulate, but it is true, and any more detailed views we develop should not contradict it. But on what basis can we propose to deconstruct reasoning if the brain has been gradually accumulating and refining a large pool of data for many years? On functional bases, of which I have already proposed two, the logical and the statistical, which I introduced above with pragmatism. Are these the only two approaches that can aid prediction? Supernatural prophecy is the only other way I can think of, but we lack reliable (if any) access to it, so I will not pursue it further. Just knowing that the mind, however it might be working, is using logical and/or statistical techniques to accomplish its goals gives us a lot to work with. First, it would make sense, and I contend that it is true, that the mind uses both statistical and logical means to solve any problem, using each to the maximum degree it helps. In brief, statistical means excel at establishing the assumptions, and logical means at drawing out conclusions from the assumptions.

While we can’t yet say how neurons make reasoning possible, we can say that reasoning uses statistics and logic, and from our knowledge of the kinds of problems we solve and how we solve them, we can see more detail about which statistical and logical techniques we use. Statistically, we know that all our experience contributes supporting evidence to generalizations we make about the world. More frequently used generalizations come to mind more readily than lesser-used ones and are sometimes also associated with words or phrases, as with the concept APPLE. An APPLE could be a specimen of fruit of a certain kind, or a reproduction or representation of such a specimen, or part of a metaphor or simile, situations in which the APPLE concept helps illustrate something else. We can use innate statistical capabilities to recognize something as an APPLE by correlating the observed (or imagined) aspects of that thing against our large database of every encounter we have ever had with APPLES. It’s a lot of analysis, but we can do it instantly and with considerable confidence. Our concepts are defined by the union of our encounters, not by dictionaries. Dictionaries just summarize words; since words are generalizations and generalizations are summaries, dictionaries are effective because they summarize well. But brains are like dictionaries on steroids: our summaries of the assumptions and rules behind our concepts and models are much deeper, reinforced by every affirming or opposing interaction we have ever had. Again, most of this is innate: we generalize, memorize, and recognize whether we want to or not, using built-in capacities. Consciousness plays an important role I will discuss later, but “sees” only a small fraction of the computational work our brains do for us.
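As a rough illustration of recognition-as-correlation, here is a minimal Python sketch. It assumes objects can be reduced to feature vectors and that each concept is summarized by a prototype distilled from past encounters; the features, numbers, and names are all hypothetical.

    from math import dist

    # Prototypes standing in for the union of our encounters with each concept.
    prototypes = {
        "APPLE":  (0.9, 0.8, 0.10),   # (roundness, redness, size)
        "CHERRY": (0.9, 0.9, 0.02),
        "PEAR":   (0.6, 0.2, 0.12),
    }

    def recognize(observed):
        # Pick the concept whose stored summary best correlates with the observation.
        return min(prototypes, key=lambda c: dist(prototypes[c], observed))

    print(recognize((0.85, 0.7, 0.09)))  # -> APPLE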

Let’s move on to logical abilities. Logic operates in a formal system, which is a set of assumptions or axioms and rules of inference that apply to them. We have some facility for learning formal systems, such as the rules of arithmetic, but everyday reasoning is not done using formal systems for which we have laid out a list of assumptions and rules. And yet the formal systems must exist, so where do they come from? The answer is that we have an innate capacity to construct mental models, which are at once informal and formal systems. They are informal on many levels, which I will get into, but they also serve the formal role required for their use in logic. How many mental models (models, for short) do we have in our heads? Looked at most broadly, we each have one: the whole morass of all the information we have ever processed. But it is not very helpful to take such a broad view, nor is it compatible with our experience of using mental models. Rather, it makes sense to think of a mental model as the fairly small set of assumptions and rules that describe a problem we typically encounter. So we might have a model of a tree or of the game of baseball. When we want to reason about trees or baseball, we pull out our mental model and use it to draw logical conclusions. From the rules of trees, we know trees have a trunk with ever smaller branches branching off, bearing leaves that usually fall off in the winter. From the rules of baseball, we know that an inning ends on the third out. Referring back a paragraph, we can see that models and concepts are the same things: they are generalizations, which is to say assessments that combine a set of experiences into a prototype. Though built from the same data, models and concepts have different functional perspectives: models view the data from the inside, as the framework in which logic operates, and concepts view it from the outside, as the generalized meaning it represents.
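A mental model’s formal side can be sketched as a tiny rule system. The following Python fragment is only an illustration of the idea that a model is a small set of assumptions plus rules of inference; the single baseball rule is this sketch’s entire formal system.

    # A model as assumptions (facts) plus rules of inference.
    model = {
        "facts": {"outs": 3},
        "rules": [lambda f: {"inning_over": True} if f.get("outs", 0) >= 3 else {}],
    }

    def infer(model):
        # Apply each rule to the current facts and fold its conclusions back in.
        facts = dict(model["facts"])
        for rule in model["rules"]:
            facts.update(rule(facts))
        return facts

    print(infer(model))  # -> {'outs': 3, 'inning_over': True}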

While APPLE, TREE, and BASEBALL are individual concepts/models, no two instances of them are the same. Any two apples must differ at least in time and/or place. When we use a model for a tree (let’s call it the model instance), we customize the model to fit the problem at hand. So for an evergreen tree, for example, we will think of needles as a degenerate or alternate form of leaves. Importantly, we don’t consciously reason out the appropriate model for the given tree; we recognize it using our innate statistical capabilities. A model or concept instance is created through recognition of underlying generalizations we have stored from long experience, and then tweaked on an ad hoc basis (via further recognition and reflection) to add unique details to this instance. Reflection can be thought of as a conscious tool to augment recognition. So a typical model instance will be based on recognition of a variety of concepts/models, some of which will overlap and even contradict each other. Every model instance thus contains a set of formal systems, so I generally call it a constellation of models rather than a model instance.

We reason with a model constellation by using logic within each component model and then using statistical means to weigh the models against each other. The critical aspect of the whole arrangement is that it sets up formal systems in which logic can be applied. Beyond that, statistical techniques provide the huge amount of flexibility needed to line up formal systems with real-world situations. The whole trick of the mind is to represent the external world with internal models and to run simulations on those models to predict what will happen externally. We know that all animals have some capacity to generalize to concepts and models because their behavior depends on being able to predict the future (e.g. where food will be). Most animals, and humans in particular, can extend their knowledge faster than their own experience allows by sharing generalizations with others via communication and language, which have genetic cognitive support. And humans can extend their knowledge faster still through science, which formally identifies objective models.
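The division of labor might be sketched like this, with logic supplying each model’s conclusion and statistics choosing among the models. The fit scores and conclusions below are hypothetical placeholders for recognition strengths.

    # Each component model contributes a logically derived conclusion;
    # statistical fit decides which model dominates the constellation.
    constellation = [
        (0.70, "deciduous model: expect bare branches in winter"),
        (0.25, "evergreen model: expect needles year-round"),
    ]
    fit, conclusion = max(constellation)  # weigh the models against each other
    print(conclusion)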

So what steps can we take to increase the objectivity of what goes on in our minds, which has some objective elements in its use of formal models but also many subjective elements that help form and interpret those models? Devising software that could run mental models would help because it could avoid fallacies and guard against biases. It would still ultimately need to prioritize using preferences, which are intrinsically subjective, but we could at least try to be careful and fair in setting them up. And although such software could guard against the abuses of bias, we have to remember that all generalizations are a kind of bias, being arguments for one way of organizing information over another. We can’t yet write software that can manage concepts or models, but machine learning algorithms, which are statistical in nature, are advancing quickly and becoming general enough to behave in ever more “clever” ways. Since concepts and models are themselves statistical entities at their core, we will need to leverage machine learning as a starting point for software that simulates the mind.

Still, there is much we can do to improve our objectivity of thought short of replacing ourselves with machines, and science has been refining methods to do so from the beginning. Science’s success depends critically on its objectivity, so it has long tried to reject subjective biases. It does this principally by cultivating a culture of objectivity. Scientists try to put opinion aside to develop hypotheses in response to observations. They then test them with methods that can be independently confirmed. Scientists also use peer review to increase independence from subjectivity. But what keeps peers from being subjective? In his 1962 classic, The Structure of Scientific Revolutions12, Thomas Kuhn noted that even a scientific community that considers itself objective can become biased toward existing beliefs and will resist shifting to a new paradigm until the evidence becomes overwhelming. This observation inadvertently opened a door through which postmodern deconstructionists launched the science wars, an argument that sought to undermine the objective basis of science by calling it a social construction. To some degree this is undeniable, which has left science with a desperate need for a firmer foundation. The refutation science has fallen back on for now was best put by Richard Dawkins, who noted in 2013 that “Science works, bitches!”13. Yes, it does, but until we establish why, we are blustering much like the social constructionists. The reason science works is that scientific methods increase objectivity while reducing subjectivity and relativism. It doesn’t matter that they don’t (and in fact can’t) eliminate subjectivity entirely. All that matters is that they reduce it, which distinguishes science from social construction by directing it toward goals. Social constructions go nowhere, but science creates an ever more accurate model of the world. So, yes, science is a social construction, but one that continually moves closer to truth, if truth is defined in terms of knowledge that can be put to use. In other words, from a functional perspective, truth just means increasing the amount and quality of useful information. It is not enough for scientific communities to assume that best efforts will produce objectivity; we must also discover how preferences, biases, and fallacies can mislead the whole community. Tversky and Kahneman did groundbreaking work exposing the extent of cognitive biases in scientific research, most notably in their 1971 paper, “Belief in the law of small numbers.”1415 Beyond just being aware of biases, scientists should not have to work in situations with a vested interest in specific outcomes. This can potentially happen in both public and private settings, but it is more commonly a problem when science is used to justify a commercial enterprise.

5. Orienting science (esp. cognitive science) with form & function dualism and pragmatism

The paradigm I am proposing to replace physicalism, rationalism, and empiricism is a superset of them. Form & function dualism embraces everything physicalism stands for but doesn’t exclude function as a form of existence. Pragmatism embraces everything rationalism and empiricism stand for but also includes knowledge gathered from statistical processes and function.

But wait, you say, what about biology and the social sciences: haven’t they been making great progress within the current paradigm? Well, they have been making great progress, but they have been doing it using an unarticulated paradigm. Since Darwin, biology has pursued a function-oriented approach. Biologists examine all biological systems with an eye to the function they appear to serve, and they consider the satisfaction of function to be an adequate scientific justification, but it isn’t one under physicalism, rationalism, or empiricism. Biologists cite Darwin and evolution as justification for this kind of reasoning, but that doesn’t make it science. The theory of evolution is unsupportable under physicalism, rationalism, and empiricism alone, but instead of acknowledging this metaphysical shortfall, some scientists simply ignore evolution and reasoning about function, while others embrace it without being overly concerned that it falls outside the scientific paradigm. Evolutionary function occupies a somewhat confusing place in reasoning about function because it is not teleological: evolution is not directed toward an end or shaped by a purpose but is a blind process without a goal. But this is irrelevant from an informational standpoint, because information never directs toward an end anyway; it just helps predict. Goals are artifacts of formal systems, and so contribute to logical but not statistical information management techniques. In other words, goals and logic are imaginary constructs; they are critical for understanding the mind but can be ignored for studying evolution and biology, which has allowed biology to carry on despite this weakness in its foundation.

The social sciences, too, have been proceeding on an unarticulated paradigm. Officially, they try to stay within the bounds of physicalism, rationalism, and empiricism, but the human mind introduces a black box, which is what scientists call a part of the system that is studied entirely through its inputs and outputs without any attempt to explain the inner workings. Some efforts to explain it have been attempted. Pavlov and Skinner proposed that behaviorism could explain the mind as nothing more than conditioned responses (classical and operant conditioning, respectively), which sounded good at first but didn’t explain all that minds do. Chomsky refuted it in a rebuttal to Skinner’s Verbal Behavior by explaining how language acquisition leverages innate linguistic talents16. And Piaget extended the list of innate cognitive skills by developing his staged theory of intellectual development. So we now have good reason to believe the mind is much more than conditioned behavior and employs reasoning and subconscious know-how. But that is not the same thing as having an ontology or epistemology to support it. Form & function dualism and pragmatism give us the leverage to separate the machine (the brain) from its control (the mind) and to dissect the pieces.

Expanding the metaphysics of science has a direct impact across science, not just regarding the mind. First, it finds a proper home for the formal sciences in the overall framework. As Wikipedia says, “The formal sciences are often excluded as they do not depend on empirical observations.” Next, and critically, it provides a justification for the formal sciences to serve as the foundation for the other sciences, which depend on mathematics, not to mention logic and hypotheses themselves. The truth is that physicalism, rationalism, and empiricism offer no metaphysical justification for invoking the formal sciences in their support. With my paradigm, the justification becomes clear: function plays an indispensable role in the way the physical sciences leverage generalizations (scientific laws) about nature. In other words, scientific theories are from the domain of function, not form. Next, it explains the role evolutionary thinking is already playing in biology, because it reveals how biological mechanisms use information stored in DNA to control life processes through feedback loops. Finally, this expanded framework will ultimately let the social sciences shift from black boxes to knowable quantities.

But my primary motivation for introducing this new framework is to provide a scientific perspective for studying the mind, which is the domain of cognitive science. It will elevate cognitive science from a loose collaboration of sciences to a central role in fleshing out the foundation of science. Historically the formal sciences have been almost entirely theoretical pursuits because formal systems are abstract constructs with no apparent real-world examples. But software and minds are the big exceptions to this rule and open the door for formalists to study how real-world computational systems can implement formal systems. Theoretical computer science is a well-established formal treatment of computer science, but there is no well-established formal treatment for cognitive science, although the terms theoretical cognitive science and computational cognitive science are occasionally used. Most of what I discuss in this book is theoretical cognitive science because most of what I am doing is outlining the logic of minds, human or otherwise, but with a heavy focus on the design decisions that seem to have impacted earthly, and especially human, minds. Theoretical cognitive science studies the ways minds could work, looking at the problem from the functional side, and leaves it as a (big) future exercise to work out how the brain actually brings this sort of functionality to life.

It is worth noting here that we can’t conflate software with function: software exists physically as a series of instructions, while function exists mentally and has no physical form (although, as discussed, software and brains can produce functional effects in the physical world and this is, in fact, their purpose). Drew McDermott (whose class I took at Yale) characterized this confusion in the field of AI like this (as described by Margaret Boden in Mind as Machine):

A systematic source of self-deception was their common habit (made possible by LISP: see 10.v.c) of using natural-language words to name various aspects of programs. These “wishful mnemonics”, he said, included the widespread use of “UNDERSTAND” or “GOAL” to refer to procedures and data structures. In more traditional computer science, there was no misunderstanding; indeed, “structured programming” used terms such as GOAL in a liberating way. In AI, however, these apparently harmless words often seduced the programmer (and third parties) into thinking that real goals, if only of a very simple kind, were being modelled. If the GOAL procedure had been called “G0034” instead, any such thought would have to be proven, not airily assumed. The self-deception arose even during the process of programming: “When you [i.e. the programmer] say (GOAL… ), you can just feel the enormous power at your fingertips. It is, of course, an illusion” (p. 145).17

This raises the million-dollar question: if an implementation of an algorithm is not itself function, where is the function, i.e. real intelligence, hiding? I am going to develop the answer to this question as the book unfolds, but the short answer is that information management is a blind watchmaker both in evolution and in the mind. That is, from a physical perspective the universe can be thought of as deterministic, so there is no intelligence or free will. But the main thrust of my book is that this doesn’t matter, because algorithms that manage information are predictive, and this capacity is equivalent to both intelligence and free will. So if procedure G0034 is part of a larger system that uses it to effectively predict the future, it can also fairly be called by whatever functional name describes this aspect. Such mnemonics are actually not wishful. It is no illusion that the subroutines of a self-driving car that get it to its destination in one piece wield enormous power and achieve actual goals. This doesn’t mean we are ready to program goals at the level human minds conceive of them (and certainly not UNDERSTAND!), but function, i.e. predictive power, can be broken down into simple examples and implemented using today’s computers.
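A toy Python example of this point: the routine below is named as blandly as G0034, yet if a larger system uses it to predict and avoid collisions, it earns a functional name. All numbers and names here are invented for illustration.

    def g0034(position, velocity, dt):
        # Predict where an obstacle will be after dt seconds (constant velocity).
        return position + velocity * dt

    predict_obstacle_position = g0034  # the mnemonic is just a second name

    # The predictive role, not the label, is what carries the function.
    if predict_obstacle_position(10.0, -2.0, 3.0) <= 5.0:
        print("brake")  # predicted position 4.0 -> take evasive action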

What are the next steps? My main point is that we need to start thinking about how minds achieve function and stop assuming that a breakthrough in neurochemistry will magically solve the problem. We have to solve the problem by solving the problem, not by hoping a better understanding of the hardware will explain the software. While the natural sciences decompose the physical world from the bottom up, starting with subatomic particles, we need to decompose the mental world from the top down, starting (and ending) with the information the mind manages.

An Overview of What We Are

[Brief summary of this post]

What are we? Are we bodies or minds or both? Natural science tells us with fair certainty that we are creatures, one type among many, who evolved over the past few billion years in an entirely natural and explainable way. I certainly endorse broad scientific consensus, but this only confirms bodies, not minds. Natural science can’t yet confirm the existence of minds; we can observe the brain, by eye or with instruments, but we can’t observe the mind. Everything we know (or think we know) about the mind comes from one of two sources: our own experience or hearsay. However comfortable we are with our own minds, we can’t prove anything about the experience. Similarly, everything we learn about the world from others is still hearsay, in the sense that it is information that can’t be proven. We can’t prove things about the physical world; we can only develop pretty reliable theories. And knowledge itself, being information and the ability to apply it, only exists in our minds. Some knowledge appears instinctively, and some is acquired through learning (or so it seems to us). Beyond knowledge, we possess senses, feelings, desires, beliefs, thoughts, and perspectives, and we are pretty sure we can recognize these things in others. All of these mental words mean something about our ability to function in the world, and have no physical meaning in and of themselves. And not incidentally, we also have physical words that let us understand and interact with the physical world even though these words are also mental abstractions, being generalizations about kinds or instances of physical phenomena. We can comfortably say (but can’t prove) that we have a very good understanding of a mentally functional existence that is quite independent of our physical existence, an understanding that is itself entirely mentally functional and not physical. It is this mentally functional existence, our mind, that we most strongly identify with. When we are discussing any subject, the “we” doing the discussing is our minds, not our bodies. While we can identify with our bodies and recognize them as an inseparable possession, they, including our brains, are at least logically distinct entities from our minds. We know (from science) that the brain hosts our mind, but that is irrelevant to how we use our minds (excepting issues concerning the care of our heads and bodies) because our thoughts are abstractions not bound (except through indirect reference) to the physical world.

Given that we know we are principally mental beings, i.e. that we exist more from the perspective of function than of form, what can we do to develop an understanding of ourselves? We need to approach the question from the perspective of function rather than form. We don’t need to study the brain or the body; we need to study what they do and why. Just as convergent evolution caused eyes to evolve independently some 50 to 100 times, all our brain functions evolve because of their value rather than because of their mechanism. Function drives evolution, not form, although form constrains what can be achieved.

But let’s consider the form for a moment before we move on to function. Observations of the brain will eventually reveal how it works in the same way dissection of a computer would. This will illuminate all the interconnections, and even which areas specialize in what kind of tasks. Monitoring neural activation alone could probably even get to the point where one could predict the gist of our thoughts with fair accuracy by correlating areas of neural activity to specific memories and mental states. But that would still be a parlor trick because such a physical reading would not reveal the rationale for the logical relationships in our cognitive models. The physical study of the brain will reveal much about the constraints of the system (the “hardware”), including signal speeds, memory storage mechanisms, and areas of specialized functions, but could it trace our thoughts (the “software”)? To extend the computer analogy, one can study software by doing a memory dump, so a similar memory reading ability for brains could reveal thoughts. But it is not enough to know the software or the thoughts; one needs to know what function is being served, i.e. what the software or thoughts do. A physical examination can’t reveal that; it is a mental phenomenon that can be understood only by reasoning out what it does from a higher-level (generalized) perspective and why. One can figure out what software does from a list of instructions, but one can’t see the larger purposes being served without asking why, which moves us from form to function, from physical to mental. So a better starting point is to ask what function is being served, from which one can eventually back out how the hardware and software do it. Since we are far from being able to decode the hardware or software of the brain (“wetware”) in much detail anyway, I will adopt this more direct functional approach.

From the above, we have finally arrived at the question we need to ask: What function do minds serve? The answer, for which I will provide a detailed defense later on, is that the function of the brain is to provide centralized, coordinated control of the body, and the function of the conscious mind is to provide centralized, coordinated control of the brain. That brains control bodies is, by now, not a very controversial stance. The rest of the body provides feedback to the brain, but the brain ultimately decides. The gut brain does a lot of “thinking” for itself, passing along its hungers and fears, but it doesn’t decide for you. That the conscious mind controls the brain is intuitively obvious but hard to prove given that our only primary information source about the mind is the mind itself, i.e. it is subjective instead of objective. However, if we work from the assumption that the brain controls the body using information management, which is to say the application of algorithms on data, then we can define the mind as what the brain is doing from a functional perspective. That is, the mind is our capacity to do things.

The conscious mind, however, is just a subset of the mind, specifically including everything in our conscious awareness, from sensory input to memories, both at the center of our attention and in a more peripheral state of awareness. We feel this peripheral awareness both because we can tell it is there without dwelling on it and because we often do turn our attention to it, at which point it happily becomes the center. The capacity of our mind to do things is much larger than our conscious awareness, including all things our brains can do for which we don’t consciously sense the underlying algorithm. Statistically, this includes almost everything our brains do. The things we use our minds to do which we can’t explain are said to be done subconsciously, by our subconscious mind. We only know the subconscious mind is there by this process of elimination: we can do it, but we are not aware of how we do it or sometimes that we are doing it at all.

For example, we can move, talk, and remember using our (whole) mind, but we can’t explain how we do these things because they are controlled subconsciously; the conscious mind just pulls the strings. Any explanations I might attempt of the underlying algorithms behind these actions sound like they are at the puppeteer level: I tell my body to move, I use words to talk, I remember things by thinking about them. In short, I have no idea how I really do it. The explanations or understandings available to the conscious mind develop independently of the underlying subconscious algorithms. Our conscious understanding is based only on the information available to conscious awareness. While we are aware of much of the sensory data used by the brain, we have limited access to the subconscious processing performed on that data, and consequently limited access to the information it contains. What ends up happening is that we invent our own view of the world, our own way of understanding it, using only the information we can access through awareness and the subconscious and conscious skills that go with it. This means that our whole understanding of the world (including of ourselves) is woven out of information we derive from our awareness and not from the physical world itself, which we only know second-hand. Like a sculptor, we build a model of the world, similar to the original in as many ways as we can make it feel similar, but at all times just a representation and not the real thing. While we evolved to develop this kind of understanding, it depends heavily on the memories we record over our lifetimes (both consciously accessible and subconsciously not). As the mind develops from infancy, it acquires information from feedback that it can put to use, and it thinks of this information as “knowledge” because it works, i.e. it helps us to predict and consequently to control. To us, it seems that the mind has a hotline to reality. Actually, though, the knowledge is entirely contextual within the mind; it is not reality itself but only representative of it. But through this representation the contexts or models of the conscious mind arise: the conscious mind has no choice but to believe in itself, because that is all it has.

Speaking broadly, subconscious algorithms perform specialized informational tasks like moving a limb, remembering a word, seeing a shape, and constructing a phrase. Consciously, we don’t know how they do it. Conscious algorithms do more generalized tasks, like thinking of ways to find food or making and explaining plans. We know how we do these things because we think them through. Conscious algorithms provide centralized, coordinated control of subconscious (and other conscious) algorithms. Only the top layer of centralized control is done consciously; much can be done subconsciously. For example, all our habitual behavior starts under conscious development and is then delegated to the subconscious going forward. As the control center, though, the buck stops with the conscious mind; it is responsible for reviewing and approving, or, in the case of habitual behavior, preapproving, all decisions. Some recent studies impugn this decisive capacity of the conscious mind with evidence that we make decisions before we are consciously aware of having done so.1 But that doesn’t undermine the role of consciousness; it just demonstrates that to operate with speed and efficiency we can preapprove behaviors. Ideally, the conscious mind can make each sort of decision just once and self-program to reapply that decision as needed going forward without having to repeat the analysis. It is like a CEO who never pulls the trigger himself but has others do it for him, while continually monitoring to see that things are being done right.
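One way to picture preapproval is as caching: deliberate once, then replay the stored decision. This Python sketch uses memoization as a stand-in for habit formation; it is an analogy, not a model of the actual mechanism.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def decide(situation):
        print(f"deliberating about {situation}...")  # the slow, conscious path
        return "step over" if situation == "curb" else "carry on"

    decide("curb")  # deliberates once and caches ("preapproves") the decision
    decide("curb")  # habitual replay: no deliberation this time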

I thus conclude that the conscious mind is a subprocess of the mind that exists to make decisions, and that it does so using perspectives called knowledge that are only meaningful locally (i.e. in the context of the information under its management), contexts distilled from information fed to it by subconscious processes. The conscious mind is separate from the subconscious mind for practical reasons. The algorithmic details of subconscious tasks are not relevant to centralized control. We subconsciously metabolize, pump blood, breathe, blink, balance, hear, see, move, etc. We have conscious awareness of these things only to the degree we need it to make decisions. For example, we can’t control metabolism and heartbeat (at least without biofeedback), and we consequently have no conscious awareness of them. Similarly, we don’t control what we recognize. Once we recognize something, we can’t see it as something else (unless an alternate recognition occurs). But we need to be aware of what we recognize because it affects our decisions. We breathe and blink automatically, but we are also aware we are doing it, so we can sometimes consciously override it. So the constant stream of information from the subconscious mind that flows past our conscious awareness is just the set we need for high-level decisions. The conscious mind is unaware of how the subconscious does these things because such extraneous information would overly complicate its task, slowing it down and probably compromising its ability to lead. We subjectively know the limits of our conscious reach, and we can also see evidence of all the things our brains must be doing for us subconsciously. I suspect this separation extends across the whole animal kingdom, which consists almost entirely of bilateral animals with one brain. Octopuses are arguably an exception, as they have separate brains for each arm, but the central octopus brain must still have some measure of high-level control over them, perhaps in the form of an awareness similar to our consciousness. Whether each arm also has some degree of consciousness is an open question.2 Although a separate consciousness process is not the only possible solution to centralized control, it does appear to be the solution evolution has favored, so I will take it as my working assumption going forward.

One can further subdivide the subconscious mind along functional lines into what are called modules, which are specialized functions that also seem to have specialized physical areas of the brain that support them. Steven Pinker puts it this way:

The mind is what the brain does; specifically, the brain processes information, and thinking is a kind of computation. The mind is organized into modules or mental organs, each with a specialized design that makes it an expert in one arena of interaction with the world. 3
The mind is a set of modules, but the modules are not encapsulated boxes or circumscribed swatches on the surface of the brain. The organization of our mental modules comes from our genetic program, but that does not mean that there is a gene for every trait or that learning is less important than we used to think.4

Positing that the mind has modules doesn’t tell us what they are or how they work. Machines are traditionally constructed from parts that serve specific purposes, but design refinements (e.g. for miniaturization) can lead to a streamlining of parts that are fewer in number but holistically serve more functions. Having been streamlined by countless generations, the modules of the mind can’t be distinguished along functional boundaries as easily as the other parts of the body, because they all perform information management in a highly collaborative way. But if we accept that any divisions we make are preliminary, we can get on with it without getting too caught up in the details. Drawing such lines is reverse engineering: evolution engineered us, and explaining what it did means working backward from the result. Ideally one learns enough from reverse engineering to build a duplicate mechanism from scratch. But living things were “designed” through trillions of small interactions spread over billions of years. We can’t identify those interactions individually, and in any event natural selection doesn’t select for individual traits but for entire organisms, so even with all the data one would be hard-pressed to be sure what caused what. However, if one generalizes, that is, if one applies statistical reasoning, one can distinguish the functional advantages of one trait over another. And considering that all knowledge and understanding are the product of such generalizing, it is a reasonable strategy. Again, it is not the objective of knowledge to describe things “as they are,” only to create models or perspectives that abstract or generalize certain features. So we can and should try to subdivide the mind into modules and guess how they interact, with the understanding that there is more than one way to skin this cat and that greater clarity will come with time.

Subdividing the mind into consciousness and a number of subconscious components will do much to elucidate how the mind provides its centralized control function, but the next most critical aspect to consider is how it manages information. Information derives from the analysis of data, the separation of useful data (the wheat) from noisy data (the chaff). Our bodies use at least two physical mechanisms to record information: genes and memory. Genes are nature’s official book of record, and many mental functions have extensive instinctive support encoded by genes. We have sequenced all our genes but have identified the functions of only some of them. Genes either code for proteins or help or regulate those that do. Their function can be viewed narrowly, as a biochemical role, or more broadly, as the benefit conferred on the organism. We are still a long way from connecting all the genes to their biochemical roles, and further still from connecting them to benefits. Even with good explanations for everything, questions will always remain, because billions of years of subtlety are coded into genes, and models for understanding invariably generalize that subtlety away.

Memory is an organism’s book of record, responsible for preserving any information it gleans from experience, a process also called learning. We don’t yet understand the neurochemical basis of memory, though we have identified some of the chemicals and pathways involved. Nurture (experience) is often steered by nature (instinct) to develop memory. Some of our instinctive skills work automatically without memory but must leverage memory for us to achieve mastery of a learned behavior. We are naturally inclined to learn to walk and talk but are born with no memory of steps or words. So we follow our genetic inclinations, and through practice we record models in memory that help us perform the behaviors reliably.

Genes and memory store information of completely incompatible types and formats. Genetic information encodes chemical structures (either mRNA or proteins) which translate to function mostly through proteins and gene regulation. Memory encodes objects, events and other generalizations which translate to function through indirection, mostly by correlating memory with reality. Genetic information is physical and is mechanically translated to function. Remembered information is mental and is indirectly or abstractly translated to function. While both ultimately get the job done, the mind starts out with no memory as a tabula rasa (blank slate) and assembles and accumulates memory as a byproduct of cogitation. Many algorithmic skills, like vision processing, are genetically prewired, but on-the-job training leverages memory (e.g. recognition of specific objects). In summary, genes carry information that travels across generations while memory carries information transient to the individual.

I mentioned before that culture is another reservoir of information, but it doesn’t use an additional biological mechanism. While culture depends heavily on our genetic nature, significantly on language, we reserve the word culture for additions we make beyond our nature and ourselves. Language is an innate skill; a group of children with no language can create a complete vocabulary and grammar themselves in a few years. Therefore, cultural information is not stored in genes but only in memory, and it is also stored in artifacts as a form of external memory. Each of us forms a unique set of memories based on our own experience and our exposure to culture. What an apple is to each of us is a unique derivation of our lifetime exposure to apples, but we all share general ideas (knowledge) about what one can do with apples. We create memories of our experiences using feedback we ourselves collect. Our memory of culture, on the other hand, is partially based on our own experiences and partially on the underlying cultural information others created. Cultural institutions, technologies, customs, and artifacts have ancient roots and continually evolve. Culture extends our technological and psychological reach, providing new ways to control the world and understand our place in it. While cultural artifacts mediate much of the transmission of culture, most culture is acquired from direct interaction with other people via spoken language or other activities. Culture is just a thin veneer sitting on top of our individual memories, but it is the most salient part to us because it encodes so much of what we can share.

To summarize so far, we have conscious and subconscious minds that manage information using memory. The conscious mind is distinct from the subconscious as the point where relevant information is gathered for top-level centralized control. But why are conscious minds aware? Couldn’t our top-level control process be unaware and zombie-like? No, it could not, and the analogy to zombies or robots reveals why. While we can imagine an automaton performing a task effectively without consciousness, as indeed some automated machines do, we also know that they lack the wherewithal to respond to unexpected circumstances. In other words, we expect zombies and robots to have rigid responses and to be slow or ineffective in novel situations. This intuition we have about them results from our belief that simple tasks can be automated, but very general tasks require generalized thinking, which in turn requires consciousness. I’m going to explain why this intuition is sound and not just a bias, and in the process we will see why the consciousness process must be aware of what it is doing.

I have so far described the consciousness process as being a distinct subprocess of the mind which is supplied just the information relevant to high-level decisions from a number of subconscious processes, many of them sensory but also memory, language, spatial processing, etc. Its task is to make high-level decisions as efficiently and efficaciously as possible. I can’t prove that this design is the only possible way of doing things, but it is the way the human mind is set up. And I have spoken in general about how knowledge in the mind is contextual and is not identical to reality but only representative of it. But now I am going to look closer at how that representative knowledge causes a mind to “believe in itself” and consequently become aware. It is because we create virtual worlds (called mental models, or models for short) in our heads that look the same as the outside world. We superimpose these on the physical world and correlate them so closely that we can usually ignore the distinction. But they could not be more different. One of them is out there, and the other in here. One exists only physically, the other only mentally (albeit with the help of a physical computational mechanism, the brain). One is detailed down to atoms and then quarks, while the other is a network of generalizations with limited detail, but extensive association. For this reason, a model can be thought of as a simplified, cartoon-like representation5 of physical reality. Within the model, one can do simple, logical operations on this abridged representation to make high-level decisions. Our minds are very handy with models; we mostly manage them subconsciously and can recognize them much the same way we recognize objects. We automatically fit the world to a constellation of models we manage subconsciously using model recognition.

So the approach consciousness uses to make top-level decisions is essentially to run simulations: it builds models that correlate well with physical conditions and then projects those models into the future to simulate what will happen. Consciousness includes models of future possibilities and models of current and past experiences as we observed them. We can’t remember the past as it actually was, only how we experienced it through our models. All our knowledge is relative to these models, which in turn relate indirectly to physical reality. But where does awareness fit in? Awareness is just the data managed by this process. We are aware of all the information relevant to top-level decisions because our conscious selves are this consciousness process in the brain. Not all the data within our awareness is treated equally. Since much more information is sensed and recognized than is needed for decisions, the data is funneled down further through an attention process that focuses on just select items in consciousness.6 As I noted before, we can apply our focusing power at will to anything within our conscious awareness to pull it into attention, but our subconscious attention process also continually identifies noteworthy stimuli for us to focus on, which it does by “listening” for signals that stand out from the norm. We know from experience that although we are aware of a lot of peripheral sensory information and peripheral thoughts floating around in our heads at any given point in time, we can only actively think about one thing at a time, in what seems to us a train of thought where one thought follows another. This linear, plodding approach to top-level decision-making ensures that the body will make just one coordinated action at a time, so that we don’t have to compete with ourselves like a committee every time we do something.
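Decision-by-simulation can be sketched in a few lines of Python. The “world” here is a deliberately cartoonish model, and the scoring rule is a hypothetical stand-in for our preferences; the point is only the shape of the loop: project each candidate action forward through the model and act on the best predicted outcome.

    world = {"hunger": 7, "distance_to_food": 4}  # a simplified internal model

    def simulate(state, action):
        # Project the model one step into the future and score the predicted state.
        s = dict(state)
        if action == "walk to food":
            s["distance_to_food"] -= 1
        elif action == "eat" and s["distance_to_food"] == 0:
            s["hunger"] = 0
        return -s["hunger"] - s["distance_to_food"]  # higher is better

    actions = ["walk to food", "eat", "rest"]
    print(max(actions, key=lambda a: simulate(world, a)))  # -> walk to food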

Let’s think again about whether minds could be robotic. Self-driving cars, for example, are becoming increasingly capable of executing learned behaviors, and even of expanding their proficiency dynamically, without any need for awareness, consciousness, reasoning, or meaning. But even a very good learned behavior falls far short of the range of responses that animals need to compete in an evolutionary environment. Animals need a flexible ability to assess and react to situations in a general way, that is, by considering a wide range of past experience. The modeling approach I propose for consciousness can do that. If we programmed a robot to use this approach, it would both internally and externally behave as if it were aware of the data presented to it, which is wholly analogous to what we do. It will have been programmed with a consciousness process that treats access to data as “awareness”. Could we conclude that it had actually become aware? I think we could, because it meets the logical requirements, although this doesn’t mean robotic awareness would be as rich an experience as our own. A lot goes into the richness of our experience from billions of years of tweaks, and it would take us a long time to replicate that faithfully in artificial minds. But it is presumptuous of us to think that our awareness, which is entirely a product of data interpretation, is exclusive just because we are inclined to feel that way.

Let me talk for a moment about that richness of experience. How and why our sensory experiences (called qualia) feel the way they do is what David Chalmers has famously called the hard problem of consciousness. The problem is only hard if you are unwilling to see consciousness as a subroutine in the brain that is programmed to interpret data as feelings. It works exactly the way it does because it is the most effective way that has evolved to get bodies to take all the steps they need to survive. As will be discussed in the next section, qualia are an efficient way to direct data from many external channels simultaneously to the conscious mind. The channels and the attention process focus the relevant data, but the quality or feeling of the qualia results from subconscious influences the qualia exert. Taste and smell simplify chemical analyses into a kind of preference for the conscious mind. Color and sound can warn us of danger or calm us down. These qualia seem almost supernatural, but they actually just neatly package up associations in our minds so we will feel like doing the things that are best for us. Why do we have a first-person experience of them? Here, too, there is nothing special going on. First-person is just the name we give to this kind of processing. If we look at our own, or someone else’s, conscious process from a more third-person perspective, we can see that what sets it apart is just the flood of information from subconscious processes giving us a continuous stream of sensations and skills that we take for granted. First person just means being connected so intimately to such a computing device.

Now think about whether robots can be conscious. Self-driving cars use a specialized algorithm that consults millions of hours of driving experience to pick the most appropriate responses. These cars don’t reason out what might happen in different scenarios in a general way. Instead, they use all that experience to look up the right answer, more or less. They still use internal models for pedestrians, other cars, roads, etc., but once they have modeled the basic circumstances they just look up the best behavior rather than reasoning it out generally. As we start to build robots that need more flexibility, we may well design the equivalent of a conscious subprocess, i.e. a higher-level process that reasons with models. If we also give it qualia that color its preferences around its sensory inputs in preprogrammed (“subconscious”) ways to simplify the task at the conscious level, then we will have built a consciousness similar to our own. But while such a robot may technically meet my definition of consciousness, and may even sometimes convince people that it is human (i.e. pass the Turing test), that alone won’t mean it experiences qualia anywhere near as rich as our own, because we have more qualia, encoding more preferences in a highly interconnected and seamless way, following billions of years of refinements. Brains and bodies are an impressive accomplishment. But they are ultimately just machines, and it is theoretically possible to build them from scratch, though not with the approaches to building we have today.

The Certainty Engine

The Certainty Engine: How Consciousness Arose to Drive Decisions Through Rationality

The mind’s organization as we experience it revolves around the notion of certainty. It is a certainty engine, designed to enable us to act with the full expectation of success. In other words, we don’t act confidently because we are brash, but because we are certain. It is a surprising capacity, given that we know the future is unknowable. We know we can’t be certain about the future, and yet at the same time we feel certain. That feeling comes from two sources, one logical and one psychological.

Logically, we break the world down into chunks that follow rules of cause and effect. We gather these chunks and rules into mental models (models for short), where certainty is possible because we make the rules. When we think logically, we are using these mental models to think about the physical world, because logic, and cause and effect, exist only in the models; they exist mentally but not physically. Cause and effect are illusions of the way we describe things, very near and dear to our hearts but not scientific realities. The universe follows its clockwork mechanism according to its design, and any attempt to explain what “caused” what after the fact is going to be a rationalization. That is not necessarily a bad thing, but it does necessarily mean simplifying down to an explanatory model in which cause and effect become meaningful concepts. Consequently, if something is true in a model, then it is a logical certainty in that model. We are aware on some level that our models are simplifications that won’t perfectly match the physical world, but on another level we are committed to our models because they are the world as we understand it.

Psychologically, it wouldn’t do for us to be too scared to ever act for fear of making a mistake, so once our confidence reaches a given threshold, we leap. Sometimes our models will serve us well, and sometimes they will fail us. Most of our actions succeed, because most of our decisions are habitual and momentary, like putting one foot in front of the other. Yes, we know we could stub our toe on any step, and we have a model for that, but we rarely think about it. Instead, we delegate such decisions to our subconscious minds, which we trust both to avoid obstacles and to alert us to them as needed, that is, when avoidance is more of a challenge than the subconscious is prepared to handle. For any decision more challenging than habit can handle, we try to predict what will happen, especially with regard to what actions we can take to change the outcome. In other words, we invoke models of cause and effect. These models stipulate that certain causes have certain effects, so the model renders certainty. If I go to the mailbox to get the mail and the mailman has come today, I am certain I will find today’s mail. Our plans fail when our models fail us: we didn’t model the situation well enough, either because there were things we didn’t know or because our conclusions were insufficiently justified. The real world is too complicated to model perfectly, but all that matters is that we model it well enough to produce predictions that are good enough to meet our goals. Our models simplify to imply logical outcomes that are more likely than chance to come true. This property, which separates information from noise, is why we believe a model, which is to say we are psychologically prepared to trust the certainty we feel about it enough to act on it and face the consequences.

What I am going to examine is how this kind of imagination arose, why it manifests in what we perceive as consciousness, and what it implies for how we should lead our lives.

To the fundamental question, “Why are we here?”, the short answer is that we are here to make decisions. The long answer will fill this book, but to elaborate some, we are physically (and mentally) here because our evolutionary strategy for survival has been successful. That mental strategy, for all but the most primitive of animals, includes being conscious with both awareness and free will, because those capacities help with making decisions, which translates to acting effectively. Decision-making involves capturing and processing information, and information is the patterns hiding in data. Brains use a wide variety of customized algorithms, some innate and some learned, to leverage these patterns to predict the future. To the extent these algorithms do not require consciousness I call them subrational. If all of them were subrational then there would be no need for subjective experience; animals could go about their business much like robots without any of the “inner life” which characterizes consciousness. But one of these talents, reasoning, mandates the existence of a subjective theater, an internal mental perspective which we call consciousness, that “presents” version(s) of the outside world to the mind for consideration as if they were the outside world. All but the simplest of animals need to achieve a measure of the certainty of which I have spoken and to do that they need to model worlds and map them to reality. This capacity is called rationality. It is a subset of the reasoning process, with the balance being our subrational innate talents, which proceed without such modeling (though some support it or leverage it). Rationality mandates consciousness, not as a side effect but because reasoning (which needs rationality) is just another way of describing what consciousness is. That is, our experience of consciousness is reasoning using the faculties we possess that help us do so.

At its heart, rationality is based on propositional logic, a well-developed discipline that consists of propositions and rules that apply to them. Propositions are built from concepts, which are references that can be about, represent, or stand for things, properties, and states of affairs. Philosophers call this “aboutness” that concepts possess “intentionality”, and divide mental states into those that are intentional and those that are merely conscious, i.e. feelings, sensations, and experiences in our awareness1. To avoid confusion and ambiguity, I will henceforth simply call intentional states “concepts” and conscious states “awareness”. Logic alone doesn’t make rationality useful; concepts and conclusions have to connect back to the real world. To accomplish this, they are built on an extensive subrational infrastructure, and understanding that infrastructure is a big part of understanding how the mind works.
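
Because propositional logic is a well-defined discipline, it is one of the few pieces of this picture we can sketch directly in code. In this hypothetical Python fragment, named propositions stand in for concepts, and rules derive new propositions from old ones; the names and the forward-chaining scheme are mine, chosen only for illustration:

```python
# A toy propositional-logic sketch. Concepts ("mailman_came",
# "mail_in_box") are reduced to named propositions; rules derive
# new propositions from old ones. All names are hypothetical.

facts = {"mailman_came": True, "today_is_sunday": False}

# Each rule: if all premises hold, the conclusion holds.
rules = [
    ({"mailman_came"}, "mail_in_box"),
    ({"mail_in_box"}, "worth_walking_to_mailbox"),
]

def infer(facts, rules):
    """Forward-chain: keep applying rules until nothing new is derived."""
    derived = {p for p, v in facts.items() if v}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(facts, rules))
# prints all three derived propositions (set order may vary)
```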

So let’s look closer at the attendant features of consciousness and how they contribute to rationality. Steven Pinker distinguishes four “main features of consciousness — sensory awareness, focal attention, emotional coloring, and the will.”2 The first three of these are subrational skills and the last is rational. Let’s focus on the subrational skills for now; we will get to the rational will, the mind’s control center, further down. The mind also has many more kinds of subrational skills, sometimes called modules. I won’t dwell on the exact boundaries or roles of modules, as those are inherently debatable, but I will call out a number of abilities as being modular. Subrational skills are processed subconsciously, so we don’t consciously sense how they work; to us, they appear to work magically. Yet we do have considerable conscious awareness of, and sometimes control over, these subrational skills, so I don’t simply call them “subconscious”. I am going to briefly discuss our primary subrational skills.

First, though, let me more formally introduce the idea of the mind as a computational engine. The idea that computation underlies thinking goes back nearly 400 years to Thomas Hobbes, who said “by reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract.” Alan Turing and Claude Shannon developed theories of computability and information from the 1930s to the 1950s that led to Hilary Putnam formalizing the Computational Theory of Mind (CTM) in 1961. As Wikipedia puts it, “The computational theory of mind holds that the mind is a computation that arises from the brain acting as a computing machine, [e.g.] the brain is a computer and the mind is the result of the program that the brain runs”. This is not to suggest it is done digitally as our computers do it; brains use a highly customized blend of hardware (neurochemistry) and software (information encoded with neurochemistry). At this point we don’t know how it works except for some generalities at the top and some details at the bottom. Putnam himself abandoned CTM in the 1980s, though in 2012 he returned to a qualified version. I consider myself an advocate of CTM, provided it is interpreted from a functional standpoint. It does not matter how the underlying mechanism works; what matters is that it can manipulate information, which, as I have noted, does not consist of physical objects but of patterns that help predict the future. So nothing I am going to say in this book depends on how the brain works, though it will not be inconsistent with it either. While we have undoubtedly learned some fascinating things about the brain in recent years, none of it is proven, and in any case it is still much too small a fraction of the whole to support many conclusions. So I will speak of the information management done in the brain as being computational, but that doesn’t imply numerical computations; it only implies some mechanism that can manage information. I believe that because information is abstract, the full range of computation done in human minds could be done on digital computers. At the same time, different computing engines are better suited to different tasks, because the time it takes to compute can be considerable; to perform well, an engine must be finely tuned to its task. For some tasks, like math, digital computers are much better suited than human brains. For others, like assessing sensory input and making decisions that match that input against experience, they are worse (for now). Although we are a long way from being able to tune a computer to its task as well as our minds are tuned to our task of survival, computers don’t have to match us in all ways to be useful. Our bodies are efficient, mobile, self-sustaining, and self-correcting units. Computers don’t need to be any of those things to be useful, though it helps, and we are making improvements in these areas all the time.

So knowing that something is computed and knowing how it is done are very different things. We still have only vague ideas about the mechanisms, but we can deduce much about how the mind works just by knowing it is computational. We know the brain doesn’t use digital computing, but there are many approaches to information processing, and the brain leverages a number of them. Most of the deductions I will promote here center on the distinction between computations done consciously (and especially under conscious attention) and those done subconsciously. We know the brain performs much information processing of which we have no conscious awareness, including vision, associative memory lookup, language processing, and metabolic regulation, to name a few kinds. We know the subconscious uses massively parallel computing, as this is the only way such tasks could be completed quickly and thoroughly enough. Further, we know that the conscious mind largely feels like a single train of thought, though it can jump around a lot and can sense different kinds of things at the same time without difficulty.
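
That parallel-in, serial-out shape can be caricatured in code. In this hypothetical sketch, several “subconscious” workers run concurrently while a single consumer processes their results one at a time; only the shape of the architecture is the point:

```python
import queue
import threading

# A caricature of the architecture: many subconscious workers run in
# parallel, but their results funnel into one serial "train of thought".
# Entirely hypothetical; only the parallel-in, serial-out shape matters.

to_consciousness = queue.Queue()

def subconscious_worker(name, stimulus):
    # Imagine heavy pattern-matching happening here, in parallel.
    to_consciousness.put(f"{name}: processed {stimulus!r}")

workers = [
    threading.Thread(target=subconscious_worker, args=(n, s))
    for n, s in [("vision", "red ball"), ("hearing", "dog bark"),
                 ("memory", "where I left my keys")]
]
for w in workers:
    w.start()
for w in workers:
    w.join()

# The conscious mind: a single loop, one thought at a time.
while not to_consciousness.empty():
    print(to_consciousness.get())
```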

Looking at sensory awareness, we internally process sensory information into qualia (singular quale, pronounced kwol-ee), which are how each sense feels to us subjectively. This processing is a computation and the quale is a piece of data, nothing more, but we are wired to attach a special significance to it subjectively. We can think of the qualia as data channels into our consciousness. Consciousness itself is a computational process that interprets the data from each of these channels in a different way, which we think of as a different kind of feeling, but which is really just data from a different channel. Beyond this raw feel, we recognize shapes, smells, sounds, etc. via the subrational skills of recollection and recognition, which bring experiences and ideas we have filed away back to us based on their connections to other ideas or their characteristics. This information is fed through a memory data channel. Interestingly, the memory of qualia has some of the feel of first-hand qualia but is not as “vivid” or “convincing”, though sometimes in dreams it can seem to be. This is consistent with the idea that our memory can hold some but not all of the information the data channels carried.
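
The channel metaphor suggests a dispatch table: the same mechanism yields a different “feel” depending on which channel the data arrived on. Here is a hypothetical sketch along those lines (none of these names describe neural reality):

```python
# Qualia as data channels: identical machinery, different
# interpretation per channel. A hypothetical sketch only.

def interpret_color(data):
    return f"a visual feel: {data}"

def interpret_sound(data):
    return f"an auditory feel: {data}"

def interpret_memory(data):
    # Memory of qualia: same channel style, but flagged as less vivid.
    return f"a faded, remembered feel: {data}"

channels = {
    "vision": interpret_color,
    "hearing": interpret_sound,
    "memory": interpret_memory,
}

def experience(channel, data):
    """Consciousness interprets data according to its channel."""
    return channels[channel](data)

print(experience("vision", "red"))
print(experience("memory", "red"))  # same data, different channel, different feel
```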

Two core subrational skills let us create and use concepts: generalizing and modeling. Generalization is the ability to recognize patterns and to group things, properties, and ideas into categories called concepts. I consider it the most important mental skill. Generalizations are abstractions, not of the physical world but about it. A concept is an internal reference to a generalization in our minds that lets us think about the generalization as a unit or “thing”. Rational thought in particular works only with concepts as building blocks, not with sensations or other kinds of preconceptual ideas. Modeling itself is a subrational skill that builds conceptual frameworks heavily supported by preconceptual data. We can take considerable conscious control of the modeling process, but the “heavy lifting” is still both subrational and subconscious, just something we have a knack for. This is not surprising; our minds make the work of being conscious seem very easy to us so that we can focus with relative ease on making top-level decisions.
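
A toy version of generalization can be sketched as extracting the features shared by a set of examples and treating that shared core as the concept. This is, again, a hypothetical caricature, far simpler than whatever the mind actually does:

```python
# Generalization as a toy: a "concept" is the set of features shared
# by its examples; recognition checks whether a new thing fits.
# Hypothetical names and logic throughout.

def generalize(examples):
    """Form a concept: the features common to all examples."""
    return set.intersection(*(set(e) for e in examples))

def matches(concept, thing):
    """A thing falls under the concept if it has all shared features."""
    return concept <= set(thing)

birds = generalize([
    {"feathers", "beak", "flies", "small"},
    {"feathers", "beak", "flies", "large"},
    {"feathers", "beak", "swims"},        # penguins complicate things
])
print(birds)                                          # -> {'feathers', 'beak'} (order may vary)
print(matches(birds, {"feathers", "beak", "sings"}))  # -> True
print(matches(birds, {"fur", "four legs"}))           # -> False
```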

There are countless ways we could break down our many other subrational skills, with logical independence from each other and location in the brain being good criteria. Harvard psychologist Howard Gardner identified eight types of independent “intelligences” in his 1983 book Frames of Mind: The Theory of Multiple Intelligences3: musical, visual-spatial, verbal-linguistic, logical-mathematical, bodily, interpersonal, intrapersonal, and naturalistic. MIT neuroscientist Nancy Kanwisher in 2014 identified specific brain regions that specialize in shapes, motion, tones, speech, places, our bodies, face recognition, language, theory of mind (thinking about what other people are thinking), and “difficult mental tasks”.4 As with qualia and memory, most of these skills interact with consciousness via their own kind of data channel.

Focus itself is a special subrational skill: the ability to weigh matters pressing on the mind for attention and then to grant attention to those judged most important. Rather than providing an external data channel into consciousness, focus controls the data channel between conscious awareness and conscious attention. Focusing itself is subrational, and so its inner workings are subconscious, but it appears to select the thoughts it sends to our attention by filtering out repetitive signals and calling attention to novel ones. We can only apply reasoning to thoughts under attention, though we can draw on our peripheral awareness of things out of focus to bring them into focus. While focus works automatically to bring interesting items to our attention, we have considerable conscious control to keep our attention on anything already there.
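
Habituation of this kind, suppressing repeated signals while promoting novel ones, is easy to caricature in code. A hypothetical sketch:

```python
# Focus as a novelty filter: repetitive signals are habituated away,
# novel ones are promoted to attention. A hypothetical caricature.

def focus(signals, seen=None):
    """Yield only signals not recently seen; habituate the rest."""
    seen = set() if seen is None else seen
    for signal in signals:
        if signal not in seen:
            seen.add(signal)
            yield signal    # novel: promote to attention

stream = ["hum of fridge", "hum of fridge", "door slam",
          "hum of fridge", "phone ringing"]
print(list(focus(stream)))
# -> ['hum of fridge', 'door slam', 'phone ringing']
```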

Drives are another special kind of subrational skill that can feed consciousness through data channels with qualia of their own. A drive is logically distinct from the other subrational skills in that it creates a psychological need, a “negative state of tension”, that must be satisfied to alleviate the tension. Drives are a way of reducing psychological or physiological needs to abstractions that can be used to influence reasoning, to motivate us:

A motive is classified as an “intervening variable” because it is said to reside within a person and “intervene” between a stimulus and a response. As such, an intervening variable cannot be directly observed, and therefore, must be indirectly observed by studying behavior.5

Just rationally thinking about the world using models or perspectives doesn’t by itself give us a preference for one behavior over another. Drives solve that problem. While some decisions, such as whether our heart should beat, are completely subconscious and don’t need motivation or drive, others are subconscious yet can be temporarily overridden consciously, like blinking and breathing. These can be called instinctive drives because we start to receive painful feedback if we stop blinking or breathing. Others, like hunger, require a conscious solution, but the solution is still clear: one has to eat. Emotions have no single response that can resolve them, but instead provide nuanced feedback that helps direct us to desirable objectives. Our emotional response is very context-sensitive in that it depends substantially on how we have rationally interpreted, modeled and weighed our circumstances. But emotional response itself is not rational; an emotional response is beyond our conscious control. Since it depends on our rational evaluation of our circumstances, we can ameliorate it by reevaluating, but our emotions have access to our closely-held (“believed”) models and can’t be fooled by those we consider only hypothetically.
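
One way to picture this is as a scoring scheme in which each drive’s tension biases, without dictating, the choice among actions. The following sketch is hypothetical through and through; the numbers and names are invented for illustration:

```python
# Drives as "negative states of tension" that bias decisions without
# dictating them. Hypothetical sketch: each drive adds urgency to the
# actions that would relieve it; reasoning weighs them together.

drives = {"hunger": 0.7, "fatigue": 0.3}   # tension levels, 0..1

# Which actions relieve which drives (toy data).
relieves = {
    "eat dinner": {"hunger": 0.9},
    "take a nap": {"fatigue": 0.8},
    "keep working": {},
}

def most_motivated(drives, relieves):
    """Score each action by how much total tension it relieves."""
    def score(action):
        return sum(drives.get(d, 0) * amount
                   for d, amount in relieves[action].items())
    return max(relieves, key=score)

print(most_motivated(drives, relieves))  # -> eat dinner
```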

We have more than just one drive (to survive) because our rational interactions with the world break down into many kinds of actions, including bodily functions, making a living, having a family, and social interactions.6 Emotions provide a way of encoding beneficial advice that can be applied by a subjective, i.e. conscious, mind that uses models to represent the world. In this way, drives can exert influence without simply forcing a prescribed instinctive response. And it is not just “advice”; emotions also insulate us from being “reasonable” in situations where rationality would hurt more than help. Our faces betray our emotions so others can trust us.7 Romantic love is a very useful subrational mechanism for binding us to one other person as an evolutionary strategy. It can become frustratingly out of sync with rational objectives, but it has to have a strong, irrational, even mad, pull on us if it is to work.8

Although our conscious awareness and attention exist to support rationality, this doesn’t mean people are rational beings. We are partly rational beings who are driven by emotions and other drives. Rather than simply prescribing the appropriate reaction, drives provide pros and cons, which allow us to balance our often conflicting drives against each other by reasoning out the consequences of various solutions. For any system of conflicting interests to persist in a stable way, it has to develop rules of fair play, or each interest will simply fight to the death, bringing the system down. Fair play, also known as ethics, translates to respect: interests should respect each other to avoid annihilation. This applies both to our own competing drives and to our interpersonal relationships. The question is, how much respect should one show, on a scale from “me first” to “me last”? Selfishness and cooperation have to be balanced in each system accordingly. The ethical choice is presumably one that produces a system that can survive for a long time. Living systems all embrace differing degrees of selfishness and cooperation, which supports this point. Since natural living systems have been around a long time, they can’t be unethical by this definition, so any selfishness they contain is justified by that fact. Human societies, on the other hand, may overbalance toward either selfishness or cooperation, leading to societies that fail, either by actually collapsing or by under-competing with other societies, which eventually leads to their replacement.

And so it is that our conscious awareness becomes populated with senses, memories, emotions, language, etc., which are then focused by our power of attention for the consideration of our power of reasoning. Of this Steven Pinker says:

The fourth feature of consciousness is the funneling of control to an executive process: something we experience as the self, the will, the “I.” The self has been under assault lately. The mind is a society of agents, according to the artificial intelligence pioneer Marvin Minsky. It’s a large collection of partly finished drafts, says Daniel Dennett, who adds, “It’s a mistake to look for the President in the Oval Office of the brain.”
The society of mind is a wonderful metaphor, and I will use it with gusto when explaining the emotions. But the theory can be taken too far if it outlaws any system in the brain charged with giving the reins or the floor to one of the agents at a time. The agents of the brain might very well be organized hierarchically into nested subroutines with a set of master decision rules, a computational demon or agent or good-kind-of-homunculus, sitting at the top of a chain of command. It would not be a ghost in the machine, just another set of if-then rules or a neural network that shunts control to the loudest, fastest or strongest agent one level down.9
The reason is as clear as the old Yiddish expression, “You can’t dance at two weddings with only one tuches.” No matter how many agents we have in our minds, we each have exactly one body.10

While it may be only Pinker’s fourth feature, it is the whole reason for consciousness. We have a measure of conscious awareness and control over our subrational skills only so that they can help with reasoning and thereby allow us to make decisions. This culmination in a single executive control process is a logical necessity given one body, but that it should be conscious or rational is not so much necessary as useful. Rationality is a far more effective way to navigate an uncertain world than habit or instinct. Perhaps we don’t need to create a model to put one foot in front of the other or to chew a bite of food. But paths are uneven and food quality varies. By modeling everything in many degrees of detail and scope, we can reason out solutions better than the more limited heuristic approaches of subrational skills can. Reasoning brings power, but it can only work if the mind can manage multiple models and map them to and from the world, and that is a short description of what consciousness is. Consciousness is the awareness of our senses, the creation (modeling) of worlds based on them, and the combined application of rational and subrational skills to make decisions. Our decisions all have some degree of rational oversight, though we can, and do, grant our subrational skills (including learned behaviors) considerable free rein so we can focus our rational energies on the more novel aspects of our circumstances.
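
Pinker’s “set of master decision rules” can be caricatured in a few lines: an arbiter that gives the reins to whichever agent bids most strongly, because one body permits only one action at a time. A hypothetical sketch:

```python
# Pinker's "good kind of homunculus" as code: a trivial decision rule
# that shunts control to whichever agent shouts loudest, because one
# body can take only one action at a time. Hypothetical throughout.

def executive(agents):
    """Shunt control to the strongest bid one level down."""
    action, _strength = max(agents.items(), key=lambda kv: kv[1])
    return action

bids = {
    "flinch from the hot stove": 0.95,   # reflex shouting loudly
    "finish the sentence": 0.40,
    "daydream about lunch": 0.20,
}
print(executive(bids))   # one body, one action: flinch wins
```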

Putting the shoe on the other foot, could reasoning exist robotically, without the inner life that characterizes consciousness? No, because what we think of as consciousness is mostly about running simulations on models we have created to derive implications and reactions, and measuring our success with sensory feedback. It would feel correct to us to label a robot doing those things as conscious, and it would be able to pass any test of consciousness we cared to devise. It, like us, would metaphorically have only one foot in reality, while its larger sense of “self” would be conjecturing and tracking how those conjectures played out. For the conscious being, life is a game played in the head that somewhat incidentally requires good performance in the physical world. Of course, evolved minds must deliver excellent performance, as only the fittest survive. A robot consciousness, on the other hand, could be given different drives to fit a different role.

To summarize, one can draw a line between conscious beings and those lacking consciousness by dividing thoughts into a conceptual layer and the support layers beneath it. In the conceptual layer, information has been generalized into packets called concepts, which are organized into models that gather together the logical relationships between concepts. The conceptual layer itself is an abstraction, but it connects back to the real world whenever we correlate our models with physical phenomena. This ability to correlate is another major subrational skill, though it can be considered a subset of our modeling ability. Beneath the conceptual layer are preconceptual layers or modules, which consist of both information and algorithms that capture patterns in ways that have proven useful. While the rational mind sees only the conceptual layer, some subrational modules use both preconceptual and conceptual data. Emotions are the most interesting example of a subrational skill that uses conceptual data: to arrive at an emotional reaction we have to reason out whether we should feel good or bad, and once we have done that, we experience the feeling so long as we believe the reasoning (though feelings will fade if their relevance does). Our feelings change only if our underlying reasoning shifts, which will happen quickly if we discover a mistake, or slowly as our reasoned perspective evolves over time.
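
As a final illustration, the conceptual layer can be caricatured as a data structure: concepts as packets, models as collections of logical relationships between them, and correlation as checking those relationships against observation. Everything here is hypothetical shorthand for the summary above:

```python
# The conceptual layer as a toy data structure: concepts are packets,
# models gather logical relationships between them, and "correlating"
# maps a model back onto observations. Every name is hypothetical.

class Model:
    def __init__(self):
        self.relations = []          # (concept, relation, concept)

    def add(self, a, relation, b):
        self.relations.append((a, relation, b))

    def correlate(self, observations):
        """Check each modeled relation against observed facts."""
        return {rel: rel in observations for rel in self.relations}

weather = Model()
weather.add("dark clouds", "precede", "rain")
weather.add("rain", "causes", "wet streets")

observed = {("dark clouds", "precede", "rain")}
print(weather.correlate(observed))
# relations borne out by observation map to True, the rest to False
```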

One can picture the mind then as a tree, where the part above the ground is the conceptual layer and the roots are the preconceptual layers. Leaves are akin to concepts and branches to models. Leaves and branches are connected to the roots and draw support from them. The above-ground, visible world is entirely rational, but reasoning without connecting back to the roots would be all form and no function. So, like a tree “feeling” its roots, our conscious awareness extends underground, anchoring our modeled creations back to the real world.