The Story of the Mind: A New Scientific Perspective on Our Essential Nature

We all have minds. We use them continuously every waking moment of every day. We take what they can do for granted. Why we have them and how they work has no direct bearing on our lives, so we mostly just carry on and don’t worry about it. It’s remarkable that all understanding, scientific or otherwise, depends critically on our ability to use our minds, yet we don’t have to understand them to use them, and thus far we have failed to do so. Similarly, we can use software without having read its source code, much less having written it. The “software” of our minds, usually called wetware, was “written” to meet our needs in the ancestral state, not the rapidly changing and increasingly artificial environment we have created. We can’t afford to remain mere users; we have to understand what makes us tick and even tweak or upgrade our programming if we want to survive in the long run. To get started, we have to find a way to make minds and ideas into objects of study themselves.

But how should we go about it? The Greeks started with the psyche, which is analogous to what we would call the soul. Aristotle wrote in Peri psyche that the psyche is that which makes the body alive and able to perform its characteristic functions. He divided its powers into vegetative powers, concerned with nutrition and growth; sensory powers (that is, vision, hearing, taste, smell, and touch, as well as the internal senses of imagination and memory); and intellectual powers (understanding, assertion, and discursive thinking).1 From my perspective, this is pretty close; closer than anyone has come since. The theory I will develop here will corroborate his view. What Aristotle had that has been in short supply lately is a broad mandate. Science did not yet exist, so he created it, substantially filling in the major branches. As the tree of science has grown, it has become less fashionable and feasible to address the big picture with fresh eyes the way he did.
Science has trended toward specialization, not generalization. There are perfectly good reasons for this, which I will address later, but suppose we take it as a challenge. What if our understanding of the mind has been held back by the way science has branched, leading to detailed study in specialized areas while missing the forest for the trees? What if I took on the broad mandate to explain the mind from first principles, rethinking the structure of science and what it means in relation to the mind?

That challenge has become a raison d’être quest for me. I’ve always been just a bit obsessed with examining my own thought processes to get to the bottom of it all. We all have thoughts about our thoughts but don’t expect to make a career of it, so I was not surprised to find no obvious path forward in college. I started out focusing on genetics, but it was all lab work in those days and I am more of a theorist than an experimentalist. I turned my attention to computers but saw no promise in the artificial intelligence of the time, which was based entirely on representation and logic. I put my thoughts on the matter aside as something to get back to in the future and settled into a career in systems and application programming, which kept me off the streets. But it kept bothering me, so in 1996 I started writing a book to explain the mind. With a family and a full-time job, I was only able to make sporadic progress until I retired in 2016, at which time I decided to fulfill the quest. I’ve rewritten the first few chapters dozens of times as my ideas have evolved.

Many other people have also been thinking about the problem. Unraveling the mind has become something of an international obsession over the past fifty years. But I don’t think many have looked at it with a broad mandate and fresh eyes. It’s all dividing and no conquering, because this is not a problem that can be solved with specialization. We need to step back to a state of maximal generalization and from there start to focus in. I am not here to refute any of the findings of science. I am here to embrace them. But our scientific knowledge that bears on the mind is scattered and does not speak to the nature of mind confidently from the top down. Different schools of thought have evolved to cover different aspects but have culminated only in a welter of conflicting views. I’m going to try to develop a firm foundation for a comprehensive view that integrates our scientific knowledge into one framework.

My approach is scientific, but to achieve that we have to agree on what it means to be scientific. For starters, I will take on the philosophy of science itself, both defining meaning in science and providing an expanded framework of what science should be. Science is founded on educated guesswork, by which I mean proposing hypotheses to explain phenomena. One then tests the hypotheses, which either confirms them or highlights the need for new hypotheses. All practicing scientists are expected to conduct original scientific research, which includes both new hypotheses and new experiments to test them. I am not an experimentalist; I am a synthesist. My goal is not to make new scientific discoveries, but to reorganize existing scientific knowledge into a more explanatory framework. Consequently, I will only be proposing hypotheses that are already supported by abundant evidence. My claims, as I state them initially, may not seem adequately supported, but as the book proceeds I will fill in the gaps. It is not my intention to be contentious or even controversial, as I am only seeking to form a larger accord in scientific thought, which is necessary to propose and advance theories of the mind. Keep your eyes open for any claims that contradict settled science, and feel free to call me out on them.

I take heart from Joscha Bach’s essay Is Scientific Genius a Thing of the Past? on the current sad state of paradigm shifting in the sciences. Bach argues correctly that many sciences are in dire need of a revolution, but there just isn’t a framework for uprooting the status quo. As he puts it in the case of cognitive science, it is “a bunch of incompatible methodologies competing for the same funding bucket” which has rather foolishly put most of its eggs in the brain scanning basket. Once paradigms have taken hold, as Thomas Kuhn taught us, strong sociological forces entrench them and make it hard for new paradigms to overtake them. We don’t need to tear science down and build it up again, but we do need to reform from within to include the foundations of science in scientific discussions. Scientists know that science must always iterate and cannot produce absolute knowledge, so they must also admit that the foundations are not absolute either. Instead of just propping up status quo paradigms, every paper should question the paradigm it seeks to support, both by describing what that paradigm even is and by offering some alternatives. In this way, we will empower all scientists to work on generalities and not just details. The status quo becomes an immovable block only if we have no mechanism to move it. Instead of simply hoping social forces will be strong enough to overcome the establishment, we need to put the seeds of change into the establishment. So I open the challenge to any scientific discipline: insist that every paper go beyond the details to encompass the full range of assumptions on which it rests, with at least a nod to alternative assumptions. It is not that every paper has to launch a scientific revolution; it is that every paper must be empowered to do so.
We need to be given access to levers that can move the earth, or the institutions we have constructed to keep civilization running will inadvertently destroy us by failing to respond adequately to ever-accelerating change.

The Mind Matters: The Scientific Case for Our Existence

Scientists don’t know quite what to make of the mind. They are torn between two extremes, the physical view that the brain is a machine and the mind is a process running in it, and the ideal view that the mind is non-physical or trans-physical, despite operating in a physical brain. The first camp, called eliminative materialism (or just eliminativism, materialism or reductionism), holds that only the physical exists, as supported by the hard scientific theories and evidence of physics, chemistry, and biology. At the far end of the latter camp are the solipsists, who hold that only one’s own mind exists, and a larger group of idealists, who hold that one or more minds exist but the physical world does not. Most members of the second camp, however, acknowledge physical existence but think something about our mental life can never be reduced to purely physical terms, though they can’t quite put their finger on what it is. Social science and our own intuition both assume that mental existence is something beyond physical existence. A materialist will hold either that (a) there are no mental states, just brain states (eliminativism), or that (b) there are mental states, but they can be viewed as (reduced to) brain states (reductionism). An idealist will hold that mental states can’t be reduced, which means they have some kind of existence beyond the physical, and that the physical might exist but if so it can never be conclusively proven.

I propose to resolve all these issues by reintroducing dualism, the idea of a separate kind of existence for mind and matter, but I will do so in a way that is compatible with materialist science. The eliminativist idea that everything that exists is physical, aka physical monism, is true so long as one is referring to physical things, including matter, energy, and spacetime. But another kind of existence entirely, which I will hereafter call functional existence, is about the capabilities inherent in some systems. These can be capabilities of physical or fictional systems; their physical existence is independent from their functional existence and only relevant to it in limited ways. A capability is the power to do something, so functionality can be called a behavioral condition, which is a different thing from any underlying physical mechanism that might make it possible. What the mind does, as opposed to what the brain does, is to support the appropriate function of the body; the term “mind” refers to functionality while “brain” indicates the physical mechanism. Scientific explanations themselves (even those of materialists) are entirely functional and speak to capabilities, and are not at all about the physical form an explanation might have in our brains. But scientific theories don’t have to be about functional things, and those of physics and chemistry never are.

Though all functional things must take a physical form in a physical universe such as the one we are in, this doesn’t mean function “reduces” to the physical. The quest for reductionism is misguided and has been holding science back for entirely too long. We have to get past it before we can make meaningful progress in subjects where functional existence is paramount. To do that, let’s take a closer look at where and how function arises in a physical universe. So far as we know, everything in the universe except life is devoid of function. Physics and chemistry help us predict the behavior of nonliving things with equations. Physical laws are quite effective for describing simple physical systems but quite helpless with complex physical systems, by which I mean systems with chaotic, complex, or functional factors, or some combination of them. Chaos is when small changes in starting conditions have butterfly effects that eventually change the whole system. Complex factors are those whose intricate interactions exceed the predictive range of our available models, which necessarily simplify. We see both chaos and complexity in weather patterns, and yet we have devised models that are pretty helpful at predicting them. But the weather and all other nonliving systems are not capable of controlling their own behavior; they are reactive, not proactive. Capability arises in living things because they are complex adaptive systems (CAS) that exploit positive feedback to perpetuate changes in DNA and the bodies that replicate it. I can’t prove that functionality could only arise in a physical universe through complex adaptive systems, but any system lacking cycles of both positive and negative feedback would probably never get there. Over time, a CAS creates an abstract quantity called information, which is a pattern that has predictive power over the system.
We can’t actually predict the future using information, but we can identify patterns that are more likely to recur than chance would suggest, and this power is equivalent to an ability to foretell, just a less certain one.
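The butterfly effect just described can be made concrete with a minimal numerical sketch. The logistic map is my illustrative choice here, not something from the text; it is a standard toy model of chaos.

```python
# The logistic map x' = r*x*(1-x) is a classic minimal chaotic system.
# Two trajectories that start a hair apart end up completely different,
# illustrating how small changes in starting conditions have butterfly
# effects that eventually change the whole system.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the final state."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)  # perturb by one part in ten billion
print(abs(a - b))  # the gap grows to order 1, not order 1e-10
```

Note that nothing here is random or unknown; the rule is fully deterministic. The unpredictability comes purely from the exponential amplification of tiny differences, which is why physical laws remain helpless with such systems over long horizons.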

Functional systems, which I will also refer to as information management systems, must be physical in a physical universe. But because the mechanisms that control them are organized around what they can do (their capability) instead of how they do it (their physical mechanism), we must speak of their functional existence in addition to their physical existence. By leveraging feedback, these systems acquire a capacity to refer to something else, to be about something other than themselves that is not directly connected to them. This indirection is the point of detachment where functional existence arises and (in a sense) leaves physical existence behind. At this point, the specific link is broken and a general “link” is established. Such an indirect link refers only to the fact that the information can be applied appropriately, not that any link is stored in any way. How a functional system uses information about something else to influence it can be physically implemented in many ways, but understanding those ways is not relevant to understanding the information or the function it makes possible. At the point the information detaches, it gains existential independence; it is about something without it particularly mattering how it accomplishes that. It has a physical basis, but that won’t help us explain its functional capabilities at all. While every brain and computer has different processing powers (memory, speed, I/O channels, etc.), in principle they can manage the same information in completely different ways because the measure of information is the function it makes possible, not how it is implemented.
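To make this implementation-independence concrete, here is a minimal sketch of my own (the stimulus and response names are hypothetical, invented for illustration): the same information, a mapping from stimulus to response, realized by two physically different storage mechanisms.

```python
# The same information -- a mapping from stimulus to response --
# implemented two different ways. The function is identical; the
# mechanism that realizes it is irrelevant to that function.

def respond_via_table(stimulus):
    """Implementation 1: the information stored as a hash table."""
    table = {"heat": "withdraw", "food": "approach"}
    return table[stimulus]

def respond_via_rules(stimulus):
    """Implementation 2: the same information stored as ordered rules."""
    for trigger, action in [("heat", "withdraw"), ("food", "approach")]:
        if stimulus == trigger:
            return action
    raise KeyError(stimulus)

# Over their shared domain the two are functionally indistinguishable:
for s in ("heat", "food"):
    assert respond_via_table(s) == respond_via_rules(s)
```

A brain and a computer managing the same information differ in this same way: the measure of the information is the behavior it enables, not the substrate that stores it.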

In summary, I am proposing a new kind of dualism, which I call form and function dualism, which says that everything is physical, but also that information management systems create additional entities that have a functional existence. Further, functional entities can be said to exist even if no physical system is implementing them. This form of existence is hypothetical, from a physical perspective, but everything is hypothetical from a functional perspective, so that is not an impediment to their functional existence. In this sense, mathematics exists as a body of abstract systems that may or may not find physical manifestations. Even functional entities that only exist because of their physical manifestations, like people, can be said to have “souls” that exist independent of their physical bodies, because the essential functional capacities that make them special could potentially be realized using different physical mechanisms, even if that technology does not exist today.

Information management systems that do physically exist include:

  • biological organisms, which store information in genes using DNA,
  • minds, which store information in brains neurochemically,
  • civilizations, which store information in institutions (rule-based practices) and artifacts (e.g. books), and
  • software, which stores information in computer memory devices.

Organisms, minds, civilizations, and software can be said to have functions, and it is meaningful to discuss such functions independently of the underlying physical systems they run on. More importantly, most of the meaning of these functions is quite independent of their underlying physical systems.

Socrates and Plato recognized that function stands qualitatively apart from the material world and explained it using teleology. Teleology is the idea that in addition to a form or material cause, things also have a function or purpose, their final cause. They understood that while material causes were always present and hence necessary, they were not sufficient or final to explain why many things were the way they were. Purposes humans imposed on objects like forks were called extrinsic, while purposes inherent to objects, like an acorn’s purpose to become a tree, were called intrinsic. Aristotle listed four causes to explain change in the world, not just two: material, formal, efficient, and final, but the latter three are just aspects of function (akin to what, who, and why) and so are teleological. He used the word cause more broadly than we do today; cause, as in cause and effect, refers only to the efficient cause, “who” caused what. The formal cause refers to the lines we draw to distinguish wholes from their parts, i.e. our system of classification. To the Greeks, these lines seemed mostly intrinsic, but we see them today more as artificial constructs we impose on the world for our convenience. We have let the dominance of physicalism drive teleology from our minds as quackery akin to Lamarckism, the debunked evolutionary theory that the degree to which a giraffe manages to stretch its neck will be inherited by its offspring. After all, objects don’t sink to lower points because it is their final cause. Yet teleology is both intuitively and actually true, though only in functional systems, of which gravity is not an example.

Although Aristotle had put science on a firm footing by recognizing the need for teleological causes, he failed to recognize the source of purpose in natural systems. It is my contention that information management systems are the source; they accomplish purposes and functions whenever they apply information to guide their actions. The Scientific Revolution picked up four hundred years ago where Aristotle left off, but information as such would not be discovered until 1948,1 which then led into systems theory,2 also called cybernetics, in the following decades. Complex adaptive systems are complex systems that evolve, and living organisms are complex adaptive systems with autopoiesis, the ability to maintain and replicate themselves. Minds are dynamic information management systems built in animal brains that create information in real time. Civilizations and software are human-designed information management systems that depend on people or computers to run them.

Gilbert Ryle famously put the nail in the coffin of Cartesian dualism in The Concept of Mind3. While we know (and knew then) that the proposed mental “thinking substance” of Descartes that interacted with the brain in the pineal gland does not exist as a physical substance, Ryle felt it still had tacit if not explicit “official” support in 1949. While we know our lives metaphorically bifurcate into two streams, one of ‘inner’ mental happenings and one of ‘outer’ physical happenings, each with a distinct vocabulary, he felt we went further philosophically: “It is assumed that there are two different kinds of existence or status. What exists or happens may have the status of physical existence, or it may have the status of mental existence.” He disagreed, contending that the mind is not a “ghost in the machine”, something independent from the brain that happens to interact with it. To explain why, he introduced the term “category mistake”, a situation in which one inadvertently assumes something to be a member of a category when it is actually of a different sort of category. His examples focused on parts not being the same sort of thing as wholes, e.g. someone expecting to see a forest but being shown some trees might ask, “But where is the forest?”. In this sort of example, the mistake arises from a failure to understand that forest has a different scope than tree.4 He then contended that the way we isolate our mental existence from our physical existence was just a much larger category mistake, which happens because we speak and think of the physical and the mental with two non-intersecting vocabularies and conceptual frameworks, yet we assume it makes sense to compare them with each other.
As he put it, “The belief that there is a polar opposition between Mind and Matter is the belief that they are terms of the same logical type.” Ryle advocated the eliminativist stance: if we understood neurochemistry well enough we could describe the mechanical processes by which the mind operates instead of saying things like think and feel.

Ironically, Ryle made the bigger mistake with categories than Descartes. His mistake was in thinking that the whole problem arose from a category mistake, when actually only a superficial aspect of it did. Yes, it is true, the mechanics of what happens mentally can be explained in physical terms because the brain is a physical mechanism like a clock. But that is not the whole problem, and it is not the part that interested Descartes or that interests us, because the function of what happens mentally cannot be explained in physical terms: while the brain runs the mind (akin to software), it doesn’t know its purpose. The mistake is in seeing the superficial category mistake but missing the legitimate categorization. Function is not form and can never be reduced to it, even though it can only happen in a physical system by leveraging form. When we talk about the mind, we now know and appreciate that it is the product of processes running in the brain, but that doesn’t mean those processes are the subject matter of our mental vocabulary. Those processes are a minor aspect; what we are really talking about is what we can do, i.e. how we can use information to change the future. Ryle concluded, “It is perfectly proper to say, in one logical tone of voice, that there exist minds and to say, in another logical tone of voice, that there exist bodies. But these expressions do not indicate two different types of existence, for ‘existence’ is not a generic word like ‘colored’ or ‘sexed’.” But he was wrong; the way the mind manifests physically is not a different type of existence, but the way it manifests functionally is, and that is what really matters here. This is the kind of dualism Descartes was grasping for, but he overstepped his knowledge by attempting to provide the physical explanation.
The true explanation is that functional things can have physical implementations, and they must for function to impact the physical world, but function and information are fundamentally not physical and their existence is not dependent on space or time; they are pure expressions of hypothetical relationships and possibilities.

The path of scientific progress has influenced our perspective. The scientific method, which uses observation, measurement, and experiment to validate hypotheses about the natural world, split the empirical sciences from the formal sciences like mathematics, which deal in immaterial abstractions. The empirical sciences then divided into natural sciences and social sciences because progress in the latter was only possible by making some irreducible assumptions about human nature, chiefly that we have minds and know how to use them. These assumptions implicitly acknowledge the existence of function in the life of the mind without having to spell it out as such. Darwin’s discovery of evolution then split the natural sciences into physical and biological sciences. Until that point, scientists considered living organisms to be complex machines operating under physical laws, but now they could only be explained through the general-purpose advantages of inherited traits. This shift from the specific to the general is the foundation of information and what distinguishes it from the physical. So both the biological and social sciences tacitly build on a foundation of the teleological existence of function, but they are reluctant to admit it because the exhaustive reach of material existence has come to be synonymous with the triumph of science over mysticism. But physical science alone can’t give us a complete picture of nature because function, which begins as physical processes, can acquire persistence and hence existence in the physical world through information management systems.

The social sciences presume the existence of states of mind which we understand subjectively but which objectively arise from neural activity. The idea that mental states are not entirely reducible to brain activity is called emergentism. An emergent behavior is one for which the whole is somehow more than the parts. Emergence is real, but what is actually “emerging” in information management systems is functionality. Understanding the underlying physical system can’t explain behavior because the link between them is indirect, which as noted above detaches the physical from the functional. Also, unlike digital computers, which are perfectly predictable given starting conditions, minds have chaotic and complex factors that impede prediction. We can conclude that emergence is a valid philosophical position that describes the creation of information, though it is a misleading word because it suggests the underlying physical system causes the functional system. Cause in a feedback-based system is more of a two-way street in which many interactions between before-events and after-events yield functional relationships which the underlying physical system leverages. The physical system is somewhat irrelevant to the capabilities of a functional system, which is in many ways independent of it. Ironically, the functional system can thus equally claim the physical system emerges from it, which is the claim of idealism. All of language, this discussion included, and indeed everything the mind does are functional constructs realized with the assistance of physical mechanisms, not “emerging” from them so much as from information and information management processes. A job does not emerge from a tool, but through feedback a tool can come to be designed to perform the job better.

Before life came along, the world was entirely physical; particles hit particles following natural physical and chemical rules. While we don’t know how the feedback loops necessary for life were first bootstrapped, we know conditions must have existed that allowed feedback and competition to take hold. I will discuss a scenario for this later, but the upshot is that DNA became an information store for all the chemicals of life, and it became embedded in single-celled and later multicellular organisms that could replicate themselves. According to form and function dualism, the information in DNA is a nonphysical aspect that relates to the capabilities it confers on the organism. We characterize those capabilities in biology as genetic traits that confer adaptive advantages or disadvantages relative to alternative versions (alleles) of the same traits. Chemically, a gene either codes for a protein or regulates other genes. It doesn’t matter to the feedback loops of natural selection what a given gene does chemically, just whether the organism survives. Survival or death is the way the function of a gene is measured. Chemistry is necessary for function, but survival is indifferent to chemistry. In many cases, the chemical basis of a genetic advantage seems clear, while in others the “purpose” of the gene can be hard to identify. Also, knowing the chemistry will not reveal all the kinds of circumstances in which it might help or hurt. Any model of causes and kinds of circumstances we develop will be a gross simplification of what really happens, even though it might work well most of the time. In other words, the traits we describe are only generalizations about the purpose of the gene. Its real purpose is an amalgamated sum of every selection event back to the dawn of life. The functionality is real, but with a very deep complexity that can’t be summarized without loss of information.
The functional information wrapped up in genes is a gestalt that cannot be decomposed into parts, though an approximation of that function through generalized traits works well enough in most cases. Although evolution is not directed, competition preferentially selects effective mechanisms, which is such a strong pressure that it tends to make genes very good at what they do. Mutations create opportunities for new functionality, but can also disable genes and the traits they confer when niches change. To recap, genes collect information using DNA from feedback that predicts that a competitive advantage for a forebear will yield a competitive advantage for a descendant. It is a slow way to collect information, but evolution has had plenty of time and it has proven effective.
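The feedback loop by which selection accumulates information can be illustrated numerically. This is a minimal model of my own (standard deterministic replicator dynamics, not the author's formalism): an allele with a small survival advantage rises in frequency, and that rising frequency is the "information" the population collects about its niche.

```python
# A gene variant (allele) with a modest fitness advantage is amplified
# by selection feedback, generation after generation, until it nears
# fixation. Survival is the only signal; chemistry never enters.

def allele_frequency(advantage=1.05, start_freq=0.01, generations=200):
    """Deterministic replicator dynamics for a single allele."""
    freq = start_freq
    for _ in range(generations):
        mean_fitness = freq * advantage + (1 - freq) * 1.0
        freq = freq * advantage / mean_fitness  # selection feedback
    return freq

print(allele_frequency())  # a 5% edge takes the allele from 1% to ~99%
```

The loop needs hundreds of generations even for a 5% edge, which echoes the point above: evolution is a slow way to collect information, but a relentlessly effective one.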

Beyond the information managed by the genes that form the blueprint of the body, organisms need to manage some information in real time, and instinct is a tool they all possess to do this. The mechanisms that regulate the flow of metabolites in plants via xylem and phloem are not called instincts because this feedback is applied directly, without information processing (which only happens in animals, via brains). Instinct covers all behavior based on information processing that doesn’t leverage experience or reasoning. Without experience or reasoning, an instinct will work the same “hardwired” way from birth to death. Instincts arise subconsciously but sometimes present themselves consciously as urges, covering all our hardwired inclinations for things like eating, mating, emotions, and drives. Instinct also includes sensory processing, such as smell, touch, body sense, and most notably vision, which creates high-fidelity 2D images and transforms them into 3D representations that are then recognized as specific objects or types of objects.

Instincts take ages to evolve and solve only the most frequently-encountered problems, but creatures face challenges every day where novel solutions would help. A real-time system that could tailor the solution to the problem at hand would provide a tremendous competitive advantage. Two approaches evolved to do this, and each makes use of experience and reasoning. I call the first conceptual thinking, i.e. thinking with concepts, which most notably includes logical reasoning. Logical reasoning starts with premises, which are statements (predicates about subjects) taken to be true, and draws consequences from them. Subjects, predicates, and premises are concepts viewed from a logical perspective. The second approach is subconceptual thinking, which is a kitchen sink of data analysis capabilities. Unlike instincts, whose reactions are fixed, subconceptual thinking does not produce fixed responses. Subconceptual thinking includes common sense, pattern recognition, and intuition, but also includes much of our facility for math, language, music and other largely innate but not fixed, instinctive talents. Much of what we learn from experience is subconceptual in that it is not dependent on conceptualizing or logical reasoning. Conditioning, for example, with or without reinforcement, is subconceptual. Much, or even most, of the data our brains gather about the world is subconceptual and is there to help us despite the lack of a conceptual understanding. When conceptual and subconceptual thinking are done consciously we call it reasoning. Reasoning is the conscious capacity to “make sense” of things, which means to produce useful information. What we are conscious of is organizing, weighing, and otherwise assessing all the factors relevant to a situation. It doesn’t need to involve concepts or logic, and mostly it doesn’t. We lack conscious awareness of many aspects of both conceptual and subconceptual thinking, so these are said to be subconscious. 
For example, we recognize and recall things without knowing how, we can tell when sentences are properly formed, and we have hunches about the best way to do things that just come to us by intuition.

Subconceptual thinking uses subconceptual data (subconcepts) while conceptual thinking uses concepts. The most primitive subconcepts, percepts, are drawn from the senses using internal processes to create a large pool of information akin to big data in computers. Subconcepts and big data alike are collected without knowing the data’s purpose. It is the sort of data that has been helpful in the past, so it is likely to be useful again. Over time we develop algorithms that mine subconcepts or big data to find useful patterns that lead to helpful actions, still without having a clear idea about what the data “means.” We don’t have to understand common sense, intuition or music to be talented at them. Concepts, on the other hand, are akin to structured data in computers. A concept is an idealization of a pattern found in subconcepts into a generalized element with specific associated properties. While the patterns are primarily subconceptual, a network of relationships to other concepts also forms. A concept is a pure abstraction, i.e. without a counterpart in the physical world, that is defined by its subconceptual properties and its relationships to other concepts. The patterns are frequently chosen so that the concept can be reliably correlated to a generalized class of entities in the physical world, but this connection is indirect and does not make the concept itself physical. Some kinds of reasoning can be done subconceptually by pattern analysis, specifically recognition, intuition, induction (weight of evidence) and abduction (finding the simplest explanation). But deduction (aka entailment or cause and effect) cannot be done subconceptually, because by construction entailment requires discrete premises, not diffuse, unstructured data. The other kinds of reasoning can also leverage concepts, but deduction specifically requires them. Also, so far as we know, deduction can’t be done subconsciously.
Logical reasoning principally means deduction, though it arguably also includes logical treatments of induction and abduction, but I will use the term logical reasoning to refer specifically to our conscious conceptual thinking capacity. Note that while all reasoning, being the top level or final approval of our decision-making process, is also strictly conscious, many other kinds of conceptual and subconceptual thinking happen subconsciously.
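The computing analogy above can be loosely sketched in code. In this hypothetical illustration (the observations and the mined pattern are invented for the example, not drawn from cognitive science), subconceptual data is an unstructured pool mined for a useful pattern whose meaning is never made explicit, while a concept is a structured record with named properties and explicit relationships to other concepts:

```python
from collections import Counter
from dataclasses import dataclass, field

# Subconceptual data: a raw pool of observations collected without a
# predetermined purpose, akin to "big data" in computers.
observations = [
    ("round", "red", "sweet"),
    ("round", "green", "sweet"),
    ("round", "red", "sweet"),
    ("long", "yellow", "sweet"),
]

# Mining the pool finds a useful pattern without "understanding" it:
# round things in this pool tend to taste sweet.
round_tastes = Counter(taste for shape, color, taste in observations
                       if shape == "round")
dominant_taste = round_tastes.most_common(1)[0][0]

# Conceptual data: the pattern idealized into a generalized element
# with specific named properties and relationships to other concepts.
@dataclass
class Concept:
    name: str
    properties: dict
    related: list = field(default_factory=list)

apple = Concept("apple", {"shape": "round", "taste": "sweet"},
                related=["fruit"])
```

The contrast is the point of the sketch: the pattern in `round_tastes` is statistical and anonymous, while `apple` can be referenced as a collective, which is what the text argues distinguishes a concept from the subconcepts it coalesced from.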

Relationships that bind subconcepts and concepts together form mental models, which constitute our sense for how things work or behave. Mental models have strong subconscious support that lets them appear in our heads with little conscious effort. The subconceptual aspects of these models give them a very “real,” sensory feel to us, while the conceptual aspects that overlay them connect things at a higher level of meaning. Although subconceptual thinking supports much of what we need to do with these models (akin to an autopilot), conceptual thinking organizes things better for higher purposes, and logical reasoning can be much more effective for solving problems than our more limited conceptual and subconceptual thinking processes. While conceptual and subconceptual data analysis is quite powerful, it can’t readily solve novel problems. Logical reasoning, however, gives us an open-ended capacity to chain causes and effects in real time. As we mature we build a vast catalog of mental models to help us navigate the world. We remember some of the specific occasions when we applied them, but mostly we retain a general sense of how to use them.

The physical world lives in our minds via mental models. Our minds hold an overall model of the current state of the physical world that I call the mind’s real world. Whatever the physical world might actually be, we only know it consciously through the mind’s real world. The mind’s real world draws on the countless mental models that have helped us understand everything we have ever seen. These models don’t have to be right or mutually exclusive; whatever models help provide us with our most accurate view of physical reality comprise our conception of it. The mind’s real world “feels” real to us, although it is purely a mental construct, because the mind is inclined to interpret its sensory connections to the physical world that way instinctively, subconceptually, and conceptually. But we don’t just live in the here and now. Because the mind’s primary task (and the whole role of information and function) is to predict the future, mental models flexibly apply to a range of circumstances. We call the alternative ways things could have been or might yet be possible worlds. In principle, the mind’s real world is a single possible world, but in practice our knowledge of the physical world is imperfect, so our model of it in the past, present and future is always a constellation of possible worlds.

In summary, all behavior results from instinct, subconceptual thinking, and conceptual thinking. Our mental models combine these approaches to leverage the strengths of each. Genetic data is a first-order bearer of information that is collected and refined on an evolutionary timescale. Instincts (senses, drives, and emotions) are second-order bearers of information that process patterns in real time whose utility has been predetermined by evolution. Subconcepts are third-order bearers of information in which the exact utility of the patterns has not been predetermined by evolution, but which do tend to turn out to be valuable in general ways. Finally, concepts are fourth-order bearers of information that are fundamentally symbolic; a concept is a pure abstraction that represents a block of related information distilled from patterns in the feedback. Some subconscious thought processes (e.g. vision and language processing) manipulate concepts in customized ways without applying general-purpose logical reasoning, which can only be done consciously. Logic finds reasons, i.e. rules, that work reliably or even perfectly in mental models. The utility of logical reasoning ultimately depends on correlating models back to the real world, and for this we depend on mostly subconscious but conceptual reverse recognition mechanisms that fit our models back to reality. Recognition and reverse recognition are complex problems requiring massive parallel computation for which present-day computers are only recently developing some facility, but for us they just happen with no conscious effort. This not only lets us think about more important things, it also makes our simplified, almost cartoon-like representation of the world through concepts feasible.

Our four real-time thinking talents — instinct, subconceptual thinking, conceptual thinking, and logical reasoning (this last one being a kind of conceptual thinking) — are distinct but can be very hard to separate cleanly. We know instinct influences much of our behavior, but we are quite unsure where instinct leaves off and tailored information management begins because they integrate very well. And even complex behavior, most notably mating, can be driven by instincts, so we can’t be too sure instinct isn’t behind any given action. While subconceptual and conceptual thinking can be readily separated based on the presence of concepts, it can be difficult to impossible to say at exactly what point a concept has coalesced from subconcepts. In theory, though, I believe there must be a logical and physical point at which a concept comes to exist, the moment that a set of information is referenced as a collective. This suggests that conceptual processes differ from subconceptual ones because they involve objectification of data by reference. Logical reasoning refines conceptual thinking by introducing the logical form, which abstracts logical operations from their content, making it possible to devise internally consistent logical models within which everything is necessarily true. Reasoning, and especially logical reasoning, can only be done consciously. Reasoning is considered a conscious activity because we consciously decide whether to act, even though some parts of reasoning, e.g. intuition, happen subconsciously. Or do we? Habitual or snap decisions are sometimes made on a “preapproved” basis where we act entirely on subconscious reasoning which we then only observe consciously. We do always have the conscious prerogative to override “automated” behavior, though it may take us some time to decide whether to do so.
The truth is, at one level of granularity or another, all our activity is driven subconsciously by muscle memory or procedural memory, which needs some conscious approval to proceed, but that approval can be implied by circumstances, effectively taking it out of the hands of consciousness. I posit that logical reasoning of any complexity also happens only consciously, because only consciousness is equipped to pursue a chain of reasoning. We can still reason logically when we dream and daydream; since this chaining capacity is not otherwise occupied then, freer association can lead to more creativity, though with less rigor.

Dam building is a complex behavior in beavers that seems like it needed to be reasoned out and taught from generation to generation, and yet “young beavers, who had never seen or built a dam before, built a similar dam to the adult beavers on their first try.”5 So it is entirely instinctive. We know language acquisition is similarly innate in humans because humans with no language will create one6. But we know that all the artifacts of civilization (except perhaps the hand axe, which may have been innate), including all its formal institutions, are primarily the products of thinking, both subconceptual and conceptual, and of the experience they created. Our experience of our own minds is both natural (i.e. instinctive) and artificial (i.e. extended by thinking and experience), but these aspects are so intertwined in our perspective that they can be difficult to impossible to distinguish in many cases. For example, we know our sense of color is innate, and yet our reactions to colors are heavily influenced by culture. Or, we sometimes have the impression we have made a logically reasoned argument when all we have done is rationalized an instinctive impulse or an intuitive hunch. But although the three capacities always work together and overlap in their coverage, I believe they arise from fundamentally different kinds of cognitive processes that can be examined separately.

While nearly all animals learn from experience, demonstrating subconceptual thought, not all can think conceptually. Birds and mammals (let’s call them advanced animals for short) demonstrate problem-solving behavior, including novel planning and tool use, that indicates the conceptual use of cause and effect, which goes beyond what instinct and subconceptual thinking could achieve. Other animals do not, and I suspect all others lack even a rudimentary conceptual capacity. We only know we are conscious and that our logical reasoning is conscious from introspection, so we can’t prove it in advanced animals, but observations and shared evolution make it very likely for mammals and pretty likely for birds as well. Still, we know humans are “smarter,” but what is it that distinguishes us? It is our greater capacity for abstract logical reasoning. Abstraction is the ability to decouple information from physical referents, to think in terms of concepts and mental models in logical terms independent of physical reality. We consequently don’t need to constrain our thoughts to the here and now; we can dream in any direction. This greater facility and impetus for abstraction has coevolved with a better ability to think spatially, temporally, logically, and especially linguistically than other animals. Loosening this tether back to reality began with small changes in our minds, but these changes opened a floodgate of increased abstraction because abstraction provides greater adaptive power. Though we must ultimately connect generalities back to specifics, most words are generic rather than specific, meaning that language is based more on possible worlds than on the mind’s real world specifically. I call our ability to control our thoughts in any direction we choose directed abstract thinking, and I maintain animals can’t do it.
Advanced animals can logically reason, focus, imitate, wonder, remember, and dream, but their behavior suggests they can’t pursue abstract chains of thought very far or at will. Perhaps the ecological niches into which they evolved did not present them with enough situations where directed abstract thinking would benefit them to justify the additional costs such abilities bring. But why is it useful to be able to decouple information from the world to such a degree? The greater facility a mind has for abstraction, the more creatively it can develop causal chains that can outperform instinct and subconceptual thinking.

Abstraction opens the door to an unlimited range of possibilities, but evolution has kept this facility practical. The range of what is functionally possible is the domain of philosophy, and the range of what is functionally actual (and hence practical) is the domain of psychology, the study of mental life. My explanation of the mind starts by enlarging our philosophical framework to include functional existence and moves from there to explain our psychology. Psychology spans neuropsychology, behavioral psychology, evolutionary psychology, cognitive psychology, psychoanalysis, humanistic psychology, introspection, and a portion of cognitive science, and so brings a lot of perspectives to bear on the problem. They each constrain the possible to the actual in a different way depending on their functional objectives. Before I look at what psychology tells us, I’d like to develop a philosophy of information management. I described above how information is managed in the mind through instinct and thinking. Instinct imposes ingrained behaviors while thinking customizes behaviors in novel circumstances. With ingrained behaviors, the ends (essentially) justify the means, which makes the means a gestalt indivisible into explanatory parts. Explanation is irrelevant to the feedback loops that create instinct, which produce supporting feedback based on overall benefit to survival. Subconceptual thinking is also a gestalt approach that applies innate algorithms to subconcepts (big data) and uses feedback to collect useful patterns. Conceptual thinking (logical reasoning) creates the criteria it uses for feedback. A criterion is a functional entity, a “standard, ideal, rule or test by which something may be judged.” What this implies is that reasoning depends both on representation (which brings that “something” into functional existence) and entailment (so rules can be applied). 
Philosophically, reasoning can never work in a gestalt way; it requires that the pool of data be broken down into generalized elements called concepts that interact according to logical rules. Logical reasoning operates in self-contained logical models, which lets it be perfectly objective (repeatable), whereas subconceptual thinking is a subjective gestalt and hence may not be repeatable. Objective, repeatable models can build on each other endlessly, creating ever more powerful explanatory frameworks, while subjective models can’t. There may be other ways to manage information in real time beyond instinct and thinking, but I believe these are sufficient to explain minds. To summarize, functional existence arises in some complex physical systems through feedback loops to create information, which is a pattern that has predictive power over the system. The feedback loops of instinct use natural selection over millennia to create gestalt mechanisms that “work because they work” and not because we can explain how they work. The feedback loops of thinking use neural processing over seconds to minutes. Subconceptual thinking works because life is repetitive, so we have developed general-purpose skills to find certain kinds of patterns. Conceptual thinking adds more power because self-contained logical models are internally true by design and can build on each other to explain the world better.

I’ve made a case for the existence of functional things, which can either be holistic in the case of genetic traits and subconceptual thinking or differentiated in the case of the elements of reason. But let’s consider physical things, whose existence we take for granted. Do physical entities also have a valid claim to existence? It may be that we can only be sure our own minds exist, but our minds cling pretty strongly to the idea of a physical world. Sensory feedback, together with accurate scientific measurement and experimentation, makes it almost certain that that world exists independent of our imagination. So we have adequate reason to grant the status of existence to physical things, but we have to keep in mind that our knowledge of the physical world is indirect and our understanding of it is mediated through thoughts and concepts. Ironically, considering it has a secondary claim to existence, physical science has made much more definitive, precise, and arguably useful claims about the world than biology and the social sciences. And even worse for the cause of the empirical functional sciences is that the existence of function has (inadvertently) been discredited. Once an idea, like phlogiston or a flat earth, has been cast out of the pantheon of scientific respectability, it is very hard to bring it back. So it is that dualism, the idea of a separate existence of mind and matter, has acquired the taint of the mystical or even the supernatural. But dualism is correct, scientific, and not at all mystical when one formulates function as a distinct kind of existence, one that becomes possible in a physical universe given enough feedback.

The laws of physical science provide reliable explanations for physical phenomena. But even though living systems obey physical laws, those laws can’t explain functionality. Brains employ very complex networks of neural connections. From a purely physical standpoint, we could never guess what they were up to no matter how perfectly we understood the neurochemistry. And yet, the mind arises as a consequence of brain activity, which is to say it is a process in the brain. The success of physical science coupled with the physical nature of the brain has led many to jump to the conclusion that the mind is physical, but the mind is just the functional aspect of the brain and not physical itself per se. Pursuing an eliminativist stance, the neurophilosopher Paul Churchland says the activities of the mind are just the “dynamical features of a massively recurrent neural network”7. From a physical perspective, this is entirely true, provided one takes the phrase “massively recurrent neural network” as a simplified representation of the brain’s overall architecture. The problem lies in the word “features,” which is an inherently non-physical concept. Features are ideas, packets, or groupings of abstract relationships about other ideas, which, as I have been saying, are the very essence of non-physical, mental existence. These features are not part of the mechanism of the neural network; they are signals or information that travel through it. This “traveling” is a consequence of complex feedback loops in the brain that capture patterns as information to guide future behavior. Any one feature can be thought about in different ways at different times by different people but will still fundamentally refer to the same feature, which means the functions it makes possible.

Some have posited additional categories of being beyond form and function, but I think any additional categories can be reduced to these two. Aristotle’s Organon enumerated all ten possible kinds of things that can be the subject or the predicate of a proposition, the first of which is substance. Substance is equivalent to what I call form. Aristotle essentially defined it as that which is not function, by saying substance cannot be “predicated of anything” or be said to “be in anything”, which are relational or functional aspects. The other nine categories are functional aspects and as such are inherently indirect and not the substance itself, namely quantity, quality, relationship, where, when, posture, condition, action, and recipient of action. I would say that the location of a physical object in time and space is part of its physical existence, but how we describe it relative to other things is not. As Immanuel Kant would have put it, a physical thing is a noumenon, or thing-in-itself, while our description of it is a phenomenon, or thing-as-sensed. We have no direct knowledge of the noumena of the physical world, but we talk about them as phenomena all the time. Noumena are strictly physical and phenomena strictly functional. I see no reason for any additional categories. Quantity and quality may accurately describe traits of a noumenon, but they are still descriptions and so are functional; the noumenon itself just exists without regard to how it might be characterized for some purpose extrinsic to itself. Ironically, this means that science is entirely a functional pursuit, even though its greatest successes so far are about the physical world. “About” is the key word; science studies phenomena, not noumena. We are curious about the noumena, but we can never know them as we only see their phenomena. This is not due to limitations of measurement but to limitations of understanding.
Any noumenon can be understood through an infinite variety of phenomena and ways of interpreting those phenomena that model it but are never the same as it. They consequently describe some features, perhaps very accurately, but miss others, and in any event, what we think of as a feature is a generalization that makes sense to us functionally but means nothing physically.

Because information management systems physically exist, one can easily overlook that something more is happening. Let’s take a closer look at the features of functional existence. It is not that the functional exists as a separate substance in the brain as Descartes proposed, or that it is even anywhere in the brain, because only physical things have a location. Instead, any thought I might have simultaneously exists in two ways, physically and functionally. The physical form of a thought is the set of neurons thinking the thought, including the neurochemical processes they employ. While we can pinpoint certain neurons or brain areas that are more active when thinking certain thoughts, we also know that the whole brain (and all the control systems of the body) participates in forming the thought because everything is connected. Everything is connected because instinct and subconceptual thinking are gestalts that draw on all our knowledge (including concepts), and logical reasoning uses closed models based on concepts, which are in turn built on instincts and subconcepts. The functional form of a thought is the role or purpose it serves. When we reflect on this purpose logically, we form a concept of it that can be the subject or the predicate of a proposition with features that relate it to all other concepts. Functionality in minds has a practical purpose (even function in mathematics must be practical for some context, but here I refer to evolutionary practicality). A thought’s purpose is to assist the brain in its overall function, which is to control the body in ways that stand to further the survival and reproduction of the gene line. “Control” broadly refers to predictive strategies that can be used to guide actions, hopefully to achieve desired outcomes more often than chance alone would. And even seemingly purposeless thoughts participate in maintaining the overall performance of the mind and can be thought of as practice thoughts if nothing else.
For a predictive strategy to work better than chance, it needs to gather information, i.e. to correlate past situations to the current situation based on similarities. The relationship between information and the circumstances in which it can be employed is inherently indirect because information is indirect; it is necessarily about something else. So we can conclude that function acts in our physical world to control the actions of living things, or, by extension, in information management systems we create to control whatever we like.

The Science of Function

Discussing function as a thing that exists can lead to some confusion. As with physical things, we can name functional things, such as 3D vision or justice. The names are not the things themselves, but if we understand the reference then we know the function being discussed. As with physical things, functional things don’t always have explicit names, but we can still describe them, and the descriptions can then act as references to them, still without being the things themselves. Names and descriptions provide the metafunctional role of grouping similar entities into one category that can then be considered as a unit. Internal to the mind, concepts play this metafunctional role of naming or describing categorized entities. When we think of 3D vision or justice, we have a distinct notion (concept) in our head about what each means, which encompasses both the breadth of the concept and the special cases with which we have personal experience. Names, descriptions, and concepts are ways we can refer to things without being the things themselves. What gets confusing is that names, descriptions, and concepts are functional themselves. Boyle’s Law describes the relationship between volume and pressure of a gas at constant temperature. Gases are not actually functional things themselves; they are entirely physical. Boyle’s Law gives us a functional way to discuss gases that we don’t confuse with gases because gases aren’t functional. We know that Boyle’s Law is a model that abstracts behaviors of gases and relates to the reality of what gas quarks, atoms, and molecules are doing only from certain perspectives. “3D vision” and verbal descriptions of it give us functional ways to refer to sight that are not sight themselves, but in this case, sight is a function and is not physical. Both gases and sight have an essential nature (or noumenon) independent of our descriptions of them, but the former is physical and the latter is functional.
We understand that we can never have certain knowledge of noumena, but only of phenomena, i.e. what we see. But the physical noumena are sitting right there, as it were, while the functional noumena have no physical location and are known to us only by virtue of what they can do. So it is easy to keep in mind that Boyle’s Law is a partial model that relates certain macroscopic behaviors of gases and doesn’t say anything about individual gas molecules. Any theory of 3D vision will also be a partial model that relates certain macroscopic behaviors of vision but says nothing about detailed sight input, e.g. from each rod and cone. More to my point, it will characterize the processing done from a limited perspective that will ignore some other aspects of 3D vision. We can’t actually say what 3D vision is; we can only say things about it, developing a metafunctional perspective on it as a function.

I mention this because our descriptions of things are always sketchy. First, they presuppose a great deal of context that is taken as understood and agreed upon. Then, they focus only on the most salient aspects in the hope that the rest will be implied. For physical things, one can presumably see the object and make other physical observations that provide deeper understanding far beyond what superficial observation can achieve or verbal description can convey. But direct observation is not possible for functional things. Fortunately, most functional things, such as 3D vision and justice, also have instinctive and subconceptual support. So while we can’t see them in the physical world, we share an innate grasp of them, and we have names and descriptions for them which are adequate to call them to mind, so these kinds of functional things can be nearly as evident to us as physical things. We can further invent functional things out of whole cloth, which we do when we create art, fiction, or any system of rules, such as the law. Many (if not most) of these quite intentionally hearken back to innate concepts, subconcepts, or instincts, but they are nevertheless fabricated constructs of our own minds subject only to the limitations we set for them. So justice under the law is a pale shadow of the justice we feel; it seeks to provide justice but does so with a variety of rules that may or may not be just in any given situation.

Consequently, to the extent that I can reveal functions of the mind to you, I am limited to using words and hence names and descriptions. I will do my best to characterize what the mind is via such descriptions, outlining its overall and component functions, but these descriptions will necessarily be sketchy, as descriptions always are. Still, I am hoping that a great deal of context is understood and agreeable, so what I have to say should be easy enough to follow. We do have an innate grasp of what the mind is up to, even if we are not in the habit of giving ourselves credit for knowing. Considering that viewing the mind as a functional entity is a somewhat new perspective, I will start from the top down with the most salient aspects and fill in more details as I go. My explanations will appeal to and depend on our experience of mental function, which is to say how we subjectively experience our own minds. This approach is introspective, which poses a challenge to objectivity. I will address that challenge in more detail later, but in short, I will look to introspection to stimulate hypotheses, not to test them. The resulting descriptions of mind I develop will constitute a theory to be tested. Like all theories, it is not intended to have the same function as the mind or to be complete, only to be internally consistent and supported by the evidence. I intend to show that it is consistent with prevailing scientific perspectives once those perspectives are interpreted in the framework of form and function dualism. Just how one can objectively test theories about functional things, which are necessarily about and conceived using subjective mechanisms, is a subject I will discuss later.

All scientific theories are descriptions, and hence are sketchy representations of reality. Boyle’s Law is “sketchy” even though it seems to work perfectly, because volume and pressure are approximate measures of reality dependent on instruments and not fundamental “properties” of spacetime. Spacetime does not actually have “properties”; properties are general ways of describing aspects of things, when in fact the component pieces under observation just follow their own paths and are not inherently collective. But we can measure volume and pressure almost perfectly most of the time, and in these cases the law has always worked, to our knowledge, so we can feel pretty comfortable that it always will. Though unprovable and sketchy, we can label it “true” and depend on it without fear. Of course, to use it we have to match our logical model, which treats gases as collections of independently moving particles with volume and pressure, to a real-world circumstance involving gas, which depends on observations and measurements to provide a high probability of a good fit of theory to practice. This modeling, matching, observing, and measuring involves some subjective elements, which carry some uncertainty and vagueness themselves, so all objectivity has limits and caveats. But we can accurately estimate these uncertainties and establish a probability of success that is very close to one hundred percent in many real-world situations.
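As a concrete instance of fitting this logical model to a circumstance, Boyle’s Law says that for a fixed quantity of gas at constant temperature, the product of pressure and volume is constant: P1·V1 = P2·V2. A minimal worked example with hypothetical numbers (the figures are invented for illustration; real measurements would carry the instrument uncertainty the paragraph describes):

```python
# Boyle's Law: p1 * v1 == p2 * v2 at constant temperature.
# Hypothetical measurements: a gas at 100 kPa occupies 2.0 L,
# then is compressed to 0.5 L. What is the new pressure?
p1, v1 = 100.0, 2.0   # initial pressure (kPa) and volume (L)
v2 = 0.5              # final volume (L)
p2 = p1 * v1 / v2     # final pressure: 400.0 kPa
```

The arithmetic is exact, but applying it to a real container still requires measuring p1, v1, and v2 with instruments, which is where the subjective matching of model to world enters.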

Given these caveats about how science can only characterize subjects and can’t reveal their true nature, let’s take a look at which sciences study functional subjects. Viewed most abstractly, science divides into two branches, the formal and experimental sciences. Formal science is entirely theoretical, but provides mathematical and logical tools for the experimental sciences, which study the physical world using a combination of hypotheses and testing. Testing gathers evidence from observations and correlates it to hypotheses to support or refute them. Superficially, the formal sciences are all creativity while the experimental sciences are all discovery, but in practice most formal sciences need to provide some real world value, and most experimental sciences require creative hypotheses, which are themselves wholly formal. Experimental science further divides into fundamental physics, which studies irreducible fields and/or particles, and the special sciences (all other natural and social sciences), which are presumed by materialists to be reducible to fundamental physics, at least in principle. Experimental science is conducted using the scientific method, which is a loop in which one proposes a hypothesis, then tests it, and then refines and tests it again ad infinitum.

Science can alternatively be subdivided based on whether it is about physical things or functional things. By this rule, the formal sciences are entirely functional, while the experimental sciences are partly about physical things and partly about functional ones. But even sciences about physical things are predominantly functional, because physical things in the natural world, the noumena, are only known to us through our observations of them, the phenomena, and phenomena are entirely functional themselves. The phenomena exist as functional entities in our minds which can be formalized using scientific hypotheses, which are also therefore entirely functional. And what of the functional things in the natural world, do they have noumena, and do we only know about them through their phenomena? Yes, that is exactly right. We know that the information management systems of life, from genes to instincts to subconcepts to concepts, all have real functions, but we can only guess what those functions might be by observing them in action. The true, noumenal functions are hidden to us. But how can that be? Isn’t a function cut and dried — do this because of that? No, function is a very complex, interwoven, and layered feedback response that only appears causal at the uppermost conceptual level, and at lower levels is a network of interactions which cannot be untangled. It is noteworthy, then, that physical science uses functional hypotheses to study functional phenomena, and is only indirectly about anything physical, and yet it does not even overtly acknowledge the existence of functional entities. Physicalism is seriously in denial.

What sciences can be called functional sciences? The formal sciences include logic, mathematics, statistics, theoretical computer science, information theory, game theory, systems theory, decision theory, and theoretical linguistics. They are named after formal systems, where “form” means well-defined or having a well-specified nature, scope or meaning. These are functional concepts that establish relationships to create information, i.e. patterns with predictive power over the system. The formal sciences don’t recognize their functional basis; in Mathematics: Form and Function, Saunders Mac Lane proposed six possible foundations for mathematics: Logicism, Set Theory, Platonism, Formalism, Intuitionism, and Empiricism. But these foundations are all wet — all the formal sciences are really founded in functionalism: formal abstractions matter because of their functional capacity, not because they align with logic, sets, ideals, forms, hunches, evidence, or any arbitrary rules. Traditionally the formal sciences stayed within the boundaries of rigid systems of rules, but as computer science opens up more fluid kinds of data analysis using big data, it becomes increasingly apparent that function is the real foundation.

Within the experimental sciences, all function derives from life, which creates and uses information using feedback. Biology studies both the physical and functional products of life, although the “official” physicalist line is that the functional products are physical byproducts and not entities in their own right. The social sciences, on the other hand, study only functional products of life, and consequently have only a very tenuous and uncharted connection back to physicalism. This gap, commonly called the explanatory gap, is generally just ignored with the hope that it will go away if not looked at. So the social sciences just assume what seems obvious to us, that we have minds that are responsible for our behavior. What kind of thing these minds are is left as an exercise for the reader, which leaves the social sciences somewhat adrift at sea with no way to get back to dry land. One can, of course, build floating cities and live in them, and this is not unlike how our own minds are tethered to reality, but it is not ideal. A unified scientific foundation will give us better leverage to build more powerful and explanatory theories.

Science is sometimes characterized as a quest for truth or certainty, despite its admission that absolute knowledge (i.e. of noumena) is unattainable. So from the outset, science has admitted that certainty is somehow relative, yet maintains that higher relative certainty is possible. To build a better foundation for science means we have to characterize what we know in terms of how certain we are about it. What are we most certain about? Descartes’ insight “I think therefore I am” remains our most certain knowledge — though we don’t know (quite) what we are or what thinking is, we know we do it. Beyond that, we know our senses feed us information, which puts information and function at the center of our knowledge of the world. We think, and what we think about is information. Our next largest certainty is object permanence, the idea that the physical world exists independent of our conception of it. Not only does our sensory feedback provide strong evidence of a permanent physical world, our whole sensory apparatus is designed from the ground up to endorse this perspective. So we think, we have information, and we have extremely strong corroborating information supporting the idea of a physical world. Thinking and information themselves are inherently subjective, while object permanence goes to the literal definition of objective, “being based on objects under observation.” Putting the focus on object permanence has led to the physical sciences’ claim that monism trumps dualism, which has cast the functional sciences adrift in a foundationless state. To be fair, the functional sciences have not laid claim to their functional bedrock and have left themselves grasping and gasping. But I am claiming it now: functions derived from life create and use information, with thought being the most intricate use of information to emerge from life.
But this still raises the question: how can we achieve a level of certainty about fundamentally subjective things that can rival the kind of certainty we have over traditionally objective subjects? Because it is not enough to know that we need to make the functional sciences better founded, more scientific, and more certain; we need a means to accomplish it.

The objectivity science seeks is not dependent on the physical reality of the objects under observation; it is only necessary to establish a reality independent of the mind, i.e. an objective reality. Objective knowledge is more reliable and broadly applicable than subjective knowledge. Taken together, reliability and broad applicability account for science’s explanatory power, which has fueled the scientific revolution that has transformed civilization over the past 400+ years. While we can’t actually establish a reality independent of the mind, because that would be noumenal and we can’t prove the existence of noumena, information that has been corroborated in different ways by different people becomes increasingly reliable and applicable and thus comes to constitute a brand of knowledge we call scientific truth. It is not that it is true, it is that it can be taken as true for many intents and purposes with little risk of deficiency. Somewhat ironically, scientific truth is true exactly to the degree it is functional, so it does exist but as a functional thing, not a physical thing.

We can conclude, then, that the path to achieving objectivity about functional things in the natural world is by finding ways to corroborate hypotheses about them in different ways by different people. At that point, we will establish their reality independent of the mind through their phenomena. So what phenomena exist about functional things in nature, and can they be corroborated? I’m going to try to answer these questions for each of the four kinds of functional information managed by natural systems.

Genes. Living organisms have physical manifestations in their bodies, but what is of more interest to us here is the function of each gene. The information is kept on something of a “short leash”, because genes either encode proteins or regulate their encoding, and proteins engage in pretty specific chemical reactions which we can identify, which usually reveals at least the primary purpose of the gene. The physical, chemical basis of their operation is tied pretty closely to their apparent function. This doesn’t completely resolve their function, because we often only come to appreciate the net value of the gene relative to competing versions of the gene or to other genes when the species is under specific stresses that demonstrate that value. Still, for the most part, chemical knowledge translates pretty well to functional knowledge in the case of genes.

Instincts. Instincts are entirely genetic themselves, but can’t be considered to be on quite as short a leash because they influence mental behavior rather than just directly performing a chemical role. Still, in principle, we can identify behaviors driven by instinct if we can find circumstances under which they will occur independent of any opportunity to learn the behavior from another source. Beavers that have never seen a dam can still build them. Humans who have never heard a language can still create one. It’s a rather tricky job to pick out the exact genes that make these behaviors possible, but we know they must exist. While we have to wait to identify the genes and the proteins to achieve full objectivity about these functions, we can prove that they exist just by observing them and verifying that they are not learned behaviors.

Subconcepts. Subconceptual thinking starts with percepts, the sensory impressions that swirl around our minds continuously from our many senses, beginning with the five classic senses: sight, hearing, taste, smell, and touch. Sight combines color, brightness, and depth to create percepts about objects and movement. Smell actually combines over 1000 independent smell senses. Taste is based on five underlying tastes (sweet, sour, salty, bitter, and umami). Hearing combines senses for pitch, volume, and other dimensions. And touch combines senses for pressure, temperature, and pain. Other somatosenses include balance, vibration sense, proprioception (limb awareness), hunger, sexual desire, and chemoreception (e.g. salt, carbon dioxide or oxygen levels in blood). Still higher level percepts experienced only in the brain include our sense of time, emotion, attention, agency, self, and familiarity, among others. All these things swirl around in our minds without our having to devote powers of reasoning to them. Beyond percepts, subconcepts form with and without conceptual feedback to bring some order to all the information that passes through our brains. The primary conscious sense that subconcepts give us is familiarity; nearly everything about our daily lives seems comfortable and familiar because we can align it so neatly with our subconceptual framework. When any pattern is detected that doesn’t fit nicely into this framework, subconscious processes in our brains immediately notify our conscious minds of this “break with reality.” But the real power of subconcepts is not in feeling comfortable but in managing our lower-level decisions. The subconceptual framework has ready-made solutions and support for nearly all the circumstances we encounter most often, making it easy for us to “act without thinking”. Such actions may, in fact, have been reasoned out in the past and are now largely processed for us subconceptually as “preapproved”.

The question for subconcepts, however, is whether we can gain any objective knowledge about them. All the direct phenomena about them are experienced subjectively. Our behavior may provide indirect phenomenal evidence, but such evidence could easily be from a very complex combination of thought processes and cannot be reliably linked to just one kind of subconcept. Arguably, certain percepts are closely associated with fixed behaviors, so we can definitely see eating and mating as broad responses to hunger and libido. But eating and mating can be put off or engaged in for other reasons, so behavior doesn’t prove them. We can also correlate chemical or neural reactions to percepts, but again, the connection cannot be more than approximate. We can prove that the body is in a state where food is needed or pain nerves are firing or heat is excessive, but we can’t conclude what state of mind will result. Partly, this is because the mind habituates to sensory stimuli that persist and can actually stop feeling hungry or in pain or too hot. Most commonly, background noise can be ignored. If we are interested in the function of the subconcept, then we have to accept that the answer is very contextual, as the information that makes its way to conscious attention is filtered based on evaluations of its relevance. Partly, then, we can’t conclude what state of mind will result because state of mind is a network of information and any summary view of it that “identifies” that state will necessarily gloss over much internal detail.

But is it possible to develop objectivity about subconceptual thinking based on our subjective experience of it? The traditional answer is no, of course not: introspection has been demonstrated to be wildly unreliable, as anyone might say anything about how they interpret their own personal perspectives. But my answer is, yes, of course, because all phenomena are subjective at the point of perception and only acquire objectivity later as we corroborate them in different ways by different people. In the case of functional phenomena, we need to keep in mind that how we subjectively feel the quality of our personal experiences, i.e. as qualia, is not where we need or could achieve objectivity; objectivity concerns the function of the phenomena, i.e. what they inspire us to do. In this regard, we can draw conclusions about what value our percepts and subconcepts bring to the table which achieve a reality independent of the mind because they hold up well for different people in different circumstances.

Concepts. As I have noted before, the hallmark function of conceptual thinking is problem solving, which is much more powerful than the intuitive leaps of subconceptual thinking because logical reasoning uses concepts to chain causes and effects together in an infinite variety of creative ways. But the question here is whether there is any way to look at conceptual thinking objectively. On the one hand, we can definitely look at logic objectively, because any number of logical systems can be formalized with rules independent of the thinker. On the other hand, it is very hard to say what fraction of what we call logical reasoning is actually cut and dried logic and what fraction is, shall we say, hand waving. So if we are not careful, we might establish this very objective view of reasoning that actually has little to nothing to do with how we actually reason. But how can we examine our reasoning processes to extract an objective view? We can’t ignore that logic plays a role, but we have to acknowledge that it is embedded in a larger decision-making system that integrates instinct, subconcepts, and concepts to achieve the ends deemed most desirable. These ends may not be in accord with any particular line of reasoning, sometimes even including the line we believe we are consciously following. I’m simply proposing we take as high-level a view as possible, meaning one that can be verified in the most different ways by different people. This means we start by looking for broad commonalities in thought processes that everyone would agree to rather than for specific mechanisms which would seem to vary from person to person.

So yes, our thoughts are the phenomenal evidence of the existence of the mind and we can’t just ignore them when undertaking a scientific study of the mind. But we need to be dispassionate and work from our most certain knowledge down to more speculative thoughts. The phenomena that transpire in our minds are really there and we can observe them; we just have to be careful that our conclusions have broad support from many perspectives and are not just personal whims. It is not a whim to say we think or that inner speech can help us develop ideas in our heads. Everybody would agree to these things. And I believe there is plenty more that nearly everyone would agree to which, if articulated, could form a much broader objective explanation of the mind than we currently have. Still, we need to keep in mind that just because we all might agree with something doesn’t make it true; after all, our first-person perspective of our own minds is designed to help us in our role as an agent in the world, not to give us perspective into how it does that. I am at no point saying or suggesting we can’t understand our own minds — I am not a new mysterian (like Noam Chomsky) — but I am saying that our understanding of the mind will not be facilitated by an innate facility for understanding it, such as we have for vision or language. Instead, we can understand it because anyone can understand anything, in principle, because understanding always exists at a descriptive, sketchy level that only explains some but not all of the aspects of the underlying noumena. To understand something deeper than a high-level overview takes more work and time, and these practical reasons may make deeper understanding impractical and unattainable, but we’re still a long way off from having to worry about going too deep considering we haven’t yet scratched the surface.
So I would characterize new mysterians as simply philosophically confused; they are mistaking knowledge, which is about phenomena, with “absolute” knowledge, which is presumably “about” noumena, without realizing that such a position is illogical because knowledge and “aboutness” are inherently phenomenal and functional. Is the noumenon itself mysterious? Yes, of course, and permanently so, but this is irrelevant to our ability to understand phenomena on a variety of levels.

I have concluded that objective facts exist about our own minds to which we could agree if they were articulated. However, this answer to the question “What phenomena exist about functional things in nature, and can they be corroborated?” does presume some objective facts which I previously said we should agree to, namely that four kinds of natural information management systems exist and that they are genes, instincts, subconcepts, and concepts. To discuss the subject, I am hypothesizing some high-level points and then trying to defend them objectively. I am doing it as cautiously and considerately as possible, starting with function, which I claim has an informational basis and separate claim to existence, and moving from there to systems that manage function on different time scales with different physical mechanisms. I will continue to add more hypotheses and conclusions and will not always be able to justify every conclusion from every angle. But I will try to provide adequate justification as I go. I am not intentionally treading into any controversial waters; I am only seeking hypotheses and conclusions well within the scope of objective information we all already take for granted but don’t usually articulate.

Now that I have established that we can derive objectivity about the mind from our thoughts about it, I will look closer at how we can go about doing it. The formal sciences, at least, can reasonably claim near perfect objectivity. Knowledge within a well-defined formal system can be known for sure and provably so (though some things in formal systems can’t be known or proven). Whatever their limitations, the clean, logical models of the formal sciences provide great coherence and functional power. Their rules are declared rather than discovered, so there is no need for empirical verification. The physical sciences are also quite objective because they are built on the formal sciences (e.g. for mathematical models) and impersonal empirical support gathered with instruments. It doesn’t mean physics is solved; general relativity improved on Newton’s law of universal gravitation, and MOG (MOdified Gravity) may improve on general relativity. And it doesn’t free formal and physical sciences from elements of subjectivity; our formalizations and theories on these subjects invariably involve judgment and bias because any system can be modeled (i.e. simplified) in an infinite number of ways. But we have had little trouble agreeing on models that work well, and keeping secondary models as backups. The main thing is that we know the assumptions we are building on.

But life is a far less tractable subject. Billions of years of adaptations have piled on complexities orders of magnitude harder to decipher than those of nonliving physical systems. That complexity is driven by feedback to provide general-purpose functionality instead of the very specific cause-and-effect actions studied in physics and chemistry. Consequently, theories about life can never achieve the same level of formality and closure enjoyed by the physical and formal sciences. But we do have some objective sources of information about function in living things. Chiefly, we know life evolved, and we know a number of the mechanisms that made that possible. More significantly than the mechanisms, we know that it was driven by the value of function to survival. Function was selected for, and the mechanisms that made it possible were only along for the ride. It was not the genes that were selfish, but the functions of the genes. Those functions are informational constructs whose true depth hides in the full history that led up to each gene surviving to the present day, and can’t be fully grasped just by discovering, say, the primary role of the protein the gene encodes. The whole context of how the function provides general value in a wide variety of circumstances contributed to why the gene is exactly the way it is as opposed to some slightly different way that might seem to be more advantageous superficially. The genes that make instinct, subconcepts and concepts possible were also selected for based on benefits they provided in a wide variety of circumstances.

Now, if we only had behavior to go on, we would be very hard-pressed to guess anything about the mechanisms of our minds. In fact, without our own first-hand experience of consciousness, we would have no reason to suspect that minds even existed. We would just see robots moving about getting things done, not unlike ants. To the extent ants can be said to have minds at all, which is pretty debatable, they are certainly not remotely as functionally complex as ours. And we can’t argue that minds are necessary, either, since it is clearly possible to design a brain in another way that could perform the same variety of tasks ours does without any mind in control. While it is tempting to suppose that such a zombie-like robot would not be as adaptable to new circumstances as us, it is certainly theoretically possible to program it to have a range of adaptability that would be more than enough to handle whatever issues Earthly creatures might reasonably face. While such robot humans would have no need for art or entertainment, they would procreate and advance civilization as well as or better than we would. While we can’t argue that minds are necessary, we can argue that all earthly animals with centralized brains have features of consciousness strong enough to suggest that evolution strongly selects for minds and not so much for robots. So the real question is why consciousness is so useful to brains when simpler, more hard-wired methods would appear to be a more direct solution to the problem.

The answer is that consciousness is function made animate through agency because agency comes with some survival benefits that are useful in earthly evolution. Unpacking this, the brain is quite capable of getting things done without consciousness, and our ability to do many very familiar tasks while hardly thinking about them demonstrates this, as does the ability of sleepwalkers to raid the fridge without awareness. The value that consciousness brings to the table is the ability to carefully weigh all the options available at the top level to select the one best course of action that the body should undertake next. It doesn’t simply employ a prioritization algorithm as one might expect. Instead, it runs a subprogram in the brain and tells it that it is an autonomous agent in the world. This fiction, that the prioritization decisions can be “felt” by that agent through sensory feedback, very effectively focuses all the body’s priorities into functional space: every input and output is no longer just data, but is interpreted from the perspective of this fictitious first-person actor. The concept of an actor or agent is purely a functional interpretation and has no meaning in the physical world. That we observe others acting purposefully in no way implies that they experience agency; my example above with zombie-like robots shows that they don’t need to perceive themselves as agents. So how can one objectively explain the experience of agency; what does it feel like? Subjectively, the “feel” of the qualia that contribute to the sense of agency can only be explained in terms of their function. Things actually feel like what they make possible. The variations in feel between equivalent stimuli exist but are arbitrary. The net result is that everything feels very customized and special in its own right, even though the specialness actually derives from the function and not the stimulus itself.
Many of these functional distinctions are learned, “acquired tastes” which we come to appreciate, but most are actually innate, the product of millions of years of an evolutionary interpretation of function as feeling.

To give an example, as we survey an ordinary scene in front of us, we are calm and nothing stands out to our attention, even though we can distinguish any number of discrete objects in the scene. But if anything in that scene becomes bright, or flashing, or red, or fast-moving, or loud, etc., our pulse will quicken and our attention will immediately be drawn to it. Those stimuli have the function of warning; they are different from each other, but in that moment any of them can trigger the warning reaction and so in many ways feel the same to us. Red and yellow just stand out more in any context than other colors because in our ancestral environment objects of these colors were more likely to warrant attention than green, blue, brown or gray objects. This doesn’t mean color alone alarms us, but it is a factor, and importantly, it affects how these colors feel to us. Blues and greens are calming, while reds, oranges, and yellows are a bit unnerving. It is not unpleasant; it is just part of the quality about them that we feel. If we wear glasses that invert our vision with a negative image in which all the colors are reversed, we will in time invert it again in our heads so that everything looks “normal” again. This is because the “real” feel of the colors doesn’t come from the color signal, it comes from our beliefs about the function of the color. Our brains are up to the task of doing such an unexpected flip because they do this sort of thing all the time; we automatically adjust for drastic changes in lighting without perceiving much shift in how it feels. The brain is constantly trying to interpret inputs into functional buckets, correcting for variations in the signal. Ultimately, redness, brightness, loudness, etc., are about how information is hooked up in our minds, not about what is happening outside them, and the way it is hooked up is all about how that information can help us, i.e. what its function is.

The perception that physics and chemistry are fundamentally more objective, provable, and definitive has led to them being called hard science, while social science, which is seen as more subjective and less provable and definitive is called soft science. Biology has both hard and soft aspects. The distinction really derives from our intuition that hard science is a fixed or closed system while soft science is not. A closed system can be modeled as perfectly as you like with a logical model that explains all its fixed components. A variable or open system includes feedback loops which continually impact and adjust the design and capabilities of the system itself. When an open system is implemented using a closed system, as the mind uses the body, there will be an underlying fixed physical explanation for what is happening particle-wise at any given instant, but the physical explanation will reveal nothing about the functional capacities of the system. Physically, information doesn’t exist; fluctuating signals traveling on wires or nerves exist, but divorced from any concept or purpose they have no relevance to anything. A signal only acquires relevance when an information management system gathers and uses information about something else, summarized and analyzed at practical levels of detail. Cells manage inherited information through genes, which summarize metabolic information, mostly about proteins, in a practical way. Minds manage information summarized from sensory inputs using both inherited (natural) and learned (artificial) mechanisms. Because cells and organs have a very fixed structure and behavior for any given species at any given point in its evolutionary history, the study of these structures from a physical standpoint can often be done with clean, logical models that explain all the fixed components. These models can often be experimentally verified to a high degree of confidence.
Although we know such models of biological systems are inherently less fixed than those of nonbiological systems, they are quite comparable for most intents and purposes. They do posit a function for each kind of tissue, which is necessarily a subjective or soft determination, but the primary purpose of most tissues seems very clear, so while some appreciation for multifunctional tissues is lost in this kind of summation, it still works pretty well. But this approach mostly breaks down when studying the brain because its functionality is so highly integrated across many levels. We have identified primary functions for many parts of the brain, but we also have to accept that almost every function of the brain includes substantial integration across many areas. Not only do different brain areas and functions work together to achieve overall function, but they also incorporate feedback across multiple timelines. Instinct gathers feedback over millennia, long-term memory gathers it over a lifetime, and short-term memory gathers it for the scope of a problem at hand. And then there is the matter of the processing, or thinking, that we do with the collected information. While this bears considerably more discussion, for now it is sufficient to say thinking is quite open-ended and impossible to predict. So the distinction between the hard and soft sciences, or more accurately between the physical experimental sciences and the functional experimental sciences, is quite significant. However, it is misleading to characterize it as hard vs. soft; the distinction is really between fixed, closed systems and variable, open systems.

The History of the Philosophy of Science Viewed Functionally

I have made the case that it is reasonable to use the mind to study the mind, and I have outlined how minds, and functional systems in general, are variable and open rather than fixed and closed. This has implications for how we should study them, which I will consider now.

First, let’s take a closer look at how we study fixed systems to see what we can learn. I have noted that science stands by the scientific method as the best way to approach experimental science. In addition to the basic loop — observe, hypothesize, predict, and test — the method now includes peer review and anti-biasing techniques (like preregistering research) to control human factors. Current anti-biasing strategies are inadequate because leading paradigms become entrenched, so we need to add anti-entrenchment techniques to the practice of science as well. Still, the basic scientific method, which iterates these steps as often as needed to improve the match between model (hypothesis) and reality (as observed), works pretty well. In general, feedback loops are the source of all information and function, but the scientific method aims for more than just information — it is after truth. Scientific truth is the object of a quest for a single, formal model that accurately describes the salient aspects of reality. General-purpose information we gather from experience and access through memory uses a mixture of data and informal models; it is more like a big data approach that catalogs impressions and casual likelihoods. And when we do reason logically, we usually do it quite informally with a variety of approximate models. But science recognizes the extra value that reliable models can provide. Although we can’t prove that a scientific model is correct because our knowledge of the physical world is limited to sampling, all particles of a given type do seem to behave identically, which makes near-perfect predictions in the physical sciences possible. While the exact laws of nature are still (and may always be) a bit too complex for us to nail down completely, the models we have devised so far work well enough that we can take them as true for all intents and purposes to which they apply. 
We count a law or theory as a scientific truth and simply say it is true if it has been supported by an overwhelming amount of experimental evidence, even though we know it is subject to certain caveats. Specifically, scientific truth only projects truth in models, which don’t necessarily correspond to reality and also may not apply well or perfectly to any given real circumstance. But although there is a risk we might misapply a law, we know that with skill and care the risk is low and manageable, so we tend to think more about the near certainty our models bring.

This pretty straightforward philosophy of science is sufficient for physical science. We just accept well-supported theories as effectively true until flaws are found, at which point we look for improved theories to supplant them. Knowledge is contextual around these theories and not intended to be absolute. This is a very workable approach and supports a lot of very effective technology. It is also serviceable for studying the functional sciences, but can only take us so far. Using it, we can lay out any set of assumptions we like and then develop and test theories based on them. If the theories hold up reasonably well, then we can make somewhat reliable predictions, even though the foundation is purely speculative. This is how the social sciences are practiced, and while nobody would consider their conclusions definitive, they are pretty useful and seem to be “steps in the right direction.” But couldn’t we do better? The social sciences should not be built out of unsupported assumptions about human nature but from the firm foundation of a comprehensive theory of the mind. My objective here is to expand the philosophy of science to encompass the challenges of studying functional systems, and minds in particular.

I’m going to build this philosophy from first principles, but before I start, I’m going to quickly review the history of the philosophy of science. Not all philosophy is philosophy of science, but perhaps it should be because philosophy that is not scientific is just art: pretty, but of dubious value.1 I’m going to discuss just a few key scientists and movements, first listing their contributions and then interpreting them from a functional stance.

Aristotle is commonly regarded as the father of Western philosophy, along with Plato and Socrates, whose tradition he inherited. Unlike them, Aristotle also extensively studied natural philosophy, which we have renamed science. Aristotle was an intuitive functionalist. He focused his efforts on distinctions that carried explanatory power, aka function, and from careful observations almost single-handedly discovered the uniformity of nature, which contrasted with the prevailing impression of an inherent variability of nature. Through many detailed biological studies, he established the importance of observation and the principle that the world followed knowable natural laws rather than unknowable supernatural ones at the whims of celestial spirits.

Francis Bacon outlined the scientific method in the Novum Organum (1620) by emphasizing the value of performing experiments to support theories with evidence. Bacon intentionally expanded on Aristotle’s Organon with a prescriptive approach to science that insisted that only a strict scientific method would build a body of knowledge based on facts instead of conjectures. Controlled induction and experiments would accurately reveal the rules behind the uniformity of nature if one were careful to avoid generalizing beyond what the facts demonstrate. In practice, most scientists today adopt this attitude and don’t think too much about the caveats that arose in the following centuries, which I will get to next.

René Descartes established a clear role for judgment and reason in his Discourse on the Method of Rightly Conducting One’s Reason and of Seeking Truth in the Sciences (1637). His method had four parts: (a) trust your judgment, while avoiding biases, (b) subdivide problems into as many parts as possible, (c) start with the simplest and most certain knowledge and then build more complex knowledge, and (d) conduct general reviews to assure that nothing was omitted. Further, Descartes concluded, while thinking about his own thoughts, “that I, who was thinking them, had to be something; and observing this truth, I am thinking therefore I exist”2, which is known popularly as Cogito ergo sum or I think, therefore I am. He felt that whatever other doubts he might have about the world, this idea was so “secure and certain” that he “took this as the first principle of the philosophy I was seeking.” He further concluded that “I was a substance whose whole essence or nature resides only in thinking, and which, in order to exist, has no need of place and is not dependent on any material thing. Accordingly this ‘I’, that is to say the Soul by which I am what I am, is entirely distinct from the body and is even easier to know than the body; and would not stop being everything it is, even if the body were not to exist.”3 Descartes attempted a physical explanation based on the observation that most brain parts were duplicated in each hemisphere. He believed that since the pineal gland “is the only solid part in the whole brain which is single, it must necessarily be the seat of common sense, i.e., of thought, and consequently of the soul; for one cannot be separated from the other.”4 In this, he was quite mistaken, and it ultimately undermined his arguments, but it was a noble effort! Looking at Descartes functionally, he recognized the role our own minds play in scientific discovery and simply implored us to use good judgment. 
His assertion that some methods are more effective for science than others was a purely functional stance (because it does all come down to what is effective). He further recognized the preeminence of mind and reason, to the point of proposing substance dualism to resolve the mind-body problem, which I have reformulated into form and function dualism. Descartes was entirely correct in his cogito ergo sum statement if we interpret it from a form and function dualism perspective. In this view, the function of our minds requires no place or time to exist but can be thought of as existing in the abstract by virtue of the information it represents. Although Descartes’ fascination with brain anatomy and assumption of the irreducibility of the soul (no doubt derived from a desire to align Catholicism with science) led to some unsupported and false conclusions, he was on the right track. The mind arises entirely from physical processes but is more than just physical itself, because information has a functional existence that transcends physical existence: it is referential and so can be detached from the physical. It is not that there is a “nonphysical” substance connected to the physical brain; it is that function is a different kind of thing than form. Physical mechanisms leverage feedback to create the mind, but the function and behavior of these mechanisms can’t be explained by physical laws alone because information generalizes function into abstract entities in their own right. Descartes’ anatomical conclusion that the soul could not be distributed across the brain and so had to be concentrated in the one part that was not doubled was wrong. His assertion that common sense, thought, and the soul cannot be separated is similarly wrong; our sense of self is an aggregation of many parts, including the sense that it is unified and not aggregate.

David Hume anticipated evolutionary theory in his A Treatise of Human Nature (1739), which saw people as a natural phenomenon driven by passions more than reason. Hume divided knowledge into ideas (a priori) and facts (a posteriori). One studies ideas through math and the formal sciences and facts via the experimental sciences. As we ultimately only know of the physical world through our senses, all our knowledge of it must ultimately come from the senses. He further recognized, via the problem of induction, that we could never prove anything from experience or observation; we could only extrapolate from it. This meant we have no rational basis for belief in the physical world, though we have much instinctive and cultural basis. Hume expanded on Descartes’ “cogito ergo sum” by showing that knowledge from induction could not be proven and that we must therefore remain perpetually skeptical of science. Hume is arguably the founder of empiricism, the idea that knowledge comes only or primarily from sensory experience. While empiricism is a cornerstone of scientific inquiry, this focus on the source of knowledge may have inadvertently moved science away from functionalism, which focuses on the use of knowledge.

Though principally a sociologist, and the inventor of the word sociology, Auguste Comte also lifted empiricism to another level called positivism, which asserted that all knowledge we know for sure or positively must be a posteriori from experience and not a priori from reason or logic. He proposed in 1822 in his book Positive Philosophy that society goes through three stages in its quest for truth: the theological, the metaphysical, and the positive (though different stages could coexist in the same society or in the same mind). The theological or fictitious stage is prescientific and cites supernatural causes. In the metaphysical or abstract stage, people used reason to derive abstract but natural forces such as gravity or nature. Finally, in the positive or scientific stage, we abandon the search for absolutes or causes and embrace the power of science to reveal nature’s invariant laws through an ever-progressing refinement of facts based on empirical observations5. While Comte did not insist that this progression was sequential or singular, allowing that it could happen at different times in different societies, institutions, or minds, he broadly proposed that the world entered the positivistic stage in 1800 and used this generalization to support his reactionary authoritarian agenda, which sought to elevate scientists to elite technocrats who governed according to the findings of the new science of sociology that he founded. In Comte’s mind, skepticism of science was unnecessary; instead, we should embrace it as proven knowledge that could be refined further but not overturned. Although Hume may have been technically right, empiricism moved progressively toward positivism because it just worked so well, and by the end of the 19th century, many thought the perfect mathematical formulation of nature was nearly at hand.

In 1878, Charles Sanders Peirce wrote a paper called “How To Make Our Ideas Clear,” which distinguished three grades of clarity we can have of a concept. The first grade was visceral: the understanding that comes from experience without analysis, such as our familiarity with our senses and habitual interactions with the world. The second grade was analytic, as evidenced by an ability to define the concept in general terms abstracted from any specific instance. The third grade was pragmatic: a conception of the “practical bearings” the concept might have. While Peirce had considerable difficulty grappling with whether a general scientific law could be taken to imply practical bearings, in the end he did endorse such scientific implications even in instances where one could not test them. Peirce’s first grade of clarity describes what I call instinctive and subconceptual knowledge. The second grade characterizes conceptual knowledge. While being able to provide a definition is good evidence of conceptual knowledge, it is not actually necessary to provide a definition to use a concept. Peirce put great stock in language as the bearer of scientific knowledge, but I don’t; language is a layer above the knowledge which helps us characterize and communicate it, but which also inevitably opens the door for much to be lost in translation. I would describe the third grade of clarity as being the function itself. Instincts, subconcepts, and concepts all have functions, and the functions of the former contribute to the functions of the latter as well. Where empiricism tied meaning to the source of information, i.e. to empirical evidence, pragmatism shifted meaning to the destination, i.e. its practical effects. The power of science is that it focuses on the practical effects at the conceptual level as carefully and rigorously as we can manage. 
By construction, all information is pragmatic, but scientific information uses methods and heuristics to find the most widely useful information. While pragmatism has been slowly gathering support, it had little impact on science at the time.

Positivism made another big leap forward in the 1920s and ’30s when a group of scientists and philosophers called the Vienna Circle proposed logical positivism, which held that only scientific knowledge was true knowledge and, brashly, that knowledge from other sources was not just false and empty, but meaningless. These other sources included not just tradition and personal sources like experience, common sense, introspection, and intuition, but also the whole metaphysics of academic philosophy. Logical positivism sought to perfect knowledge through reason, and from there all of civilization. It all hinged on the hope that physical science (and by extension natural and social science) was “proving things” and “getting somewhere” to attain “progress”. To this end, they sought to unify science under a single philosophy that captured meaning and codified all knowledge into a standardized formal language of science. They maintained the empirical view that knowledge about the world ultimately derived from sensory experience but further acknowledged the role of logical reasoning in organizing it. Perhaps more accurately, logical positivism was part of a movement called logical empiricism, spanning several decades and continents, whose leading scholars were intent on improving scientific methodology and the role of science in society rather than espousing any specific tenets; but logical positivism as I have described it approximates the philosophies of circle members Rudolf Carnap and Moritz Schlick. Logical positivism attempted to formalize what science seemed to do best: to package up knowledge perfectly. But even at the time, this idealized modernist dream was starting to crack at the seams. Instead of progressively adding detail, physics had revealed that reality was more nebulous than expected, with wave-particle duality, curved space and time, and more. 
Gödel’s incompleteness theorems proved that no sufficiently powerful formal system could be both complete and consistent; every such system must be inherently limited in its reach. Willard Van Orman Quine famously wrote in Two Dogmas of Empiricism in 1951 that “a boundary between analytic and synthetic statements simply has not been drawn. That there is such a distinction to be drawn at all is an unempirical dogma of empiricists, a metaphysical article of faith.” Analytic statements are a priori logical conclusions, while synthetic statements are a posteriori statements based on experience. The flaws Quine cited relate to the fact that statements are linguistic, and a linguistic medium is intrinsically synthetic because it is not itself physical. Logical positivism invested too much in the power of language, which is descriptive of function but not the same as function, and so it was left behind, along with the rest of modernism, to be replaced by the inherent skepticism of postmodernism. From a functional perspective, I would say that the logical positivists correctly intuited that science creates real knowledge about the world; they just grasped for an overly simplified means of describing that knowledge.

If positivist paths to certainty were now closed, where could science look for a firm foundation? Thomas Kuhn provided an answer to this question in The Structure of Scientific Revolutions in 1962, which is remembered popularly for introducing the idea of paradigm shifts (though Kuhn did not coin the phrase himself). Without exactly intending to do so, Kuhn created a new kind of coherentist solution. An epistemology or theory of knowledge must provide a solution to the regress problem, which is this: if a belief is justified by providing a further justified belief, then how do you reach the base justified beliefs? There are two traditional theories of justification: foundationalism and coherentism. Aristotle and Descartes were foundationalists because they sought basic beliefs that could act as the foundation for all others, eliminating the perceived problem of infinite regress. Coherentists hold that ideas support each other if they are mutually consistent, much like the words in a language can all be defined in terms of each other. The positivists were struggling to make foundationalism work, and in the end it just didn’t because Hume was right: knowledge from induction could not be proven, so the logical base was just not there. Into this relative vacuum, Kuhn claimed that normal science consisted of observation and “puzzle solving” within a paradigm, which was a coherent set of beliefs that mutually support each other rather than depending on ultimate foundational beliefs. He further, somewhat controversially, proposed that revolutionary science occurred when an alternate set of beliefs incompatible with the normal paradigm overtook it in a paradigm shift. 
While Kuhn’s conclusions are right as far as they go, which helps explain why this was the most influential book on the philosophy of science ever written, he inadvertently alienated most physical scientists because his account made it look as if science were purely a social construction, which was not his intent at all. But once he had let the cat out of the bag, he could not put it back in again. With the door open for social constructionists to undermine science as an essentially artistic endeavor, scientific realists took on the challenge of restoring certainty to science.

Scientific realism (~1980-present) has supplanted logical positivism as the leading philosophy of science by looking to fallibilism for epistemological support. Fallibilism is not a theory of justification, but it is an excuse for claiming justification is unnecessary. Instead of looking to axioms, or mutual support, or support from an infinite chain of reasons, fallibilism simply acknowledges that no beliefs can be conclusively justified, but asserts that “knowledge does not require certainty and that almost no basic (that is, non-inferred) beliefs are certain or conclusively justified”. Fallibilists recognize that claims in the natural sciences, in particular, are “provisional and open to revision in the light of new evidence”. The difference between skepticism and fallibilism is that while skeptics deny we have any knowledge, fallibilists claim that we do, even though it might be revised following further observation. Knowledge can be said to arise because while “a theory cannot be proven universally true, it can be proven false (test method) or it can be deemed unnecessary (Occam’s razor). Thus, conjectural theories can be held as long as they have not been refuted.”6. This suggests that until a theory has been proven false or redundant, it can be taken as effectively true. Realists further propose that this mantle of scientific truth not be extended to every scientific claim not yet disproven, but should be reserved for those satisfying a quality standard, which is generally taken to include qualities like maturity and not being ad hoc. Maturity suggests having been established for some time and been well tested, and not being ad hoc suggests not being devised just to satisfy known observations without having undergone suitable additional testing.

With this philosophical underpinning, scientific realists feel justified in thinking that the observed uniformity of nature and the success of established scientific laws can be taken to mean that the physical world described by science exists and is well characterized by those laws. Put another way, “The scientific realist holds that science aims to produce true descriptions of things in the world (or approximately true descriptions, or ones whose central terms successfully refer, and so on).”7 In a nutshell, Richard Dawkins summarized the realist sentiment in 2013 by noting that “Science works, bitches!”8. It sounds pretty plausible, but is it enough? The determination of what is mature enough and not too ad hoc is ultimately subjective, and a function of the paradigms of the day, which suggests that the social constructive view still permeates scientific realism. Furthermore, it takes for granted that the idealized models of science can be objectively applied to reality but specifies no certain way to do that. The methods and approaches that have become mature and established, though also subjective, are taken as valid ways to match theory to reality. So the question remains: is scientific realism actually justified, and if so, how?

Superficially, the central idea of scientific realism is that the physical world described by science exists. But I would claim that this is irrelevant and incidental; the deeper idea of scientific realism is that it works, where “works” means that it provides functionality. We do engage in science because we want to know the truth about nature, both because the knowledge brings functional power and just because it is cool — the potential power that elegant explanations bring is very satisfying to our function-seeking brains. Scientific laws are general; beyond specific situations, they specify general functionality or capacity for a range of possible situations. But none of this changes the fact that we can never prove that the physical world really exists. Its actual existence is not the point. The point is what science has to say about it, which is a functional existence, that we experience through the approximate but strong sense of consistency between our theories and observations. As I will explore later, our minds are wired to think about things as being certain even though deep down we can appreciate that nothing is certain. That deeper reality (that nothing is certain) just doesn’t impress our mental experience as much as the feeling of certainty does. So scientific realism is just an accommodation to human nature and our desire to feel certainty. The real philosophy of science has to be functionalism, which isn’t concerned with certainty, only with higher probabilities for desired outcomes. I am ok with scientific realism so long as we understand it is a slightly misleading shorthand for functionalism.9

“Epistemologically, realism is committed to the idea that theoretical claims (interpreted literally as describing a mind-independent reality) constitute knowledge of the world.”10 We can see what realism is after: it seems intuitive that since the scientific laws work we should just be able to think of them as knowledge. But was Newton’s law of gravity knowledge? We know it was not right; because of relativistic effects it is never 100% accurate, and because his model proposed action at a distance, even Newton felt it was unjustifiably mystical. Einstein later corrected gravity for relativity and also formulated it as a field effect and not an “interaction” between objects, but we know that general relativity is not the whole story about gravity either. So, if the models aren’t right, on what basis are we entitled to think we have knowledge? Is it our willingness to “commit” to it? Willingness to believe is not good enough. I interpret realism as an incomplete philosophy that takes the important step of affirming aspects of science we know intuitively make sense, without being too demanding about providing the ontological and epistemological basis for those aspects.

In the 1990s, postmodernists did push the claim that all of science was a social construction in the so-called science wars. Scientific realism alone was inadequate to fight off postmodern critiques, so, formally speaking, science is losing the battle against relativism. I contend that the stronger metaphysical support of functionalism is enough to push the postmodernists back into the hills, but only if science embraces it. The Sokal affair, in which a deliberately bogus and meaningless paper was accepted for publication by a scholarly journal, highlights a fundamental flaw in science as practiced: it becomes divorced from its foundational roots. The foundation must never be taken for granted but must always be spelled out to some level of detail in every scientific paper. The current convention is for a scientific paper to presume some level of innate acceptance of unspoken paradigms, and the greater the presumption, the more authoritative the paper sounds. But this is the wrong path; papers should start from nothing and introduce the assumptions on which they build, with a critical eye. This philosophical backdrop doesn’t need to take over the paper, but without it, the paper is only of use to specialists, which undermines generalism, which is ultimately as important to functionalism as specialization.

Now I can reveal the real solution to the regress problem. The answer is not in the complete support of foundationalism or the mutual support of coherentism, or any other theories put forth so far. It is in “bootstrapism”. Information is captured by living information systems through four levels: genetic, instinct, subconcept, and concept. Only the last level leverages logic, and only a small part of that logic is based on logical systems we have thought up, e.g. the three traditional laws of thought. Furthermore, there is a further “fifth” level, the linguistic level, that is not really a level of information but a level of representation of information from the other four levels. Also, note that the four to five interacting information management systems are not the only levels; we create virtual levels with every model that builds on other models and lower-level information. So the regress problem boils down to bootstrapping, which is done by building more powerful functional systems with the help of simpler ones. The solution to the seeming paradox of infinite regress doesn’t require infinite support (though feedback can cycle endlessly); it just requires a few levels of information that build on each other. The levels also interact with each other to become mutually supporting, which can create the illusion that the topmost, conceptual level, or even more absurdly, the linguistic level, might be keeping the whole boat afloat by itself. It just isn’t like that; the levels depend on each other, and language just renders a narrow slice of that information. The idea that well-formed sentences of a language have meaning is flawed; the sentences of languages, formal or natural, have no meaning in and of themselves, though they may stimulate us to think of things with meaning. The Vienna Circle inadvertently put too much faith in formal logic (which is one-leveled) and conflated it with thought (which is multi-leveled).

Science works because scientific methods increase objectivity while reducing subjectivity and relativism. It doesn’t matter that science doesn’t (and, in fact, can’t) eliminate them; all that matters is that it reduces them. This distinguishes science from social construction by directing it toward goals. Social constructions go nowhere, but science creates an ever more accurate model of the world. So one could fairly say that science is a social construction, but it is one that continually moves closer to the truth, if truth is defined in terms of knowledge that can be put to use. In other words, from a functional perspective, truth just means increasing the amount, quality, and levels of useful information.

It is not enough for scientific communities to assume their best efforts will produce objectivity; we must also discover how preferences, biases, and fallacies can mislead the whole community. Tversky and Kahneman did groundbreaking work exposing the extent of cognitive biases in scientific research, most notably in their 1971 paper, “Belief in the law of small numbers”11,12. Beyond just being aware of biases, scientists should not have to work in situations with a vested interest in specific outcomes. This can potentially happen in both public and private settings but is more commonly a problem when science is used to justify a commercial enterprise. Scientists must not be put in the position of having a vested interest in supporting a specific paradigm. To ensure this, they must be encouraged and required to mention both the paradigm they support and its alternatives, at least to a sufficient degree to fend off the passive coercion that failing to do so creates psychologically.


As practiced, physical science (arguably) starts with these paradigmatic assumptions:

(a) the physical world exists independent of our conception of it,

(b) it is composed of components that operate with perfect consistency,

(c) evidence from the physical world can be used to learn about that consistency, and

(d) logical models can describe that consistency, making near-perfect prediction possible.

I have explained why assumption (a) is ultimately irrelevant, since we think only about phenomena and not noumena. Assumption (b) is relevant but not necessary, because functionalism doesn’t require perfect consistency, only enough consistency to be able to make useful predictions. Assumption (c) forms the practical basis for functionalism; the creation of information relies exclusively on feedback. And assumption (d) simply goes to the power of information management systems to build more powerful information from simpler information. So functionalism is largely consistent with science as practiced and vice versa. But as we look to explain purely functional phenomena, like the mind itself, we need to move beyond these simplified assumptions to the broader and stronger functional base, because they won’t get us very far.

Minds not Brains: Introducing Theoretical Cognitive Science

I’m going to make a big deal about the difference between the mind and the brain. We know what minds are from long experience and take the concept for granted, despite an almost complete absence of a scientific explanation. Conventionally, the mind is “our ability to feel and reason through a first-person awareness of the world”. This definition begs the question of what “feel”, “reason”, and “first-person awareness” might be, since we can’t just define the mind using terms that are only meaningful to the owner of one. While we can safely say they are techniques that help the brain perform its primary function, which is to control the body, we will have to dig deeper to figure out how they work. Our experience of mind links it strongly to our bodies, and scientists have long said it resides in the nervous system and the brain in particular. Steven Pinker says that “The mind is what the brain does.”1 This is only superficially right, because what matters is not the what but the why. It is not the mechanism or form of the mind that matters as much as its purpose or function. But how can we embark on the scientific study of the mind from the perspective of its function? As currently practiced, the natural sciences don’t see function as a thing in itself, but more as a side effect of mechanical processes. The social sciences start with the assumption that the mind exists but take no steps to connect it back to the brain. Finally, the formal sciences study theoretical, abstract systems, including logic, mathematics, statistics, theoretical computer science, information theory, game theory, systems theory, decision theory, and theoretical linguistics, but leave it to natural and social scientists to apply them to natural phenomena like brains and minds. What is the best scientific standpoint from which to study the mind? Cognitive science was created in 1971 to fill this gap, which it does by encouraging collaboration between the sciences.
I think we need to go beyond collaboration and admit that the existing three branches have practical and metaphysical constraints that limit their reach into the study of the mind. We need to lift these constraints and develop a unified and expanded scientific framework that can cleanly address both mental and physical phenomena.

Viewed most abstractly, science divides into two branches, the formal and experimental sciences, with the formal being entirely theoretical and the experimental being a collaboration between theory and testing. Experimental science further divides into fundamental physics, which studies irreducible fields and/or particles, and the special sciences (all other natural and social sciences), which are presumed to be reducible to fundamental physics, at least in principle. Experimental science is studied using the scientific method, which is a loop in which one proposes a hypothesis, tests it, then refines and tests it again ad infinitum. Hypotheses are purely functional while testing is purely physical. That is, hypotheses are ideas with no physical existence, though we think about and discuss them through physical means, while testing tries to evaluate the physical world as directly as possible. Of course, we use theory to perform and interpret the tests, so testing can’t escape some dependence on function. The scientific method tacitly acknowledges and leverages both functional and physical existence, even though it does not overtly explain what functional existence might be or attempt to explain how the mind works. That’s fine — science works — but we can no longer take functional existence and its implications for granted as we start to study the mind. It’s remarkable, really, that all scientific understanding, and everything we do for that matter, depends critically on our ability to use our minds, yet requires no understanding of how the mind works or what it is doing. But to understand what minds and ideas are, we have to find a way to make them objects of study in themselves.

The special sciences are broken down further into the natural and social sciences. The natural sciences include everything in nature except minds, and the social sciences study minds and their implications. The social sciences start with the assumption that people, and hence their minds, exist. They draw on our perspectives about ourselves, our behavior patterns, and what we think we are doing to explain what we are and help us manage our lives better. Natural scientists (aka hard scientists) call the social sciences “soft sciences” because they are not based on physical processes bound by mathematical laws of nature; nothing about minds has so far yielded that kind of precision. Our only direct knowledge of the mind is our subjective viewpoint, and our only indirect knowledge comes from behavioral studies, evolutionary psychology, and outright speculation about the functions our minds appear to perform. The study of behavior finds patterns in the ways brains make bodies behave and may support the idea of mental states but doesn’t prove they exist. Evolutionary psychology also suggests how mental states could explain behavior, but can’t prove they exist. Studying the mind’s functions by just guessing about them sounds crazy at first, but that is actually how all scientific hypotheses are formed: take a guess and see if it holds up. It, too, can’t prove mental states exist, but we need to remember that science isn’t about proving; it is about developing useful explanations.

The differences in approach between hard and soft sciences have opened up a gap that currently can’t be bridged, but we have to bridge it to develop a complete explanation of the mind. This schism between our subjective and objective viewpoints is sometimes called the explanatory gap. The gap is that we don’t know how physical properties alone could cause a subjective perspective (and its associated feelings) to arise. I closed this gap in The Mind Matters, but not rigorously. In brief, I said that the mind is a process in the brain that experiences things the way it does because creating a process that behaves like an agent and sees itself as an agent is the most effective way to get the job done. More to the point, it feels like an agent because it has to have some way of thinking about its senses and that way needs to keep them all distinct from each other. So perceptions are just the way our brains process information and “present” it to the process of mind. This is not a side effect; much of the wiring of the brain was designed to make this illusion happen exactly the way it does.

Natural science currently operates on the assumption that natural phenomena can be readily modeled by hypotheses which can be tested in a reproducible way. This works well enough for simple systems, i.e. those which can be modeled using a handful of components and rules. The mind, however, is not a simple system, for three reasons: complexity, function, and control. Living tissues are complex systems with many interacting components, so while muscle tissue can be modeled as a set of fibers working together as a simple machine, like any complex system its behavior will become chaotic outside normal operating parameters. Next, the mind (and muscle, for that matter) has a different metaphysical nature than nonliving things. Unlike rocks and streams, muscles and nerves are organized to perform a function rather than to take a specific physical form. And most significantly, the mind is not organized to perform functions itself but to control how the body will perform functions, and so could be called metafunctional. These three complicating factors make developing and testing hypotheses about the mind vastly more complicated than doing so for rocks and streams, so paradigms based only on natural laws won’t work. Yet the attitude among natural scientists is that the mind is just an elaborate cuckoo clock, and so understanding it reduces to knowing its brain chemistry. That will indeed reveal the physical mechanisms, but it won’t reveal the reasons for the design, any more than understanding the clock explains why we want to know what time it is. When we study complex systems, like the weather, we have to accept that chaos and unpredictability are around every corner. When we study functional systems, like living things, we have to accept that functional explanations — and all explanations are functional — need to acknowledge the existence of function.
And when we study control systems, like brains and minds, we have to accept that direct cause and effect is supplanted by indirect cause and effect through information processing. Natural sciences study complexity and function in living systems, but not the control aspect of minds. Control is addressed by a number of the formal sciences, but since the formal sciences are not concerned with natural phenomena like minds, the study of control by minds has been left high and dry. It falls under the purview of cognitive science, but we need to completely revamp our concept of what scientific method is appropriate to study function and control. We will need theories that seek to explain how control is managed from a functional perspective, that is, using information processing, and we will need ways to test them that are less direct than tests of natural laws.

Nearly all our knowledge of our mind comes from using it, not understanding it. We are experts at using our minds. Our facility develops naturally and is helped along by nurture. Then we spend decades at school to further develop our ability to use our minds. But despite all this attention on using it, we think little about what it is and how it works. Just as we don’t need to understand how any machine works to use it, we don’t need to know how our mind works to use it. And we can no more intuit how it works than we can intuit how a car or TV works. We consequently take it for granted and even develop a blindness about the subject because of its irrelevance. But it is the relevant subject here, so we have to overcome this innate bias. We can’t paint a picture of a scene we won’t look at. While we have no natural understanding of it, we do know it is a construct of information managed by the brain. Understanding the physical mechanisms of the brain won’t explain the mind any more than taking a TV apart would explain TV shows, because for both the mind and TV shows the hardware is just a starting point from which information management constructs highly complex products. So the mind is less what the brain does than why it does it. It is not so much about how it physically accomplishes things as about what it is trying to accomplish. This is the non-physical, functional existence I have argued for. In fact, for us, functional existence is prior to physical existence, because knowledge itself is information or function, so we only know of physical existence as mediated through functional existence, i.e. from observations we make with our minds (i.e. “I think therefore I am”).

Knowing that functional existence is real and being able to talk about it still doesn’t explain how it works. We take understanding to be axiomatic. We use words to explain it, but they are words defined in terms of each other without any underlying explanation. For example, to understand is to know the meaning of something, to know is to have information about, information is facts or what a representation conveys, facts are things that are known, convey is to make something known to someone, meaning is a worthwhile quality or purpose, purpose is a reason for doing something, reason is a cause for an event, and cause is to induce, give rise, bring about, or make happen. If anything, causality seems like it should reduce to something physical and not mental, yet it doesn’t. But the language of the mind is not intended to explain how understanding or the mind works, just to let us use understanding and our minds. If we are to explain how understanding and other mental processes work, we will need to develop an objective frame of reference that can break mental states down into causes and effects, or we will remain trapped in a relativistic bubble.

Let’s consider which sciences study the mind directly. Neuroscience studies the brain and nervous system, but this is not direct for the same reason studying computer hardware says little or nothing about what computer software does. On the other hand, psychology and cognitive science are dedicated to the study of the mind. Psychology studies the mind as we perceive it, our experience of mind, while cognitive science studies how it works. One could say psychology studies the subjective side and cognitive science studies the objective side. Psychology divides into a variety of subdisciplines, including neuropsychology, behavioral psychology, evolutionary psychology, cognitive psychology, psychoanalysis, and humanistic psychology. They each draw on a different objective source of information. Neuropsychology studies the brain for effects on behavior and cognition. Behavioral psychology studies behavior. Evolutionary psychology studies the impact of evolution. Cognitive psychology studies mental processes like perception, attention, reasoning, thinking, problem-solving, memory, learning, language, and emotion. Psychoanalysis studies experience (but with a medical goal). Humanistic psychology studies uniquely human issues, such as free will, personal growth, self-actualization, self-identity, death, aloneness, freedom, and meaning. Cognitive science focuses on the processes that support and create the mind. Most cognitive scientists, including me, are functionalists, maintaining that the mind should be explained in terms of what it does. But science continues to be almost completely dominated by a physicalist tradition, which suggests and even claims that studying the brain will ultimately explain the mind. I have adamantly argued that function does not reduce to form, even though it needs form. And it is true that knowing the form provides many clues to the function, and it is also true that form is our only hard evidence. 
But we are still a long way from unraveling all the mechanics of neurochemistry, though rapid progress is being made. In the meantime, even without any more information than we already have at hand, there is much that we can say about the brain’s function, that is, about the mind, by taking a functional perspective on what it is doing. So cognitive science should not be merely an interdisciplinary collaboration; it should reboot science from scratch by establishing a scientific approach to studying function that can achieve a level of objectivity comparable to our paradigm for studying form. I have, so far, proposed that all of science be refounded on the ontology of form and function dualism. The prevailing paradigm, which derives as I have noted from the Deductive Nomological Model, uses function to study form, while I propose to use function to study both form and function.

One other discipline formally studies the mind: philosophy. Practiced as an independent field, general philosophy studies fundamental questions, such as the nature of knowledge, reality, and existence. But because they don’t establish an objective basis for their claims, philosophers ultimately depend on the subjective, intuitive appeal of their perspectives. For example, universality is the notion that universal facts can be discovered and is therefore understood as being in opposition to relativism. Universality and relativism assume the concepts of facts, discovery, understanding, and perception, but these assumptions are at best loosely defined and really depend on a common knowledge of what they are. Philosophy builds on common-knowledge ideas without attempting to establish an objective basis. What principally distinguishes science is the effort to establish objectivity, and the way it does this is itself studied unscientifically as the philosophy of science. It is an ironic situation that the solid foundation upon which science has presumably been built is itself unclear and ultimately pretty subjective. George Bernard Shaw said, “Those who can, do; those who can’t, teach,” and this is a theme I have been repeating. We are designed to do things but not to understand how we do them or, much less, to teach how they are done. But understanding and teaching are important to get us to the next level so we can leverage what we know in new ways. We have long been perfectly capable of practicing science without dwelling too much on its philosophical basis, but that was before we started to study the mind. To proceed, we desperately need an objective account of objectivity itself and of how to apply it to the study of both form and function. Philosophers have asked the questions and laid out the issues, but scientists now have to step up and answer them.

Philosophy of science and philosophy of mind have detailed the issues at hand from a number of directions, and, characteristically of philosophy, have failed to indicate an objective path forward. I believe we can derive the objective philosophy we need by reasoning it out from scratch using the common and scientific knowledge of which we are most confident, which I will do in the next chapter. But a brief summary of the fields is a good starting point to provide some orientation. Science was a well-established practice long before efforts were made to describe its philosophy. Auguste Comte proposed in 1848 that science proceeds through three stages, the theological, the metaphysical, and the positive. The theological stage is prescientific and cites supernatural causes. In the metaphysical stage, people use reason to derive abstract but natural forces such as gravity or nature. Finally, in the positive or scientific stage, we abandon the search for absolutes and embrace an ever-progressing refinement of facts based on empirical observations. So every theory must be guided by observed facts, which in turn can only be observed under the guidance of some theory. Thus arises the hypothesis-testing loop of the scientific method and the widely accepted view that science continually refines our knowledge of nature. Comte’s third stage developed further in the 1920s into logical positivism, the theory that only knowledge verified empirically (by observation) is meaningful. More specifically, logical positivism says that the meaning of logically defined symbols can mirror or capture the lawful relationship between an effect and its cause2. Every term or symbol in a theory must correspond to an observed phenomenon, which then provides a rigorous way to describe nature mathematically.
It was a bold assertion because it says that science derives the actual laws of nature, even though we know any given body of evidence can be used to support any number of theories, even if the simplest theory (i.e. by Occam’s razor) seems more compelling. In the middle of the 20th century, cracks began to appear in logical positivism (and its apotheosis in the DN Model, see above) as the sense of certainty promised by modernism began to be replaced by a postmodern feeling of uncertainty and continuous change. In the sciences, Thomas Kuhn published The Structure of Scientific Revolutions in 1962, which is remembered popularly for introducing the idea of paradigm shifts (though Kuhn did not coin that phrase specifically). Though Kuhn’s goal was to help science by unmasking the forces behind scientific revolutions, he inadvertently opened a door he couldn’t shut, forever ending dreams of absolutism and a complete understanding of nature and replacing them with a relativism in which potentially all truth is socially constructed. In the 1990s, postmodernists claimed all of science was a social construction in the so-called science wars. Because this seems to be true in many ways, science formally lost this battle against relativism and has continued full steam without clarifying its philosophical foundations. Again, while this is good enough to do science that studies form, it is not enough to do science that studies function. Arguably, we could and very well might develop a scientific tradition for studying function that lets us get the job done without a firm philosophical foundation either. After all, we need news you can use regardless of why it works. Maybe it will happen that way, but I personally consider the why to be the more interesting question, and because function is so much more self-referential than form, I think studying it will turn out to require understanding what it means to study it.

The philosophy of mind is studied as a survey of topics including existence (the mind-body problem), theories of mental phenomena, consciousness/qualia/self/will, and thoughts/concepts/meaning. My goal, as noted, is to establish an objectively supportable stance on these topics and on objectivity itself, which I will then use to launch an investigation into the workings of the mind. It will take some time to do all this, but as a preview I will lay out where I will land on some fundamental questions:

I endorse physicalism (i.e. minimal or supervenience physicalism), which says the mind has a physical basis, or, as philosophers sometimes say, the mental supervenes on the physical. This means that a physical duplicate of the world would also duplicate our minds. While true duplication is impossible, my point here is just that the mind draws its power entirely from physical materials. Physicalism rejects the idea of an immortal soul and Descartes’ substance dualism in which mind and body are distinct substances. Physicalism is often taken to simultaneously reject any other kind of existence, making it a physical monism, but that rejection is unnecessary. At its core physicalism just says that physical things are physical. That one might also interpret something physical from another perspective is irrelevant to physicalism.

I endorse non-reductive physicalism, which is just a fancy way of saying that things that are not physical are not physical, and in particular, that function is not form or reducible to it. More accurately, mental explanations cannot be reduced solely to physical explanations. That doesn’t mean that physical things like brains, which can carry out functions, are not physical, because they are entirely physical from a physical perspective. But if you look at brains from the perspective of what they are doing, you create an auxiliary kind of explanation, a functional one. And because explanatory perspectives are abstract, there is an unlimited number of functional perspectives (or existences) on everything. The brain is still physical; the explanations of it are not. To the extent the word “mind” is taken to be a functional perspective of what the brain is doing, it is really the union of all the explanatory perspectives the brain uses when going about its business. These functional perspectives are not mystical; they are relational, tying information to other information using math, logic, or correlation. A given thought has a form as an absolute, physical particular in a brain, but its meaning is relative, being a generalization or idealization that might refer to any number of things. Thus, “three” and “above” are not physical particulars. A thought is a functional tool that may be employed in a specific physical example but exists as an abstraction independent of the physical.

I endorse functionalism, which is the theory that mental states are more profitably viewed from the perspective of what they do rather than what they are made of, that is, in terms of their function, not their form. In my ontology of form & function dualism mental states have both kinds of existence, with many possible takes as to what their function is, but they evolved to satisfy the control function for the body, and so our efforts to understand them should take this perspective first.

I endorse the idea that consciousness is a subprocess of the brain that is designed to create a subjective theater from which centralized control of the body by the brain can be performed efficiently. All the familiar aspects of consciousness such as qualia, self, and the will are just states managed by this subprocess. As a special spoiler, I will reveal that I endorse free will, even if the universe is deterministic, which to the best of our knowledge it is not.

Finally, I endorse the idea that thoughts, concepts, and meaning are information management techniques that have both conscious and subconscious aspects, where subconscious refers to subprocesses of the brain that are supportive of consciousness, which is the most supervisory subprocess.

While this says much about where I am going, it doesn’t say how I will get there or how a properly unified philosophy of science and mind implies these things.

Deriving an Appropriate Scientific Perspective for Studying the Mind

I have made the case for developing a unified and expanded scientific framework that can cleanly address both mental and physical phenomena. I am going to focus first on deriving an appropriate scientific perspective for studying the mind, which also bears on science at large. I will follow these five steps:

1. The common-knowledge perspective of how the mind works
2. Form & function dualism: things and ideas exist
3. The nature of knowledge: pragmatism, rationalism, and empiricism
4. What makes knowledge objective?
5. Orienting science (esp. cognitive science) with form & function dualism and pragmatism

1. The common-knowledge perspective of how the mind works

Before we get all sciency, we should reflect on what we know about the mind from common knowledge. Common knowledge has much of the reliability of science in practice, so we should not discount its value. Much of it is uncontroversial and does not depend on explanatory theories or schools of thought, including our knowledge of language and many basic aspects of our existence. So what about the mind can we say is common knowledge? This brief summary just characterizes the subject and is not intended to be exhaustive. While some of the things I will assume from common knowledge are perhaps debatable, my larger argument will not depend on them.

First and foremost, having a mind means being conscious. Consciousness is our first-person (subjective) awareness of our surroundings through our senses and our ability to think and control our bodies. We implicitly trust our sensory connection to the world, but we also know that our senses can fool us, so we’re always re-sensing and reassessing. Our sensations, formally called qualia, are subjective mental states like redness, warmth, and roughness, or emotions like anger, fear, and happiness. Qualia have a persistent feel that occurs in direct response to stimuli. When not actually sensing we can imagine we are sensing, which stimulates the memory of what qualia felt like. It is less vivid than actual sensation, though dreams and hallucinations can seem pretty real. While our sensory qualia inform us of physical properties (form), our emotional qualia inform us of mental properties (function). Fear, desire, love, revulsion, etc., feel as real to us as sight and sound, though mature humans also recognize them as abstract constructions of the mind. As with sensory qualia, we can recall emotions, but again the feeling is less vivid.

Even more than our senses, we identify our conscious selves with our ability to think. We can tell that our thoughts are happening inside our heads, and not, say, in our hearts. It is common knowledge that our brains are in our heads and that brains think1, so this impression is a well-supported fact, but why do we feel it? Let’s call this awareness of our brains “encephaloception,” a subset of proprioception (our sense of where the parts of our body are) that also includes other somatosenses like pain, touch, and pressure. The main reason our encephaloception pinpoints our thoughts in our heads is that senses work best when they provide consistent and accurate information, and the truth is we are thinking with our brains. As with other internal organs, it helps us to be aware of pain, motion, impact, balance, etc. in the head and brain, as these can affect our ability to think, so having sufficient sensory awareness of our brain just makes sense. It is not just a side effect, say, of having vision or hearing in the head that we assume our thoughts originate there; it is the consistent integration of all the sensory information we have available.

But what is thinking? Loosely speaking, it is the union of everything we feel happening in our heads, but more specifically we think of it as a continuous train of thought which connects what is happening in our minds from moment to moment in a purposeful way. This can happen through a variety of modalities, but the primary one is the simulation of current events. As our bodies participate in events, the mind simultaneously simulates those events to create an internal “movie” that represents them as well as we understand them. We accept that our understanding is limited to our experience and so tends to focus on levels of detail and salient features that have been relevant to us in the past. The other modalities arise from emphasizing the use of specific qualia and/or learned skills. Painting and sculpting emphasize vision and pattern/object recognition, music emphasizes hearing and musical pattern recognition, and communication usually emphasizes language. Trains of thought using these modalities feel different from our default “movie” modality but have in common that our mind is stepping through time, trying to connect the dots so things “make sense.” Making sense is all about achieving pleasing patterns and our conscious role in spotting them.

And even above our ability to think, we consciously identify with our ability to control our bodies and, indirectly through them, the world. Though much of our talent for thought is innate, we believe the most important part is learned, the result of years of experience in the school of hard knocks. We believe in our free will to take what our senses, emotions, and memory can offer us to select the actions that will serve us best. At every waking moment, we are consciously considering and choosing our upcoming actions. Sometimes those actions are moments away, sometimes years. Once we have selected a course of action, we will, as much as possible, execute it on “autopilot,” which is to say we leverage conditioned behavior to reduce the burden on our conscious mind by letting our subconscious handle it. So we recognize that we have a conscious mind, which is just the part that is actively considering our qualia and memories to select our next actions, and a subconscious mind, which is processing our qualia and memories and performing a variety of control functions that don’t require conscious control. All of this is common knowledge from common sense, and it is also well-established scientifically.

But how does thinking actually work? What does it mean to consider and decide? Thinking seems like such an ineffable process, but we know a lot about it from common knowledge. We know that concepts are critical building blocks of thought, and we know that concepts are generalizations gleaned from grouping similar experiences together into a unit. Language itself functions by using words to invoke concepts. We each make strong associations between each word we know and a variety of concepts that word has been used to represent. Our ability to use language to communicate hinges on the idea that the same word will trigger very similar concepts in other people. Our concepts are all connected to each other through a web of relationships which reveals how the concepts will affect each other under different circumstances. This web thus reveals the function of the concept and constitutes its meaning, so its meaning, and hence its existence, is entirely functional and not physical. Its physical neural manifestation is only indirectly related and hence incidental, as the meaning could in principle be realized in different people, by another intelligent being, or even just written down. Although every physical brain contemplating any given concept will have some subtle and deep differences in its understanding of it, because the concept is fundamentally a generalization, subtle and deep characteristics are necessarily of less significance than the overall thrust.
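The claim that a concept's meaning lives in its web of relations rather than in any particular physical encoding can be illustrated with a toy sketch. Two "minds" below store the same concept under different internal labels, yet share the same relational structure; all the labels and relations are illustrative assumptions, not a claim about real neural coding.

```python
# Toy sketch: meaning as a web of relations, independent of encoding.

def meaning(web, concept):
    # On this view, a concept's meaning is just its set of relations.
    return frozenset(web[concept])

# Mind A and Mind B encode the concept under different internal labels
# (the "physical" particular differs), but the relational web is the same.
mind_a = {"dog": {("is-a", "animal"), ("can", "bark"), ("chases", "cat")}}
mind_b = {"n4471": {("is-a", "animal"), ("can", "bark"), ("chases", "cat")}}

# Functionally the same concept, despite different manifestations.
assert meaning(mind_a, "dog") == meaning(mind_b, "n4471")
print("shared meaning:", sorted(meaning(mind_a, "dog")))
```

The label is incidental, just as the text argues the neural manifestation is; what the two encodings share, and what communication depends on, is the relational structure.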

The crux of thinking, though, is what we do with concepts: we reason with them. Basically, reasoning means carefully laying out a set of related concepts and the relevant relationships that bind them and drawing logical implications. To be useful, the concepts and implications have to be correlated to a situation for which one wants to develop a purposeful strategy. In other words, when we face a situation we don’t know how to handle, it creates a problem we have to solve. We try to identify the most relevant factors of the problem by correlating the situation to all the solutions we have reasoned out in the past, which lets us narrow it down to a few key concepts and relationships. To reason, we consider just these concepts and our rules about them in a kind of cartoon of reality, and then we hope that the conclusions we drew about these generalized concepts will apply to the real situation we are addressing. In practice, it usually works so well that we think of our concepts as being identical to the things they represent, even though they are really just loose descriptive generalizations that are nothing like what they represent and, in fact, only capture a small slice of abstract functional properties about those things. But they tend to be exactly what we need to know. “Thinking outside the box” refers to the idea of contemplating uses for concepts beyond the ones most familiar to us. An infinite variety of possible alternate uses for any thing or concept always exists, and it is a good idea to consider some of them when a problem arises, but most of the time we can solve most problems well enough by just recombining our familiar concepts in familiar ways.
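As a loose illustration only (not a claim about neural mechanism), this “cartoon of reality” style of reasoning can be sketched as a toy forward-chaining inference. The concepts and rules here are hypothetical stand-ins:

```python
# Toy forward-chaining inference: concepts are atoms, relationships are rules.
# The facts and rules are hypothetical, chosen purely for illustration.
facts = {"it_is_raining", "i_am_outside"}
rules = [
    ({"it_is_raining", "i_am_outside"}, "i_will_get_wet"),
    ({"i_will_get_wet"}, "i_should_seek_shelter"),
]

def infer(facts, rules):
    """Repeatedly apply rules until no new conclusions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(facts, rules))
```

The “cartoon” quality is visible in how little the atoms capture of the real situation, yet chaining them still yields a usable plan of action.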

This much has arguably been common knowledge for thousands of years, even if not articulated as such, and so can even be subsumed under the broader heading of common sense, which includes everything intuitively obvious to normal people.2 But can civilization and culture be said to have generated trustworthy common knowledge that goes beyond what we can intuit for ourselves using common sense just by growing up? I am not referring to the common knowledge of details, e.g. historical facts, but to the common knowledge of generalities, i.e. the way things work. Here I would divide such generalities into two camps: those that have scientific support and hence can be clearly explained and demonstrated, and those that don’t but which still have broad enough acceptance to be considered common knowledge. I will consider these two camps in turn.

Our scientific common knowledge expands dramatically with each generation. We take much for granted today from physics, chemistry, and biology that was unknown a few hundred years ago. Even if we are weak in the details, we are all familiar with the scope of physical and chemical discoveries from artifacts we use every day. We know natural selection is the prime mover in evolution, causally linking biological traits to the benefits they provide. Relative to the mind specifically, we have familiarity with discoveries from neuroscience, computer science, psychology, sociology, and more that expand our insight into what the brain is up to. Although we recognize there is still much more unknown than known, we are pretty confident about a number of things. We know the mind is produced by the brain and not an ethereal force independent of the brain or body. This is scientific knowledge, as thoroughly proven from innumerable scientific experiments as gravity or evolution, and is accepted as common knowledge by those who recognize science’s capacity to increase our predictive power over the world. Those who reject science or who employ unscientific methods should read no further, as I believe the alternatives are smoke and mirrors and should not be trusted as the basis for guiding decisions.

Beyond being powered by the brain, we also now know from common knowledge that the mind traffics solely in information. We don’t need to have any idea how it manages it to see that everything that is happening in our subjective sphere is relational, just a big description of things in terms of other things. It is a large pool of information that we gather in real time and integrate both with information we have stored from a lifetime of experience and collected as instinctive intuitions from millions of years of evolution. The advent of computers has given us a more general conception of information than our parents and grandparents had. We know it can all be encoded as 0’s and 1’s, and we have now seen so many kinds of information encoded digitally that we have a common-knowledge intuition about information that didn’t exist 30 to 60 years ago.
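To make the common-knowledge point concrete, here is a minimal sketch of how very different kinds of information reduce to the same currency of 0s and 1s. This is just standard binary encoding, nothing specific to minds:

```python
# Different kinds of information reduced to the same currency: bits.
def text_to_bits(s):
    """Encode text as a string of bits via its UTF-8 bytes."""
    return "".join(f"{b:08b}" for b in s.encode("utf-8"))

def int_to_bits(n, width=8):
    """Encode a non-negative integer as a fixed-width bit string."""
    return f"{n:0{width}b}"

print(text_to_bits("hi"))  # → 0110100001101001
print(int_to_bits(42))     # → 00101010
```

Images, sound, and video are encoded the same way, just with more elaborate conventions layered on top of the bits.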

It is also common knowledge that there is something about understanding the brain and/or mind that makes it a hard problem. While everything else in the known universe can be explained with well-defined (if not perfectly fleshed-out) laws of physics and chemistry, biology has introduced incredible complexity. How has it accomplished that, and how can we understand it? The ability of living things to use feedback from natural selection, i.e. evolution, is the first piece of the puzzle. Complexity can be managed over countless generations to develop traits that exploit almost any energy source to support life better. But although this can create some very complex and interdependent systems, we have been pretty successful in breaking them down into genetic traits with pros and cons. We basically understand plants, for example, which don’t have brains per se. The control systems of plants are less complex than animal brains, but there is much we still don’t understand, including how they communicate with each other through mycorrhizal networks to manage the health of whole forests. But while we know the role brains serve and how they are wired to do it with neurons, we have only a vague idea how the neurons do it. We know that even a complete understanding of how the hundred or so neurotransmitters activate neurons isn’t going to explain it.

We know now from common knowledge that we have to confront head-on the question of what brains are doing with information to tackle the problem. And the elephant in the room is that science doesn’t recognize the existence of information. There are protons and photons, but no informatons or cogitons. What the brain is up to is still viewed strictly through a physical lens as a process reducible to particles and waves. This has always run counter to our intuitions about the mind, and now that we understand information it runs counter to our common-knowledge understanding of what the mind is really doing. So we have a gap between the tools and methods science brings to the table and the problem that needs to be solved. The solution is not to introduce informatons and cogitons to the physical bestiary, but to see information and thought in a way that makes them explainable as phenomena.

So when we think to ourselves that we “know what we know” and that it is not just reducible to neural impulses, we are on to something. That knowledge can be related verbally and so “jump” between people is proof that it is fundamentally nonphysical, although we need a physical brain to reflect on it. All ideas are abstractions that indirectly characterize real or imagined things. Our minds themselves, using the physical mechanisms of the brain, are organized and oriented so as to leverage the power this abstraction brings. We know all this — better today than ever before — but we find ourselves stymied to address the matter scientifically because abstraction has no scientific pedigree. But I am not going to ignore common sense and common knowledge, as science is wont to do, as I unravel this problem.

2. Form & Function Dualism: things and ideas exist

We can’t study anything without a subject to study. What we need first is an ontology, a doctrine about what kinds of things exist. We are all familiar with the notion of physical existence, and so to the extent we are referring to things in time and space that can be seen and measured, we share the well-known physicalist ontology. Physicalism is an ontological monism, which means it says just one kind of thing exists, namely physical things. But is physicalism a sufficient ontology to explain the mind? Die-hard natural scientists insist it is and must be, and that anything else is new-age nonsense. I am sympathetic to that view, as mysticism is not explanatory and consequently has no place in discussions about explanations. And we can certainly agree from common knowledge that there is a physical aspect, being the body of each person and the world around us. But knowing that seems to give us little ability to explain our subjective experience, which is so much more complex than the observed physical properties of the brain would seem to suggest. Can we extend science’s reach with another kind of existence that is not supernatural?

We are intimately familiar with the notion of mental existence, as in Descartes’ “I think therefore I am.” Feeling and thinking (as states of mind) seem to us to exist in a distinct way from physical things as they lack extent in space or time. Idealism is the monistic ontology that asserts that only mental things exist, and what we think of as physical things are really just mental representations. In other words, we dream up reality any way we like. But science and our own experience have provided overwhelming evidence of a persistent physical reality that doesn’t fluctuate in accord with our imagination, and this makes idealism rather untenable. But if we join the two together we can imagine a dualism between mind and matter in which both the mental and physical exist without either being reducible to the other. All religions have seized on this idea, stipulating a soul (or equivalent) that is quite distinct from the body. But no scientific evidence has been found supporting the idea that the mind can physically exist independent of the body or is in any way supernatural. But if we can extend science beyond physicalism, we might find a natural basis for the mind that could lift religion out of this metaphysical quicksand. Descartes also promoted dualism, but he got into trouble identifying the mechanism: he supposed the brain had a special mental substance that did the thinking, a substance that could in principle be separated from the body. Descartes imagined the two substances somehow interacted in the pineal gland. But no such substance was ever found and the pineal gland’s primary role is to make melatonin, which helps regulate sleep.

If the brain just operates under the normal rules of spacetime, as the evidence suggests, we need an explanation of the mind bound by that constraint. While Descartes’ substance dualism doesn’t deliver, two other forms of dualism have been proposed. Property dualism tries to separate mind from matter by asserting that mental states are nonphysical properties of physical substances (namely brains). This misses the mark too because it suggests a direct or inherent relationship between mental states and the physical substance that holds the state (the brain), and as we will see it is precisely the point that this relationship is not direct. It is like saying software is a non-physical property of hardware; while software runs on hardware, the hardware reveals nothing about what the software is meant to do.

Finally, predicate dualism proposes that predicates, being any subjects of conversation, are not reducible to physical explanations and so constitute a separate kind of existence. I will demonstrate that this is true and so hold that predicate dualism is the correct ontology science needs, but I am rebranding it as form and function dualism (just why is explained below). Sean Carroll writes,3 “Does baseball exist? It’s nowhere to be found in the Standard Model of particle physics. But any definition of “exist” that can’t find room for baseball seems overly narrow to me.” Me too. Baseball encompasses everything from an abstract set of rules to a national pastime to specific events featuring two baseball teams. Some parts have a physical correlate and some don’t, but the physical part isn’t the point. A game is an abstraction about possible outcomes when two sides compete under a set of rules. “Apple” and “water” are (seemingly) physical predicates while “three”, “red” and “happy” are not. Three is an abstraction of quantity, red of color, happy of emotion. Quantity is an abstraction of groups, color of light frequency, brightness and context, and emotion of experienced mental states. Apple and water are also abstractions; apples are fruits from certain varieties of trees and water is the liquid state of H2O, but both words are usually used generically and not to refer to a specific apple or portion of water.4 Any physical example of apple or water will fall short of any ideal definition in some ways, but this doesn’t matter because function is never the same as form; it is intentionally an abstract characterization.

I prefer form and function dualism to predicate dualism because it is both clearer and more technically correct. It is clearer because it names both kinds of things that exist. It is more correct because function is bigger than predicates. I divide function into active and passive forms. Active function uses reference, logical reasoning, and intelligence. The word “predicate” emphasizes a subject, being something that refers to something else, either specifically (definite “the”) or generally (indefinite “a”) through the ascription of certain qualities. Predicates are the subjects (and objects) of logical reasoning. Passive function, which is employed by evolution, instinct, and conditioned responses, uses mechanisms and behaviors that were previously established to be effective in similar situations. Evolution established that fins, legs, and wings could be useful for locomotion. Animals don’t need to know the details so long as they work, but the selection pressures are on the function, not the form. We can actively reason out the passive function of wings to derive principles that help us build planes. Some behaviors originally established with reason, like tying shoelaces, are executed passively (on autopilot) without active use of predicates or reasoning. Function can only be achieved in physical systems by identifying and applying information, which as I have previously noted is the basic unit of function. Life is the only kind of physical system that has developed positive feedback mechanisms capable of capturing and using information. These mechanisms evolved because they enable life to do things more competitively than it could otherwise do because predicting the future beats blind guessing. Evolution captures information using genes, which apply it either directly through gene expression (to regulate or code proteins) or indirectly through instinct (to influence the mind). 
Minds capture information using memory, which is a partially understood neural process, and then apply it through recall or recognition, which subconsciously identify appropriate memories through triggering features. But if information is captured using physical genes or neurons, what trick makes it nonphysical? That is the power of abstraction: it allows stored patterns to act as indefinite generalities that can be correlated later to new situations to provide a predictive edge. Information is created actively by using concepts to represent general situations and passively via pattern matching. Genes create proteins that do chemical pattern matching, while instinct and conditioned response leverage subconscious neural pattern matching.
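Passive pattern matching in the sense used here — a stored generalization triggered by its characteristic features — can be sketched with a toy feature-overlap matcher. The concepts and features are hypothetical stand-ins, not a model of actual neural recognition:

```python
# Toy recognition by feature overlap: each stored concept is an indefinite
# generality (a feature set); recognition picks the best partial match.
concepts = {
    "apple": {"round", "red", "edible", "grows_on_tree"},
    "ball":  {"round", "bounces", "toy"},
    "fire":  {"hot", "bright", "dangerous"},
}

def recognize(observed, concepts):
    """Return the stored concept whose features best overlap the observation."""
    return max(concepts, key=lambda c: len(concepts[c] & observed))

print(recognize({"round", "red", "edible"}, concepts))  # → apple
```

No rule is consulted and no reasoning occurs; the match falls out of brute comparison, which is the passive mode described above.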

This diagram shows how form and function dualism compares to substance dualism and several monisms. These two perspectives, form and function, are not just different ways of viewing a subject, but define different kinds of existences. Physical things have form, e.g. in spacetime, or potentially in any dimensional state in which they can have an extent. Physical systems that leverage information have both form and function, but to the extent we are discussing the function we can ignore or set aside considerations of the form because it just provides a means to an end. Function has no extent but is instead measured in terms of its predictive power. Pattern-matching techniques and algorithms implement functionality passively through brute force, while reasoning creates information actively by laying out concepts and rules that connect them. In a physical world, form makes function possible, so they coexist, but form and function can’t be reduced to each other. This is why I show them in the diagram as independent dimensions that intersect but generally do their own thing. Technically, function emerges from form, meaning that interactions of forms cause function to “spring” into existence with new properties not present in forms. But it has nothing to do with magic; it is just a consequence of abstraction decoupling information from what it refers to. The information systems are still physical, but the function they manage is not. Function can be said to exist in an abstract, timeless, nonphysical sense independent of whether it is ever implemented. This is true because an idea is not made possible because we think it; it is “out there” waiting to be thought whether we think it or not. However, as physical creatures, our access to function and the ideal realm is limited by the physical mechanisms our brains use to implement abstraction. 
We could, in principle, build a better mind, or perhaps a computer, that can do more, but any physical system will always be physically constrained and so limit our access to the infinite domain of possible ideas. Idealism is the reigning ontology across this hypothetical space of ideas, but it can’t stand alone in our physical space. And though we can’t think all ideas, we can potentially steer our thoughts in any direction, so given enough time we can potentially conceive anything.

So the problem with physicalism as it is generally presented is that form is not the only thing a physical universe can create; it can create form and function, and function can’t be explained with the same kind of laws that apply to form but instead needs its own set of rules. If physicalism had just included rules for both direct and abstract existence in the first place, we would not need to have this discussion. But instead, it was (inadvertently) conceived to exclude an important part of the natural world, the part whose power stems from the fact that it is abstracted away from the natural world. It is ironic, considering that scientific explanation (and all explanation) is itself immaterial function and not form. How can science see both the forest and the trees if it won’t acknowledge the act of looking?


A thought about something is not the thing itself. “Ceci n’est pas une pipe,” as Magritte said5. The phenomenon is not the noumenon, as Kant would have put it: the thing-as-sensed is not the thing-in-itself. If it is not the thing itself, what is it? Its whole existence is wrapped up in its potential to predict the future; that is it. However, to us, as mental beings, it is very hard to distinguish phenomena from noumena, because we can’t know the noumena directly. Knowledge is only about representations, and isn’t and can’t be the physical things themselves. The only physical world the mind knows is actually a mental model of the physical world. So while Magritte’s picture of a pipe is not a pipe, the image in our minds of an actual pipe is not a pipe either: both are representations. And what they represent is a pipe you can smoke. What this critically tells us is that we don’t care about the pipe, we only care about what the pipe can do for us, i.e. what we can predict about it. Our knowledge was never about the noumenon of the pipe; it was only about the phenomena that the pipe could enter into. In other words, knowledge is about function and only cares about form to the extent it affects function. We know the physical things have a provable physical existence — that the noumena are real — it is just that our knowledge of them is always mediated through phenomena. Our minds experience phenomena as a combination of passive and active information, where the passive work is done for us subconsciously finding patterns in everything and the active work is our conscious train of thought applying abstracted concepts to whatever situations seem to be good matches for them.

Given the foundation of form and function dualism, what can we now say distinguishes the mind from the brain? I will argue that the mind is a process in the brain viewed from its role of performing the active function of controlling the body. That’s a mouthful, so let me break it down. First, the mind is not the brain but a process in the brain. Technically, a process is any series of events that follows some kind of rules or patterns, but in this case I am referring specifically just to the information managing capabilities of the brain as mediated by neurons. We don’t know quite how they do it, but we can draw an analogy to a computer process that uses inputs and memory to produce outputs. But, as argued before, we are not so concerned with how this brain process works technically as with what function it performs because we now see the value of distinguishing functional from physical existence. Next, I said the mind is about active function. To be clear, we only have one word for mind, but might be referring to several things. Let’s call the “whole mind” the set of all processes in the brain taken from a functional perspective. Most of that is subconscious and we don’t necessarily know much about it consciously. When I talk about the mind, I generally mean just the conscious mind, which consists only of the processes that create our subjective experience. That experience has items under direct focused attention and also items under peripheral attention. It includes information we construct actively and also provides us access to much information that was constructed passively (e.g. via senses, instinct, intuition, and recollection). The conscious mind exists as a distinct process from the whole mind because it is an effective way for animals to make the kinds of decisions they need to make on a continuous basis.

3. The nature of knowledge: pragmatism, rationalism and empiricism

Given that we agree to break entities down into form and function, things and ideas, physical and mental, we next need to consider what we can know about them, and what it even means to know something. A theory about the nature of knowledge is called an epistemology. I described the mental world as being the product of information, which is patterns that can be used to predict the future. What if we propose that knowledge and information are the same thing? Charles Sanders Peirce called this epistemology pragmatism, the idea that knowledge consists of access to patterns that help predict the future for practical uses. As he put it, pragmatism is the idea that our conception of the practical effects of the objects of our conception constitutes our whole conception of them. So “practical” here doesn’t mean useful; it means usable for prediction, e.g. for statistical or logical entailment. Practical effects are the function as opposed to the form. It is just another way of saying that information and knowledge differ from noise to the extent they can be used for prediction. Being able to predict well doesn’t confer certainty like mathematical proofs; it improves one’s chances but proves nothing.

Pragmatism gets a bad rap because it carries a negative connotation of compromise. The pragmatist has given up on theory and has “settled” for the “merely” practical. But the whole point of theory is to explain what will really happen and not simply to be elegant. It is not the burden of life to live up to theory, but of theory to live up to life. When an accepted scientific theory doesn’t exactly match experimental evidence, it is because the experimental conditions are more complex than the theory’s ideal model. After all, the real world is full of imperfections that the simple equations of ideal models don’t take into account. We can potentially model secondary and tertiary effects with additional ideal models and then combine the models and theories to get a more accurate overall picture. However, in real-world situations it is often impractical to build this more perfect overall ideal model, both because the information is not available and because most situations we face include human factors, for which physical theories don’t apply and social theories are imprecise. In these situations pragmatism shines. The pragmatist, whose goal is to achieve the best prediction given real-world constraints, will combine all available information and approaches to do it. This doesn’t mean giving up on theory; on the contrary, a pragmatist will use well-supported theory to the limit of practicality. They will then supplement that with experience, which is their pragmatic record of what worked best in the past, and merge the two to reach a plan of action. Recall that information is the product of both a causative (reasoned) approach and a pattern analysis (e.g. intuitive) approach. Both kinds of information can be used to build the axioms and rules of a theoretical model. We aspire to causative rules for science because they lead to necessary conclusions, but in their absence we will leverage statistical correlations.
We associate subconscious thinking with the pattern analysis approach, but it also leverages concepts established explicitly with a causative approach. Both our informal and formal thinking are combinations, at many levels, of causation and pattern analysis. Because our conscious and subconscious minds work together in a way that appears seamless to us, we are inclined to believe that reasoned arguments are correct and not dependent on subjective (biased) intuition and experience. But we are strongly wired to think in biased ways, not because we are fundamentally irrational creatures but because biased thinking is often a more effective strategy than unbiased reason. We are both irrational and rational because both help in different ways, but we have to spot and overcome irrational biases or we will make decisions that conflict with our own goals. All of our top-level decisions have to strike a balance between intuition/experience-based (conservative) thinking and reasoned (progressive) thinking. Conservative methods let us act quickly and confidently so we can focus our attention on other problems. Progressive methods slow us down by casting doubt, but they reveal better solutions. It is the principal role of consciousness to provide the progressive element, to make the call between a tried-and-true or a novel approach to any situation. These calls are always themselves pragmatic, but if in the process we spot new causal links then we may develop new ad hoc or even formal theories, and we will remember these theories along with the amount of supporting evidence they seem to have. Over time our library of theories and their support will grow, and we will draw on them for rational support as needed.

Although pragmatism is necessary at the top level of our decision-making process, where experience and reason come together to effect changes in the physical world, it is not a part of the theories themselves, which exist independently as constructs of the mental (i.e. functional) world. We do have to be pragmatic about what theories we develop and about how we apply them, but since theories represent idealized functional solutions independent of practical concerns, the knowledge they represent is based on a narrower epistemology than pragmatism. But what is this narrower epistemology? After all, it is still the case that theories help predict the future for practical benefits. And Peirce’s definition, that our conception of the practical effects of the objects of our conception constitutes our whole conception of them, is also still true. What is different about theory is that it doesn’t speak to our whole conception of effects, inclusive of our experience, but focuses on causes and effects in idealized systems using a set of rules. Though technically a subset of pragmatism, rule-based systems literally have their own rules and can be completely divorced from all practical concerns, so for all practical purposes they have a wholly independent epistemology based on rules instead of effects. This theory of knowledge is called rationalism, and holds that reason (i.e. logic) is the chief source of knowledge. Put another way, where pragmatism uses both causative and pattern analysis approaches to create information, reason only uses the logical, causative approach, though it leverages axioms derived from both causative and pattern-based knowledge. A third epistemology is empiricism, which holds that knowledge comes only or primarily from sensory experience. Empiricism is also a subset of pragmatism; it differs in that it pushes where pragmatism pulls.
In other words, empiricism says that knowledge is created as stimuli come in, while pragmatism says it arises as actions and effects go out. The actions and effects do ultimately depend on the inputs, and so pragmatism subsumes empiricism, which is not prescriptive about how the inputs (evidence) might be used. In science, the word empiricism is taken to mean rationalism + empiricism, i.e. scientific theory and the evidence that supports it, so one can say that rationalism is the epistemology of theoretical science and empiricism is the epistemology of applied science.

Mathematics and highly mathematical physical theories are often studied on an entirely theoretical basis, with considerations as to their applicability left for others to contemplate. The study of algorithms is mostly theoretical as well because their objectives are established artificially, so they can’t be faulted for inapplicability to real-world situations. Developing algorithms can’t, in and of itself, explain the mind, because even if the mind does employ an algorithm (or constellation of algorithms), the applicability of those algorithms to the real-world problems the mind solves must be established. But iteratively we can propose algorithms and tune them so that they do align with problems the mind seems to solve. Guessing at algorithms will never reveal the exact algorithm the mind or brain uses, but that’s ok. Scientists never discover the exact laws of nature; they only find rules that work in all or most observed situations. What we end up calling an understanding or explanation of nature is really just a framework of generalizations that helps us predict certain kinds of things. Arguably, laws of nature reveal nothing about the “true” nature of the universe. So it doesn’t matter whether the algorithms we develop to explain the mind have anything to do with what the mind is “actually” doing; to the extent they help us predict what the mind will do they will provide us with a greater understanding of it, which is to say an explanation of it.

Because proposing algorithms, or outlines of potential algorithms, and then testing them against empirical evidence is entirely consistent with the way science is practiced (i.e. empiricism), this is how I will proceed. But we can’t just propose algorithms at random; we will need a basis for establishing appropriate artificial objectives, and that basis has to be related to what it is we think minds are up to. This is exactly the feedback loop of the scientific method: propose a hypothesis, test it, and refine it ad infinitum. The available evidence informs our choice of solution, and the effectiveness of the solution informs how we refine or revise it. From the high level at which I approach this subject in this book, I won’t need to be very precise in saying just how the algorithms work because that would be premature. All we can do at this stage is provide a general outline for what kinds of skills and considerations are going into different aspects of the thought process. Once we have come to a general agreement on that, we can start to sweat the details.
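The propose-test-refine loop just described can itself be sketched in miniature. Here the “evidence” is a hypothetical set of input-output observations and the “hypotheses” are simple linear rules; the point is the feedback loop, not the model:

```python
# Hypothetical observed behavior, standing in for real evidence about the mind.
observations = [(1, 3), (2, 5), (3, 7), (4, 9)]  # consistent with y = 2x + 1

def error(a, b, data):
    """How badly a candidate hypothesis y = a*x + b fits the evidence."""
    return sum((a * x + b - y) ** 2 for x, y in data)

# Propose a crude hypothesis, then repeatedly test neighboring refinements
# against the evidence, keeping whichever fits best, until none helps.
a, b = 0, 0
while True:
    best = min(((a + da, b + db) for da in (-1, 0, 1) for db in (-1, 0, 1)),
               key=lambda ab: error(ab[0], ab[1], observations))
    if error(*best, observations) >= error(a, b, observations):
        break
    a, b = best

print(a, b)  # → 2 1
```

The loop never discovers the “true” mechanism behind the data; it only converges on a generalization that predicts the evidence, which is exactly the standard of explanation argued for above.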

While my approach to the subject will be scientifically empirical, we need to remember that the mind itself is primarily pragmatic and only secondarily capable of reason (or intuition) to support that pragmatism. So my perspective for studying the mind is not itself the way the mind principally works. This isn’t a problem so long as we keep it in mind: we are using a reasoned approach to study something that itself uses a highly integrated combination of reason and intuition (basically causation and pattern). It would be disingenuous to suggest that I have freed myself of all possible biases in this quest and that my conclusions are perfectly objective; even established science can never be completely free of biases. But over time science can achieve ever more effective predictive models, which satisfies the ultimate standard for objectivity: can results be duplicated? Still, the hallmark of objectivity is not its measure but its methods: logic and reason. The conclusions one reaches through logic using a system of rules built on postulates can be provably true, contingent on the truth of the postulates, which makes logic a very powerful tool. Although postulates are true by definition from the perspective of the logical model that employs them, they have no absolute truth in the physical world because our direct knowledge of the physical world is always based on evidence from individual instances and not on generalities across similar instances. So truth in the physical world (as we see it from the mental world) is always a matter of degree, the degree to which we can correlate a given generality to a group of phenomena. That degree depends both on the clarity of the generalization and on the quality of the evidence, and so is always approximate at best, but can often be close enough to a perfect correlation to be taken as truth (for practical purposes).
Exceptions to such truths are often seen more as “shortcomings of reality” than as shortcomings of the truth since truth (like all concepts) exists more in a functional sense than in the sense of having a perfect correlation to reality.

But how can we empirically approach the study of the mind? If we can accept the idea that the mind is principally a functional entity, it is largely pointless to look for physical evidence of its existence, beyond establishing the physical mechanism (the brain) that supports it. This is because physical systems can make information management possible but can’t explain all the uses to which the information can be put, just as understanding the hardware of the internet doesn’t say anything about the information flowing through it. We must instead look at the functional “evidence.” We can never get direct evidence of function, that is, facts or physical signs of it (because function has no form), so we either need to look at physical side effects or develop a way to see “evidence” of function directly, independent of the physical. Behavior provides the clearest physical evidence of mental activity, but our more interesting behavior results from complex chains of thought and can’t be linked directly to stimulus and response. Next, we have personal evidence of our own mind from our own experience of it. This evidence is much more direct than behavioral evidence but has some notable shortcomings as well. Introspection has a checkered past as a tool for studying the mind. Early hopes that introspection might be able to qualitatively and quantitatively describe all conscious phenomena were overly optimistic, largely because they misunderstood the nature of the tool. Our conscious minds have access to information based both on causation and pattern analysis, but our conscious awareness of this information is filtered through an interpretive layer that generalizes the information into conceptual buckets. So these generalized interpretations are not direct evidence, but, like behavior, are downstream effects of information processing. Even so, our interpretations can provide useful clues even if they can’t be trusted outright.
Freud was too quick to attach significance to noise in his interpretation of dreams, as we have no reason to assume that the content of dreams serves any function. Many activities of the mind do serve a function, however, so we can study them from the perspective of those functions. As the conscious mind makes a high-level decision, it will access functionally relevant information packaged in a form that the conscious subprocess can handle, which is at least partially in the form of concepts or generalizations. These concepts are the basis of reason (i.e. rationality), so to the extent our thinking is rational, our interpretation of how we think is arguably exactly how we think (because we are conscious of it). But that extent is never exact or complete, both because our concepts draw on a vast pool of subconscious information which heavily colors how we use them, and because we also use subconscious data-analysis algorithms (most notably memory/recognition). For both of these reasons, any conscious interpretation will only be approximate and may cause us to overlook or completely misinterpret our actual motivations (which we may have other motivations to suppress).

While both behavior and introspection can provide evidence that can suggest or support models of the mind, they are pretty indirect and can’t provide very firm support for those models. But another way to study function is to speculate about what function is being performed. Functionalism holds that the defining characteristics of mental states are the functions they bring about, quite independent of what we think about those functions (introspectively) or whether we act on them (behaviorally). This is the “direct” study of function independent of the physical to which I alluded. Speculation about function, aka the study of causes and effects, is an exercise of logic. It depends on setting up an idealized model with generalized components that describes a problem. These components don’t exist physically but are exemplars that embody only the properties of their underlying physical referents that are relevant to the situation. Given the existence of these exemplars (including their associated properties) as postulates, we can then reason about what behavior we can expect from them. Within such a model, function can be understood very well or even perfectly, but it is never our expectation that these models will align perfectly with real-world situations. What we hope for is that they will match well enough that predictions made using the model will come true in the real world. Our models of the functions of mental states won’t exactly describe the true functions of those mental states (if we could ever discover them), but they will still be good explanations of the mind if they are good at predicting the functions our minds perform.

Folk explanations differ from scientific explanations in the breadth and reliability of their predictive power. While there are unlimited folk perspectives we can concoct to explain how the mind works, all of which will have some value in some situations, scientific perspectives (theories) seek a higher standard. Ideally, science can make perfect predictions, and in many physical situations it nearly does. Less ideally, science should at least be able to make predictions with odds better than chance. The social sciences usually have to settle for such a reduced level of certainty because people, and the circumstances in which they become involved, are too complex for any idealized model to describe. So how, then, can we distinguish bona fide scientific efforts in matters involving minds from pseudoscience? I will investigate this question next.

4. What Makes Knowledge Objective?

It is easier to define subjective knowledge than objective knowledge. Subjective knowledge is anything we think we know, and it counts as knowledge as long as we think it does. We set our own standard. It starts with our memory; a memory of something is knowledge of it. Our minds don’t record the past for its own sake but for its potential to help us in the future. From past experience we have a sense of what kinds of things we will need to remember, and these are the details we are most likely to commit to memory. This bias aside, our memory of events and experiences is fairly automatic and has considerable fidelity. The next level of memory is of our reflections: thoughts we have had about our experiences, memories, and other thoughts. I call these two levels of memory and knowledge detailed and summary. There is no exact line separating the two, but details are kept as raw and factual as possible while summaries are higher-order interpretations that derive uses for the details. It takes some initial analysis, mostly subconscious, to study our sensory data so we can even represent details in a way that we can remember. Summaries are a subsidiary analysis of details and other summary information performed using both conscious (reasoned) and subconscious (intuitive) methods. These details and summaries are what we know subjectively.

We are designed to gather and use knowledge subjectively, so where does objectivity come in? Objectivity creates knowledge that is more reliable and broadly applicable than subjective knowledge. Taken together, reliability and broad applicability account for science’s explanatory power. After all, to be powerful, knowledge must both fit the problem and do so dependably. Objective approaches let us create both physical and social technologies to manage both goods and services to high standards. How can we create objective knowledge that can do these things? As I noted above, it’s all about the methods. Not all methods of gathering information are equally effective. Throughout our lives, we discover better ways of doing things, and we will often use these better ways again. Science makes more of an effort to identify and leverage methods that produce better information, i.e. information with reliability and broad applicability. These methods are collectively called the “scientific method”. It isn’t one method but an evolving set of best practices. They are only intended to bring some order to the pursuit and do not presume to cover everything. In particular, they say nothing of the creative process, nor do they seek to constrain the flow of ideas. The scientific method is a technology of the mind, a set of heuristics to help us achieve more objective knowledge.

The philosophy of science is the conviction that an objective world independent of our perceptions exists and that we can gain an understanding of it that is also independent of our perceptions. Though it is popularly thought that science reveals the “true” nature of reality, it has been and must always be a level removed from reality. An explanation or understanding of the world will always be just one of many possible descriptions of reality and never reality itself. But science doesn’t seek a multitude of explanations. When more than one explanation exists, science looks for common ground between them and tries to express them as varying perspectives of the same underlying thing. For example, wave-particle duality allows particles to be described both as particles and as waves. Both descriptions work and provide explanatory power, even though we can’t imagine macroscopic objects being both at the same time. We are left with little intuitive feel for the nature of reality, which serves to remind us that the goal of objectivity is not to see what is actually there but to gain the most explanatory power over it that we can. The canon of generally accepted scientific knowledge at any point in time will be considered charming, primitive, and not terribly powerful when looked back on a century or two later, but this doesn’t diminish its objectivity or claim to success.

That said, the word “objectivity” hints at certainty. While subjectivity acknowledges the unique perspective of each subject, objectivity is ostensibly entirely about the object itself, its reality independent of the mind. If an object actually did exist, any direct knowledge we had of it would then remain true no matter which subject viewed it. This goal, knowledge independent of the viewer, is admirable but unattainable. Any information we gather about an object must always ultimately depend on observations of it, either with our own senses or using instruments we devise. And no matter how reliable that information becomes, it is still just information, which is not the object itself but only a characterization of traits with which we ultimately predict behavior. So despite its etymology, we must never confuse objectivity with “actual” knowledge of an object, which is not possible. Objectivity only characterizes the reliability of knowledge based on the methods used to acquire it.

With those caveats out of the way, a closer look at the methods of science will show how they work to reduce the likelihood of personal opinion and maximize the likelihood of reliable reproduction of results. Below I list the principal components of the scientific method, from most to least helpful (approximately) in advancing its mission of objectivity.

    1. The refinement of hypotheses. This cornerstone of the scientific method is the idea that one can propose a rule describing how kinds of phenomena will occur, and that one can test this rule and refine it to make it more reliable. While it is popularly thought that scientific hypotheses are true until proven otherwise (i.e. falsified, as Karl Popper put it), we need to remember that the product of objective methods, including science, is not truth but reliability.6 It is not so much that laws are true or can be proven false as that they can be relied on to predict outcomes in similar situations. The Standard Model of particle physics purports (with considerable success) that any two subatomic particles of the same kind are identical for all predictive purposes except for occupying a different location in spacetime.7 Maybe they are identical (despite this being impossible to prove), and this helps account for the many consistencies we observe in nature. But location in spacetime is a big wrinkle. The three-body problem remains insoluble in the general case, and solving for the movements of all astronomical bodies in the solar system is considerably more so. Predictive models of how large groups of particles will behave (e.g. for climate) will always just be models for which reliability is the measure and falsifiability is irrelevant. Also, in most real-world situations many factors limit the exact alignment of scientific theory to circumstances, e.g. impurities, the ability to acquire accurate data, and subsidiary effects beyond the primary theory being applied. Even so, by controlling the conditions adequately, we can build many things that work very reliably under normal operating conditions. Some aspects of mental function will prove to be highly predictable while others will be more chaotic, but our standard for scientific value should still be explanatory power.
    2. Scientific techniques. This most notably includes measurement via instrumentation rather than use of senses. Instruments are inherently objective in that they can’t have a bias or opinion regarding the outcome, which is certainly true to the extent the instruments are mechanical and don’t employ computer programs into which biases may have been unintentionally embedded. However, they are not completely free from biases or errors in how they are used, and also there are limits in the reliability of any instrument, especially at the limits of their operating specifications. Scientific techniques also include a wide variety of practices that have been demonstrated to be effective and are written up into standard protocols in all scientific disciplines to increase the chances that results can be replicated by others, which is ultimately the objective of science.
    3. Critical thinking. I will define critical thinking here without defense, as that requires a more detailed understanding of the mind than I have yet provided. Critical thinking is an effort to employ objective methods of thought with proven reliability while excluding subjective methods known to be more susceptible to bias. Next, I distinguish five of the most significant components of critical thinking:

3a. Rationality. Rationality is, in my theory of the mind, the subset of thinking concerned with applying causality to concepts, aka reasoning. As I noted in The Mind Matters, thinking and the information that is thought about divide into two camps: reason, which manages information derived through a causative approach, and intuition, which manages information derived through pattern analysis. Both approaches are used to some degree for almost every thought we have, but it is often useful to focus on one of these approaches as the sole or predominant one for the purpose of analysis. The value of the rational approach over the intuitive is in its reproducibility, which is the primary objective of science and the knowledge it seeks to create. Because rational techniques can be written down to characterize both starting conditions and all the rules and conclusions they imply, they have the potential to be very reliable.

3b. Inductive reasoning. Inductive reasoning extrapolates patterns from evidence. While science seeks causative links, it will settle for statistical correlations if it has to. Newton used inductive reasoning to posit gravity, which was later given a cause by Einstein’s theory of general relativity as a deformation of space-time geometry.
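Inductive extrapolation of a pattern from evidence can be illustrated with the simplest possible case, a least-squares line fit. The function name and data points below are invented for illustration; the fitted rule predicts beyond the observations without offering any causal account of them:

```python
def fit_line(points):
    """Least-squares line through observed (x, y) points:
    induction as extracting a pattern from evidence."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Evidence: three observations lying on y = 2x + 1.
slope, intercept = fit_line([(0, 1), (1, 3), (2, 5)])
# The induced pattern now extrapolates to unseen inputs, e.g. x = 10.
prediction = slope * 10 + intercept
```

The correlation says nothing about why the pattern holds, which is exactly the gap a causal theory (as general relativity did for gravity) later fills.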

3c. Abductive reasoning. Abductive reasoning seeks the simplest and most likely explanations, which is a pattern matching heuristic that picks kinds of matches that tend to work out best. Occam’s Razor is an example of this often used in science: “Among competing hypotheses, the one with the fewest assumptions should be selected”.

3d. Open-mindedness. Closed-mindedness means having a fixed set of strategies to deal with any situation. It enables a confident response in any circumstance but works badly beyond the conditions those strategies were designed to handle. Open-mindedness is an acceptance of the limitations of one’s knowledge along with a curiosity about exploring those limitations to discover better strategies. While everyone must be open-minded in situations where ignorance is unavoidable, one hopes to develop sufficient mastery over most of the situations one encounters to be able to act confidently in a closed-minded way without fear of making a mistake. While this is often possible, the scientist must always remember that perfect knowledge is unattainable and must always be alert for possible cracks in one’s knowledge. These cracks should be explored with objective methods to discover more reliable knowledge and strategies than one might already possess. By acknowledging the limits and fallibility of its approaches and conclusions, science can criticize, correct, and improve itself. Thus, more than just a bag of tricks to move knowledge forward, it is characterized by a willingness to admit to being wrong.

3e. Countering cognitive biases. More than just prejudice or closed-mindedness, cognitive biases are subconscious pattern analysis algorithms that usually work well for us but which are less reliable than objective methods. The insidiousness of cognitive biases was first exposed by Tversky and Kahneman in their 1971 paper, “Belief in the law of small numbers.”8,9 Cognitive biases use pattern analysis to lead us to conclusions based on correlations and associations rather than causative links. They are not simply inferior to objective methods, because they can account for indirect influences that objective methods can overlook. But robust causative explanations are always more reliable than associative explanations, and in practice they tend to be right where biases are wrong (where “right” and “wrong” are taken not as absolutes but as expressions of very high and low reliability).

    4. Peer review. Peer review is the evaluation of a scientific work by one or more people of similar competence to assess whether it was conducted using appropriate scientific standards.
    5. Credentials. Academic credentials attest to the completion of specific education programs. Titular credentials, publication history, and reputation add to a researcher’s credibility. While no guarantee, credentials help establish an author’s scientific reliability.
    6. Pre-registration. A recently added best practice is pre-registration, which clears a study for publication before it has been conducted. This ensures that the decision to publish is not contingent on the results, which would introduce bias.10

The physical world is not itself a rational place because reason has a functional existence, not a physical existence. So rational understanding, and consequently what we think of as truth about the physical world, depends on the degree to which we can correlate a given generality to a group of phenomena. But how can we expect a generality (i.e. hypothesis) that worked for some situations to work for all similar situations? As noted above, the Standard Model holds, with considerable success, that any two subatomic particles of the same kind are predictively identical except for their location in spacetime, which helps account for the many consistencies we observe in nature. Particles are not simply free-moving; they clump into atoms and molecules in pretty strict accordance with laws of physics and chemistry that have been elaborated pretty well. Macroscopic objects in nature or manufactured to serve specific purposes seem to obey many rules with considerably more fidelity than free-moving weather systems, a fact upon which our whole technological civilization depends. Still, in most real-world situations many factors limit the exact alignment of scientific theory to circumstances, e.g. impurities, the ability to acquire accurate data, and subsidiary effects beyond the primary theory being applied. Even so, by controlling the conditions adequately, we can build many things that work very reliably under normal operating conditions.
The question I am going to explore in this book is whether scientific, rational thought can be successfully applied to function and not just form, and specifically to the mental function comprising our minds. Are some aspects highly predictable while others remain chaotic?

We have to keep in mind just how much we take the correlation of theory to reality for granted when we move above the realm of subatomic particles. No two apples are alike, nor are any two gun parts, though Eli Whitney’s success with interchangeable parts has led us to think of them as being so. They are interchangeable once we slot them into a model or hypothesis, but in reality any two macroscopic objects have many differences between them. A rational view of the world breaks down when the boundaries between objects become unclear as imperfections mount. Is a blemished or rotten apple still an apple? What about a wax apple or a picture of an apple? Is a gun part still a gun part if it doesn’t fit? A hypothesis that is completely logical and certain will still have imperfect applicability to any real-world situation because the objects that comprise it are idealized, and the world is not ideal. But still, in many situations this uncertainty is small, often vanishingly small, which allows us to build guns and many other things that work very reliably under normal operating conditions.

How can we mitigate subjectivity and increase objectivity? More observations from more people help, preferably with instruments, which are much more accurate and bias-free than senses. This addresses evidence collection, but it is not so easy to increase objectivity over strategizing and decision-making. These are functional tasks, not matters of form, and so are fundamentally outside the physical realm and thus not subject to observation. Luckily, formal systems follow internal rules and not subjective whims, so to the degree we use logic we retain our objectivity. But this can only get us so far because we still have to agree on the models we are going to use in advance, and our preference for one model over another ultimately has subjective aspects. To the degree we use statistical reasoning, we can improve our objectivity by using computers rather than innate or learned skills. Statistical algorithms exist that are quite immune to preference, bias, and fallacy (though again, deciding what algorithm to use involves some subjectivity). But we can’t yet program a computer to do logical reasoning on a par with humans. So we need to examine how we reason in order to find ways to be more objective about it, so we can be objective when we start to study it. It’s a catch-22: we have to understand the mind before we can figure out how to understand it. If we rush in without establishing a basis for objectivity, then everything we do will be a matter of opinion. While there is no perfect formal escape from this problem, we informally overcome this bootstrapping problem with every thought through the power of assumption. An assumption, logically called a proposition, is an unsupported statement which, if taken to be true, can support other statements. All models are built using assumptions. While the model will ultimately only work if the assumptions are true, we can build the model and start to use it in the hope that the assumptions will hold up.
So can I use a model of how the mind works, built on the assumption that I was being objective, to then establish the objectivity I need to build the model? Yes. The approach is a bit circular, but that isn’t the whole story. Bootstrapping is superficially impossible, but in practice is just a way of building up a more complicated process through a series of simpler processes: “at each stage a smaller, simpler program loads and then executes the larger, more complicated program of the next stage”. In our case, we need to use our minds to figure out our minds, which means we need to start with some broad generalizations about what we are doing and then start using those, then move to a more detailed but still agreeable model and start using that, and so on. So yes, we can only start filling in the details, even regarding our approach to studying the subject, by establishing models and then running them. While there is no guarantee it will work, we can be guaranteed it won’t work if we don’t go down this path. And while this approach is not provably correct, nothing in nature can be proven; all we can do is develop hypotheses and test them. By iterating on the hypotheses and expanding them with each pass, we bootstrap them to greater explanatory power. Looking back, I have already done the first (highest-level) iteration of bootstrapping by endorsing form & function dualism and the idea that the mind consists of processes that manage information. For the next iteration, I will propose an explanation for how the mind reasons, which I will then use to support arguments for achieving objectivity.

So then, from a high level, how does reasoning work? I presume a mind that starts out with some innate information processing capabilities and a memory bank into which experience can record learned information and capabilities. The mind is free of memories (a blank slate) when it first forms but is hardwired with many ways to process information (e.g. senses and emotions). Because our new knowledge and skills (stored in memory) build on what came before, we are essentially continually bootstrapping ourselves into more capable versions of ourselves. I mention all this because it means that the framework with which we reason is already highly evolved even from the very first time we start making conscious decisions. Our theory of reasoning has to take into account the influence of every event in our past that changed our memory. Every event that even had a short-term impact on our memory has the potential for long-term effects because long-term memories continually form and affect our overall impressions even if we can’t recall them specifically.

One could view the mind as being a morass of interconnected information that links every experience or thought to every other. That view won’t get us very far because it gives us nothing to manipulate, but it is true, and any more detailed views we develop should not contradict it. But on what basis can we propose to deconstruct reasoning if the brain has been gradually accumulating and refining a large pool of data for many years? On functional bases, of which I have already proposed two: logical and statistical, which I introduced above with pragmatism. Are these the only two approaches that can aid prediction? Supernatural prophecy is the only other way I can think of, but we lack reliable (if any) access to it, so I will not pursue it further. Just knowing that however the mind might be working, it is using logical and/or statistical techniques to accomplish its goals gives us a lot to work with. First, it would make sense, and I contend that it is true, that the mind uses both statistical and logical means to solve any problem, using each to the maximum degree they help. In brief, statistical means excel at establishing the assumptions and logical means at drawing out conclusions from the assumptions.

While we can’t yet say how neurons make reasoning possible, we can say that it uses statistics and logic, and from our knowledge of the kinds of problems we solve and how we solve them, we can see more detail about what statistical and logical techniques we use. Statistically, we know that all our experience contributes supporting evidence to generalizations we make about the world. More frequently used generalizations come to mind more readily than lesser-used ones and are sometimes also associated with words or phrases, as with the concept APPLE. An APPLE could be a specimen of fruit of a certain kind, or a reproduction or representation of such a specimen, or used in a metaphor or simile, which are situations where the APPLE concept helps illustrate something else. We can use innate statistical capabilities to recognize something as an APPLE by correlating the observed (or imagined) aspects of that thing against our large database of every encounter we have ever had with APPLES. It’s a lot of analysis, but we can do it instantly with considerable confidence. Our concepts are defined by the union of our encounters, not by dictionaries. Dictionaries just summarize words, and yet words are generalizations and generalizations are summaries, so dictionaries are very effective because they summarize well. But brains are like dictionaries on steroids; our summaries of the assumptions and rules behind our concepts and models are much deeper and were reinforced by every affirming or opposing interaction we ever had. Again, most of this is innate: we generalize, memorize, and recognize whether we want to or not using built-in capacities. Consciousness plays an important role I will discuss later, but “sees” only a small fraction of the computational work our brains do for us.
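This kind of statistical recognition can be caricatured in code. The concepts, stored feature sets, and similarity measure below are all invented stand-ins; a real mind correlates vastly richer data, but the shape of the computation, matching observed features against stored encounters rather than against a dictionary definition, is the same idea:

```python
def similarity(a, b):
    """Overlap between two feature sets (Jaccard index)."""
    return len(a & b) / len(a | b)

# Hypothetical memory: each concept is just the union of remembered
# encounters (feature sets), not a definition.
encounters = {
    "APPLE": [{"round", "red", "stem", "sweet"},
              {"round", "green", "stem", "tart"}],
    "BASEBALL": [{"round", "white", "stitched", "thrown"}],
}

def recognize(observed):
    """Pick the concept whose past encounters best match the observed
    features -- a crude stand-in for innate statistical recognition."""
    def score(concept):
        return max(similarity(observed, e) for e in encounters[concept])
    return max(encounters, key=score)
```

Note that recognition degrades gracefully: a partial or odd set of features still yields the best available match, much as a wax apple still reads as an APPLE.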

Let’s move on to logical abilities. Logic operates in a formal system, which is a set of assumptions or axioms and rules of inference that apply to them. We have some facility for learning formal systems, such as the rules of arithmetic, but everyday reasoning is not done using formal systems for which we have laid out a list of assumptions and rules. And yet, the formal systems must exist, so where do they come from? The answer is that we have an innate capacity to construct mental models, which are both informal and formal systems. They are informal on many levels, which I will get into, but also serve the formal need required for their use in logic. How many mental models (models, for short) do we have in our heads? Looked at most broadly, we each have one, being the whole morass of all the information we have ever processed. But it is not very helpful to take such a broad view, nor is it compatible with our experience using mental models. Rather, it makes sense to think of a mental model as the fairly small set of assumptions and rules that describe a problem we typically encounter. So we might have a model of a tree or of the game of baseball. When we want to reason about trees or baseball, we pull out our mental model and use it to draw logical conclusions. From the rules of trees, we know trees have a trunk with ever smaller branches branching off that have leaves that usually fall off in the winter. From the rules of baseball, we know that an inning ends on the third out. Referring back a paragraph, we can see that models and concepts are the same things — they are generalizations, which is to say they are assessments that combine a set of experiences into a prototype. Though built from the same data, models and concepts offer different functional perspectives: models view the data from the inside, as the framework in which logic operates, and concepts view it from the outside, as the generalized meaning it represents.

While APPLE, TREE, and BASEBALL are individual concepts/models, no two instances of them are the same. Any two apples must differ at least in time and/or place. When we use a model for a tree (let’s call it the model instance), we customize the model to fit the problem at hand. So for an evergreen tree, for example, we will think of needles as a degenerate or alternate form of leaves. Importantly, we don’t consciously reason out the appropriate model for the given tree; we recognize it using our innate statistical capabilities. A model or concept instance is created through recognition of underlying generalizations we have stored from long experience, and then tweaked on an ad hoc basis (via further recognition and reflection) to add unique details to this instance. Reflection can be thought of as a conscious tool to augment recognition. So a typical model instance will be based on recognition of a variety of concepts/models, some of which will overlap and even contradict each other. Every model instance thus contains a set of formal systems, so I generally call it a constellation of models rather than a model instance.

We reason with a model constellation by using logic within each component model and then using statistical means to weigh the models against each other. The critical aspect of the whole arrangement is that it sets up formal systems in which logic can be applied. Beyond that, statistical techniques provide the huge amount of flexibility needed to line up formal systems with real-world situations. The whole trick of the mind is to represent the external world with internal models and to run simulations on those models to predict what will happen externally. We know that all animals have some capacity to generalize to concepts and models because their behavior depends on being able to predict the future (e.g., where food will be). Most animals, but humans in particular, can extend their knowledge faster than their own experience allows by sharing generalizations with others via communication and language, which have genetic cognitive support. And humans can extend their knowledge faster still through science, which formally identifies objective models.
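The two-step scheme just described, logic within each component model and statistical weighing across them, can be sketched as a weighted vote. The models, weights, and the winter-tree example are invented for illustration.

```python
def predict_with_constellation(situation, constellation):
    """Each (model, weight) pair votes; the weighted majority wins.
    The model call is the logical step; the weighing is the statistical step."""
    votes = {}
    for model, weight in constellation:
        outcome = model(situation)                        # logic inside the model
        votes[outcome] = votes.get(outcome, 0) + weight   # statistical weighing
    return max(votes, key=votes.get)

# Two overlapping, partly contradictory models of whether a tree will have leaves.
deciduous_model = lambda s: "bare" if s["season"] == "winter" else "leafy"
evergreen_model = lambda s: "leafy"   # needles treated as degenerate leaves

# Weights reflect how strongly we recognized the tree as deciduous.
constellation = [(deciduous_model, 0.8), (evergreen_model, 0.2)]
answer = predict_with_constellation({"season": "winter"}, constellation)
```

Note that the contradiction between the component models is not an error to be eliminated; it is resolved statistically, which is what lets rigid formal systems fit a messy world.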

So what steps can we take to increase the objectivity of what goes on in our minds, which has some objective elements in its use of formal models but also many subjective elements that help form and interpret those models? Devising software that could run mental models would help because it could avoid fallacies and guard against biases. It would still ultimately need to prioritize using preferences, which are intrinsically subjective, but we could at least try to be careful and fair in setting them up. And although such software could guard against the abuses of bias, we have to remember that all generalizations are a kind of bias, being arguments for one way of organizing information over another. We can’t yet write software that can manage concepts or models, but machine learning algorithms, which are statistical in nature, are advancing quickly. They are becoming increasingly generalized and behave in ever more “clever” ways. Since concepts and models are themselves statistical entities at their core, we will need to leverage machine learning as a starting point for software that simulates the mind.

Still, there is much we can do to improve our objectivity of thought short of replacing ourselves with machines, and science has been refining methods for it from the beginning. Science’s success depends critically on its objectivity, so it has long tried to reject subjective biases. It does this principally by cultivating a culture of objectivity. Scientists try to put opinion aside and develop hypotheses in response to observations. They then test them with methods that can be independently confirmed. Scientists also use peer review to increase independence from subjectivity. But what keeps peers from being subjective? In his 1962 classic, The Structure of Scientific Revolutions12, Thomas Kuhn noted that even a scientific community that considers itself objective can become biased toward existing beliefs and will resist shifting to a new paradigm until the evidence becomes overwhelming. This observation inadvertently opened a door through which postmodern deconstructionists launched the science wars, an argument that sought to undermine the objective basis of science by calling it a social construction. To some degree this is undeniable, which has left science with a desperate need for a firmer foundation. The refutation science has fallen back on for now was best put by Richard Dawkins, who noted in 2013 that “Science works, bitches!”13. Yes, it does, but until we establish why, we are blustering much like the social constructionists. The reason science works is that scientific methods increase objectivity while reducing subjectivity and relativism. It doesn’t matter that they don’t (and in fact can’t) eliminate subjectivity; all that matters is that they reduce it, which distinguishes science from social construction by directing it toward goals. Social constructions go nowhere, but science creates an ever more accurate model of the world.
So, yes, science is a social construction, but one that continually moves closer to truth, if truth is defined in terms of knowledge that can be put to use. In other words, from a functional perspective, truth just means increasing the amount and quality of useful information. It is not enough for scientific communities to assume that best efforts will produce objectivity; we must also discover how preferences, biases, and fallacies can mislead the whole community. Tversky and Kahneman did groundbreaking work exposing the extent of cognitive biases in scientific research, most notably in their 1971 paper, “Belief in the law of small numbers.”14,15 Beyond just being aware of biases, scientists should not have to work in situations with a vested interest in specific outcomes. This can potentially happen in both public and private settings, but is more commonly a problem when science is used to justify a commercial enterprise.

5. Orienting science (esp. cognitive science) with form & function dualism and pragmatism

The paradigm I am proposing to replace physicalism, rationalism, and empiricism is a superset of them. Form & function dualism embraces everything physicalism stands for but doesn’t exclude function as a form of existence. Pragmatism embraces everything rationalism and empiricism stand for but also includes knowledge gathered from statistical processes and function.

But wait, you say, what about biology and the social sciences: haven’t they been making great progress within the current paradigm? Well, they have been making great progress, but they have been doing it using an unarticulated paradigm. Since Darwin, biology has pursued a function-oriented approach. Biologists examine all biological systems with an eye to the function they appear to be serving, and they consider the satisfaction of function to be an adequate scientific justification, but it isn’t one under physicalism, rationalism, or empiricism. Biologists cite Darwin and evolution as justification for this kind of reasoning, but that doesn’t make it science. The theory of evolution is unsupportable under physicalism, rationalism, and empiricism alone, but instead of acknowledging this metaphysical shortfall, some scientists just ignore evolution and reasoning about function while others embrace it without being overly concerned that it falls outside the scientific paradigm. Evolutionary function occupies a somewhat confusing place in reasoning about function because it is not teleological: evolution is not directed toward an end or shaped by a purpose but is a blind process without a goal. But this is irrelevant from an informational standpoint because information never directs toward an end anyway; it just helps predict. Goals are artifacts of formal systems, and so contribute to logical but not statistical information management techniques. In other words, goals and logic are imaginary constructs; they are critical for understanding the mind but can be ignored for studying evolution and biology, which has allowed biology to carry on despite this weakness in its foundation.

The social sciences, too, have been proceeding on an unarticulated paradigm. Officially, they try to stay within the bounds of physicalism, rationalism, and empiricism, but the human mind introduces a black box, which is what scientists call a part of a system that is studied entirely through its inputs and outputs without any attempt to explain the inner workings. Some attempts have been made to explain it. Pavlov and Skinner proposed that behaviorism could explain the mind as nothing more than operant conditioning, which sounded good at first but didn’t explain all that minds do. Chomsky refuted it in a rebuttal to Skinner’s Verbal Behavior by explaining how language acquisition leverages innate linguistic talents16. And Piaget extended the list of innate cognitive skills by developing his staged theory of intellectual development. So we now have good reason to believe the mind is much more than conditioned behavior and employs reasoning and subconscious know-how. But that is not the same thing as having an ontology or epistemology to support it. Form & function dualism and pragmatism give us the leverage to separate the machine (the brain) from its control (the mind) and to dissect the pieces.

Expanding the metaphysics of science has a direct impact across science, not just regarding the mind. First, it finds a proper home for the formal sciences in the overall framework. As Wikipedia says, “The formal sciences are often excluded as they do not depend on empirical observations.” Next, and critically, it provides a justification for the formal sciences to be the foundation of the other sciences, which depend on mathematics, not to mention logic and hypotheses themselves. The truth is that physicalism, rationalism, and empiricism offer no metaphysical justification for invoking the formal sciences in their support. With my paradigm, the justification becomes clear: function plays an indispensable role in the way the physical sciences leverage generalizations (scientific laws) about nature. In other words, scientific theories are from the domain of function, not form. Next, it explains the role evolutionary thinking is already playing in biology, because it reveals how biological mechanisms use information stored in DNA to control life processes through feedback loops. Finally, this expanded framework will ultimately let the social sciences shift from black boxes to knowable quantities.

But my primary motivation for introducing this new framework is to provide a scientific perspective for studying the mind, which is the domain of cognitive science. It will elevate cognitive science from a loose collaboration of sciences to a central role in fleshing out the foundation of science. Historically the formal sciences have been almost entirely theoretical pursuits because formal systems are abstract constructs with no apparent real-world examples. But software and minds are the big exceptions to this rule and open the door for formalists to study how real-world computational systems can implement formal systems. Theoretical computer science is a well-established formal treatment of computer science, but there is no well-established formal treatment for cognitive science, although the terms theoretical cognitive science and computational cognitive science are occasionally used. Most of what I discuss in this book is theoretical cognitive science because most of what I am doing is outlining the logic of minds, human or otherwise, but with a heavy focus on the design decisions that seem to have impacted earthly, and especially human, minds. Theoretical cognitive science studies the ways minds could work, looking at the problem from the functional side, and leaves it as a (big) future exercise to work out how the brain actually brings this sort of functionality to life.

It is worth noting here that we can’t conflate software with function: software exists physically as a series of instructions, while function exists mentally and has no physical form (although, as discussed, software and brains can produce functional effects in the physical world and this is, in fact, their purpose). Drew McDermott (whose class I took at Yale) characterized this confusion in the field of AI like this (as described by Margaret Boden in Mind as Machine):

A systematic source of self-deception was their common habit (made possible by LISP: see 10.v.c) of using natural-language words to name various aspects of programs. These “wishful mnemonics”, he said, included the widespread use of “UNDERSTAND” or “GOAL” to refer to procedures and data structures. In more traditional computer science, there was no misunderstanding; indeed, “structured programming” used terms such as GOAL in a liberating way. In AI, however, these apparently harmless words often seduced the programmer (and third parties) into thinking that real goals, if only of a very simple kind, were being modelled. If the GOAL procedure had been called “G0034” instead, any such thought would have to be proven, not airily assumed. The self-deception arose even during the process of programming: “When you [i.e. the programmer] say (GOAL… ), you can just feel the enormous power at your fingertips. It is, of course, an illusion” (p. 145). 17

This raises the million-dollar question: if an implementation of an algorithm is not itself function, where is the function, i.e. real intelligence, hiding? I am going to develop the answer to this question as the book unfolds, but the short answer is that information management is a blind watchmaker both in evolution and in the mind. That is, from a physical perspective the universe can be thought of as deterministic, so there is no intelligence or free will. But the main thrust of my book is that this doesn’t matter, because algorithms that manage information are predictive, and this capacity is equivalent to both intelligence and free will. So if procedure G0034 is part of a larger system that uses it to effectively predict the future, it can fairly be called by whatever functional name describes this aspect. Such mnemonics are not actually wishful. It is no illusion that the subroutines of a self-driving car that get it to its destination in one piece wield enormous power and achieve actual goals. This doesn’t mean we are ready to program goals at the level human minds conceive them (and certainly not UNDERSTAND!), but function, i.e. predictive power, can be broken down into simple examples and implemented using today’s computers.

What are the next steps? My main point is that we need to start thinking about how minds achieve function and stop thinking that a breakthrough in neurochemistry will magically solve the problem. We have to solve the problem by solving the problem, not by hoping a better understanding of the hardware will explain the software. While the natural sciences decompose the physical world from the bottom up, starting with subatomic particles, we need to decompose the mental world from the top down, starting (and ending) with the information the mind manages.

An Overview of What We Are

[Brief summary of this post]

What are we? Are we bodies or minds or both? Natural science tells us with fair certainty that we are creatures, one type among many, who evolved over the past few billion years in an entirely natural and explainable way. I certainly endorse the broad scientific consensus, but this only confirms bodies, not minds. Natural science can’t yet confirm the existence of minds; we can observe the brain, by eye or with instruments, but we can’t observe the mind. Everything we know (or think we know) about the mind comes from one of two sources: our own experience or hearsay. However comfortable we are with our own minds, we can’t prove anything about the experience. Similarly, everything we learn about the world from others is still hearsay, in the sense that it is information that can’t be proven. We can’t prove things about the physical world; we can only develop pretty reliable theories. And knowledge itself, being information and the ability to apply it, only exists in our minds. Some knowledge appears instinctively, and some is acquired through learning (or so it seems to us). Beyond knowledge, we possess senses, feelings, desires, beliefs, thoughts, and perspectives, and we are pretty sure we can recognize these things in others. All of these mental words mean something about our ability to function in the world, and have no physical meaning in and of themselves. And not incidentally, we also have physical words that let us understand and interact with the physical world even though these words are also mental abstractions, being generalizations about kinds or instances of physical phenomena. We can comfortably say (but can’t prove) that we have a very good understanding of a mentally functional existence that is quite independent of our physical existence, an understanding that is itself entirely mentally functional and not physical. It is this mentally functional existence, our mind, that we most strongly identify with.
When we are discussing any subject, the “we” doing the discussing is our minds, not our bodies. While we can identify with our bodies and recognize them as an inseparable possession, they, including our brains, are at least logically distinct entities from our minds. We know (from science) that the brain hosts our mind, but that is irrelevant to how we use our minds (excepting issues concerning the care of our heads and bodies) because our thoughts are abstractions not bound (except through indirect reference) to the physical world.

Given that we know we are principally mental beings, i.e. that we exist more from the perspective of function than form, what can we do to develop an understanding of ourselves? All we need to do is approach the question from the perspective of function rather than form. We don’t need to study the brain or the body; we need to study what they do and why. Just as convergent evolution caused eyes to evolve independently an estimated 50-100 times, all our brain functions evolved because of their value rather than because of their mechanism. Function drives evolution, not form, although form constrains what can be achieved.

But let’s consider the form for a moment before we move on to function. Observations of the brain will eventually reveal how it works in the same way dissection of a computer would. This will illuminate all the interconnections, and even which areas specialize in what kind of tasks. Monitoring neural activation alone could probably even get to the point where one could predict the gist of our thoughts with fair accuracy by correlating areas of neural activity to specific memories and mental states. But that would still be a parlor trick because such a physical reading would not reveal the rationale for the logical relationships in our cognitive models. The physical study of the brain will reveal much about the constraints of the system (the “hardware”), including signal speeds, memory storage mechanisms, and areas of specialized functions, but could it trace our thoughts (the “software”)? To extend the computer analogy, one can study software by doing a memory dump, so a similar memory reading ability for brains could reveal thoughts. But it is not enough to know the software or the thoughts; one needs to know what function is being served, i.e. what the software or thoughts do. A physical examination can’t reveal that; it is a mental phenomenon that can be understood only by reasoning out what it does from a higher-level (generalized) perspective and why. One can figure out what software does from a list of instructions, but one can’t see the larger purposes being served without asking why, which moves us from form to function, from physical to mental. So a better starting point is to ask what function is being served, from which one can eventually back out how the hardware and software do it. Since we are far from being able to decode the hardware or software of the brain (“wetware”) in much detail anyway, I will adopt this more direct functional approach.

From the above, we have finally arrived at the question we need to ask: What function do minds serve? The answer, for which I will provide a detailed defense later on, is that the function of the brain is to provide centralized, coordinated control of the body, and the function of the conscious mind is to provide centralized, coordinated control of the brain. That brains control bodies is, by now, not a very controversial stance. The rest of the body provides feedback to the brain, but the brain ultimately decides. The gut brain does a lot of “thinking” for itself, passing along its hungers and fears, but it doesn’t decide for you. That the conscious mind controls the brain is intuitively obvious but hard to prove given that our only primary information source about the mind is the mind itself, i.e. it is subjective instead of objective. However, if we work from the assumption that the brain controls the body using information management, which is to say the application of algorithms on data, then we can define the mind as what the brain is doing from a functional perspective. That is, the mind is our capacity to do things.

The conscious mind, however, is just a subset of the mind, specifically including everything in our conscious awareness, from sensory input to memories, both at the center of our attention and in a more peripheral state of awareness. We feel this peripheral awareness both because we can tell it is there without dwelling on it and because we often do turn our attention to it, at which point it happily becomes the center. The capacity of our mind to do things is much larger than our conscious awareness, including all things our brains can do for which we don’t consciously sense the underlying algorithm. Statistically, this includes almost everything our brains do. The things we use our minds to do which we can’t explain are said to be done subconsciously, by our subconscious mind. We only know the subconscious mind is there by this process of elimination: we can do it, but we are not aware of how we do it or sometimes that we are doing it at all.

For example, we can move, talk, and remember using our (whole) mind, but we can’t explain how we do these things because they are controlled subconsciously; the conscious mind just pulls the strings. Any explanations I might attempt of the underlying algorithms behind these actions sound like they are at the puppeteer level: I tell my body to move, I use words to talk, I remember things by thinking about them. In short, I have no idea how I really do it. The explanations or understandings available to the conscious mind develop independently of the underlying subconscious algorithms. Our conscious understanding is based only on the information available to conscious awareness. While we are aware of much of the sensory data used by the brain, we have limited access to the subconscious processing performed on that data, and consequently limited access to the information it contains. What ends up happening is that we invent our own view of the world, our own way of understanding it, using only the information we can access through awareness and the subconscious and conscious skills that go with it. What this means is that our whole understanding of the world (including ourselves) is woven out of information we derive from our awareness and not from the physical world itself, which we only know second-hand. Like a sculptor, we build a model of the world, as similar to it as we can make it feel, but at all times just a representation and not the real thing. While we evolved to develop this kind of understanding, it depends heavily on the memories we record over our lifetimes (both consciously accessible and subconsciously not). As the mind develops from infancy, it acquires information from feedback that it can put to use, and it thinks of this information as “knowledge” because it works, i.e. it helps us to predict and consequently to control. To us, it seems that the mind has a hotline to reality.
Actually, though, the knowledge is entirely contextual within the mind, not reality itself but only representative of it. But by representing it the contexts or models of the conscious mind arise: the conscious mind has no choice but to believe in itself because that is all it has.

Speaking broadly, subconscious algorithms perform specialized informational tasks like moving a limb, remembering a word, seeing a shape, and constructing a phrase. Consciously, we don’t know how they do it. Conscious algorithms do more generalized tasks, like thinking of ways to find food or making and explaining plans. We know how we do these things because we think them through. Conscious algorithms provide centralized, coordinated control of subconscious (and other conscious) algorithms. Only the top layer of centralized control is done consciously; much can be done subconsciously. For example, all our habitual behavior starts under conscious development and is then delegated to the subconscious going forward. As the central controller, though, the buck stops with the conscious mind; it is responsible for reviewing and approving, or, in the case of habitual behavior, preapproving, all decisions. Some recent studies impugn this decisive capacity of the conscious mind with evidence that we make decisions before we are consciously aware that we have done so.1 But that doesn’t undermine the role of consciousness; it just demonstrates that to operate with speed and efficiency we can preapprove behaviors. Ideally, the conscious mind can make each sort of decision just once and self-program to reapply that decision as needed going forward without having to repeat the analysis. It is like a CEO who never pulls the trigger himself but has others do it for him, while continually monitoring to see that things are being done right.
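The preapproval mechanism described above resembles caching in software: deliberate once, store the decision, and reapply it without repeating the analysis. Here is a minimal sketch, with all names hypothetical.

```python
class ConsciousController:
    """Toy model of conscious preapproval: slow deliberation runs once,
    then the stored decision is reapplied as a 'habit'."""

    def __init__(self, deliberate):
        self.deliberate = deliberate   # slow, conscious analysis
        self.habits = {}               # preapproved decisions

    def decide(self, situation):
        if situation in self.habits:   # fast, delegated path
            return self.habits[situation]
        decision = self.deliberate(situation)   # slow path, runs once
        self.habits[situation] = decision       # preapprove for next time
        return decision

calls = []
def deliberate(situation):
    calls.append(situation)            # track how often we truly deliberate
    return f"response-to-{situation}"

mind = ConsciousController(deliberate)
mind.decide("red-light")
mind.decide("red-light")               # served from habit; no new deliberation
```

The cache is consistent with the studies mentioned above: a "decision" can surface before deliberate awareness precisely because it was approved earlier and is now being replayed.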

I thus conclude that the conscious mind is a subprocess of the mind that exists to make decisions, and that it does so using perspectives called knowledge that are only meaningful locally (i.e. in the context of the information under its management), and that these contexts are distilled from information fed to it by subconscious processes. The conscious mind is separate from the subconscious mind for practical reasons. The algorithmic details of subconscious tasks are not relevant to centralized control. We subconsciously metabolize, pump blood, breathe, blink, balance, hear, see, move, etc. We have conscious awareness of these things only to the degree we need in order to make decisions. For example, we can’t control metabolism and heartbeat (at least without biofeedback), and we consequently have no conscious awareness of them. Similarly, we don’t control what we recognize. Once we recognize something, we can’t see it as something else (unless an alternate recognition occurs). But we need to be aware of what we recognize because it affects our decisions. We breathe and blink automatically, but we are also aware we are doing it so we can sometimes consciously override it. So the constant stream of information from the subconscious mind that flows past our conscious awareness is just the set we need for high-level decisions. The conscious mind is unaware of how the subconscious does these things because this extraneous information would overly complicate its task, slowing it down and probably compromising its ability to lead. We subjectively know the limits of our conscious reach, and we can also see evidence of all the things our brains must be doing for us subconsciously. I suspect this separation extends to the whole animal kingdom, which consists almost entirely of bilateral animals with a single brain.
Octopuses are arguably an exception as they have separate brains for each arm, but the central octopus brain must still have some measure of high-level control over them, perhaps in the form of an awareness, similar to our consciousness. Whether each arm also has some degree of consciousness is an open question.2 Although a separate consciousness process is not the only possible solution to centralized control, it does appear to be the solution evolution has favored, so I will take it as my working assumption going forward.
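The filtering described above can be sketched as a simple stream filter: subconscious processes emit a constant stream of events, but only those relevant to high-level decisions surface into awareness. The event kinds and the relevance set are invented for illustration.

```python
def surface_to_awareness(stream, decision_relevant):
    """Keep only the events the conscious mind needs for its decisions;
    everything else stays subconscious."""
    return [event for event in stream if event["kind"] in decision_relevant]

subconscious_stream = [
    {"kind": "heartbeat", "value": 72},               # regulated, never surfaces
    {"kind": "metabolism", "value": "nominal"},       # no awareness at all
    {"kind": "recognition", "value": "apple ahead"},  # affects decisions
    {"kind": "breathing", "value": "steady"},         # overridable, so it surfaces
]
aware_of = surface_to_awareness(subconscious_stream, {"recognition", "breathing"})
```

The design choice mirrors the argument in the text: the filter exists not because the hidden events are unimportant, but because passing them upward would slow and complicate the decision-making layer.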

One can further subdivide the subconscious mind along functional lines into what are called modules, which are specialized functions that also seem to have specialized physical areas of the brain that support them. Steven Pinker puts it this way:

The mind is what the brain does; specifically, the brain processes information, and thinking is a kind of computation. The mind is organized into modules or mental organs, each with a specialized design that makes it an expert in one arena of interaction with the world. 3
The mind is a set of modules, but the modules are not encapsulated boxes or circumscribed swatches on the surface of the brain. The organization of our mental modules comes from our genetic program, but that does not mean that there is a gene for every trait or that learning is less important than we used to think.4

Positing that the mind has modules doesn’t tell us what they are or how they work. Machines are traditionally constructed from parts that serve specific purposes, but design refinements (e.g. for miniaturization) can lead to a streamlining of parts that are fewer in number but holistically serve more functions. Having been streamlined by countless generations, the modules of the mind can’t be as easily distinguished along functional boundaries as the other parts of the body, because they all perform information management in a highly collaborative way. But if we accept that any divisions we make are preliminary, we can get on with it without getting too caught up in the details. Drawing such lines is reverse engineering: evolution engineered us, and explaining what it did is reverse engineering. Ideally one learns enough from reverse engineering to build a duplicate mechanism from scratch. But living things were “designed” by trillions of small interactions spread over billions of years. We can’t identify those interactions individually, and in any event, natural selection doesn’t select for individual traits but for entire organisms, so even with all the data one would be hard-pressed to be sure what caused what. However, if one generalizes, that is, if one applies statistical reasoning, one can distinguish functional advantages of one trait over another. And considering that all knowledge and understanding are the product of such generalizing, it is a reasonable strategy. Again, it is not the objective of knowledge to describe things “as they are,” only to create models or perspectives that abstract or generalize certain features. So we can and should try to subdivide the mind into modules and guess how they interact, with the understanding that there is more than one way to skin this cat and that greater clarity will come with time.

Subdividing the mind into consciousness and a number of subconscious components will do much to elucidate how the mind provides its centralized control function, but the next most critical aspect to consider is how it manages information. Information derives from the analysis of data, the separation of useful data (the wheat) from noisy data (the chaff). Our bodies use at least two physical mechanisms to record information: genes and memory. Genes are nature’s official book of record, and many mental functions have extensive instinctive support encoded by genes. We have fully sequenced our genes but have identified the functions of only some of them. Genes either code for proteins or help or regulate those that do. Their function can be viewed narrowly, as a biochemical role, or more broadly, as the benefit conferred on the organism. We are still a long way from connecting genes to their biochemical roles, and further still from connecting them to benefits. Even with good explanations for everything, questions will always remain, because billions of years of subtlety are coded into genes, and models for understanding invariably generalize that subtlety away.

Memory is an organism’s book of record, responsible for preserving any information it gleans from experience, a process also called learning. We don’t yet understand the neurochemical basis of memory, though we have identified some of the chemicals and pathways involved. Nurture (experience) is often steered by nature (instinct) to develop memory. Some of our instinctive skills work automatically without memory, but mastery of a learned behavior requires leveraging memory. We are naturally inclined to learn to walk and talk but are born with no memory of steps or words. So we follow our genetic inclinations, and through practice we record models in memory that help us perform the behaviors reliably.

Genes and memory store information of completely incompatible types and formats. Genetic information encodes chemical structures (either mRNA or proteins), which translate to function mostly through proteins and gene regulation. Memory encodes objects, events, and other generalizations, which translate to function through indirection, mostly by correlating memory with reality. Genetic information is physical and is mechanically translated to function; remembered information is mental and is indirectly or abstractly translated to function. Both ultimately get the job done, but the mind starts out with no memory, as a tabula rasa (blank slate), and assembles and accumulates memory as a byproduct of cogitation. Many algorithmic skills, like vision processing, are genetically prewired, but on-the-job training leverages memory (e.g. recognition of specific objects). In summary, genes carry information that travels across generations, while memory carries information transient to the individual.

I mentioned before that culture is another reservoir of information, but it doesn’t use an additional biological mechanism. While culture depends heavily on our genetic nature, significantly on language, we reserve the word culture for the additions we make beyond our nature and ourselves. Language is an innate skill; a group of children with no language can create a complete vocabulary and grammar themselves within a few years. Therefore, cultural information is not stored in genes but only in memory, and it is also stored in artifacts as a form of external memory. Each of us forms a unique set of memories based on our own experience and our exposure to culture. What an apple is to each of us is a unique derivation of our lifetime exposure to apples, but we all share general ideas (knowledge) about what one can do with apples. We create memories of our experiences using feedback we ourselves collect. Our memory of culture, on the other hand, is partially based on our own experiences and partially on the underlying cultural information others created. Cultural institutions, technologies, customs, and artifacts have ancient roots and continually evolve. Culture extends our technological and psychological reach, providing new ways to control the world and understand our place in it. While cultural artifacts mediate much of the transmission of culture, most culture is acquired from direct interaction with other people via spoken language or other activities. Culture is just a thin veneer sitting on top of our individual memories, but it is the most salient part to us because it encodes so much of what we can share.

To summarize so far, we have conscious and subconscious minds that manage information using memory. The conscious mind is distinct from the subconscious as the point where relevant information is gathered for top-level centralized control. But why are conscious minds aware? Couldn’t our top-level control process be unaware and zombie-like? No, it could not, and the analogy to zombies or robots reveals why. While we can imagine an automaton performing a task effectively without consciousness, as indeed some automated machines do, we also know that they lack the wherewithal to respond to unexpected circumstances. In other words, we expect zombies and robots to have rigid responses and to be slow or ineffective in novel situations. This intuition we have about them results from our belief that simple tasks can be automated, but very general tasks require generalized thinking, which in turn requires consciousness. I’m going to explain why this intuition is sound and not just a bias, and in the process we will see why the consciousness process must be aware of what it is doing.

I have so far described the consciousness process as a distinct subprocess of the mind, one that is supplied just the information relevant to high-level decisions by a number of subconscious processes, many of them sensory but also memory, language, spatial processing, etc. Its task is to make high-level decisions as efficiently and efficaciously as possible. I can’t prove that this design is the only possible way of doing things, but it is the way the human mind is set up. And I have spoken in general about how knowledge in the mind is contextual and is not identical to reality but only representative of it. Now I am going to look more closely at how that representative knowledge causes a mind to “believe in itself” and consequently become aware. It is because we create virtual worlds (called mental models, or models for short) in our heads that look the same as the outside world. We superimpose these on the physical world and correlate them so closely that we can usually ignore the distinction. But they could not be more different. One of them is out there, the other in here. One exists only physically, the other only mentally (albeit with the help of a physical computational mechanism, the brain). One is detailed down to atoms and then quarks, while the other is a network of generalizations with limited detail but extensive association. For this reason, a model can be thought of as a simplified, cartoon-like representation5 of physical reality. Within the model, one can perform simple, logical operations on this abridged representation to make high-level decisions. Our minds are very handy with models: we manage them mostly subconsciously and can recognize them much the same way we recognize objects, automatically fitting the world to a constellation of models using model recognition.

So the approach consciousness uses to make top-level decisions is essentially to run simulations: it builds models that correlate well to physical conditions and then projects those models into the future to simulate what will happen. Consciousness includes models of future possibilities and models of current and past experiences as we observed them. We can’t remember the past as it actually was, only how we experienced it through our models. All our knowledge is relative to these models, which in turn relate indirectly to physical reality. But where does awareness fit in? Awareness is just the data managed by this process. We are aware of all the information relevant to top-level decisions because our conscious selves are this consciousness process in the brain. Not all the data within our awareness is treated equally. Since much more information is sensed and recognized than is needed for decisions, the data is funneled down further through an attention process that focuses on just select items in consciousness.6 As I noted before, we can apply our focusing power to anything within our conscious awareness at will to pull it into attention, but our subconscious attention process continually identifies noteworthy stimuli for us to focus on, and it does so by “listening” for signals that stand out from the norm. We know from experience that although we are aware of a lot of peripheral sensory information and peripheral thoughts floating around in our heads at any given moment, we can only actively think about one thing at a time, in what seems to us a train of thought in which one thought follows another. This linear, plodding approach to top-level decision making ensures that the body will take just one coordinated action at a time, so we don’t have to compete with ourselves like a committee every time we do something.
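The attention funnel described above, listening for signals that stand out from the norm and focusing on one at a time, can be sketched as a toy selection rule (this is my own simplification with invented names, not the author’s mechanism): among the many signals concurrently in awareness, pick the one that deviates most from the baseline.

```python
def most_salient(signals):
    """Pick the signal that stands out most from the norm (the baseline)."""
    baseline = sum(signals.values()) / len(signals)
    return max(signals, key=lambda name: abs(signals[name] - baseline))

# Many things are in awareness at once...
awareness = {"hum of fridge": 0.2, "breeze": 0.3, "sudden siren": 0.9}

# ...but attention funnels them to a single focus: a serial train of thought.
focus = most_salient(awareness)
```

Everything in the dictionary is “in awareness,” but only one item at a time becomes the focus, mirroring the serial train of thought the paragraph describes.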

Let’s think again about whether minds could be robotic. Self-driving cars, for example, are becoming increasingly capable of executing learned behaviors, and even of expanding their proficiency dynamically, without any need for awareness, consciousness, reasoning, or meaning. But even a very good learned behavior falls far short of the range of responses that animals need to compete in an evolutionary environment. Animals need a flexible ability to assess and react to situations in a general way, that is, by considering a wide range of past experience. The modeling approach I propose for consciousness can do that. If we programmed a robot to use this approach, it would behave, both internally and externally, as if it were aware of the data presented to it, which is wholly analogous to what we do. It would have been programmed with a consciousness process that considers access to data “awareness.” Could we conclude that it had actually become aware? I think we could, because it meets the logical requirements, although this doesn’t mean robotic awareness would be as rich an experience as our own. A lot of the richness of our experience comes from billions of years of tweaks that would take us a long time to replicate faithfully in artificial minds. But it is presumptuous of us to think that our awareness, which is entirely a product of data interpretation, is exclusive just because we are inclined to feel that way.

Let me talk for a moment about that richness of experience. How and why our sensory experiences (called qualia) feel the way they do is what David Chalmers has famously called the hard problem of consciousness. The problem is only hard if you are unwilling to see consciousness as a subroutine in the brain that is programmed to interpret data as feelings. It works exactly the way it does because it is the most effective way that has evolved to get bodies to take all the steps they need to survive. As will be discussed in the next section, qualia are an efficient way to direct data from many external channels simultaneously to the conscious mind. The channels and the attention process deliver the relevant data, but the quality or feeling of a quale results from the subconscious influences it exerts. Taste and smell simplify chemical analyses into a kind of preference for the conscious mind. Color and sound can warn us of danger or calm us down. These qualia seem almost supernatural, but they actually just neatly package up associations in our minds so that we will feel like doing the things that are best for us. Why do we have a first-person experience of them? Here, too, it is nothing special. First-person is just the name we give to this kind of processing. If we look at our own, or someone else’s, conscious process from a third-person perspective, we can see that what sets it apart is just the flood of information from subconscious processes giving us a continuous stream of sensations and skills that we take for granted. First person just means being connected that intimately to such a computing device.
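The claim that taste collapses a chemical analysis into a preference can be illustrated with a toy valence function (the weights and names are invented purely for illustration): the conscious level sees only a single like/dislike score, never the underlying chemistry.

```python
def taste_valence(sugar, toxin_marker):
    """Collapse a 'chemical analysis' into one preference score.

    The conscious level never sees the raw chemistry, only the valence.
    The weights stand in for subconscious associations; bitterness
    (a toxin marker) weighs heavily because avoiding poisons mattered
    more for survival than gaining calories.
    """
    return 1.0 * sugar - 5.0 * toxin_marker

ripe_fruit = taste_valence(sugar=0.8, toxin_marker=0.0)    # pleasant
bitter_berry = taste_valence(sugar=0.3, toxin_marker=0.4)  # aversive
```

The single score is the packaged association the paragraph describes: it makes us feel like doing what is (statistically) best for us without presenting the analysis itself.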

Now think about whether robots can be conscious. Self-driving cars use specialized algorithms that consult millions of hours of driving experience to pick the most appropriate responses. These cars don’t reason out what might happen in different scenarios in a general way. Instead, they use all that experience to look up the right answer, more or less. They still use internal models for pedestrians, other cars, roads, etc., but once they have modeled the basic circumstances they just look up the best behavior rather than reasoning it out generally. As we start to build robots that need more flexibility, we may well design the equivalent of a conscious subprocess, i.e. a higher-level process that reasons with models. If we also give it qualia that color its preferences around its sensory inputs in preprogrammed (“subconscious”) ways, to simplify the task at the conscious level, then we will have built a consciousness similar to our own. But while such a robot may technically meet my definition of consciousness, and may even sometimes convince people that it is human (i.e. pass the Turing test), that alone won’t mean it experiences qualia anywhere near as rich as our own, because we have more qualia, encoding more preferences, in a highly interconnected and seamless way, following billions of years of refinement. Brains and bodies are an impressive accomplishment. But they are ultimately just machines, and it is theoretically possible to build them from scratch, though not with the approaches to building we have today.
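The contrast drawn above, looking up a learned behavior versus reasoning it out with a model, can be sketched in a toy form (the scenario, names, and numbers are invented): the lookup policy maps a recognized situation straight to an action, while the model-based policy simulates each candidate action one step ahead and picks the best projected outcome.

```python
# Lookup: learned experience maps recognized situations directly to behaviors.
LEARNED = {"pedestrian ahead": "brake", "clear road": "cruise"}

def lookup_policy(situation):
    return LEARNED.get(situation, "brake")  # unfamiliar? default to caution

# Model-based: simulate candidate actions and reason about their outcomes.
def simulate(gap, speed, action):
    """Project one step into the future; return the remaining safe gap."""
    new_speed = {"brake": 0, "coast": speed, "accelerate": speed + 1}[action]
    return gap - new_speed

def model_based_policy(gap, speed):
    return max(("brake", "coast", "accelerate"),
               key=lambda a: simulate(gap, speed, a))
```

Both policies can arrive at “brake,” but only the second gets there by projecting a model into the future, which is exactly the flexibility in novel situations that a lookup table lacks.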