Hey, Science, We’re Over Here

Scientists don’t know quite what to make of the mind. They are torn between two extremes, the physical view that the brain is a machine and the mind is a process running in it, and the idealist view that the mind is non-physical or trans-physical, despite operating in a physical brain. The first camp, called eliminative materialism (or just eliminativism, materialism, or reductionism), holds that only the physical exists, as supported by the hard scientific theories and evidence of physics, chemistry, and biology. At the far end of the latter camp are the solipsists, who hold that only one’s own mind exists, and a larger group of idealists, who hold that one or more minds exist but the physical world does not. Most members of the second camp, however, acknowledge physical existence but think something about our mental life can never be reduced to purely physical terms, though they can’t quite put their finger on what it is. Social science and our own intuition both assume that mental existence is something beyond physical existence. A materialist will hold either that (a) there are no mental states, just brain states (eliminativism), or that (b) there are mental states, but they can be viewed as (reduced to) brain states (reductionism). An idealist will hold that mental states can’t be reduced, which means they have some kind of existence beyond the physical, and that the physical world might exist, but if so, its existence can never be conclusively proven.

I propose to resolve all these issues by reintroducing dualism, the idea of a separate kind of existence for mind and matter, but I will do so in a way that is compatible with materialist science. While I agree that there is a physical basis for everything physical, I don’t agree that all physical things are only physical. Some physical things (namely living things and artifacts they create) are also functional, which is a distinct kind of existence. While one can explain these things in physical terms, physical explanations can’t address function, which is a meaningless concept from a physical perspective. I will argue that functional existence transcends physical existence and is hence independent of it, even though our awareness of functional existence and use of it are mediated physically. This is because function is indirect; while it can be applied to physical circumstances, functional existence is essentially the realm of possibility, not actuality. The eliminativist idea that everything that exists is physical, aka physical monism, is only true so long as one is referring only to physical things that are not functional. But physical systems can become functional by managing information, which is patterns in data that can be used to predict what will happen with better than even odds. Once physical things start to manage information, physical monism breaks down because information isn’t physical, though it can be stored physically. Function is an additional kind of existence that characterizes a system’s capabilities rather than its substance. Functional systems can be physical or imaginary, but if they exist physically then their functional reach has physical limitations and their physical reach depends on their functional strength. In nonfunctional physical systems, events unfold in direct accordance with physical laws, but functional physical systems predict and influence events using information created using feedback. 
A capability can be thought of as the “power” to do something, which is distinct from the underlying physical mechanisms that make it possible. Living things principally exist functionally and only secondarily physically because natural selection selects functions, not forms. Consequently, any attempt to understand living things must focus first on function and then on form. Brains are organs responsible for managing dynamical information, and the mind is that functional capacity itself. We are not always precise in how we use the words brain and mind, but I will use brain only to refer to physical aspects of the organ, while mind refers to functional aspects independent of our knowledge of the brain. Understanding the brain necessarily depends on the physical study of it, while understanding the mind, I contend, depends on studying its functions. The two work together, and so the study of each can help inform the other, but at all times function and form are existentially independent. How the brain manages function (i.e. as the mind) is so abstracted from the physical mechanisms that make it possible that we only understand in broad terms how the joint venture of form and function works. It is my job here to establish those broad terms better and to clarify them as much as possible.

Reductionists reject downward causation[1] as a nonsensical emergent power of complex systems. The idea is that some mysterious, nonphysical power working at a “higher” level causes changes at a “lower” level. For example, downward causation might be taken to imply that a snowflake’s overall hexagonal shape causes individual water molecules to attach to certain places to maintain the symmetry of the whole crystal. But this is patently untrue; only the local conditions each molecule encounters affect its placement. Each new water molecule will most likely attach to spots on the crystal most favored by current conditions encountered by that snowflake, which constantly change as the snowflake forms. But those favored spots at any given instant during its formation are hexagonally symmetric, making symmetrical growth most likely. The symmetry only reflects the snowflake’s history, not an organizing force. But just because downward causation doesn’t exist in purely physical systems, that doesn’t mean it doesn’t exist in functional systems like living things. Any system capable of leveraging feedback can make adjustments to reach an objective, hence “causing” it, if it also has an evaluative mechanism to prefer one objective over another. Mechanisms that record such objectives and have preferences about reaching them are called information management systems, and living things use several varieties of these systems to bring about downward causation through feedback. It is a misnomer to call this capacity of life an “emergent” property because it doesn’t appear from nothing; it is just that certain physical systems can manage information and apply feedback. So we can see that the “higher” levels of organization are these information management systems and the “lower” levels are the things under management. Living things use at least two and arguably four or more information management systems at different levels (more on these levels later).
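The feedback loop described above can be sketched in a few lines of code. This is my own illustrative toy, not anything from the text (the setpoint, heater strength, and leak rate are all invented numbers): a recorded objective plus an evaluative comparison steers a lower-level physical variable toward a goal, which is the sense in which a “higher” level causes events at a “lower” level.

```python
# Toy sketch of downward causation via feedback (all numbers invented).
# The "higher" level is a recorded objective plus an evaluative rule;
# the "lower" level is the physical variable under management.

def step(temp, heater_on):
    """Toy physics: the heater adds heat; the room leaks toward 10 degrees."""
    return temp + (1.5 if heater_on else 0.0) - 0.1 * (temp - 10.0)

setpoint = 20.0   # the stored objective (information, not physics)
temp = 10.0       # the managed physical state
for _ in range(200):
    heater_on = temp < setpoint   # evaluative mechanism applied to feedback
    temp = step(temp, heater_on)

print(round(temp, 1))   # oscillates within a degree or so of the setpoint
```

No molecule of air “knows” the setpoint; only the loop that compares feedback against the recorded objective does, and yet the room reliably ends up near 20 degrees. Nothing mystical is added, just information management.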
Reductionists hold that causation is entirely a function of subatomic forces, and that “causes” at the atomic, molecular, or substance level are reducible to subatomic forces. They further conclude that, since life is natural with nothing added (e.g. no mystical “spirit”), the mind is an epiphenomenon or an illusion, and in either case, is certainly not causing anything to happen but is merely observing. Although this strongly contradicts our experience, reductionists will just say that our perception of time and logic is biased, so we should ignore our feelings and just accept that our existence as agents in the world is only a convenient way of describing matters really managed at the subatomic level.

But the reductionists underestimated the potential side-effects of the uniformity of nature. The fact that all subatomic particles of a certain type behave the same way all the time, similarly causing uniform behavior among atoms, molecules, and substances, has implications for the kind of reality that will emerge. Without uniformity, information would be impossible and function could not be achieved, because they work by collecting feedback from past events and applying it to predict future events. With uniformity, feedback loops can discover and exploit patterns that repeat themselves. Given enough time, these loops can create arbitrarily complex systems that are potentially capable of doing anything physically possible in that universe. You might think of them as trial and error machines: they start out trying things at random, but they launch an arms race that produces ever more systematic methods delivering ever more functionality. So, ironically, information doesn’t emerge from complexity; it emerges from uniformity. Uniformity is akin to a superpower or supernatural force that, if present in a universe, enables the extra dimension of functional existence to manifest wherever feedback loops can flourish.

Bob Doyle (the Information Philosopher) explains it like this:

Some biologists (e.g., Ernst Mayr) have argued that biology is not reducible to physics and chemistry, although it is completely consistent with the laws of physics. Even the apparent violation of the second law of thermodynamics has been explained because living beings are open systems exchanging matter, energy, and especially information with their environment. In particular, biological systems have a history that physical systems do not, they store knowledge that allows them to be cognitive systems, and they process information at a very fine (atomic/molecular) level.

Information is neither matter nor energy, but it needs matter for its embodiment and energy for its communication.

A living being is a form through which passes a flow of matter and energy (with low or “negative” entropy, the physical equivalent of information). Genetic information is used to build the information-rich matter into an overall information structure that contains a very large number of hierarchically organized information structures. Emergent higher levels exert downward causation on the contents of the lower levels.[2]

I wrote most of this book before I discovered Bob Doyle’s work, so I did not know that anyone else had proposed the full-fledged existence of function/information independent of physical matter. But I’m glad to see that someone else thinks along the same lines as I do. Doyle’s mission is to expose the larger role of information in philosophy, while my mission is to explain the mind. While doing that, I came to see function as a primary force and information as the medium that makes function possible. They are different: function is directly connected to purpose, while information is only indirectly connected. Function is the capacity to do something, and information may or may not make it possible. Minds are functional entities that employ information.

I am not proposing property dualism, the idea that everything is physical but has both physical properties and informational properties. No physical thing has an informational property. Rather, I agree with eliminative materialists that physical things are just physical so far as physical existence goes. But physical systems called information management systems can arise that exploit feedback to arbitrary degrees by recording that feedback as information, and this information can be viewed from a different perspective as having a distinct kind of existence. It doesn’t exist if your perspective is physical, but it is useful to have more than one perspective about what constitutes existence if your goal is to explain things. Ironically, explanation itself is not physical but is a functional kind of thing, so eliminative materialists can only argue their case by ignoring the very medium they are using to do so. But functional existence is a bit harder to put your finger on because function and information are abstractions which can never be perfectly described. Perfect descriptions are impossible because they are ultimately constructed out of feedback loops, which can reveal likelihoods but not certainties. Tautological information can be considered to be perfect, but only because it is true by definition. Tautological information doesn’t actually become functional until it is applied in a useful way, and “use” implies that at least something about the system is not known by definition. So some closed formal systems, e.g. some math and computer languages, can be perfectly deductive, and within them information and functions are entirely predictable. But in all other systems, including the physical universe, some induction is necessary to predict what will happen, which will only work if those systems exhibit some uniformity.
Function is a generalization about patterns that feedback has revealed, and a generalization is a nonphysical perspective, which is to say a way of explaining relationships between things. While function is strictly nonphysical, information can be thought of as having a physical manifestation through data stored in a physical medium like a brain, computer, or book. That the information is actually functional and not physical can be demonstrated through a simple thought experiment. Any information, e.g. the integer seventeen, can be stored in different ways in a person’s mind or in a computer, and can also be imagined to exist in a hypothetical sense without any physical manifestation. What it means to be “seventeen” is independent of any physical medium used to store it and above and beyond that physical form; it is what I call its function. One can argue that the “true” meaning of seventeen differs depending on how we define numbers and operations on them, but these things are themselves nonphysical functional concepts and not dependent on how or whether we store them in our brains or elsewhere.
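The thought experiment can be made concrete with a short sketch (my own illustration; the particular encodings are arbitrary). The same value, seventeen, is stored in four physically different forms, yet every decoding yields an object with identical functional behavior, which is exactly what survives the change of medium.

```python
# Illustrative only: four different "physical" encodings of seventeen.
# The function of the number is whatever survives all of these media.

def from_roman(s):
    """Decode a Roman numeral (handles subtractive notation like IX)."""
    values = {"X": 10, "V": 5, "I": 1}
    total = 0
    for i, ch in enumerate(s):
        v = values[ch]
        # subtract when a larger symbol follows, otherwise add
        total += -v if i + 1 < len(s) and values[s[i + 1]] > v else v
    return total

decoded = [
    int("17"),           # decimal digits
    int("10001", 2),     # binary digits
    len("|" * 17),       # a unary tally
    from_roman("XVII"),  # Roman numerals
]

# Whatever the medium, the decoded value behaves identically:
assert all(n == 17 and n + 1 == 18 for n in decoded)
print(decoded)   # [17, 17, 17, 17]
```

The design point is that nothing about “seventeen-ness” can be read off the ink, the voltages, or the tally marks themselves; it exists only at the level of what the decoded value can do.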

Though all functional things must take a physical form in a physical universe such as the one we are in, this doesn’t mean function “reduces” to the physical. The quest for reductionism is misguided and has been holding science back for entirely too long. We have to get past it before we can make meaningful progress in subjects where functional existence is paramount. To do that, let’s take a closer look at where and how function arises in a physical universe. The only place function has appeared is in living organisms, which achieved it through evolution, a process that applies feedback from current situations to improve the chances of survival in future situations. The biochemical mechanisms they employ matter more from a functional standpoint than a physical standpoint because they are only selected for what they can do, giving them a reason to exist, and not for how they do it. In the nonliving world, things don’t happen for a reason, they just happen. We can predict subatomic and atomic interactions using physics, and molecular interactions using chemistry. Linus Pauling’s 1931 paper “The Nature of the Chemical Bond” showed that chemistry could in principle be reduced to physics.[3][4] Geology and earth science generalize physics and chemistry to a higher level but reduce fully to them. However, while physical laws work well to predict the behavior of simple physical systems, they are not enough to help us predict complex physical systems, where complexity can arise from chaotic, complex, or functional factors, alone or in combination. Chaos is when small changes in starting conditions have butterfly effects that eventually change the whole system. Complex factors are those whose intricate interactions exceed the predictive range of our available models, which necessarily simplify. We see both chaos and complexity in weather patterns, and yet we have devised models that are pretty helpful at predicting them.
These models are based on physical laws but use heuristics to approximate how systems will behave over time. But the weather and all other nonliving systems don’t control their own behavior; they are reactive and not proactive. Living things introduce functional factors, aka capabilities. Organisms are complex adaptive systems (CAS) that exploit positive feedback to perpetuate changes through DNA. I can’t prove that complex adaptive systems are the only way functionality could arise in a physical universe, but I don’t see how a system could get there without leveraging cycles of positive and negative feedback. Over time, a CAS creates an abstract quantity called information, which is a pattern that has occurred before and so is likely to occur again. The system then exploits the information to alter its destiny. Information can never reveal the future, but it does help identify patterns that are more likely to happen than random chance, and everything better than random chance constitutes useful predictive power.
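That loop can be sketched minimally in code (my own toy example; the 70/30 bias is invented). The system records feedback as counts, and those counts become information in the sense defined above: a pattern that predicts future events with better than even odds.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def biased_event():
    """An environment uniform enough to contain a pattern: 1 about 70% of the time."""
    return 1 if random.random() < 0.7 else 0

counts = {0: 0, 1: 0}   # the stored information: a summary of past feedback
hits = 0
trials = 10_000
for _ in range(trials):
    guess = 1 if counts[1] >= counts[0] else 0   # exploit the recorded pattern
    outcome = biased_event()
    hits += (guess == outcome)
    counts[outcome] += 1                          # feed the outcome back in

print(f"accuracy: {hits / trials:.2f}")   # about 0.70 vs. 0.50 for blind guessing
```

The counts are physically stored, but their value lies entirely in what they make possible: predictions that beat random chance, which is the whole of their “useful predictive power.”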

Functional systems, i.e. information management systems, must be physical in a physical universe. But because the mechanisms that control them are organized around what they can do (their capability) instead of how they do it (their physical mechanism), we must speak of their functional existence in addition to their physical existence. By leveraging feedback, these systems acquire a capacity to refer to something else, to be about something other than themselves that is not directly connected to them. This indirection is the point of detachment where functional existence arises and (in a sense) leaves physical existence behind. At this point, the specific link is broken and a general “link” is established. Such an indirect link refers only to the fact that the information can be applied appropriately, not that any link is stored in any way. Just how a functional system can use information about something else to influence it can be implemented in many ways physically, but understanding those ways is not relevant to understanding the information or the function it makes possible. At the point the information detaches it gains existential independence; it is about something without it particularly mattering how it accomplishes it. It has a physical basis, but that won’t help us explain its functional capabilities (though it does place some constraints on those functions). While every brain and computer has different processing powers (memory, speed, I/O channels, etc.), in principle they can manage (approximately) the same information in completely different ways because the measure of information is the function it makes possible, not how it is implemented. That said, in practice, physical strengths and limitations make each kind of brain and computer stronger or weaker at different tasks and so must be taken into consideration.

I call the new brand of dualism I am proposing form and function dualism. This stance says that while everything physical is strictly physical, everything functional is strictly functional. Physical configurations can act in functional ways via information management systems, and these systems can only be understood from a functional perspective (because understanding is itself functional). Consequently, both physical and functional things can be said to exist, not as different aspects of each other but as completely independent kinds of existence. Functional existence can be discussed on a theoretical basis, independent of any physical information management system that implements it. So, in this sense, mathematics exists functionally whether we know it or not. More abstractly, even functional entities entirely dependent on physical implementations for their physical existence, like the functional aspect of people, could potentially be replicated to a sufficient level of functional detail on another physical platform, e.g. a computer, or spoken of on a theoretical basis. In fact, when we speak of other people, we are implicitly referring to their functional existence and not their physical bodies (or, at least, their bodies are secondary). So, entirely aside from religious connotations, we generally recognize this immaterial aspect of existence in humans and other organisms as the “soul.”

Information management systems that do physically exist include:

  • biological organisms, which store information in genes using DNA,
  • minds, which store information in brains neurochemically,
  • civilizations, which store information in institutions (rule-based practices) and artifacts (e.g. books), and
  • software, which stores information in computer memory devices.

Organisms, minds, civilizations, and software can be said to have functions, and it is meaningful and necessary to discuss such functions independently of the underlying physical systems they run on. Also note that minds heavily leverage the information managed by organisms, so one can’t deeply understand them without understanding organism function as well. Civilizations and software manage information using appropriately customized systems, but they are direct creations of minds, and their ultimate purposes are only to serve the purposes of minds. If we do learn to build artificial minds or, more abstractly, information management systems with their own purposes and reasons for attaining them independent of our own, then they could be considered an independent class beyond organisms and minds.

Joscha Bach says, “We are not organisms, we are the side-effects of organisms that need to do information processing to regulate interaction with the environment.”[5] This statement presumes we define organisms as strictly physical and “we” as strictly functional, which are the senses in which we usually use these words. But saying that we are the side-effects is a bit of a joke because it is really the other way around: the information processing (us) is primary and physical organisms are the side-effects. Bach points out that mind starts with pleasure and pain. This is the first inkling of the mind subprocess, a process within the brain, separate from all lower-level information processing, whose objective is to make top-level decisions. By summarizing low-level information into an abstract form, the behavior of the brain can be controlled more abstractly, specifically: “Pleasure tells you, do more of what you are currently doing; pain tells you, do less of what you are currently doing.” All pleasure and pain are connected to needs: social, physiological, and cognitive. In higher brains like ours, consciousness is like an orchestra conductor who coordinates a variety of activities by attending to them, prioritizing them, and then integrating them into a coherent overall strategy (his “conductor theory of consciousness”). Bach identifies the dorsolateral prefrontal cortex as the brain region most likely acting as the conductor of consciousness, coordinating the features of consciousness to make them goal oriented. This part of the brain can help us run simulations of possibilities through our memories with an eye to aligning them with desired objectives. Our memories are encoded in the same parts of the brain that originally experienced them, e.g.
the visual cortex, motor cortex, or language centers, and recalling them reactivates the neural areas that generated that initial encoding.[6] Bach theorizes that the conductor is off when people sleepwalk. They can still go to the fridge or in some cases even make dinner or answer simple questions, but there is “nobody home”. Similarly, whenever we engage in any habitual behavior, our conscious conductor has ceded most of its control to other brain areas that specialize in that behavior. While the conductor can step in and micromanage, it usually won’t, and if it does step in where it has not been for a long time, it will often make things worse because it doesn’t remember how shoes are actually tied or how sentences are actually formed. Bach’s theory proposes that “You are not your brain, you are a story that your brain tells itself,” which is correct except for its humorous use of the word “itself” — the brain doesn’t have a self; you do. More accurately the sentiment should go, “You are your mind, but your mind is not your brain; the mind is a story the brain produces that you star in.”

I’m not going to go too deeply into the mechanisms of the brain, both because we only know superficial things about them and because my thesis is that function drives form, but I would like to talk for a moment about the default mode network (DMN). This set of interacting brain regions has been found to be highly active when people are not focused on the world around them, either because they are resting or because they are daydreaming, reflecting, or planning, but in any case not engaged in a task. It is analogous to a running car engine with the clutch disengaged, so power isn’t going to the wheels. Probably more than any other animal, people maintain a large body of information associated with their sense of self, their sense of others (theory of mind), and planning, and so need to be able to comfortably balance using their mind for these activities versus using it for active tasks. We like to think we naturally maintain a balance between the two that is healthy for us, but our culture has come to prioritize task-oriented thinking over reflection. Excessive focus on tasks is stressful, and more engagement of the default mode network is the solution. This can be achieved through rest and relaxation, meditation and mindfulness exercises, and, perhaps most effectively of all, via psychotropic drugs like psilocybin and LSD. Even a single experience with psychedelic drugs can durably improve the balance, potentially relieving depression or improving one’s outlook, though more research needs to be done to establish good guidelines (and they also need to be decriminalized!).[7][8][9]

Socrates and Plato recognized that function stands qualitatively apart from the material world and explained it using teleology. Teleology is the idea that in addition to a form or material cause, things also have a function or purpose, their final cause. They understood that while material causes were always present and hence necessary, they were not sufficient or final to explain why many things were the way they were. Purposes humans imposed on objects like forks were called extrinsic, while purposes inherent to objects, like an acorn’s purpose to become a tree, were called intrinsic. Aristotle listed four causes to explain change in the world, not just two: material, formal, efficient, and final. The formal cause attaches meaning to the shape something will have, essentially a generalization or classification of it from an informational perspective. While this was an intrinsic property to the Greeks, nowadays we recognize that classification is extrinsically assigned for our convenience. The efficient cause is what we usually mean by cause today, i.e. cause and effect. Physicalism sees the material, formal, and efficient causes as the physical substance, how we classify it, and what changes it. However, physicalism rejects the final, teleological cause because it sees no mechanism. After all, objects don’t sink to lower points because it is their final cause, but simply because of gravity. While I agree that this is true of ordinary physical things, I hold that teleology is both intuitively true and actually true for functional systems, and that the mechanism that makes it possible is information management. Physicalists consider the matter closed — teleology has been disproven because there is no “force” pulling things toward their purpose. But if one sees function as real, then one can see that it exists to pull things toward purposes.
Teleology is so far out of favor that Wikipedia is hesitant to admit that the life sciences might require teleological explanation: “Some disciplines, in particular within evolutionary biology, continue to use language that appears teleological when they describe natural tendencies towards certain end conditions.[citation needed] While some argue that these arguments can be rephrased in non-teleological forms, others[who?] hold that teleological language cannot be expunged from descriptions in the life sciences.” It goes on, “Some biology courses have incorporated exercises requiring students to rephrase such sentences so that they do not read teleologically.” I would liken this to efforts to convert homosexuals to heterosexuality. Although evolution is, as Richard Dawkins says, a blind watchmaker, the information management systems of life nevertheless have purpose. The purpose of lungs is to provide oxygen, of the heart to circulate blood, and of limbs to provide mobility. They can also have additional purposes. The body is not aware of these purposes, and our attempts to summarize them through theory and explanation can only be approximately right, but their approximate truth is undeniable. Lungs, hearts, and limbs that do not fulfill these purposes will seriously jeopardize survival. Although there is no actual watchmaker and the function of the lungs results from many chance feedback events, animals need oxygen and lungs help provide it. This is their design purpose whether the design was intentional or not.

Although Aristotle had put science on a firm footing by recognizing the need for teleological causes, he failed to recognize the source of purpose in natural systems. I contend that information management systems are the source; they accomplish purposes and functions whenever they apply information to guide their actions. The Scientific Revolution picked up four hundred years ago where Aristotle left off, but information as such would not be discovered until 1948[10], which then led to systems theory[11], also called cybernetics, in the following decades. Complex adaptive systems are complex systems that evolve, and living organisms are complex adaptive systems with autopoiesis, the ability to maintain and replicate themselves. Brains are dynamic information management systems that create and manage information in real time. Minds are subprocesses running in brains that create a first-person perspective to facilitate top-level decisions. Civilizations and software are human-designed information management systems that depend on people or computers to run them.

Gilbert Ryle famously put the nail in the coffin of Cartesian dualism in The Concept of Mind[12] in 1949. We know (and knew then) that the proposed mental “thinking substance” of Descartes that interacted with the brain in the pineal gland does not exist as a physical substance, but Ryle felt it still had tacit if not explicit “official” support. He felt we officially or implicitly accepted that we live our lives in two independent arenas, one of “inner” mental happenings and one of “outer” physical happenings. This view goes all the way down to the structure of language, which has a distinct vocabulary for mental things (using abstract nouns which denote ideas or qualities) and physical things (using concrete nouns which connect to the physical world through senses). As Ryle put it, we have “assumed that there are two different kinds of existence or status. What exists or happens may have the status of physical existence, or it may have the status of mental existence.” He disagreed with this view, contending that the mind is not a “ghost in the machine,” something independent from the brain that happens to interact with it. To explain why, he introduced the term “category mistake” to describe a situation where one inadvertently assumes something to be a member of a category when it is actually of a different sort of category. His examples focused on parts not being the same sort of thing as wholes, e.g. someone expecting to see a forest but being shown some trees might ask, “But where is the forest?”
In this sort of example, he identified the mistake as arising from a failure to understand that forest has a different scope than tree.[13] He then contended that the way we isolate our mental existence from our physical existence was just a much larger category mistake, which happens because we speak and think of the physical and the mental with two non-intersecting vocabularies and conceptual frameworks, yet we assume it makes sense to compare them with each other. As he put it, “The belief that there is a polar opposition between Mind and Matter is the belief that they are terms of the same logical type.” Ryle advocated the eliminativist stance: if we understood neurochemistry well enough, we could describe the mechanical processes by which the mind operates instead of saying things like think and feel.

But Ryle was more mistaken than Descartes. His mistake was in thinking that the whole problem was a category mistake, when actually only a superficial aspect of it was. Yes, it is true, the mechanics of what happens mentally can be explained in physical terms because the brain is a physical mechanism like a clock. But that is not the whole problem, and it is not the part that interested Descartes or that interests us, because saying how the clock works is not really the interesting part. The interesting part is the purpose of the clock: to tell time. The function of the brain cannot be explained physically because purpose has no physical correlate. The brain and the mind have a purpose — to control the body — but that function cannot be deduced from a physical examination. One can tell that nerves from the brain animate hands, but one must invoke the concept of purpose to see why. Ryle saw the superficial category mistake (forgetting that the brain is a machine) but missed the significant categorical difference (that function is not form). Function can never be reduced to form, even though it can only occur in a physical system by leveraging form. When we talk about the mind, we now know and appreciate that it is the product of processes running in the brain, but talking about the mind is not the same as talking about those processes any more than talking about cogs is the same as caring what time it is. The subject matter of the brain and mind is functional and never the same as the physical means used to think about them. Ryle concluded, “It is perfectly proper to say, in one logical tone of voice, that there exist minds and to say, in another logical tone of voice, that there exist bodies. But these expressions do not indicate two different types of existence, for ‘existence’ is not a generic word like ‘colored’ or ‘sexed.'” But he was wrong. In this case they really do indicate two different types of existence. 
Yes, the mind has a physical manifestation as a subprocess of the brain, so it is physical in that sense. But our primary sense of the word mind refers to what it does, which is entirely functional. This is the kind of dualism Descartes was grasping for. While Descartes overstepped by providing an incorrect physical explanation, we can be more careful. The true explanation is that functional things can have physical implementations, and they must for function to impact the physical world, but function and information are not physical and their existence is not dependent on space or time. Function and information abstractly characterize relationships and possibilities, and any concrete implementations of them are merely exemplary.

The path of scientific progress has influenced our perspective. The scientific method, which uses observation, measurement, and experiment to validate hypotheses about the natural world, split the empirical sciences from the formal sciences like mathematics, which deal in immaterial abstractions. The empirical sciences then divided into natural sciences and social sciences because progress in the latter was only possible by making some irreducible assumptions about human nature, chiefly that we have minds and know how to use them. These assumptions implicitly acknowledge the existence of function in the life of the mind without having to spell it out as such. Darwin’s discovery of evolution then split the natural sciences into physical and biological sciences. Until that point, scientists considered living organisms to be complex machines operating under physical laws, but now they could only be explained through the general-purpose advantages of inherited traits. This shift from the specific to the general is the foundation of information and what distinguishes it from the physical. So both the biological and social sciences tacitly build on a foundation of the teleological existence of function, but they are reluctant to admit it because material science has rejected teleology as mystical. But a physical science that ignores the existence of natural information management systems can’t explain all of nature.

The social sciences presume the existence of states of mind which we understand subjectively but which objectively arise from neural activity. The idea that mental states are not entirely reducible to brain activity is called emergentism. An emergent behavior is one for which the whole is somehow more than the parts. Emergence is real, but what is actually “emerging” in information management systems is functionality. From a physical perspective the system is not doing some “new” kind of thing it could not do before; it is still essentially a set of cogs and wheels spinning. All that has changed is that feedback is being collected to let the system affect itself, a capacity I call functionality. The behavior that results builds on a vastly higher level of complexity which can only be understood or explained through paradigms like information and functionality. While there are an infinite number of ways one could characterize or describe information and functionality, all these ways have in common that they are detecting patterns to predict more patterns. Because one must look to information and function to explain these systems and not only to physical causes, it is as if something new emerged in organisms and the brain. Viewed abstractly, one could say that the simplistic causal chains of physical laws are broken and replaced by functional chains in functional systems. This is because in a system driven by feedback, cause itself is more of a two-way street in which many interactions between before-events and after-events yield functional relationships which the underlying physical system leverages to achieve functional ends. The physical system is somewhat irrelevant to the capabilities of a functional system, which is in many ways independent of it. Ironically, the functional system could thus equally claim the physical system emerges from it, which is the claim of idealism. 
All of language, including this discussion, and indeed everything the mind does, are functional constructs realized with the assistance of physical mechanisms, but they do not “emerge” from those mechanisms so much as from information and information management processes. A job does not emerge from a tool, but, through feedback, a tool can come to be designed to perform the job better. Thus, from an ideal perspective, function begets form.

Before life came along, the world was entirely physical; particles hit particles following natural physical and chemical rules. While we don’t know how the feedback loops necessary for life were first bootstrapped, we know conditions must have existed that allowed feedback and competition to take hold. I will discuss a scenario for this later, but the upshot is that DNA became an information store for all the chemicals of life, and it became embedded in single-celled and later multicellular organisms that could replicate themselves. According to form and function dualism, the information in DNA is a nonphysical aspect that confers capabilities to the organism. We characterize those capabilities in biology as genetic traits that confer adaptive advantages or disadvantages relative to alternative versions (alleles) of the same genes. Chemically, a gene either codes for a protein or regulates other genes. It doesn’t matter to the feedback loops of natural selection what a given gene does chemically, just whether the organism survives. Survival or death is the way the function of a gene is measured. Chemistry is necessary for function, but survival is indifferent to chemistry. In many cases, the chemical basis of a genetic advantage seems clear, while in others the “purpose” of the gene can be hard to identify. It seems likely that each protein would fulfill one biological function (e.g. catalyzing a given chemical reaction), because heritability derives from selection events on one function at a time; maintaining multiple functions would be challenging for natural selection because mutations would be unlikely to benefit two functions at once. However, cases of protein moonlighting, in which the same protein performs unrelated functions, are now well-documented. In the best-known case, different sequences in the DNA for crystallins code either for enzyme function or transparency (as the protein is used to make lenses). 
A majority of proteins may moonlight, but, in any case, it is very hard to unravel all the effects of even a primary protein function. So any causal model of gene function will necessarily gloss over subtle benefits and costs. A gene’s real purpose is a consolidated sum of the role it played in facilitating life and averting death in every moment since it first appeared. The gene’s functionality is real but has a deep complexity that can only be partially understood. Even so, approximating that function through generalized traits works pretty well in most cases. Although evolution is not directed, competition preferentially selects effective mechanisms, which is such a strong pressure that it tends to make genes very good at what they do. Mutations create opportunities for new functionality but can also disable genes and their traits when niches change. To recap, genes collect information using DNA from feedback that predicts that a competitive advantage for a forebear will yield a competitive advantage for a descendant. It is a slow way to collect information, but evolution has had plenty of time, and it has proven effective.
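This feedback loop can be sketched in a few lines of code. Everything here is invented for illustration (the eight-bit genome, the hidden “niche” TARGET, the population size, the mutation rate): the point is that selection on survival alone, with no model of what any bit “does” chemically, accumulates information in the genome.

```python
import random

random.seed(0)

# A hypothetical environmental "niche": genomes that match it survive better.
# Natural selection never sees this target directly, only survival outcomes.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(genome):
    """Survival feedback: how well the genome happens to fit its niche."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Copying errors create the variation that selection acts on."""
    return [1 - g if random.random() < rate else g for g in genome]

# Start from random genomes; each generation the fitter half survives
# and replicates with mutation. No genome "knows" what its bits mean.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(g) for g in survivors]

best = max(population, key=fitness)
```

After enough generations the best genome closely matches the niche, even though the only "information channel" was differential survival, which is the sense in which the feedback loop creates information.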

Beyond the information managed by the genes that form the blueprint of the body, organisms need to manage some information in real time, and instinct is a tool they all possess to do this. The mechanisms that regulate the flow of metabolites in plants via xylem and phloem are not called instincts because this feedback is applied directly without information processing (which only happens in animals via brains). Instinct covers all behavior based on information processing that doesn’t leverage experience or reasoning. Without experience or reasoning, an instinct will work the same “hardwired” way from birth to death. Instincts present themselves consciously as urges, covering all our hardwired inclinations for things like eating, mating, emotions, and drives. Instinct also includes sensory processing, such as smell, touch, body sense and most notably vision, which creates high-fidelity 2D images and transforms them into 3D representations that are then recognized, either as specific objects or as types of objects.

Instincts take ages to evolve and solve only the most frequently-encountered problems, but creatures face challenges every day where novel solutions would help. A real-time system that could tailor the solution to the problem at hand would provide a tremendous competitive advantage. Two approaches evolved to do this, and each makes use of experience and reasoning. I call the first subconceptual thinking, which, as the name implies, is defined in terms of being below the other approach, conceptual thinking, which is also called logical reasoning. Logical reasoning starts with premises, which are statements (predicates about subjects and objects) taken to be true, and draws consequences from them. Subjects, objects, and the premises and predicates about them are all concepts, meaning discrete, expressed ideas. The ideas in a conceptual bucket can be defined with significant clarity, e.g. as dictionaries define words (which are examples of concepts). Subconceptual thinking is more commonly known as common sense or intuition. The memory supporting it is called experience, but I prefer the more specific term subconcepts. While concepts are well-defined, subconcepts are just impressions, but impressions based on lots of experience. Subconcepts form naturally from experience with no conscious effort to give us a gist of the way things are. Our brains automatically match new events against prior experience to find patterns, and this matching creates subconceptual knowledge. Any two or more interpretations of events that share features will be grouped as a subconcept. We can never name subconcepts because that would make them concepts, which are more exact but are weaker in breadth, detail, and context, because all our subconcepts connect to each other through the web of experience. The lowest level of subconcepts develop from sensory feedback and can even have an instinctive component. 
For example, we instinctively avoid pain, but we build a subconceptual database that generalizes from situations that were painful in the past to the kind of situations that should be avoided. All of our memory of sensations and emotions is subconceptual, which means that how we think about senses and feelings is a combination of current sensations and subconceptual impressions. These impressions or hunches have no basis in cause and effect or logic, though we may have impressions about what their causes might be. By comparison, concepts are distinct and identifiable based on a specific set of traits. Where a subconcept only creates impressions, really composed of innumerable associated subconcepts (since no one subconcept can be distinguished as such), a concept specifically comes to mind from the appropriate triggering stimuli. But concepts have an inherent fuzziness about them that is quite different from subconcepts. A concept is a bucket that holds a generalization about a class of things or other concepts. Each bucket is distinct in our minds even though its definition is based on an approximate and not an exact match. A generalization can never be so precise that every match fits perfectly because its power derives from its statistical nature — it is about the kind of thing that will match, not the matches themselves. In other words, it is about the functionality of matching and not the form of the match. But carving out these conceptual buckets of distinct kinds of matches opens the door to performing logical thinking with ideas instead of just forming impressions about them. And logical thinking leads to chains of entailment — causes and effects — which have much more potential to make strong predictions when done well. Instincts urge us toward fixed reactions and subconcepts promote simple reactions based on experience, but conceptual thinking can produce multi-step solutions beyond the reach of either.

Although many concepts are built on other concepts, ultimately all concepts are built on underlying instinctive and subconceptual knowledge. To be “built” on them ultimately means that their functional value to us depends on their connections to instinct and subconcepts, but concepts also have a sense in which they stand completely apart from this foundation. If one takes concepts as axiomatic, one can view all the logic that follows from rules operating on them as a self-contained system of knowledge. So conceptual thinking can be viewed as an information management process that combines information gathered instinctively and subconceptually from the bottom up with information created conceptually from the top down so that they meet in the middle. This process works smoothly in real time because internal brain processes are constantly at work at all levels trying to align top to bottom using best-fit algorithms. The requirement from the top down is ultimately to make discrete decisions, and conceptual approaches are well-suited to this, but we can also make top-down decisions using experience and instinct alone, or instinct alone, and more primitive animals, which lack concepts or both concepts and subconcepts, make all their decisions this way. All three of these approaches have in common that they leverage information, making predictions based on best fits to prior patterns, which is the essence of what information is. It is quite remarkable that we can seamlessly integrate these kinds of information and join top to bottom, because these are more complex approaches to information management than we have even contemplated programming into a computer, but they have always been central design criteria for brains and so naturally evolved that way from the beginning.

Most of the information our brains gather about the world is subconceptual and can be trusted to guide us where we have little or no underlying conceptual understanding. While all our experiences create impressions that are used to form both subconcepts and concepts, concepts create an explicit link between a concept’s bucket or name and its definition (even though that definition itself has fuzziness as described). We know the approximate boundaries of the definition of a concept independent of any attempt to describe that definition. It is implied by supporting examples and connections to underlying traits. Any one example will carry more detail than the encompassing concept, but the traits the examples have in common help define it, even allowing for some traits that are optional to it. An object instance is a specialized kind of concept that associates to a specific thing or idea rather than a general type of thing as generic concepts do. While this makes instances functionally distinct from concepts at large, our mental machinery for managing instances can be thought of as treating them as special kinds of concepts. Like a concept, an instance is a bucket with defining characteristics, but, in the case of an instance, existence (physical or functional) is one of the characteristics. We can name a concept by attaching a word or phrase to it, and we can name an instance by using a proper noun. Many words are attached to multiple concepts, each of which will have its own entry in the dictionary. Words provide the most versatile anchors for referencing concepts, but we manage many concepts contextually without having specific or entirely accurate words to name them, for example for familiar parts of larger objects. Thinking about concepts, instances, and subconcepts to find new patterns and reach conclusions is called reasoning. Reasoning is the conscious capacity to “make sense” of things by producing useful information linking them together.

The most primitive subconcepts, percepts, are drawn from the senses using internal processes to create a large pool of information akin to big data in computers. Subconcepts and big data are data that is collected without explicit consideration of the data’s purpose. It is the sort of data that has been helpful in the past, so it is likely to be useful again. Over time we develop algorithms that mine subconcepts or big data to find useful patterns that lead to helpful actions, still without having a clear idea about what the data “means.” We don’t have to understand common sense, intuition or music to be talented at them. Concepts, on the other hand, are akin to structured data in computers. A concept is an idealization of a pattern found in subconcepts into a generalized element with specific associated properties. While the patterns are primarily subconceptual, a network of relationships to other concepts also forms. A concept is a pure abstraction (i.e. having no correlate in the physical world) that is defined by its subconceptual properties and its relationships to other concepts. The patterns are frequently chosen so that the concept can be reliably correlated to a generalized class of entities in the physical world, but this connection is indirect and does not make the concept itself physical. Basic reasoning uses subconceptual pattern analysis, including recognition, intuition, induction (weight of evidence) and abduction (finding the simplest explanation). But deduction (aka entailment or cause and effect) cannot be done subconceptually, because by construction entailment requires discrete premises and not diffuse, unstructured data. Basic reasoning can also leverage concepts, but deduction specifically requires them. Logical reasoning principally means deduction, though it arguably also includes logical treatments of induction and abduction, but I will use the term logical reasoning to refer specifically to our conscious conceptual thinking capacity.
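The contrast between the two modes can be sketched in code. All the data here is invented for illustration: intuit forms an impression by weighing similar past experiences (induction by weight of evidence over unstructured data), while deduce chains discrete premises through rules (entailment), something the unstructured store cannot support.

```python
from collections import Counter

# "Subconceptual" mode: unstructured experience, each item just a bag of
# features with an outcome attached, queried only by similarity.
experiences = [
    ({"red", "round", "sweet"}, "edible"),
    ({"red", "round", "waxy"}, "edible"),
    ({"gray", "furry", "moving"}, "animal"),
    ({"brown", "furry", "moving"}, "animal"),
]

def intuit(features):
    """Induction by weight of evidence: vote among the closest experiences."""
    scored = sorted(experiences, key=lambda e: len(e[0] & features), reverse=True)
    votes = Counter(outcome for _, outcome in scored[:2])
    return votes.most_common(1)[0][0]

# "Conceptual" mode: discrete premises and rules support entailment.
rules = {("apple",): "fruit", ("fruit",): "edible"}

def deduce(facts):
    """Forward-chain rules to their conclusions: apple -> fruit -> edible."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules.items():
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

intuit can only report what similar past cases suggest, while deduce produces a multi-step chain of consequences from discrete premises, which is the capability the text attributes to conceptual thinking alone.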

Science differentiates itself from other schools of thought by only embracing conceptual explanations. This is not to mitigate the value of instinctive and subconceptual knowledge, but to emphasize the role of science in providing logically-supported underlying explanations, which knowledge based on instinct and subconcept can’t do. While all of us will at times accept and act on instinctive and subconceptual knowledge, we set a higher bar for science because we have learned from experience that conceptual approaches, and especially those embraced by science, are both more reliable and more versatile predictors of future events. So I will be making a very carefully reasoned argument rather than using my feelings and experiences to appeal to yours, but first I need to speak more about the powers and limitations of instincts and subconcepts. We have a conscious bias toward conceptual thinking because we are aware of all or most of our logical reasoning but are unaware of most of our instinctive or subconceptual thinking. Thinking, or, more accurately, neural processing, that happens outside our conscious awareness is “nonconscious” thought performed by the nonconscious mind14. Up to 90 to 95 percent of mental processing is nonconscious, though it is arguably not meaningful to divide processing in this way given that the conscious mind is more a supervisor than a worker. Before going further, let me contrast nonconscious with the more familiar term subconscious:

nonconscious: mental activity that is not conscious and cannot be brought into conscious awareness because it is outside the realm of conscious experience

subconscious: mental activity just below the level of consciousness that influences conscious thoughts and that can potentially be brought into conscious awareness because it is inside the realm of conscious experience

Freud coined the term subconscious in 1893 but abandoned it because he felt it could be misunderstood to mean an alternate “subterranean” consciousness. Though Freud gave up on it, it is used to refer to the well-known factors that influence our conscious thoughts and feelings that are at the “tip of the tongue”, as it were. Freud’s own model was based on the idea of a conscious mind and an unconscious mind. Freud’s unconscious mind was the union of repressed conscious thoughts that are no longer accessible (at least not without psychoanalytic assistance) and the nonconscious mind. He saw the preconscious, which is quite similar to what we now call the subconscious, as the mediator between them:

Freud described the unconscious section as a big room that was extremely full with thoughts milling around while the conscious was more like a reception area (small room) with fewer thoughts. The preconscious functioned as the guard between the two spaces and would let only some thoughts pass into the conscious area of thoughts. The ideas that are stuck in the unconscious are called “repressed” and are therefore unable to be “seen” by any conscious level. The preconscious allows this transition from repression to conscious thought to happen.15

Going forward I will only use the terms nonconscious and conscious, though I will point out that, although the two are strictly separate, the conscious mind receives lots of help from the nonconscious. Arguably, the main job of the nonconscious mind is to prepare information for conscious consumption. But consciousness can only process information in a specific way, namely from the perspective of awareness. Only a limited amount of information can be held in awareness at any given time, including some from each sensory channel and some under the focus of attention. The value of restricting the information in this way is that all the items under conscious awareness can be weighed, considered, and prioritized to help facilitate decision-making. It is a top-down approach for gathering all relevant information in one place to review it before acting on it. These restrictions suggest that information must be “presented” to consciousness in an appropriate form that lets it perform just this high-level review. Conscious thinking performs operations on these items under focus in just one stream or sequence at a time. Conscious streams of thought eventually result in a single stream of conscious decisions. Note that we can all time-share different streams of thought at once by holding them in memory to produce an interleaved sequence of decisions supporting different streams, but we will only be thinking of one of them at a time. Some people with dissociative identity disorder (DID, formerly called multiple personality disorder) can run multiple consciousnesses or alters at the same time, but, having only one body, they must find a way to share the body to execute decisions. This suggests that while our capacity for consciousness is innate, a single, unified consciousness happens most of the time mostly because it works better for most people. 
In fact, DID seems to nearly always result from extreme childhood trauma and starts with the main personality “going away” to protect itself, allowing alters to arise to fill the gap.

While awareness is a restricted lens that views information from custom channels through one stream of consciousness at a time, nonconscious processes can process information in parallel without such limitations. All this parallel processing, probably over 99% of the processing in the brain, ultimately creates the “theater of consciousness”, a simplified view of the world that unifies sensory inputs, memory, and thought into a seamless conscious experience. For example, to create sight the photoreceptor cells of the retina build a pixelated image to a given resolution, brightness, and color which is further processed using edge detection into similarly-colored regions and then grouped into objects which are then recognized by comparisons with memory. Consciously, we receive input from each of these levels, including the image, the objects, and what they are, but they are smoothly joined up for us. Emotion is a nonconscious process that accesses information created consciously and delivers emotional sensations back to consciousness. Subconceptual processing nonconsciously surveys all of memory simultaneously to find popular matches between current conditions and past experience and delivers impressional sensations back to consciousness. These impressions can carry instinctive overtones which affect how inclined we are to attend to them, and memory overtones which we can probe further to recall additional subconcepts or concepts. Finally, conceptual thinking itself depends on considerable nonconscious assistance which we take for granted. All our mental abilities need support processing which our brains do for us outside our awareness. This includes memory, mental modeling, math, spatial thinking, episodic thinking, language, and theory of mind (the ability to attribute mental states — beliefs, intents, desires, pretending, knowledge, etc. — to oneself and others), to name the most apparent of our talents.
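As a loose analogy only (not a model of the visual cortex), the staged processing just described for sight can be sketched with a one-dimensional “retina.” The image, the edge threshold, and the remembered object type are all invented for illustration:

```python
# Toy 1-D "retina": brightness values along a line of photoreceptors.
image = [0, 0, 0, 9, 9, 9, 9, 0, 0]

def detect_edges(pixels):
    """Edge detection: positions where brightness changes sharply."""
    return [i for i in range(1, len(pixels)) if abs(pixels[i] - pixels[i - 1]) > 3]

def group_regions(pixels, edges):
    """Group similarly-bright runs between edges into candidate regions."""
    bounds = [0] + edges + [len(pixels)]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

def recognize(pixels, region, memory):
    """Recognition: compare a region against remembered object types."""
    start, end = region
    width, is_bright = end - start, pixels[start] > 3
    return memory.get((width, is_bright), "unknown")

memory = {(4, True): "bright bar"}   # hypothetical remembered object type
edges = detect_edges(image)
regions = group_regions(image, edges)
labels = [recognize(image, r, memory) for r in regions]
```

Each stage consumes the previous stage's output, yet (as the text notes for real vision) the "conscious" consumer could be handed results from every level at once: the raw image, the regions, and the labels.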

Most of memory processing is nonconscious. Consciously, we know that memories come to us when they are appropriate, either from recognition or by lookup. We have a pretty good sense of what we know, which is like an index, so we expect that when we “look up” something that we believe we know, we will recall the full memory. Sometimes our memory fails us and we can’t recall something we used to know. But the storage and recall of memory are nonconscious processes for which consciousness is just an interface. We have no conscious understanding of how we do it. We all have enough memory to form an episodic record of every event in our lives, but this doesn’t mean we commit every event to long-term memory or that we could recall it if we did. Memories that the brain deems superfluous will tend to become less accessible over time, and the most frequently-accessed memories become the strongest. When we forget, it is typically more because we can’t access the memory than because we have lost it altogether, and with further effort we may recall it. Whether we can access an event depends on how unique it is, because as we store episodic memories we nonconsciously link them up to related memories both subconceptually and conceptually. Memories that overlap a lot with others in their links can seem to “bleed” into each other and become indistinguishable, except for truly unique moments. When we visit an unusual place just once, we are more likely to pay attention to the details and be able to recall it specifically long into the future.

Language processing is also mostly nonconscious. Although a language itself is an artificial construction that people develop over time, we know that our ability to use language depends on support from specialized brain areas for particular language functions. Damage to these areas can disrupt language ability without affecting other mental abilities. While some have speculated that these areas shape language through a genetically-based universal grammar (UG), I think it is more likely that this machinery is quite plastic and is mostly a kind of specialized memory. Linguistic memory is very good at linking concepts to representations which can be vocalized, signed, or written, and can learn and internalize grammar rules in a way that is analogous to motor memory. That we see universality in language elements is more a sign of the effectiveness of those elements than of our being “hardwired” to speak only certain kinds of grammars. This is a very controversial area, and I am not going to defend this position in detail, but I will guess that the commonalities in language stem more from commonalities in thinking processes than from a genetic universal grammar. All people think in terms of actors, actions, and objects acted upon, and all languages facilitate communicating these conceptual thoughts using vocabulary chained together into propositions. People also have emotions and impressions, and languages help us communicate these thoughts through non-propositional content, which can employ vocabulary, intonation, or nonvocal means. All languages can add words and have the potential to express any concept, but all have native constructions that facilitate or hinder certain expressions. For example, some languages always reveal number, sex, time, location, etc., while others can be vague on these matters. Linguistic relativism, aka the Sapir-Whorf Hypothesis, suggests that the structure of a language affects and perhaps constrains its speakers’ worldview. 
Some effects are probably undeniable, e.g. the use of masculine pronouns for mixed or indeterminate gender in nearly all Indo-European languages creates a discriminatory bias in the minds of their speakers. I think the idea that language structure can limit what people can think goes too far since thought happens at a lower level than language, but it does establish paradigms that can influence thought. For example, Germans may experience more schadenfreude, the pleasure derived by someone from another person’s misfortune, than English speakers because they have a word for the concept.
It is a fair analogy to say that consciousness is produced for us, like a movie, by the nonconscious mind. Consciousness makes us feel like we are the interactive stars in a continuous movie. The difference is that the observer of this internal movie is not another person, but just the consciousness of a person, and their nonconscious mind is the producer. Also, this partnership creates a much richer experience than any movie. I call our live conscious experience that is based on our sensory inputs the mind’s real-world feed, and from it, we create an overall model of the current state of the physical world that I call the mind’s real world. We can also simulate “worlds” in our minds by “constructing” sensory information from memory. I call these simulated-world feeds and simulated worlds. A daydream or a story creates a detailed simulated world through a feed into it. Projected real worlds, a large subclass of simulated worlds, attempt to realistically project what might happen in the mind’s real world. The ability to anticipate what will happen is instrumental to controlling the body, which is the mind’s purpose.

Simulated worlds themselves are a subclass of mental models, which are frameworks of subconceptual and conceptual thoughts bound together by a set of rules. Mental models may engage our conscious awareness as simulated worlds do, or they may be any abstract way we devise for thinking about something. As we mature we build a vast catalog of mental models to help us navigate the world. Our whole conception of the physical world comes to us through the mind’s real world, which is a simulation. It is very detailed and very reliable, but it is ultimately only a description of the real world with finite granularity. The implication of this is that while we only have one mind’s real-world feed, we model the mind’s real world using countless mental models to which we attach a greater or lesser degree of confidence. The mind’s real world is thus a constellation of simulated worlds and parts of worlds, many of which may take different perspectives on the same things. We don’t really know which is right or what “right” really means; we only know each model is effective in its own way. We aspire for our models to embody truth, meaning that they will always hold up when applied correctly, but we recognize that knowledge is always a compromise and truth is always contextual. Within a well-formed mental model, truth can be absolute, but the application of that model always introduces the possibility of alignment errors. Also note that the alignment from senses up to mental models is a continuous, two-way street. With substantial nonconscious support, we are constantly testing and adjusting the mental models that comprise the mind’s real world to bring their predictions more in line with the real world. As Andy Clark puts it in Surfing Uncertainty, “We alter our predictions to fit the world, and alter the world to fit our predictions.”
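To make the constant testing and adjusting of mental models concrete, here is a toy sketch in Python of a model that alters its predictions to fit the world via prediction error. It is a minimal illustration of the feedback idea, not a claim about how brains implement it; every name here is invented.

```python
# Toy sketch of a mental model adjusting its predictions to fit "the world".
# The model predicts the next sensory value and nudges its estimate by a
# fraction of the prediction error (a simple delta rule).

def adjust_to_world(observations, learning_rate=0.3):
    """Return the model's successive predictions for a stream of observations."""
    estimate = 0.0          # the model's current belief about the world
    predictions = []
    for observed in observations:
        predictions.append(estimate)        # predict before seeing the input
        error = observed - estimate         # prediction error from the feed
        estimate += learning_rate * error   # alter the prediction to fit the world
    return predictions

# A steady "world" of 10s: the model's predictions converge toward 10.
preds = adjust_to_world([10] * 20)
assert abs(preds[-1] - 10) < 0.1
```

The point of the sketch is only that confidence in a model is earned through repeated alignment with the feed, which is the two-way street described above.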

Our most effective mental models use a consistent and logical conceptual framework, because logical reasoning is better for solving problems than pattern recognition, common sense, and intuition, which can’t work out the implications that novel problems present. Logical reasoning gives us an open-ended capacity to chain causes and effects. It is important to remember that concepts and mental models build on other concepts, mental models and ultimately on instincts and subconcepts. This deeper framework is an essential component of understanding. John von Neumann once said, “Young man, in mathematics you don’t understand things. You just get used to them.”16 But it’s not just mathematics; all of understanding is really a matter of getting used to things. A conceptual model can be internally consistent, but its larger meaning depends on its alignment to simulated worlds, the mind’s real world, and the physical world, which we sense through its deeper connections to all our mental models. And good alignment is ultimately the basis of functionality, meaning news you can use.
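The open-ended chaining of causes and effects can be illustrated with a toy forward-chaining sketch in Python. The facts and rules are invented examples; the point is only that conclusions can build on conclusions indefinitely, which pattern recognition alone cannot do.

```python
# Minimal forward-chaining sketch: concepts appear as facts, and causal
# rules (premises -> conclusion) are applied repeatedly until nothing
# new follows, chaining causes into remote effects.

def chain(facts, rules):
    """Apply rules to a set of facts until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)   # a new effect becomes a new cause
                changed = True
    return facts

rules = [
    ({"rain"}, "wet ground"),
    ({"wet ground", "freezing"}, "ice"),
    ({"ice"}, "slippery roads"),
]
result = chain({"rain", "freezing"}, rules)
assert "slippery roads" in result   # three causal links away from "rain"
```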

In summary, all thought and behavior result from instinct, subconceptual thinking, and conceptual thinking. Our mental models combine these approaches to leverage the strengths of each. Genes collect and refine the first level of information across an evolutionary timescale. Instincts (senses, drives, and emotions) create the second level of information from patterns processed in real time but whose explicit utility has been established by evolution. Subconcepts form a third level of information whose specific use has not been predetermined by evolution, but which can be counted on to be useful over time. Finally, concepts are a fourth level of information which is representational or symbolic. Like subconcepts, conceptual thinking is innate and its adaptive value is established only by reproductive success. Evolution doesn’t know why collecting real-time information helps animals because genes are selected based only on their success and not the reasons for their success. But from our perspective, we can see the distinct purposes that instincts, subconcepts, and concepts fulfill. These three talents are distinct but can be hard to cleanly separate. It is hard to tell where instinct leaves off and real-time learning begins because they integrate so well. All our motivation derives from instincts, so even our most complex behaviors are partly instinct and partly learned. While in principle we can always distinguish concepts from subconcepts, since concepts are conscious collection points of information based on commonalities and subconcepts are only diffuse impressions, in practice our mind doesn’t distinguish them for us. We group related bits of information together conceptually in countless ways for very transient purposes, and even common concepts have many variations and connotations, so it is hard for us to pin any concept down.
It is even harder to identify subconcepts because to do so would conceptualize them, so we have to be content with inferring their existence from the realization that much of our experience is not bucketed conceptually. And although a concept only formally differs from a subconcept because it represents a grouping of information through an internal symbol or bucket, that symbol can participate in logical forms, which abstract logical operations from their content, making it possible to devise internally consistent logical models within which everything is necessarily true. We then extend necessity to possibility using modal logic, which splits a necessary world into a series of possible worlds. While the mind’s real-world feed results in just one necessary past, we view the future as a set of possible worlds and use information to find the most probable.
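The modal step described above can be sketched in a few lines of Python: a proposition is necessary if it holds in every possible world under consideration, and possible if it holds in at least one. The worlds and propositions are invented for illustration.

```python
# Toy possible-worlds sketch: necessity quantifies over all worlds,
# possibility over at least one, mirroring the modal logic described above.

worlds = {
    "w1": {"rain": True,  "picnic": False},
    "w2": {"rain": False, "picnic": True},
    "w3": {"rain": False, "picnic": False},
}

def necessarily(prop):
    """True only if prop holds in every possible world."""
    return all(w[prop] for w in worlds.values())

def possibly(prop):
    """True if prop holds in at least one possible world."""
    return any(w[prop] for w in worlds.values())

assert possibly("rain") and not necessarily("rain")
```

Viewing the future as such a set of worlds, the job of information is to weight those worlds so the most probable can be found.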

Reasoning, and especially logical reasoning, can only be done consciously. Reasoning is considered a conscious activity even though some parts of it, e.g. intuition, happen nonconsciously. All our top-level decisions are controlled consciously, which means we can control everything above the level of a reflex. But, ironically, none of our top-level decisions are executed consciously. What I mean by this is that the conscious mind delegates the execution of every decision to nonconscious processing. We take it for granted that our wish to blink our eyes will cause the appropriate muscles to contract, but we have no idea how that happens. Much more than that, we frequently delegate the decision to nonconscious control on a “preapproved” basis. Blinking is preapproved before we even know we have eyes or why we blink them, but we can take that preapproval away and control it consciously (though we won’t do it for long). Almost all the details of walking and moving are preapproved, and we don’t take much conscious notice unless we suspect we might want to override habit. This delegation of authority is necessary because the role of the conscious mind is to make top-level decisions, and it only has a single, narrow stream of conscious thought (or a handful, in the case of dissociative identity disorder) with which to consider all relevant factors, so the more habitual behavior can be automated the better. Non-habitual behavior can’t be delegated to nonconscious processes because the nonconscious mind can’t reason through solutions logically, but it can remember strategies that have worked before subconceptually and will present good fits almost instantly through intuition, which constitutes much of the power behind snap judgments.
While it is true that experiments have shown the brain can know what decision we will make up to ten seconds before we make it, this only demonstrates our power to “unconsciously prime” or preapprove decisions within operating parameters we have consciously established.17
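The preapproval scheme above resembles a dispatch table: habitual intentions route directly to automated handlers, and only unhandled cases escalate to the slow, single-stream conscious path. The following Python sketch is purely illustrative; all names are hypothetical.

```python
# Toy sketch of "preapproved" delegation: habitual actions dispatch straight
# to nonconscious handlers; anything without a handler escalates to the
# narrow conscious stream for deliberation.

preapproved = {
    "blink": lambda: "blinked automatically",
    "walk":  lambda: "stride adjusted automatically",
}

def act(intention):
    handler = preapproved.get(intention)
    if handler:
        return handler()    # executed without conscious attention to detail
    return f"conscious reasoning engaged for '{intention}'"

assert act("blink") == "blinked automatically"
assert "conscious" in act("negotiate a contract")
```

Withdrawing preapproval, as when we deliberately control blinking, corresponds to bypassing the table and handling the intention consciously.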

Dam building is a complex behavior in beavers that seems like it needed to be reasoned out and taught from generation to generation, and yet “young beavers, who had never seen or built a dam before, built a similar dam to the adult beavers on their first try.”18 So it is entirely instinctive, and results mostly from an innate desire to suppress the sound of running water. We also know language acquisition in humans is substantially innate because humans with no language will create one.19 But we know that all the built artifacts of civilization (except perhaps the hand axe, which may have been innate), including all its formal institutions, are primarily the products of thinking, both subconceptual and conceptual. Our experience of our own minds is both natural (i.e. instinctive) and artificial (i.e. extended by thinking and experience), but these aspects become so intertwined with feedback that they can be difficult or impossible to distinguish in many cases. For example, we know our sense of color is innate, and yet our reactions to colors are heavily influenced by culture. Or, we sometimes have the impression we have made a logically reasoned argument when all we have done is rationalize an instinctive impulse or an intuitive hunch. But although the three capacities always work together and overlap in their coverage, I believe they arise from fundamentally different kinds of cognitive processes that can be examined separately.

While nearly all animals learn from experience, demonstrating subconceptual thought, not all can think conceptually. Birds and mammals (let’s call them advanced animals for short) demonstrate problem-solving behavior including novel planning and tool use that goes far enough in some cases to indicate use of concepts to plan causes and effects. Non-advanced animals can’t do this and seem to lack the brain areas we associate with conceptual thought, so I believe only advanced animals have any capacity for it. We only know we are conscious and that our logical reasoning is conscious from introspection, so we can’t prove it in advanced animals, but observations and shared evolution make it very likely for mammals and pretty likely for birds as well. Still, we know humans are “smarter,” but what is it that distinguishes us? We know our thinking skills derive substantially from the size of our cerebral cortex, which is largely a function of how wrinkled it is. The number of cortical neurons correlates fairly well to how we might rank animal intelligence, with humans far outstripping other mammals20 (though not quite equaling some whales). But what do those neurons do to make us smarter? Over 500 genes contribute to intelligence21, and though we don’t know what they do, I would say that our greater capacity for abstract logical reasoning is the most defining trait of human intelligence. Abstraction is disassociation from specific instances, also called generalizing, and logical reasoning draws conclusions from generalized concepts. While other advanced animals have some capacity for this, humans engage in directed abstract thinking, which means we use models and simulations to solve problems independent of the here and now. When humans became able to decouple simulations to an arbitrary degree from the mind’s real-world feed, it expanded our intellectual potential arbitrarily as well.
Any topic is now within our reach, even if we are limited by our brains in how efficiently we can address it. Our capacity for abstraction coevolved with improved abilities to think spatially, temporally, logically and especially linguistically, because each new addition to our mental arsenals quickly spread under competitive pressure. As we became capable with tools, our ecological niche expanded to potentially include all niches. Conceptual thinking combined with tools let us develop and act on causal chains of reasoning that outperform instinctive and subconceptual thinking.

Abstraction opens the door to an unlimited range of possibilities, but evolution has kept this facility practical. The range of what is functionally possible is the domain of philosophy, and the range of what is functionally actual (and consequently practical) is the domain of psychology, the study of mental life. My explanation of the mind starts by enlarging our philosophical framework to include functional existence and moves from there to explain our psychology. Psychology spans neuropsychology, behavioral psychology, evolutionary psychology, cognitive psychology, psychoanalysis, humanistic psychology, introspection, and cognitive science (or at least some of it), and so brings a lot of perspectives to bear on the problem. They each constrain the possible to the actual in a different way depending on their functional objectives. Before I look at what psychology tells us, I’d like to develop a philosophy of information management. I described above how information is managed in the mind through instinct and thinking. Instinct imposes ingrained behaviors while thinking customizes behaviors in novel circumstances while leveraging learning and memory. Instinct is not divisible into explanatory parts; natural selection essentially allows the ends to justify the means. Subconceptual thinking is also unconcerned with the means; it evolved because real-time pattern storage, analysis and lookup was useful to survival. While the same could be said about conceptual thinking, the means are more relevant both because the algorithms are open-ended and because we are conscious of them. Conceptual thinking (logical reasoning) establishes its own criteria, where a criterion is a functional entity, a “standard, ideal, rule or test by which something may be judged.” This implies that reasoning depends both on representation (which brings that “something” into functional existence) and entailment (so rules can be applied).
Philosophically, reasoning can’t work in a gestalt way as instinct and subconcepts do; it requires that the pool of data be broken down into generalized elements called concepts that interact according to logical rules. Logical reasoning operates in self-contained logical models, which lets it be perfectly objective (repeatable), whereas subconceptual thinking is a subjective gestalt and hence may not be repeatable. Objective, repeatable models can build on each other endlessly, creating ever more powerful explanatory frameworks, while subjective models can’t. There may be other ways to manage information in real time beyond instinct and thinking, but I believe they are sufficient to explain minds. To summarize, functional existence arises in some complex physical systems through feedback loops to create information, which is a pattern that has predictive power over the system. The feedback loops of instinct use natural selection over millennia to create gestalt mechanisms that “work because they work” and not because we can explain how they work. The feedback loops of thinking use neural processing over seconds to minutes. Subconceptual thinking works because life is repetitive, so we have developed generalized nonconscious skills to find patterns and benefit from them. Conceptual thinking adds more power because self-contained logical models are internally true by design and can build on each other to explain and control the world better.
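The claim that feedback loops turn patterns into information with predictive power can be made concrete with a small Python sketch. A system tallies which event follows which (the feedback), then exploits the tallies to predict better than even odds in a repetitive environment. The stream and events are invented for illustration.

```python
# Sketch of information arising from feedback: record which event follows
# which, then use those tallies to predict the successor of the current
# event better than chance in a repetitive environment.

from collections import Counter, defaultdict

def learn_transitions(stream):
    counts = defaultdict(Counter)
    for prev, nxt in zip(stream, stream[1:]):
        counts[prev][nxt] += 1   # feedback: record what actually happened
    return counts

def predict(counts, current):
    return counts[current].most_common(1)[0][0]   # most likely successor

# "Life is repetitive": day follows night far more often than not.
stream = ["night", "day"] * 50 + ["night", "night"]   # one irregularity
counts = learn_transitions(stream)
hits = sum(predict(counts, a) == b for a, b in zip(stream, stream[1:]))
assert hits / (len(stream) - 1) > 0.5   # better than even odds
```

Nothing physical was added to the system by the tallies; what was added is a pattern with predictive power over future events, which is the sense of “information” used throughout.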

I’ve made a case for the existence of functional things, which can either be holistic in the case of genetic traits and subconceptual thinking or differentiated in the case of the elements of reason. But let’s consider physical things, whose existence we take for granted. Do physical entities also have a valid claim to existence? It may be that we can only be sure our own minds exist, but our minds cling pretty strongly to the idea of a physical world. Sensory feedback, along with accurate scientific measurement and experimentation, seems to establish almost certainly that the physical world exists independently of our imagination. So we have adequate reason to grant the status of existence to physical things, but we have to keep in mind that our knowledge of the physical world is indirect and our understanding of it is mediated through thoughts and concepts. Ironically, considering it has a secondary claim to existence, physical science has made much more definitive, precise, and arguably useful claims about the world than biology and the social sciences have. And even worse for the cause of the empirical functional sciences is that the existence of function has (inadvertently) been discredited. Once an idea, like phlogiston or a flat earth, has been cast out of the pantheon of scientific respectability, it is very hard to bring it back. So it is that dualism, the idea of a separate existence of mind and matter, has acquired the taint of the mystical or even the supernatural. But dualism is correct, scientific, and not at all mystical when one formulates function as a distinct kind of existence, one that becomes possible in a physical universe given enough feedback.

The laws of physical science provide reliable explanations for physical phenomena. But even though living systems obey physical laws, those laws can’t explain functionality. Brains employ very complex networks of neural connections. From a purely physical standpoint, we could never guess what they were up to no matter how perfectly we understood the neurochemistry. And yet, the mind arises as a consequence of brain activity, which is to say it is a process in the brain. The success of physical science coupled with the physical nature of the brain has led many to jump to the conclusion that the mind is physical, but the mind is just the functional aspect of the brain and not physical itself per se. Pursuing an eliminativist stance, the neurophilosopher Paul Churchland says the activities of the mind are just the “dynamical features of a massively recurrent neural network”22. From a physical perspective, this is entirely true, provided one takes the phrase “massively recurrent neural network” as a simplified representation of the brain’s overall architecture. The problem lies in the word “features,” which is an inherently non-physical concept. Features are ideas, packets, or groupings of abstract relationships about other ideas, which, as I have been saying, are the very essence of non-physical, mental existence. These features are not part of the mechanism of the neural network; they are signals or information that travel through it. This “traveling” is a consequence of complex feedback loops in the brain that capture patterns as information to guide future behavior. Any one feature can be thought about in different ways at different times by different people but will still fundamentally refer to the same feature, which means the functions it makes possible.

Philosophers have at times over the ages proposed categories of being other than form and function, but I contend they were misguided. Attempts to conceive categories of being all revolve around ways of capturing the existence of functional aspects of things. Aristotle’s Organon listed ten categories of being: substance, quantity, quality, relationship, place, time, posture, condition, action, and recipient of action. But these are not fundamental, and they conflate intrinsic properties with relational ones. Physically, we might say all particles have substance, place, and time (to the extent we buy into particles and spacetime), so these are inherent aspects of physical objects. All the other categories characterize aspects of aggregates of particles. But we only know particles have substance, place and time based on theories that interpret observed phenomena of them. Independent of observations, we can posit that a particle or object itself exists as a noumenon, or thing-in-itself. Any information about the object is a phenomenon, or thing-as-sensed. We have no direct knowledge of noumena; we only know them through their phenomena. Noumena, then, are what I call form, while the interpretation of phenomena is what I call function. Some aspects of noumena are internal and can never be known, while others interact with other noumena to propagate information about them, called phenomena. We can only learn about noumena by analyzing their phenomena using information management processes. We further need to break phenomena down into three aspects. A tree that falls in a forest produces a sound, a shockwave of compressed air; that shockwave is the transmitted phenomenon. If we hear it, that is the received phenomenon. If we interpret that sound into information, that is the true phenomenon or thing-as-sensed, since sensing means “making sense of” or interpreting.
The transmitted phenomenon and received phenomenon are actually noumena, being a shockwave or vibration of eardrums in this case, so I will reserve the word phenomenon for the interpretation or information processing itself. Interpretation is strictly functional, so all phenomena are strictly functional. Similarly, all function is strictly phenomenal in the sense that information is based on patterns and so is about them, because patterns are received phenomena. Summarizing, form and function dualism could also be called noumenon and phenomenon dualism. This implies that phenomena are not simply the observed aspects of noumena but are functional constructs in their own right and are consequently not ontologically dependent on their noumena. One could also say that all knowledge is phenomenal/functional while the objects of all knowledge are noumena/forms. Finally, I’d like to note that while form usually refers to physical form, when we make a functional entity the object of observation or description, it becomes a functional (non-physical) noumenon. For example, “justice” is an abstract concept about which we can make observations and develop descriptions. Our interpretations or understanding of it are phenomenal and functional, but justice itself (to the extent it can abstractly exist by itself) is just noumenal. While we can’t know anything for certain about noumena, a convenient way to think about them is that exhaustive, detailed phenomenal knowledge of a noumenon would reveal the noumenon to us. It’s not the same, because it is descriptive and not the thing itself, but it would eliminate mysteries about the noumenon. Put another way, our knowledge of (noumenal) nature grows more accurate all the time through (phenomenal) theories. We like to think we know our own minds, but our conscious stream can only access small parts at a time, which gives a phenomenal view into our own mind, whose noumena are only partially known to us.
In other words, we know our minds have functions, but our knowledge of them is indirect and imperfect. But we tend not to think that way because we can study our minds indefinitely until we are quite confident we have gotten sufficiently “close” to the noumena. So nouns like “apple,” naming physical noumena, and abstract nouns like “justice,” naming functional noumena, both define noumenal concepts which we feel we understand well even though we can only define and describe them approximately using words. While the full noumenal nature of “apple” and “justice” varies from person to person and depends on all their experience with the concepts and the assessments they have made about them, our phenomenal understandings of them intersect well enough at a high level that we can agree on basic definitions and applications in conversation.

Because information management systems physically exist, one can easily overlook that something more is happening. Let’s take a closer look at the features of functional existence. It is not that the functional exists as a separate substance in the brain as Descartes proposed or that it is even anywhere in the brain, because only physical things have a location. Instead, any thought I might have simultaneously exists in two ways, physically and functionally. The physical form of a thought is the set of neurons thinking the thought, including the neurochemical processes they employ. While we can pinpoint certain neurons or brain areas that are more active when thinking certain thoughts, we also know that the whole brain (and all the control systems of the body) participates in forming the thought because everything is connected. Everything is connected because instinct and subconceptual thinking are gestalts that draw on all our knowledge (including concepts), and logical reasoning uses closed models based on concepts, which are in turn built on instincts and subconcepts. The functional form of a thought is the role or purpose it serves. When we reflect on this purpose logically we form a concept of it that can be the subject or the predicate of a proposition with features that relate it to all other concepts. Functionality in minds has a practical purpose (even function in mathematics must be practical for some context, but here I refer to evolutionary practicality). A thought’s purpose is to assist the brain in its overall function, which is to control the body in ways that stand to further the survival and reproduction of the gene line. “Control” broadly refers to predictive strategies that can be used to guide actions, hopefully to achieve desired outcomes more often than chance alone would. And even seemingly purposeless thoughts participate in maintaining the overall performance of the mind and can be thought of as practice thoughts if nothing else.
For a predictive strategy to work better than chance, it needs to gather information, i.e. to correlate past situations to the current situation based on similarities. The relationship between information and the circumstances in which it can be employed is inherently indirect because information is indirect; it is necessarily about something else. So we can conclude that function acts in our physical world to control the actions of living things, or, by extension, in information management systems we create to control whatever we like.
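Correlating past situations to the current one based on similarities can be sketched as a nearest-neighbor lookup over remembered situations. The features and outcomes in this Python sketch are invented examples; it only illustrates the indirection described above, where stored information is *about* situations rather than being the situations themselves.

```python
# Sketch of predicting from similarity: recall the outcome of the most
# similar remembered situation, exploiting the aboutness of information.

def similarity(a, b):
    """Count the features two situations share."""
    return len(set(a) & set(b))

def recall_outcome(memory, current):
    """Return the outcome of the past situation most similar to the present."""
    best = max(memory, key=lambda past: similarity(past[0], current))
    return best[1]

memory = [
    (("dark clouds", "wind"), "rain"),
    (("clear sky", "warm"), "sun"),
]
assert recall_outcome(memory, ("dark clouds", "cold")) == "rain"
```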

  1. Downward Causation, Principia Cybernetic Web, 1995
  2. Downward Causation, The Information Philosopher
  3. Chemical bonding model, Wikipedia
  4. Pauling, L. (1932). “The Nature of the Chemical Bond. IV. The Energy of Single Bonds and the Relative Electronegativity of Atoms”. Journal of the American Chemical Society. 54 (9): 3570–3582. doi:10.1021/ja01348a011.
  5. Joscha Bach, From Computation to Consciousness: Can AI reveal the nature of our minds?, Ted Talk
  6. The Human Memory, Luke Mastin, 2018
  7. Why psychedelic drugs could transform how we treat depression and mental illness, A conversation with author Michael Pollan on becoming a “reluctant psychonaut.”, Sean Illing, Vox
  8. How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence, Michael Pollan, 2018
  9. I did have a few experiences with psilocybin and LSD in college and I do consider myself permanently improved by them. I have always been heavily into introspective thoughts, but I felt I was able to reach a sufficient state of epiphany that I could just take things easier going forward.
  10. A Mathematical Theory of Communication, Claude Shannon, 1948
  11. Science and Complexity, Warren Weaver, 1948
  12. Gilbert Ryle, The Concept of Mind, University of Chicago Press, 1949
  13. Ryle’s examples are more involved, e.g. that colleges and libraries comprise universities and that battalions and squadrons comprise military divisions.
  14. Conclusions of the Research on Nonconscious Information Processing, Pawel Lewicki, Psychology Department, University of Tulsa, Tulsa, OK
  15. Preconscious, Gillian Fournier, Psych Central, 2018
  16. John von Neumann, Wikiquote
  17. Brain makes decisions before you even know it, Kerri Smith, 11 April 2008, Nature
  18. Dam Building: Instinct or Learned Behavior?, Feb 2, 2011, 8:27 PM by Beaver Economy Santa Fe D11
  19. Nicaraguan Sign Language
  20. List of animals by number of neurons, Wikipedia
  21. A combined analysis of genetically correlated traits identifies 187 loci and a role for neurogenesis and myelination in intelligence, W.D. Hill et al, 2018, Molecular Psychiatry
  22. Paul Churchland, Neurophilosophy at Work, Cambridge University Press, 2007, p2
