Hey, Science, We’re Over Here

Scientists don’t know quite what to make of the mind. They are torn between two extremes, the physical view that the brain is a machine and the mind is a process running in it, and the ideal view that the mind is non-physical or trans-physical, despite operating in a physical brain. The first camp, called eliminative materialism (or just eliminativism, materialism or reductionism), holds that only the physical exists, as supported by the hard scientific theories and evidence of physics, chemistry, and biology. At the far end of the latter camp are the solipsists, who hold that only one’s own mind exists, and a larger group of idealists, who hold that one or more minds exist but the physical world does not. Most members of the second camp, however, acknowledge physical existence but think something about our mental life can never be reduced to purely physical terms, though they can’t quite put their finger on what it is. Social science and our own intuition both assume that mental existence is something beyond physical existence. A materialist will hold either that (a) there are no mental states, just brain states (eliminativism), or that (b) there are mental states, but they can be viewed as (reduced to) brain states (reductionism). An idealist will hold that mental states can’t be reduced, which means they have some kind of existence beyond the physical, and that the physical might exist but if so it can never be conclusively proven.

I propose to resolve all these issues by reintroducing dualism, the idea of a separate kind of existence for mind and matter, but I will do so in a way that is compatible with materialist science. While I agree that there is a physical basis for everything physical, I don’t agree that all physical things are only physical. Some physical things (namely living things and artifacts they create) are also functional, which is a distinct kind of existence. While one can explain these things in physical terms, physical explanations can’t address function, which is a meaningless concept from a physical perspective. I will argue that functional existence transcends physical existence and is hence independent of it, even though our awareness of functional existence and use of it are mediated physically. This is because function is indirect; while it can be applied to physical circumstances, functional existence is essentially the realm of possibility, not actuality. The eliminativist idea that everything that exists is physical, aka physical monism, is only true so long as one is referring only to physical things that are not functional. But physical systems can become functional by managing information, which is patterns in data that can be used to predict what will happen with better than even odds. Once physical things start to manage information, physical monism breaks down because information isn’t physical, though it can be stored physically. Function is an additional kind of existence that characterizes a system’s capabilities rather than its substance. Functional systems can be physical or imaginary, but if they exist physically then their functional reach has physical limitations and their physical reach depends on their functional strength. In nonfunctional physical systems, events unfold in direct accordance with physical laws, but functional physical systems predict and influence events using information created using feedback. 
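
To make the claim about information concrete, here is a minimal sketch in Python (my own illustration, with an invented two-symbol "world"): a system that records feedback about past events and uses those tallies to predict the next one. In a uniform world where one outcome is genuinely more common, the learned prediction beats even odds; in pure noise it could not.

```python
import random

def predict(history):
    """Predict the next symbol as the most frequent one seen so far."""
    if not history:
        return random.choice("AB")
    return max(set(history), key=history.count)

random.seed(0)
# A biased, i.e. patterned, environment: "A" occurs 70% of the time.
world = lambda: "A" if random.random() < 0.7 else "B"

history, hits = [], 0
for _ in range(1000):
    guess = predict(history)       # apply the stored pattern
    outcome = world()
    hits += guess == outcome
    history.append(outcome)        # feedback: record what actually happened

print(hits / 1000)                 # roughly 0.7, better than the 0.5 of blind guessing
```

The predictor knows nothing about the world's mechanism; it exploits only the uniformity of past feedback, which is all that "information" requires.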
A capability can be thought of as the “power” to do something, which is distinct from the underlying physical mechanisms that make it possible. Living things principally exist functionally and only secondarily physically because natural selection selects functions, not forms. Consequently, any attempt to understand living things must focus first on function and then on form. Brains are organs responsible for managing dynamical information, and the mind is that functional capacity itself. We are not always precise in how we use the words brain and mind, but I will use brain only to refer to physical aspects of the organ, while mind refers to functional aspects independent of our knowledge of the brain. Understanding the brain necessarily depends on the physical study of it, while understanding the mind, I contend, depends on studying its functions. The two work together, and so the study of each can help inform the other, but at all times function and form are existentially independent. How the brain manages function (i.e. as the mind) is so abstracted from the physical mechanisms that make it possible that we only understand in broad terms how the joint venture of form and function works. It is my job here to establish those broad terms better and to clarify them as much as possible.

Reductionists reject downward causation[1] as a nonsensical emergent power of complex systems. The idea is that some mysterious, nonphysical power working at a “higher” level causes changes at a “lower” level. For example, downward causation might be taken to imply that a snowflake’s overall hexagonal shape causes individual water molecules to attach to certain places to maintain the symmetry of the whole crystal. But this is patently untrue; only the local conditions each molecule encounters affect its placement. Each new water molecule will most likely attach to spots on the crystal most favored by current conditions encountered by that snowflake, which constantly change as the snowflake forms. But those favored spots at any given instant during its formation are hexagonally symmetric, making symmetrical growth most likely. The symmetry only reflects the snowflake’s history, not an organizing force. But just because downward causation doesn’t exist in purely physical systems, that doesn’t mean it doesn’t exist in functional systems like living things. Any system capable of leveraging feedback can make adjustments to reach an objective, hence “causing” it, if it also has an evaluative mechanism to prefer one objective over another. Mechanisms that record such objectives and have preferences about reaching them are called information management systems, and living things use several varieties of these systems, applying feedback to bring about downward causation. It is a misnomer to call this capacity of life an “emergent” property because it doesn’t appear from nothing; it is just that certain physical systems can manage information and apply feedback. So we can see that the “higher” levels of organization are these information management systems and the “lower” levels are the things under management. Living things use at least two and arguably four or more information management systems at different levels (more on these levels later).
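
A thermostat-style control loop is perhaps the simplest sketch of this kind of downward causation (the setpoint and gain below are invented for illustration): the recorded objective lives at the “higher” level, and feedback against it drives the “lower”-level adjustments.

```python
def regulate(temperature, setpoint=20.0, gain=0.5, steps=20):
    """Drive a temperature toward a stored objective using feedback."""
    for _ in range(steps):
        error = setpoint - temperature   # evaluative mechanism: compare to the objective
        temperature += gain * error      # low-level adjustment preferred by the goal
    return temperature

print(round(regulate(5.0), 2))           # converges on the objective: 20.0
```

No mysterious force acts on the temperature; the “cause” is the stored setpoint plus a mechanism that prefers states closer to it, which is exactly what an information management system provides.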
Reductionists hold that causation is entirely a function of subatomic forces, and that “causes” at the atomic, molecular, or substance level are reducible to subatomic forces. They further conclude that, since life is natural with nothing added (e.g. no mystical “spirit”), the mind is an epiphenomenon or an illusion, and in either case, is certainly not causing anything to happen but is merely observing. Although this strongly contradicts our experience, reductionists will just say that our perception of time and logic is biased, so we should ignore our feelings and just accept that our existence as agents in the world is only a convenient way of describing matters really managed at the subatomic level.

But the reductionists underestimated the potential side-effects of the uniformity of nature. The fact that all subatomic particles of a certain type behave the same way all the time, similarly causing uniform behavior among atoms, molecules and substances, has implications for the kind of reality that will emerge. Without uniformity, information would be impossible and function could not be achieved because they work by collecting and applying feedback from past events to predict future events. With uniformity, feedback loops can discover and exploit patterns that repeat themselves. Given enough time these loops can create arbitrarily complex systems. You might think of them as trial-and-error machines: they start out trying things at random, but they launch an arms race that produces ever more systematic methods delivering ever more functionality, until they are potentially capable of doing anything physically possible in that universe. So, ironically, information doesn’t emerge from complexity; it emerges from uniformity. Uniformity is akin to a superpower or supernatural force that, if present in a universe, enables the extra dimension of functional existence to manifest if feedback loops can flourish.

Bob Doyle (the Information Philosopher) explains it like this:


Some biologists (e.g., Ernst Mayr) have argued that biology is not reducible to physics and chemistry, although it is completely consistent with the laws of physics. Even the apparent violation of the second law of thermodynamics has been explained because living beings are open systems exchanging matter, energy, and especially information with their environment. In particular, biological systems have a history that physical systems do not, they store knowledge that allows them to be cognitive systems, and they process information at a very fine (atomic/molecular) level.

Information is neither matter nor energy, but it needs matter for its embodiment and energy for its communication.

A living being is a form through which passes a flow of matter and energy (with low or “negative” entropy, the physical equivalent of information). Genetic information is used to build the information-rich matter into an overall information structure that contains a very large number of hierarchically organized information structures. Emergent higher levels exert downward causation on the contents of the lower levels.[2]

I wrote most of this book before I discovered Bob Doyle’s work, so I did not know that anyone else had proposed the full-fledged existence of function/information independent of physical matter. But I’m glad to see that someone else thinks along the same lines as me. Doyle’s mission is to expose the larger role of information in philosophy, while my mission is to explain the mind. While doing that, I came to see function as a primary force and information as the medium that makes function possible. They are different: function is directly connected to purpose while information is only indirectly connected. Function is the capacity to do something, and information may or may not make it possible. Minds are functional entities that employ information.

I am not proposing property dualism, the idea that everything is physical but has both physical properties and informational properties. No physical thing has an informational property. Rather, I agree with eliminative materialists that physical things are just physical so far as physical existence goes. But physical systems called information management systems can arise that can exploit feedback to arbitrary degrees by recording that feedback as information, and this information can be viewed from a different perspective as having a distinct kind of existence. It doesn’t exist if your perspective is physical, but it is useful to have more than one perspective about what constitutes existence if your goal is to explain things. Ironically, explanation itself is not physical but is a functional kind of thing, so eliminative materialists can only argue their case by ignoring the very medium they are using to do so. But functional existence is a bit harder to put your finger on because function and information are abstractions which can never be perfectly described. Perfect descriptions are impossible because they are ultimately constructed out of feedback loops, which can reveal likelihoods but not certainties. Tautological information can be considered to be perfect, but only because it is true by definition. Tautological information doesn’t actually become functional until it is applied in a useful way, and “use” implies that at least something about the system is not known by definition. So some closed formal systems, e.g. some math and computer languages, can be perfectly deductive and within them information and functions are entirely predictable. But in all other systems, including the physical universe, some induction is necessary to predict what will happen, which will only work if those systems exhibit some uniformity. 
Function is a generalization about patterns that feedback has revealed, and a generalization is a nonphysical perspective, which is to say a way of explaining relationships between things. While function is strictly nonphysical, information can be thought of as having a physical manifestation through data stored in a physical medium like a brain, computer, or book. That the information is actually functional and not physical can be demonstrated through a simple thought experiment. Any information, e.g. the integer seventeen, can be stored in different ways in a person’s mind or in a computer, and can also be imagined to exist in a hypothetical sense without any physical manifestation. What it means to be “seventeen” is independent of any physical medium used to store it and above and beyond that physical form; it is what I call its function. One can argue that the “true” meaning of seventeen differs depending on how we define numbers and operations on them, but these things are themselves nonphysical functional concepts and not dependent on how or whether we store them in our brains or elsewhere.
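
The thought experiment can be made literal in a few lines of Python (the particular encodings below are my own illustrative choices): seventeen stored in physically different media, yet functionally interchangeable because each encoding supports the same operations with the same results.

```python
# Four physically different "storage media" for the same information:
encodings = {
    "decimal string": int("17"),
    "binary string":  int("10001", 2),
    "roman numeral":  {"XVII": 17}["XVII"],   # a lookup table as the medium
    "unary tally":    len("|" * 17),
}

# Whatever the medium, the function of seventeen is identical:
assert len(set(encodings.values())) == 1
print(all(v + 1 == 18 for v in encodings.values()))  # True
```

Nothing about the ink, the voltages, or the tally marks is “seventeen”; what the encodings share is purely functional.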

Though all functional things must take a physical form in a physical universe such as the one we are in, this doesn’t mean function “reduces” to the physical. The quest for reductionism is misguided and has been holding science back for entirely too long. We have to get past it before we can make meaningful progress in subjects where functional existence is paramount. To do that, let’s take a closer look at where and how function arises in a physical universe. The only place function has appeared is in living organisms, who achieved it through evolution, which applies feedback from current situations to improve the chances of survival in future situations. The biochemical mechanisms they employ matter more from a functional standpoint than a physical standpoint because they are only selected for what they can do, giving them a reason to exist, and not for how they do it. In the nonliving world, things don’t happen for a reason, they just happen. We can predict subatomic and atomic interactions using physics, and molecular interactions using chemistry. Linus Pauling’s 1931 paper “The Nature of the Chemical Bond” showed that chemistry could in principle be reduced to physics.[3][4] Geology and earth science generalize physics and chemistry to a higher level but reduce fully to them. However, while physical laws work well to predict the behavior of simple physical systems, they are not enough to help us predict complex physical systems, where complexity refers to chaotic, complex, or functional factors or a combination of them. Chaos is when small changes in starting conditions have butterfly effects that eventually change the whole system. Complex factors are those whose intricate interactions exceed the predictive range of our available models, which necessarily simplify. We see both chaos and complexity in weather patterns, and yet we have devised models that are pretty helpful at predicting them.
These models are based on physical laws but use heuristics to approximate how systems will behave over time. But the weather and all other nonliving systems don’t control their own behavior; they are reactive and not proactive. Living things introduce functional factors, aka capabilities. Organisms are complex adaptive systems (CAS) that exploit positive feedback to perpetuate changes through DNA. I can’t prove that complex adaptive systems are the only way functionality could arise in a physical universe, but I don’t see how a system could get there without leveraging cycles of positive and negative feedback. Over time, a CAS creates an abstract quantity called information, which is a pattern that has occurred before and so is likely to occur again. The system then exploits the information to alter its destiny. Information can never reveal the future, but it does help identify patterns that are more likely to happen than random chance, and everything better than random chance constitutes useful predictive power.
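
A toy version of such a trial-and-error loop can show how a CAS accumulates information (the target string and mutation scheme below are invented stand-ins for an environment’s selection pressure, not a model of real DNA): random changes are kept when feedback favors them and discarded otherwise, and the pattern that survives encodes something about the environment.

```python
import random

TARGET = "survive"   # stands in for whatever the environment happens to reward

def fitness(genome):
    """Feedback signal: how well the genome matches what the environment rewards."""
    return sum(a == b for a, b in zip(genome, TARGET))

random.seed(1)
genome = ["x"] * len(TARGET)
for _ in range(5000):
    trial = genome[:]                       # copy, then mutate one position at random
    trial[random.randrange(len(trial))] = random.choice("abcdefghijklmnopqrstuvwxyz")
    if fitness(trial) >= fitness(genome):   # positive feedback: keep what works
        genome = trial

print("".join(genome))                      # converges on "survive"
```

The loop never “knows” the target; beneficial accidents are simply never lost, so the genome comes to mirror the environment, which is all that information is: a pattern that improves prediction beyond chance.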

Functional systems, i.e. information management systems, must be physical in a physical universe. But because the mechanisms that control them are organized around what they can do (their capability) instead of how they do it (their physical mechanism), we must speak of their functional existence in addition to their physical existence. By leveraging feedback, these systems acquire a capacity to refer to something else, to be about something other than themselves that is not directly connected to them. This indirection is the point of detachment where functional existence arises and (in a sense) leaves physical existence behind. At this point, the specific link is broken and a general “link” is established. Such an indirect link refers only to the fact that the information can be applied appropriately, not that any link is stored in any way. Just how a functional system can use information about something else to influence it can be implemented in many ways physically, but understanding those ways is not relevant to understanding the information or the function it makes possible. At the point the information detaches it gains existential independence; it is about something without it particularly mattering how it accomplishes it. It has a physical basis, but that won’t help us explain its functional capabilities (though it does place some constraints on those functions). While every brain and computer has different processing powers (memory, speed, I/O channels, etc.), in principle they can manage (approximately) the same information in completely different ways because the measure of information is the function it makes possible, not how it is implemented. That said, in practice, physical strengths and limitations make each kind of brain and computer stronger or weaker at different tasks and so must be taken into consideration.

I call the new brand of dualism I am proposing form and function dualism. This stance says that while everything physical is strictly physical, everything functional is strictly functional. Physical configurations can act in functional ways via information management systems, and these systems can only be understood from a functional perspective (because understanding is functional itself). Consequently, both physical and functional things can be said to exist, never as different aspects of each other but as completely independent kinds of existence. Functional existence can be discussed on a theoretical basis independent of any physical information management system that implements it. So, in this sense, mathematics exists functionally whether we know it or not. More abstractly, even functional entities entirely dependent on physical implementations for their physical existence, like the functional aspect of people, could potentially be replicated to a sufficient level of functional detail on another physical platform, e.g. a computer, or spoken of on a theoretical basis. In fact, when we speak of other people, we are implicitly referring to their functional existence and not their physical bodies (or, at least, their bodies are secondary). So, entirely aside from religious connotations, we generally recognize this immaterial aspect of existence in humans and other organisms as the “soul.”

Information management systems that do physically exist include:

  • biological organisms, which store information in genes using DNA,
  • minds, which store information in brains neurochemically,
  • civilizations, which store information in institutions (rule-based practices) and artifacts (e.g. books), and
  • software, which stores information in computer memory devices.

Organisms, minds, civilizations, and software can be said to have functions, and it is meaningful and necessary to discuss such functions independently of the underlying physical systems they run on. Also note that minds heavily leverage the information managed by organisms, so one can’t deeply understand them without understanding organism function as well. Civilizations and software manage information using appropriately customized systems, but they are direct creations of minds and their ultimate purposes are only to serve the purposes of minds. If we do learn to build artificial minds or, more abstractly, information management systems with their own purposes and reasons for attaining them independent of our own, then they could be considered an independent class beyond organisms and minds.

Joscha Bach says, “We are not organisms, we are the side-effects of organisms that need to do information processing to regulate interaction with the environment.”[5] This statement presumes we define organisms as strictly physical and “we” as strictly functional, which are the senses in which we usually use these words. But saying that we are the side-effects is a bit of a joke because it is really the other way around: the information processing (us) is primary and physical organisms are the side-effects. Bach points out that mind starts with pleasure and pain. This is the first inkling of the mind subprocess, a process within the brain, separate from all lower-level information processing, whose objective is to make top-level decisions. By summarizing low-level information into an abstract form, the behavior of the brain can be controlled more abstractly, specifically: “Pleasure tells you, do more of what you are currently doing; pain tells you, do less of what you are currently doing.” All pleasure and pain are connected to needs: social, physiological and cognitive. In higher brains like ours, consciousness is like an orchestra conductor who coordinates a variety of activities by attending to them, prioritizing them, and then integrating them into a coherent overall strategy (his “conductor theory of consciousness”). Bach identifies the dorsolateral prefrontal cortex as the brain region that is most likely acting as the conductor of consciousness, coordinating the features of consciousness together to make them goal oriented. This part of the brain can help us run simulations of possibilities through our memories with an eye to aligning them with desired objectives. Our memories are encoded in the same parts of the brain that originally experienced them, e.g. 
the visual cortex, motor cortex, or language centers, etc., and recalling them reactivates the neural areas that generated that initial encoding.[6] Bach theorizes that the conductor is off when people sleepwalk. They can still go to the fridge or in some cases even make dinner or answer simple questions, but there is “nobody home”. Similarly, whenever we engage in any habitual behavior, our conscious conductor has ceded most of its control to other brain areas that specialize in that behavior. While the conductor can step in and micromanage, it usually won’t, and if it does step in where it has not been for a long time, it will often make things worse because it doesn’t remember how shoes are actually tied or how sentences are actually formed. Bach’s theory proposes that “You are not your brain, you are a story that your brain tells itself,” which is correct except for its humorous use of the word “itself” — the brain doesn’t have a self; you do. More accurately the sentiment should go, “You are your mind, but your mind is not your brain; the mind is a story the brain produces that you star in.”

I’m not going to go too deeply into the mechanisms of the brain, both because we only know superficial things about them and because my thesis is that function drives form, but I would like to talk for a moment about the default mode network (DMN). This set of interacting brain regions has been found to be highly active when people are not focused on the world around them, either because they are resting or because they are daydreaming, reflecting, or planning, but in any case not engaged in a task. It is analogous to a running car engine when the clutch is disengaged and power isn’t going to the wheels. Probably more than any other animal, people maintain a large body of information associated with their sense of self, their sense of others (theory of mind), and planning, and so need to be able to comfortably balance using their mind for these activities versus using it for active tasks. We like to think we naturally maintain a balance between the two that is healthy for us, but we now know that our culture has prioritized task-oriented thinking over reflection. Excessive focus on tasks is stressful, and more engagement of the default mode network is the solution. This can be achieved through rest and relaxation, meditation and mindfulness exercises, and, most effectively of all, via psychotropic drugs like psilocybin and LSD. Even a single experience with psychedelic drugs can permanently improve the balance, potentially curing depression or improving one’s outlook, though more research needs to be done to establish good guidelines (and they also need to be decriminalized!).[7][8][9]

Socrates and Plato recognized that function stands qualitatively apart from the material world and explained it using teleology. Teleology is the idea that in addition to a form or material cause things also have a function or purpose, their final cause. They understood that while material causes were always present and hence necessary, they were not sufficient or final to explain why many things were the way they were. Purposes humans imposed on objects like forks were called extrinsic, while purposes inherent to objects, like an acorn’s purpose to become a tree, were called intrinsic. Aristotle listed four causes to explain change in the world, not just two: material, formal, efficient, and final. The formal cause attaches meaning to the shape something will have, essentially a generalization or classification of it from an informational perspective. While this was an intrinsic property to the Greeks, nowadays we recognize that classification is extrinsically assigned for our convenience. The efficient cause is what we usually mean by cause today, i.e. cause and effect. Physicalism sees the material, formal and efficient causes as the physical substance, how we classify it, and what changes it. However, physicalism rejects the final, teleological cause because it sees no mechanism. After all, objects don’t sink to lower points because it is their final cause, but simply because of gravity. While I agree that this is true of ordinary physical things, I hold that teleology is both intuitively true and actually true for functional systems, and that the mechanism that makes it possible is information management. Physicalists consider the matter closed — teleology has been disproven because there is no “force” pulling things toward their purpose. But if one can see that function is real, then one can see that it exists to pull things toward purposes.
Teleology is so far out of favor that Wikipedia is hesitant to admit that the life sciences might require teleological explanation: “Some disciplines, in particular within evolutionary biology, continue to use language that appears teleological when they describe natural tendencies towards certain end conditions.[citation needed] While some argue that these arguments can be rephrased in non-teleological forms, others[who?] hold that teleological language cannot be expunged from descriptions in the life sciences.” It goes on, “Some biology courses have incorporated exercises requiring students to rephrase such sentences so that they do not read teleologically.” I would liken this to efforts to convert homosexuals to heterosexuality. Although evolution is, as Richard Dawkins says, a blind watchmaker, the information management systems of life nevertheless have purpose. The purpose of lungs is to provide oxygen, of the heart to circulate blood, and of limbs to provide mobility. They can also have additional purposes. The body is not aware of these purposes, and our attempts to summarize them through theory and explanation can only be approximately right, but their approximate truth is undeniable. Lungs, hearts and limbs that do not fulfill these purposes will seriously jeopardize survival. Although there is no actual watchmaker and the function of the lungs results from many chance feedback events, animals need oxygen and lungs help provide it. This is their design purpose whether the design was intentional or not.

Although Aristotle had put science on a firm footing by recognizing the need for teleological causes, he failed to recognize the source of purpose in natural systems. I contend that information management systems are the source; they accomplish purposes and functions whenever they apply information to guide their actions. The Scientific Revolution picked up four hundred years ago where Aristotle left off, but information as such would not be discovered until 1948[10], which then led into systems theory[11], also called cybernetics, in the following decades. Complex adaptive systems are complex systems that evolve, and living organisms are complex adaptive systems with autopoiesis, the ability to maintain and replicate themselves. Brains are dynamic information management systems that create and manage information in real-time. Minds are subprocesses running in brains that create a first-person perspective to facilitate top-level decisions. Civilizations and software are human-designed information management systems that depend on people or computers to run them.

Gilbert Ryle famously put the nail in the coffin of Cartesian dualism in The Concept of Mind[12] in 1949. While we know (and knew then) that the proposed mental “thinking substance” of Descartes that interacted with the brain in the pineal gland does not exist as a physical substance, Ryle felt it still had tacit if not explicit “official” support. While our lives unfold in two arenas, one of “inner” mental happenings and one of “outer” physical happenings, each with a distinct vocabulary, he felt philosophy presumed more than this: “It is assumed that there are two different kinds of existence or status. What exists or happens may have the status of physical existence, or it may have the status of mental existence.” He disagreed, contending that the mind is not a “ghost in the machine,” something independent from the brain that happens to interact with it. To explain why, he introduced the term “category mistake” to describe a situation where one inadvertently assumes something to be a member of a category when it is actually of a different sort of category. His examples focused on parts not being the same sort of thing as wholes, e.g. someone expecting to see a forest but being shown some trees might ask, “But where is the forest?” In this sort of example, the mistake arises from a failure to understand that forest has a different scope than tree.[13] He then contended that the way we isolate our mental existence from our physical existence was just a much larger category mistake which happens because we speak and think of the physical and the mental with two non-intersecting vocabularies and conceptual frameworks, yet we assume it makes sense to compare them with each other. 
As he put it, “The belief that there is a polar opposition between Mind and Matter is the belief that they are terms of the same logical type.” Ryle advocated the eliminativist stance: if we understood neurochemistry well enough, we could describe the mechanical processes by which the mind operates instead of saying things like “think” and “feel.”

But Ryle was more mistaken than Descartes. His mistake was in thinking that the whole problem was a category mistake, when actually only a superficial aspect of it was. Yes, it is true, the mechanics of what happens mentally can be explained in physical terms because the brain is a physical mechanism like a clock. But that is not the whole problem, and it is not the part that interested Descartes or that interests us, because saying how the clock works is not really the interesting part; the interesting part is the purpose of the clock: to tell time. The function of the brain cannot be explained physically because purpose has no physical correlate. The brain and the mind have a purpose — to control the body — but that function cannot be deduced from a physical examination. One can tell that nerves from the brain animate hands, but one must invoke the concept of purpose to see why. Ryle saw the superficial category mistake (forgetting that the brain is a machine) but missed the significant categorical difference (that function is not form). Function can never be reduced to form, even though it can only occur in a physical system by leveraging form. When we talk about the mind, we now know and appreciate that it is the product of processes running in the brain, but talking about the mind is not the same as talking about those processes any more than talking about cogs is the same as talking about telling time. The subject matter of the brain and mind is functional and never the same as the physical means used to think about them. Ryle concluded, “It is perfectly proper to say, in one logical tone of voice, that there exist minds and to say, in another logical tone of voice, that there exist bodies. But these expressions do not indicate two different types of existence, for ‘existence’ is not a generic word like ‘colored’ or ‘sexed.’” But he was wrong. In this case they really do indicate two different types of existence. 
Yes, the mind has a physical manifestation as a subprocess of the brain, so it is physical in that sense. But our primary sense of the word mind refers to what it does, which is entirely functional. This is the kind of dualism Descartes was grasping for, but he overstepped his knowledge by attempting to provide the physical explanation. The true explanation is that functional things can have physical implementations, and they must for function to impact the physical world, but function and information are fundamentally not physical and their existence is not dependent on space or time; they are pure expressions of hypothetical relationships and possibilities.

The path of scientific progress has influenced our perspective. The scientific method, which uses observation, measurement, and experiment to validate hypotheses about the natural world, split the empirical sciences from the formal sciences like mathematics, which deal in immaterial abstractions. The empirical sciences then divided into natural sciences and social sciences because progress in the latter was only possible by making some irreducible assumptions about human nature, chiefly that we have minds and know how to use them. These assumptions implicitly acknowledge the existence of function in the life of the mind without having to spell it out as such. Darwin’s discovery of evolution then split the natural sciences into physical and biological sciences. Until that point, scientists considered living organisms to be complex machines operating under physical laws, but now they could only be explained through the general-purpose advantages of inherited traits. This shift from the specific to the general is the foundation of information and what distinguishes it from the physical. So both the biological and social sciences tacitly build on a foundation of the teleological existence of function, but they are reluctant to admit it because material science has rejected teleology as mystical. But a physical science that ignores the existence of natural information management systems can’t explain all of nature.

The social sciences presume the existence of states of mind which we understand subjectively but which objectively arise from neural activity. The idea that mental states are not entirely reducible to brain activity is called emergentism. An emergent behavior is one for which the whole is somehow more than the parts. Emergence is real, but what is actually “emerging” in information management systems is functionality. From a physical perspective the system is not doing some “new” kind of thing it could not do before; it is still essentially a set of cogs and wheels spinning. All that has changed is that feedback is being collected to let the system affect itself, a capacity I call functionality. The behavior that results builds on a vastly higher level of complexity which can only be understood or explained through paradigms like information and functionality. While there are infinitely many ways one could characterize or describe information and functionality, all of them have in common that they detect patterns to predict more patterns. Because one must look to information and function to explain these systems and not only to physical causes, it is as if something new emerged in organisms and the brain. Viewed abstractly, one could say that the simplistic causal chains of physical laws are broken and replaced by functional chains in functional systems. This is because in a system driven by feedback, cause itself is more of a two-way street in which many interactions between before-events and after-events yield functional relationships which the underlying physical system leverages to achieve functional ends. The physical system is somewhat irrelevant to the capabilities of a functional system, which is in many ways independent of it. Ironically, the functional system could thus equally claim the physical system emerges from it, which is the claim of idealism. 
All of language, including this discussion, and indeed everything the mind does are functional constructs realized with the assistance of physical mechanisms but not “emerging” from them so much as from information and information management processes. A job does not emerge from a tool, but, through feedback, a tool can come to be designed to perform the job better. Thus, from an ideal perspective, function begets form.

Before life came along, the world was entirely physical; particles hit particles following natural physical and chemical rules. While we don’t know how the feedback loops necessary for life were first bootstrapped, we know conditions must have existed that allowed feedback and competition to take hold. I will discuss a scenario for this later, but the upshot is that DNA became an information store for all the chemicals of life, and it became embedded in single-celled and later multicellular organisms that could replicate themselves. According to form and function dualism, the information in DNA is a nonphysical aspect that confers capabilities to the organism. We characterize those capabilities in biology as genetic traits that confer adaptive advantages or disadvantages relative to alternative versions (alleles) of the same genes. Chemically, a gene either codes for a protein or regulates other genes. It doesn’t matter to the feedback loops of natural selection what a given gene does chemically, just whether the organism survives. Survival or death is the way the function of a gene is measured. Chemistry is necessary for function, but survival is indifferent to chemistry. In many cases, the chemical basis of a genetic advantage seems clear, while in others the “purpose” of the gene can be hard to identify. It seems likely that each protein would fulfill one biological function (e.g. catalyzing a given chemical reaction), because heritability derives from selection events on one function at a time; multiple functions would be challenging for natural selection to maintain, since a given mutation is unlikely to benefit both. However, cases of protein moonlighting, in which the same protein performs unrelated functions, are now well-documented. In the best-known case, different sequences in the DNA for crystallins code either for enzyme function or transparency (as the protein is used to make lenses). 
A majority of proteins may moonlight, but, in any case, it is very hard to unravel all the effects of even a primary protein function. So any causal model of gene function will necessarily gloss over subtle benefits and costs. A gene’s real purpose is a consolidated sum of the role it played in facilitating life and averting death in every moment since it first appeared. The gene’s functionality is real but has a deep complexity that can only be partially understood. Even so, approximating that function through generalized traits works pretty well in most cases. Although evolution is not directed, competition preferentially selects effective mechanisms, which is such a strong pressure that it tends to make genes very good at what they do. Mutations create opportunities for new functionality, but can also disable genes and their traits when niches change. To recap, genes collect information using DNA from feedback that predicts that a competitive advantage for a forebear will yield a competitive advantage for a descendant. It is a slow way to collect information, but evolution has had plenty of time and it has proven effective.
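The feedback loop just described, in which selection measures a gene only by survival and never by its chemistry, can be caricatured in a few lines of code. This is a toy sketch of my own, not anything from the text: the gene values, the hidden fitness function, and the mutation rate are all arbitrary assumptions. The point is that the selection loop never inspects what a gene “does,” yet information about what survives still accumulates in the population.

```python
import random

def hidden_chemistry(gene):
    # Stand-in for whatever the gene does chemically; the selection
    # loop below never looks inside this function.
    return 1.0 if gene % 3 == 0 else 0.2  # arbitrary survival odds

def generation(population):
    # Selection: each organism survives with odds set by its hidden chemistry.
    survivors = [g for g in population if random.random() < hidden_chemistry(g)]
    # Replication: survivors refill the population, with rare mutation.
    children = []
    while len(survivors) + len(children) < len(population):
        parent = random.choice(survivors)
        children.append(parent + (1 if random.random() < 0.01 else 0))
    return survivors + children

random.seed(0)
pop = [random.randrange(100) for _ in range(200)]
for _ in range(30):
    pop = generation(pop)

# After repeated feedback, high-survival genes dominate, even though
# the loop never examined any gene's "chemistry" directly.
share = sum(1 for g in pop if g % 3 == 0) / len(pop)
print(round(share, 2))
```

The loop is indifferent to chemistry in exactly the sense the text means: swap in any other `hidden_chemistry` and the same blind feedback still concentrates whatever happens to survive.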

Beyond the information managed by the genes that form the blueprint of the body, organisms need to manage some information in real time, and instinct is a tool they all possess to do this. The mechanisms that regulate the flow of metabolites in plants via xylem and phloem are not called instincts because this feedback is applied directly without information processing (which only happens in animals via brains). Instinct covers all behavior based on information processing that doesn’t leverage experience or reasoning. Without experience or reasoning, an instinct will work the same “hardwired” way from birth to death. Instincts present themselves consciously as urges, covering all our hardwired inclinations for things like eating, mating, emotions, and drives. Instinct also includes sensory processing, such as smell, touch, body sense and most notably vision, which creates high-fidelity 2D images and transforms them into 3D representations that are then recognized as specific objects or types of objects.

Instincts take ages to evolve and solve only the most frequently-encountered problems, but creatures face challenges every day where novel solutions would help. A real-time system that could tailor the solution to the problem at hand would provide a tremendous competitive advantage. Two approaches evolved to do this, and each makes use of experience and reasoning. I call the first conceptual thinking, which can also be called logical reasoning. Logical reasoning starts with premises, which are statements (predicates about subjects and objects) taken to be true, and draws consequences from them. Subjects and objects and the premises and predicates about them are all concepts, meaning discrete, expressed ideas. The second approach is subconceptual thinking, which is a broader world of thinking about things without subdividing them into conceptual buckets. Our memories and thinking processes are mostly subconceptual, while language and logical reasoning are mostly conceptual. I call the units of subconceptual thinking subconcepts in analogy to concepts. While concepts are well-defined, subconcepts are just impressions, but they are impressions based on lots of experience. We form subconcepts just from having experience and reflecting on it over time. Subconceptual impressions give us a generalized gist of the way things are. Subconcepts continuously arise in our minds as our memory is triggered by current stimuli. They are connected to many other subconcepts through loose associations drawn from many experiences. The lowest level of subconcepts develops from sensory feedback and can even have an instinctive component. For example, we instinctively avoid pain, but we build a subconceptual database that generalizes situations that might be painful. Subconcepts then go well beyond instinct to associate arbitrary causes with effects based on experiences rather than causal mechanisms. 
Concepts are well-organized by comparison because they have definitions that must be met to trigger a match. A concept is a bucket that holds a generalization about a class of things or other concepts. These buckets give us the ability to manipulate concepts and relationships between them using logical relationships. So where instincts produce fixed reactions and subconceptual thinking produces customized but simple reactions based on experience, conceptual thinking can produce a logical chain of cause-and-effect entailments that achieve complex, multi-step solutions beyond the reach of instinct and subconcepts. Subconceptual thinking supports things like common sense, pattern recognition, intuition, art, and music, while conceptual thinking supports math, language, and science, though conceptual thinking draws heavily on subconceptual thinking as well. We sometimes form concepts based on subconcepts, and these are called stereotypes. Each person forms their own stereotypes from their subconcepts, but we usually reserve the word stereotype for stereotypes widely shared by many people. Stereotypes are notably susceptible to bias, but they can provide useful guidelines for thinking conceptually about subjects not formally divided into concepts.

Much, or even most, of the data our brains gather about the world is subconceptual and can be trusted to guide us where we have little or no underlying conceptual understanding. While all our experiences create impressions that are used to form both subconcepts and concepts, a concept creates an explicit link between its bucket or name and its definition, which is itself a set of relationships that gets fuzzy around the edges rather than a strict dictionary definition. Usually, one or more examples help define it, but other means can be used to provide its set of defining generalizations. All examples will have some details that are more specific than the definition. While all examples must meet the definition, some features may only appear optionally. An object reference is another kind of bucket that associates to a specific thing or idea rather than a general type of thing as a concept does. We may or may not have one or more concepts to describe any given object reference, but an object reference is defined in terms of what it refers to, not by concepts. We can name a concept by attaching a word or phrase to it, and we can name an object reference using a proper noun. Many words are attached to multiple concepts, each of which will have its own entry in the dictionary. Words provide the most versatile anchors for referencing concepts, but we manage many concepts contextually without having specific or entirely accurate words to name them, for example for familiar parts of larger objects. Thinking about concepts, object references, and subconcepts to find new patterns and reach conclusions is called reasoning. Reasoning is the conscious capacity to “make sense” of things by producing useful information linking them together. We have just one stream of consciousness along which our attention is focused at any given moment, but we have many lines of thought, which are sequences of thoughts pertaining to each other that share a common context. 
We will hold one or more lines of thought for each objective we think about attaining. These lines can break down into sub-lines for sub-objectives or alt-lines for alternative strategies. We will often imagine lines that are completely unattainable, but we can devote lots of time to attaining desirable lines of thought.

The most primitive subconcepts, percepts, are drawn from the senses using internal processes to create a large pool of information akin to big data in computers. Subconcepts and big data are data that is collected without explicit consideration of the data’s purpose. It is the sort of data that has been helpful in the past, so it is likely to be useful again. Over time we develop algorithms that mine subconcepts or big data to find useful patterns that lead to helpful actions, still without having a clear idea about what the data “means.” We don’t have to understand common sense, intuition or music to be talented at them. Concepts, on the other hand, are akin to structured data in computers. A concept is an idealization of a pattern found in subconcepts into a generalized element with specific associated properties. While the patterns are primarily subconceptual, a network of relationships to other concepts also forms. A concept is a pure abstraction (i.e. having no correlate in the physical world) that is defined by its subconceptual properties and its relationships to other concepts. The patterns are frequently chosen so that the concept can be reliably correlated to a generalized class of entities in the physical world, but this connection is indirect and does not make the concept itself physical. Basic reasoning uses subconceptual pattern analysis, including recognition, intuition, induction (weight of evidence) and abduction (finding the simplest explanation). But deduction (aka entailment or cause and effect) cannot be done subconceptually, because by construction entailment requires discrete premises and not diffuse, unstructured data. Basic reasoning can also leverage concepts, but deduction specifically requires them. Logical reasoning principally means deduction, though it arguably also includes logical treatments of induction and abduction, but I will use the term logical reasoning to refer specifically to our conscious conceptual thinking capacity.
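The analogy between subconcepts and big data, and between concepts and structured data, can be sketched as follows. This is a loose illustration of mine with made-up records, not the author’s model: the “impression” is a weight-of-evidence ratio mined from unstructured experience, while the “concept” is an explicit definition that must be met to trigger a match.

```python
# Big-data-style experience records, collected with no purpose in mind.
experiences = [
    {"shape": "round", "color": "red", "sweet": True},
    {"shape": "round", "color": "green", "sweet": True},
    {"shape": "long", "color": "yellow", "sweet": True},
    {"shape": "round", "color": "red", "sweet": False},
]

# Subconceptual "impression": a weight-of-evidence association, not a rule.
round_and_sweet = sum(1 for e in experiences if e["shape"] == "round" and e["sweet"])
round_total = sum(1 for e in experiences if e["shape"] == "round")
impression = round_and_sweet / round_total  # roughly "round things tend to be sweet"

# Conceptual "bucket": an explicit definition that either matches or doesn't.
def is_fruit_concept(thing):
    return thing.get("sweet") is True and thing["shape"] in {"round", "long"}

print(impression, is_fruit_concept(experiences[0]))
```

The impression degrades gracefully as experience accumulates or contradicts itself; the concept, by contrast, supports deduction precisely because its match is binary.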

You will only consider a theory about how the mind works credible if it is conceptual. Subconceptual appeals to common sense, intuition or other impressions could help build a case but do not make a case. Two fundamental differences between concepts and subconcepts account for this. The first is that concepts support binary logic with which one can deductively prove results given premises. And the second is that we have full conscious awareness of the premises and logic we use to reason conceptually. Subconcepts, on the other hand, only inductively suggest results given a body of experience, extrapolating from examples. And, more significantly, we do not have full conscious awareness, or sometimes any awareness, of the information the subconcepts are based on. I would like to expand on this last point a bit. All of our concepts are built on subconcepts, and all our subconcepts are built on our pool of experience, but we can’t consciously access most of that information for two reasons. First, we forget most of the detailed information we take in. Sensory memory only lasts seconds and short-term memory lasts at most minutes, but long-term memory can be permanent. But long-term memory must be reinforced or it fades. The exact reasons for this are not fully understood, but it is probably in our best interests, evolutionarily speaking. But experience reinforces concepts and subconcepts as it comes in, so it has lasting value even if we forget the details. And second, our conscious minds are only made aware of information and information processing in the brain on a need-to-know basis. Again, the exact reasons are not fully understood but it is sensible. Some brain functions are automatic and don’t need conscious support, while many others are low-level and often semi-autonomous, and for these consciousness is either unaware or only sees a high-level summary. 
This hidden part of the brain is called the nonconscious mind14, and it is thought to manage 90 to 95 percent of everything we do, though it is arguably not meaningful to divide the share of work in this way given that the conscious mind is more a supervisor than a worker. All our reasoning about what our brains do nonconsciously is based on this process of elimination: it is the functions we know the brain performs minus those we report from conscious awareness. Before going further, let me contrast nonconscious with the more familiar term subconscious:

nonconscious: mental activity that is not conscious and cannot be brought into conscious awareness because it is outside the realm of conscious experience

subconscious: mental activity just below the level of consciousness that influences one’s conscious thoughts, actions, and feelings and can potentially be brought into conscious awareness because it is inside the realm of conscious experience

Freud used the term subconscious in 1893 but abandoned it because he felt it could be misunderstood to mean an alternate “subterranean” consciousness. Though Freud gave up on it, it is still used to refer to the well-known factors that influence our conscious thoughts and feelings that are at the “tip of the tongue”, as it were. Freud’s own model was based on the idea of a conscious mind and an unconscious mind. Freud’s unconscious mind was the union of repressed conscious thoughts that are no longer accessible (at least not without psychoanalytic assistance) and the nonconscious mind. He saw the preconscious, which is quite similar to what we now call the subconscious, as the mediator between them:

Freud described the unconscious section as a big room that was extremely full with thoughts milling around while the conscious was more like a reception area (small room) with fewer thoughts. The preconscious functioned as the guard between the two spaces and would let only some thoughts pass into the conscious area of thoughts. The ideas that are stuck in the unconscious are called “repressed” and are therefore unable to be “seen” by any conscious level. The preconscious allows this transition from repression to conscious thought to happen.15

I’m not going to refer to Freud or his terminology any further, both because it is now considered dated and because it was designed for psychoanalysis, which is quite far afield from explaining how the mind works. Most of my discussions will focus on the realm and interactions of the nonconscious and conscious minds, though the phenomena associated with the subconscious do warrant an explanation and I will address them later as well.

Let’s take a closer look at the information and information processing techniques used by the nonconscious mind. Although we can’t, by definition, acquire a conscious awareness of this information and processing, we can use objective means to reveal that they exist and what they must do. Most notably, our senses process incoming data signals into the somewhat magically “real” sensory feelings we experience. If we look specifically at vision, our most powerful sense, we can readily see that several levels of nonconscious processing must be happening to produce sight as we know it. First, incoming light must be captured by the camera of the eye onto a retina of photoreceptor cells that represent it as an essentially pixelated image with a certain degree of resolution, brightness, and wavelength distinction. Another layer needs to group the image into similarly-colored regions using edge detection, and a third layer needs to recognize objects from those groups. The exact mechanics are quite involved and only partially understood, but the important point here is that we know from experience that none of this information or its processing is conscious, so we can safely conclude it is nonconscious.
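The layered pipeline described here (pixels, then edges, then grouped regions, then recognized objects) can be caricatured in code. This is a deliberately crude sketch of my own, not a model of neural processing; the tiny grid, the edge rule, and the “square” test are all invented for illustration.

```python
# "Retina": a pixelated image, 0 = dark, 1 = bright.
image = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

def edges(img):
    # Layer 1: mark pixels whose right or lower neighbor differs in brightness.
    h, w = len(img), len(img[0])
    return {(r, c) for r in range(h) for c in range(w)
            if (c + 1 < w and img[r][c] != img[r][c + 1])
            or (r + 1 < h and img[r][c] != img[r + 1][c])}

def bright_region(img):
    # Layer 2: group the similarly-"colored" bright pixels into one region.
    return {(r, c) for r, row in enumerate(img) for c, v in enumerate(row) if v == 1}

def recognize(region):
    # Layer 3: crude recognition; a filled block of bright pixels is a "square".
    rows = {r for r, _ in region}
    cols = {c for _, c in region}
    filled = all((r, c) in region for r in rows for c in cols)
    return "square" if filled and len(rows) == len(cols) else "unknown"

print(len(edges(image)), recognize(bright_region(image)))
```

None of these layers knows about the others; each just transforms its input into a more structured summary, which is the point the text makes about nonconscious stages delivering only a high-level result to awareness.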

Similarly but in reverse, layers of nonconscious processing control our motor movements. As the nerves and muscles form, nonconscious neural mechanisms develop to control our muscles. Our conscious minds learn to control our muscles in a top-down way in terms of accomplishing purposes: we want to move in certain ways to get things done. The layers of processing needed to convert high-level wishes into coordinated muscle responses are automated through learning processes akin to habit formation. We can and do, in time, gain considerable granular control over our muscles, but this is always only a supervisory level of control; we cannot understand any of the processes controlling muscle movement or coordinating groups of muscles. We instead have conscious feelings about them from our sensory processing, which works quite well for us but doesn’t reduce our indebtedness to our nonconscious minds.

Most of the processing that powers our memory is also nonconscious. Our conscious experience is that our desire to recall memories relating to something we have in mind is met with the appearance of such memories, but we don’t know how it happens. A somewhat random assortment of memories appears in connection with anything we think about. Intense focus on memories of any one thing can potentially unearth an almost unlimited number of memories and associations we have for that thing, which are then linked by an unlimited number of connections to other things. We have neither the time nor the inclination to probe our memories exhaustively, but we know we have a lot in there. But we also know that we cannot recall most of what has ever happened to us; we have either forgotten it or can no longer access it, and our memories come up blank past a certain level of detail.

Next, our facility with language is mostly nonconscious. Aside from our vast memory for words and their meanings provided by our nonconscious, we have what can only be called a knack for language; we can almost instantly form and understand sentences. Sentences use a complex grammatical structure which we have tried to unravel and teach in school, but our understanding of universal grammars can still be said to be somewhat rudimentary. The reality is that our brain’s language centers apply many subtle rules for us to make language work effectively which we can’t consciously see or understand at all.

Conscious thought itself is, of course, not nonconscious, but it depends on a staggering amount of nonconscious processing beyond what I have mentioned above. Although Descartes no doubt thought that conscious thought itself was a fundamental substance or functional entity, it is really a high-level subprocess of the brain specifically designed to limit awareness to a streamlined, functional perspective of the world. Just as it takes a lot of work behind the scenes to make a movie or write a book or design any human artifact, it takes a lot of processing to create the high-level subprocess of consciousness. Though it is still well beyond our means to quantify that amount accurately, and such a quantification could be defined in many ways, I do think that when we can finally detect the neurons “directly” felt by awareness and compare them to those indirectly supporting awareness, we will find that over 99% are in a supporting role. That support most crucially creates for us the perception that we are the interactive stars in a continuous movie, and also gives us many abstract abilities for thinking about things, including the ability to simulate additional movies (which we use when we watch movies, read books, or daydream), or more generally to create and use mental models. These high-level abilities depend on our intuitive capacity to catalog many traits and properties for many kinds of things, a general capacity that is supplemented by customized nonconscious support for many other fundamental mental abilities16, like:

  • sense of mechanics and forces,
  • sense of space and maps,
  • sense of quantity,
  • sense of what is safe, dangerous, or disgusting,
  • sense of primal drives, including food and sex,
  • sense of well-being and self,
  • sense of the behavior or psychology of plants, animals, and people, which extends to sense of kinship, justice and theory of mind (the ability to attribute mental states — beliefs, intents, desires, pretending, knowledge, etc. — to oneself and others)

We can consciously extend these nonconscious talents through focused attention and training, but we can’t take conscious credit for creating them. We just feel these things, much like we feel our senses, so we must accept that consciousness is limited in how far it can understand and control them. In a very real sense, the nonconscious mind does so much that the conscious mind just has to show up for a free ride. The purpose of the conscious mind is just to make high-level decisions, which could arguably also be viewed as mediating these decisions between our many nonconscious talents. So it makes sense, and is apparently the case, that our conscious experience of the world is narrowly limited to a high-level summary perspective that presents us with all the relevant information in an easy-to-digest format that lets us quickly and appropriately keep deciding what we will do next. From these high-level reports from all these talents, we conduct or supervise matters without a detailed understanding of what it actually takes to make these talents possible from a computational perspective. We can’t legitimately claim that conscious knowledge of nonconscious mechanisms would be a distraction, because our ability to focus attention already acts to prevent everything in our broader awareness from distracting us. Quite the contrary, we can benefit consciously from knowing more about what our brains are doing for us because we can then focus our attention on those details to achieve better control. We do have a conscious capacity to focus in on some details of each of our nonconscious talents, but there are limits and we can’t see most of what they are doing. The reason is ultimately a matter of practicality. The computational cost of creating a high-level experience for conscious consumption is relatively high, so it doesn’t make sense to create such experiences for all low-level functions. 
Still, if there were a significant adaptive advantage to gaining visibility into a nonconscious function, it is likely that such visibility would evolve. This criterion accounts for the range of conscious access to brain functions we feel today. Consequently, we can manually control our blinking, probably because it is helpful not to blink at certain times when precision is required, but there is no benefit to understanding what processes make recognition of an object possible, so recognition seems atomic and not based on underlying subprocesses. Also, although we can’t sense the nonconscious mind directly, we can speak of all the information that appears in the conscious mind as a consequence of nonconscious talents as having been fed to us by the nonconscious mind. So our senses of mechanics and quantity, for example, are consciously felt but nonconsciously created and fed.

Now that I have brought the nonconscious mind into the discussion, I can say that subconcepts are created entirely by the nonconscious mind and are presented to the conscious mind as impressions or feelings, while concepts are created by the conscious mind (though using many supporting nonconscious talents) with specific definitions. We can thus be said to have a deep conscious understanding of concepts but only an intuitive conscious understanding of subconcepts. Although our understanding of subconcepts is intuitive rather than reasoned, and so subconcepts depend entirely on the weight of experience rather than any kind of logical support, we largely trust our experience and hence the subconcepts that derive from it. As we probe our memories, both concepts and subconcepts will appear, which can trigger more memories to appear, including memories of specific or idealized events. Idealized events can either be imaginary or a composite of actual events. We remember events through references, which I called object references above, and could also be called facts. Facts are specific details that are true in a given context, while subconcepts and concepts are generalizations that have broad applicability across a range of contexts. Contexts, in turn, depend on both details and generalizations to make sense, which means this overall model of comprehension is recursively defined because all the pieces can build on each other. However, there is a bottom because nonconscious talents mostly work with low-level data and mechanisms rather than building on higher-level constructs. Consciousness is much more a recursive phenomenon than nonconscious information and processing.

The contexts I am talking about are more properly called mental models, which give us our sense of how things work by binding subconcepts and concepts together into related sets. Mental models have strong nonconscious support that lets them appear in our heads with little conscious effort. The subconceptual aspects of these models give them a very “real,” sensory feel to us, while the conceptual aspects that overlay them connect things at a higher level of meaning. Subconceptual thinking supports much of what we need to do with these models (akin to an autopilot), but pattern recognition, common sense, and intuition are often poor at solving novel problems; logical reasoning is much more effective there because it gives us an open-ended capacity to chain causes and effects in real time. As we mature we build a vast catalog of mental models to help us navigate the world. Though we may remember the specific times they were applied, we mostly remember how to use them in a general sense. Note that although logic helps hold mental models together, it doesn’t follow that understanding is a consequence of logic. John von Neumann once said, “Young man, in mathematics you don’t understand things. You just get used to them.”17 But it’s not just mathematics; all of understanding is really a matter of getting used to things. We feel naturally inclined to get used to things our nonconscious tells us “make sense,” but all of knowledge is ultimately relative: logical systems are internally consistent but are based on premises whose support is ultimately subjective. The important thing about understanding is that it is functional; it is news you can use.

The physical world lives in our minds via mental models. Our minds hold an overall model of the current state of the physical world that I call the mind’s real world. Whatever the physical world might actually be, we only know it consciously through the mind’s real world. The mind’s real world draws on the countless mental models that have helped us understand everything we have ever seen. These models don’t have to be right or mutually exclusive; whichever models give us our most accurate view of physical reality together compose our conception of it. The mind’s real world “feels” real to us, although it is purely a mental construct, because the mind is inclined to interpret its sensory connections to the physical world that way instinctively, subconceptually and conceptually. But we don’t just live in the here and now. Because the mind’s primary task (and the whole role of information and function) is to predict the future, mental models flexibly apply to a range of circumstances. We call the alternative ways things could have been or might yet be possible worlds. In principle, the mind’s real world is a single possible world, but in practice our knowledge of the physical world is imperfect, so our model of it in the past, present and future is always a constellation of possible worlds.

In summary, all behavior results from instinct, subconceptual thinking, and conceptual thinking. Our mental models combine these approaches to leverage the strengths of each. Genetic data is a first-order bearer of information that is collected and refined on an evolutionary timescale. Instincts (senses, drives, and emotions) are second-order bearers of information that process patterns in real time whose utility has been predetermined by evolution. Subconcepts are third-order bearers of information in which the exact utility of the patterns has not been predetermined by evolution, but which do tend to turn out to be valuable in general ways. Finally, concepts are fourth-order bearers of information that are fundamentally symbolic; concepts are pure abstractions that represent a block of related information distilled from patterns via feedback. Some nonconscious thought processes (e.g. vision and language processing) manipulate concepts in customized ways without applying general-purpose logical reasoning, which can probably only be done consciously. Logic finds reasons, i.e. rules, that work reliably or even perfectly in mental models. To apply the results of logical reasoning we must ultimately map our models back to the real world, and for this we depend mostly on a nonconscious capacity to fit our models back to reality via “reverse recognition” mechanisms. Recognition and reverse recognition are complex problems requiring massively parallel computation for which present-day computers are only recently developing some facility, but for us they just happen with no conscious effort. Because our nonconscious minds do these things for us automatically, our simplified, almost cartoon-like conceptual representation of the world feels like the real world to us.

Our three real-time thinking talents — instinct, subconceptual thinking, and conceptual thinking — are distinct but can be very hard to separate cleanly. We know instinct influences much of our behavior, but we are quite unsure where instinct leaves off and tailored information management begins because they integrate very well. And even complex behavior, most notably mating, can be driven by instincts, so we can’t be too sure instinct isn’t behind any given action. While subconceptual and conceptual thinking can be readily separated based on the presence of concepts, it can be difficult or impossible to say at exactly what point a concept has coalesced from subconcepts. In theory, though, I believe there must be a logical and physical point at which a concept comes to exist: the moment that a set of information is referenced as a collective. This suggests that conceptual processes differ from subconceptual ones because they involve objectification of data by reference. Logical reasoning introduces the logical form, which abstracts logical operations from their content, making it possible to devise internally consistent logical models within which everything is necessarily true. Reasoning, and especially logical reasoning, can only be done consciously; it is considered a conscious activity even though some parts of it, e.g. intuition, happen nonconsciously. The decision to act does not need to be strictly conscious. Habitual or snap decisions are sometimes made nonconsciously before conscious awareness, but such decisions can be considered to have been “preapproved” by prior conscious thought. We do always have the conscious prerogative to override “automated” behavior, though it may take us some time to decide whether to do so. But only consciousness is equipped to pursue a chain of reasoning, so habitual responses can only replay stored sequences. 
Our capacity for logical reasoning continues to operate in our stream of consciousness when we dream and daydream, even though consciousness is reduced at those times.
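
The moment of objectification proposed above — a concept coming to exist when a set of information is first referenced as a collective — can be illustrated with a toy sketch. Everything here is a hypothetical illustration of the idea, not a claim about neural implementation.

```python
# Before objectification: loose subconceptual features with no single referent.
features = {"red", "round", "sweet", "grows on trees"}

class Concept:
    """A concept objectifies a set of information by giving it one reference."""
    def __init__(self, name, features):
        self.name = name
        self.features = frozenset(features)  # the set, now referenced as a collective

    def __repr__(self):
        return self.name

# The moment of objectification: the feature set gains a single handle.
apple = Concept("apple", features)

# Only now can the collective serve as the subject or predicate of a proposition:
proposition = (apple, "is", "edible")
```

The design choice matters: the features themselves are unchanged by objectification; what is new is the single reference (`apple`) that lets other processes treat the collection as one thing.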

Dam building is a complex behavior in beavers that seems like it needed to be reasoned out and taught from generation to generation, and yet “young beavers, who had never seen or built a dam before, built a similar dam to the adult beavers on their first try.”18 So it is entirely instinctive. We know language acquisition is similarly innate in humans because humans with no language will create one.19 But we know that all the artifacts of civilization (except perhaps the hand axe, which may have been innate), including all its formal institutions, are primarily the products of thinking, both subconceptual and conceptual, and of the experience they created. Our experience of our own minds is both natural (i.e. instinctive) and artificial (i.e. extended by thinking and experience), but these aspects are so intertwined in our perspective that they can be difficult or impossible to distinguish in many cases. For example, we know our sense of color is innate, and yet our reactions to colors are heavily influenced by culture. Or, we sometimes have the impression we have made a logically reasoned argument when all we have done is rationalized an instinctive impulse or an intuitive hunch. But although the three capacities always work together and overlap in their coverage, I believe they arise from fundamentally different kinds of cognitive processes that can be examined separately.

While nearly all animals learn from experience, demonstrating subconceptual thought, not all can think conceptually. Birds and mammals (let’s call them advanced animals for short) demonstrate problem-solving behavior, including novel planning and tool use, that indicates the conceptual use of cause and effect, which goes beyond what instinct and subconceptual thinking could achieve. Other animals do not, and I suspect all others lack even a rudimentary conceptual capacity. We only know we are conscious and that our logical reasoning is conscious from introspection, so we can’t prove it in advanced animals, but observations and shared evolution make it very likely for mammals and pretty likely for birds as well. Still, we know humans are “smarter,” but what is it that distinguishes us? It is our greater capacity for abstract logical reasoning. Abstraction is the ability to decouple information from physical referents, to think with concepts and mental models in logical terms independent of physical reality. We consequently don’t need to constrain our thoughts to the here and now; we can dream in any direction. This greater facility and impetus to abstraction has coevolved with abilities to think spatially, temporally, logically and especially linguistically that exceed those of other animals. Loosening this tether back to reality began with small changes in our minds, but these changes opened a floodgate of increased abstraction because it provided greater adaptive power. Though we must ultimately connect generalities back to specifics, most words are generic rather than specific, meaning that language is based more on possible worlds than on the mind’s real world specifically. I call our ability to control our thoughts in any direction we choose directed abstract thinking, and I maintain animals can’t do it. 
Advanced animals can logically reason, focus, imitate, wonder, remember, and dream, but their behavior suggests they can’t pursue abstract chains of thought very far or at will. Perhaps the ecological niches into which they evolved did not present them with enough situations where directed abstract thinking would benefit them enough to justify the additional costs such abilities bring. But why is it useful to be able to decouple information from the world to such a degree? The greater facility a mind has for abstraction, the more creatively it can develop causal chains that can outperform instinct and subconceptual thinking.

Abstraction opens the door to an unlimited range of possibilities, but evolution has kept this facility practical. The range of what is functionally possible is the domain of philosophy, and the range of what is functionally actual (and hence practical) is the domain of psychology, the study of mental life. My explanation of the mind starts by enlarging our philosophical framework to include functional existence and moves from there to explain our psychology. Psychology spans neuropsychology, behavioral psychology, evolutionary psychology, cognitive psychology, psychoanalysis, humanistic psychology, introspection, and a portion of cognitive science, and so brings a lot of perspectives to bear on the problem. Each of these subfields constrains the possible to the actual in a different way depending on its functional objectives. Before I look at what psychology tells us, I’d like to develop a philosophy of information management. I described above how information is managed in the mind through instinct and thinking. Instinct imposes ingrained behaviors while thinking customizes behaviors in novel circumstances. With ingrained behaviors, the ends (essentially) justify the means, which makes the means a gestalt indivisible into explanatory parts. Explanation is irrelevant to the feedback loops that create instinct, which produce supporting feedback based on overall benefit to survival. Subconceptual thinking is also a gestalt approach that applies innate algorithms to subconcepts (big data) and uses feedback to collect useful patterns. Conceptual thinking (logical reasoning) creates the criteria it uses for feedback. A criterion is a functional entity, a “standard, ideal, rule or test by which something may be judged.” What this implies is that reasoning depends on both representation (which brings that “something” into functional existence) and entailment (so rules can be applied). 
Philosophically, reasoning can never work in a gestalt way; it requires that the pool of data be broken down into generalized elements called concepts that interact according to logical rules. Logical reasoning operates in self-contained logical models, which lets it be perfectly objective (repeatable), whereas subconceptual thinking is a subjective gestalt and hence may not be repeatable. Objective, repeatable models can build on each other endlessly, creating ever more powerful explanatory frameworks, while subjective models can’t. There may be other ways to manage information in real time beyond instinct and thinking, but I believe these are sufficient to explain minds. To summarize, functional existence arises in some complex physical systems through feedback loops to create information, which is a pattern that gives the system predictive power. The feedback loops of instinct use natural selection over millennia to create gestalt mechanisms that “work because they work” and not because we can explain how they work. The feedback loops of thinking use neural processing over seconds to minutes. Subconceptual thinking works because life is repetitive, so we have developed general-purpose skills to find certain kinds of patterns. Conceptual thinking adds more power because self-contained logical models are internally true by design and can build on each other to explain the world better.
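
The claim that information is a pattern with predictive power, created through feedback, can be made concrete with a minimal sketch. The class below is a hypothetical illustration, not a model of any real cognitive mechanism: feedback (observed outcomes) accumulates counts, and a pattern only counts as information here once it predicts its outcome with better-than-even odds.

```python
from collections import defaultdict

class FeedbackPatternLearner:
    """Minimal sketch: a pattern becomes 'information' once feedback shows
    it predicts the next event with better-than-even odds."""

    def __init__(self):
        # pattern -> outcome -> number of times that outcome followed
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, pattern, outcome):
        # The feedback loop: record how often each pattern precedes each outcome.
        self.counts[pattern][outcome] += 1

    def predict(self, pattern):
        outcomes = self.counts.get(pattern)
        if not outcomes:
            return None  # never seen: no information at all
        best = max(outcomes, key=outcomes.get)
        odds = outcomes[best] / sum(outcomes.values())
        # Only patterns that beat even odds qualify as information here.
        return best if odds > 0.5 else None
```

For instance, after observing “dark clouds” followed by rain three times and by sun once, `predict("dark clouds")` returns `"rain"` (odds 0.75), while an unseen or fifty-fifty pattern yields no prediction, matching the better-than-even-odds criterion in the text.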

I’ve made a case for the existence of functional things, which can either be holistic in the case of genetic traits and subconceptual thinking or differentiated in the case of the elements of reason. But let’s consider physical things, whose existence we take for granted. Do physical entities also have a valid claim to existence? It may be that we can only be sure our own minds exist, but our minds cling pretty strongly to the idea of a physical world. Sensory feedback, along with accurate scientific measurement and experimentation on that world, seems to establish almost certainly that it exists independent of our imagination. So we have adequate reason to grant the status of existence to physical things, but we have to keep in mind that our knowledge of the physical world is indirect and our understanding of it is mediated through thoughts and concepts. Ironically, considering it has a secondary claim to existence, physical science has made much more definitive, precise, and arguably useful claims about the world than biology and the social sciences. Even worse for the cause of the empirical functional sciences, the existence of function has (inadvertently) been discredited. Once an idea, like phlogiston or a flat earth, has been cast out of the pantheon of scientific respectability, it is very hard to bring it back. So dualism, the idea of a separate existence of mind and matter, has acquired the taint of the mystical or even the supernatural. But dualism is correct, scientific, and not at all mystical when one formulates function as a distinct kind of existence, one that becomes possible in a physical universe given enough feedback.

The laws of physical science provide reliable explanations for physical phenomena. But even though living systems obey physical laws, those laws can’t explain functionality. Brains employ very complex networks of neural connections. From a purely physical standpoint, we could never guess what they were up to no matter how perfectly we understood the neurochemistry. And yet, the mind arises as a consequence of brain activity, which is to say it is a process in the brain. The success of physical science coupled with the physical nature of the brain has led many to jump to the conclusion that the mind is physical, but the mind is just the functional aspect of the brain and not itself physical per se. Pursuing an eliminativist stance, the neurophilosopher Paul Churchland says the activities of the mind are just the “dynamical features of a massively recurrent neural network”20. From a physical perspective, this is entirely true, provided one takes the phrase “massively recurrent neural network” as a simplified representation of the brain’s overall architecture. The problem lies in the word “features,” which is an inherently non-physical concept. Features are ideas, packets, or groupings of abstract relationships about other ideas, which, as I have been saying, are the very essence of non-physical, mental existence. These features are not part of the mechanism of the neural network; they are signals or information that travel through it. This “traveling” is a consequence of complex feedback loops in the brain that capture patterns as information to guide future behavior. Any one feature can be thought about in different ways at different times by different people but will still fundamentally refer to the same feature, that is, to the functions it makes possible.

Philosophers have at times over the ages proposed categories of being other than form and function, but I contend they were misguided. Attempts to conceive categories of being all revolve around ways of capturing the existence of functional aspects of things. Aristotle’s Categories (part of the Organon) lists ten categories of being: substance, quantity, quality, relationship, place, time, posture, condition, action, and recipient of action. But these are not fundamental, and they conflate intrinsic properties with relational ones. Physically, we might say all particles have substance, place, and time (to the extent we buy into particles and spacetime), so these are inherent aspects of physical objects. All the other categories characterize aspects of aggregates of particles. But we only know particles have substance, place and time based on theories that interpret observed phenomena of them. Independent of observations, we can posit that a particle or object itself exists as a noumenon, or thing-in-itself. Any information about the object is a phenomenon, or thing-as-sensed. We have no direct knowledge of noumena; we only know them through their phenomena. Noumena, then, are what I call form, while the interpretation of phenomena is what I call function. Some aspects of noumena are internal and can never be known, while others interact with other noumena to propagate information about them, called phenomena. We can only learn about noumena by analyzing their phenomena using information management processes. We further need to break phenomena down into three aspects. A tree that falls in a forest produces a sound, being a shockwave of compressed air, that is the transmitted phenomenon. If we hear it, that is the received phenomenon. If we interpret that sound into information, that is the true phenomenon or thing-as-sensed, since sensing means “making sense of” or interpreting. 
The transmitted phenomenon and received phenomenon are actually noumena, being a shockwave or vibration of eardrums in this case, so I will reserve the word phenomenon for the interpretation or information processing itself. Interpretation is strictly functional, so all phenomena are strictly functional. Similarly, all function is strictly phenomenal in the sense that information is based on patterns and so is about them, because patterns are received phenomena. Summarizing, form and function dualism could also be called noumenon and phenomenon dualism. This implies that phenomena are not simply the observed aspects of noumena but are functional constructs in their own right and are consequently not ontologically dependent on their noumena. One could also say that all knowledge is phenomenal/functional while the objects of all knowledge are noumena/forms. Finally, I’d like to note that while form usually refers to physical form, when we make a functional entity the object of observation or description, it becomes a functional (non-physical) noumenon. For example, “justice” is an abstract concept about which we can make observations and develop descriptions. Our interpretations or understanding of it are phenomenal and functional, but justice itself (to the extent it can abstractly exist by itself) is just noumenal. While we can’t know anything for certain about noumena, a convenient way to think about them is that exhaustive, detailed phenomenal knowledge of a noumenon would effectively reveal the noumenon to us. It’s not the same, because it is descriptive and not the thing itself, but it would eliminate mysteries about the noumenon. Put another way, our knowledge of (noumenal) nature grows more accurate all the time through (phenomenal) theories. We like to think we know our own minds, but our conscious stream can only access small parts at a time, which gives a phenomenal view into our own mind, whose noumena are only partially known to us. 
In other words, we know our minds have functions, but our knowledge of them is indirect and imperfect. But we tend not to think that way because we can study our minds indefinitely until we are quite confident we have gotten sufficiently “close” to the noumena. So nouns like “apple” (about physical noumena) and “justice” (about functional noumena) both define noumenal concepts which we feel we understand well even though we can only define and describe them approximately using words. While the full noumenal nature of “apple” and “justice” varies from person to person and depends on all their experience with the concepts and assessments they have made about them, our phenomenal understanding of them at a high level intersects well enough that we can agree on basic definitions and applications in conversation.

Because information management systems physically exist, one can easily overlook that something more is happening. Let’s take a closer look at the features of functional existence. It is not that the functional exists as a separate substance in the brain as Descartes proposed, or that it is even anywhere in the brain, because only physical things have a location. Instead, any thought I might have simultaneously exists in two ways, physically and functionally. The physical form of a thought is the set of neurons thinking the thought, including the neurochemical processes they employ. While we can pinpoint certain neurons or brain areas that are more active when thinking certain thoughts, we also know that the whole brain (and all the control systems of the body) participates in forming the thought because everything is connected. Everything is connected because instinct and subconceptual thinking are gestalts that draw on all our knowledge (including concepts), and logical reasoning uses closed models based on concepts, which are in turn built on instincts and subconcepts. The functional form of a thought is the role or purpose it serves. When we reflect on this purpose logically we form a concept of it that can be the subject or the predicate of a proposition with features that relate it to all other concepts. Functionality in minds has a practical purpose (even function in mathematics must be practical for some context, but here I refer to evolutionary practicality). A thought’s purpose is to assist the brain in its overall function, which is to control the body in ways that stand to further the survival and reproduction of the gene line. “Control” broadly refers to predictive strategies that can be used to guide actions, hopefully to achieve desired outcomes more often than chance alone would. And even seemingly purposeless thoughts participate in maintaining the overall performance of the mind and can be thought of as practice thoughts if nothing else. 
For a predictive strategy to work better than chance, it needs to gather information, i.e. to correlate past situations to the current situation based on similarities. The relationship between information and the circumstances in which it can be employed is inherently indirect because information is indirect; it is necessarily about something else. So we can conclude that function acts in our physical world to control the actions of living things, or, by extension, in information management systems we create to control whatever we like.

  1. Downward Causation, Principia Cybernetic Web, 1995
  2. Downward Causation, The Information Philosopher
  3. Chemical bonding model, Wikipedia
  4. Pauling, L. (1932). “The Nature of the Chemical Bond. IV. The Energy of Single Bonds and the Relative Electronegativity of Atoms”. Journal of the American Chemical Society. 54 (9): 3570–3582. doi:10.1021/ja01348a011.
  5. Joscha Bach, From Computation to Consciousness: Can AI reveal the nature of our minds?, Ted Talk
  6. The Human Memory, Luke Mastin, 2018
  7. Why psychedelic drugs could transform how we treat depression and mental illness, A conversation with author Michael Pollan on becoming a “reluctant psychonaut.”, Sean Illing, Vox
  8. How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence, Michael Pollan, 2018
  9. I did have a few experiences with psilocybin and LSD in college and I do consider myself permanently improved by them. I have always been heavily into introspective thoughts, but I felt I was able to reach a sufficient state of epiphany that I could just take things easier going forward.
  10. A Mathematical Theory of Communication, Claude Shannon, 1948
  11. Science and Complexity, Warren Weaver, 1948
  12. Gilbert Ryle, The Concept of Mind, University of Chicago Press, 1949
  13. Ryle’s examples are more involved, e.g. that colleges and libraries comprise universities and that battalions, batteries, and squadrons comprise divisions.
  14. Conclusions of the Research on Nonconscious Information Processing, Pawel Lewicki, Psychology Department, University of Tulsa, Tulsa, OK
  15. Preconscious, Gillian Fournier, Psych Central, 2018
  16. adapted from p437-8 of The Language Instinct, Steven Pinker, 1994, Harper Collins Publishers
  17. John von Neumann, Wikiquote
  18. Dam Building: Instinct or Learned Behavior?, Feb 2, 2011, 8:27 PM by Beaver Economy Santa Fe D11
  19. Nicaraguan Sign Language
  20. Paul Churchland, Neurophilosophy at Work, Cambridge University Press, 2007, p2
