The Mind Matters: The Scientific Case for Our Existence

Scientists don’t know quite what to make of the mind. They are torn between two extremes: the physical view, that the brain is a machine and the mind is a process running in it, and the ideal view, that the mind is non-physical or trans-physical despite operating in a physical brain. The first camp, called eliminative materialism (or just eliminativism, materialism, or reductionism), holds that only the physical exists, as supported by the hard scientific theories and evidence of physics, chemistry, and biology. At the far end of the latter camp are the solipsists, who hold that only one’s own mind exists, and a larger group of idealists, who hold that one or more minds exist but the physical world does not. Most members of the second camp, however, acknowledge physical existence but think something about our mental life can never be reduced to purely physical terms, though they can’t quite put their finger on what it is. Social science and our own intuition both assume that mental existence is something beyond physical existence. A materialist will hold either that (a) there are no mental states, just brain states (eliminativism), or that (b) there are mental states, but they can be viewed as (reduced to) brain states (reductionism). An idealist will hold that mental states can’t be reduced, which means they have some kind of existence beyond the physical, and that the physical might exist, but if so, it can never be conclusively proven.

I propose to resolve all these issues by reintroducing dualism, the idea of a separate kind of existence for mind and matter, but I will do so in a way that is compatible with materialist science. The eliminativist idea that everything that exists is physical, aka physical monism, is true so long as one is referring to physical things, including matter, energy, and spacetime. But another kind of existence entirely, which I will hereafter call functional existence, is about the capabilities inherent in some systems. These can be capabilities of physical or fictional systems; their physical existence is independent of their functional existence and only relevant to it in limited ways. A capability is the power to do something, so functionality can be called a behavioral condition, which is distinct from any underlying physical mechanism that might make it possible. What the mind does, as opposed to what the brain does, is to support the appropriate function of the body; the term “mind” refers to functionality while “brain” indicates the physical mechanism. Scientific explanations themselves (even those of materialists) are entirely functional and speak to capabilities, and are not at all about the physical form an explanation might have in our brains. But the things these theories are about don’t have to be functional, and those of physics and chemistry never are.

Though all functional things must take a physical form in a physical universe such as the one we are in, this doesn’t mean function “reduces” to the physical. The quest for reductionism is misguided and has been holding science back for entirely too long. We have to get past it before we can make meaningful progress in subjects where functional existence is paramount. To do that, let’s take a closer look at where and how function arises in a physical universe. So far as we know, everything in the universe except life is devoid of function. Physics and chemistry help us predict the behavior of nonliving things with equations. Physical laws are quite effective for describing simple physical systems but quite helpless with complex physical systems, where complexity refers to chaotic, complex, or functional factors, or a combination of them. Chaos arises when small changes in starting conditions have butterfly effects that eventually change the whole system. Complex factors are those whose intricate interactions exceed the predictive range of our available models, which necessarily simplify. We see both chaos and complexity in weather patterns, and yet we have devised models that are pretty helpful at predicting them. But the weather and all other nonliving systems are not capable of controlling their own behavior; they are reactive, not proactive. Capability arises in living things because they are complex adaptive systems (CAS) that exploit positive feedback to perpetuate changes in DNA and the bodies that replicate it. I can’t prove that functionality could only arise in a physical universe through complex adaptive systems, but any system lacking cycles of both positive and negative feedback would probably never get there. Over time, a CAS creates an abstract quantity called information, which is a pattern that has predictive power over the system.
We can’t actually predict the future using information, but we can identify patterns that are more likely to happen than random chance, and this power is equivalent to an ability to foretell, just less certain.
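The butterfly effect mentioned above can be made concrete with the logistic map, a standard toy model of chaos; the map, parameters, and starting values below are my own illustrative choices, not anything from the text:

```python
def logistic_step(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    return r * x * (1 - x)

def trajectory(x0, steps=50):
    """Iterate the map from x0, returning every state visited."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1]))
    return xs

a = trajectory(0.300000)
b = trajectory(0.300001)  # starting point differs by one part in a million

# Early on the two runs agree closely...
early_gap = abs(a[5] - b[5])
# ...but the tiny difference compounds until the trajectories decouple.
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

A one-in-a-million change in starting conditions is invisible for the first few steps yet eventually transforms the whole trajectory, which is exactly why equations that describe the rule perfectly still fail to predict the system far ahead.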

Functional systems, which I will also refer to as information management systems, must be physical in a physical universe. But because the mechanisms that control them are organized around what they can do (their capability) instead of how they do it (their physical mechanism), we must speak of their functional existence in addition to their physical existence. By leveraging feedback, these systems acquire a capacity to refer to something else, to be about something other than themselves that is not directly connected to them. This indirection is the point of detachment where functional existence arises and (in a sense) leaves physical existence behind. At this point, the specific link is broken and a general “link” is established. Such an indirect link refers only to the fact that the information can be applied appropriately, not that any link is stored in any way. How a functional system uses information about something else to influence it can be implemented physically in many ways, but understanding those ways is not relevant to understanding the information or the function it makes possible. At the point the information detaches it gains existential independence; it is about something, and how it accomplishes that hardly matters. It has a physical basis, but that won’t help us explain its functional capabilities at all. While every brain and computer has different processing powers (memory, speed, I/O channels, etc.), in principle they can manage the same information in completely different ways because the measure of information is the function it makes possible, not how it is implemented.
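A loose sketch of this implementation independence: the same information held in two physically different ways that support identical function. The example and all names in it are invented for illustration:

```python
# The same information -- which of these animals are mammals -- implemented
# two different ways: as stored data and as executable comparisons.

MAMMALS = {"dog", "whale", "bat"}

def is_mammal_stored(name):
    # Looks the answer up in a data structure (one "physical" mechanism).
    return name in MAMMALS

def is_mammal_computed(name):
    # Derives the same answer procedurally (a different mechanism).
    return name == "dog" or name == "whale" or name == "bat"

# Measured by the function they make possible, the two are identical,
# even though their underlying mechanisms differ completely.
for name in ["dog", "rock", "bat", "sparrow"]:
    assert is_mammal_stored(name) == is_mammal_computed(name)
```

Nothing about the hash table or the chain of comparisons appears in a description of what either function *does*; that description is the information, and it is the same in both cases.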

In summary, I am proposing a new kind of dualism, which I call form and function dualism, which says that everything is physical, but also that information management systems create additional entities that have a functional existence. Further, functional entities can be said to exist even if no physical system is implementing them. This form of existence is hypothetical, from a physical perspective, but everything is hypothetical from a functional perspective, so that is not an impediment to their functional existence. So, in this sense, mathematics exists as abstract systems that may or may not find physical manifestations, and even functional entities that only exist because of their physical manifestations, like people, can be said to have “souls” that exist independent of their physical bodies because the essential functional capacities that make them special could potentially be realized using different physical mechanisms, even if that technology does not exist today.

Information management systems that do physically exist include:

  • biological organisms, which store information in genes using DNA,
  • minds, which store information in brains neurochemically,
  • civilizations, which store information in institutions (rule-based practices) and artifacts (e.g. books), and
  • software, which stores information in computer memory devices.

Organisms, minds, civilizations, and software can be said to have functions, and it is meaningful to discuss such functions independently of the underlying physical systems they run on. More importantly, most of the meaning of these functions is quite independent of their underlying physical systems.

Socrates and Plato recognized that function stands qualitatively apart from the material world and explained it using teleology. Teleology is the idea that in addition to a form or material cause, things also have a function or purpose, their final cause. They understood that while material causes were always present and hence necessary, they were not sufficient or final to explain why many things were the way they were. Purposes humans imposed on objects like forks were called extrinsic, while purposes inherent to objects, like an acorn’s purpose to become a tree, were called intrinsic. Aristotle listed four causes to explain change in the world, not just two: material, formal, efficient, and final, but the latter three are just aspects of function (akin to what, who, and why) and so are teleological. He used the word cause more broadly than we do today; cause, as in cause and effect, refers only to the efficient cause, “who” caused what. The formal cause refers to the lines we draw to distinguish wholes from their parts, i.e. our system of classification. To the Greeks, these lines seemed mostly intrinsic, but we see them today more as artificial constructs we impose on the world for our convenience. We have let the dominance of physicalism drive teleology from our minds as quackery akin to Lamarckism, the debunked evolutionary theory that the degree to which a giraffe manages to stretch its neck will be inherited by its offspring. After all, objects don’t sink to lower points because it is their final cause. But teleology is both intuitively and actually true, though only in functional systems, of which gravity is not an example.

Although Aristotle had put science on a firm footing by recognizing the need for teleological causes, he failed to recognize the source of purpose in natural systems. It is my contention that information management systems are the source; they accomplish purposes and functions whenever they apply information to guide their actions. The Scientific Revolution picked up four hundred years ago where Aristotle left off, but information as such would not be discovered until 1948[1], which then led into systems theory[2], also called cybernetics, in the following decades. Complex adaptive systems are complex systems that evolve, and living organisms are complex adaptive systems with autopoiesis, the ability to maintain and replicate themselves. Minds are dynamic information management systems built in animal brains that create information in real time. Civilizations and software are human-designed information management systems that depend on people or computers to run them.
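Information in the 1948 (Shannon) sense is a measurable quantity: a pattern has predictive power exactly to the extent that it reduces uncertainty. A minimal sketch, with distributions invented purely for illustration:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: the average surprise of a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair = entropy_bits([0.5, 0.5])    # 1.0 bit: no pattern, no predictive power
biased = entropy_bits([0.9, 0.1])  # ~0.47 bits: a pattern worth exploiting
fixed = entropy_bits([1.0])        # 0.0 bits: fully predictable
```

The drop from 1.0 to ~0.47 bits is the predictive power the pattern confers: a system that has captured the bias can anticipate outcomes better than chance, which is the sense of “foretelling” used above.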

Gilbert Ryle famously put the nail in the coffin of Cartesian dualism in The Concept of Mind[3]. While we know (and knew then) that the proposed mental “thinking substance” of Descartes that interacted with the brain in the pineal gland does not exist as a physical substance, Ryle felt it still had tacit if not explicit “official” support in 1949. While we know our lives metaphorically bifurcate into two, one of ‘inner’ mental happenings and ‘outer’ physical happenings, each with a distinct vocabulary, he felt we went further philosophically: “It is assumed that there are two different kinds of existence or status. What exists or happens may have the status of physical existence, or it may have the status of mental existence.” He disagreed, contending that the mind is not a “ghost in the machine”, something independent from the brain that happens to interact with it. To explain why, he introduced the term “category mistake”, a situation in which one inadvertently assumes something to be a member of a category when it is actually of a different sort of category. His examples focused on parts not being the same sort of thing as wholes, e.g. someone expecting to see a forest but being shown some trees might ask, “But where is the forest?” In this sort of example, the mistake arises from a failure to understand that forest has a different scope than tree.[4] He then contended that the way we isolate our mental existence from our physical existence was just a much larger category mistake which happens because we speak and think of the physical and the mental with two non-intersecting vocabularies and conceptual frameworks, yet we assume it makes sense to compare them with each other.
As he put it, “The belief that there is a polar opposition between Mind and Matter is the belief that they are terms of the same logical type.” Ryle advocated the eliminativist stance: if we understood neurochemistry well enough, we could describe the mechanical processes by which the mind operates instead of using words like “think” and “feel.”

Ironically, Ryle made the bigger mistake with categories than Descartes. His mistake was in thinking that the whole problem arose from a category mistake, when actually only a superficial aspect of it did. Yes, it is true, the mechanics of what happens mentally can be explained in physical terms because the brain is a physical mechanism like a clock. But that is not the whole problem, and it is not the part that interested Descartes or that interests us, because the function of what happens mentally cannot be explained in physical terms: while the brain runs the mind (akin to software), it doesn’t know its purpose. The mistake is in seeing the superficial category mistake but missing the legitimate categorization. Function is not form and can never be reduced to it, even though it can only happen in a physical system by leveraging form. When we talk about the mind, we now know and appreciate that it is the product of processes running in the brain, but that doesn’t mean those processes are the subject matter of our mental vocabulary. Those processes are a minor aspect; what we are really talking about is what we can do, i.e. how we can use information to change the future. Ryle concluded, “It is perfectly proper to say, in one logical tone of voice, that there exist minds and to say, in another logical tone of voice, that there exist bodies. But these expressions do not indicate two different types of existence, for ‘existence’ is not a generic word like ‘colored’ or ‘sexed’.” But he was wrong; the way the mind manifests physically is not a different type of existence, but the way it manifests functionally is, and that is what really matters here. This is the kind of dualism Descartes was grasping for, but he overstepped his knowledge by attempting to provide the physical explanation.
The true explanation is that functional things can have physical implementations, and they must for function to impact the physical world, but function and information are fundamentally not physical and their existence is not dependent on space or time; they are pure expressions of hypothetical relationships and possibilities.

The path of scientific progress has influenced our perspective. The scientific method, which uses observation, measurement, and experiment to validate hypotheses about the natural world, split the empirical sciences from the formal sciences like mathematics, which deal in immaterial abstractions. The empirical sciences then divided into natural sciences and social sciences because progress in the latter was only possible by making some irreducible assumptions about human nature, chiefly that we have minds and know how to use them. These assumptions implicitly acknowledge the existence of function in the life of the mind without having to spell it out as such. Darwin’s discovery of evolution then split the natural sciences into physical and biological sciences. Until that point, scientists considered living organisms to be complex machines operating under physical laws, but now they could only be explained through the general-purpose advantages of inherited traits. This shift from the specific to the general is the foundation of information and what distinguishes it from the physical. So both the biological and social sciences tacitly build on a foundation of the teleological existence of function, but they are reluctant to admit it because the exhaustive reach of material existence has come to be synonymous with the triumph of science over mysticism. But physical science alone can’t give us a complete picture of nature because function, which begins as physical processes, can acquire persistence and hence existence in the physical world through information management systems.

The social sciences presume the existence of states of mind which we understand subjectively but which objectively arise from neural activity. The idea that mental states are not entirely reducible to brain activity is called emergentism. An emergent behavior is one for which the whole is somehow more than the parts. Emergence is real, but what is actually “emerging” in information management systems is functionality. Understanding the underlying physical system can’t explain behavior because the link between them is indirect, which as noted above detaches the physical from the functional. Also, unlike digital computers, which are perfectly predictable given starting conditions, minds have chaotic and complex factors that impede prediction. We can conclude that emergence is a valid philosophical position that describes the creation of information, though it is a misleading word because it suggests the underlying physical system causes the functional system. Cause in a feedback-based system is more of a two-way street in which many interactions between before-events and after-events yield functional relationships which the underlying physical system leverages. The physical system is somewhat irrelevant to the capabilities of a functional system, which is in many ways independent of it. Ironically, the functional system can thus equally claim the physical system emerges from it, which is the claim of idealism. Language, this discussion, and everything else the mind does are functional constructs realized with the assistance of physical mechanisms, but they do not “emerge” from those mechanisms so much as from information and information management processes. A job does not emerge from a tool, but through feedback a tool can come to be designed to perform the job better.

Before life came along, the world was entirely physical; particles hit particles following natural physical and chemical rules. While we don’t know how the feedback loops necessary for life were first bootstrapped, we know conditions must have existed that allowed feedback and competition to take hold. I will discuss a scenario for this later, but the upshot is that DNA became an information store for all the chemicals of life, and it became embedded in single-celled and later multicellular organisms that could replicate themselves. According to form and function dualism, the information in DNA is a nonphysical aspect that relates to the capabilities it confers on the organism. We characterize those capabilities in biology as genetic traits that confer adaptive advantages or disadvantages relative to alternative versions (alleles) of the same traits. Chemically, a gene either codes for a protein or regulates other genes. It doesn’t matter to the feedback loops of natural selection what a given gene does chemically, just whether the organism survives. Survival or death is the way the function of a gene is measured. Chemistry is necessary for function, but survival is indifferent to chemistry. In many cases, the chemical basis of a genetic advantage seems clear, while in others the “purpose” of the gene can be hard to identify. Also, knowing the chemistry will not reveal all the kinds of circumstances in which it might help or hurt. Any model of causes and kinds of circumstances we develop will be a gross simplification of what really happens, even though it might work well most of the time. In other words, the traits we describe are only generalizations about the purpose of the gene. Its real purpose is an amalgamated sum of every selection event back to the dawn of life. The functionality is real, but with a very deep complexity that can’t be summarized without loss of information.
The functional information wrapped up in genes is a gestalt that cannot be decomposed into parts, though an approximation of that function through generalized traits works well enough in most cases. Although evolution is not directed, competition preferentially selects effective mechanisms, which is such a strong pressure that it tends to make genes very good at what they do. Mutations create opportunities for new functionality, but can also disable genes and the traits they confer when niches change. To recap, genes collect information using DNA from feedback that predicts that a competitive advantage for a forebear will yield a competitive advantage for a descendant. It is a slow way to collect information, but evolution has had plenty of time and it has proven effective.
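The feedback loop just described, differential replication slowly accumulating information, can be sketched with a toy simulation. Everything here is invented for illustration: genomes are single numbers, the fitness function is arbitrary, and no individual has any representation of the target:

```python
import random

random.seed(0)

# Each "genome" is one number in [0, 1]; fitness is higher near 1.0,
# standing in for whatever trait the environment currently rewards.
def fitness(g):
    return g

population = [random.random() for _ in range(60)]

for generation in range(120):
    # Selection: fitter genomes are proportionally more likely to replicate.
    parents = random.choices(population,
                             weights=[fitness(g) for g in population],
                             k=len(population))
    # Replication with small mutations, clamped to the valid range.
    population = [min(1.0, max(0.0, p + random.gauss(0, 0.02)))
                  for p in parents]

mean_fitness = sum(population) / len(population)
# The population mean drifts toward the optimum purely through feedback:
# selection has "stored" information about the environment in the genomes.
```

Survival-weighted replication is the only signal in the loop, yet it is enough to move the population toward what works, which is the sense in which genes collect information about their circumstances.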

Beyond the information managed by the genes that form the blueprint of the body, organisms need to manage some information in real time, and instinct is a tool they all possess to do this. The mechanisms that regulate the flow of metabolites in plants via xylem and phloem are not called instincts because this feedback is applied directly without information processing (which only happens in animals via brains). Instinct covers all behavior based on information processing that doesn’t leverage experience or reasoning. Without experience or reasoning, an instinct will work the same “hardwired” way from birth to death. Instincts arise subconsciously but sometimes present themselves consciously as urges, covering all our hardwired inclinations for things like eating, mating, emotions, and drives. Instinct also includes sensory processing, such as smell, touch, body sense, and most notably vision, which creates high-fidelity 2D images and transforms them into 3D representations that are then recognized as specific objects or types of objects.

Instincts take ages to evolve and solve only the most frequently-encountered problems, but creatures face challenges every day where novel solutions would help. A real-time system that could tailor the solution to the problem at hand would provide a tremendous competitive advantage. Two approaches evolved to do this, and each makes use of experience and reasoning. I call the first conceptual thinking, i.e. thinking with concepts, which most notably includes logical reasoning. Logical reasoning starts with premises, which are statements (predicates about subjects) taken to be true, and draws consequences from them. Subjects, predicates, and premises are concepts viewed from a logical perspective. The second approach is subconceptual thinking, which is a kitchen sink of data analysis capabilities. Unlike instincts, whose reactions are fixed, subconceptual thinking does not produce fixed responses. Subconceptual thinking includes common sense, pattern recognition, and intuition, but also includes much of our facility for math, language, music and other largely innate but not fixed, instinctive talents. Much of what we learn from experience is subconceptual in that it is not dependent on conceptualizing or logical reasoning. Conditioning, for example, with or without reinforcement, is subconceptual. Much, or even most, of the data our brains gather about the world is subconceptual and is there to help us despite the lack of a conceptual understanding. When conceptual and subconceptual thinking are done consciously we call it reasoning. Reasoning is the conscious capacity to “make sense” of things, which means to produce useful information. What we are conscious of is organizing, weighing, and otherwise assessing all the factors relevant to a situation. It doesn’t need to involve concepts or logic, and mostly it doesn’t. We lack conscious awareness of many aspects of both conceptual and subconceptual thinking, so these are said to be subconscious. 
For example, we recognize and recall things without knowing how, we can tell when sentences are properly formed, and we have hunches about the best way to do things that just come to us by intuition.

Subconceptual thinking uses subconceptual data (subconcepts) while conceptual thinking uses concepts. The most primitive subconcepts, percepts, are drawn from the senses using internal processes to create a large pool of information akin to big data in computers. Subconcepts and big data are data that is collected without knowing the data’s purpose. It is the sort of data that has been helpful in the past, so it is likely to be useful again. Over time we develop algorithms that mine subconcepts or big data to find useful patterns that lead to helpful actions, still without having a clear idea about what the data “means.” We don’t have to understand common sense, intuition or music to be talented at them. Concepts, on the other hand, are akin to structured data in computers. A concept is an idealization of a pattern found in subconcepts into a generalized element with specific associated properties. While the patterns are primarily subconceptual, a network of relationships to other concepts also forms. A concept is a pure abstraction, i.e. without a direct counterpart in the physical world, that is defined by its subconceptual properties and its relationships to other concepts. The patterns are frequently chosen so that the concept can be reliably correlated to a generalized class of entities in the physical world, but this connection is indirect and does not make the concept itself physical. Some kinds of reasoning can be done subconceptually by pattern analysis, specifically recognition, intuition, induction (weight of evidence) and abduction (finding the simplest explanation). But deduction (aka entailment or cause and effect) cannot be done subconceptually, because by construction entailment requires discrete premises, not diffuse, unstructured data. The other kinds of reasoning can also leverage concepts, but deduction specifically requires them. Also, so far as we know, deduction can’t be done subconsciously.
Logical reasoning principally means deduction, and arguably also includes logical treatments of induction and abduction, but I will use the term logical reasoning to refer specifically to our conscious conceptual thinking capacity. Note that while all reasoning, as the top level or final approval of our decision-making process, is strictly conscious, many other kinds of conceptual and subconceptual thinking happen subconsciously.
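Deduction's dependence on discrete premises can be made concrete with a small sketch: forward chaining, which repeatedly applies modus ponens (from A and "A implies B", conclude B) over explicitly named statements. The facts and rule names are hypothetical examples:

```python
# Premises must be discrete, named statements before deduction can operate
# on them; diffuse, unstructured data offers no such handles.
facts = {"it_is_raining"}
rules = [
    ({"it_is_raining"}, "ground_is_wet"),       # if raining, ground gets wet
    ({"ground_is_wet"}, "ground_is_slippery"),  # if wet, ground is slippery
]

# Forward chaining: apply modus ponens until nothing new follows.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
```

Each step consumes and produces whole, discrete statements, which is why a capacity like this requires concepts, whereas recognition and intuition can operate directly on unstructured patterns.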

Relationships that bind subconcepts and concepts together form mental models, which constitute our sense for how things work or behave. Mental models have strong subconscious support that lets them appear in our heads with little conscious effort. The subconceptual aspects of these models give them a very “real,” sensory feel to us, while the conceptual aspects that overlay them connect things at a higher level of meaning. Although subconceptual thinking supports much of what we need to do with these models (akin to an autopilot), conceptual thinking organizes things better for higher purposes, and logical reasoning can be much more effective for solving problems than our more limited conceptual and subconceptual thinking processes. While conceptual and subconceptual data analysis is quite powerful, it can’t readily solve novel problems. Logical reasoning, however, gives us an open-ended capacity to chain causes and effects in real time. As we mature we build a vast catalog of mental models to help us navigate the world. We remember the specific times they were applied, but mostly the general sense of how to use them.

The physical world lives in our minds via mental models. Our minds hold an overall model of the current state of the physical world that I call the mind’s real world. Whatever the physical world might actually be, we only know it consciously through the mind’s real world. The mind’s real world leverages the countless mental models that have helped us understand everything we have ever seen. These models don’t have to be right or mutually exclusive; whatever models give us our most accurate view of physical reality comprise our conception of it. The mind’s real world “feels” real to us, although it is purely a mental construct, because the mind is inclined to interpret its sensory connections to the physical world that way instinctively, subconceptually and conceptually. But we don’t just live in the here and now. Because the mind’s primary task (and the whole role of information and function) is to predict the future, mental models flexibly apply to a range of circumstances. We call the alternative ways things could have been or might yet be possible worlds. In principle, the mind’s real world is a single possible world, but in practice our knowledge of the physical world is imperfect, so our model of it in the past, present and future is always a constellation of possible worlds.

In summary, all behavior results from instinct, subconceptual thinking, and conceptual thinking. Our mental models combine these approaches to leverage the strengths of each. Genetic data is a first-order bearer of information that is collected and refined on an evolutionary timescale. Instincts (senses, drives, and emotions) are second-order bearers of information that process patterns in real time whose utility has been predetermined by evolution. Subconcepts are third-order bearers of information in which the exact utility of the patterns has not been predetermined by evolution, but which do tend to turn out to be valuable in general ways. Finally, concepts are fourth-order bearers of information that are fundamentally symbolic; a concept is a pure abstraction that represents a block of related information distilled from patterns in the feedback. Some subconscious thought processes (e.g. vision and language processing) manipulate concepts in customized ways without applying general-purpose logical reasoning, which can only be done consciously. Logic finds reasons, i.e. rules, that work reliably or even perfectly in mental models. The utility of logical reasoning ultimately depends on correlating models back to the real world, and for this we depend on mostly subconscious but conceptual reverse recognition mechanisms that fit our models back to reality. Recognition and reverse recognition are complex problems requiring massive parallel computation for which present-day computers are only recently developing some facility, but for us they just happen with no conscious effort. This not only lets us think about more important things, it makes our simplified, almost cartoon-like representation of the world through concepts feasible.

Our four real-time thinking talents — instinct, subconceptual thinking, conceptual thinking, and logical reasoning (this last one being a kind of conceptual thinking) — are distinct but can be very hard to cleanly separate. We know instinct influences much of our behavior, but we are quite unsure where instinct leaves off and tailored information management begins because they integrate very well. And even complex behavior, most notably mating, can be driven by instincts, so we can’t be too sure instinct isn’t behind any given action. While subconceptual and conceptual thinking can be readily separated based on the presence of concepts, it can be difficult to impossible to say at exactly what point a concept has coalesced from subconcepts. In theory, though, I believe there must be a logical and physical point at which a concept comes to exist, the moment that a set of information is referenced as a collective. This suggests that conceptual processes differ from subconceptual ones because they involve objectification of data by reference. Logical reasoning refines conceptual thinking by introducing the logical form, which abstracts logical operations from their content, making it possible to devise internally consistent logical models within which everything is necessarily true. Reasoning and especially logical reasoning can only be done consciously. Reasoning is considered a conscious activity even though some parts of reasoning, e.g. intuition, happen subconsciously, because we consciously decide whether to act. Or do we? Habitual or snap decisions are sometimes made on a “preapproved” basis where we act entirely on subconscious reasoning which we then only observe consciously. We do always have the conscious prerogative to override “automated” behavior, though it may take us some time to decide whether to do so. 
The truth is, at one level of granularity or another, all our activity is driven subconsciously by muscle memory or procedural memory, which needs some conscious approval to proceed, but that approval can be implied by circumstances, effectively taking it out of the hands of consciousness. I posit that logical reasoning of any complexity only happens consciously as well, because only consciousness is equipped to pursue a chain of reasoning. We can still reason logically when we dream or daydream, because this chaining capacity is not otherwise occupied then; the greater free association of those states can lead to more creativity, though with less rigor.
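The idea that a concept comes to exist the moment a set of information is referenced as a collective can be loosely illustrated in code. This is a toy analogy under my own assumptions, not a claim about neural implementation: subconceptual data is just a loose pool of correlated features, while a concept is a single handle that refers to the pool as one unit and can itself enter into relations with other handles.

```python
# Toy analogy (not a neural model): a "subconceptual" pool is loose data;
# a "concept" objectifies that pool by giving it one referenceable handle.

# Loose, subconceptual observations: correlated features, no collective identity.
observations = [
    {"shape": "round", "color": "red", "taste": "sweet"},
    {"shape": "round", "color": "green", "taste": "tart"},
]

class Concept:
    """A single symbol that references a collective of information."""
    def __init__(self, name, instances):
        self.name = name
        self.instances = instances      # the collective being referenced
        self.relations = {}             # links to other concepts

    def relate(self, relation, other):
        # Concepts relate to other concepts, not to the raw data beneath them.
        self.relations[relation] = other

apple = Concept("apple", observations)
fruit = Concept("fruit", [])
apple.relate("is_a", fruit)             # the handle, not the data, enters propositions

print(apple.name, "->", apple.relations["is_a"].name)   # apple -> fruit
```

The point of the sketch is that once the pool has a handle, the handle can be the subject or predicate of further statements regardless of what the underlying data looks like.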

Dam building is a complex behavior in beavers that seems like it needed to be reasoned out and taught from generation to generation, and yet “young beavers, who had never seen or built a dam before, built a similar dam to the adult beavers on their first try.”5 So it is entirely instinctive. We know language acquisition is similarly innate in humans because humans with no language will create one6. But we know that all the artifacts of civilization (except perhaps the hand axe, which may have been innate), including all its formal institutions, are primarily the products of thinking, both subconceptual and conceptual, and of the experience they created. Our experience of our own minds is both natural (i.e. instinctive) and artificial (i.e. extended by thinking and experience), but these aspects are so intertwined in our perspective that they can be difficult to impossible to distinguish in many cases. For example, we know our sense of color is innate, and yet our reactions to colors are heavily influenced by culture. Or, we sometimes have the impression we have made a logically reasoned argument when all we have done is rationalized an instinctive impulse or an intuitive hunch. But although these capacities always work together and overlap in their coverage, I believe they arise from fundamentally different kinds of cognitive processes that can be examined separately.

While nearly all animals learn from experience, demonstrating subconceptual thought, not all can think conceptually. Birds and mammals (let’s call them advanced animals for short) demonstrate problem-solving behavior, including novel planning and tool use, that indicates the conceptual use of cause and effect, which goes beyond what instinct and subconceptual thinking could achieve. Other animals do not, and I suspect all others lack even a rudimentary conceptual capacity. We only know we are conscious and that our logical reasoning is conscious from introspection, so we can’t prove it in advanced animals, but observation and shared evolution make it very likely for mammals and pretty likely for birds as well. Still, we know humans are “smarter,” but what is it that distinguishes us? It is our greater capacity for abstract logical reasoning. Abstraction is the ability to decouple information from physical referents, to think in terms of concepts and mental models in logical terms independent of physical reality. We consequently don’t need to constrain our thoughts to the here and now; we can dream in any direction. This greater facility and impetus to abstraction has coevolved with a better ability to think spatially, temporally, logically, and especially linguistically than other animals have. Loosening this tether back to reality began with small changes in our minds, but these changes opened a floodgate of increased abstraction because it provides greater adaptive power. Though we must ultimately connect generalities back to specifics, most words are generic rather than specific, meaning that language is built more on possible worlds than on the mind’s real world specifically. I call our ability to control our thoughts in any direction we choose directed abstract thinking, and I maintain animals can’t do it.
Advanced animals can logically reason, focus, imitate, wonder, remember, and dream but their behavior suggests they can’t pursue abstract chains of thoughts very far or at will. Perhaps the ecological niches into which they evolved did not present them with enough situations where directed abstract thinking would benefit them to justify the additional costs such abilities bring. But why is it useful to be able to decouple information from the world to such a degree? The greater facility a mind has for abstraction, the more creatively it can develop causal chains that can outperform instinct and subconceptual thinking.

Abstraction opens the door to an unlimited range of possibilities, but evolution has kept this facility practical. The range of what is functionally possible is the domain of philosophy, and the range of what is functionally actual (and hence practical) is the domain of psychology, the study of mental life. My explanation of the mind starts by enlarging our philosophical framework to include functional existence and moves from there to explain our psychology. Psychology spans neuropsychology, behavioral psychology, evolutionary psychology, cognitive psychology, psychoanalysis, humanistic psychology, introspection, and a portion of cognitive science, and so brings a lot of perspectives to bear on the problem. These subfields each constrain the possible to the actual in a different way depending on their functional objectives. Before I look at what psychology tells us, I’d like to develop a philosophy of information management. I described above how information is managed in the mind through instinct and thinking. Instinct imposes ingrained behaviors while thinking customizes behaviors in novel circumstances. With ingrained behaviors, the ends (essentially) justify the means, which makes the means a gestalt indivisible into explanatory parts. Explanation is irrelevant to the feedback loops that create instinct, which produce supporting feedback based on overall benefit to survival. Subconceptual thinking is also a gestalt approach that applies innate algorithms to subconcepts (big data) and uses feedback to collect useful patterns. Conceptual thinking (logical reasoning) creates the criteria it uses for feedback. A criterion is a functional entity, a “standard, ideal, rule or test by which something may be judged.” What this implies is that reasoning depends both on representation (which brings that “something” into functional existence) and on entailment (so rules can be applied).
Philosophically, reasoning can never work in a gestalt way; it requires that the pool of data be broken down into generalized elements called concepts that interact according to logical rules. Logical reasoning operates in self-contained logical models, which lets it be perfectly objective (repeatable), whereas subconceptual thinking is a subjective gestalt and hence may not be repeatable. Objective, repeatable models can build on each other endlessly, creating ever more powerful explanatory frameworks, while subjective models can’t. There may be other ways to manage information in real time beyond instinct and thinking, but I believe these are sufficient to explain minds. To summarize, functional existence arises in some complex physical systems through feedback loops to create information, which is a pattern that has predictive power over the system. The feedback loops of instinct use natural selection over millennia to create gestalt mechanisms that “work because they work” and not because we can explain how they work. The feedback loops of thinking use neural processing over seconds to minutes. Subconceptual thinking works because life is repetitive, so we have developed general-purpose skills to find certain kinds of patterns. Conceptual thinking adds more power because self-contained logical models are internally true by design and can build on each other to explain the world better.
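What “internally true by design” means can be sketched with a minimal closed logical model. This is a hypothetical illustration of my own: given a set of facts and entailment rules, every conclusion follows necessarily within the model, and deriving it is objective in the sense that rerunning the derivation always yields the same result.

```python
# A minimal closed logical model: facts plus rules, with conclusions derived
# by entailment alone. Within the model every derived fact is necessarily
# true, and the derivation is repeatable (objective) by construction.

def derive(facts, rules):
    """Forward-chain until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["rained"], "ground_wet"),      # if it rained, the ground is wet
    (["ground_wet"], "slippery"),    # if the ground is wet, it is slippery
]

print(sorted(derive(["rained"], rules)))
# ['ground_wet', 'rained', 'slippery'] -- the same answer on every run
```

Nothing outside the model can disturb the derivation; whether “rained” corresponds to reality is the separate, subconceptual problem of fitting the model back to the world.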

I’ve made a case for the existence of functional things, which can be either holistic, in the case of genetic traits and subconceptual thinking, or differentiated, in the case of the elements of reason. But let’s consider physical things, whose existence we take for granted. Do physical entities also have a valid claim to existence? It may be that we can only be sure our own minds exist, but our minds cling pretty strongly to the idea of a physical world. Sensory feedback, together with accurate scientific measurement of and experimentation on that world, makes it all but certain that it exists independent of our imagination. So we have adequate reason to grant the status of existence to physical things, but we have to keep in mind that our knowledge of the physical world is indirect and our understanding of it is mediated through thoughts and concepts. Ironically, considering it has a secondary claim to existence, physical science has made much more definitive, precise, and arguably useful claims about the world than biology and the social sciences. And even worse for the cause of the empirical functional sciences is that the existence of function has (inadvertently) been discredited. Once an idea, like phlogiston or a flat earth, has been cast out of the pantheon of scientific respectability, it is very hard to bring it back. So dualism, the idea of a separate existence of mind and matter, has acquired the taint of the mystical or even the supernatural. But dualism is correct, scientific, and not at all mystical when one formulates function as a distinct kind of existence, one that becomes possible in a physical universe given enough feedback.

The laws of physical science provide reliable explanations for physical phenomena. But even though living systems obey physical laws, those laws can’t explain functionality. Brains employ very complex networks of neural connections. From a purely physical standpoint, we could never guess what they were up to no matter how perfectly we understood the neurochemistry. And yet, the mind arises as a consequence of brain activity, which is to say it is a process in the brain. The success of physical science coupled with the physical nature of the brain has led many to jump to the conclusion that the mind is physical, but the mind is just the functional aspect of the brain and not physical itself per se. Pursuing an eliminativist stance, the neurophilosopher Paul Churchland says the activities of the mind are just the “dynamical features of a massively recurrent neural network”7. From a physical perspective, this is entirely true, provided one takes the phrase “massively recurrent neural network” as a simplified representation of the brain’s overall architecture. The problem lies in the word “features,” which is an inherently non-physical concept. Features are ideas, packets, or groupings of abstract relationships about other ideas, which, as I have been saying, are the very essence of non-physical, mental existence. These features are not part of the mechanism of the neural network; they are signals or information that travel through it. This “traveling” is a consequence of complex feedback loops in the brain that capture patterns as information to guide future behavior. Any one feature can be thought about in different ways at different times by different people yet still fundamentally refer to the same thing: the functions it makes possible.
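The claim that a feature is not part of the mechanism can be made concrete with a toy example of my own devising: two “physically” different processing units (different weights and thresholds) can realize exactly the same function. The feature, here the logical-OR behavior, is what the signal processing accomplishes, not any particular set of parameters.

```python
# Toy illustration: the same functional feature (logical OR) realized by
# physically different mechanisms. The function is identical even though
# the parameters ("the mechanism") differ completely.

def make_unit(w1, w2, threshold):
    """A trivial threshold unit: fires (1) when the weighted sum clears it."""
    return lambda a, b: int(w1 * a + w2 * b >= threshold)

unit_x = make_unit(0.6, 0.6, 0.5)   # one "physical" realization
unit_y = make_unit(2.0, 3.0, 1.5)   # a very different realization

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
same = all(unit_x(a, b) == unit_y(a, b) for a, b in inputs)
print(same)   # True: distinct mechanisms, one and the same function
```

Inspecting the weights alone would never tell you that both units embody the same feature; only their behavior, i.e. their function, reveals it.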

Some have posited additional categories of being beyond form and function, but I think any additional categories can be reduced to these two. Aristotle’s Organon enumerated all ten possible kinds of things that can be the subject or the predicate of a proposition, the first of which is substance. Substance is equivalent to what I call form. Aristotle essentially defined it as that which is not function, by saying substance cannot be “predicated of anything” or be said to “be in anything”, which are relational or functional aspects. The other nine categories (quantity, quality, relationship, where, when, posture, condition, action, and recipient of action) are functional aspects and as such are inherently indirect, not the substance itself. I would say that the location of a physical object in time and space is part of its physical existence, but how we describe it relative to other things is not. As Immanuel Kant would have put it, a physical thing is a noumenon, or thing-in-itself, while our description of it is a phenomenon, or thing-as-sensed. We have no direct knowledge of the noumena of the physical world, but we talk about them as phenomena all the time. Noumena are strictly physical and phenomena strictly functional. I see no reason for any additional categories. Quantity and quality may accurately describe traits of a noumenon, but they are still descriptions and so are functional; the noumenon itself just exists without regard to how it might be characterized for some purpose extrinsic to itself. Ironically, this means that science is entirely a functional pursuit, even though its greatest successes so far are about the physical world. “About” is the key word; science studies phenomena, not noumena. We are curious about the noumena, but we can never know them as we only see their phenomena. This is not due to limitations of measurement but to limitations of understanding.
Any noumenon can be understood through an infinite variety of phenomena, and of ways of interpreting those phenomena, that model it but are never the same as it. Such models consequently describe some features, perhaps very accurately, but miss others; and in any event, what we think of as a feature is a generalization that makes sense to us functionally but means nothing physically.

Because information management systems physically exist, one can easily overlook that something more is happening. Let’s take a closer look at the features of functional existence. It is not that the functional exists as a separate substance in the brain as Descartes proposed, or that it is even anywhere in the brain, because only physical things have a location. Instead, any thought I might have simultaneously exists in two ways, physically and functionally. The physical form of a thought is the set of neurons thinking the thought, including the neurochemical processes they employ. While we can pinpoint certain neurons or brain areas that are more active when thinking certain thoughts, we also know that the whole brain (and all the control systems of the body) participates in forming the thought because everything is connected. Everything is connected because instinct and subconceptual thinking are gestalts that draw on all our knowledge (including concepts), and logical reasoning uses closed models based on concepts, which are in turn built on instincts and subconcepts. The functional form of a thought is the role or purpose it serves. When we reflect on this purpose logically we form a concept of it that can be the subject or the predicate of a proposition with features that relate it to all other concepts. Functionality in minds has a practical purpose (even function in mathematics must be practical for some context, but here I refer to evolutionary practicality). A thought’s purpose is to assist the brain in its overall function, which is to control the body in ways that stand to further the survival and reproduction of the gene line. “Control” broadly refers to predictive strategies that can be used to guide actions, hopefully to achieve desired outcomes more often than chance alone would. And even seemingly purposeless thoughts participate in maintaining the overall performance of the mind and can be thought of as practice thoughts if nothing else.
For a predictive strategy to work better than chance, it needs to gather information, i.e. to correlate past situations to the current situation based on similarities. The relationship between information and the circumstances in which it can be employed is inherently indirect because information is indirect; it is necessarily about something else. So we can conclude that function acts in our physical world to control the actions of living things, or, by extension, in information management systems we create to control whatever we like.
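The minimal predictive strategy described here, correlating past situations to the current one by similarity, can be sketched in a few lines. This is a hypothetical nearest-neighbor heuristic of my own, chosen only to make the idea concrete, not a model of how brains actually implement it.

```python
# Sketch of a predictive strategy that beats chance by correlating past
# situations to the current one: find the most similar precedent and
# predict the outcome that followed it (a nearest-neighbor heuristic).

def similarity(a, b):
    """Count the features two situations share."""
    return sum(1 for k in a if k in b and a[k] == b[k])

def predict(current, past):
    """past: list of (situation, outcome); return the best match's outcome."""
    situation, outcome = max(past, key=lambda p: similarity(current, p[0]))
    return outcome

past = [
    ({"sky": "dark", "wind": "high"}, "storm"),
    ({"sky": "clear", "wind": "low"}, "calm"),
]

print(predict({"sky": "dark", "wind": "high"}, past))   # storm
```

The information in `past` is inherently indirect, as the text says: it is not the coming storm, it is *about* situations that preceded storms, and it only pays off when a new situation resembles an old one.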

  1. Claude Shannon, “A Mathematical Theory of Communication”, 1948
  2. Warren Weaver, “Science and Complexity”, 1948
  3. Gilbert Ryle, The Concept of Mind, University of Chicago Press, 1949
  4. Ryle’s examples are more involved, e.g. that colleges and libraries comprise universities and that battalions, batteries and squadrons comprise divisions.
  5. “Dam Building: Instinct or Learned Behavior?”, Beaver Economy Santa Fe D11, Feb 2, 2011
  6. Nicaraguan Sign Language
  7. Paul Churchland, Neurophilosophy at Work, Cambridge University Press, 2007, p. 2
