The Mind Matters: The Scientific Case for Our Existence

The mind occupies a very tenuous position scientifically. Probably no two scientists would agree on precisely what kind of entity the mind even is. At one extreme are the eliminative materialists (or eliminativists), who believe that our states of mind, however convincing they feel, are really just aggregates of neurochemical phenomena with no objective reality in their own right. At the other extreme are the cognitive idealists, who believe that our mental states are real but immaterial, that is, that mental states are ontologically distinct from the physical (have a separate basis for existence) and are not reducible to it. If science had to take an official stance, it would favor the former, noting the predominance and relative certainty of the hard sciences, which have successively eroded notions of human preeminence, such as our place in the universe and our divine creation. Probably most would say that the clear role of the brain as the mechanism settles the matter, and all that remains is to work out the details, at which point we will be able to relegate the mind to the dustbin of history along with the flat earth, geocentrism, and phlogiston. Just as chemistry was shown by Linus Pauling [1] to be reducible to physics but is still worth studying as a special science, so, too, do social scientists tacitly accept that the mind is probably reducible to mechanisms but can still be useful to study as if aggregate conceptions of it meant something. The whole point of this book is to prove that this analogy is flawed and to establish a nonphysical yet scientific basis for the existence of the mind as we know it. Dualism, the idea that mind and matter exist separately, has long had a taint of the mystical or even the supernatural. But dualism, at least when formulated correctly, is correct, scientific, and not at all mystical. I did not originate the basis for dualism I will present, but I have refined it substantially so it can stand as a stronger foundation for cognitive science, and for science overall.

Any contemplation of existence starts with solipsism, the philosophical idea that only one’s own mind is sure to exist. As Descartes put it, “I think, therefore I am.” The next candidates for existence are the contents of our thoughts, which invariably include the raft of persistent objects that we think of as constituting the physical world. In fact, objects persist so predictably that a whole canon of materialist scholarship, physical science, has developed. The rules of physical science hold that everything physical must follow physical laws. While these laws cannot be proven, their comprehensiveness and reliability suggest that we should trust them until proven otherwise. Applying physical science to our own minds has revealed that the mind is a process of the brain. That the brain is physical seems to imply that the mind is physical, too, which in turn suggests that any nonphysical sense in which we think of thoughts is an illusion: it is just our mind looking at itself and interpreting physical processes as something more than they are. In the words of the neurophilosopher Paul Churchland, the activities of the mind are just the “dynamical features of a massively recurrent neural network” [2]. From a physical perspective, this is entirely true, provided one accepts the phrase “massively recurrent neural network” as a gross simplification of the brain’s overall architecture. The problem lies in the word “dynamical,” which takes for granted (incorrectly, as we will see) that all change in a physical world can be understood using a purely physical paradigm. However, because some physical systems use complex feedback loops to capture and then use information, which is not itself physical, we need a paradigm that can explain processes that use it. Information is the basic unit of function, which is an entirely separate kind of existence. These two kinds of existence create a complete ontology (philosophy of existence), which I call form and function dualism. While philosophers sometimes list a wider variety of categories of being (e.g. properties, events), I believe these additional categories reduce to either form or function and no further.

Because physical systems that use information (which I generically call information management systems) have an entirely physical mechanism, one can easily overlook that something more is happening. Let’s take a closer look at the features of functional existence. Functional existence is not a separate substance in the brain as Descartes proposed. It is not even anywhere in the brain, because only physical things have a location. Any given thought I might have simultaneously exists in two ways, physically and functionally. Its physical form is arguably a subset of my neurons and what they are up to, and perhaps also involves to some degree my whole brain or my whole body. We can’t yet delineate the physical boundaries of a thought, but we know it has a physical form. The thought also has a function, which is the role or purpose it serves. The thought’s purpose is to assist the brain in its overall function, which is to control the body in ways that stand to further the survival and reproduction of the gene line. “Control” broadly refers to predictive strategies that can be used to guide actions, hopefully achieving desired outcomes more often than chance alone would. And even seemingly purposeless thoughts participate in maintaining the overall performance of the mind and can be thought of as practice thoughts if nothing else. For a predictive strategy to work better than chance, it needs to gather information, i.e. to correlate similar past situations with the current situation. The relationship between information and the circumstances in which it can be employed is inherently indirect, and abstractly so. While we establish an indirect reference whenever we use information, until it is used the information is strictly nonphysical, though it can be stored physically. By this, I mean that the information characterizes potential relationships through as-yet unestablished indirect references, so its “substance” is essentially a web of connections untethered to the physical world, even though this web has been recorded in a physical way. Consequently, no theory of physical form can characterize this nonphysical function, which is why function must be viewed as a distinct kind of existence. The two kinds of existence never even hint at each other. Form doesn’t hint at purpose; only thinking about function does. The universe runs, some would say runs down, without any concept of purpose. And function doesn’t mandate form; many forms could perform the same function.

While form does not need function for its physical existence, function needs a form, an information management system, to exist physically. This raises the obvious question: if we were to understand the physical mechanism of the mental world, would that explain the mind? First, “understand” and “explain” are matters of function, not form, so one must presumably be willing to accept this use of function to explain itself. Second, given point one, we must also accept that there are always many functional ways to explain anything, each forming a perspective that covers different aspects (or the same aspects in different ways). In the case of the brain, I would distinguish at least three primary aspects: a mechanical understanding of neural circuitry to explain the form, an evolutionary understanding to explain the long-term (gene line) control function, and an experiential understanding to explain the short-term (single lifetime) control function. Physically, neural circuitry and biochemistry underlie all brain activity. Functionally, information management in the brain includes both instinctive inclinations encoded in DNA and lessons from experience encoded in memory. Both instinct and experience use feedback to achieve purpose-oriented designs, though evolution has no designer and the degree to which we design our behavior will be a subject of future discussion. These three kinds of understanding are the subjects of neurochemistry, evolutionary biology, and psychology, respectively.

Neurochemistry and evolutionary biology will not be my primary angle of attack, even though they are fundamental and must support any conclusions I might reach. The reason is that the third category, psychology, encompassing our knowledge from a lifetime of experience and the whole veneer of civilization, has a greater relevance to us than the underlying mechanisms that make it possible, and, more significantly, is “under our control” in ways our wiring and instincts are not. We use the word artificial to distinguish the aspects and products of mind we can control, and human minds appear to be able to control a whole lot more than other animal minds. To attack this memory-based, experiential side of the mind, I will first draw what support I can from neurochemistry and evolutionary biology and from there propose new theory drawing on introspection, common knowledge, psychology, and computer science. In other words, I am going to speculate, but within a framework as objectively defined as any I believe has yet been attempted for this subject. To keep my presentation from getting bogged down in details, I will just state my positions at first, whether they are based on new theory or old, and I will gradually develop supporting arguments and sources as I go on.

To cut right to the chase, what stands out about the human mind relative to all others is its greater capacity for abstract thought (note that this is new theory). People use their minds to control themselves, but they can also freely associate seemingly any ideas in their heads with any others, ad infinitum, to consider potentially any situation from any angle. Of course, whether and how this association is really free is a matter for further discussion, but, practically speaking, we are flexible with our generalizations and so can find connections between any two things or concepts. This lets us leverage our knowledge in vastly more ways than if we only applied it narrowly, as is more generally the case with other animals. Language is taken by some to be the source and hallmark of human intelligence, and it did coevolve with the evolutionary expansion of abstract thought. Visual and temporal thinking also expanded, but linguistic, spatial (visual), and event-based (temporal) thinking could not have evolved further without abstract thinking, which made them worthwhile and so can be thought of as the driving force. The combination of greater abstract thought with the linguistic, spatial, and temporal gifts that employ it collectively constitutes the human mental edge over other species.

Abstraction is the essence of what it means to be human (above and beyond being an animal). It is the light bulb that lights up in our heads to show that we “get it”. If you interviewed a robot that appeared to be able to make abstract connective leaps, you would have to grudgingly grant it a measure of intelligence. But if it could not, then no matter how clever it was at other things, you would still think of it as a dumb automaton. Abstraction is hardly a mandated or obvious development of evolution, so we could study neurochemistry and evolutionary biology for some time without ever suspecting it happened, as indeed it didn’t in other animals. But we have a strong, vested interest in understanding human minds in particular, so we have to look for differences where we can and try to explain them. A remarkable thing about abstraction is that it gives us unlimited cognitive reach: we can understand anything, and no idea is beyond us. Our unlimited capacity to abstract gives us the ability to transcend our neurochemical and evolutionary limitations, at least in some ways. We can use mechanical means, e.g. books or computers, to extend our memory and processing power. We can ignore evolutionary mandates enforced by emotions and instincts and choose any path we like using abstract reasoning, a freedom evolution granted us with the “expectation” that we will think responsibly to meet our evolutionary goals and not thwart them. Note that if we destroy the planet and ourselves in the process, this will reveal the drawback of that expectation.

There is a school of thought, new mysterianism, that holds that some problems are beyond human ability to solve and therefore to understand, possibly because of our biological limitations. This position was proposed to justify the insolubility of the so-called “hard problem of consciousness”, which is the problem of explaining why we have qualia (sensory experiences). The problem is considered hard because no apparent physical mechanism can explain them. I hold that it is nonsensical to suggest that any problem is beyond our ability to understand and solve, because understanding comes from without, not from within. That is, understanding is a perspective outside that which is being understood. It consists of generalizations, which are essentially approximations with some predictive power. One can always generalize about anything, no matter how ineffable or complex, so one can always understand it, at least to a certain degree. “Perfect” understanding is always impossible because of the approximate nature of generalizations. In functional terms, to generalize means to extract useful patterns from different perspectives or observations, i.e. patterns that can serve different purposes. For example, we observe the universe, and although we cannot see the underlying mechanism, we have generalized many laws of nature to describe it, explain it, and solve any number of problems. So solutions to problems are just explanatory perspectives. If the point the new mysterians are making is that we can’t prove the true nature of the universe, then that point has to be granted, because laws of nature can’t be proven (and perfect explanations aren’t possible anyway because they are approximate). But a theory that explains, say, all relevant observations about qualia can be formulated, and I will present one later on.

But what about problems that are just too complex, with too many parts, for humans to wrap their heads around? By generalizing more we can always break complex problems down into simple ones, so we can definitely understand and solve them eventually. But I have to concede that some problems may be beyond our practical grasp because it would take us too long to properly understand all the parts and how they fit together. A classic example of this issue is four-dimensional (or higher) vision. We take our ability to visualize in 3D for granted, and yet we draw on highly customized subconscious hardware to do it. Unless we somehow find a way to add 4D hardware to our brains, we will never have an intuitive feel for higher-dimensional thinking. We can always project slices down to 2D or 3D, and so in principle we can eventually solve problems in higher dimensions, but our lack of intuition is a serious practical hindrance. Another example is our “one-track mind”, which makes it hard for us to conceive of complex (e.g. biological) processes that happen in parallel. We instead track them separately and try to factor in knock-on effects, but this is a crude approximation. We have to accept that our practical reach has many constraints which can obscure deeper understandings from us.

Let me return from my digression into human-specific capacities to the existential nature of function. I initially said that information is the basic unit of function, and I defined information in terms of its ability to correlate similar past situations with the current situation to make an informed prediction. This strategy hinges on the likelihood that similar kinds of things will happen repeatedly. At a subatomic level the universe presumably never exactly repeats itself, but we have discovered consistent laws of nature that are highly repeatable, even for the macroscopic objects we typically interact with. Lifeforms, as DNA-based information management systems, bank on the repeatable value of genetic traits when they use feedback to reward adaptive traits over nonadaptive ones. Hence DNA uses information to predict the future. Further adaptation will always be necessary, but some of what has been learned before (via genetic encoding) remains useful indefinitely, allowing lifeforms to expand their genetic repertoire over billions of years. Some of this information is provided directly to the mind through instincts. For example, the value of wanting to eat or to have kids has been demonstrated through countless generations. In the short term, however, that is, over a single organism’s lifetime, instincts alone don’t and can’t capture enough detailed information to provide the level of decision support animals need. For example, food sources and threats vary too much to ingrain them entirely as instincts. To meet this challenge, animals learn, and to learn an animal must be able to store experiential information as memory that it can access as needed over its lifetime. In principle, minds continually learn throughout life, always assessing effects and associating them with their causes. In practice, the minds of all animals undergo rapid learning phases during youth followed by the confident application of lessons learned during adulthood. Adults continue to learn, but acting quickly is generally more valuable to adults than learning new tricks, so stubbornness overshadows flexibility. I have defined function and information as aids to prediction. This capacity of function to help us underlies its meaning and distinguishes it from form, even though we use mechanisms (form) to perform functions and store information. Form and function are distinct: the ability to predict has no physical aspect, and particles, molecules, and objects have no predictive aspect.
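
To make this notion of prediction concrete, here is a minimal sketch in Python. All names and data are hypothetical illustrations, not a model of any actual neural mechanism: a store of past situations is correlated with the current one, and the outcome of the most similar past situation becomes the prediction.

```python
# Minimal sketch of "information as prediction": hypothetical feature tuples
# stand in for situations; we predict the current outcome from the most
# similar past situation (a 1-nearest-neighbor strategy).

def similarity(a, b):
    """Count matching features; a real system would weight them."""
    return sum(1 for x, y in zip(a, b) if x == y)

def predict(memory, current):
    """memory: list of (situation, outcome) pairs from past experience."""
    best_situation, best_outcome = max(memory, key=lambda m: similarity(m[0], current))
    return best_outcome

# Hypothetical foraging memory: (season, location, time_of_day) -> found food?
memory = [
    (("spring", "riverbank", "morning"), True),
    (("winter", "riverbank", "morning"), False),
    (("spring", "forest", "evening"), True),
]
print(predict(memory, ("spring", "riverbank", "evening")))  # -> True
```

The point of the sketch is only that stored correlations beat chance: the prediction is better than a coin flip precisely because similar situations tend to recur.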

With that established we can get to the most interesting aspect of the mind, which is this: brains manage information in two ways, but only one of those ways is responsible for the existence of minds. I call these two kinds of information logical and associative, and minds exist because of logical information. I will get to why further down, but first let’s look at these two kinds of information. Logical information invokes formalizations and logic, while associative information gleans patterns. The simplest kind of formalization is the indirect reference, which is the ability to reference something using a name, reference, or container instead of the value itself. In the brain, I will assume the existence of a low-level (i.e. subconscious) capacity to create such containers, which I will henceforth call concepts. We often use words to label concepts, though most are unnamed. Also, words come to refer to a host of concepts and not just one, including what we think of as their distinct dictionary entries, connotations, and a constantly evolving set of variations specific to our experience. We manage associative information entirely through subconscious processing because it is done in parallel. We don’t consciously think in parallel; we consciously follow a single train of thought, but subconsciously all our neurons are working for us all the time behind the scenes, and any insights they glean from a variety of algorithms that survey the available data can be called associative information. It is a lot like inductive reasoning because it is based on a preponderance of the evidence, but it differs in that it is subconscious instead of conscious, and while it may leverage or produce generalizations/concepts, it doesn’t have to. I collectively label associative information that bubbles up to conscious awareness as intuition. Intuition, an immediate or instinctive understanding without conscious reasoning, derives from our senses, instincts, emotions, and memory. The classic example of associative information is recognition, in which an input image is simultaneously compared against everything the brain has ever seen and the best match just “pops” out consciously as identified. This also helps us connect ideas to similar ideas we have had before, which gives us our ready understanding of how things work. Brain processes are either logical or associative, but they can leverage each other. Associative information supports the formalization of concepts, and concepts, which are formed and extended by logical thinking, are often searched associatively once committed to memory.
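
The contrast can be illustrated with a toy sketch (hypothetical names and data; a deliberate oversimplification of both processes): a concept acts as an indirect reference that logical rules can manipulate by name, while associative recognition compares an input against stored exemplars in parallel and lets the best match pop out.

```python
# Logical information: a concept is a container (an indirect reference).
# A rule operates on the label "red" without touching any red thing itself.
concepts = {"red": {"fire truck", "stop sign", "ripe apple"}}
rule = lambda concept: f"if {concept}, then attend"
print(rule("red"))  # -> "if red, then attend"

# Associative information: match a new input against stored exemplars
# and let the best match "pop out", recognition-style.
def recognize(input_features, memory):
    """Return the remembered item whose features overlap the input most."""
    return max(memory, key=lambda item: len(input_features & memory[item]))

memory = {"apple": {"red", "round", "stem"}, "ball": {"red", "round", "bounces"}}
print(recognize({"red", "round", "stem"}, memory))  # -> "apple"
```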

Concepts are at the core of logical thought. Concepts are generalized, indirect references that can be applied to similar situations. “Baseball”, “three” and “red” can refer to any number of physical phenomena but are not physical themselves. Even when they are applied to specific physical things, they are still only references and not the things themselves. Churchland would say these mental terms are part of a folk psychology that makes sense to us subjectively but has no place in the real world, which runs on the calculations that flow through the brain and does not care about our high-level “molar” interpretation of them as “ideas”. Really, though, the mysterious, folksy, molar property he can’t quite put his finger on is function, and it can’t be ignored or reduced. The atomic embodiment of function in our logical thinking is the concept, and in our associative thinking it is intuition. The way that concepts and intuition guide our actions results from neurochemical mechanisms, but the mechanisms themselves can’t tell us why we chose an action; for that, we have to look at the information that was used from a functional perspective. Some eliminativists might say that using functional means to explain the physical is still physical, because function is an advanced aspect of complex physical systems, but this is a cheat, because it isn’t: specific events are physical, but generalizations to functionality are not.

But why do minds only exist because of logical information? In this usage I am distinguishing the mind from the totality of everything the brain does. Computers do a lot too, but they don’t have minds. We reserve the word “mind” for a certain subset of what the brain does, specifically our first-person capacity for awareness, attention, perception, feeling, thinking, will, and reason. We currently lack a workable scientific explanation for why we seem to be agents experiencing these mental states. Consequently, we can’t define them except, irreducibly, in terms of each other. I have an explanation, one I will argue meets the standard of science, which I will now summarize. Our agency ultimately follows from the fact that concepts are generalizations, and generalizations are fictional, which literally means they are only figments of our imagination. These figments create worlds independent of the physical world, if by world we mean a functional realm or domain composed of concepts (generalizations) defined in terms of other concepts. Causes and effects can be tracked in functional domains the same way they can in the physical world, but this tracking happens relative to the perspective of the functional domain, which is to say according to logical and associative operations. I am going to argue that this relative perspective creates the sense of consciousness as we know it specifically because logic must follow a single path or “train of thought”. Let’s look closer at the ways (i.e. the functional domains) that association and logic are used in the brain.

We don’t know all that much about associative functional domains because our brains perform associative operations subconsciously. Loosely speaking, we collect from a set of experiences a body of knowledge that groups similar experiences, and using this we can then decide whether or not a new example belongs to the set. This skill has been artificially replicated using machine learning algorithms that are “trained” on many examples, each correlated with its identifying characteristics. This skill most obviously allows us to recognize things from memory, but it also lets us recall the correct word for a situation, it tells us what to focus our attention on, and it inspires us to feel calm or scared in soothing or dangerous situations. We don’t have to reason these things out; we know them intuitively from expert associative functional domains.
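
As a hedged illustration of such training, here is a minimal perceptron-style learner in pure Python (the features and data are invented for the example; real machine learning systems are vastly more elaborate). Feedback from labeled examples nudges weights until the membership judgments match experience.

```python
# A minimal perceptron sketch of learned set membership: weights are nudged
# by feedback from labeled examples until the classifier agrees with them.

def train(examples, epochs=20, lr=0.1):
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for features, label in examples:          # label: 1 = in set, 0 = not
            pred = 1 if sum(wi * f for wi, f in zip(w, features)) + b > 0 else 0
            err = label - pred                    # feedback signal
            w = [wi + lr * err * f for wi, f in zip(w, features)]
            b += lr * err
    return w, b

# Hypothetical features (has_wings, has_fur) -> belongs to the "bird" set?
examples = [((1, 0), 1), ((0, 1), 0), ((1, 0), 1), ((0, 0), 0)]
w, b = train(examples)
new = (1, 0)
print(1 if sum(wi * f for wi, f in zip(w, new)) + b > 0 else 0)  # -> 1
```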

While we also don’t know that much about logical functional domains, because much of what they do is likewise done subconsciously, we do have considerable conscious awareness of them and so can study them somewhat directly through introspection. Doing this, we can see that (loosely speaking, again) we establish rules of cause and effect linking our concepts together. Causality is just a way of describing the predictive power of logical, but not associative, information. Associative information is predictive, but it can’t be said to be causative because, while it can indicate likely outcomes, it is indifferent to whether a causative mechanism exists. It can advise our decisions by suggesting behaviors that have worked well in the past in similar circumstances, which is exactly how our subconscious behaviors work. Nearly everything our brains do follows this “mindless” but effective automated approach. The problem is that it doesn’t work very well in novel situations, and there is at least some novelty in nearly all our top-level interactions. This is where logic and reasoning come in. If we had rules that could tell us what was going to happen with certainty (or significant accuracy) by representing the situation in mechanical terms, then that would not only be extremely “predictive” but also causative. For example, if I let go of a book, gravity will cause it to fall to the ground. This conclusion is entirely independent of the associative knowledge I acquired from the thousands or millions of times I’ve seen things fall; it is based solely on the logical model in my head that describes how objects are pulled down to the ground by gravity. The true mechanism of the universe, which is unknowable anyway, doesn’t matter. Whether gravity is a force or results from the shape of spacetime, if we presume it is causative it can do much more than intuition alone. The catch is that rules don’t operate on raw sensory inputs; they need to operate on nicely packaged pieces of information. In the brain these logical pieces are concepts. If I can generalize a hand and a book from my sensory data, I can map my gravity rule onto the book. I can then identify a cause (letting go) with an effect (the book falling). That the effect follows the cause more often than it occurs without the cause demonstrates that the cause is at least partially responsible for the effect and justifies labeling them cause and effect.

What this tells us is that there is more to abstraction than just forming concepts from generalizations. We form networks of interrelated concepts and rules about them called systems of abstract thought. Much of this book will focus on figuring out how these systems work in our minds, but we can also devise systems that work outside our own minds, which are easier to study. Language is, of course, the primary example of such a system. It partially standardizes a number of concepts and rules and can be used or extended to characterize any new concepts or rules that come along. A fully standardized, i.e. logical, abstract system is called a formal system. Formal systems, which often employ formal languages, can achieve much greater logical precision than natural language [3][4].
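
The book-and-gravity example can be sketched in code (illustrative names only; no claim about how the brain actually implements rules). The key point is that the rule operates on a concept, not on raw sensory data, so it applies even to instances never encountered before.

```python
# Sketch of logical/causal reasoning: a rule operates on concepts rather
# than raw sense data, so it generalizes to novel instances immediately.

from dataclasses import dataclass

@dataclass
class Thing:            # a concept instantiated from perception
    name: str
    supported: bool     # is something holding it up?

def gravity_rule(thing):
    """Cause: losing support. Effect: falling."""
    return f"{thing.name} falls" if not thing.supported else f"{thing.name} stays put"

book = Thing("book", supported=True)   # held in the hand
print(gravity_rule(book))              # -> "book stays put"
book.supported = False                 # cause: letting go
print(gravity_rule(book))              # -> "book falls", even for a book
                                       #    never seen falling before
```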

Systems of abstract thought employ conceptual models, which are sets of rules and concepts that work together to characterize some topic to be explained (the subject of prediction). Any system of abstract thought will use a variety of conceptual models to cover different topics, and then use additional associative or logical means to select and arbitrate between conceptual models for each application. Because abstract thought systems can use associative strategies to manage conceptual models, and because concepts themselves depend on associative support for their meaning, these systems are not as strictly logical as formal systems but instead include both associative and logical aspects. In fact, generally speaking, the mind fluidly uses both associative and logical information, each according to its strengths, and so is often not aware of their distinction. But they are very different kinds of information, and we do recognize the difference on reflection.
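
A toy sketch of this arbitration (the models and scoring below are hypothetical placeholders): an associative step rates each conceptual model by how well its coverage overlaps the situation at hand, and the winning model supplies the prediction.

```python
# Sketch of arbitration between conceptual models: an associative scorer
# picks the model whose coverage best fits the observed situation.

models = {
    "gravity":  {"covers": {"falling", "weight"}, "predict": "it will drop"},
    "buoyancy": {"covers": {"floating", "water"}, "predict": "it will float"},
}

def select_model(situation_features):
    """Associative step: rate each model by overlap with the situation."""
    return max(models, key=lambda m: len(models[m]["covers"] & situation_features))

situation = {"water", "floating", "cork"}
chosen = select_model(situation)
print(chosen, "->", models[chosen]["predict"])  # -> buoyancy -> it will float
```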

“Conceptual model” is an abstract term that describes networks of concepts independently of the ways such models work in actual minds, or in our minds specifically. A conceptual model presented through language (natural or formal) doesn’t depend directly on the structure of the mind to be expressed, and so much potential complexity has been abstracted away. A conceptual model as it is held in our minds is called a mental model. Like any conceptual model, a mental model represents just one logical way of explaining its underlying topic(s). As noted in Wikipedia:

Mental models are based on a small set of fundamental assumptions (axioms), which distinguish them from other proposed representations in the psychology of reasoning (Byrne and Johnson-Laird, 2009). Each mental model represents a possibility. A mental model represents one possibility, capturing what is common to all the different ways in which the possibility may occur (Johnson-Laird and Byrne, 2002). Mental models are iconic, i.e., each part of a model corresponds to each part of what it represents (Johnson-Laird, 2006). Mental models are based on a principle of truth: they typically represent only those situations that are possible, and each model of a possibility represents only what is true in that possibility according to the proposition. However, mental models can represent what is false, temporarily assumed to be true, for example, in the case of counterfactual conditionals and counterfactual thinking (Byrne, 2005).

The article says, simplistically as it will turn out, that “People infer that a conclusion is valid if it holds in all the possibilities.” If the way we used mental models were as strictly logical as mental models themselves, then this would hold up, but the systems of abstract thought that we use to reason include associative knowledge and processing, which is not itself logical. So you could say that we use intuitive approaches to rate our mental models, lending an appropriate level of credence to each. But it is more than that; we also use intuitive powers to align mental models with observed circumstances, this time using our power of recognition to match models instead of objects (or other concepts). We can just feel when a model is a good fit, but this is not logical (rational); it is based on experience and the weight of the evidence our subconscious has considered for us. We will, of course, supplement associative knowledge with logical scrutiny when it seems appropriate, focusing our attention on details to make sure the models and concepts we are using make logical sense. But this logical oversight, and all of logical reasoning and consciousness for that matter, is analogous to a CEO who indirectly oversees thousands of employees gathering information and doing things down below. The CEO has just one train of thought, while all those workers… well, they are not quite analogous to people because they are just associative, processing information without dwelling on the implications. To summarize, mental models are possible worlds, but it takes more than logic to match possible worlds to actual worlds and to evaluate their worth.

Now we can see why logic creates agency. Logic is all about dwelling on the implications. Consciousness and its attendant traits — awareness, attention, perception, feeling, thinking, will and reason — are all designed to efficiently reduce the deluge of information flowing from our senses to our brains into a single train of thought, a train that is single because logic can only draw one conclusion from premises at a time. The entity I think of as me is just a logical process in my brain doing what it has to do. If you think about it, it can’t help but create the traits of consciousness; they need to be that way to work. But why do they feel the way they do? This, too, is not a miracle; it is just a bias. We think of our sensations as miraculous because we can distinguish them from each other so readily, and because each feels so appropriate to its task, but, then again, they have to feel different and appropriate. Their appropriateness is a simple consequence of their being highly integrated with the knowledge most relevant to them. Heat feels comfortable in moderation and painful when there is too much or too little. What else would we expect? Red seems bright and surprising while green is calm and soothing; this helps us distinguish fruits from leaves. Emotions are hardwired reactions that, big surprise, inspire us to react in exactly the ways that are most likely to be helpful. In short, our subjective feelings feel the way they do as a shorthand for the information they represent. A feeling is the feel of information that has been packaged up by associative subconscious systems into easily digestible form for conscious consideration. Our exact feel of qualia evolved over billions of years, so our subjective experience is not a blank slate that is different for each of us but is extremely similar in everyone. Furthermore, since the feel of qualia is an expression of their function, similar qualia must feel similar in different species, with appropriate differences due to variations in function. It is ironic that qualia (especially emotions) are not themselves logical and yet exist to support logic and consequently consciousness, which need complex inputs simplified down to a single stream for logical processing.

Finally, let’s circle back to see whether we really need a functional view or whether a physical view might suffice. Arguably, we can understand life processes using only physical (i.e. biochemical) and not functional (i.e. evolutionary) analysis. Biochemistry explains all the processes of metabolism and reproduction, including DNA’s mechanisms. We can consequently say that everything that has happened since life began was just a sequence of physical events, without invoking functional notions like evolution. On this view, the reason some traits and species survived while others perished is superficially nothing more than that the individuals survived, which in turn can be attributed to their mechanisms working well. Even adaptive change can be explained mechanically: the physical system is set up so that feedback selects mechanisms that will be more likely to be adaptive in future situations. One doesn’t have to jump to any conclusions or generalizations about “traits” these mechanisms “carry”. We don’t need a functional perspective until and unless we want to leverage logic, either as a way of explaining the design or of doing logical design ourselves. Logic, the hallmark power of consciousness, brings capabilities to the table that a purely physical understanding of functional phenomena lacks. Logic lets us explain function with function, and in so doing lets us draw confident conclusions about things that will happen that a physical explanation couldn’t even hint at. Understanding the biochemistry gives us no idea of how well a mechanism will do in the future, because physical perspectives don’t predict the future. When we look at biology from an evolutionary perspective, suddenly we see purposes for every trait, purposes which in turn can be said to have at least partially caused the traits to evolve. We develop an explanation that carries much more predictive power than a biochemical explanation alone.

The above argument applies to the mind as well, since it is one of the life processes under evolutionary management. But the mind is much more than that, because it does not just leverage information stored in DNA; it leverages information collected from cultural and personal experience, and these kinds of information are arguably of greater interest to us than those we acquire innately. Neurochemistry studies the mechanics of mental functionality at the “hardware” (physical) level, and DNA encodes the subset of mental functionality that is purely biological, which evolutionary biology studies. While highly dependent on this physical form, cognitive function introduces a whole new plane of existence which, I have argued, is not even hinted at by it. My goal is to explain the cognitive function of the brain, also called our mental life or the mind. I have argued that cognitive function in humans qualitatively exceeds that of other animals because we evolved greater powers of abstraction simultaneously with language. Psychology is the field directly charged with studying mental life. Psychology and the other social sciences have had to take our mental existence as a prerequisite to do further work, but my goal here is to break that existence down into its component parts. But how can we objectively study something we can’t see or measure with instruments, but can only examine through introspection and observation of behavior? As I have said, neurochemistry and evolutionary biology are our most objective sources, so we have to start there. Evolutionary psychology is the branch of evolutionary biology concerned with cognitive function, and I will look very closely at what it has revealed. We like to give ourselves credit for creating a manmade, “artificial” layer of existence, a veneer of civilization, over and above our natural endowment of mental gifts. But it is not over and above; our capabilities are entirely genetic, and our accomplishments are entirely consistent with and driven by our nature. And yet the manmade parts, our culture, education, and personal experience, have themselves through feedback participated in our evolution, while also creating worlds of information beyond anything genetics could have anticipated. Abstraction opened the door to a knowledge explosion, because once it is packaged abstractly, knowledge not only accumulates but can build on itself to create ever more complex artifacts and institutions. So although we are still physical creatures constrained by biochemistry and evolution, we have the freedom to abstract across the full range of functional existence.

  1. Linus Pauling, “The Nature of the Chemical Bond. Application of Results Obtained from the Quantum Mechanics and from a Theory of Paramagnetic Susceptibility to the Structure of Molecules”, J. Am. Chem. Soc., 53 (April 1931), pp. 1367–1400.
  2. Paul Churchland, Neurophilosophy at Work, Cambridge University Press, 2007, p. 2.
  3. The logical positivists in the 1930s and 1940s claimed that all scientific explanations could fit into a formal system (called the Deductive-Nomological Model), which basically said that scientific explanations follow solely from laws of nature, their causes, and their effects. The flaw with this theory was that it tried to be entirely logical and ignored the role of association in forming concepts and mapping them to the world in the first place, which connects the observer to the observed and helps create the subjective foundations (and biases) of theory and knowledge in scientific communities (or any communities). Logic may be perfect but can only be imperfectly married to nature. While postpositivists have tried to salvage some of the certainty of positivism, they can’t, because the underlying problem is that function doesn’t reduce to form. No matter how appealing, scientific realism is meaningless from a functional perspective; we can functionally suppose that form exists because it is pragmatically valuable to do so, so whether it actually exists is irrelevant. Form and function dualism gives postpositivism solid ground to stand on and is the missing link to support the long-sought unity of science.
  4. “Scientific Explanation”, Stanford Encyclopedia of Philosophy, 2003, revised 2014.
