The Mind Matters: The Scientific Case for Our Existence

The mind occupies a very tenuous position scientifically. Probably no two scientists would agree on precisely what kind of entity the mind even is. At one extreme are the eliminative materialists (or eliminativists), who believe that our states of mind, however convincing they seem, are illusions: really just aggregates of neurochemical phenomena with no objective reality of their own. At the other extreme are the cognitive idealists, who believe that our mental states are real but immaterial, that is, that mental states are ontologically distinct from the physical (they have a separate basis for existence) and are not reducible to it. If science had to take an official stance, it would favor the former, noting the predominance and relative certainty of the hard sciences, which have successively eroded notions of human preeminence, such as our place in the universe and our divine creation. Probably most would say that the clear role of the brain as the mechanism settles the matter, and all that remains is to work out the details, at which point we will be able to relegate the mind to the dustbin of history along with the flat earth, geocentrism, and phlogiston. Just as chemistry was shown by Linus Pauling [1] to be reducible to physics but is still worth studying as a special science, so, too, do social scientists tacitly accept that the mind is probably reducible to mechanisms but is still useful to study as if aggregate conceptions of it meant something. The whole point of this book is to prove that this analogy is flawed and to establish a nonphysical yet scientific basis for the existence of the mind as we know it. Dualism, the idea that mind and matter have separate existences, has long had a taint of the mystical or even the supernatural. But dualism is correct, scientific, and not at all mystical, at least when formulated correctly. I did not originate the basis for dualism I will present, but I have refined it substantially so it can stand as a stronger foundation for cognitive science, and for science overall.

Any contemplation of existence starts with solipsism, the philosophical idea that only one’s own mind is sure to exist. As Descartes put it, “I think, therefore I am.” The next candidates for existence are the contents of our thoughts, which invariably include the raft of persistent objects that we think of as constituting the physical world. In fact, objects persist so predictably that a whole canon of materialist scholarship, physical science, has developed. The rules of physical science hold that everything physical must follow physical laws. While these laws cannot be proven, their comprehensiveness and reliability suggest that we should trust them until shown otherwise. Applying physical science to our own minds has revealed that the mind is a process of the brain. That the brain is physical seems to imply that the mind is physical, too, which in turn suggests that any nonphysical sense in which we think of thoughts is an illusion; it is just our mind looking at itself and interpreting physical processes as something more than they are. In the words of the neurophilosopher Paul Churchland, the activities of the mind are just the “dynamical features of a massively recurrent neural network” [2]. From a physical perspective, this is entirely true, provided one accepts the phrase “massively recurrent neural network” as a gross simplification of the brain’s overall architecture. The problem lies in the word “dynamical”, which takes for granted (incorrectly, as we will see) that all change in a physical world can be understood using a purely physical paradigm. However, some physical systems use complex feedback loops to capture and then exploit information, which is not itself physical, so we need a paradigm that can explain processes that use it. Information is the basic unit of function, which is an entirely separate kind of existence. These two kinds of existence create a complete ontology (philosophy of existence), which I call form and function dualism. While philosophers sometimes list a wider variety of categories of being (e.g. properties, events), I believe these additional categories reduce to either form or function and no further.

Because physical systems that use information (which I generically call information management systems) have an entirely physical mechanism, one can easily overlook that something more is happening. Let’s take a closer look at the features of functional existence. Functional existence is not a separate substance in the brain as Descartes proposed. It is not even anywhere in the brain, because only physical things have a location. Any given thought I might have simultaneously exists in two ways, physically and functionally. Its physical form is arguably a subset of my neurons and what they are up to, and perhaps also involves to some degree my whole brain or my whole body. We can’t yet delineate the physical boundaries of a thought, but we know it has a physical form. The thought also has a function, namely the role or purpose it serves. The thought’s purpose is to assist the brain in its overall function, which is to control the body in ways that stand to further the survival and reproduction of the gene line. “Control” broadly refers to predictive strategies that can be used to guide actions, ideally to achieve desired outcomes more often than chance alone would. And even seemingly purposeless thoughts participate in maintaining the overall performance of the mind and can be thought of as practice thoughts if nothing else. For a predictive strategy to work better than chance, it needs to gather information, i.e. to correlate similar past situations to the current situation. The relationship between information and the circumstances in which it can be employed is inherently indirect, and abstractly so. While we establish an indirect reference whenever we use information, until it is used the information is strictly nonphysical, though it can be stored physically. By this, I mean that the information characterizes potential relationships through as-yet unestablished indirect references, and so its “substance” is essentially a web of connections untethered to the physical world, even though this web has been recorded in a physical way. Consequently, no theory of physical form can characterize this nonphysical function, which is why function must be viewed as a distinct kind of existence. The two kinds of existence never even hint at each other. Form doesn’t hint at purpose; only thinking about function does. The universe runs, some would say runs down, without any concept of purpose. And function doesn’t mandate form; many forms could perform the same function.
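To make this more concrete, here is a minimal sketch (my own illustration, not a claim about neural mechanisms) of an information management system in the sense used above: it stores past situations and predicts the outcome of a new one by correlating it with the most similar cases. All names and data are invented.

```python
from collections import Counter

def similarity(a, b):
    """Count shared features; any measure of correlation would do."""
    return len(set(a) & set(b))

def predict(memory, situation, k=2):
    """Weight the outcomes of the k most similar past situations."""
    ranked = sorted(memory, key=lambda m: similarity(m["features"], situation),
                    reverse=True)
    votes = Counter()
    for m in ranked[:k]:
        votes[m["outcome"]] += similarity(m["features"], situation)
    return votes.most_common(1)[0][0]

memory = [
    {"features": {"dark", "rustling", "close"}, "outcome": "flee"},
    {"features": {"bright", "sweet-smell"},     "outcome": "approach"},
    {"features": {"dark", "quiet"},             "outcome": "ignore"},
]
# The stored information is an indirect reference: it only becomes useful
# when correlated with a new situation, here yielding "flee".
print(predict(memory, {"dark", "rustling"}))
```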

While form does not need function for its physical existence, function needs a form, an information management system, to exist physically. This raises the obvious question: if we were to understand the physical mechanism of the mental world, would that explain it? First, “understand” and “explain” are matters of function, not form, so one must presumably be willing to accept this use of function to explain itself. Second, given point one, we must also accept that there are always many functional ways to explain anything, each forming a perspective that covers different aspects (or the same aspects in different ways). In the case of the brain, I would distinguish at least three primary aspects: a mechanical understanding of neural circuitry to explain the form, an evolutionary understanding to explain the long-term (gene line) control function, and an experiential understanding to explain the short-term (single lifetime) control function. Physically, neural circuitry and biochemistry underlie all brain activity. Functionally, information management in the brain includes both instinctive inclinations encoded in DNA and lessons from experience encoded in memory. Both instinct and experience use feedback to achieve purpose-oriented designs, though evolution has no designer and the degree to which we design our behavior will be a subject of future discussion. These three kinds of understanding are the subjects of neurochemistry, evolutionary biology, and psychology respectively.

Neurochemistry and evolutionary biology will not be my primary angle of attack, even though they are fundamental and must support any conclusions I might reach. The reason is that the third category, psychology, encompassing our knowledge from a lifetime of experience and the whole veneer of civilization, has a greater relevance to us than the underlying mechanisms that make it possible, and, more significantly, is “under our control” in ways our wiring and instincts are not. We use the word artificial to distinguish the aspects and products of mind we can control, and human minds appear able to control a whole lot more than other animal minds can. To attack this memory-based, experiential side of the mind, I will at first draw what support I can from neurochemistry and evolutionary biology, and from there propose new theory drawing on introspection, common knowledge, psychology, and computer science. In other words, I am going to speculate, but within a framework as objectively defined, I believe, as any yet attempted for this subject. To keep my presentation from getting bogged down in details, I will at first just state my positions, whether they are based on new theory or old, and I will gradually develop supporting arguments and sources as I go on.

To cut right to the chase, what stands out about the human mind relative to all others is its greater capacity for abstract thought (note that this is new theory). People use their minds to control themselves, but they can also freely associate seemingly any ideas in their heads with any others, ad infinitum, to consider potentially any situation from any angle. Of course, whether and how this association is really free is a matter for further discussion, but, practically speaking, we are flexible with our generalizations and so can find connections between any two things or concepts. This lets us leverage our knowledge in vastly more ways than if we applied it only narrowly, as is more generally the case with other animals. Language is taken by some to be the source and hallmark of human intelligence, and it did coevolve with the evolutionary expansion of abstract thought. Spatial (visual) and temporal (event-based) thinking also expanded, but neither they nor language could have evolved further without abstract thinking, which made them worthwhile and so can be thought of as the driving force. The combination of greater abstract thought with the linguistic, spatial, and temporal gifts that employ it constitutes the human mental edge over other species.

Abstraction is the essence of what it means to be human (above and beyond being an animal). It is the light bulb that goes on in our heads to show that we “get it”. If you interviewed a robot that appeared to be able to make abstract connective leaps, you would have to begrudgingly grant it a measure of intelligence. But if it could not, then no matter how clever it was at other things you would still think of it as a dumb automaton. Abstraction is hardly mandated by evolution, nor is it an obvious development of it, so we could study neurochemistry and evolutionary biology for some time without ever suspecting it happened, as indeed it didn’t in other animals. But we have a strong, vested interest in understanding human minds in particular, so we have to look for differences where we can and try to explain them. A remarkable thing about abstraction is that it gives us unlimited cognitive reach: we can understand anything, and no idea is beyond us. Our unlimited capacity to abstract gives us the ability to transcend our neurochemical and evolutionary limitations, at least in some ways. We can use mechanical means, e.g. books or computers, to extend our memory and processing power. We can ignore evolutionary mandates enforced by emotions and instincts to choose any path we like using abstract reasoning, a freedom evolution granted us with the expectation that we will think responsibly to meet our evolutionary goals and not thwart them. If we destroy the planet and ourselves in the process, that will reveal the drawback of this expectation.

There is a school of thought, new mysterianism, that holds that some problems are beyond human ability to solve and therefore to understand, possibly because of our biological limitations. This position was proposed to justify the insolubility of the so-called “hard problem of consciousness”, which is the problem of explaining why we have qualia (sensory experiences). The problem is considered hard because no apparent physical mechanism can explain them. I hold that it is nonsensical to suggest that any problem is beyond our ability to understand and solve, because understanding comes from without, not from within. That is, understanding is a perspective outside that which is being understood. It consists of generalizations, which are essentially approximations with some predictive power. One can always generalize about anything, no matter how ineffable or complex, so one can always understand it, at least to a certain degree. “Perfect” understanding is always impossible because of the approximate nature of generalizations. In functional terms, to generalize means to extract information, useful patterns collected from different perspectives or observations, that can serve different purposes. For example, we observe the universe, and although we cannot see the underlying mechanism, we have generalized many laws of nature to describe it, explain it, and solve any number of problems. So solutions to problems are just explanatory perspectives. If the point the new mysterians are making is that we can’t prove the true nature of the universe, then that point has to be granted, because laws of nature can’t be proven (and perfect explanations aren’t possible anyway, being approximations). But a theory that explains, say, all relevant observations about qualia can be formulated, and I will present one later on.

But what about problems that are just too complex, with too many parts, for humans to wrap their heads around? By generalizing more we can always break the complex down into the simple, so we can definitely understand and solve them eventually. But I have to concede that some problems may be beyond our practical grasp because it would take us too long to properly understand all the parts and how they fit together. A classic example of this issue is four-dimensional (or higher) vision. We take our ability to visualize in 3D for granted, and yet we draw on highly customized subconscious hardware to do it. Unless we somehow find a way to add 4D hardware to our brains, we will never have an intuitive feel for higher-dimensional thinking. We can always project slices down to 2D or 3D, and so in principle we can eventually solve problems in higher dimensions, but our lack of intuition is a serious practical hindrance. Another example is our “one-track mind”, which makes it hard for us to conceive of complex (e.g. biological) processes that happen in parallel. We instead track them separately and try to factor in knock-on effects, but this is a crude approximation. We have to accept that our practical reach has many constraints that can obscure deeper understandings from us.
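As a small illustration of the slicing workaround (my example, not from the text), the vertices of a tesseract, the 4D analogue of a cube, can be sliced at fixed values of the fourth coordinate, yielding ordinary 3D cubes that intuition can handle.

```python
from itertools import product

# The sixteen vertices of a tesseract (4D cube), sliced at fixed values of
# the fourth coordinate, w.
tesseract = list(product([0, 1], repeat=4))      # all 4D vertices

for w in (0, 1):
    slice_3d = [v[:3] for v in tesseract if v[3] == w]
    print(f"w={w}: a 3D cube with vertices {slice_3d}")

# Each slice is an ordinary cube that intuition can handle; reason must then
# reassemble the slices into the 4D whole that intuition cannot picture.
```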

Let me return from my digression into human-specific capacities to the existential nature of function. I initially said that information is the basic unit of function, and I defined information in terms of its ability to correlate similar past situations to the current situation to make an informed prediction. This strategy hinges on the likelihood that similar kinds of things will happen repeatedly. At a subatomic level the universe presumably never exactly repeats itself, but we have discovered consistent laws of nature that are highly repeatable even for the macroscopic objects we typically interact with. Lifeforms, as DNA-based information management systems, bank on the repeatable value of genetic traits when they use positive feedback to reward adaptive traits over nonadaptive ones. Hence DNA uses information to predict the future. Further adaptation will always be necessary, but some of what has been learned before (via genetic encoding) remains useful indefinitely, allowing lifeforms to expand their genetic repertoire over billions of years. Some of this information is provided directly to the mind through instincts. For example, the value of wanting to eat or to have kids has been demonstrated through countless generations. In the short term, however, that is, over a single organism’s lifetime, instincts alone don’t and can’t capture enough detailed information to provide the level of decision support animals need. For example, food sources and threats vary too much to ingrain them entirely as instincts. To meet this challenge, animals learn, and to learn an animal must be able to store experiential information as memory that it can access as needed over its lifetime. In principle, minds continually learn throughout life, always assessing effects and associating them to their causes. In practice, the minds of all animals undergo rapid learning phases during youth followed by the confident application of lessons learned during adulthood. Adults continue to learn, but acting quickly is generally more valuable to adults than learning new tricks, so stubbornness overshadows flexibility. I have defined function and information as aids to prediction. This capacity to aid prediction underlies function’s meaning and distinguishes it from form, even though we use mechanisms (form) to perform functions and store information. Form and function are distinct: the ability to predict has no physical aspect, and particles, molecules, and objects have no predictive aspect.
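A toy simulation can make the point that gene lines “bank on” repeatable traits. This sketch is hypothetical (the traits and fitness values are invented, and real selection is vastly more complex), but it shows how positive feedback encodes a prediction that past-adaptive traits will remain adaptive.

```python
import random

# Invented traits and fitness values for a stable, cold climate.
TRAIT_FITNESS = {"thick_fur": 1.3, "thin_fur": 0.8}

population = ["thick_fur"] * 10 + ["thin_fur"] * 90

for generation in range(20):
    # Reproduction weighted by fitness: positive feedback rewards traits
    # that worked before, encoding a prediction that they will work again.
    population = random.choices(
        population,
        weights=[TRAIT_FITNESS[t] for t in population],
        k=len(population),
    )

print(population.count("thick_fur"))   # typically near 100 after 20 rounds
```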

With that established we can get to the most interesting aspect of the mind, which is this: brains acquire and manage information in two ways, but only one of those ways is responsible for the existence of minds. These two ways are reasoning and intuition, and minds exist because of reasoning (I will get to why in a moment). Reasoning is the attribution of causes to effects, while intuition covers all information acquired without reasoning, e.g. by discerning patterns and associations. So reasoning is how the mind employs a causative approach, while intuition is how it uses a pattern-analysis approach. All information is helpful either because it connects causes to effects or because it finds patterns that can be exploited. The first approach is the domain of logic, while the second is the domain of data analysis. Reasoning is conducted using atomic units of information called concepts (at least, this is the usage I will employ for concepts). A concept is a container or indirect reference that the mind uses to stand for or represent something else. Intuition doesn’t use atomic information but rather stores and extracts information based on pattern analysis and recognition. For example, we can subconsciously match an input image against our memory, causing a single concept to pop out, which we might think of as a known object or as a word. We often use words to label concepts, though most are unnamed. Every word actually refers to a host of concepts, including its dictionary definition(s) (approximately), its connotations, and also a constantly evolving set of variations specific to our experience. Concepts are generalizations that apply to similar situations. “Baseball”, “three” and “red” can refer to any number of physical phenomena but are not physical themselves. Even when they are applied to specific physical things, they are still only references and not the things themselves. Churchland would say these mental terms are part of a folk psychology that makes sense to us subjectively but has no place in the real world, which depends on the calculations that flow through the brain and does not care about our high-level “molar” interpretation of them as “ideas”. Really, though, the mysterious, folksy, molar property he can’t quite put his finger on is function, and it can’t be ignored or reduced. Brains manage information to achieve purposes, and only by focusing on those purposes (i.e. by regarding them as entities) can we understand what the brain is “really” doing. Concepts, intuition, and reasoning are basic tools the brain uses to achieve its function of controlling the body. But what is it about concepts and reasoning that creates the mind? Why can’t we just go about our business without concepts or minds?
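To illustrate the claim that a concept is a container or indirect reference, here is one possible rendering as a data structure. The fields are my invention, chosen to mirror the dictionary sense, connotations, and experience-specific variations described above.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Concept:
    label: Optional[str]                     # most concepts are unnamed
    definition: str = ""                     # approximate dictionary sense
    connotations: Set[str] = field(default_factory=set)
    experiences: List[str] = field(default_factory=list)

red = Concept(
    label="red",
    definition="the color of blood or a ripe tomato",
    connotations={"danger", "heat", "passion"},
)
red.experiences.append("the barn on my grandparents' farm")

# "red" can be applied to any number of physical things, but the concept
# itself remains a reference to a generalization, never the things themselves.
print(red.label, sorted(red.connotations))
```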

I distinguish the mind from the totality of everything the brain does. Computers do a lot too, but they don’t have minds. The “mind” is a perspective on a certain subset of what the brain does. Specifically, it refers to our first-person capacity for awareness, attention, perception, feeling, thinking, will and reason. Collectively, we call these properties or mental states agency, and we call our human awareness of our own agency self. We don’t currently have a workable scientific explanation for why we seem to be agents experiencing mental states, as opposed to robots or zombies that don’t experience mental states. Consequently, we can’t define them except, irreducibly, in terms of each other. For example, awareness might be defined as “having or showing realization, perception, or knowledge”, and perception might be defined as “awareness of the elements of the environment”. But I have an explanation of agency, which in short is that agency evolved as a very effective (arguably the most effective) way for animals to use reasoning. In more detail, concepts are generalizations, and generalizations are fictional, which literally means they are figments of our imagination. These figments form imaginary worlds, independent of the physical world, composed of concepts defined in terms of other concepts via causative or other kinds of relationships. Causation only exists in these imaginary worlds and not the physical world, because causes and effects are generalizations about kinds of things and events and are not specific things or events. But if the process performing the reasoning in the brain were to act as if its imaginary worlds and their rules of cause and effect were identical to corresponding parts of the physical world, then this would optimize the efficacy and efficiency of its actions. In other words, if the part of the brain doing the reasoning believed it were an agent with a distinct claim to existence (a functional kind), then this would improve its ability to compete with other organisms.

But why, exactly, should the agent approach be better than the (robotic) alternatives? This is a logical consequence of there being one brain to control one body. At any point in time, the parts of the body can each undertake only one action, so one overall, coordinated strategy must govern the whole body. At the top level, then, the brain needs to come to an unending series of decisions about what to do next. Each of these decisions should draw on all the relevant information at the animal’s disposal, including all sensory inputs and any information recorded instinctively in DNA or experientially in memory. With the agency approach, this is accomplished by having a conscious mind in which an attention subprocess focuses information from perception, feeling and memory into an awareness state on which thinking, feeling and reason can act to continually make the next decision. An enormous pool of information is filtered down to create consciousness, specifically to provide the agent process of the brain (i.e. consciousness) with as logically simplified a train of thought as possible, so that it can focus its efforts on what is relevant to making decisions while ignoring everything else. This logical simplification can be thought of as creating a cartoon-like representation of reality that captures the aspects most relevant to animal behavior in packets of information (concepts) to which generalized rules of causation can be applied. Intuition, which includes a broad set of algorithms to recognize patterns, can’t by itself process concepts using logical rules such as cause and effect. Reasoning does this, and the network of concepts it uses to do it creates the agent-oriented perspective with which we are so familiar. The addition of abstraction elevates this agency to the level of human intelligence. So, as I said above, we would recognize a robot that could demonstrate abstraction as being intelligent, but abstraction is a development of reasoning, not intuition, so the robot would need to be reasoning with a relevant set of concepts just as we do. Does this imply it would possess agency? If it were controlling a body in the world, then yes, I think this follows, because its relevant set of concepts would be akin to our own. It might subdivide the world into entirely different concepts, but it would still be using a concept-based simplification derived from sensory inputs that probably depends principally on cause and effect for predictive power. The distinct qualia (sensations) that make up our conscious experience are physically just information in the form of electrochemical signals. But each quale feels distinctive so that the agent can tell them apart. We also have innate and learned associations for each quale, e.g. red seems dangerous, but the distinctiveness is the main thing, as it lets a single-stream train of thought monitor many sensory input channels simultaneously without getting confused. Provided our putative robot had distinct streams of sensory inputs feeding a simplified, concept-based central reasoning process, that distinctiveness could be said to be perceived as qualia just like our own. Note that intuition happens outside of our conscious control or awareness and so does not need qualia (i.e. it doesn’t feel), though it can make use of the information. We only have direct conscious awareness of a small amount of the processing done in our brains; the rest is subconscious.
I will use intuition and subconscious synonymously, though with different connotations. Reasoning and consciousness are not synonyms, because the conscious mind can access much intuitive knowledge and so uses both reasoning and intuition to reach decisions. Our mind seems to us to be a single entity, but it is really a partnership between the subconscious and the conscious. The conscious mind can override the subconscious on any deliberated decision, but to promote efficiency the simplest tasks are all delegated to the subconscious via instinct and learning. Though we feel like subconscious decisions are “ours”, we may find on conscious review that we don’t agree with them and will attempt to retrain ourselves to act differently next time, essentially adjusting the instructions that guide the subconscious.
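The division of labor described above can be caricatured in a few lines of code. This is only a schematic sketch under the text’s description, with invented priorities standing in for instinct and learning: an attention step filters a large sensory pool into a small awareness state, and a single decision stream acts on it.

```python
def attention(sensory_pool, relevance, width=3):
    """Filter the flood of inputs down to the few most relevant items."""
    return sorted(sensory_pool, key=relevance, reverse=True)[:width]

def decide(awareness):
    """One brain, one body: a single coordinated action per moment."""
    if "threat" in awareness:
        return "flee"
    if "food" in awareness:
        return "approach"
    return "explore"

sensory_pool = ["wind", "grass", "threat", "birdsong", "food", "itch"]
priority = {"threat": 3, "food": 2}               # instinct plus learning

awareness = attention(sensory_pool, lambda x: priority.get(x, 0))
print(decide(awareness))                          # flee
```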

Before I move on I’d like to explain the power of reason over intuition in one other way. If most of our mental processing is subconscious and does not use reason, and we can let our subconscious minds make so many of our daily decisions on autopilot, why do we need a conscious reasoning layer at the top to create a cartoon-like world? Note that our more complex subconscious behaviors got there in the first place through conscious programming (learning) using concepts, so although we can carry out such behaviors without further reasoning, we used reasoning to establish them. The real question, though, is whether subconscious algorithms that glean patterns from information could theoretically solve problems as well, eliminating the need for consciousness. While people aren’t likely to change their ways, an intelligent computer program that didn’t need code for consciousness would be easier to develop. Let’s grant this computer program access to a library of learned behavior to cover a wide variety of situations, which is analogous to the DNA-based information the brain provides through instinct. Let’s further say the program can use concepts as containers to distinguish objects, events, and behaviors. Such a program could know from experience and data analysis how bullets can move. They can stay still, fall, be thrown, or be fired from a gun at great speed. Still things generally touch other things below them, falling things don’t touch other things below them, and thrown and fired things follow throwing and firing actions. What is missing from this picture is an explanatory theory of cause and effect, and more broadly the application of logic through reason. The analysis of patterns alone does not reveal why things happen, because it doesn’t use a logical model with rules of behavior. The theory of gravity says that the earth pulls all things toward it, and more generally that all things exert an attractive force on each other, proportional to their masses and inversely proportional to the square of the distance between them. The weakness of physical intuition compared to theory is made clear by the common but mistaken intuition that the speed at which objects fall is proportional to their weight. Given more experience observing falling objects, one will eventually develop an intuitive sense that aligns well with the laws of physics, but trying to do science by studying data instead of theorizing about cause-and-effect relationships would be very slow and inconclusive. The intuitions we gather from large data sets are indispensable to our overall understanding but are only weakly suggestive compared to the near certainty we get from positing laws of nature. The subconscious is theory-free; it just circulates information looking for patterns, including information packaged up into concepts. When it encounters multiple factors in combinations it has not seen before, it has no way of predicting combined effects. In the real world, every situation is unique and so has a novel combination of factors. Reasoning with cause and effect can draw out the implications of those factors where pattern analysis could only see likelihoods relative to possibly irrelevant past experience.
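The contrast can be seen in a toy prediction problem (illustrative numbers, not real data): a pattern-based predictor can only reuse the nearest past case, while a causal model, here the kinematics of falling bodies, extrapolates correctly to situations never seen before.

```python
# Past observations: time fallen (s) -> distance fallen (m)
observed_drops = {1.0: 4.9, 2.0: 19.6}

def pattern_predict(t):
    """Nearest-case lookup: helpless outside past experience."""
    nearest = min(observed_drops, key=lambda k: abs(k - t))
    return observed_drops[nearest]

def causal_predict(t, g=9.8):
    """d = (1/2) * g * t**2: the theory covers every case, seen or unseen."""
    return 0.5 * g * t ** 2

print(pattern_predict(5.0))   # 19.6 m: just repeats the closest past case
print(causal_predict(5.0))    # 122.5 m: reasoning from causes gets it right
```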

A self-driving car must be able to evaluate current circumstances and select appropriate responses. While we have long had the technology to build sensors and mechanical or computer-based controllers, we haven’t been able to interpret sensor data well enough to replace human drivers. Machine learning has solved that problem, and we can now train algorithms using thousands of examples to recognize things. This recognition mirrors our subconscious approach by using data and positive feedback. Self-driving car algorithms plug recognized objects into a reason-based driving model that follows the well-defined rules of the road. To ensure good recognition and response in nearly any circumstance, these programs use data from millions of hours of “practice”. What they do is akin to us performing a learned behavior: we collect a little feedback from the environment to make sure our behavior is appropriate, and then we just execute it. To tie our shoes we need feedback to locate the laces and to keep the tension appropriate throughout the process, but mostly we don’t think and it just happens. We need to be able to reason to drive well because we have to be prepared to act well when we encounter new situations, but a self-driving car, with all of its experience, is likely to have seen just about every kind of situation it will ever encounter and already has a response ready. That overwhelming edge in experience won’t help when it encounters a new situation that reason could have easily solved, but even so, self-driving cars may already be 20 times safer than humans and will likely soon be over 100 times safer, mostly because human error causes the vast majority of accidents. Although computer algorithms still can’t do general-purpose reasoning, our reasoning processes have lots of subconscious support, so applying machine learning to reasoning will continue to increase the cleverness of computers and may even bring them all the way to abstract intelligence. My goal is to unveil the algorithm of reason, to the extent that this can be done using reason. That will certainly include crediting subconscious support where it is due, but more significantly it will expose the role and structure of consciousness.
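Schematically, the hybrid design described above might look like the following sketch. The object labels and rules are invented placeholders, not any vendor’s actual pipeline: a learned recognizer supplies labels, and an explicit rule-based driving model acts on them.

```python
def recognize(sensor_frame):
    """Stand-in for a trained classifier (pattern analysis over raw data)."""
    return sensor_frame["labels"]       # as if trained on millions of examples

RULES = [
    (lambda objs: "red_light" in objs,  "stop"),
    (lambda objs: "pedestrian" in objs, "yield"),
    (lambda objs: "clear_road" in objs, "proceed"),
]

def drive(sensor_frame):
    objects = recognize(sensor_frame)   # intuition-like: recognition
    for condition, action in RULES:     # reason-like: explicit rules
        if condition(objects):
            return action
    return "slow_down"                  # unseen situation: no real reasoning

print(drive({"labels": {"pedestrian", "clear_road"}}))   # yield
```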

All animal minds bundle information into concepts through abstraction for convenient processing by their conscious minds. Abstract thought employs conceptual models, which are sets of rules and concepts that work together to characterize some topic to be explained (the subject of prediction). We often perceive conceptual models as images or representations of the outside world “playing” inside our heads. While we can’t exactly describe the structure of conceptual models, we can represent them outside the mind using language or a formal system. Formal systems, which often employ formal languages, can achieve much greater logical precision than natural language. But what both formal and natural languages have in common is that they treat concepts atomically. We ultimately need intuition, i.e. subconscious skills, to resolve concepts to meanings. Yes, we can reason out logical explanations of concepts in terms of other concepts, but these explanations can only cover certain aspects and invariably miss much of the detail we grasp from our immense body of experience with any given concept, a grasp that depends on subconscious associations. Again, the bicameral mind (a partnership between the subconscious and the conscious, not the speaking/listening division proposed by Julian Jaynes [3]) feels to us quite unified even though it actually blends intuitive understandings based on subconscious processes with rational understandings orchestrated by conscious processes. From this, we can conclude that formal systems simplify out a critical part of the model. Natural language also simplifies, but words carry subtleties through context and connotation. Mental models combine all of our intuitive, subconscious-based knowledge with the reasoned, concept-based knowledge we use in conceptual models. Put another way, conceptual models manage the cartoon-like, simplified view at the center of reasoning, while mental models combine these logical views with all the sensory and experiential data that backs them up.
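To see what it means for a formal system to treat concepts atomically, consider this toy forward-chaining system (my sketch, not a claim about mental architecture). The derivation is logically precise, yet each concept is an opaque token whose intuitive content never enters the system.

```python
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"},  "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

# Forward chaining: apply every rule whose premises are already known facts.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# The derivation is precise, yet "socrates_is_human" is an opaque atom: the
# intuitive, experiential content of "human" never enters the system.
```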

The logical positivists of the 1930s and 1940s claimed that all scientific explanations could fit into a formal system (called the Deductive-Nomological Model), which held that scientific explanations follow deductively from laws of nature together with antecedent conditions. The first flaw with this theory was that it committed the category mistake of equating function with form. Scientific explanations, and all understanding, exist to serve a function, which is to say they have predictive power and consequently are carriers of information. That which is to be explained, the explanandum or form, is explained by an explanans or function. It is not that the form doesn’t exist in its own right; it is that our only interest in it relates to what might happen to it, its function. The second flaw with the DN Model was that it presumed that explanations require only a deductive or logical approach, but as I explained above, patterns are fundamental to comprehension, as they set the foundation that connects the observer to the observed. Logic may be perfect, but it can only be imperfectly married to nature, a connection established by detecting and correlating patterns. Postpositivists have tried to salvage some of the certainty of positivism by acknowledging that human understanding introduces uncertainty, but they can’t, because the real problem is that function doesn’t reduce to form. No matter how appealing, scientific realism (the view that the universe described by science actually exists) is irrelevant to science. Science is indifferent to the noumenal (what actually exists); it is concerned only with the phenomenal (that which is observed) and what we can learn from observation. Form and function dualism gives postpositivism solid ground to stand on and is the missing link needed to support the long-sought unity of science. I contend that functional explanations are incomplete by their nature, providing the ability to predict the future better than chance but guaranteeing nothing. It is consequently unfair to call such explanations “partial”, because there is no such thing as a “full” explanation.
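For reference, the Deductive-Nomological schema of Hempel and Oppenheim can be written as follows: the explanandum is deduced from laws of nature together with antecedent conditions (the schema itself is standard; the bracing labels are mine).

```latex
% The Deductive-Nomological schema: the explanandum E is deduced from
% laws of nature and antecedent conditions.
\[
\underbrace{L_1, \ldots, L_m}_{\text{laws of nature}},\;
\underbrace{C_1, \ldots, C_n}_{\text{antecedent conditions}}
\;\vdash\; E \quad \text{(explanandum)}
\]
```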

  1. Linus Pauling, “The Nature of the Chemical Bond. Application of Results Obtained from the Quantum Mechanics and from a Theory of Paramagnetic Susceptibility to the Structure of Molecules,” J. Am. Chem. Soc. 53, no. 4 (April 1931): 1367–1400.
  2. Paul Churchland, Neurophilosophy at Work (Cambridge University Press, 2007), p. 2.
  3. Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind (Houghton Mifflin, 1976).
