The Mind Matters: The Scientific Case for Our Existence

Scientists don’t know quite what to make of the mind. They are torn between two extremes: the physical view, that the brain is a machine and the mind is a process running in it, and the idealist view, that the mind is non-physical or trans-physical despite operating in a physical brain. The first camp, called eliminative materialism (or just eliminativism, materialism, or reductionism), holds that only the physical exists, as supported by the hard scientific theories and evidence of physics, chemistry, and biology. The second camp ranges from the solipsists at its far end, who hold that only one’s own mind exists, through a larger group of idealists, who hold that one or more minds exist but the physical world does not. Most members of the second camp, however, acknowledge physical existence but think something about our mental life can never be reduced to purely physical terms, though they can’t quite put their finger on what it is. Social science and our own intuition both assume that mental existence is something beyond physical existence. A materialist will hold either that (a) there are no mental states, just brain states (eliminativism), or that (b) there are mental states, but they can be viewed as (reduced to) brain states (reductionism). An idealist will hold that mental states can’t be reduced, which means they have some kind of existence beyond the physical, and that the physical world may exist but, if so, can never be conclusively proven to.

I propose to resolve all these issues by reintroducing dualism, the idea of a separate kind of existence for mind and matter, but I will do so in a way that is compatible with materialist science. The eliminativist idea that everything that exists is physical, aka physical monism, is true so long as one is referring to physical things, including matter, energy, and spacetime. But another kind of existence entirely, which I will hereafter call functional existence, concerns the capabilities inherent in some systems. These can be capabilities of physical or fictional systems; their physical existence is independent of their functional existence and only relevant to it in limited ways. A capability is the power to do something, so functionality can be called a behavioral condition, which is distinct from any underlying physical mechanism that might make it possible. What the mind does, as opposed to what the brain does, is to support the appropriate function of the body; the term “mind” refers to functionality while “brain” indicates the physical mechanism. Scientific explanations themselves (even those of materialists) are entirely functional and speak to capabilities; they are not at all about the physical form an explanation might take in our brains. But these explanations don’t have to be about functional things, and those of physics and chemistry never are.

Though all functional things must take a physical form in a physical universe such as the one we are in, this doesn’t mean function “reduces” to the physical. The quest for reductionism is misguided and has been holding science back for entirely too long. We have to get past it before we can make meaningful progress in subjects where functional existence is paramount. To do that, let’s take a closer look at where and how function arises in a physical universe. So far as we know, everything in the universe except life is devoid of function. Physics and chemistry help us predict the behavior of nonliving things with equations. Physical laws are quite effective for describing simple physical systems but quite helpless with complex physical systems, by which I mean systems with chaotic, complex, or functional factors, or some combination of them. Chaos is when small changes in starting conditions have butterfly effects that eventually change the whole system. Complex factors are those whose intricate interactions exceed the predictive range of our available models, which necessarily simplify. We see both chaos and complexity in weather patterns, and yet we have devised models that are pretty helpful at predicting them. But the weather and all other nonliving systems are not capable of controlling their own behavior; they are reactive, not proactive. Capability arises in living things because they are complex adaptive systems (CAS) that exploit positive feedback to perpetuate changes in DNA and in the bodies that replicate it. I can’t prove that functionality could only arise in a physical universe through complex adaptive systems, but any system lacking cycles of both positive and negative feedback would probably never get there. Over time, a CAS creates an abstract quantity called information, which is a pattern that has predictive power over the system.
We can’t actually predict the future using information, but we can identify patterns that are more likely to happen than random chance, and this power is equivalent to an ability to foretell, just less certain.
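The butterfly effect mentioned above is easy to demonstrate with the logistic map, a standard toy model of chaos. In this sketch (the parameter values are arbitrary choices of my own for illustration), two trajectories that start almost identically become completely different after a few dozen steps:

```python
# Chaos in miniature: the logistic map x' = r*x*(1-x) at r = 4.
# A perturbation of one part in a billion eventually changes everything.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # a "butterfly" perturbation

# Early on, the two systems look identical...
print(abs(a[5] - b[5]))    # still a tiny difference
# ...but later they bear no resemblance to each other.
print(abs(a[50] - b[50]))  # a large difference
```

The long-run statistics of the map remain stable and predictable even though any individual trajectory is not, which is the sense in which a pattern can have predictive power without foretelling the future.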

Functional systems, which I will also refer to as information management systems, must be physical in a physical universe. But because the mechanisms that control them are organized around what they can do (their capability) instead of how they do it (their physical mechanism), we must speak of their functional existence in addition to their physical existence. By leveraging feedback, these systems acquire a capacity to refer to something else, to be about something other than themselves that is not directly connected to them. This indirection is the point of detachment where functional existence arises and (in a sense) leaves physical existence behind. At this point, the specific link is broken and a general “link” is established. Such an indirect link refers only to the fact that the information can be applied appropriately, not that any link is stored in any way. How a functional system uses information about something else to influence it can be implemented physically in many ways, but understanding those ways is not relevant to understanding the information or the function it makes possible. At the point the information detaches, it gains existential independence; it is about something, and how it accomplishes that doesn’t particularly matter. It has a physical basis, but that won’t help us explain its functional capabilities at all. While every brain and computer has different processing powers (memory, speed, I/O channels, etc.), in principle they can manage the same information in completely different ways because the measure of information is the function it makes possible, not how it is implemented.
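The point that the measure of information is the function it makes possible, not its implementation, can be illustrated with a trivial sketch. The two functions below are hypothetical examples of my own: they share no mechanism, yet they carry the same information because they confer the same capability.

```python
# Two mechanically unrelated ways to answer the same question: "is n even?"
# One computes the answer arithmetically; the other reduces the input to a
# stored table. Functionally they are the same thing.

def is_even_arithmetic(n):
    """Decide evenness by arithmetic: divide by two and check the remainder."""
    return n % 2 == 0

EVEN_TABLE = {0: True, 1: False}  # a stored "mechanism" instead of a computed one

def is_even_lookup(n):
    """Decide evenness by reducing to the last decimal digit and a table lookup."""
    last_digit = abs(n) % 10
    return EVEN_TABLE[last_digit % 2]

# Functionally identical, physically (mechanically) unrelated:
assert all(is_even_arithmetic(n) == is_even_lookup(n) for n in range(-100, 100))
```

Nothing about either mechanism needs to be understood to understand the function, which is the sense in which the information has detached from its physical basis.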

In summary, I am proposing a new kind of dualism, which I call form and function dualism, which says that everything is physical, but also that information management systems create additional entities that have a functional existence. Further, an infinity of functional entities can be said to exist hypothetically even if no physical system is implementing them. Information management systems that exist include:

  • biological organisms, which store information in genes using DNA,
  • minds, which store information in brains neurochemically,
  • civilizations, which store information in institutions (rule-based practices) and artifacts (e.g. books), and
  • software, which stores information in computer memory devices.

Organisms, minds, civilizations, and software can be said to have functions, and it is meaningful to discuss such functions independently of the underlying physical systems they run on.

Socrates and Plato recognized that function stands qualitatively apart from the material world and explained it using teleology. Teleology is the idea that in addition to a form or material cause, things also have a function or purpose, their final cause. They understood that while material causes were always present and hence necessary, they were not sufficient or final to explain why many things were the way they were. Purposes humans imposed on objects like forks were called extrinsic, while purposes inherent to objects, like an acorn’s purpose to become a tree, were called intrinsic. Aristotle listed four causes to explain change in the world, not just two: material, formal, efficient, and final, but the latter three are just aspects of function (akin to what, who, and why) and so are teleological. He used the word cause more broadly than we do today; cause, as in cause and effect, now refers only to the efficient cause, “who” caused what. The formal cause refers to the lines we draw to distinguish wholes from their parts, i.e. our system of classification. To the Greeks, these lines seemed mostly intrinsic, but we see them today more as artificial constructs we impose on the world for our convenience.

Although Aristotle had put science on a firm footing by recognizing the need for teleological causes, he failed to recognize the source of purpose in natural systems. It is my contention that information management systems are the source; they accomplish purposes and functions whenever they apply information to guide their actions. The Scientific Revolution picked up four hundred years ago where Aristotle left off, but information as such would not be discovered until 1948[1], which then led into systems theory[2], also called cybernetics, in the following decades. Complex adaptive systems are complex systems that evolve, and living organisms are complex adaptive systems with autopoiesis, the ability to maintain and replicate themselves. Minds are dynamic information management systems built in animal brains that create information in real-time. Civilizations and software are human-designed information management systems that depend on people or computers to run them.

Our perspective has been influenced by the path of scientific progress. The power of the scientific method, which uses observation, measurement, and experiment to validate hypotheses about the natural world, split the empirical sciences from the formal sciences like mathematics, which deal in immaterial abstractions. The empirical sciences then divided into natural sciences and social sciences because progress in the latter was only possible by making some irreducible assumptions about human nature, chiefly that we have minds and know how to use them. These assumptions implicitly acknowledge the existence of function in the life of the mind without having to spell it out as such. Darwin’s discovery of evolution then split the natural sciences into physical and biological sciences. Until that point, scientists considered living organisms to be complex machines operating under physical laws, but now they could only be explained through the general-purpose advantages of inherited traits. This shift from the specific to the general is the foundation of information and what distinguishes it from the physical. So both the biological and social sciences tacitly build on a foundation of the teleological existence of function, but they are reluctant to admit it because the exhaustive reach of material existence has come to be synonymous with the triumph of science over mysticism. But physical science alone can’t give us a complete picture of nature because function, which begins as physical processes, can acquire persistence and hence existence in the physical world through information management systems.

The social sciences presume the existence of states of mind which we understand subjectively but which arise out of neural activity. The idea that mental states are not entirely reducible to neural activity is called emergentism. An emergent behavior is one for which the whole is somehow more than the parts. Emergence is real, but what is actually “emerging” in information management systems is functionality. Understanding the underlying physical system can’t explain behavior because the link between them is indirect, which as noted above detaches the physical from the functional. Also, unlike digital computers, which are perfectly predictable given starting conditions, minds have chaotic and complex factors that impede prediction. We can conclude that emergence is a valid philosophical position that describes the creation of information, though it is a misleading word because it suggests the underlying physical system causes the functional system. Cause in a feedback-based system is more of a two-way street in which many interactions between before-events and after-events yield functional relationships which the underlying physical system leverages. The physical system is somewhat irrelevant to the capabilities of a functional system, which is in many ways independent of it. Ironically, the functional system can thus equally claim that the physical system emerges from it, which is the claim of idealism. All of language, including this discussion, and indeed everything the mind does, consists of functional constructs realized with the assistance of physical mechanisms but “emerging” not so much from them as from information and information management processes. A job does not emerge from a tool, but through feedback a tool can come to be designed to perform the job better.

Before life came along, the world was entirely physical; particles hit particles following natural physical and chemical rules. While we don’t know how the feedback loops necessary for life were first bootstrapped, we know conditions must have existed that allowed feedback and competition to take hold. I will discuss a scenario for this later, but the upshot is that DNA became an information store for all the chemicals of life, and it became embedded in single-celled and later multicellular organisms that could replicate themselves. According to form and function dualism, the information in DNA is a nonphysical aspect that relates to the capabilities it confers on the organism. We characterize those capabilities in biology as genetic traits that confer adaptive advantages or disadvantages relative to alternative versions (alleles) of the same traits. Chemically, a gene either codes for a protein or regulates other genes. It doesn’t matter to the feedback loops of natural selection what a given gene does chemically, just whether the organism survives. Survival or death is the way the function of a gene is measured. Chemistry is necessary for function, but survival is indifferent to chemistry. In many cases, the chemical basis of a genetic advantage seems clear, while in others the “purpose” of the gene can be hard to identify. Also, knowing the chemistry will not reveal all the kinds of circumstances in which it might help or hurt. Any model of causes and kinds of circumstances we develop will be a gross simplification of what really happens, even though it may work well most of the time. In other words, the traits we describe are only generalizations about the purpose of the gene. Its real purpose is an amalgamated sum of every selection event back to the dawn of life. The functionality is real, but with a very deep complexity that can’t be summarized without loss of information.
The functional information wrapped up in genes is a gestalt that cannot be decomposed into parts, though an approximation of that function through generalized traits works well enough in most cases. Although evolution is not directed, competition preferentially selects effective mechanisms, which is such a strong pressure that it tends to make genes very good at what they do. Mutations create opportunities for new functionality, but can also disable genes and the traits they confer when niches change. To recap, genes collect information using DNA from feedback that predicts that a competitive advantage for a forebear will yield a competitive advantage for a descendant. It is a slow way to collect information, but evolution has had plenty of time and it has proven effective.
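As a rough illustration of selection as a feedback loop that collects information, the following toy simulation (with population size, advantage, and generation count all being hypothetical parameters of my own choosing) shows an allele with a modest survival advantage coming to dominate a gene pool, so that the pool ends up encoding a fact about the environment that no individual organism represents:

```python
# A caricature of natural selection: allele "A" survives slightly more often
# than allele "a". Iterating survival-and-replication feedback shifts the
# gene pool toward "A", i.e. the pool accumulates information.
import random

random.seed(0)  # fixed seed so the run is repeatable

def generation(pool, advantage=0.1):
    """One cycle of feedback: survive with a bias, then replicate to full size."""
    survivors = [g for g in pool
                 if random.random() < (0.5 + advantage if g == "A" else 0.5)]
    return [random.choice(survivors) for _ in range(len(pool))]

pool = ["A"] * 50 + ["a"] * 50  # start with both alleles equally common
for _ in range(200):
    pool = generation(pool)

# After many cycles of feedback, the advantageous allele dominates the pool.
print(pool.count("A") / len(pool))
```

Nothing in the simulation "knows" that "A" is better; the knowledge exists only as a pattern in the pool, which is the sense in which the information is a gestalt sum of every selection event.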

Beyond the information managed by the genes that form the blueprint of the body, organisms need to manage some information in real time, and instinct is a tool they all possess to do this. The mechanisms that regulate the flow of metabolites in plants via xylem and phloem are not called instincts because this feedback is applied directly, without the indirection an information processing system such as a brain would employ. Instinct covers all information-based behavior that doesn’t leverage experience or reasoning. To qualify as information, a set of input data needs to be collected into an internal representation that can then be applied to future circumstances, not directly but via an algorithm that indirectly matches features of the representation to the circumstance. Such algorithms that work the same way regardless of experience (learning) or reasoning are labeled instincts and include all the subconscious impulses and processes of the brain, covering things like eating, mating, emotions, and drives. Instinct also includes sensory processing, such as smell, touch, body sense and, most notably, vision, which creates high-fidelity 2D images and transforms them into representations of 3D objects, which are then recognized either as specific objects or as types of objects.
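The distinction drawn above, a fixed algorithm matching features of an internal representation against a circumstance, can be caricatured in a few lines. The feature names and the threshold here are hypothetical, purely for illustration:

```python
# An "instinct" as a fixed matching algorithm: an innate template of features
# is compared against a perceived circumstance. No experience ever modifies
# the template or the threshold, which is what makes it instinct rather
# than learning.

PREDATOR_TEMPLATE = {"large", "moving", "approaching"}  # innate representation

def instinct_flee(perceived_features):
    """Fire if and only if the circumstance matches enough template features."""
    overlap = len(PREDATOR_TEMPLATE & perceived_features)
    return overlap >= 2  # fixed, genetically "hard-wired" threshold

assert instinct_flee({"large", "approaching", "brown"})     # partial match fires
assert not instinct_flee({"small", "stationary", "green"})  # no match, no response
```

The match is indirect: the template refers to predators in general, not to any particular predator the organism has encountered, which is what qualifies the template as information.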

Instincts take ages to evolve and solve only the most frequently encountered problems, but creatures face challenges every day where novel solutions would help. A real-time system that could tailor the solution to the problem at hand would provide a tremendous competitive advantage. At least two approaches evolved to do this. The first, learning, creates a store of knowledge and/or behavior that has worked in the past with the expectation that it can be applied in the future. Learned behavior is not instinctive because it depends on stored experience rather than genetic impulse alone. Even many primitive animals, and perhaps even plants, demonstrate some capacity for learning. The other approach, reasoning, is the capacity to draw conclusions from new or learned information. Reasoned solutions are not instinctive because they tailor solutions to suit circumstances. We reason in two ways. Intuition is subconscious reasoning in which we come by knowledge (useful information) without knowing how we did it, while logical reasoning is conscious reasoning with logic. Logic operates on the large pool of subconceptual data (akin to big data in computers) created by instinct and experience. This pool must first be partitioned into generalized elements called concepts (akin to structured data in computers) that have associated properties and that interact according to rules of entailment, also called cause and effect. We then group concepts and the rules about them into mental models that represent possible worlds. Logical reasoning lets us run simulations in possible worlds to reach conclusions that can be very effective in the physical world, giving us an open-ended capacity to solve problems in real time. We catalog successful (and failed) mental models and use them to help us navigate the world.
In fact, one of these imaginary worlds, which I call the mind’s real world, has a special status: it is our best guess from current sensory information about what is happening in the real world. Whatever the physical world might actually be, we only know it consciously through the mind’s real world.
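The machinery described above (concepts with properties, rules of entailment, and mental models simulating possible worlds) can be caricatured in code. Everything here, the concepts, the rule, and the fixed-point loop, is a hypothetical toy of my own, not a model of actual cognition:

```python
# A minimal "mental model": a possible world is a set of facts, concepts are
# property sets, and rules of entailment add consequences. Simulation means
# applying rules until the world stops changing.

concepts = {
    "fire": {"hot"},
    "ice":  {"cold"},
}

def rule_burn(world):
    """Entailment: touching something with the property 'hot' causes pain."""
    if "touching_fire" in world and "hot" in concepts["fire"]:
        world.add("pain")

def simulate(world, rules):
    """Run the mental model to a fixed point: apply rules until nothing changes."""
    changed = True
    while changed:
        before = set(world)
        for rule in rules:
            rule(world)
        changed = world != before
    return world

# A consequence reached in a possible world, with no physical experiment needed:
outcome = simulate({"touching_fire"}, [rule_burn])
assert "pain" in outcome
```

The point of the sketch is that the conclusion follows from the model alone; it is useful in the physical world only insofar as the model correlates with it, which is the claim being made about logical reasoning.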

In summary, all mental behavior results from instinct, learning, or reasoning, which are almost always combined to leverage the strengths of each. Our instincts keep us closely connected to the world and motivate us. Learning uses feedback to find useful patterns and file them away for future use. Reasoning finds reasons, i.e. rules, that work reliably or even perfectly in well-defined circumstances and then applies them. These three talents are distinct but can be very hard to separate cleanly. We know many of our behaviors are influenced by instincts written in our genes, but we are quite unsure where instinct leaves off and learning begins because so much is based on them working together. And while logical reasoning can be readily separated from learning and intuition in principle, we can’t easily separate them in practice when we reflect on how we think. For example, both learning from experience and intuition subconsciously review knowledge, scanning for useful patterns, but that review includes knowledge created using logical reasoning and so can reach conclusions that are partially based on reason and may seem like they were reasoned logically even though they just fit the data well.

Dam building is a complex behavior in beavers that seems like it needed to be reasoned out and taught from generation to generation, and yet “young beavers, who had never seen or built a dam before, built a similar dam to the adult beavers on their first try.”[3] So it is entirely instinctive. We know language acquisition is similarly innate in humans because humans with no language will create one[4]. But we know that all the artifacts of civilization (except perhaps the hand axe, which may have been innate), including all its formal institutions, draw substantially on learning and logical reasoning. Our experience of our own minds is both natural (i.e. instinctive) and artificial (i.e. extended by learning and logical reasoning), but these aspects are so intertwined in our perspective that they can be difficult to impossible to distinguish in many cases. For example, we know our sense of color is innate, and yet our reactions to colors are heavily influenced by culture. Or, we sometimes have the impression we have made a logically reasoned argument when all we have done is rationalized an instinctive impulse or an intuitive hunch. But although the three capacities always work together and overlap in their coverage, I believe they arise from fundamentally different kinds of cognitive processes that can be examined separately.

While nearly all animals learn from experience, not all can reason, and especially not logically. Reason is sometimes held to be the unique province of humans, but birds and mammals can do it, and other animals may also be able to in rudimentary ways. Birds and mammals (let’s call them advanced animals for short) can generalize about objects, i.e. use concepts, and develop logical consequences about them. We can’t ask them what they are thinking, but problem-solving behaviors like planning and tool use go beyond what instinct or experience based on trial and error could achieve. Still, we know humans are “smarter,” but what is it that distinguishes us? It is our greater capacity for abstract logical reasoning. Abstraction is the ability to decouple information from physical referents, to think logically in terms of concepts and mental models independent of physical reality. We consequently don’t need to constrain our thoughts to the here and now; we can dream in any direction. This greater facility and impetus for abstraction has coevolved with an ability to think spatially, temporally, logically, and especially linguistically that surpasses that of other animals. Loosening this tether back to reality began with small changes in our minds, but these changes opened a floodgate of increased abstraction because its adaptive power is so great. Though we must ultimately connect generalities back to specifics, most words are generic rather than specific, so our whole concept of language is based more on possible worlds than on the mind’s real world specifically. I call our ability to control our thoughts in any direction we choose directed abstract thinking, and I maintain animals can’t do it. Animals can reason, focus, imitate, wonder, remember, and dream, but their behavior suggests they can’t pursue abstract chains of thought very far or at will. Arguably, their skill set falls below the threshold needed for directed abstract thinking to be beneficial.
But why is it useful to be able to decouple information from the world? The greater facility a mind has for abstraction, the more creatively it can apply information to outperform instincts.

Abstraction opens the door to an unlimited range of possibilities, but evolution has kept this facility practical. The range of what is functionally possible is the domain of philosophy, and the range of what is functionally actual (and hence practical) is the domain of psychology, the study of mental life. My explanation of the mind starts by enlarging our philosophical framework to include functional existence and moves from there to explain our psychology. Psychology spans neuropsychology, behavioral psychology, evolutionary psychology, cognitive psychology, psychoanalysis, humanistic psychology, introspection, and a portion of cognitive science, and so brings a lot of perspectives to bear on the problem. They each constrain the possible to the actual in a different way depending on their functional objectives. Before I look at what psychology tells us, I’d like to develop a philosophy of information management. I described above how information is managed in the mind through instinct, learning, and reason, which as I noted differ in that instinct imposes ingrained behaviors, learning creates behaviors customized to circumstances through experience, and reason can create entirely novel behaviors. With ingrained behaviors, the ends (essentially) justify the means, which makes the means a gestalt indivisible into explanatory parts. Explanation is irrelevant to the feedback loops that create instinct, which produce supporting feedback based on overall benefit to survival. Learning from experience is based on intuitive feedback, i.e. we discover methods using our instincts, prior experience, and trial and error that we reinforce because we liked their outcomes. Reasoning (and learning from reasoning) creates the criteria it uses for feedback. 
A criterion is not a physical thing but a functional entity, a “standard, ideal, rule or test by which something may be judged.” What this implies is that reasoning depends both on representation (which brings that “something” into functional existence) and entailment (so rules can be applied). Philosophically, reasoning can never work in a gestalt way; it requires that the pool of data be broken down into generalized elements (“somethings”) that interact according to logical rules. And, as noted, reasoning is a combination of intuition and logical reasoning. Logical reasoning stands apart from reasoning overall in that it can be logically isolated from intuition, which gives it the power to be completely objective (repeatable), whereas intuition is completely subjective (unrepeatable) because its gestalt foundation can only be explained approximately. Are there other ways to manage information in real time beyond instinct, learning, and reason? Perhaps, but I believe these are sufficient to explain minds. To summarize, functional existence arises in some complex physical systems through feedback loops that create information, which is a pattern that has predictive power over the system. The feedback loops of instinct use natural selection over millennia to create gestalt mechanisms that “work because they work” and not because we can explain how they work. The feedback loops of learning and reasoning use neural processing over seconds to minutes. Learning from experience works because many circumstances repeat, while reason works because self-contained logical models are internally true by design yet also correlate well to many real-world situations.

I’ve made a case for the existence of functional things, which can either be holistic in the case of genetic traits and experience or differentiated in the case of the elements of reason. But let’s consider physical things, which we normally take for granted. Do physical entities also have a valid claim to existence? It may be that we can only be sure our own minds exist, but even that existence depends on the idea of a physical world. Sensory feedback, along with accurate scientific measurement and experimentation, seems to establish almost certainly that that world exists independent of our imagination. So we have good reasons to grant existence to physical things, but we have to keep in mind that our knowledge of the physical world is indirect and our understanding of it is mediated through thoughts and concepts. Ironically, considering it has a secondary claim to existence, physical science has made much more definitive, precise, and arguably useful claims about the world than the functional sciences (in which I include biology and the social sciences). And even worse for the cause of the functional sciences is the problem that the existence of function has inadvertently been discredited. Once an idea, like phlogiston or the flat earth, has been cast out of the realm of scientific credibility, it is very hard to bring it back. So it is that dualism, the idea of a separate existence of mind and matter, has acquired the taint of the mystical or even the supernatural. But dualism is correct, scientific, and not at all mystical when one formulates function as a distinct kind of existence which becomes possible in a physical universe given enough feedback.

The laws of physical science provide reliable explanations for physical phenomena. But even though living systems obey physical laws, those laws can’t explain functionality. Brains employ very complex networks of neural connections. From a purely physical standpoint, we could never guess what they were up to no matter how perfectly we understood the neurochemistry. And yet, the mind arises as a consequence of brain activity, which is to say it is a process in the brain. The success of physical science coupled with the physical nature of the brain has led many to jump to the conclusion that the mind is physical, but the mind is just the functional aspect of the brain and not physical itself per se. Pursuing an eliminativist stance, the neurophilosopher Paul Churchland says the activities of the mind are just the “dynamical features of a massively recurrent neural network”[5]. From a physical perspective, this is entirely true, provided one takes the phrase “massively recurrent neural network” as a simplified representation of the brain’s overall architecture. The problem lies in the word “features,” which is an inherently non-physical concept. Features are ideas, packets, or groupings of abstract relationships about other ideas, which, as I have been saying, are the very essence of non-physical, mental existence. These features are not part of the mechanism of the neural network; they are signals or information that travel through it. This “traveling” is a consequence of complex feedback loops in the brain that capture patterns as information to guide future behavior. Any one feature can be thought about in different ways at different times by different people but will still fundamentally refer to the same feature, which means the functions it makes possible.

Some have posited additional categories of being beyond form and function, but I think any additional categories can be reduced to these two. Aristotle’s Organon enumerated all ten possible kinds of things that can be the subject or the predicate of a proposition, the first of which is substance. Substance is equivalent to what I call form. Aristotle essentially defined it as that which is not function, by saying substance cannot be “predicated of anything” or be said to “be in anything”, which are relational or functional aspects. The other nine categories are functional aspects and as such are inherently indirect and not the substance themselves, namely quantity, quality, relationship, where, when, posture, condition, action, and recipient of action. I would say that the location of a physical object in time and space is part of its physical existence, but how we describe it relative to other things is not. As Immanuel Kant would have put it, a physical thing is a noumenon, or thing-in-itself, while our description of it is a phenomenon, or thing-as-sensed. We have no direct knowledge of the noumena of the physical world, but we talk about them as phenomena all the time. Noumena are strictly physical and phenomena strictly functional. I see no reason for any additional categories. Quantity and quality may accurately describe traits of a noumenon, but they are still descriptions and so are functional; the noumenon itself just exists without regard to how it might be characterized for some purpose extrinsic to itself. Ironically, this means that science is entirely a functional pursuit, even though its greatest successes so far are about the physical world. “About” is the key word; science studies phenomena, not noumena. We are curious about the noumena, but we can never know them as we only see their phenomena. This is not due to limitations of measurement but to limitations of understanding.
Any noumenon can be understood through an infinite variety of phenomena and ways of interpreting those phenomena that model it but are never the same as it. They consequently describe some features, perhaps very accurately, but miss others, and in any event, what we think of as a feature is a generalization that makes sense to us functionally but means nothing physically.

Because information management systems physically exist, one can easily overlook that something more is happening. Let’s take a closer look at the features of functional existence. It is not that the functional exists as a separate substance in the brain, as Descartes proposed, or that it is even anywhere in the brain, because only physical things have a location. Instead, any thought I might have exists simultaneously in two ways, physically and functionally. The physical form of a thought is the set of neurons thinking the thought, including the neurochemical processes they employ. While we can pinpoint certain neurons or brain areas that are more active when thinking certain thoughts, we also know that the whole brain (and all the control systems of the body) participates in forming the thought, because everything is connected. Everything is connected because experience and intuition are a gestalt that draws on all our knowledge, and logical reasoning uses closed models, but those models are built on concepts that connect back to all our knowledge. The functional form of a thought is the role or purpose it serves. Because we can refer to this purpose, and must do so to understand it, i.e. relate it to all other purposes, it springs into being as a kind of thing, something that can be the subject or the predicate of a proposition. Functionality in minds has a practical purpose (even function in mathematics must be practical in some context, but here I refer to evolutionary practicality). A thought’s purpose is to assist the brain in its overall function, which is to control the body in ways that stand to further the survival and reproduction of the gene line. “Control” broadly refers to predictive strategies that can be used to guide actions, hopefully achieving desired outcomes more often than chance alone would. And even seemingly purposeless thoughts participate in maintaining the overall performance of the mind and can be thought of as practice thoughts, if nothing else. For a predictive strategy to work better than chance, it needs to gather information, i.e. to correlate past situations with the current situation based on similarities. The relationship between information and the circumstances in which it can be employed is inherently indirect because information is indirect; it is necessarily about something else. So we can conclude that function acts in our physical world to control the actions of living things, or, by extension, in information management systems we create, to control whatever we like.
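The idea of a predictive strategy that correlates past situations with the current one can be sketched as a toy program (a hedged illustration only; the situation features, the similarity measure, and the nearest-neighbor rule are my own illustrative choices, not anything the text proposes):

```python
# Toy predictive strategy: recall the most similar past situation
# and predict that its outcome will recur.
past = [  # (situation features, observed outcome)
    ((0.9, 0.1), "flee"),
    ((0.2, 0.8), "feed"),
    ((0.8, 0.3), "flee"),
]

def similarity(a, b):
    """Negative squared distance: larger means more alike."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def predict(current):
    """Correlate the current situation with stored past ones."""
    best = max(past, key=lambda rec: similarity(rec[0], current))
    return best[1]

print(predict((0.85, 0.2)))  # resembles the "flee" situations -> flee
```

The stored records are information in exactly the indirect sense described above: they are not the past situations themselves, but are about them, and their only value lies in guiding a future action.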

  1. A Mathematical Theory of Communication, Claude Shannon, 1948
  2. Science and Complexity, Warren Weaver, 1948
  3. Dam Building: Instinct or Learned Behavior?, Beaver Economy Santa Fe D11, Feb 2, 2011
  4. Nicaraguan Sign Language
  5. Neurophilosophy at Work, Paul Churchland, Cambridge University Press, 2007, p. 2
