Mission Statement: The Need to Know Our Own Minds

We want to live our lives responsibly and effectively, but what makes us think we are qualified to do it? We depend on a combination of nature and nurture to give us the confidence to act. Our lives are filled with actions, and we need enough confidence in each of them to carry it out. So, for better or worse, we become confident about the quality of our knowledge and move forward. That’s a justification, and a good one since we have to act, but it sets no standard for the level of responsibility or effectiveness we will achieve. Of course, we need standards because how good a job we do affects the quality and length of our own lives, the lives of those around us, and the health of the planet itself. So we create millions of standards to improve our responsibility and effectiveness in as many ways and across as many subjects as we can, and we enforce them through good character, peer pressure, and laws.

My mission here is to go beyond this ad hoc network of improvements to establish a scientific basis for responsible and effective action. To scientifically understand how and why we act, we need to know how and why our minds work, that is, the purpose and methods of the mind. This will put us in a better position to say what it even means to be responsible and effective and whether those objectives are worthwhile, and, assuming they are, it will position us to start finding better paths to achieve them. The scope of all possible actions is unlimited, and I am not proposing that one perfect scientific path forward will emerge. But I think we will quickly be able to see how many actions, and the standards that support them, are flawed, and how we can develop strategies compatible with our nature and nurture that would work better. I think we can increase our confidence that we know what we are doing if we understand our motives and goals and how we set and reach them.

I’m going to address this subject in two parts, through two books. The first book will establish a scientific basis for the mind, and the second will investigate how we can use it better. It is important to know that my goal is to write the second book, so you understand my focus in the first book. I am not so much interested in the details of how neurotransmitters or 3D vision work as in what kinds of functions they make possible. Also, our technical knowledge is still pretty rudimentary, but our knowledge of what we do is pretty good. We have a fair idea of how sensory perceptions get started, but we are harder pressed to explain how they feel (qualia), or to explain awareness, thinking, understanding, emotion, purpose, and morality. These things are not physical, which means that to the hard sciences they don’t exist. But they do exist, and if the framework of science is not big enough to encompass them, then it must be expanded. Of course, we address them through the social sciences, but these have deserved reputations for being soft because they are built on a quicksand of assumptions (or standards) about human nature that must be taken as givens. To really understand human nature we need to link the hard and soft sciences. We have to find a hard basis for soft things.

This is the purpose of cognitive science, and so that is what I am doing here. But I am trying to relaunch cognitive science with a fresh start because it has gotten bogged down in classical quandaries and neurochemical details. I think the available evidence can support much stronger and broader theories than have yet been proposed, so I am going to propose some. I remember being entranced as a teenager by the idea of Hari Seldon’s psychohistory in Isaac Asimov’s Foundation trilogy because it raised the prospect of understanding human nature. Psychohistory made it possible to predict what we will want and how we will get it, which allowed the future to be foretold. That is an extreme and unattainable use of the understanding of human nature. My more practical goal is to understand what we should want and how we should get it. Two short books on the subject won’t give us all the answers, but I hope they will help inspire us to start devising methods in which we can justifiably have greater confidence than the methods that have been handed down to us.

My investigation into the scientific nature of our mental states will reveal that they really do exist and are not just illusions or rationalizations our brain creates to explain why we did what we did after the fact. It will reveal that the fabric of those states is information, which can and must be represented physically (using 0’s and 1’s in computers and neurochemistry in humans), but which is fundamentally not a physical substance. Understanding is created from information and is new information itself. The mind is a deeply interconnected web of information all the way down, and the brain is the mechanism that makes it possible. Understanding the mind becomes an exercise in figuring out what kinds of information and information processing the mind uses to accomplish what purposes.

In our native configuration, we know how to use our minds, but we don’t understand how they work. From science, we are pretty sure about a few more things, most notably that our minds arise from neural activity in our brains and, to a lesser degree, our whole nervous system. Beyond that, the rest of our bodies and what we do in the world stimulate our nerves. We know minds are not supernatural, yet attempts to characterize them as purely physical seem unconvincing. So we hit a wall. I’m going to take us through that wall by proposing dualism, meaning the existence of two independent kinds of things, physical and informational. Classical dualism is mystical, and I am not reviving that. My dualism is more about the value of taking different perspectives, two in this case, and about the value and nature of perspectives themselves, which are useful informational structures.

As I devise a basis for understanding what understanding itself is, I will start to weave it into what the mind is up to and how it goes about doing it. I will try at all points to keep this new basis of understanding compatible with the available scientific evidence. Since I am not an experimentalist, I am necessarily a theoretical cognitive scientist. All that distinguishes me from an armchair philosopher is my methods, which I aim to make as rigorously scientific as possible. This means I will entertain a discussion about what science and the scientific method even are, and in so doing I will define them more clearly than is usually done, and I will expand their definitions while I am at it. That said, because this is a very high-level book and I have limited time, perspective, and space, I am aiming for a level of rigor suitable to a book of this scope directed at a general audience.

The Mind Matters: The Scientific Case for Our Existence

Scientists don’t know quite what to make of the mind. They are torn between two extremes: the physical view that the brain is a machine and the mind is a process running in it, and the idealist view that the mind is non-physical or trans-physical, despite operating in a physical brain. The first camp, called eliminative materialism (or just eliminativism, materialism, or reductionism), holds that only the physical exists, as supported by the hard scientific theories and evidence of physics, chemistry, and biology. At the far end of the latter camp are the solipsists, who hold that only one’s own mind exists, and a larger group of idealists, who hold that one or more minds exist but the physical world does not. Most members of the second camp, however, acknowledge physical existence but think something about our mental life can never be reduced to purely physical terms, though they can’t quite put their finger on what it is. Social science and our own intuition both assume that mental existence is something beyond physical existence. A materialist will hold either that (a) there are no mental states, just brain states (eliminativism), or that (b) there are mental states, but they can be viewed as (reduced to) brain states (reductionism). An idealist will hold that mental states can’t be reduced, which means they have some kind of existence beyond the physical, and that the physical might exist but, if so, can never be conclusively proven. While the hard sciences’ track record of reducing phenomena to the physical appears impressive at first glance, their explanations are not physical, and neither are appeals to functionality (traits) in biology. Materialists think explanations are physical, being brain states, and would characterize biological traits as states in a (slower-running) evolutionary system. While it is true that these functions operate in physical systems, the systems are not the functions; they only make them possible. The functions of brains and DNA are abstracted away from their underlying physical mechanisms. Such “abstracting away” is called emergence in philosophy and refers to situations where even full explanations of the underlying mechanisms can’t provide the explanatory power needed to account for certain behaviors that “emerge” from them. What emerges from brains and DNA is functionality, and the reason one can’t explain functionality by understanding the mechanism below is that functionality is a generalized conclusion and not a specific mechanism. Generalization is a process in which information is created by analyzing data for patterns. Information is a non-physical representation of patterns and their logic that is stored physically. I did not invent the idea that functional existence is independent of physical existence, but I do feel it is an overlooked and neglected existential state that is central to understanding the mind, so I will be developing and expanding on it for much of this book.

Before data came along, the world was entirely physical; particles hit particles and followed scientific laws of the sort provided by physics and chemistry. Data is a second-order phenomenon that leverages small side effects of events without changing the events themselves. It is theoretically possible that a logical analysis of such side effects would reveal patterns in them that could be used to predict future events. The ability to predict the future would carry such profound benefits that any mechanism capable of doing it could use it to further its own ends, most notably to protect its own survival. At least two different mechanisms that predict the future have evolved, the first entirely reactive and the second proactive. Natural selection is a reactive process that measures the side effects of events blindly. It collects data about the value of a genetic trait in influencing external events by statistically testing how well variations of the trait perform against each other. Because this statistical data is assessed through reproductive fitness, it takes years to millennia for adaptations to spread. Also, evolutionary “progress” is not directed but is limited to filling ecological niches with appropriate species. Animal brains, on the other hand, provide a proactive means of analyzing patterns in data, connecting side effects to their underlying events to establish general correlations between the two. Where a passive mechanism is general but blind to causation, this active connecting of side effects to effects actually defines causation: it establishes a predictive link between the inputs fed to a predictive model and the outputs one can expect. These data analysis, modeling, correlation, and matching operations succeed based primarily on the qualities of the theoretical analysis and only secondarily on the qualities of the computing platform. The theoretical analysis, being an exercise in logic over abstract operands, is strictly nonphysical, while the computing platform is strictly physical. The operands are abstract because they are not tied strictly to any physical referent but are generalized entities (let’s call them concepts) that may be attached to a variety of possible referents depending on how we correlate and match. Put another way, indirection causes emergence. It is not at all the case that something has arisen from nothing; it is only the case that something has arisen that doesn’t directly connect back to what it came from.
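
To make this picture concrete, here is a toy sketch in Python. The observations, the “flash predicts thunder” pairing, and the threshold are all invented for illustration; the point is only that a mechanism that tallies past side effects can come to predict future events better than chance.

```python
# Toy illustration: predicting an event from a correlated side effect.
# The data and threshold are invented; a real mechanism (genes, brains)
# would gather and weigh its data very differently.

# Past observations: (side_effect_seen, event_followed?)
history = [("flash", True), ("flash", True), ("no_flash", False),
           ("flash", True), ("no_flash", False), ("flash", False)]

def predict(side_effect, threshold=0.5):
    """Predict whether the event will follow, from past co-occurrence."""
    outcomes = [followed for seen, followed in history if seen == side_effect]
    if not outcomes:
        return None  # no data yet: prediction is impossible, not just wrong
    return sum(outcomes) / len(outcomes) > threshold

print(predict("flash"))     # True: the side effect carries information
print(predict("no_flash"))  # False
```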

Because information is decoupled this way from the physical systems that employ it, we need to grant it, or more generally the function it makes possible, a status of existence separate from physical things. I would call the two kinds physical and mental, but mental is a special case of how brains employ information, and this realm of existence extends to all abstracted entities used by information management systems to achieve function, so I will call them physical and functional. So we can conclude that scientific explanations themselves may secondarily consist of brain states, but primarily they are an abstract network of concepts constituting an informational representation of possible sets of affairs. That is, their functional existence is paramount, and how they manifest physically in minds, in computers, or on paper is incidental to this function, whose existence can be discussed quite independently of any physical expression.

Having established a basis for the existence of functional entities, I will now turn my attention to physical entities. Do physical entities also have a valid claim to existence? Our continued existence as living creatures certainly seems to depend on recognizing sensory feedback as evidence of a fixed external world, and extensive analysis, both personal and scientific, has established the high reliability of viewing this external world as fixed and independent of our senses. So we can and should grant physical things a special existential status, but we should not forget that this status is necessarily provisional, as all our knowledge of the physical world, no matter how well corroborated, is ultimately indirect, mediated by our minds through thoughts and concepts, which exist functionally.

Ironically, considering it has a secondary claim to existence, physical science has made much more definitive, precise, and arguably useful claims about the world than the functional sciences (in which I include biology and the social sciences). And even worse for the cause of the functional sciences is the problem that the existence of function has inadvertently been discredited. Once an idea, like phlogiston or the flat earth, has been cast out of the realm of scientific credibility, it is very hard to bring it back. So it is that dualism, the idea of a separate existence of mind and matter, has acquired the taint of the mystical or even the supernatural. But dualism is correct, scientific, and not at all mystical when formulated correctly. The eliminativist idea that everything that exists is physical, aka physical monism, is not exactly wrong, because all physical things that exist are physical (by definition); all objects in the universe are wholly physical. But we can imagine something that doesn’t exist, and although our imagination lives physically in our brains, what we imagine still has a hypothetical existence whether we are thinking about it or not, and that hypothetical existence is the essence of functional existence. I can think of an apple, and you can think of an apple, and completely different brain processes (in the sense that one is here and the other is there) happen. But our distinct concepts of apple will share many functional similarities, and the value of the concept APPLE (I will use the customary convention of capitalizing concepts) ultimately derives from its role in affecting our function and what we will do, not from the physical mechanisms we use to conceive of apples in our brains, which are, in this larger sense, irrelevant. Relevance itself is a fundamental property of function that is meaningless physically.

The laws of physical science can provide very reliable explanations for all physical phenomena. We are finding it very challenging to explain all the mechanisms that power biological systems, and our brains in particular, because they employ very complex electrochemical reactions and, in the case of brains, complex networks of neural connections as well. It’s just very hard to unravel. But we are fairly sure that the mind arises as a consequence of brain activity, which is to say it is a process in the brain. The success of physical science coupled with the physical nature of the brain has led many to leap to the conclusion that the mind is physical, but if we take the mind to represent the functional aspect of the brain, then my arguments above show that the mind is not physical. Pursuing an eliminativist stance, the neurophilosopher Paul Churchland says the activities of the mind are just the “dynamical features of a massively recurrent neural network”1. From a physical perspective, this is entirely true, provided one takes the phrase “massively recurrent neural network” to be a simplification of the brain’s overall architecture. The problem lies in the word “features,” which is an inherently non-physical concept. Features are ideas, packets, or groupings of abstract relationships about other ideas, which, as I have been saying, are the very essence of non-physical, mental existence. These features are not part of the mechanism of the neural network; they are signals, or information, that travel through it. The same feature can be thought about in different ways at different times by different people but will still fundamentally refer to the same feature, i.e. the same functions. This “traveling” is a consequence of complex feedback loops in the brain that capture patterns as information to guide future behavior. This information is the basic unit of function, which is decoupled from, and so exists independently of, the physical. Physical and functional existence together form a complete ontology (philosophy of existence) which I call form and function dualism. While philosophers sometimes list a wider variety of categories of being (e.g. properties, events), I believe these additional categories can all be reduced to either form or function and no further.

Because physical systems that use information (which I generically call information management systems) have an entirely physical mechanism, one can easily overlook that something more is happening. Let’s take a closer look at the features of functional existence. Functional existence is not a separate substance in the brain, as Descartes proposed. It is not even anywhere in the brain, because only physical things have a location. Any given thought I might have simultaneously exists in two ways, physically and functionally. The physical form of a thought is the set of neurons thinking the thought, including the neurochemical processes they employ. While we can pinpoint certain neurons or brain areas that are more active when thinking certain thoughts, we also know that the whole brain, and arguably the whole body, participates in forming the thought. The thought also has a function, being the role or purpose it serves. This property, clearly not physical and seemingly intangible, is nevertheless tangible (touchable) through the impact it can have in aiding prediction, so it is a kind of existence we can talk about and which affects us. A thought’s purpose is to assist the brain in its overall function, which is to control the body in ways that stand to further the survival and reproduction of the gene line. “Control” broadly refers to predictive strategies that can be used to guide actions, hopefully to achieve desired outcomes more often than chance alone would. And even seemingly purposeless thoughts participate in maintaining the overall performance of the mind and can be thought of as practice thoughts if nothing else. For a predictive strategy to work better than chance, it needs to gather information, i.e. to correlate past situations to the current situation based on similarities. The relationship between information and the circumstances in which it can be employed is inherently indirect, and abstractly so (abstract: “not based on a particular instance; theoretical”, meaning the information is general and not specific). We store this general information in one of two ways, genetically or neurally (i.e. via a neurochemical memory system). As noted above, genetic information is collected and processed reactively through heritability, and neural information is processed proactively using neural data analysis. Genetic information directs all the systems of the body, but it also specifically directs the control systems of the brain to give us instincts, which go beyond simple urges to include things like a visual and sensory understanding of our surroundings. We also create neurally-stored memories of our personal experience to record our interactions with the world. Not everything our instincts and memory enable us to do will be useful, but they do enable us to do some useful things, and this capacity is the measure of their existence as function.

The brain’s hardware is only very indirectly connected to its function, the same way software’s function is only indirectly connected to the computer hardware on which it runs. A nearly endless chain of genetic feedback stretching back millions of years designed our brains to achieve functional objectives by progressively selecting and adapting physical mechanisms based only on their ability to help. This kind of results-oriented design can produce complex mechanisms that are hard to unravel, but functions with specific mechanical and algorithmic needs can be pinpointed. The nerves that conduct sensory information from the eyes, ears, nose, etc. to specific brain areas dedicated to those senses have been identified. We have a sense of the kinds of processing the brain does to see color and 3D, detect edges, recognize objects, fill in our blind spots, etc., so the localized study of the mechanisms of vision and the other senses may well come to provide a comprehensive explanation of their function as well. All of the brain’s genetically-based functionality will eventually be explained pretty well by this kind of physical study alone, but we are still quite a ways off from that point, and what our minds do is not driven solely by genetics. Everything we’ve learned stretching back to our conception is stored in some neurochemical way which is itself entirely genetic and explainable, but the contents of what we have learned are, like software, not knowable by studying our gene-based hardware. Until recently our sole method of accessing the brain’s informational content was through use: by observing behavior, listening to verbal accounts of knowledge, or contemplating the thoughts in our own heads. But now several techniques have been developed to see which brain areas, and sometimes which individual neurons, activate as people do or think specific things. While this is certainly a good first step toward reading a person’s mind, we are a long way from reading thoughts objectively as well as we can subjectively. I can certainly imagine that a machine learning algorithm given such information could become quite good at predicting what we are thinking about and even what we will do next, but this too would fall far short of understanding why.

Function in the mind doesn’t arise through a single mechanism as it does in a computer’s CPU, one instruction after the next. It is a highly distributed process in which genetically supported functions are activated chemically, electrochemically, or even mechanically, while neural functions are activated neurochemically. Even so, it is useful to draw an analogy between the brain’s mechanisms and computer hardware and between learned knowledge and software. We can’t completely distinguish the hardware and software aspects of any brain function, but we can do so approximately. Knowledge extends the innate power of the brain the same way an advanced civilization outperforms an aboriginal one. Extending the analogy, software is more than just a series of instructions. Each instruction embeds functionality designed into the hardware, and instructions are then bundled into operating system subroutines and library functions that make high-level functions possible. Our brains perform the low-level functions subconsciously, which is to say outside direct conscious awareness or control. From habit, we can learn to perform even high-level functions with little or no direct conscious awareness or control. We can talk about the high-level functions of the mind or of software independently of the hardware or low-level routines, but to understand them well we need to know what the underlying mechanisms are doing. The physical architectures that make function possible both limit and enable what is possible, but function still retains a fundamentally non-physical character: what function makes possible could always be accomplished using different hardware. Also, form doesn’t drive function; function drives function, meaning that what needs to be done to achieve better and better results is the force that impels evolution and human design, not the toolkits available to do it. Consequently, I will be focusing on the mind more than the brain, i.e. the function more than the form, as I develop explanations.

Before I set out to develop theories to explain the high-level functions of the mind, I’d like to bring up some caveats. First, theories themselves are high-level functions of the mind, so I will be explaining the mind by using the mind. This is a bit of a catch-22, but we don’t hesitate to use theories to explain physics and chemistry, so what is good for the goose must be good for the gander. Second, I have admitted that I am going to propose theories to explain function without a detailed theory to explain the physical brain mechanism behind it. Again, I claim that all of science proceeds this way, starting with sketchy explanations of mechanisms and refining them as new evidence comes in. So long as my theories are plausible given our physical knowledge, they are sufficient. And third, there are many ways to explain anything, each creating a perspective that covers different aspects (or the same aspects in different ways). Science aims to merge mutually consistent perspectives into overarching theories based on a consistent set of underlying evidence, but it is not always practical or possible to merge them all at a given point in time. So we should expect and encourage a proliferation of theories on the frontier. In the case of the mind, there are still many ways we could describe the master control program and high-level functions of the brain, and I will present one scheme here, as unified as I can make it.

A stream has no purpose; water just flows downhill. But a blood vessel is built specifically to deliver resources to tissues and remove waste. Lifeforms have many specialized mechanisms to perform specific functions, but a single animal body can’t perform all its support functions simultaneously, excepting the most degenerate, sessile animals, like sponges. A mobile animal needs to prioritize where it will go and what it will do. This functional requirement led to the evolution of animal brains, which, as noted above, collect information about their environment through senses and analyze it to predict likely outcomes of different interactions. This information can be used to plan actions. This kind of dynamic, proactive data analysis generalizes functional aspects of the environment into concepts which can be manipulated functionally as distinct entities. An animal can thus interpret sensory data as food, friend, or foe nearly instantaneously. In doing so, it has moved out of the realm of physical interactions into the realm of functional interactions, which are cut off from the physical by a barrier of indirection. Any functional interaction requires sensory processing that leads to recognition using stored models, evaluation of possible reactions, and prioritization against other reactions. These are not just “knee-jerk” reactions; they require complex information processing. Most of the information processing needed to do them is innate, and in some simple animals it may be entirely innate, but the advantages that learning and memory provide in tuning responses to specific situations are so great that nearly all animals have some capacity to learn. Animals are better equipped to learn some things than others. For example, humans more readily store sensory experiences than abstract information and have distinct innate mechanisms for storing procedural vs. declarative memories.

What interests us most about the mind is not the huge fraction we generally take for granted, which is pretty similar across all mammalian brains; it is the part that makes humans special. The richness of our lives and even our motivation for living depend on our senses, motor skills, emotions, drives, and awarenesses (of time, place, etc.), so I don’t want to minimize their significance to our lives or their contribution to the mind overall. All mammals (and to some degree all sentient creatures) have those things, but they are not enough to form our sense of self, a sense which we are pretty sure is stronger in people than in animals. While arguably sheer arrogance, I am going to claim that this sense in humans derives from our greater capacity for abstract thought. Beyond just controlling themselves and their lives, people can also freely associate seemingly any ideas in their heads with any others ad infinitum, giving us the potential to see any situation from any angle. Whether and how this association is really free is a matter I will explore more later, but our flexibility with generalizations lets us leverage knowledge in vastly more ways than if we only applied it narrowly, which we can infer is what animals do based on their behavior. Some consider language to be at the root of human intelligence, and some look to our facility with procedural or spatial thinking, but the lever that drove these was the evolutionary expansion of abstract thought.

Though our mechanisms to learn and recall are innate, we have created an entirely artificial world of cultural knowledge that supports our civilization. We each adopt our share of cultural knowledge, but we also develop a store of personal knowledge specific to our own interactions with the world. What is all this knowledge and what makes it abstract? While we will in time come to understand the neurochemical mechanisms that support abstract thinking and memory, just as we will for vision processing, knowing how they work will say much less about what ideas we have created with them than vision processing says about what we see. This is because abstraction opens the door to the imagination, letting us create fictional worlds where anything might happen, while vision aims for as much realism as possible. Consider an analogy. If vision and abstract thinking are both higher-order functions like software libraries, then the vision library performs a specific function while the abstract thinking library provides a general toolkit one can use to perform a wide, even unlimited, range of functions. This library performs a higher-order process we call “thinking” that employs abstract data we call “concepts” to model general-purpose problems. General-purpose computer toolkits like this include programming languages (e.g. C++, Java) and software engines, of which there are now hundreds for developing games. The actual instructions computers use are in machine language, but we program them in programming languages or engines which package up commonly used functionality into a convenient and seamless environment. In the same way, our reasoning capacity provides us with a convenient and seamless way to use many of our brain’s underlying capabilities without worrying about the details. Although convenient, this also means we can’t “think our way out of the box”; we can only think using the small, tailored subset of functions available to our conscious minds.
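
The library analogy can be made concrete with a toy Python sketch (my illustration, not a claim about neural implementation): one routine does a single fixed job, the way I am describing vision, while a general toolkit composes arbitrary operations over arbitrary inputs, the way I am describing abstract thinking.

```python
# Toy contrast: a fixed-function "vision" routine vs a general-purpose
# "abstract thinking" toolkit. Both examples are invented for illustration.

def detect_edges(image):
    """Fixed-function: does one specific job, like a vision library."""
    return [abs(a - b) for a, b in zip(image, image[1:])]

def think(concepts, *operations):
    """General toolkit: applies any chain of operations to any concepts."""
    result = concepts
    for op in operations:
        result = op(result)
    return result

# The same toolkit handles unrelated problems:
print(think([3, 1, 2], sorted, tuple))   # (1, 2, 3): ordering concepts
print(think(["apple", "sphere"], set))   # grouping concepts
print(detect_edges([0, 0, 9, 9, 0]))     # [0, 9, 0, 9]: edges only, ever
```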

While our linguistic, procedural, and spatial thinking leverage our enhanced facility for abstraction, they are still innate capacities. The crown jewel of our abstraction toolkit is the ability to reason. Reasoning is what extends our power to know things to knowing that we know them. Reasoning pushes the level of indirection inherent in all knowledge back one or more levels, to become self-referential or arbitrarily referential. Reasoning doesn’t just let us create new abstract concepts; it lets us connect them logically through deduction (entailment; cause and effect), induction (weight of evidence), or abduction (recognition or intuition). The ability to reason abstractly is the critical skill we would expect to find in a robot seeking to pass the Turing test, which entails convincing a human that it is human via written communication. If a robot could reason abstractly as well as we can, we would have to grant it intelligence. It would need to understand and have programming for senses, emotions, drives, etc., as well, but I posit that reasoning is the most significant and overarching skill.
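
A minimal sketch can illustrate what I mean by connecting concepts through typed links. The concepts and links here are invented examples, and real conceptual networks are of course vastly richer.

```python
# Toy sketch: concepts as nodes connected by typed links
# (deduction, induction, abduction). All examples invented.

links = [
    ("RAIN", "WET_STREET", "deduction"),   # rain entails a wet street
    ("CLOUDS", "RAIN", "induction"),       # clouds usually precede rain
    ("WET_STREET", "RAIN", "abduction"),   # a wet street suggests rain
]

def infer(start, link_type):
    """Follow all links of one type outward from a concept."""
    return [dst for src, dst, kind in links
            if src == start and kind == link_type]

print(infer("RAIN", "deduction"))        # ['WET_STREET']
print(infer("WET_STREET", "abduction"))  # ['RAIN']
```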

In granting us the ability to reason abstractly, evolution opened a door for us to a much larger space than the physical world: the unbounded world of imagination. We know animals are sentient, feel pain, and experience the world in much the same way we do, but we rather doubt their imaginations are really comparable to our own. When religions refer to the divine, it is this attribute in ourselves to which they refer. This nonphysical capacity, abstract reasoning, gives us arbitrary power over nature and our own fortunes, both in an imaginary sense and, through our actions, in a physical sense. It is hard to keep in mind, and counterintuitive, but the imaginary sense matters more than the physical sense. That is why we turn to religion: to remind us that the greatest part of us is not our physical bodies and deeds, but our internal world and the ways it connects us to each other and all living things. I am not saying this for the sake of spirituality or religion, but only to acknowledge a fact about our existence. Cogito ergo sum is not just a cool idea without much practical application; we are primarily beings of thought and only secondarily bodies in an environment. Minds made capable of self-awareness through a sufficiently powerful facility for abstraction and reason have to come to terms with this fact. We are not just physical organisms preserving a gene line; we are now also mental organisms floating in seas of possibility, detached from time and space through the force of conjecture. Because ideas are timeless, we are timeless.

Each religion captures the essence of this idea in its own doctrinal way, but they have in common a reverence for this divine capacity, because the important point here is that there is more to us than we can ever see. Our imaginations and the ties that bind us to the world run deeper through the planes of possibility than we can perceive through a single stream of consciousness, but they are the biggest part of us, not our bodies. We sense it, we are humbled by it, and we take guidance from it, because our actions are not just in the service of evolutionary (physical) goals but also in the service of the network of this abstract power cultivated by nature and nurture and handed down to us through countless generations. Is it perhaps going too far to suggest that minds, and possibly free will, elevate us above evolution’s thrall? As we become post-evolutionary, about to design the cybernetic organisms that will replace us, discussions about the conventional mechanisms of evolution become moot. Ultimately, we transcend evolution for the same reason we have free will: because Pandora’s box has been opened and anything might happen. But that is a deeper discussion I will develop more later.

Let me back up a bit. Our mental states are special cases of function that are highly tuned to meet our needs, but to understand them we need to dig a lot deeper into the existential nature of function. I initially said that information is the basic unit of function, and I defined information in terms of its ability to correlate similar past situations to the current situation to make an informed prediction. This strategy hinges on the likelihood that similar kinds of things will happen repeatedly. At a subatomic level the universe presumably never exactly repeats itself, but we have discovered consistent laws of nature that are highly repeatable, even for the macroscopic objects with which we typically interact. Lifeforms, as DNA-based information management systems, bank on the repeatable value of genetic traits when they use positive feedback to reward adaptive traits over nonadaptive ones. Hence DNA uses information to predict the future. Further adaptation will always be necessary as circumstances change, but some of what has been learned before (via genetic encoding) remains useful indefinitely, allowing lifeforms to expand their genetic repertoire over billions of years. Some of this information is made available to the mind through instincts. For example, the value of wanting to eat or to have kids has been demonstrated through countless generations, and having a mental awareness of these goals and their relative priority is central to the mind’s function. In the short term, however, that is, over a single organism’s lifetime, instincts alone don’t and can’t capture enough detailed information to provide the level of decision support animals need. For example, food sources and threats vary too much to ingrain them entirely as instincts. To meet this challenge, animals learn, and to learn an animal must be able to store experiential information as memory that it can access as needed over its lifetime. In principle, minds continually learn throughout life, always assessing effects and associating them with their causes. In practice, the minds of all animals undergo rapid learning phases during youth followed by the confident application of those lessons during adulthood. Adults continue to learn, but reacting quickly is generally more valuable to adults than learning new tricks, so stubbornness overshadows flexibility. I have defined function and information as aids to prediction. This capacity of function to help us underlies its meaning and distinguishes it from form, even though we need mechanisms (form) to perform functions and store information. But form and function are distinct: the ability to predict has no physical aspect, and particles, molecules, and objects have no predictive aspect.

With that established, we can get to the most interesting aspect of the mind, which is this: brains acquire and manage information through reasoning and intuition, but minds only exist because of reasoning (I will get to why in a moment). Reasoning is the attribution of causes to effects, while intuition covers all information acquired without reasoning, e.g. by discerning patterns and associations. So reasoning is how the mind employs a causative approach, while intuition is how it uses a pattern-analysis approach. All information is helpful either because it connects causes to effects or because it finds patterns that can be exploited. The first approach is the domain of logic, while the second is the domain of data analysis. Reasoning is conducted using atomic units of information called concepts. A concept is a container or indirect reference that the mind uses to represent (stand for) something else. Intuition doesn’t use atomic information but rather stores and extracts information based on pattern analysis and recognition. For example, recognition is the subconscious matching of input data against our memory that causes a single concept to pop out, e.g. a known object or word. We often use words to label concepts, though most are unnamed. Every word actually refers to a host of concepts, including its dictionary definition(s) (approximately), its connotations, and also a constantly evolving set of variations specific to our experience. Concepts are generalizations that apply to similar situations. BASEBALL, THREE, and RED can refer to any number of physical phenomena but are not physical themselves. Even when they are applied to specific physical things, they are still only references and not the things themselves. Churchland would say these mental terms are part of a folk psychology that makes sense to us subjectively but has no place in the real world, which depends on the calculations that flow through the brain but does not care about our high-level “molar” interpretation of them as “ideas.” Really, though, the mysterious, folksy, molar property he can’t quite put his finger on is function, and it can’t be ignored or reduced. Brains manage information to achieve purposes, and only by focusing on those purposes (i.e. by regarding them as entities) can we understand what the brain is “really” doing. Concepts, intuition, and reasoning are basic tools the brain uses to achieve its primary function of controlling the body. But what is it about concepts and reasoning that creates the mind? Why can’t we just go about our business as unaware zombies?
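
To illustrate the idea of a concept as a container or indirect reference, here is a minimal Python sketch (mine, purely illustrative): the concept RED applies to many possible referents without being any of them.

```python
# Toy sketch: a concept as an indirect reference. The class and examples
# are invented for illustration, not a model of neural representation.

class Concept:
    """A generalization: stands for referents without being any of them."""
    def __init__(self, name, test):
        self.name = name      # the label, e.g. the word we attach to it
        self.test = test      # what it takes to count as an instance

    def recognize(self, thing):
        return self.test(thing)

RED = Concept("RED", lambda thing: thing.get("color") == "red")

apple = {"color": "red", "kind": "fruit"}
ball  = {"color": "blue", "kind": "toy"}

print(RED.recognize(apple))  # True: the concept applies, yet isn't the apple
print(RED.recognize(ball))   # False
```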

The mind and the brain are not the same. Just distinguishing form from function is not enough to separate the mind from the brain. Holistically, all brain function can be called mind. That is, the actions our body takes as directed by the brain accomplish the functions that define the mind. In some cases, those directions may simply result from chemical triggers, e.g. hormone regulation, and not neural processing, but mental control is ultimately a combination of genetic and neural information processing, so holistically we need to lump all this feedback-mediated control under the label “mind.” But this holistic perspective is not the one of most interest to us. When we think of the mind, we are mostly thinking of our first-person capacity for awareness, attention, perception, feeling, thinking, will, and reason. Collectively, we call these properties or mental states agency, and we call our human awareness of our own agency self. We currently have no scientific explanation for why we seem to be agents experiencing mental states, while computer programs (and zombies), which have no such states, can still do things. Some scientists have concluded that this means our subjective lives are illusions that are immaterial to our behavior, and evidence that our conscious minds only become aware of our actions slightly after the neural signals to perform them have fired seems to bolster this view. This conclusion is wrong; our subjective states are the product of very real processing happening in our brains, and this processing does control what we do and is not at all incidental to it. A subset of our brain’s information processing is dedicated to creating the mental states we experience, because agency is an effective way for animals to use reasoning, arguably the most effective way.

In more detail, concepts are generalizations, and generalizations are fiction, literally figments of our imagination. These figments form imaginary worlds composed of concepts defined in terms of other concepts via deductive, inductive, and abductive links. The links are also imaginary; causation and patterns carry generalized information about things and events and are not the things or events themselves. This all happens with or without consciousness, because simulating the world via imaginary worlds is (arguably) the best route to predicting what will happen in the real world. One of these imaginary worlds — let’s call it the mind’s real world — has a special status: it is our best guess from current sensory information about what is happening in the real world. It is still imaginary; we are really inside our heads and not out in the world. Since the mind’s real world only exists in the present (and also the past to the extent we remember it), we can only predict the future using imaginary worlds, but we will try to choose imaginary worlds that seem plausible when making decisions about the real world. It is ultimately the mind’s whole job to make decisions based on such predictions, and one of the tools it uses is reason. But reason only works in a single stream, drawing conclusions one at a time from premises. Physical reality, on the other hand, unfolds in parallel because the waves and particles that comprise it act independently. Brains are physically located in bodies that also only work in a single stream, although they can potentially perform several activities concurrently if they can operate their extremities independently. The octopus, in particular, has a fairly independent brain for each arm, which makes sense because it is often helpful and practical for the arms to pursue independent objectives. But for most animals a single stream of reasoned decisions governing a coordinated stream of bodily actions works best. This is not to say that all decisions need reason or the active oversight of reason. Many, even most, of the control functions of the brain either happen independently of reason or are habituated and delegated to subconscious control outside or at the periphery of conscious awareness. This is also not to say that our reasoned decisions control or supersede all subconscious functions; quite the contrary, we are generally quite satisfied to let subconscious controls do what they do best. But the point I am driving toward is that there are good reasons for this top-level single stream of reasoned decisions to be processed as the subjective mental states I am calling agency. First, most bodies act as single agents most of the time, i.e. achieving one functional goal at a time. So it makes sense to generalize about an animal that it is currently eating and defending its food; these characterizations create a story that can help both the animal and others observing it make reasoned decisions. The mental capacity to recognize agency in others is called “theory of mind” (or TOM; not to be confused with the theories I am presenting about the mind). Second, the top-level reasoning process that either makes decisions or confirms decisions made subconsciously is itself an agent (the conscious mind) that must reconcile its book of reasoning with the single stream of actions the body performs.
Our feeling of free will leads us to believe that we willfully selected the actions we undertook, as opposed to simply observing actions our subconscious minds made for us. This is not because we actively make every decision on the spot using rational processes; it is because we can. The conscious mind is a supervisory process; it ultimately delegates all the nonsupervisory work to underlings, i.e. to subconscious processes. Most of what we do during the day is fairly habitual; we have previously established many techniques and preferences for things we like from a supervisory perspective, and we only have to nudge our minds in the right direction and subconscious processing takes care of the details. Even the explicit decisions we make throughout the day are enacted because our conscious minds “ask” our bodies to do our bidding, and subconscious processes which have been habituated to stimulate our nerves in just the right ways actually move our extremities. Experiments demonstrating that the order to hit a button when we see a specific event comes from the subconscious, and is only “made” by our conscious minds a split second later, should not surprise us at all. What has really happened is that we consciously habituated (programmed) the subconscious to act that way; it has been preapproved to behave a certain way, and the subconscious can react quicker when habituated this way than we can consciously. If we were to test our mental response in any situation where new rational thought is applied, we would find that the subconscious waits for the decision to come down before acting. And third, I have to note that just because we can consciously control our actions with rational thought, this doesn’t mean we always will. Pressures from the subconscious to act in ways contrary to our conscious, rational stream of thought are immense, and we will often succumb to them. Such pressures never provide us with a reason why, but we develop a rationalized view of our instinctive drives, emotional needs, and moral responsibilities so we can incorporate them into our rational decision-making processes. Evolutionary theory has started to provide scientific rationales for the adaptive value of these subconscious pressures, which we can potentially incorporate into our rational thinking to make better decisions.
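
The supervisory picture above can be caricatured in a few lines of Python. This is only a cartoon with situations and responses I have invented, but it captures the flow: habituated responses fire immediately, while novel situations wait on the slower deliberative step and are then habituated for next time.

```python
# Cartoon of the supervisor/subconscious partnership; all entries invented.

habituated = {"light_turns_green": "press_accelerator",
              "phone_rings": "answer_phone"}

def deliberate(situation):
    """Slow, single-stream reasoning; stands in for conscious decision-making."""
    return f"considered_response_to_{situation}"

def act(situation):
    if situation in habituated:          # preapproved: no conscious wait
        return habituated[situation]
    decision = deliberate(situation)     # novel: the subconscious waits
    habituated[situation] = decision     # habituate for next time
    return decision

print(act("light_turns_green"))  # fast, preprogrammed
print(act("stalled_car_ahead"))  # slow the first time...
print(act("stalled_car_ahead"))  # ...habituated thereafter
```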

Summarizing, the rational stream of thought at the center of our conscious experience is only a small part of what the brain is doing, but it is the part coordinating overall activities. It appears to us as subjective theater principally because the job of the process is inherently first-person: it collects sensory and internal inputs, weighs them against priorities, and produces desirable actions as outputs. Although it is therefore first-person by construction, this is not enough for it to “know” it is first-person, which arises because it interprets the inputs through intuition and reason. Intuition just leverages data in useful ways without any attempt at explanation. Reason, however, condenses the inputs into generalized elements that create our cartoon-like imaginary worlds and the mind’s real world. Elements that sit on deep stores of intuitive knowledge feel much more real than purely abstract concepts (so APPLE feels more real than SPHERE). Our first-person perspective arises at the moment our supervisory process starts to act as if the mind’s real world is itself the real world. As humans, we know that they are different, but as subjects, we are convinced that our senses are themselves real, and, by extension, that all the objects we distinguish as discrete in the world are in fact discrete and bearers of the qualities we ascribe to them. In lower animals, useful behaviors can be controlled genetically with little or no imagination or reasoning. It is probably fair to say that the vast majority of behavior (and the mental processing behind it) in all animals, including us, is controlled genetically, but a fraction is controlled through neural reasoning processes in a proportion that roughly correlates to brain size and complexity. It is this fraction that creates minds that feel joy, pain, or anything, for that matter, because the brain has dedicated processing power to create a subjective theater with mental states as a way to make the mind’s real world pull strings effectively in the actual real world. Note that the “quality” of our mental states is a computational fiction (i.e. subjective), but the objective meaning of these states comes from how they help us function. Things feel like how they influence our behavior: we shy away from frightening/cold/hot things; we jump into happy/warm/cool things.

But why does our supervisory process start to act as if the mind’s real world is itself the real world? In a word, confidence. Confidence is the ability to make effective decisions quickly, taking all the available information into account, prioritizing what matters most, and choosing the action that will probably work best. Unlike chemical regulatory mechanisms or sensory information processing, which can run continuously, reasoning is discrete. Reasoning simplifies a complex situation into a small number of relevant actors and the relationships between them, from which logical implications can be deduced. In any situation, people (and animals) are keenly aware of the most relevant issues they face, especially those with immediate urgency. They draw on experience and project likely outcomes, combining both overt rational logic and intuition, which is largely a memory-based lookup of successful strategies. Reason and intuition together create the confidence that a decision will succeed to an expected degree. This doesn’t eliminate doubt; every decision is also accompanied by doubt, but once we reach a level of confidence that says acting now is the best way to manage all our priorities, we will act. We are most confident performing repetitive actions that have succeeded so many times we have little reason to doubt them. We are naturally least confident performing actions which we suspect might fail or which we have never done before. We can’t avoid acting in these situations forever if our long-term goals depend on appropriate action. What we can and invariably will do in such situations is analyze them further. We start thinking them over from many perspectives, drawing on both rational models and intuitive sensibilities. The amount of analysis depends on the situation. Every day we face a number of situations for which a few moments’ analysis brings us to a sufficient level of confidence that we can act. For more challenging goals we will procrastinate, giving ourselves more time to develop a good strategy rather than leaping before we have adequately looked. We run the risk of analysis paralysis, which is procrastination to the point where we are causing harm by not acting. To avoid reaching this point, we need to weigh the benefit of action against the possible harm, which itself can lead to analysis paralysis because any one goal must compete for our attention with all other goals. Things typically work out because we maintain a short list of personal priorities we must accomplish and we see to it that they are addressed. But as we look in a more open-ended way to the future, our list of possible goals becomes infinite, as there are an unlimited number of projects we might undertake to benefit ourselves and/or others. Analysis paralysis shuts down most of these avenues before we can even contemplate them because we only have so much time, so we invariably focus on the manageable subset that is most personally significant to us. We will have vague opinions on all possible goals, at least to the point that we are satisfied that they don’t require our attention. This brings us back to the goal of subjectivity. Subjectivity is an immersive, hands-on approach for establishing what goals we should pay attention to and what means of attaining them will best manage all our priorities. By providing a theater in which we can reason, subjectivity can be viewed as a regulatory mechanism that keeps objectivity (reason) on track.
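
As a toy model of this confidence threshold (with numbers I have invented purely for illustration), consider a loop that analyzes until confidence clears a threshold but caps deliberation so analysis paralysis can’t stall action forever:

```python
# Toy model of acting on a confidence threshold; all numbers invented.

def decide(initial_confidence, threshold=0.8, gain_per_round=0.1, max_rounds=5):
    """Analyze until confident enough to act, or until time runs out."""
    confidence, rounds = initial_confidence, 0
    while confidence < threshold and rounds < max_rounds:
        confidence += gain_per_round   # each pass of analysis adds confidence
        rounds += 1
    return ("act" if confidence >= threshold else "act_despite_doubt", rounds)

print(decide(0.9))  # ('act', 0): habitual action, no analysis needed
print(decide(0.5))  # ('act', 3): a few moments' analysis suffices
print(decide(0.1))  # ('act_despite_doubt', 5): deliberation is capped
```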

A non-sentient algorithm, such as a self-driving car might have, would lack confidence in this sense. The algorithm dictates the best strategy, and so long as the car’s stored experience is sufficient to handle the driving challenges it encounters, it will do fine. But outside that range it can do nothing. For example, if the car in front of you stalled on the tracks with an oncoming train 30 seconds away, you could use your car to push the other car off the tracks, but if your car were self-driving with no experience in pushing cars, it would be helpless. In a novel situation, a human can figure out what to do, but a program operating within a range of experience can’t. While we could, in principle, program machines to figure things out, the bigger gap is that they need to recognize what problems need to be solved, which depends on a sense of what is important. A human can immediately tell there is a risk of death to people in the cars and on the train, and, of somewhat less concern, a risk of damage to the cars and the train. A lifetime of subjective prioritization using feedback from instinctive preferences has honed this model to perceive and weigh the risks accordingly. Simply programming an algorithm to protect people first and property second doesn’t begin to address the complex subtleties of our prioritization model, as the sketch below suggests. Yes, we instinctively prioritize the safety of ourselves, our family, our tribe, strangers, and property in that order, but beyond such crude directives, we refine our priorities over a lifetime of first-person interactions in which the way each situation unfolds relative to our first-person priorities shapes them going forward. Without this implied “me,” as in “What does this do for me?”, we essentially have no skin in the game and thus no basis for preferring one action to another. It is the difference between Soviet five-year plans and a market economy: one is preprogrammed, with no ability to adapt to circumstances, while the other evolves with changing circumstances. It is hard to imagine an algorithm for managing a mind and its body that could be more effective than one that principally interprets all top-level interactions with the world in terms of the holistic impact on itself, via a subjective stance.
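
Here is a deliberately crude sketch (mine) of that “people first, property second” directive, to make the gap concrete: a fixed rule table like this captures none of the lifetime of first-person refinement described above.

```python
# Crude fixed-priority rule table; ranks invented for illustration only.

PRIORITY = {"self": 0, "family": 1, "tribe": 2, "stranger": 3, "property": 4}

def choose_concern(concerns):
    """Pick the concern with the highest fixed priority (lowest rank)."""
    return min(concerns, key=lambda c: PRIORITY[c])

print(choose_concern(["property", "stranger"]))  # 'stranger'
# No "me" is implied here: the table never updates from experience, which
# is exactly what separates it from a subjective agent with skin in the game.
```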

But why, exactly, should the agent approach be better than the (robotic) alternatives? This is a logical consequence of there being one brain to control one body. At any point in time, the parts of the body can each only undertake one action, so one overall, coordinated strategy must govern the whole body. At the top level, then, the brain needs to come to an unending series of decisions about what to do next. Each of these decisions should draw on all the relevant information at the animal’s disposal, including all sensory inputs and any information recorded instinctively in DNA or experientially in memory. With the agency approach, this is accomplished by having a conscious mind in which an attention subprocess focuses information from perception, feeling, and memory into an awareness state on which thinking, feeling, and reason can act to continually make the next decision. An enormous pool of information is filtered down to create consciousness, and it is specifically done to provide the agent process of the brain (i.e. consciousness) with as logically simplified a train of thought as possible so that it can focus its efforts on what is relevant to making decisions while ignoring everything else. This logical simplification can be thought of as creating a cartoon-like representation of reality that captures the aspects most relevant to animal behavior in packets of information, called concepts, to which generalized rules of causation can be applied. Intuition, which includes a broad set of algorithms to recognize patterns, can’t by itself process concepts using logical rules such as cause and effect. Reasoning does this, and the network of concepts it uses to do it creates the agent-oriented perspective with which we are so familiar. The addition of abstraction elevates this agency to the level of human intelligence. So, as I said above, we would recognize a robot that could demonstrate abstraction as being intelligent, but abstraction is a development of reasoning, not intuition, so the robot would need to be reasoning with a relevant set of concepts just as we do. Does this imply it would possess agency? If it were controlling a body in the world, then yes, I think this follows, because its relevant set of concepts would be akin to our own. It might subdivide the world into entirely different concepts, but it would still be using a concept-based simplification derived from sensory inputs that probably depends principally on cause and effect for predictive power. The distinct qualia (sensations) that make up our conscious experience are physically just information in the form of electrochemical signals. But each quale feels distinctive so the agent can tell them apart. We also have innate and learned associations for each quale, e.g. red seems dangerous, but the distinctiveness is the main thing, as it lets a single-stream train of thought monitor many sensory input channels simultaneously without getting confused. Provided our putative robot had distinct streams of sensory inputs feeding a simplified, concept-based central reasoning process, that distinctiveness could be said to be perceived as qualia just like our own. Note that intuition happens outside of our conscious control or awareness and so does not need qualia (i.e. it doesn’t feel), though it can make use of the information. We only have direct conscious awareness of a small amount of the processing done in our brains; the rest is subconscious.
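
A sketch of that filtering, with hypothetical channels and salience numbers: many tagged input streams are reduced by an attention step to a short, serial stream for the central process, the tags doing the work the paragraph assigns to qualia by keeping sources distinct:

    # Each channel's readings carry a distinct tag, as qualia keep senses distinct.
    channels = {
        "vision":  [("red signal ahead", 0.9)],
        "hearing": [("train horn", 0.8)],
        "touch":   [("seat vibration", 0.2)],
    }

    def attend(channels, k=2):
        # Filter a large pool of tagged signals down to the few most salient,
        # producing one simplified stream for the decision-making loop.
        tagged = [(salience, tag, signal)
                  for tag, readings in channels.items()
                  for signal, salience in readings]
        return sorted(tagged, reverse=True)[:k]

    for salience, tag, signal in attend(channels):
        print(f"{tag}: {signal} (salience {salience})")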
I will use intuition and the subconscious synonymously, though with different connotations. Reasoning and consciousness, however, are not synonyms, because the conscious mind can access much intuitive knowledge and so uses both reasoning and intuition to reach decisions. Our mind seems to us to be a single entity, but it is really a partnership between the subconscious and the conscious. The conscious mind can override the subconscious on any deliberated decision, but to promote efficiency the simplest tasks are all delegated to the subconscious via instinct and learning. Though we feel like subconscious decisions are “ours”, we may find on conscious review that we don’t agree with them and will attempt to retrain ourselves to act differently next time, essentially adjusting the instructions that guide the subconscious.

Before I move on, I’d like to explain the power of reason over intuition in one other way. If most of our mental processing is subconscious and does not use reason, and we can let our subconscious minds make so many of our daily decisions on autopilot, why do we need a conscious reasoning layer at the top to create a cartoon-like world? Note that our more complex subconscious behaviors got there in the first place through conscious programming (learning) using concepts, so although we can carry out such behaviors without further reasoning, we used reasoning to establish them. The real question, though, is whether subconscious algorithms that glean patterns from information could theoretically solve problems as well, eliminating the need for consciousness. While people aren’t likely to change their ways, an intelligent computer program that didn’t need code for consciousness would be easier to develop. Let’s grant this computer program access to a library of learned behavior to cover a wide variety of situations, which is analogous to the DNA-based information the brain provides through instinct. Let’s further say the program can use concepts as containers to distinguish objects, events, and behaviors. Such a program could know from experience and data analysis how bullets can move. They can stay still, fall, be thrown, or be fired from a gun at great speed. Still things generally touch other things below them, falling things don’t touch things below them, and thrown and fired things follow throwing and firing actions. What is missing from this picture is an explanatory theory of cause and effect, and more broadly the application of logic through reason. The analysis of patterns alone does not reveal why things happen because it doesn’t use a logical model with rules of behavior. The theory of gravity says that the earth pulls all things toward it and, more generally, that any two things attract each other with a force proportional to the product of their masses and inversely proportional to the square of the distance between them. The weakness of physical intuition compared to theory is made clear by the common but mistaken intuition that the speed at which objects fall is proportional to their weight. Given more experience observing falling objects, one will eventually develop an intuitive sense that aligns well with the laws of physics, but trying to do science by studying data instead of theorizing about cause-and-effect relationships would be very slow and inconclusive. The intuitions we gather from large data sets are indispensable to our overall understanding but are only weakly suggestive compared to the near certainty we get from positing laws of nature. The subconscious is theory-free; it just circulates information looking for patterns, including information packaged up into concepts. When it encounters multiple factors in combinations it has not seen before, it has no way of predicting combined effects. In the real world, every situation is unique and so has a novel combination of factors. Reasoning with cause and effect can draw out the implications of those factors where pattern analysis could only see likelihoods relative to possibly irrelevant past experience.
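
The contrast can be made concrete with a worked example (the numbers are invented for illustration). A pattern-based predictor can only interpolate over cases it has seen, while the one-line causal model d = ½gt² handles a fall time it has never encountered:

    # Pattern-based: look up the most similar past observation.
    observed = {1.0: 4.9, 2.0: 19.6}   # fall time (s) -> distance (m)

    def predict_by_pattern(t):
        nearest = min(observed, key=lambda seen: abs(seen - t))
        return observed[nearest]       # a novel input gets a stale answer

    # Theory-based: apply the causal law d = 1/2 * g * t**2.
    def predict_by_theory(t, g=9.8):
        return 0.5 * g * t ** 2        # works for any t, seen or not

    t = 3.0                            # outside all past experience
    print(predict_by_pattern(t))       # 19.6 -- badly wrong
    print(predict_by_theory(t))        # 44.1 -- correct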

A self-driving car must be able to evaluate current circumstances and select appropriate responses. While we have long had the technology to build sensors and mechanical or computer-based controllers, we haven’t been able to interpret sensor data well enough to replace human drivers. Machine learning has solved that problem, and we can now train algorithms using thousands of examples to recognize things. This recognition mirrors our subconscious approach by using data and positive feedback. Self-driving car algorithms plug recognized objects into a reason-based driving model that follows the well-defined rules of the road. To ensure good recognition and response in nearly any circumstance, these programs use data from millions of hours of “practice”. What they do is akin to us performing a learned behavior: we collect a little feedback from the environment to make sure our behavior is appropriate, and then we just execute it. To tie our shoes we need feedback to locate the laces and ensure the tension is appropriate throughout the process, but mostly we don’t think and it just happens. We need to be able to reason to drive well because we have to be prepared to act well when we encounter new situations, but a self-driving car, with all of its experience, is likely to have seen just about every kind of situation it will ever encounter and to have a response ready. That overwhelming edge in experience won’t help when it encounters a new situation that reason could have easily solved, but even so, self-driving cars may already be several times safer than human drivers and should become safer still, mostly because humans make so many avoidable mistakes. Although computer algorithms still can’t do general-purpose reasoning, our reasoning processes have lots of subconscious support, so applying machine learning to reasoning will continue to increase the cleverness of computers and may even bring them all the way to abstract intelligence. My goal is to unveil the algorithm of reason, to the extent that this can be done using reason. That will certainly include crediting subconscious support where it is due, but more significantly it will expose the role and structure of consciousness.
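
The division of labor described here can be sketched in a few lines: a learned recognizer (stubbed out below) labels objects, and those labels are plugged into a small rule table standing in for the reason-based driving model; all labels and rules are invented for illustration:

    def recognize(sensor_frame):
        # Stand-in for a trained classifier mapping raw sensor data to labels.
        return ["stop sign", "pedestrian"]

    RULES = {                       # the reasoned layer: rules of the road
        "stop sign":   "brake to a stop",
        "pedestrian":  "yield",
        "green light": "proceed",
    }

    def drive(sensor_frame):
        # Learned, pattern-based recognition feeds explicit, reasoned rules.
        return [RULES.get(obj, "slow down") for obj in recognize(sensor_frame)]

    print(drive(None))   # ['brake to a stop', 'yield']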

All animal minds bundle information into concepts through abstraction for convenient processing by their conscious minds. Abstract thought employs conceptual models, which are sets of rules and concepts that work together to characterize some topic to be explained (the subject of prediction). We often perceive conceptual models as images or representations of the outside world “playing” inside our heads. While we can’t exactly describe the structure of conceptual models, we can represent them outside the mind using language or a formal system. Formal systems, which often employ formal languages, can achieve much greater logical precision than natural language. But what both formal and natural languages have in common is that they treat concepts atomically. We ultimately need intuition, i.e. subconscious skills, to resolve concepts to meanings. Yes, we can reason out logical explanations of concepts in terms of other concepts, but these explanations can only cover certain aspects and invariably miss much of the detail we grasp from our immense body of experience with any given concept, for which we depend on subconscious associations. Again, the bicameral mind (a partnership between the subconscious and the conscious, not the speaking/listening division proposed by Julian Jaynes2) feels to us quite unified even though it actually blends intuitive understandings based on subconscious processes with rational understandings orchestrated by conscious processes. From this, we can conclude that formal systems simplify away a critical part of the model. Natural language also simplifies, but words carry subtleties through context and connotation. Mental models combine all of our intuitive, subconscious-based knowledge with the reasoned, concept-based knowledge we use in conceptual models. Put another way, conceptual models manage the cartoon-like, simplified view at the center of reasoning, while mental models combine these logical views with all the sensory and experiential data that backs them up.
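
One way to see what treating concepts “atomically” means: in the toy formal system below (all tokens and rules invented), “penguin” and “bird” are opaque symbols that rules chain together syntactically, and everything the mind actually knows about penguins lives outside the formalism:

    # In a formal system, a concept is only an opaque token to match on.
    rules = [("bird", "can_fly"), ("penguin", "bird")]

    def entails(fact, goal):
        # Chain rules purely syntactically; the tokens have no interior.
        if fact == goal:
            return True
        return any(entails(b, goal) for a, b in rules if a == fact)

    print(entails("penguin", "can_fly"))   # True -- and wrong, because the
    # flightlessness of penguins lives in our intuitive associations, not
    # in the tokens the formal system manipulates.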

The logical positivists of the 1930s and 1940s claimed that all scientific explanations could fit into a formal system (called the Deductive Nomological Model), which basically said that scientific explanations follow solely from laws of nature, their causes, and their effects. The first flaw with this theory was that it committed the category mistake of equating function with form. Scientific explanations, and all understanding, exist to serve a function, which is to say they have predictive power and consequently are carriers of information. That which is to be explained, the explanandum or form, is explained by an explanans or function. It is not that the form doesn’t exist in its own right; it is that our only interest in it relates to what might happen to it, its function. The second flaw with the DN Model was that it presumes that explanations only require a deductive or logical approach, but as I explained above, patterns are fundamental to comprehension as they set the foundation that connects the observer to the observed. Logic may be perfect but can only be imperfectly married to nature, a connection established by detecting and correlating patterns. Postpositivists have tried to salvage some of the certainty of positivism by acknowledging that human understanding introduces uncertainty, but they can’t succeed because the real problem is that function doesn’t reduce to form. No matter how appealing, scientific realism (the view that the universe actually exists) is irrelevant to science. Science is indifferent to the noumenal (what actually exists); it is concerned only with the phenomenal (that which is observed) and what we can learn from observation. Form and function dualism gives postpositivism solid ground to stand on and is the missing link to support the long-sought unity of science. I contend that functional explanations are always partial by their nature, providing the ability to predict the future better than chance but guaranteeing nothing. It is consequently misleading to single out any such explanation as “partial” because there is no such thing as a “full” explanation.

The Mind from Behind the Scenes

We are all the stars of the movies that play in our own heads. We write our own parts and then act them out. Of course, we don’t literally write or act: we think about what we want, then we imagine ways to get it, and then we do things to achieve it. We know why we do it: to preserve our lives and lifestyle. But we don’t know how we do it. We don’t know, in a detailed, scientific sense, what is happening when we are wanting, imagining or thinking. While scientists are fairly certain our minds are the consequence of fantastically intricate but natural processes in the brain, from our first-person perspective they seem magical, even supernatural. Thanks to the theory of evolution and the computational theory of mind we can now imagine how they could be natural, but we can only explain them in broad strokes that leave most of the answers to the imagination. In a world where seemingly everything important is now well-understood, must we still accept that the very essence of our nature has not been explained? I think we can do better, and more to the point, I think we already know enough to do better.

This shortcoming in our knowledge hasn’t gone completely unnoticed. It has interested philosophers for thousands of years and scientists for over a hundred, leading to the formation of cognitive science as a discipline in 1971 and to the 1990s being dubbed the “decade of the brain”. Much has been learned, but not much consensus has formed around explanations. Instead, the field is littered with half-baked theories and contentiously competing camps. The casual observer might wonder whether we have made any progress at all. It is hard enough for normal science to shift course from its established paradigms, but additional obstacles are the subversion of science for commercial and political purposes, pseudoscience, and even fake news. We only understand a small fraction of the phenomena at play in the brain and mind. This suggests that any explanatory theory will mostly be guesswork. Yes and no. Yes, we have to guess, i.e. hypothesize, first before we can see if those guesses hold up. But our guesses can be very informed. We do know enough to establish a broad scientific consensus around an overall explanatory theory of the mind. Though it is still early days, and we should still expect to see many viewpoints, it is no longer so early that we can’t roughly agree on much of what is going on, supported by an extensive body of common knowledge and established science. It is my goal to pull together what we already know and to back it up with a new philosophical perspective to form a single, coherent, overarching scientific theory of the mind.

While some interesting and insightful books have been written that summarize what we know about how the mind works, e.g. Steven Pinker’s How the Mind Works1, to me they seem to miss the key point, which is that thought is a computational instrument of function and that its form is largely irrelevant. Natural scientists are biased to see things mostly in physical terms, which leaves them in the awkward position of explaining functional phenomena through physical processes. Evolutionary psychology embraces functional explanations but still seems to miss the forest for the trees, which is this: the physical mechanisms of the brain, i.e. the neurochemical bases of instincts, emotions and thought, including the genes, are not themselves the objects of existence under discussion. The mind is actually about function, information, and purpose, an alternate plane of existence with which we have intimate familiarity but which physical science ignores. Cognitive science should recognize this but has become bogged down by an excess of perspectives. I think recognizing form and function dualism will unify science, especially cognitive science.

Minds not Brains: Introducing Theoretical Cognitive Science

I’m going to make a big deal about the difference between the mind and the brain. We know what minds are from long experience and take the concept for granted, despite an almost complete absence of a scientific explanation. Conventionally, the mind is “our ability to feel and reason through a first-person awareness of the world”. This definition leaves open the question of what “feel”, “reason” and “first-person awareness” might be, since we can’t just define the mind by using terms that are only meaningful to the owner of one. While we can safely say they are techniques that help the brain perform its primary function, which is to control the body, we will have to dig deeper to figure out how they work. Our experience of mind links it strongly to our bodies, and scientists have long said it resides in the nervous system and the brain in particular. Steven Pinker says that “The mind is what the brain does.”1 This is only superficially right, because it is not the what but the why. It is not the mechanism or form of the mind that matters as much as its purpose or function. But how can we embark on the scientific study of the mind from the perspective of its function? As currently practiced, the natural sciences don’t see function as a thing itself, but more as a side effect of mechanical processes. The social sciences start with the assumption that the mind exists but take no steps to connect it back to the brain. Finally, the formal sciences study theoretical, abstract systems, including logic, mathematics, statistics, theoretical computer science, information theory, game theory, systems theory, decision theory, and theoretical linguistics, but leave it to natural and social scientists to apply them to natural phenomena like brains and minds. What is the best scientific standpoint from which to study the mind? Cognitive science was created in 1971 to fill this gap, which it does by encouraging collaboration between the sciences. I think we need to go beyond collaboration and admit that the existing three branches have practical and metaphysical constraints that limit their reach into the study of the mind. We need to lift these constraints and develop a unified and expanded scientific framework that can cleanly address both mental and physical phenomena.

Viewed most abstractly, science divides into two branches, the formal and experimental sciences, with the formal being entirely theoretical and the experimental being a collaboration between theory and testing. Experimental science further divides into fundamental physics, which studies irreducible fields and/or particles, and the special sciences (all other natural and social sciences), which are presumed to be reducible to fundamental physics, at least in principle. Experimental science is studied using the scientific method, which is a loop in which one proposes a hypothesis, then tests it, and then refines and tests it again ad infinitum. Hypotheses are purely functional while testing is purely physical. That is, hypotheses are ideas with no physical existence, though we think about and discuss them through physical means, while testing tries to evaluate the physical world as directly as possible. Of course, we use theory to perform and interpret the tests, so testing can’t escape some dependence on function. The scientific method tacitly acknowledges and leverages both functional and physical existence, even though it does not overtly explain what functional existence might be or attempt to explain how the mind works. That’s fine (science works), but we can no longer take functional existence and its implications for granted as we start to study the mind. It’s remarkable, really, that all scientific understanding, and indeed everything we do, depends critically on our ability to use our minds yet requires no understanding of how the mind works or what it is doing. But we have to find a way to make minds and ideas into objects of study themselves to understand what they are.

The special sciences are broken down further into the natural and social sciences. The natural sciences include everything in nature except minds, and the social sciences study minds and their implications. The social sciences start with the assumption that people, and hence their minds, exist. They draw on our perspectives about ourselves, our behavior patterns, and what we think we are doing to explain what we are and help us manage our lives better. Natural scientists (aka hard scientists) call the social sciences “soft sciences” because they are not based on physical processes bound by mathematical laws of nature; nothing about minds has so far yielded that kind of precision. Our only direct knowledge of the mind is our subjective viewpoint, and our only indirect knowledge comes from behavioral studies, evolutionary psychology, and outright speculation into the functions our minds appear to perform. The study of behavior finds patterns in the ways brains make bodies behave and may support the idea of mental states but doesn’t prove they exist. Evolutionary psychology also suggests how mental states could explain behavior, but can’t prove they exist. Studying the functions of the mind by just guessing about them sounds crazy at first, but it is actually the way all scientific hypotheses are formed: take a guess and see if it holds up. It too can’t prove mental states exist, but we need to remember that science isn’t about proving; it is about developing useful explanations.

The differences in approach between hard and soft sciences have opened up a gap that currently can’t be bridged, but we have to bridge it to develop a complete explanation of the mind. This schism between our subjective and objective viewpoints is sometimes called the explanatory gap. The gap is that we don’t know how physical properties alone could cause a subjective perspective (and its associated feelings) to arise. I closed this gap in The Mind Matters, but not rigorously. In brief, I said that the mind is a process in the brain that experiences things the way it does because creating a process that behaves like an agent and sees itself as an agent is the most effective way to get the job done. More to the point, it feels like an agent because it has to have some way of thinking about its senses and that way needs to keep them all distinct from each other. So perceptions are just the way our brains process information and “present” it to the process of mind. It is not a side effect; much of the wiring of the brain was designed to make this illusion happen exactly the way it does.

Natural science currently operates on the assumption that natural phenomena can be readily modeled by hypotheses which can be tested in a reproducible way. This works well enough for simple systems, i.e. those which can be modeled using a handful of components and rules. The mind, however, is not a simple system, for three reasons: complexity, function, and control. Living tissues are complex systems with many interacting components, so while muscle tissue can be modeled as a set of fibers working together as a simple machine, like any complex system its behavior will become chaotic outside normal operating parameters. Next, the mind (and muscles) have a different metaphysical nature from nonliving things. Unlike rocks and streams, muscles and nerves are organized to perform a function rather than merely embody a specific physical form. And most significantly, the mind is not organized to perform functions itself but to control how the body will perform functions, and so could be called metafunctional. These three complicating factors make developing and testing hypotheses about the mind vastly harder than doing so for rocks and streams, so paradigms based only on natural laws won’t work. Yet the attitude among natural scientists is that the mind is just an elaborate cuckoo clock, and so understanding it reduces to knowing its brain chemistry. That will indeed reveal the physical mechanisms, but it won’t reveal the reasons for the design, any more than understanding the clock explains why we want to know what time it is. When we study complex systems, like the weather, we have to accept that chaos and unpredictability are around every corner. When we study functional systems, like living things, we have to accept that functional explanations (and all explanations are functional) need to acknowledge the existence of function. And when we study control systems, like brains and minds, we have to accept that direct cause and effect is supplanted by indirect cause and effect through information processing. The natural sciences study complexity and function in living systems, but not the control aspect of minds. Control is addressed by a number of the formal sciences, but since the formal sciences are not concerned with natural phenomena like minds, the study of control by minds has been left high and dry. It falls under the purview of cognitive science, but we need to completely revamp our concept of what scientific method is appropriate to study function and control. We will need theories that seek to explain how control is managed from a functional perspective, that is, using information processing, and we will need ways to test them that are less direct than tests of natural laws.
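
The “indirect cause and effect through information processing” of a control system is the familiar pattern of any feedback controller. In this minimal thermostat-style sketch (not a model of the brain), the controller never pushes on the quantity it regulates; it only measures, compares, and issues commands:

    def control_step(target, reading, gain=0.5):
        # The controller touches nothing physical: it turns a measurement
        # (information in) into a command (information out); the plant, or
        # body, does the actual causing.
        return gain * (target - reading)

    temperature = 15.0
    for _ in range(5):
        command = control_step(20.0, temperature)
        temperature += command          # the world responds to the command
        print(round(temperature, 2))    # converges toward the 20.0 target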

Nearly all our knowledge of our mind comes from using it, not understanding it. We are experts at using our minds. Our facility develops naturally and is helped along by nurture. Then we spend decades at schools to further develop our ability to use our minds. But despite all this attention on using the mind, we think little about what it is and how it works. Just as we don’t need to understand how any machine works to use it, we don’t need to know how our mind works to use it. And we can no more intuit how it works than we can intuit how a car or TV works. We consequently take it for granted and even develop a blindness about the subject because of its irrelevance. But it is the relevant subject here, so we have to overcome this innate bias. We can’t paint a picture of a scene we won’t look at. While we have no natural understanding of it, we do know it is a construct of information managed by the brain. Understanding the physical mechanisms of the brain won’t explain the mind any more than taking a TV apart would explain TV shows, because for both the mind and TV shows the hardware is just a starting point from which information management constructs highly complex products. So the mind is less what the brain does than why it does it. It is not about how it physically accomplishes things so much as what it is trying to accomplish. This is the non-physical, functional existence I have argued for. In fact, for us, functional existence is prior to physical existence, because knowledge itself is information, or function, so we only know of physical existence as mediated through functional existence, i.e. from observations we make with our minds (“I think therefore I am”).

Knowing that functional existence is real and being able to talk about it still doesn’t explain how it works. We take understanding to be axiomatic. We use words to explain it, but they are words defined in terms of each other without any underlying explanation. For example, to understand is to know the meaning of something, to know is to have information about, information is facts or what a representation conveys, facts are things that are known, to convey is to make something known to someone, meaning is a worthwhile quality or purpose, purpose is a reason for doing something, reason is a cause for an event, and to cause is to induce, give rise to, bring about, or make happen. If anything, causality seems like it should reduce to something physical and not mental, yet it doesn’t. But the language of the mind is not intended to explain how understanding or the mind works, just to let us use understanding and our minds. If we are to explain how understanding and other mental processes work, we will need to develop an objective frame of reference that can break mental states down into causes and effects, or we will remain trapped in a relativistic bubble.

Let’s consider which sciences study the mind directly. Neuroscience studies the brain and nervous system, but this is not direct for the same reason that studying computer hardware says little or nothing about what computer software does. On the other hand, psychology and cognitive science are dedicated to the study of the mind. Psychology studies the mind as we perceive it, our experience of mind, while cognitive science studies how it works. One could say psychology studies the subjective side and cognitive science the objective side. Psychology divides into a variety of subdisciplines, including neuropsychology, behavioral psychology, evolutionary psychology, cognitive psychology, psychoanalysis, and humanistic psychology. They each draw on a different objective source of information. Neuropsychology studies the brain for effects on behavior and cognition. Behavioral psychology studies behavior. Evolutionary psychology studies the impact of evolution. Cognitive psychology studies mental processes like perception, attention, reasoning, thinking, problem-solving, memory, learning, language, and emotion. Psychoanalysis studies experience (but with a medical goal). Humanistic psychology studies uniquely human issues, such as free will, personal growth, self-actualization, self-identity, death, aloneness, freedom, and meaning. Cognitive science focuses on the processes that support and create the mind. Most cognitive scientists, including me, are functionalists, maintaining that the mind should be explained in terms of what it does. But science continues to be almost completely dominated by a physicalist tradition, which suggests and even claims that studying the brain will ultimately explain the mind. I have adamantly argued that function does not reduce to form, even though it needs form. And it is true that knowing the form provides many clues to the function, and it is also true that form is our only hard evidence. But we are still a long way from unraveling all the mechanics of neurochemistry, though rapid progress is being made. In the meantime, without any more information than we already have at hand, there is much we can say about the brain’s function, that is, about the mind, by taking a functional perspective on what it is doing. So cognitive science should not be an interdisciplinary collaboration, but should reboot science from scratch by establishing a scientific approach to studying function that can achieve a level of objectivity comparable to our paradigm for studying form. I have, so far, proposed that all of science be refounded on the ontology of form and function dualism. The prevailing paradigm, which derives as I have noted from the Deductive Nomological Model, uses function to study form, while I propose to use function to study both form and function.

One other discipline formally studies the mind: philosophy. Practiced as an independent field, general philosophy studies fundamental questions, such as the nature of knowledge, reality, and existence. But because they don’t establish an objective basis for their claims, philosophers ultimately depend on the subjective, intuitive appeal of their perspectives. For example, universality is the notion that universal facts can be discovered, and is therefore understood as being in opposition to relativism. Universality and relativism assume the concepts of facts, discovery, understanding, and perception, but these assumptions are at best loosely defined and really depend on a common knowledge of what they are. Philosophy builds on common-knowledge ideas without attempting to establish an objective basis. What principally distinguishes science is the effort to establish objectivity, and the way it does this is itself studied, unscientifically, as the philosophy of science. It is ironic that the solid foundation upon which science has presumably been built is itself unclear and ultimately pretty subjective. George Bernard Shaw said, “Those who can, do; those who can’t, teach,” and this is a theme I have been repeating. We are designed to do things but not to understand how we do them or, much less, to teach how they are done. But understanding and teaching are important to get us to the next level so we can leverage what we know in new ways. We have long been perfectly capable of practicing science without dwelling too much on its philosophical basis, but that was before we started to study the mind. We desperately need an objective basis for objectivity itself and for how to apply it to the study of both form and function in order to proceed. Philosophers have asked the questions and laid out the issues, but scientists now have to step up and answer them.

Philosophy of science and philosophy of mind have detailed the issues at hand from a number of directions, and, characteristically of philosophy, have failed to indicate an objective path forward. I believe we can derive the objective philosophy we need by reasoning it out from scratch using the common and scientific knowledge of which we are most confident, which I will do in the next chapter. But a brief summary of the fields is a good starting point to provide some orientation. Science was a well-established practice long before efforts were made to describe its philosophy. Auguste Comte proposed in 1848 that science proceeds through three stages: the theological, the metaphysical, and the positive. The theological stage is prescientific and cites supernatural causes. In the metaphysical stage, people use reason to derive abstract but natural forces such as gravity or nature. Finally, in the positive or scientific stage, we abandon the search for absolutes and embrace an ever-progressing refinement of facts based on empirical observations. So every theory must be guided by observed facts, which in turn can only be observed under the guidance of some theory. Thus arises the hypothesis-testing loop of the scientific method and the widely accepted view that science continually refines our knowledge of nature. Comte’s third stage developed further in the 1920s into logical positivism, the theory that only knowledge verified empirically (by observation) is meaningful. More specifically, logical positivism says that the meaning of logically defined symbols can mirror or capture the lawful relationship between an effect and its cause2. Every term or symbol in a theory must correspond to an observed phenomenon, which then provides a rigorous way to describe nature mathematically. It was a bold assertion because it says that science derives the actual laws of nature, even though we know any given body of evidence can be used to support any number of theories, even if the simplest theory (i.e. by Occam’s razor) seems more compelling. In the middle of the 20th century, cracks began to appear in logical positivism (and its apotheosis in the DN Model, see above) as the sense of certainty promised by modernism began to be replaced by a postmodern feeling of uncertainty and continuous change. In the sciences, Thomas Kuhn published The Structure of Scientific Revolutions in 1962, which is remembered popularly for introducing the idea of paradigm shifts (though Kuhn did not coin that phrase specifically). Though Kuhn’s goal was to help science by unmasking the forces behind scientific revolutions, he inadvertently opened a door he couldn’t shut, forever ending dreams of absolutism and a complete understanding of nature and replacing them with a relativism in which potentially all truth is socially constructed. In the 1990s, postmodernists claimed all of science was a social construction in the so-called science wars. Because this seems to be true in many ways, science effectively lost this battle against relativism and has continued full steam without clarifying its philosophical foundations. Again, while this is good enough to do science that studies form, it is not enough to do science that studies function. Arguably, we could and very well might develop a scientific tradition for studying function that lets us get the job done without a firm philosophical foundation either. After all, we need news you can use regardless of why it works.
Maybe it will happen that way, but I personally consider the why to be the more interesting question, and function is so much more self-referential than form that I think studying it will turn out to require understanding what it means to study it.

The philosophy of mind is studied as a survey of topics including existence (the mind-body problem), theories of mental phenomena, consciousness/qualia/self/will, and thoughts/concepts/meaning. My goal, as noted, is to establish an objectively supportable stance on these topics and on objectivity itself, which I will then use to launch an investigation into the workings of the mind. It will take some time to do all this, but as a preview I will lay out where I will land on some fundamental questions:

I endorse physicalism (i.e. minimal or supervenience physicalism), which says the mind has a physical basis, or, as philosophers sometimes say, the mental supervenes on the physical. This means that a physical duplicate of the world would also duplicate our minds. While true duplication is impossible, my point here is just that the mind draws its power entirely from physical materials. Physicalism rejects the idea of an immortal soul and Descartes’ substance dualism in which mind and body are distinct substances. Physicalism is often taken to simultaneously reject any other kind of existence, making it a physical monism, but that rejection is unnecessary. At its core physicalism just says that physical things are physical. That one might also interpret something physical from another perspective is irrelevant to physicalism.

I endorse non-reductive physicalism, which is just a fancy way of saying that things that are not physical are not physical, and in particular, that function is not form or reducible to it. More accurately, mental explanations cannot be reduced solely to physical explanations. That doesn’t mean that physical things like brains, which can carry out functions, are not physical, because they are entirely physical from a physical perspective. But if you look at brains from the perspective of what they are doing, you create an auxiliary kind of explanation, a functional one. And because explanatory perspectives are abstract, there are an unlimited number of functional perspectives (or existences) of everything. The brain is still physical; the explanations of it are not. To the extent the word “mind” is taken to be a functional perspective on what the brain is doing, it is really the union of all the explanatory perspectives the brain uses when going about its business. These functional perspectives are not mystical; they are relational, tying information to other information using math, logic, or correlation. A given thought has a form as an absolute, physical particular in a brain, but its meaning is relative, being a generalization or idealization that might refer to any number of things. Thus, “three” and “above” are not physical particulars. A thought is a functional tool that may be employed in a specific physical example but exists as an abstraction independent of the physical.

I endorse functionalism, which is the theory that mental states are more profitably viewed from the perspective of what they do rather than what they are made of, that is, in terms of their function, not their form. In my ontology of form & function dualism mental states have both kinds of existence, with many possible takes as to what their function is, but they evolved to satisfy the control function for the body, and so our efforts to understand them should take this perspective first.

I endorse the idea that consciousness is a subprocess of the brain that is designed to create a subjective theater from which centralized control of the body by the brain can be performed efficiently. All the familiar aspects of consciousness such as qualia, self, and the will are just states managed by this subprocess. As a special spoiler, I will reveal that I endorse free will, even if the universe is deterministic, which to the best of our knowledge it is not.

Finally, I endorse the idea that thoughts, concepts, and meaning are information management techniques that have both conscious and subconscious aspects, where subconscious refers to subprocesses of the brain that are supportive of consciousness, which is the most supervisory subprocess.

While this says much about where I am going, it doesn’t say how I will get there or how a properly unified philosophy of science and mind implies these things.

Deriving an Appropriate Scientific Perspective for Studying the Mind

I have made the case for developing a unified and expanded scientific framework that can cleanly address both mental and physical phenomena. I am going to focus first on deriving an appropriate scientific perspective for studying the mind, which also bears on science at large. I will follow these five steps:

1. The common-knowledge perspective of how the mind works
2. Form & Function Dualism: things and ideas exist
3. The nature of knowledge: pragmatism, rationalism and empiricism
4. What Makes Knowledge Objective?
5. Orienting science (esp. cognitive science) with form & function dualism and pragmatism

1. The common-knowledge perspective of how the mind works

Before we get all sciency, we should reflect on what we know about the mind from common knowledge. Common knowledge has much of the reliability of science in practice, so we should not discount its value. Much of it is uncontroversial and does not depend on explanatory theories or schools of thought, including our knowledge of language and many basic aspects of our existence. So what about the mind can we say is common knowledge? This brief summary just characterizes the subject and is not intended to be exhaustive. While some of the things I will assume from common knowledge are perhaps debatable, my larger argument will not depend on them.

First and foremost, having a mind means being conscious. Consciousness is our first-person (subjective) awareness of our surroundings through our senses and our ability to think and control our bodies. We implicitly trust our sensory connection to the world, but we also know that our senses can fool us, so we’re always re-sensing and reassessing. Our sensations, formally called qualia, are subjective mental states like redness, warmth, and roughness, or emotions like anger, fear, and happiness. Qualia have a persistent feel that occurs in direct response to stimuli. When not actually sensing we can imagine we are sensing, which stimulates the memory of what qualia felt like. It is less vivid than actual sensation, though dreams and hallucinations can seem pretty real. While our sensory qualia inform us of physical properties (form), our emotional qualia inform us of mental properties (function). Fear, desire, love, revulsion, etc., feel as real to us as sight and sound, though mature humans also recognize them as abstract constructions of the mind. As with sensory qualia, we can recall emotions, but again the feeling is less vivid.

Even more than our senses, we identify our conscious selves with our ability to think. We can tell that our thoughts are happening inside our heads, and not, say, in our hearts. It is common knowledge that our brains are in our heads and that brains think1, so this impression is a well-supported fact, but why do we feel it? Let’s call this awareness of our brains “encephaloception,” a subset of proprioception (our sense of where the parts of our body are) that also draws on other somatosenses like pain, touch, and pressure. The main reason our encephaloception pinpoints our thoughts in our heads is that senses work best when they provide consistent and accurate information, and the truth is we are thinking with our brains. As with other internal organs, it helps us to be aware of pain, motion, impact, balance, etc. in the head and brain, as these can affect our ability to think, so having sufficient sensory awareness of our brain just makes sense. It is not just a side effect, say, of having vision or hearing in the head that we assume our thoughts originate there; it is the consistent integration of all the sensory information we have available.

But what is thinking? Loosely speaking, it is the union of everything we feel happening in our heads, but more specifically we think of it as a continuous train of thought which connects what is happening in our minds from moment to moment in a purposeful way. This can happen through a variety of modalities, but the primary one is the simulation of current events. As our bodies participate in events, the mind simultaneously simulates those events to create an internal “movie” that represents them as well as we understand them. We accept that our understanding is limited to our experience and so tends to focus on levels of detail and salient features that have been relevant to us in the past. The other modalities arise from emphasizing the use of specific qualia and/or learned skills. Painting and sculpting emphasize vision and pattern/object recognition, music emphasizes hearing and musical pattern recognition, and communication usually emphasizes language. Trains of thought using these modalities feel different from our default “movie” modality but have in common that our mind is stepping through time trying to connect the dots so things “make sense.” Making sense is all about achieving pleasing patterns and our conscious role in spotting them.

And even above our ability to think, we consciously identify with our ability to control our bodies and, indirectly through them, the world. Though much of our talent for thought is innate, we believe the most important part is learned, the result of years of experience in the school of hard knocks. We believe in our free will to take what our senses, emotions, and memory can offer us and select the actions that will serve us best. At every waking moment, we are consciously considering and choosing our upcoming actions. Sometimes those actions are moments away, sometimes years. Once we have selected a course of action, we will, as much as possible, execute it on “autopilot,” which is to say we leverage conditioned behavior to reduce the burden on our conscious mind by letting our subconscious handle it. So we recognize that we have a conscious mind, the part that is actively considering our qualia and memories to select our next actions, and a subconscious mind, which processes our qualia and memories and performs a variety of control functions that don’t require conscious attention. All of this is common knowledge from common sense, and it is also well-established scientifically.

But what, exactly, is thinking? What does it mean to consider and decide? Thinking seems like such an ineffable process, but we know a lot about it from common knowledge. We know that concepts are critical building blocks of thought, and we know that concepts are generalizations gleaned from grouping similar experiences together into a unit. Language itself functions by using words to invoke concepts. We each make strong associations between each word we know and a variety of concepts that word has been used to represent. Our ability to use language to communicate hinges on the idea that the same word will trigger very similar concepts in other people. Our concepts are all connected to each other through a web of relationships which reveal how the concepts will affect each other under different circumstances. This web thus reveals the function of the concept and constitutes its meaning, so its meaning, and hence its existence, is entirely functional and not physical. Its physical, neural manifestation is only indirectly related and hence incidental, as the meaning could in principle be realized in different people or by another intelligent being, or even just written down. Although every physical brain contemplating any given concept will have some subtle and deep differences in its understanding of it, because the concept is fundamentally a generalization, subtle and deep characteristics are necessarily of less significance than the overall thrust.
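
Such a web of relationships is naturally modeled as a graph in which meaning lives in the edges rather than the nodes. In the sketch below (all relations invented), the token “dog” contributes nothing by itself; its “meaning” is just what can be reached from it:

    # Meaning as relations: each concept is characterized only by its links.
    web = {
        "dog":    [("is_a", "animal"), ("can", "bark"), ("has", "fur")],
        "animal": [("needs", "food")],
    }

    def meaning(concept, depth=2):
        # A concept's meaning is the web of relations reachable from it.
        if depth == 0:
            return []
        found = []
        for relation, other in web.get(concept, []):
            found.append((concept, relation, other))
            found.extend(meaning(other, depth - 1))
        return found

    print(meaning("dog"))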

The crux of thinking, though, is what we do with concepts: we reason with them. Basically, reasoning means carefully laying out a set of related concepts and the relevant relationships that bind them and drawing logical implications. To be useful, the concepts and implications have to be correlated to a situation for which one wants to develop a purposeful strategy. In other words, when we face a situation we don’t know how to handle, it creates a problem we have to solve. We try to identify the most relevant factors of the problem by correlating the situation to all the solutions we have reasoned out in the past, which lets us narrow it down to a few key concepts and relationships. To reason, we consider just these concepts and our rules about them in a kind of cartoon of reality, and then we hope that the conclusions we draw about these generalized concepts will apply to the real situation we are addressing. In practice, this usually works so well that we think of our concepts as being identical to the things they represent, even though they are really just loose descriptive generalizations that are nothing like what they represent and, in fact, only capture a small slice of abstract functional properties about those things. But they tend to be exactly what we need to know. “Thinking outside the box” refers to the idea of contemplating uses for concepts beyond the ones most familiar to us. An infinite variety of possible alternate uses for any thing or concept always exists, and it is a good idea to consider some of them when a problem arises, but most of the time we can solve most problems well enough by just recombining our familiar concepts in familiar ways.
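
A minimal sketch of this “cartoon of reality” (concepts and rules invented for illustration): the rich, particular situation is abstracted to a few concept tokens, implications are drawn inside the cartoon, and the conclusion is carried back to the real case:

    situation = {"object": "neighbor's ladder", "footing": "wet grass",
                 "weather": "windy", "paint": "red"}     # rich and particular

    def abstract(situation):
        # Keep only the concepts the problem turns on; drop everything else.
        return {"ladder", "unstable_footing", "wind"}

    rules = [({"ladder", "unstable_footing"}, "risk_of_fall"),
             ({"risk_of_fall", "wind"}, "do_not_climb")]

    def reason(concepts):
        # Draw implications inside the cartoon until nothing new follows.
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= concepts and conclusion not in concepts:
                    concepts.add(conclusion)
                    changed = True
        return concepts

    print(reason(abstract(situation)))   # the conclusion applies to the real ladder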

This much has arguably been common knowledge for thousands of years, even if not articulated as such, and so can even be subsumed under the broader heading of common sense, which includes everything intuitively obvious to normal people2. But can civilization and culture be said to have generated trustworthy common knowledge that goes beyond what we can intuit for ourselves using common sense just by growing up? I am not referring to the common knowledge of details, e.g. historical facts, but to the common knowledge of generalities, i.e. the way things work. Here I would divide such generalities into two camps: those that have scientific support and hence can be clearly explained and demonstrated, and those that don’t but still have broad enough acceptance to be considered common knowledge. I will consider these two camps in turn.

Our scientific common knowledge expands dramatically with each generation. We take much for granted today from physics, chemistry, and biology that was unknown a few hundred years ago. Even if we are weak on the details, we are all familiar with the scope of physical and chemical discoveries from artifacts we use every day. We know natural selection is the prime mover in evolution, causally linking biological traits to the benefits they provide. Relative to the mind specifically, we have familiarity with discoveries from neuroscience, computer science, psychology, sociology and more that expand our insight into what the brain is up to. Although we recognize there is still much more unknown than known, we are pretty confident about a number of things. We know the mind is produced by the brain and not an ethereal force independent of the brain or body. This is scientific knowledge, as thoroughly proven by innumerable scientific experiments as gravity or evolution, and is accepted as common knowledge by those who recognize science’s capacity to increase our predictive power over the world. Those who reject science or who employ unscientific methods should read no further, as I believe the alternatives are smoke and mirrors and should not be trusted as the basis for guiding decisions.

Beyond being powered by the brain, we also now know from common knowledge that the mind traffics solely in information. We don’t need to have any idea how it manages this to see that everything happening in our subjective sphere is relational, just a big description of things in terms of other things. It is a large pool of information that we gather in real time and integrate both with information we have stored from a lifetime of experience and with instinctive intuitions collected over millions of years of evolution. The advent of computers has given us a more general conception of information than our parents and grandparents had. We know it can all be encoded as 0s and 1s, and we have now seen so many kinds of information encoded digitally that we have a common-knowledge intuition about information that didn’t exist 30 to 60 years ago.

It is also common knowledge that there is something about understanding the brain and/or mind that makes it a hard problem. While everything else in the known universe can be explained with well-defined (if not perfectly fleshed-out) laws of physics and chemistry, biology has introduced incredible complexity. How has it accomplished that, and how can we understand it? The ability of living things to use feedback from natural selection, i.e. evolution, is the first piece of the puzzle. Complexity can be managed over countless generations to develop traits that exploit almost any energy source to support life better. But although this can create some very complex and interdependent systems, we have been pretty successful in breaking them down into genetic traits with pros and cons. We basically understand plants, for example, which don’t have brains per se. The control systems of plants are less complex than animal brains, but there is much we still don’t understand, including how they communicate with each other through mycorrhizal networks to manage the health of whole forests. But while we know the role brains serve and how they are wired to do it with neurons, we have only a vague idea how the neurons do it. We know that even a complete understanding of how the hundred or so neurotransmitters work isn’t going to explain it.

We know now from common knowledge that we have to confront head-on the question of what brains are doing with information to tackle the problem. And the elephant in the room is that science doesn’t recognize the existence of information. There are protons and photons, but no informatons or cogitons. What the brain is up to is still viewed strictly through a physical lens as a process reducible to particles and waves. This has always run counter to our intuitions about the mind, and now that we understand information it runs counter to our common-knowledge understanding of what the mind is really doing. So we have a gap between the tools and methods science brings to the table and the problem that needs to be solved. The solution is not to introduce informatons and cogitons to the physical bestiary, but to see information and thought in a way that makes them explainable as phenomena.

So when we think to ourselves that we “know what we know” and that it is not just reducible to neural impulses, we are on to something. That knowledge can be related verbally and so “jump” between people is proof that it is fundamentally nonphysical, although we need a physical brain to reflect on it. All ideas are abstractions that indirectly characterize real or imagined things. Our minds themselves, using the physical mechanisms of the brain, are organized and oriented so as to leverage the power this abstraction brings. We know all this — better today than ever before — but we find ourselves stymied in addressing the matter scientifically because abstraction has no scientific pedigree. But I am not going to ignore common sense and common knowledge, as science is wont to do, as I unravel this problem.

2. Form & Function Dualism: things and ideas exist

We can’t study anything without a subject to study. What we need first is an ontology, a doctrine about what kinds of things exist. We are all familiar with the notion of physical existence, and so to the extent we are referring to things in time and space that can be seen and measured we share the well-known physicalist ontology. Physicalism is an ontological monism, which means it says just one kind of thing exists, namely physical things. But is physicalism a sufficient ontology to explain the mind? Die-hard natural scientists insist it is and must be, and that anything else is new-age nonsense. I am sympathetic to that view, as mysticism is not explanatory and consequently has no place in discussions about explanations. And we can certainly agree from common knowledge that there is a physical aspect, being the body of each person and the world around us. But knowing that seems to give us little ability to explain our subjective experience, which is so much more complex than the observed physical properties of the brain would seem to suggest. Can we extend science’s reach with another kind of existence that is not supernatural?

We are intimately familiar with the notion of mental existence, as in Descartes’ “I think therefore I am.” Feeling and thinking (as states of mind) seem to us to exist in a distinct way from physical things because they lack extent in space or time. Idealism is the monistic ontology that asserts that only mental things exist, and that what we think of as physical things are really just mental representations. In other words, we dream up reality any way we like. But science and our own experience have provided overwhelming evidence of a persistent physical reality that doesn’t fluctuate in accord with our imagination, and this makes idealism rather untenable. But if we join the two together we can imagine a dualism between mind and matter in which both the mental and physical exist without either being reducible to the other. Religions have seized on this idea, stipulating a soul (or equivalent) that is quite distinct from the body. But no scientific evidence has been found supporting the idea that the mind can physically exist independent of the body or is in any way supernatural. If we can extend science beyond physicalism, though, we might find a natural basis for the mind that could lift religion out of this metaphysical quicksand. Descartes also promoted dualism, but he got into trouble identifying the mechanism: he supposed the brain contained a special mental substance that did the thinking, a substance that could in principle be separated from the body. Descartes imagined the two substances somehow interacted in the pineal gland. But no such substance was ever found, and the pineal gland’s primary role is to make melatonin, which helps regulate sleep.

If the brain just operates under the normal rules of spacetime, as the evidence suggests, we need an explanation of the mind bound by that constraint. While Descartes’ substance dualism doesn’t deliver, two other forms of dualism have been proposed. Property dualism tries to separate mind from matter by asserting that mental states are nonphysical properties of physical substances (namely brains). This misses the mark too because it suggests a direct or inherent relationship between mental states and the physical substance that holds the state (the brain), and as we will see it is precisely the point that this relationship is not direct. It is like saying software is a non-physical property of hardware; while software runs on hardware, the hardware reveals nothing about what the software is meant to do.

Finally, predicate dualism proposes that predicates, being any subjects of conversation, are not reducible to physical explanations and so constitute a separate kind of existence. I will demonstrate that this is true and so hold that predicate dualism is the correct ontology science needs, but I am rebranding it as form and function dualism (just why is explained below). Sean Carroll writes,3 “Does baseball exist? It’s nowhere to be found in the Standard Model of particle physics. But any definition of “exist” that can’t find room for baseball seems overly narrow to me.” Me too. Baseball encompasses everything from an abstract set of rules to a national pastime to specific events featuring two baseball teams. Some parts have a physical correlate and some don’t, but the physical part isn’t the point. A game is an abstraction about possible outcomes when two sides compete under a set of rules. “Apple” and “water” are (seemingly) physical predicates while “three”, “red” and “happy” are not. Three is an abstraction of quantity, red of color, happy of emotion. Quantity is an abstraction of groups; color of light frequency, brightness and context; and emotion of experienced mental states. Apple and water are also abstractions; apples are fruits from certain varieties of trees and water is the liquid state of H2O, but both words are usually used generically and not to refer to a specific apple or portion of water.4 Any physical example of apple or water will fall short of any ideal definition in some ways, but this doesn’t matter because function is never the same as form; it is intentionally an abstract characterization.

I prefer form and function dualism to predicate dualism because it is both clearer and more technically correct. It is clearer because it names both kinds of things that exist. It is more correct because function is bigger than predicates. I divide function into active and passive forms. Active function uses reference, logical reasoning, and intelligence. The word “predicate” emphasizes a subject, being something that refers to something else, either specifically (definite “the”) or generally (indefinite “a”) through the ascription of certain qualities. Predicates are the subjects (and objects) of logical reasoning. Passive function, which is employed by evolution, instinct, and conditioned responses, uses mechanisms and behaviors that were previously established to be effective in similar situations. Evolution established that fins, legs, and wings could be useful for locomotion. Animals don’t need to know the details so long as they work, because the selection pressures are on the function, not the form. We can actively reason out the passive function of wings to derive principles that help us build planes. Some behaviors originally established with reason, like tying shoelaces, are executed passively (on autopilot) without active use of predicates or reasoning. Function can only be achieved in physical systems by identifying and applying information, which, as I have previously noted, is the basic unit of function. Life is the only kind of physical system that has developed positive feedback mechanisms capable of capturing and using information. These mechanisms evolved because they let life compete more effectively: predicting the future beats blind guessing. Evolution captures information using genes, which apply it either directly through gene expression (to regulate or code proteins) or indirectly through instinct (to influence the mind). Minds capture information using memory, a partially understood neural process, and apply it through recall or recognition, which subconsciously identify appropriate memories through triggering features. But if information is captured using physical genes or neurons, what trick makes it nonphysical? That is the power of abstraction: it allows stored patterns to be kept as indefinite generalities that can later be correlated to new situations to provide a predictive edge. Information is created actively by using concepts to represent general situations and passively via pattern matching. Genes create proteins that do chemical pattern matching, while instinct and conditioned response leverage subconscious neural pattern matching.

This diagram shows how form and function dualism compares to substance dualism and several monisms. These two perspectives, form and function, are not just different ways of viewing a subject, but define different kinds of existences. Physical things have form, e.g. in spacetime, or potentially in any dimensional state in which they can have an extent. Physical systems that leverage information have both form and function, but to the extent we are discussing the function we can ignore or set aside considerations of the form because it just provides a means to an end. Function has no extent but is instead measured in terms of its predictive power. Pattern-matching techniques and algorithms implement functionality passively through brute force, while reasoning creates information actively by laying out concepts and rules that connect them. In a physical world, form makes function possible, so they coexist, but form and function can’t be reduced to each other. This is why I show them in the diagram as independent dimensions that intersect but generally do their own thing. Technically, function emerges from form, meaning that interactions of forms cause function to “spring” into existence with new properties not present in the forms. But this has nothing to do with magic; it is just a consequence of abstraction decoupling information from what it refers to. The information systems are still physical, but the function they manage is not. Function can be said to exist in an abstract, timeless, nonphysical sense independent of whether it is ever implemented. This is true because an idea is not made possible by our thinking it; it is “out there” waiting to be thought whether we think it or not. However, as physical creatures, our access to function and the ideal realm is limited by the physical mechanisms our brains use to implement abstraction. We could, in principle, build a better mind, or perhaps a computer, that can do more, but any physical system will always be physically constrained and so limit our access to the infinite domain of possible ideas. Idealism is the reigning ontology across this hypothetical space of ideas, but it can’t stand alone in our physical space. And though we can’t think all ideas, we can steer our thoughts in any direction, so given enough time we could conceive anything.

So the problem with physicalism as it is generally presented is that form is not the only thing a physical universe can create; it can create form and function, and function can’t be explained with the same kind of laws that apply to form but instead needs its own set of rules. If physicalism had just included rules for both direct and abstract existence in the first place, we would not need to have this discussion. But instead, it was (inadvertently) conceived to exclude an important part of the natural world, the part whose power stems from the fact that it is abstracted away from the natural world. This is ironic, considering that scientific explanation (and all explanation) is itself immaterial function and not form. How can science see both the forest and the trees if it won’t acknowledge the act of looking?

[Image: René Magritte’s The Treachery of Images, captioned “Ceci n’est pas une pipe”]

A thought about something is not the thing itself. “Ceci n’est pas une pipe,” as Magritte said5. The phenomenon is not the noumenon, as Kant would have put it: the thing-as-sensed is not the thing-in-itself. If it is not the thing itself, what is it? Its whole existence is wrapped up in its potential to predict the future; that is all. However, to us, as mental beings, it is very hard to distinguish phenomena from noumena, because we can’t know the noumena directly. Knowledge is only of representations; it isn’t and can’t be the physical things themselves. The only physical world the mind knows is actually a mental model of the physical world. So while Magritte’s picture of a pipe is not a pipe, the image in our minds of an actual pipe is not a pipe either: both are representations. And what they represent is a pipe you can smoke. What this critically tells us is that we don’t care about the pipe; we only care about what the pipe can do for us, i.e. what we can predict about it. Our knowledge was never about the noumenon of the pipe; it was only about the phenomena that the pipe could enter into. In other words, knowledge is about function and only cares about form to the extent it affects function. We know the physical things have a provable physical existence — that the noumena are real — it is just that our knowledge of them is always mediated through phenomena. Our minds experience phenomena as a combination of passive and active information, where the passive work is done for us subconsciously, finding patterns in everything, and the active work is our conscious train of thought applying abstracted concepts to whatever situations seem to be good matches for them.

Given the foundation of form and function dualism, what can we now say distinguishes the mind from the brain? I will argue that the mind is a process in the brain viewed from its role of performing the active function of controlling the body. That’s a mouthful, so let me break it down. First, the mind is not the brain but a process in the brain. Technically, a process is any series of events that follows some kind of rules or patterns, but in this case I am referring specifically to the information-managing capabilities of the brain as mediated by neurons. We don’t know quite how they do it, but we can draw an analogy to a computer process that uses inputs and memory to produce outputs. But, as argued before, we are not so concerned with how this brain process works technically as with what function it performs, because we now see the value of distinguishing functional from physical existence. Next, I said the mind is about active function. To be clear, we only have one word for mind, but it might refer to several things. Let’s call the “whole mind” the set of all processes in the brain taken from a functional perspective. Most of that is subconscious, and we don’t necessarily know much about it consciously. When I talk about the mind, I generally mean just the conscious mind, which consists only of the processes that create our subjective experience. That experience has items under direct, focused attention and also items under peripheral attention. It includes information we construct actively and also provides us access to much information that was constructed passively (e.g. via senses, instinct, intuition, and recollection). The conscious mind exists as a distinct process from the whole mind because it is an effective way for animals to make the kinds of decisions they need to make on a continuous basis.
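Though purely illustrative, a few lines of code can make this process analogy concrete. Here is a minimal sketch, assuming nothing about neural implementation (the class and its toy behavior are hypothetical inventions for this example), of a process that turns inputs plus accumulated memory into outputs:

```python
# A toy "process" in the computational sense: inputs plus stored
# experience produce outputs. Purely illustrative; not a claim about
# how the brain actually implements its information management.

class MindProcess:
    def __init__(self):
        self.memory = {}  # stored experience: stimulus -> response that worked

    def step(self, stimulus):
        # passive function: recall a response previously associated with this stimulus
        if stimulus in self.memory:
            return self.memory[stimulus]
        # active function: work out a response to a novel situation
        response = "inspect " + stimulus
        self.memory[stimulus] = response  # record it for next time
        return response

mind = MindProcess()
print(mind.step("red berry"))   # novel input: reasoned out on the spot
print(mind.step("red berry"))   # same input: now answered from memory
```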

3. The nature of knowledge: pragmatism, rationalism and empiricism

Given that we agree to break entities down into form and function, things and ideas, physical and mental, we next need to consider what we can know about them, and what it even means to know something. A theory about the nature of knowledge is called an epistemology. I described the mental world as being the product of information, which consists of patterns that can be used to predict the future. What if we propose that knowledge and information are the same thing? Charles Sanders Peirce called this epistemology pragmatism: the idea that knowledge consists of access to patterns that help predict the future for practical uses. As he put it, pragmatism is the idea that our conception of the practical effects of the objects of our conception constitutes our whole conception of them. So “practical” here doesn’t mean useful; it means usable for prediction, e.g. for statistical or logical entailment. Practical effects are the function as opposed to the form. It is just another way of saying that information and knowledge differ from noise to the extent they can be used for prediction. Being able to predict well doesn’t confer certainty the way mathematical proof does; it improves one’s chances but proves nothing.
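This predictive criterion is easy to make concrete. The toy sketch below (the weather sequence and both predictors are hypothetical) scores two ways of guessing the next item in a series; on the pragmatist view, only the one that beats chance carries information:

```python
# Information vs. noise, pragmatist style: a pattern counts as information
# to the extent it predicts better than blind guessing. Toy data.

sequence = ["rain", "sun", "rain", "sun", "rain", "sun", "rain", "sun"]

def hit_rate(predict):
    """Fraction of next-step predictions that come true over the sequence."""
    hits = sum(predict(sequence[:i]) == sequence[i] for i in range(1, len(sequence)))
    return hits / (len(sequence) - 1)

def noise(history):
    return "rain"            # ignores the past entirely

def pattern(history):
    # exploit the observed alternation between rain and sun
    return "sun" if history[-1] == "rain" else "rain"

print(hit_rate(noise))    # ~0.43: little better than guessing
print(hit_rate(pattern))  # 1.0: this pattern earns the name "information"
```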

Pragmatism gets a bad rap because it carries a connotation of compromise, as if the pragmatist has given up on theory and “settled” for the “merely” practical. But the whole point of theory is to explain what will really happen, not simply to be elegant. It is not the burden of life to live up to theory, but of theory to live up to life. When an accepted scientific theory doesn’t exactly match experimental evidence, it is because the experimental conditions are more complex than the theory’s ideal model. After all, the real world is full of imperfections that the simple equations of ideal models don’t take into account. We can potentially model secondary and tertiary effects with additional ideal models and then combine the models and theories to get a more accurate overall picture. However, in real-world situations it is often impractical to build this more perfect overall model, both because the information is not available and because most situations we face include human factors, for which physical theories don’t apply and social theories are imprecise. In these situations pragmatism shines. The pragmatist, whose goal is to achieve the best prediction given real-world constraints, will combine all available information and approaches to do it. This doesn’t mean giving up on theory; on the contrary, a pragmatist will use well-supported theory to the limit of practicality. They will then supplement it with experience, their pragmatic record of what worked best in the past, and merge the two to reach a plan of action. Recall that information is the product of both a causative (reasoned) approach and a pattern-analysis (e.g. intuitive) approach. Both kinds of information can be used to build the axioms and rules of a theoretical model. We aspire to causative rules for science because they lead to necessary conclusions, but in their absence we will leverage statistical correlations. We associate subconscious thinking with the pattern-analysis approach, but it also leverages concepts established explicitly with a causative approach. Both our informal and our formal thinking combine causation and pattern analysis at many levels. Because our conscious and subconscious minds work together in a way that appears seamless to us, we are inclined to believe that reasoned arguments are correct and not dependent on subjective (biased) intuition and experience. But we are strongly wired to think in biased ways, not because we are fundamentally irrational creatures but because biased thinking is often a more effective strategy than unbiased reason. We are both irrational and rational because both help in different ways, but we have to spot and overcome irrational biases or we will make decisions that conflict with our own goals. All of our top-level decisions have to strike a balance between intuition/experience-based (conservative) thinking and reasoned (progressive) thinking. Conservative methods let us act quickly and confidently so we can focus our attention on other problems. Progressive methods slow us down by casting doubt, but they reveal better solutions. It is the principal role of consciousness to provide the progressive element, to make the call between a tried-and-true or a novel approach to any situation.
These calls are always themselves pragmatic, but if in the process we spot new causal links then we may develop new ad hoc or even formal theories, and we will remember these theories along with the amount of supporting evidence they seem to have. Over time our library of theories and their support will grow, and we will draw on them for rational support as needed.

Although pragmatism is necessary at the top level of our decision-making process, where experience and reason come together to effect changes in the physical world, it is not a part of the theories themselves, which exist independently as constructs of the mental (i.e. functional) world. We do have to be pragmatic about what theories we develop and about how we apply them, but since theories represent idealized functional solutions independent of practical concerns, the knowledge they represent is based on a narrower epistemology than pragmatism. But what is this narrower epistemology? After all, it is still the case that theories help predict the future for practical benefits. And Peirce’s definition, that our conception of the practical effects of the objects of our conception constitutes our whole conception of them, is also still true. What is different about theory is that it doesn’t speak to our whole conception of effects, inclusive of our experience, but focuses on causes and effects in idealized systems using a set of rules. Though technically a subset of pragmatism, rule-based systems literally have their own rules and can be completely divorced from all practical concerns, so for all practical purposes they have a wholly independent epistemology based on rules instead of effects. This theory of knowledge is called rationalism, and it holds that reason (i.e. logic) is the chief source of knowledge. Put another way, where pragmatism uses both causative and pattern-analysis approaches to create information, reason uses only the logical, causative approach, though it leverages axioms derived from both causative and pattern-based knowledge. A third epistemology is empiricism, which holds that knowledge comes only or primarily from sensory experience. Empiricism is also a subset of pragmatism; it differs in that it pushes where pragmatism pulls. In other words, empiricism says that knowledge is created as stimuli come in, while pragmatism says it arises as actions and effects go out. The actions and effects do ultimately depend on the inputs, and so pragmatism subsumes empiricism, which is not prescriptive about how the inputs (evidence) might be used. In science, the word empiricism is taken to mean rationalism + empiricism, i.e. scientific theory and the evidence that supports it, so one can say that rationalism is the epistemology of theoretical science and empiricism is the epistemology of applied science.

Mathematics and highly mathematical physical theories are often studied on an entirely theoretical basis, with considerations as to their applicability left for others to contemplate. The study of algorithms is mostly theoretical as well because their objectives are established artificially, so they can’t be faulted for inapplicability to real-world situations. Developing algorithms can’t, in and of itself, explain the mind, because even if the mind does employ an algorithm (or constellation of algorithms), the applicability of those algorithms to the real-world problems the mind solves must be established. But iteratively we can propose algorithms and tune them so that they do align with problems the mind seems to solve. Guessing at algorithms will never reveal the exact algorithm the mind or brain uses, but that’s ok. Scientists never discover the exact laws of nature; they only find rules that work in all or most observed situations. What we end up calling an understanding or explanation of nature is really just a framework of generalizations that helps us predict certain kinds of things. Arguably, laws of nature reveal nothing about the “true” nature of the universe. So it doesn’t matter whether the algorithms we develop to explain the mind have anything to do with what the mind is “actually” doing; to the extent they help us predict what the mind will do they will provide us with a greater understanding of it, which is to say an explanation of it.

Because proposing algorithms, or outlines of potential algorithms, and then testing them against empirical evidence is entirely consistent with the way science is practiced (i.e. empiricism), this is how I will proceed. But we can’t just propose algorithms at random; we will need a basis for establishing appropriate artificial objectives, and that basis has to be related to what it is we think minds are up to. This is exactly the feedback loop of the scientific method: propose a hypothesis, test it, and refine it ad infinitum. The available evidence informs our choice of solution, and the effectiveness of the solution informs how we refine or revise it. From the high level at which I approach this subject in this book, I won’t need to be very precise in saying just how the algorithms work because that would be premature. All we can do at this stage is provide a general outline for what kinds of skills and considerations are going into different aspects of the thought process. Once we have come to a general agreement on that, we can start to sweat the details.
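To make the shape of that loop concrete, here is a minimal sketch of propose-test-refine, with a hypothetical hypothesis space and toy evidence standing in for real observations of minds:

```python
# A minimal propose-test-refine loop. The candidate hypotheses and the
# "evidence" are hypothetical stand-ins; the point is the feedback cycle,
# not the particular rules.

def test(hypothesis, evidence):
    """Score a hypothesis by how reliably it predicts the evidence."""
    return sum(hypothesis(x) == y for x, y in evidence) / len(evidence)

evidence = [(1, 2), (2, 4), (3, 6), (4, 8)]          # observations (x, y)
candidates = [lambda x: x + 1, lambda x: 2 * x, lambda x: x ** 2]

best, best_score = None, 0.0
for h in candidates:                                  # propose
    score = test(h, evidence)                         # test
    if score > best_score:                            # refine: keep the most reliable
        best, best_score = h, score

print(best_score)   # 1.0 for the x -> 2x hypothesis
print(best(10))     # use the surviving hypothesis to predict: 20
```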

While my approach to the subject will be scientifically empirical, we need to remember that the mind itself is primarily pragmatic and only secondarily capable of reason (or intuition) to support that pragmatism. So my perspective for studying the mind is not itself the way the mind principally works. This isn’t a problem so long as we keep it in mind: we are using a reasoned approach to study something that itself uses a highly integrated combination of reason and intuition (basically causation and pattern). It would be disingenuous to suggest that I have freed myself of all possible biases in this quest and that my conclusions are perfectly objective; even established science can never be completely free of biases. But over time science can achieve ever more effective predictive models, which speaks to the ultimate standard for objectivity: can results be duplicated? Yet the hallmark of objectivity is not its measure but its methods: logic and reason. The conclusions one reaches through logic using a system of rules built on postulates can be provably true, contingent on the truth of the postulates, which makes it a very powerful tool. Although postulates are true by definition from the perspective of the logical model that employs them, they have no absolute truth in the physical world, because our direct knowledge of the physical world is always based on evidence from individual instances and not on generalities across similar instances. So truth in the physical world (as we see it from the mental world) is always a matter of degree, the degree to which we can correlate a given generality to a group of phenomena. That degree depends both on the clarity of the generalization and on the quality of the evidence, and so is always approximate at best, but it can often be close enough to a perfect correlation to be taken as truth (for practical purposes). Exceptions to such truths are often seen more as “shortcomings of reality” than as shortcomings of the truth, since truth (like all concepts) exists more in a functional sense than in the sense of having a perfect correlation to reality.

But how can we empirically approach the study of the mind? If we accept the idea that the mind is principally a functional entity, it is largely pointless to look for physical evidence of its existence beyond establishing the physical mechanism (the brain) that supports it. This is because physical systems can make information management possible but can’t explain all the uses to which the information can be put, just as understanding the hardware of the internet says nothing about the information flowing through it. We must instead look at the functional “evidence.” We can never get direct evidence, being facts or physical signs, of function (because function has no form), so we either need to look at physical side effects or develop a way to see “evidence” of function directly, independent of the physical. Behavior provides the clearest physical evidence of mental activity, but our more interesting behavior results from complex chains of thought and can’t be linked directly to stimulus and response. Next, we have personal evidence of our own mind from our own experience of it. This evidence is much more direct than behavioral evidence but has some notable shortcomings as well. Introspection has a checkered past as a tool for studying the mind. Early hopes that introspection might be able to qualitatively and quantitatively describe all conscious phenomena were overly optimistic, largely because they misunderstood the nature of the tool. Our conscious minds have access to information based both on causation and on pattern analysis, but our conscious awareness of this information is filtered through an interpretive layer that generalizes the information into conceptual buckets. So these generalized interpretations are not direct evidence but, like behavior, downstream effects of information processing. Even so, our interpretations can provide useful clues even if they can’t be trusted outright. Freud was too quick to attach significance to noise in his interpretation of dreams, as we have no reason to assume that the content of dreams serves any function. Many activities of the mind do serve a function, however, so we can study them from the perspective of those functions. As the conscious mind makes a high-level decision, it will access functionally relevant information packaged in a form that the conscious subprocess can handle, which is at least partially in the form of concepts or generalizations. These concepts are the basis of reason (i.e. rationality), so to the extent our thinking is rational, our interpretation of how we think is arguably exactly how we think (because we are conscious of it). But that extent is never complete, because our concepts draw on a vast pool of subconscious information which heavily colors how we use them, and because we also use subconscious data-analysis algorithms (most notably memory/recognition). For both of these reasons, any conscious interpretation will only be approximate and may cause us to overlook or completely misinterpret our actual motivations (which we may have further motivations to suppress).

While both behavior and introspection can provide evidence that can suggest or support models of the mind, they are pretty indirect and can’t provide very firm support for those models. But another way to study function is to speculate about what function is being performed. Functionalism holds that the defining characteristics of mental states are the functions they bring about, quite independent of what we think about those functions (introspectively) or whether we act on them (behaviorally). This is the “direct” study of function independent of the physical to which I alluded. Speculation about function, aka the study of causes and effects, is an exercise of logic. It depends on setting up an idealized model with generalized components that describes a problem. These components don’t exist physically but are exemplars that embody only the properties of their underlying physical referents that are relevant to the situation. Given the existence of these exemplars (including their associated properties) as postulates, we can then reason about what behavior we can expect from them. Within such a model, function can be understood very well or even perfectly, but it is never our expectation that these models will align perfectly with real-world situations. What we hope for is that they will match well enough that predictions made using the model will come true in the real world. Our models of the functions of mental states won’t exactly describe the true functions of those mental states (if we could ever discover them), but they will still be good explanations of the mind if they are good at predicting the functions our minds perform.

Folk explanations differ from scientific explanations in the breadth and reliability of their predictive power. While there are unlimited folk perspectives we can concoct to explain how the mind works, all of which will have some value in some situations, scientific perspectives (theories) seek a higher standard. Ideally, science can make perfect predictions, and in many physical situations it nearly does. Less ideally, science should at least be able to make predictions with odds better than chance. The social sciences usually have to settle for such a reduced level of certainty because people, and the circumstances in which they become involved, are too complex for any idealized model to describe. So how, then, can we distinguish bona fide scientific efforts in matters involving minds from pseudoscience? I will investigate this question next.

4. What Makes Knowledge Objective?

It is easier to define subjective knowledge than objective knowledge. Subjective knowledge is anything we think we know, and it counts as knowledge as long as we think it does. We set our own standard. It starts with our memory; a memory of something is knowledge of it. Our minds don’t record the past for its own sake but for its potential to help us in the future. From past experience we have a sense of what kinds of things we will need to remember, and these are the details we are most likely to commit to memory. This bias aside, our memory of events and experiences is fairly automatic and has considerable fidelity. The next level of memory is of our reflections: thoughts we have had about our experiences, memories, and other thoughts. I call these two levels of memory and knowledge detailed and summary. There is no exact line separating the two, but details are kept as raw and factual as possible, while summaries are higher-order interpretations that derive uses for the details. It takes some initial analysis, mostly subconscious, to study our sensory data so we can even represent details in a way that we can remember. Summaries are a subsidiary analysis of details and other summary information performed using both conscious (reasoned) and subconscious (intuitive) methods. These details and summaries are what we know subjectively.

We are designed to gather and use knowledge subjectively, so where does objectivity come in? Objectivity creates knowledge that is more reliable and broadly applicable than subjective knowledge. Taken together, reliability and broad applicability account for science’s explanatory power. After all, to be powerful, knowledge must both fit the problem and do so dependably. Objective approaches let us create physical and social technologies that manage goods and services to high standards. How can we create objective knowledge that can do these things? As I noted above, it’s all about the methods. Not all methods of gathering information are equally effective. Throughout our lives, we discover better ways of doing things, and we will often use these better ways again. Science makes more of an effort to identify and leverage methods that produce better information, i.e. information with reliability and broad applicability. These methods are collectively called the “scientific method”. It isn’t one method but an evolving set of best practices. These practices are only intended to bring some order to the pursuit and do not presume to cover everything; in particular, they say nothing about the creative process and do not seek to constrain the flow of ideas. The scientific method is a technology of the mind, a set of heuristics to help us achieve more objective knowledge.

The philosophy of science is the conviction that an objective world independent of our perceptions exists and that we can gain an understanding of it that is also independent of our perceptions. Though it is popularly thought that science reveals the “true” nature of reality, it has been and must always be a level removed from reality. An explanation or understanding of the world will always be just one of many possible descriptions of reality and never reality itself. But science doesn’t seek a multitude of explanations. When more than one explanation exists, science looks for common ground between them and tries to express them as varying perspectives on the same underlying thing. For example, wave-particle duality allows particles to be described both as particles and as waves. Both descriptions work and provide explanatory power, even though we can’t imagine macroscopic objects being both at the same time. We are left with little intuitive feel for the nature of reality, which serves to remind us that the goal of objectivity is not to see what is actually there but to gain the most explanatory power over it that we can. The canon of generally accepted scientific knowledge at any point in time will be considered charming, primitive, and not terribly powerful when looked back on a century or two later, but this doesn’t diminish its objectivity or claim on success.

That said, the word “objectivity” hints at certainty. While subjectivity acknowledges the unique perspective of each subject, objectivity is ostensibly entirely about the object itself, its reality independent of the mind. If an object actually did exist, any direct knowledge we had of it would then remain true no matter which subject viewed it. This goal, knowledge independent of the viewer, is admirable but unattainable. Any information we gather about an object must always ultimately depend on observations of it, either with our own senses or using instruments we devise. And no matter how reliable that information becomes, it is still just information, which is not the object itself but only a characterization of traits with which we ultimately predict behavior. So despite its etymology, we must never confuse objectivity with “actual” knowledge of an object, which is not possible. Objectivity only characterizes the reliability of knowledge based on the methods used to acquire it.

With those caveats out of the way, a closer look at the methods of science will show how they work to reduce the influence of personal opinion and maximize the likelihood of reliable reproduction of results. Below I list the principal components of the scientific method, from most to least helpful (approximately) in fulfilling its mission of objectivity.

    1. The refinement of hypotheses. This cornerstone of the scientific method is the idea that one can propose a rule describing how kinds of phenomena will occur, and that one can test this rule and refine it to make it more reliable. While it is popularly thought that scientific hypotheses are true until proven otherwise (i.e. falsified, as Karl Popper put it), we need to remember that the product of objective methods, including science, is not truth but reliability6. It is not so much that laws are true or can be proven false as that they can be relied on to predict outcomes in similar situations; I will say more below about how far such reliability can be pushed and what limits it. Some aspects of mental function will prove to be highly predictable while others will be more chaotic, but our standard for scientific value should still be explanatory power.
    2. Scientific techniques. This most notably includes measurement via instrumentation rather than use of senses. Instruments are inherently objective in that they can’t have a bias or opinion regarding the outcome, which is certainly true to the extent the instruments are mechanical and don’t employ computer programs into which biases may have been unintentionally embedded. However, they are not completely free from biases or errors in how they are used, and also there are limits in the reliability of any instrument, especially at the limits of their operating specifications. Scientific techniques also include a wide variety of practices that have been demonstrated to be effective and are written up into standard protocols in all scientific disciplines to increase the chances that results can be replicated by others, which is ultimately the objective of science.
    3. Critical thinking. I will define critical thinking here without defense, as defending it requires a more detailed understanding of the mind than I have yet provided. Critical thinking is an effort to employ objective methods of thought with proven reliability while excluding subjective methods known to be more susceptible to bias. Next, I distinguish five of the most significant components of critical thinking:

3a. Rationality. Rationality is, in my theory of the mind, the subset of thinking concerned with applying causality to concepts, aka reasoning. As I noted in The Mind Matters, thinking and the information that is thought about divide into two camps: reason, which manages information derived using a causative approach, and intuition, which manages information derived using a pattern-analysis approach. Both approaches are used to some degree for almost every thought we have, but it is often useful to focus on one of them as the sole or predominant one for the purpose of analysis. The value of the rational approach over the intuitive is its reproducibility, which is the primary objective of science and the knowledge it seeks to create. Because rational techniques can be written down to characterize both starting conditions and all the rules and conclusions they imply, they have the potential to be very reliable.

3b. Inductive reasoning. Inductive reasoning extrapolates patterns from evidence. While science seeks causative links, it will settle for statistical correlations if it has to. Newton used inductive reasoning to posit gravity, which Einstein’s theory of general relativity later gave a cause as a deformation of spacetime geometry.

3c. Abductive reasoning. Abductive reasoning seeks the simplest and most likely explanations, which is a pattern matching heuristic that picks kinds of matches that tend to work out best. Occam’s Razor is an example of this often used in science: “Among competing hypotheses, the one with the fewest assumptions should be selected”.

3d. Open-mindedness. Closed-mindedness means having a fixed strategy to deal with any situation. It enables a confident response in any circumstance, but works badly when used beyond the conditions those strategies were designed to handle. Open-mindedness is an acceptance of the limitations of one’s knowledge, along with a curiosity about exploring those limitations to discover better strategies. While everyone must be open-minded in situations where ignorance is unavoidable, one hopes to develop sufficient mastery over most of the situations one encounters to be able to act confidently in a closed-minded way without fear of making a mistake. While this is often possible, the scientist must always remember that perfect knowledge is unattainable and be alert for possible cracks in one’s knowledge. These cracks should be explored with objective methods to discover more reliable knowledge and strategies than one might already possess. By acknowledging the limits and fallibility of its approaches and conclusions, science can criticize, correct, and improve itself. Thus, more than just a bag of tricks for moving knowledge forward, science is characterized by a willingness to admit to being wrong.

3e. Countering cognitive biases. More than just prejudice or closed-mindedness, cognitive biases are subconscious pattern-analysis algorithms that usually work well for us but are less reliable than objective methods. The insidiousness of cognitive biases was first exposed by Tversky and Kahneman in their 1971 paper, “Belief in the law of small numbers.”8,9 Cognitive biases use pattern analysis to lead us to conclusions based on correlations and associations rather than causative links. They are not simply inferior to objective methods, because they can account for indirect influences that objective methods can overlook. But robust causative explanations are always more reliable than associative explanations, and in practice they tend to be right where biases are wrong (where “right” and “wrong” are taken not as absolutes but as expressions of very high and low reliability). A small simulation of the law of small numbers appears after this list.

    4. Peer review. Peer review is the evaluation of a scientific work by one or more people of similar competence to assess whether it was conducted using appropriate scientific standards.
    5. Credentials. Academic credentials attest to the completion of specific education programs. Titular credentials, publication history, and reputation add to a researcher’s credibility. While no guarantee, credentials help establish an author’s scientific reliability.
    6. Pre-registration. A recently added best practice is pre-registration, which clears a study for publication before it has been conducted. This ensures that the decision to publish is not contingent on the results, which would be a bias10.
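As promised under 3e, here is a small simulation of the law of small numbers. The sample sizes and the 70% “extreme” threshold are arbitrary choices of mine, but the effect is the one Tversky and Kahneman described: small samples routinely produce streaks that intuition misreads as real patterns:

```python
# Why small samples invite overinterpretation: a fair coin frequently
# looks "biased" in ten flips but almost never in a hundred.

import random

def extreme_rate(sample_size, trials=10_000):
    """How often a fair coin shows >= 70% heads or >= 70% tails."""
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        fraction = heads / sample_size
        if fraction >= 0.7 or fraction <= 0.3:
            extreme += 1
    return extreme / trials

print(extreme_rate(10))    # roughly 0.34: extreme-looking results are common
print(extreme_rate(100))   # nearly 0: extreme results all but vanish
```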

The physical world is not itself a rational place, because reason has a functional existence, not a physical existence. So rational understanding, and consequently what we think of as truth about the physical world, depends on the degree to which we can correlate a given generality to a group of phenomena. But how can we expect a generality (i.e. hypothesis) that worked for some situations to work for all similar situations? The Standard Model of particle physics professes (with considerable success) that any two subatomic particles of the same kind are identical for all predictive purposes except for occupying a different location in spacetime.11 Maybe they are identical (despite this being impossible to prove), and this helps account for the many consistencies we observe in nature. But location in spacetime is a big wrinkle. The three-body problem remains insoluble in the general case, and solving for the movements of all astronomical bodies in the solar system is considerably more so. Predictive models of how large groups of particles will behave (e.g. for climate) will always just be models, for which reliability is the measure and falsifiability is irrelevant. Particles are not simply free-moving; they clump into atoms and molecules in pretty strict accordance with laws of physics and chemistry that have been elaborated pretty well. Macroscopic objects, whether occurring in nature or manufactured to serve specific purposes, seem to obey many rules with considerably more fidelity than free-moving weather systems, a fact upon which our whole technological civilization depends. Still, in most real-world situations many factors limit the exact alignment of scientific theory to circumstances, e.g. impurities, the ability to acquire accurate data, and subsidiary effects beyond the primary theory being applied. Even so, by controlling the conditions adequately, we can build many things that work very reliably under normal operating conditions. The question I am going to explore in this book is whether scientific, rational thought can be successfully applied to function and not just form, and specifically to the mental function comprising our minds. Are some aspects highly predictable while others remain chaotic?

We have to keep in mind just how much we take the correlation of theory to reality for granted when we move above the realm of subatomic particles. No two apples are alike, nor any two gun parts, though Eli Whitney’s success with interchangeable parts has led us to think of them as being so. They are interchangeable once we slot them into a model or hypothesis, but in reality any two macroscopic objects have many differences between them. A rational view of the world breaks down as the boundaries between objects become unclear and imperfections mount. Is a blemished or rotten apple still an apple? What about a wax apple or a picture of an apple? Is a gun part still a gun part if it doesn’t fit? A hypothesis that is completely logical and certain will still have imperfect applicability to any real-world situation because the objects that comprise it are idealized, and the world is not ideal. But still, in many situations this uncertainty is small, often vanishingly small, which allows us to build guns and many other things that work very reliably under normal operating conditions.

How can we mitigate subjectivity and increase objectivity? More observations from more people help, preferably with instruments, which are much more accurate and bias-free than senses. This addresses evidence collection, but it is not so easy to increase objectivity in strategizing and decision-making. These are functional tasks, not matters of form, and so are fundamentally outside the physical realm and not subject to observation. Luckily, formal systems follow internal rules and not subjective whims, so to the degree we use logic we retain our objectivity. But this can only get us so far, because we still have to agree on the models we are going to use in advance, and our preference for one model over another ultimately has subjective aspects. To the degree we use statistical reasoning, we can improve our objectivity by using computers rather than innate or learned skills. Statistical algorithms exist that are quite immune to preference, bias, and fallacy (though again, deciding what algorithm to use involves some subjectivity). But we can’t yet program a computer to do logical reasoning on a par with humans. So we need to examine how we reason in order to find ways to be more objective about it, so we can be objective when we start to study it. It’s a catch-22: we have to understand the mind before we can figure out how to understand it. If we rush in without establishing a basis for objectivity, then everything we do will be a matter of opinion. While there is no perfect formal escape from this problem, we informally overcome this bootstrapping problem with every thought through the power of assumption. An assumption, logically called a proposition, is an unsupported statement which, if taken to be true, can support other statements. All models are built using assumptions. While the model will ultimately only work if the assumptions are true, we can build the model and start to use it in the hope that the assumptions will hold up. So can I use a model of how the mind works, built on the assumption that I was being objective, to then establish the objectivity I need to build the model? Yes. The approach is a bit circular, but that isn’t the whole story. Bootstrapping is superficially impossible, but in practice it is just a way of building up a more complicated process through a series of simpler processes: “at each stage a smaller, simpler program loads and then executes the larger, more complicated program of the next stage”. In our case, we need to use our minds to figure out our minds, which means we need to start with some broad generalizations about what we are doing and then start using those, then move to a more detailed but still agreeable model and start using that, and so on. So yes, we can only start filling in the details, even regarding our approach to studying the subject, by establishing models and then running them. While there is no guarantee it will work, we can be guaranteed it won’t work if we don’t go down this path. This approach is not provably correct, but nothing in nature can be proven; all we can do is develop hypotheses and test them. By iterating on the hypotheses and expanding them with each pass, we bootstrap them to greater explanatory power. Looking back, I have already done the first (highest-level) iteration of bootstrapping by endorsing form & function dualism and the idea that the mind consists of processes that manage information.
For the next iteration, I will propose an explanation for how the mind reasons, which I will then use to support arguments for achieving objectivity.

So then, from a high level, how does reasoning work? I presume a mind that starts out with some innate information processing capabilities and a memory bank into which experience can record learned information and capabilities. The mind is free of memories (a blank slate) when it first forms but is hardwired with many ways to process information (e.g. senses and emotions). Because our new knowledge and skills (stored in memory) build on what came before, we are essentially continually bootstrapping ourselves into more capable versions of ourselves. I mention all this because it means that the framework with which we reason is already highly evolved even from the very first time we start making conscious decisions. Our theory of reasoning has to take into account the influence of every event in our past that changed our memory. Every event that even had a short-term impact on our memory has the potential for long-term effects because long-term memories continually form and affect our overall impressions even if we can’t recall them specifically.

One could view the mind as a morass of interconnected information that links every experience or thought to every other. That view won’t get us very far because it gives us nothing to manipulate, but it is true, and any more detailed views we develop should not contradict it. But on what basis can we propose to deconstruct reasoning if the brain has been gradually accumulating and refining a large pool of data for many years? On functional bases, of which I have already proposed two: logical and statistical, which I introduced above with pragmatism. Are these the only two approaches that can aid prediction? Supernatural prophecy is the only other way I can think of, but we lack reliable (if any) access to it, so I will not pursue it further. Just knowing that the mind, however it might work, is using logical and/or statistical techniques to accomplish its goals gives us a lot to work with. First, it would make sense, and I contend that it is true, that the mind uses both statistical and logical means to solve any problem, using each to the maximum degree it helps. In brief, statistical means excel at establishing the assumptions and logical means at drawing out conclusions from the assumptions.
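A toy sketch can illustrate this division of labor; the observations and the single rule below are hypothetical, but they show statistics establishing a premise and logic drawing out its consequence:

```python
# Statistics establishes the assumption; logic draws the conclusion.
# Observations and the inference rule are hypothetical toy inputs.

observations = ["wings", "wings", "wings", "no wings", "wings"]

# Statistical step: generalize from frequency to a working assumption.
p_wings = observations.count("wings") / len(observations)
assumption = p_wings > 0.5            # "this kind of animal has wings"

# Logical step: apply a rule to the assumption to reach a conclusion.
# Rule: anything with wings can plausibly fly.
def conclude(has_wings):
    return "can probably fly" if has_wings else "probably cannot fly"

print(conclude(assumption))   # statistics set the premise; logic did the rest
```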

While we can’t yet say how neurons make reasoning possible, we can say that it uses statistics and logic, and from our knowledge of the kinds of problems we solve and how we solve them, we can see more detail about what statistical and logical techniques we use. Statistically, we know that all our experience contributes supporting evidence to generalizations we make about the world. More frequently used generalizations come to mind more readily than lesser-used ones and are sometimes also associated with words or phrases, as with the concept APPLE. An APPLE could be a specimen of fruit of a certain kind, or a reproduction or representation of such a specimen, or part of a metaphor or simile, in which the APPLE concept helps illustrate something else. We can use innate statistical capabilities to recognize something as an APPLE by correlating the observed (or imagined) aspects of that thing against our large database of every encounter we have ever had with APPLES. It’s a lot of analysis, but we can do it instantly with considerable confidence. Our concepts are defined by the union of our encounters, not by dictionaries. Dictionaries just summarize words; but words are generalizations, and generalizations are summaries, so dictionaries are effective precisely because they summarize well. Brains, though, are like dictionaries on steroids; our summaries of the assumptions and rules behind our concepts and models are much deeper and were reinforced by every affirming or opposing interaction we ever had. Again, most of this is innate: we generalize, memorize, and recognize whether we want to or not, using built-in capacities. Consciousness plays an important role I will discuss later, but “sees” only a small fraction of the computational work our brains do for us.
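A crude sketch of this kind of recognition might look like the following, where stored encounters are feature sets and recognition is just a matter of best overlap (the features and scoring are hypothetical simplifications of what is surely a far richer subconscious process):

```python
# Recognition as statistical correlation against stored encounters,
# scored here by naive feature overlap. A deliberate oversimplification.

encounters = [
    {"label": "APPLE", "features": {"red", "round", "stem", "sweet"}},
    {"label": "APPLE", "features": {"green", "round", "stem", "tart"}},
    {"label": "BASEBALL", "features": {"white", "round", "stitched"}},
]

def recognize(observed):
    """Return the label whose stored encounter best overlaps the observation."""
    def score(encounter):
        return len(observed & encounter["features"])
    return max(encounters, key=score)["label"]

print(recognize({"red", "round", "sweet"}))       # APPLE
print(recognize({"white", "stitched", "round"}))  # BASEBALL
```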

Let’s move on to logical abilities. Logic operates in a formal system, which is a set of assumptions or axioms and rules of inference that apply to them. We have some facility for learning formal systems, such as the rules of arithmetic, but everyday reasoning is not done using formal systems for which we have laid out a list of assumptions and rules. And yet the formal systems must exist, so where do they come from? The answer is that we have an innate capacity to construct mental models, which serve as both informal and formal systems. They are informal on many levels, which I will get into, but they also meet the formal need required for their use in logic. How many mental models (models, for short) do we have in our heads? Looked at most broadly, we each have one, being the whole morass of all the information we have ever processed. But it is not very helpful to take such a broad view, nor is it compatible with our experience of using mental models. Rather, it makes sense to think of a mental model as the fairly small set of assumptions and rules that describe a problem we typically encounter. So we might have a model of a tree or of the game of baseball. When we want to reason about trees or baseball, we pull out our mental model and use it to draw logical conclusions. From the rules of trees, we know trees have a trunk with ever smaller branches branching off, and leaves that usually fall off in the winter. From the rules of baseball, we know that an inning ends on the third out. Referring back a paragraph, we can see that models and concepts are the same things — they are generalizations, which is to say assessments that combine a set of experiences into a prototype. Though built from the same data, models and concepts have different functional perspectives: models view the data from the inside, as the framework in which logic operates, and concepts view it from the outside, as the generalized meaning it represents.
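A mental model in this sense can be made concrete with a tiny example. The sketch below (deliberately minimal) encodes the inning rule from the text as a small formal system whose conclusions are certain within the model:

```python
# A toy formal system for half an inning of baseball: one axiom, one rule of
# inference, and conclusions that hold with certainty inside the model.

class InningModel:
    OUTS_TO_END = 3            # axiom: the third out ends the half-inning

    def __init__(self):
        self.outs = 0

    def record_out(self):
        self.outs += 1

    def over(self):
        # Within the model this inference is certain, because we made the rules.
        return self.outs >= self.OUTS_TO_END

inning = InningModel()
for _ in range(3):
    inning.record_out()
print(inning.over())   # True: an inning ends on the third out
```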

While APPLE, TREE, and BASEBALL are individual concepts/models, no two instances of them are the same. Any two apples must differ at least in time and/or place. When we use a model for a tree (let’s call it the model instance), we customize the model to fit the problem at hand. So for an evergreen tree, for example, we will think of needles as a degenerate or alternate form of leaves. Importantly, we don’t consciously reason out the appropriate model for the given tree; we recognize it using our innate statistical capabilities. A model or concept instance is created through recognition of underlying generalizations we have stored from long experience, and then tweaked on an ad hoc basis (via further recognition and reflection) to add unique details to this instance. Reflection can be thought of as a conscious tool to augment recognition. So a typical model instance will be based on recognition of a variety of concepts/models, some of which will overlap and even contradict each other. Every model instance thus contains a set of formal systems, so I generally call it a constellation of models rather than a model instance.

We reason with a model constellation by using logic within each component model and then using statistical means to weigh the models against each other. The critical aspect of the whole arrangement is that it sets up formal systems in which logic can be applied. Beyond that, statistical techniques provide the huge amount of flexibility needed to line formal systems up with real-world situations. The whole trick of the mind is to represent the external world with internal models and to run simulations on those models to predict what will happen externally. We know that all animals have some capacity to generalize to concepts and models because their behavior depends on being able to predict the future (e.g. where food will be). Most animals, but humans in particular, can extend their knowledge faster than their own experience allows by sharing generalizations with others via communication and language, which have genetic cognitive support. And humans can extend their knowledge faster still through science, which formally identifies objective models.
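As an illustration of this interplay, here is a hedged sketch in which logic runs inside each component model of a constellation while invented statistical weights arbitrate between their conclusions:

```python
# Logic inside each model; statistics weighs the models against each other.
# The weights stand in for how strongly recognition favors each model here.

def deciduous_model(month):
    return "has leaves" if 4 <= month <= 10 else "bare branches"

def evergreen_model(month):
    return "has needles"

constellation = [
    (0.7, deciduous_model),   # recognition says "probably deciduous"
    (0.3, evergreen_model),
]

def predict(month):
    scores = {}
    for weight, model in constellation:
        outcome = model(month)                       # logical inference
        scores[outcome] = scores.get(outcome, 0) + weight  # statistical weighing
    return max(scores, key=scores.get)

print(predict(7))    # 'has leaves'
print(predict(12))   # competing conclusions; the weights decide
```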

So what steps can we take to increase the objectivity of what goes on in our minds, which has some objective elements in its use of formal models but also many subjective elements that help form and interpret those models? Devising software that could run mental models would help because it could avoid fallacies and guard against biases. It would still ultimately need to prioritize using preferences, which are intrinsically subjective, but we could at least try to be careful and fair in setting them up. And although such software could curb the abuses of bias, we have to remember that all generalizations are a kind of bias, being arguments for one way of organizing information over another. We can’t yet write software that can manage concepts or models, but machine learning algorithms, which are statistical in nature, are advancing quickly. They are becoming increasingly generalized and behave in ever more “clever” ways. Since concepts and models are themselves statistical entities at their core, we will need to leverage machine learning as a starting point for software that simulates the mind.

Still, there is much we can do to improve our objectivity of thought short of replacing ourselves with machines, and science has been refining methods to do it from the beginning. Science’s success depends critically on its objectivity, so it has long tried to reject subjective biases. It does this principally by cultivating a culture of objectivity. Scientists try to put opinion aside to develop hypotheses in response to observations. They then test them with methods that can be independently confirmed. Scientists also use peer review to increase independence from subjectivity. But what keeps peers from being subjective? In his 1962 classic, The Structure of Scientific Revolutions12, Thomas Kuhn noted that even a scientific community that considers itself objective can become biased toward existing beliefs and will resist shifting to a new paradigm until the evidence becomes overwhelming. This observation inadvertently opened a door through which postmodern deconstructionists launched the science wars, an argument that sought to undermine the objective basis of science by calling it a social construction. To some degree this is undeniable, which has left science in desperate need of a firmer foundation. The refutation science has fallen back on for now was best put by Richard Dawkins, who noted in 2013 that “Science works, bitches!”13 Yes, it does, but until we establish why, we are blustering much like the social constructionists. The reason science works is that scientific methods increase objectivity while reducing subjectivity and relativism. It doesn’t matter that they don’t (and in fact can’t) eliminate subjectivity altogether. All that matters is that they reduce it, which distinguishes science from social construction by directing it toward goals. Social constructions go nowhere, but science creates an ever more accurate model of the world. So, yes, science is a social construction, but one that continually moves closer to truth, if truth is defined in terms of knowledge that can be put to use. In other words, from a functional perspective, truth just means increasing the amount and quality of useful information. It is not enough for scientific communities to assume that best efforts will produce objectivity; we must also discover how preferences, biases, and fallacies can mislead the whole community. Tversky and Kahneman did groundbreaking work exposing the extent of cognitive biases in scientific research, most notably in their 1971 paper, “Belief in the law of small numbers.”14,15 Beyond just being aware of biases, scientists should not have to work in situations with a vested interest in specific outcomes. This can happen in both public and private settings, but it is more commonly a problem when science is used to justify a commercial enterprise.

5. Orienting science (esp. cognitive science) with form & function dualism and pragmatism

The paradigm I am proposing to replace physicalism, rationalism, and empiricism is a superset of them. Form & function dualism embraces everything physicalism stands for but doesn’t exclude function as a form of existence. Pragmatism embraces everything rationalism and empiricism stand for but also includes knowledge gathered from statistical processes and function.

But wait, you say, what about biology and the social sciences: haven’t they been making great progress within the current paradigm? Well, they have been making great progress, but they have been doing it under an unarticulated paradigm. Since Darwin, biology has pursued a function-oriented approach. Biologists examine all biological systems with an eye to the function they appear to serve, and they consider the satisfaction of function to be an adequate scientific justification, but it isn’t one under physicalism, rationalism, or empiricism. Biologists cite Darwin and evolution as justification for this kind of reasoning, but that alone doesn’t make it science. The theory of evolution is unsupportable under physicalism, rationalism, and empiricism alone, but instead of acknowledging this metaphysical shortfall, some scientists just ignore evolution and reasoning about function while others embrace it without being overly concerned that it falls outside the scientific paradigm. Evolutionary function occupies a somewhat confusing place in reasoning about function because it is not teleological: evolution is not directed toward an end or shaped by a purpose but is a blind process without a goal. But this is irrelevant from an informational standpoint because information never directs toward an end anyway; it just helps predict. Goals are artifacts of formal systems, and so contribute to logical but not statistical information management techniques. In other words, goals and logic are imaginary constructs; they are critical for understanding the mind but can be ignored for studying evolution and biology, which has allowed biology to carry on despite this weakness in its foundation.

The social sciences, too, have been proceeding on an unarticulated paradigm. Officially, they try to stay within the bounds of physicalism, rationalism, and empiricism, but the human mind introduces a black box, which is what scientists call a part of the system that is studied entirely through its inputs and outputs without any attempt to explain the inner workings. Some efforts to explain it have been attempted. Pavlov and Skinner proposed that behaviorism could explain the mind as nothing more than conditioning, which sounded good at first but didn’t explain all that minds do. Chomsky refuted it in a rebuttal to Skinner’s Verbal Behavior by explaining how language acquisition leverages innate linguistic talents16. And Piaget extended the list of innate cognitive skills by developing his staged theory of intellectual development. So we now have good reason to believe the mind is much more than conditioned behavior and employs reasoning and subconscious know-how. But that is not the same thing as having an ontology or epistemology to support it. Form & function dualism and pragmatism give us the leverage to separate the machine (the brain) from its control (the mind) and to dissect the pieces.

Expanding the metaphysics of science has a direct impact across science, not just regarding the mind. First, it finds a proper home for the formal sciences in the overall framework. As Wikipedia says, “The formal sciences are often excluded as they do not depend on empirical observations.” Next, and critically, it provides a justification for the formal sciences to serve as the foundation for the other sciences, which depend on mathematics, not to mention logic and hypotheses themselves. Under physicalism, rationalism, and empiricism alone, there is no metaphysical justification for invoking the formal sciences in this supporting role. With my paradigm, the justification becomes clear: function plays an indispensable role in the way the physical sciences leverage generalizations (scientific laws) about nature. In other words, scientific theories are from the domain of function, not form. Next, it explains the role evolutionary thinking is already playing in biology, because it reveals how biological mechanisms use information stored in DNA to control life processes through feedback loops. Finally, this expanded framework will ultimately let the social sciences shift from black boxes to knowable quantities.

But my primary motivation for introducing this new framework is to provide a scientific perspective for studying the mind, which is the domain of cognitive science. It will elevate cognitive science from a loose collaboration of sciences to a central role in fleshing out the foundation of science. Historically the formal sciences have been almost entirely theoretical pursuits because formal systems are abstract constructs with no apparent real-world examples. But software and minds are the big exceptions to this rule and open the door for formalists to study how real-world computational systems can implement formal systems. Theoretical computer science is a well-established formal treatment of computer science, but there is no well-established formal treatment for cognitive science, although the terms theoretical cognitive science and computational cognitive science are occasionally used. Most of what I discuss in this book is theoretical cognitive science because most of what I am doing is outlining the logic of minds, human or otherwise, but with a heavy focus on the design decisions that seem to have impacted earthly, and especially human, minds. Theoretical cognitive science studies the ways minds could work, looking at the problem from the functional side, and leaves it as a (big) future exercise to work out how the brain actually brings this sort of functionality to life.

It is worth noting here that we can’t conflate software with function: software exists physically as a series of instructions, while function exists mentally and has no physical form (although, as discussed, software and brains can produce functional effects in the physical world and this is, in fact, their purpose). Drew McDermott (whose class I took at Yale) characterized this confusion in the field of AI like this (as described by Margaret Boden in Mind as Machine):

A systematic source of self-deception was their common habit (made possible by LISP: see 10.v.c) of using natural-language words to name various aspects of programs. These “wishful mnemonics”, he said, included the widespread use of “UNDERSTAND” or “GOAL” to refer to procedures and data structures. In more traditional computer science, there was no misunderstanding; indeed, “structured programming” used terms such as GOAL in a liberating way. In AI, however, these apparently harmless words often seduced the programmer (and third parties) into thinking that real goals, if only of a very simple kind, were being modelled. If the GOAL procedure had been called “G0034” instead, any such thought would have to be proven, not airily assumed. The self-deception arose even during the process of programming: “When you [i.e. the programmer] say (GOAL… ), you can just feel the enormous power at your fingertips. It is, of course, an illusion” (p. 145). 17

This raises the million-dollar question: if an implementation of an algorithm is not itself function, where is the function, i.e. real intelligence, hiding? I am going to develop the answer to this question as the book unfolds, but the short answer is that information management is a blind watchmaker both in evolution and in the mind. That is, from a physical perspective the universe can be thought of as deterministic, so there is no intelligence or free will. But the main thrust of my book is that this doesn’t matter, because algorithms that manage information are predictive, and this capacity is equivalent to both intelligence and free will. So if procedure G0034 is part of a larger system that uses it to effectively predict the future, it can fairly be called by whatever functional name describes this aspect. Such mnemonics are not actually wishful. It is no illusion that the subroutines of a self-driving car that get it to its destination in one piece wield enormous power and achieve actual goals. This doesn’t mean we are ready to start programming goals at the level human minds conceive of them (and certainly not UNDERSTAND!), but function, i.e. predictive power, can be broken down into simple examples and implemented using today’s computers.
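To make the point concrete, consider a toy subroutine in the spirit of G0034 (the numbers and names are invented, and the physics is simplified): nothing about its name changes what it does, but because it genuinely predicts, a functional alias is earned rather than wishful:

```python
# Whether we call this g0034 or can_stop_safely, what earns the functional
# name is that it actually helps predict the future.

def g0034(distance_m, speed_mps, braking_mps2=8.0):
    """Return True if braking now stops the car short of the obstacle."""
    stopping_distance = speed_mps ** 2 / (2 * braking_mps2)
    return stopping_distance < distance_m

# The alias adds nothing to the mechanism, but the mechanism predicts, so the
# goal-flavored name is no illusion here:
can_stop_safely = g0034

print(can_stop_safely(40.0, 20.0))   # 20 m/s needs 25 m to stop: True
print(can_stop_safely(20.0, 25.0))   # 25 m/s needs ~39 m to stop: False
```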

What are the next steps? My main point is that we need to start thinking about how minds achieve function and stop expecting a breakthrough in neurochemistry to magically solve the problem. We have to solve the problem by solving the problem, not by hoping a better understanding of the hardware will explain the software. While the natural sciences decompose the physical world from the bottom up, starting with subatomic particles, we need to decompose the mental world from the top down, starting (and ending) with the information the mind manages.

An Overview of What We Are

[Brief summary of this post]

What are we? Are we bodies or minds or both? Natural science tells us with fair certainty that we are creatures, one type among many, who evolved over the past few billion years in an entirely natural and explainable way. I certainly endorse broad scientific consensus, but this only confirms bodies, not minds. Natural science can’t yet confirm the existence of minds; we can observe the brain, by eye or with instruments, but we can’t observe the mind. Everything we know (or think we know) about the mind comes from one of two sources: our own experience or hearsay. However comfortable we are with our own minds, we can’t prove anything about the experience. Similarly, everything we learn about the world from others is still hearsay, in the sense that it is information that can’t be proven. We can’t prove things about the physical world; we can only develop pretty reliable theories. And knowledge itself, being information and the ability to apply it, only exists in our minds. Some knowledge appears instinctively, and some is acquired through learning (or so it seems to us). Beyond knowledge, we possess senses, feelings, desires, beliefs, thoughts, and perspectives, and we are pretty sure we can recognize these things in others. All of these mental words mean something about our ability to function in the world, and have no physical meaning in and of themselves. And not incidentally, we also have physical words that let us understand and interact with the physical world even though these words are also mental abstractions, being generalizations about kinds or instances of physical phenomena. We can comfortably say (but can’t prove) that we have a very good understanding of a mentally functional existence that is quite independent of our physical existence, an understanding that is itself entirely mentally functional and not physical. It is this mentally functional existence, our mind, that we most strongly identify with. When we are discussing any subject, the “we” doing the discussing is our minds, not our bodies. While we can identify with our bodies and recognize them as an inseparable possession, they, including our brains, are at least logically distinct entities from our minds. We know (from science) that the brain hosts our mind, but that is irrelevant to how we use our minds (excepting issues concerning the care of our heads and bodies) because our thoughts are abstractions not bound (except through indirect reference) to the physical world.

Given that we know we are principally mental beings, i.e. that we exist more from the perspective of function than form, what can we do to develop an understanding of ourselves? All we need to do is approach the question from the perspective of function rather than form. We don’t need to study the brain or the body; we need to study what they do and why. Just as convergent evolution caused eyes to evolve independently an estimated 50-100 times, all our brain functions are evolving because of their value rather than because of their mechanism. Function drives evolution, not form, although form constrains what can be achieved.

But let’s consider the form for a moment before we move on to function. Observations of the brain will eventually reveal how it works in the same way dissection of a computer would. This will illuminate all the interconnections, and even which areas specialize in what kind of tasks. Monitoring neural activation alone could probably even get to the point where one could predict the gist of our thoughts with fair accuracy by correlating areas of neural activity to specific memories and mental states. But that would still be a parlor trick because such a physical reading would not reveal the rationale for the logical relationships in our cognitive models. The physical study of the brain will reveal much about the constraints of the system (the “hardware”), including signal speeds, memory storage mechanisms, and areas of specialized functions, but could it trace our thoughts (the “software”)? To extend the computer analogy, one can study software by doing a memory dump, so a similar memory reading ability for brains could reveal thoughts. But it is not enough to know the software or the thoughts; one needs to know what function is being served, i.e. what the software or thoughts do. A physical examination can’t reveal that; it is a mental phenomenon that can be understood only by reasoning out what it does from a higher-level (generalized) perspective and why. One can figure out what software does from a list of instructions, but one can’t see the larger purposes being served without asking why, which moves us from form to function, from physical to mental. So a better starting point is to ask what function is being served, from which one can eventually back out how the hardware and software do it. Since we are far from being able to decode the hardware or software of the brain (“wetware”) in much detail anyway, I will adopt this more direct functional approach.

From the above, we have finally arrived at the question we need to ask: What function do minds serve? The answer, for which I will provide a detailed defense later on, is that the function of the brain is to provide centralized, coordinated control of the body, and the function of the conscious mind is to provide centralized, coordinated control of the brain. That brains control bodies is, by now, not a very controversial stance. The rest of the body provides feedback to the brain, but the brain ultimately decides. The gut brain does a lot of “thinking” for itself, passing along its hungers and fears, but it doesn’t decide for you. That the conscious mind controls the brain is intuitively obvious but hard to prove given that our only primary information source about the mind is the mind itself, i.e. it is subjective instead of objective. However, if we work from the assumption that the brain controls the body using information management, which is to say the application of algorithms on data, then we can define the mind as what the brain is doing from a functional perspective. That is, the mind is our capacity to do things.

The conscious mind, however, is just a subset of the mind, specifically including everything in our conscious awareness, from sensory input to memories, both at the center of our attention and in a more peripheral state of awareness. We feel this peripheral awareness both because we can tell it is there without dwelling on it and because we often do turn our attention to it, at which point it happily becomes the center. The capacity of our mind to do things is much larger than our conscious awareness, including all things our brains can do for which we don’t consciously sense the underlying algorithm. Statistically, this includes almost everything our brains do. The things we use our minds to do which we can’t explain are said to be done subconsciously, by our subconscious mind. We only know the subconscious mind is there by this process of elimination: we can do it, but we are not aware of how we do it or sometimes that we are doing it at all.

For example, we can move, talk, and remember using our (whole) mind, but we can’t explain how we do these things because they are controlled subconsciously; the conscious mind just pulls the strings. Any explanations I might attempt of the underlying algorithms sound like they are at the puppeteer level: I tell my body to move, I use words to talk, I remember things by thinking about them. In short, I have no idea how I really do it. The explanations or understandings available to the conscious mind develop independently of the underlying subconscious algorithms. Our conscious understanding is based only on the information available to conscious awareness. While we are aware of much of the sensory data used by the brain, we have limited access to the subconscious processing performed on that data, and consequently limited access to the information it contains. What ends up happening is that we invent our own view of the world, our own way of understanding it, using only the information we can access through awareness and the subconscious and conscious skills that go with it. This means our whole understanding of the world (including ourselves) is woven out of information we derive from our awareness and not from the physical world itself, which we know only second-hand. Exactly like a sculptor, we build a model of the world, similar to the original in as many ways as we can manage, but at all times a representation and not the real thing. While we evolved to develop this kind of understanding, it depends heavily on the memories we record over our lifetimes (some consciously accessible and some not). As the mind develops from infancy, it acquires information from feedback that it can put to use, and it thinks of this information as “knowledge” because it works, i.e. it helps us to predict and consequently to control. To us, it seems that the mind has a hotline to reality. Actually, though, the knowledge is entirely contextual within the mind, not reality itself but only representative of it. But through that representation the contexts or models of the conscious mind arise: the conscious mind has no choice but to believe in itself, because that is all it has.

Speaking broadly, subconscious algorithms perform specialized informational tasks like moving a limb, remembering a word, seeing a shape, and constructing a phrase. Consciously, we don’t know how they do it. Conscious algorithms do more generalized tasks, like thinking of ways to find food or making and explaining plans. We know how we do these things because we think them through. Conscious algorithms provide centralized, coordinated control of subconscious (and other conscious) algorithms. Only the top layer of centralized control is done consciously; much can be done subconsciously. For example, all our habitual behavior starts under conscious development and is then delegated to the subconscious going forward. As the central controller, though, the buck stops with the conscious mind; it is responsible for reviewing and approving, or, in the case of habitual behavior, preapproving, all decisions. Some recent studies impugn this decisive capacity of the conscious mind with evidence that we make decisions before we are consciously aware that we have done so.1 But that doesn’t undermine the role of consciousness; it just demonstrates that to operate with speed and efficiency we can preapprove behaviors. Ideally, the conscious mind can make each sort of decision just once and self-program to reapply that decision as needed going forward without having to repeat the analysis. It is like a CEO who never pulls the trigger himself but has others do it for him, while continually monitoring to see that things are done right.
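A sketch of this preapproval arrangement (all situations and actions hypothetical): the deliberate path runs once, installs a habit, and the fast path reuses it without repeating the analysis:

```python
# Preapproval: conscious deliberation runs once and caches its decision as a
# habit; subsequent encounters take the fast, subconscious path.

habits = {}   # situation -> preapproved action

def deliberate(situation):
    """Slow, effortful analysis; runs only when no habit exists yet."""
    action = "step over" if "obstacle" in situation else "keep walking"
    habits[situation] = action          # preapprove for next time
    return action

def act(situation):
    if situation in habits:
        return habits[situation]        # habitual: no deliberation needed
    return deliberate(situation)        # novel: escalate to consciousness

print(act("obstacle on path"))   # deliberated, then cached
print(act("obstacle on path"))   # now habitual, instant
```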

I thus conclude that the conscious mind is a subprocess of the mind that exists to make decisions, that it does so using perspectives called knowledge that are only meaningful locally (i.e. in the context of the information under its management), and that these contexts are distilled from information fed to it by subconscious processes. The conscious mind is separate from the subconscious mind for practical reasons. The algorithmic details of subconscious tasks are not relevant to centralized control. We subconsciously metabolize, pump blood, breathe, blink, balance, hear, see, move, etc. We have conscious awareness of these things only to the degree we need it to make decisions. For example, we can’t control metabolism and heartbeat (at least without biofeedback), and we consequently have no conscious awareness of them. Similarly, we don’t control what we recognize. Once we recognize something, we can’t see it as something else (unless an alternate recognition occurs). But we need to be aware of what we recognize because it affects our decisions. We breathe and blink automatically, but we are also aware we are doing it so we can sometimes consciously override it. So the constant stream of information from the subconscious mind that flows past our conscious awareness is just the set we need for high-level decisions. The conscious mind is unaware of how the subconscious does these things because this extraneous information would overly complicate its task, slowing it down and probably compromising its ability to lead. We subjectively know the limits of our conscious reach, and we can also see evidence of all the things our brains must be doing for us subconsciously. I suspect this separation extends to the whole animal kingdom, nearly all of which consists of bilateral animals with one brain. Octopuses are arguably an exception, as they have separate brains for each arm, but the central octopus brain must still have some measure of high-level control over them, perhaps in the form of an awareness similar to our consciousness. Whether each arm also has some degree of consciousness is an open question.2 Although a separate consciousness process is not the only possible solution to centralized control, it does appear to be the solution evolution has favored, so I will take it as my working assumption going forward.

One can further subdivide the subconscious mind along functional lines into what are called modules, which are specialized functions that also seem to have specialized physical areas of the brain that support them. Steven Pinker puts it this way:

The mind is what the brain does; specifically, the brain processes information, and thinking is a kind of computation. The mind is organized into modules or mental organs, each with a specialized design that makes it an expert in one arena of interaction with the world. 3
The mind is a set of modules, but the modules are not encapsulated boxes or circumscribed swatches on the surface of the brain. The organization of our mental modules comes from our genetic program, but that does not mean that there is a gene for every trait or that learning is less important than we used to think.4

Positing that the mind has modules doesn’t tell us what they are or how they work. Machines are traditionally constructed from parts that serve specific purposes, but design refinements (e.g. for miniaturization) can lead to a streamlining of parts that are fewer in number but holistically serve more functions. Having been streamlined by countless generations, the modules of the mind can’t be distinguished along functional boundaries as easily as the other parts of the body because they all perform information management in a highly collaborative way. But if we accept that any divisions we make are preliminary, we can get on with it without getting too caught up in the details. Drawing such lines is reverse engineering: evolution engineered us, and explaining what it did is reverse engineering. Ideally one learns enough from reverse engineering to build a duplicate mechanism from scratch. But living things were “designed” through trillions of small interactions spread over billions of years. We can’t identify those interactions individually, and in any event, natural selection doesn’t select for individual traits but for entire organisms, so even with all the data one would be hard-pressed to be sure what caused what. However, if one generalizes, that is, if one applies statistical reasoning, one can distinguish the functional advantages of one trait over another. And considering that all knowledge and understanding are the product of such generalizing, it is a reasonable strategy. Again, it is not the objective of knowledge to describe things “as they are,” only to create models or perspectives that abstract or generalize certain features. So we can and should try to subdivide the mind into modules and guess how they interact, with the understanding that there is more than one way to skin this cat and that greater clarity will come with time.

Subdividing the mind into consciousness and a number of subconscious components will do much to elucidate how the mind provides its centralized control function, but the next most critical aspect to consider is how it manages information. Information derives from the analysis of data, the separation of useful data (the wheat) from noisy data (the chaff). Our bodies use at least two physical mechanisms to record information: genes and memory. Genes are nature’s official book of record, and many mental functions have extensive instinctive support encoded by genes. We have fully sequenced our genome and have identified some functions of some genes. Genes either code for proteins or they help or regulate those that do. Their function can be viewed narrowly as a biochemical role or more broadly as the benefit conferred on the organism. We are still a long way from connecting the genes to their biochemical roles, and further still from connecting them to benefits. Even with good explanations for everything, questions will always remain, because billions of years of subtlety are coded into genes, and models for understanding invariably generalize that subtlety away.

Memory is an organism’s book of record, responsible for preserving any information it gleans from experience, a process also called learning. We don’t yet understand the neurochemical basis of memory, though we have identified some of the chemicals and pathways involved. Nurture (experience) is often steered by nature (instinct) to develop memory. Some of our instinctive skills work automatically without memory but must leverage memory for us to achieve mastery of a learned behavior. We are naturally inclined to learn to walk and talk but are born with no memory of steps or words. So we follow our genetic inclinations, and through practice we record models in memory that help us perform the behaviors reliably.

Genes and memory store information of completely incompatible types and formats. Genetic information encodes chemical structures (either mRNA or proteins) which translate to function mostly through proteins and gene regulation. Memory encodes objects, events and other generalizations which translate to function through indirection, mostly by correlating memory with reality. Genetic information is physical and is mechanically translated to function. Remembered information is mental and is indirectly or abstractly translated to function. While both ultimately get the job done, the mind starts out with no memory as a tabula rasa (blank slate) and assembles and accumulates memory as a byproduct of cogitation. Many algorithmic skills, like vision processing, are genetically prewired, but on-the-job training leverages memory (e.g. recognition of specific objects). In summary, genes carry information that travels across generations while memory carries information transient to the individual.

I mentioned before that culture is another reservoir of information, but it doesn’t use an additional biological mechanism. While culture depends heavily on our genetic nature, significantly on language, we reserve the word culture for the additions we make beyond our nature and ourselves. Language is an innate skill; a group of children with no language can create a complete vocabulary and grammar themselves in a few years. Therefore, cultural information is not stored in genes but only in memory, and it is also stored in artifacts as a form of external memory. Each of us forms a unique set of memories based on our own experience and our exposure to culture. What an apple is to each of us is a unique derivation of our lifetime exposure to apples, but we all share general ideas (knowledge) about what one can do with apples. We create memories of our experiences using feedback we ourselves collect. Our memory of culture, on the other hand, is partially based on our own experiences and partially on the underlying cultural information others created. Cultural institutions, technologies, customs, and artifacts have ancient roots and continually evolve. Culture extends our technological and psychological reach, providing new ways to control the world and understand our place in it. While cultural artifacts mediate much of the transmission of culture, most culture is acquired from direct interaction with other people via spoken language or other activities. Culture is just a thin veneer sitting on top of our individual memories, but it is the most salient part to us because it encodes so much of what we can share.

To summarize so far, we have conscious and subconscious minds that manage information using memory. The conscious mind is distinct from the subconscious as the point where relevant information is gathered for top-level centralized control. But why are conscious minds aware? Couldn’t our top-level control process be unaware and zombie-like? No, it could not, and the analogy to zombies or robots reveals why. While we can imagine an automaton performing a task effectively without consciousness, as indeed some automated machines do, we also know that they lack the wherewithal to respond to unexpected circumstances. In other words, we expect zombies and robots to have rigid responses and to be slow or ineffective in novel situations. This intuition we have about them results from our belief that simple tasks can be automated, but very general tasks require generalized thinking, which in turn requires consciousness. I’m going to explain why this intuition is sound and not just a bias, and in the process we will see why the consciousness process must be aware of what it is doing.

I have so far described the consciousness process as being a distinct subprocess of the mind which is supplied just the information relevant to high-level decisions from a number of subconscious processes, many of them sensory but also memory, language, spatial processing, etc. Its task is to make high-level decisions as efficiently and efficaciously as possible. I can’t prove that this design is the only possible way of doing things, but it is the way the human mind is set up. And I have spoken in general about how knowledge in the mind is contextual and is not identical to reality but only representative of it. But now I am going to look closer at how that representative knowledge causes a mind to “believe in itself” and consequently become aware. It is because we create virtual worlds (called mental models, or models for short) in our heads that look the same as the outside world. We superimpose these on the physical world and correlate them so closely that we can usually ignore the distinction. But they could not be more different. One of them is out there, and the other in here. One exists only physically, the other only mentally (albeit with the help of a physical computational mechanism, the brain). One is detailed down to atoms and then quarks, while the other is a network of generalizations with limited detail, but extensive association. For this reason, a model can be thought of as a simplified, cartoon-like representation5 of physical reality. Within the model, one can do simple, logical operations on this abridged representation to make high-level decisions. Our minds are very handy with models; we mostly manage them subconsciously and can recognize them much the same way we recognize objects. We automatically fit the world to a constellation of models we manage subconsciously using model recognition.

So the approach consciousness uses to make top-level decisions is essentially to run simulations: it builds models that correlate well with physical conditions and then projects those models into the future to simulate what will happen. Consciousness includes models of future possibilities and models of current and past experiences as we observed them. We can’t remember the past as it actually was, only how we experienced it through our models. All our knowledge is relative to these models, which in turn relate indirectly to physical reality. But where does awareness fit in? Awareness is just the data managed by this process. We are aware of all the information relevant to top-level decisions because our conscious selves are this consciousness process in the brain. Not all the data within our awareness is treated equally. Since much more information is sensed and recognized than is needed for decisions, the data is funneled down further through an attention process that focuses on just select items in consciousness.6 As I noted before, we can apply our focusing power at will to anything within our conscious awareness to pull it into attention, but our subconscious attention process also continually identifies noteworthy stimuli for us to focus on, and it does so by “listening” for signals that stand out from the norm. We know from experience that although we are aware of a lot of peripheral sensory information and peripheral thoughts floating around in our heads at any given time, we can only actively think about one thing at a time, in what seems to us a train of thought where one thought follows another. This linear, plodding approach to top-level decision making ensures that the body will make just one coordinated action at a time, so that we don’t have to compete with ourselves like a committee every time we do something.
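The “listening for signals that stand out from the norm” can be sketched as a simple salience test. The numbers below are invented, and a real attention process would be vastly more sophisticated, but the shape of the computation is the same:

```python
# A toy salience filter: many signals are in awareness, but only those that
# deviate strongly from their usual range get pulled into attention.
import statistics

def salient(history, reading, threshold=3.0):
    """Flag a reading that stands out from its own norm."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # guard against zero spread
    return abs(reading - mean) / stdev > threshold

background_noise = [50, 52, 49, 51, 50, 48, 51]   # ordinary street sounds (dB)
print(salient(background_noise, 51))   # False: stays peripheral
print(salient(background_noise, 95))   # True: grabs the focus of attention
```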

Let’s think again about whether minds could be robotic. Self-driving cars, for example, are becoming increasingly capable of executing learned behaviors, and even of expanding their proficiency dynamically, without any need for awareness, consciousness, reasoning, or meaning. But even a very good learned behavior falls far short of the range of responses that animals need to compete in an evolutionary environment. Animals need a flexible ability to assess and react to situations in a general way, that is, by considering a wide range of past experience. The modeling approach I propose for consciousness can do that. If we programmed a robot to use this approach, it would both internally and externally behave as if it were aware of the data presented to it, which is wholly analogous to what we do. It will have been programmed with a consciousness process that considers access to data “awareness”. Could we conclude that it had actually become aware? I think we could, because it meets the logical requirements, although this doesn’t mean robotic awareness would be as rich an experience as our own. A lot goes into the richness of our experience from billions of years of tweaks, and that richness would take us a long time to replicate faithfully in artificial minds. But it is presumptuous of us to think that our awareness, which is entirely a product of data interpretation, is exclusive just because we are inclined to feel that way.

Let me talk for a moment about that richness of experience. How and why our sensory experiences (called qualia) feel the way they do is what David Chalmers has famously called the hard problem of consciousness. The problem is only hard if you are unwilling to see consciousness as a subroutine in the brain that is programmed to interpret data as feelings. It works exactly the way it does because it is the most effective way that has evolved to get bodies to take all the steps they need to survive. As will be discussed in the next section, qualia are an efficient way to direct data from many external channels simultaneously to the conscious mind. The channels and the attention process focus the relevant data, but the quality or feeling of the qualia results from subconscious influences the qualia exert. Taste and smell simplify chemical analyses into a kind of preference for the conscious mind. Color and sound can warn us of danger or calm us down. These qualia seem almost supernatural, but they actually just neatly package up associations in our minds so we will feel like doing the things that are best for us. Why do we have a first-person experience of them? Here, too, it is nothing special. First-person is just the name we give to this kind of processing. If we look at our own, or someone else’s, conscious process from a more third-person perspective, we can see that what sets it apart is just the flood of information from subconscious processes giving us a continuous stream of sensations and skills that we take for granted. First person just means being connected so intimately to such a computing device.

Now think about whether robots can be conscious. Self-driving cars use a specialized algorithm that consults millions of hours of driving experience to pick the most appropriate responses. These cars don’t reason out what might happen in different scenarios in a general way. Instead, they use all that experience to look up the right answer, more or less. They still use internal models for pedestrians, other cars, roads, etc., but once they have modeled the basic circumstances they just look up the best behavior rather than reasoning it out generally. As we start to build robots that need more flexibility, we may well design the equivalent of a conscious subprocess, i.e. a higher-level process that reasons with models. If we also use the approach of giving it qualia that color its preferences around its sensory inputs in preprogrammed (“subconscious”) ways to simplify the task at the conscious level, then we will have built a consciousness similar to our own. But while such a robot may technically meet my definition of consciousness, and may even sometimes fool people into thinking it is human (i.e. pass the Turing test), that alone won’t mean it experiences qualia anywhere near as rich as our own, because we have more qualia encoding more preferences in a highly interconnected and seamless way following billions of years of refinements. Brains and bodies are an impressive accomplishment. But they are ultimately just machines, and it is theoretically possible to build them from scratch, though not with the approaches to building we have today.

The Certainty Engine

The Certainty Engine: How Consciousness Arose to Drive Decisions Through Rationality

The mind’s organization as we experience it revolves around the notion of certainty. It is a certainty engine, designed to enable us to act with the full expectation of success. In other words, we don’t just act confidently because we are brash, but because we are certain. It is a surprising capacity, given that we know the future is unknowable: we can’t be certain about the future, and yet at the same time we feel certain. That feeling comes from two sources, one logical and one psychological.

Logically, we break the world down into chunks which follow rules of cause and effect. We gather these chunks and rules into mental models (models for short) where certainty is possible because we make the rules. When we think logically, we are using these models to think about the physical world, because logic, and cause and effect, only exist in the models; they exist mentally but not physically. Cause and effect are just illusions created by the way we describe things — very near and dear to our hearts — but not scientific realities. The universe follows its clockwork mechanism according to its design, and any attempt to explain what “caused” what after the fact is going to be a rationalization, which is not necessarily a bad thing, but it does necessarily mean simplifying down to an explanatory model in which cause and effect become meaningful concepts. Consequently, if something is true in a model, then it is a logical certainty in that model. We are aware on some level that our models are simplifications that won’t perfectly match the physical world, but on another level we are committed to our models because they are the world as we understand it.

Psychologically, it wouldn’t do for us to be too scared to ever act for fear of making a mistake, so once our confidence reaches a given threshold, we leap. Acting on some of our models will bring success, and acting on others failure, but most of our actions succeed. This is because most of our decisions are habitual and momentary, like putting one foot in front of the other. Yes, we know we could stub our toe on any step, and we have a model for that, but we rarely think about it. Instead, we delegate such decisions to our subconscious minds, which we trust both to avoid obstacles and to alert us to them as needed, meaning to the degree avoidance is more of a challenge than the subconscious is prepared to handle. For any decision more challenging than habit can handle, we try to predict what will happen, especially with regard to what actions we can take to change the outcome. In other words, we invoke models of cause and effect. These models stipulate that certain causes have certain effects, so the model renders certainty. If I go to the mailbox to get the mail and the mailman has come today, I am certain I will find today’s mail. Our plans fail when our models fail us: we didn’t model the situation well enough, either because there were things we didn’t know or because our conclusions were insufficiently justified. The real world is too complicated to model perfectly, but all that matters is that we model it well enough to produce predictions that are good enough to meet our goals. Our models simplify, so as to imply logical outcomes that are more likely than chance to come true. This property, which separates information from noise, is why we believe a model, which is to say we are psychologically prepared to trust the certainty we feel about the model enough to act on it and face the consequences.
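This threshold-crossing leap can be sketched in a few lines; the model and its probabilities are invented for illustration:

```python
# A model supplies an estimated chance of success; once the estimate clears a
# threshold, we act as if certain.

CONFIDENCE_TO_ACT = 0.9

mail_model = {
    ("mailman came", "weekday"): 0.99,     # near-certain within the model
    ("no sign of mailman", "weekday"): 0.5,
    ("mailman came", "holiday"): 0.1,      # the model knows holidays differ
}

def decide(evidence):
    confidence = mail_model.get(evidence, 0.0)
    return "walk to mailbox" if confidence >= CONFIDENCE_TO_ACT else "wait"

print(decide(("mailman came", "weekday")))        # confident enough: act
print(decide(("no sign of mailman", "weekday")))  # below threshold: wait
```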

I am going to examine how this kind of imagination arose, why it manifests in what we perceive as consciousness, and what it implies for how we should lead our lives.

To the fundamental question, “Why are we here?”, the short answer is that we are here to make decisions. The long answer will fill this book, but to elaborate some, we are physically (and mentally) here because our evolutionary strategy for survival has been successful. That mental strategy, for all but the most primitive of animals, includes being conscious with both awareness and free will, because those capacities help with making decisions, which translates to acting effectively. Decision-making involves capturing and processing information, and information is the patterns hiding in data. Brains use a wide variety of customized algorithms, some innate and some learned, to leverage these patterns to predict the future. To the extent these algorithms do not require consciousness I call them subrational. If all of them were subrational then there would be no need for subjective experience; animals could go about their business much like robots without any of the “inner life” which characterizes consciousness. But one of these talents, reasoning, mandates the existence of a subjective theater, an internal mental perspective which we call consciousness, that “presents” version(s) of the outside world to the mind for consideration as if they were the outside world. All but the simplest of animals need to achieve a measure of the certainty of which I have spoken and to do that they need to model worlds and map them to reality. This capacity is called rationality. It is a subset of the reasoning process, with the balance being our subrational innate talents, which proceed without such modeling (though some support it or leverage it). Rationality mandates consciousness, not as a side effect but because reasoning (which needs rationality) is just another way of describing what consciousness is. That is, our experience of consciousness is reasoning using the faculties we possess that help us do so.

At its heart, rationality is based on propositional logic, a well-developed discipline that consists of propositions and rules that apply to them. Propositions are built from concepts, which are references that can be about, represent, or stand for things, properties, and states of affairs. Philosophers call this “aboutness” that concepts possess “intentionality”, and divide mental states into those that are intentional and those that are merely conscious, i.e. feelings, sensations, and experiences in our awareness1. To avoid confusion and ambiguity, I will henceforth simply call intentional states “concepts” and conscious states “awareness”. Logic alone doesn’t make rationality useful; concepts and conclusions have to connect back to the real world. To accomplish this, they are built on an extensive subrational infrastructure, and understanding that infrastructure is a big part of understanding how the mind works.

So let’s look closer at the attendant features of consciousness and how they contribute to rationality. Steven Pinker distinguishes four “main features of consciousness — sensory awareness, focal attention, emotional coloring, and the will.”2 The first three of these are subrational skills and the last is rational. Let’s focus on subrational skills for now, and we will get to the rational will, being the mind’s control center, further down. The mind also has many more kinds of subrational skills, sometimes called modules. I won’t focus too much on exact boundaries or roles of modules as that is inherently debatable, but I will call out a number of abilities as being modular. Subrational skills are processed subconsciously, so we don’t consciously sense how they work; they appear to work magically to us. We do have considerable conscious awareness and sometimes control over these subrational skills, so I don’t simply call them “subconscious”. I am going to briefly discuss our primary subrational skills.

First, though, let me more formally introduce the idea of the mind as a computational engine. The idea that computation underlies thinking goes back nearly 400 years to Thomas Hobbes, who said “by reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract.” Alan Turing and Claude Shannon developed theories of computability and information from the 1930’s to the 1950’s that led to Hilary Putnam formalizing the Computational Theory of Mind (CTM) in 1961. As Wikipedia puts it, “The computational theory of mind holds that the mind is a computation that arises from the brain acting as a computing machine, [e.g.] the brain is a computer and the mind is the result of the program that the brain runs”. This is not to suggest it is done digitally as our computers do it; brains use a highly customized blend of hardware (neurochemistry) and software (information encoded with neurochemistry). At this point we don’t know how it works except for some generalities at the top and some details at the bottom. Putnam himself abandoned CTM in the 1980’s, though in 2012 he resubscribed to a qualified version. I consider myself an advocate of CTM, provided it is interpreted from a functional standpoint. It does not matter how the underlying mechanism works; what matters is that it can manipulate information, which, as I have noted, does not consist of physical objects but of patterns that help predict the future. So nothing I am going to say in this book depends on how the brain works, though it will not be inconsistent with it either. While we have undoubtedly learned some fascinating things about the brain in recent years, none of it is proven, and in any case it is still much too small a fraction of the whole to support many conclusions. So I will speak of the information management done in the brain as being computational, but that doesn’t imply numerical computations; it only implies some mechanism that can manage information. I believe that because information is abstract, the full range of computation done in human minds could be done on digital computers. At the same time, different computing engines are better suited to different tasks because the time it takes to compute can be considerable, so to perform well an engine must be finely tuned to its task. For some tasks, like math, digital computers are much better suited than human brains. For others, like assessing sensory input and making decisions that match that input against experience, they are worse (for now). Although we are a long way from being able to tune a computer to its tasks as well as our minds are tuned to our task of survival, computers don’t have to match us in all ways to be useful. Our bodies are efficient, mobile, self-sustaining and self-correcting units; computers don’t need to be any of those things to be useful, though it helps, and we are making improvements in these areas all the time.

So knowing that something is computed and knowing how it is done are very different things. We still have only vague ideas about the mechanisms, but we can deduce much about how the mind works just by knowing it is computational. We know the brain doesn’t use digital computing, but there are many approaches to information processing, and the brain leverages a number of them. Most of the deductions I will promote here center around the distinction between computations done consciously (and especially under conscious attention) and those done subconsciously. We know the brain performs much information processing of which we have no conscious awareness, including vision, associative memory lookup, language processing, and metabolic regulation, to name a few kinds. We know the subconscious uses massively parallel computing, as this is the only way such tasks could be completed quickly and thoroughly enough. Further, we know that the conscious mind largely feels like a single train of thought, though it can jump around a lot and can sense different kinds of things at the same time without difficulty.
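
The conscious/subconscious division just deduced can be pictured as an architecture. Here is a toy sketch, with the module names and outputs invented for illustration: many processes run in parallel, and their results funnel into one serial train of thought.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical subconscious "modules" (names and outputs invented).
def vision(scene): return ("vision", f"edges and shapes in {scene}")
def memory(scene): return ("memory", f"past experiences resembling {scene}")
def language(scene): return ("language", f"words describing {scene}")

def subconscious(scene):
    # Massively parallel: every module processes the same input at once.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda module: module(scene),
                             [vision, memory, language]))

def conscious_train_of_thought(scene):
    # Serial: one thought at a time, drawing on the parallel results.
    for source, finding in subconscious(scene):
        print(f"attending to {source}: {finding}")

conscious_train_of_thought("a crowded street")
```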

Looking at sensory awareness, we internally process sensory information into qualia (singular quale, pronounced kwol-ee), which is how each sense feels to us subjectively. This processing is a computation and the quale is a piece of data, nothing more, but we are wired to attach a special significance to it subjectively. We can think of the qualia as being data channels into our consciousness. Consciousness itself is a computational process that interprets the data from each of these channels in a different way, which we think of as a different kind of feeling, but which is really just data from a different channel. Beyond this raw feel we recognize shapes, smells, sounds, etc. via the subrational skills of recollection and recognition, which bring experiences and ideas we have filed away back to us based on their connections to other ideas or their characteristics. This information is fed through a memory data channel. Interestingly, the memory of qualia has some of the feel of first-hand qualia, but is not as “vivid” or “convincing”, though sometimes in dreams it can seem to be. This is consistent with the idea that our memory can hold some but not all of the information the data channels carried.

Two core subrational skills let us create and use concepts: generalizing and modeling. Generalization is the ability to recognize patterns and to group things, properties, and ideas into categories called concepts. I consider it the most important mental skill. Generalizations are abstractions, not of the physical world but about it. A concept is an internal reference to a generalization in our minds that lets us think about the generalization as a unit or “thing”. Rational thought in particular only works with concepts as building blocks, not with sensations or other kinds of preconceptual ideas. Modeling itself is a subrational skill that builds conceptual frameworks that are heavily supported by preconceptual data. We can take considerable conscious control of the modeling process, but still the “heavy lifting” is both subrational and subconscious, just something we have a knack for. This is not surprising; our minds make the work of being conscious seem very easy to us so that we can focus with relative ease on making top-level decisions.
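
As an illustration of generalization, here is a minimal prototype-based sketch; the features, numbers, and tolerance are all invented. A handful of instances is summarized into a concept that can then be used as a unit:

```python
# Generalization as prototype formation: a "concept" summarizes instances.
# Features and numbers are invented for illustration.

def learn_concept(instances):
    # Average each feature across instances to form a prototype.
    keys = instances[0].keys()
    return {k: sum(i[k] for i in instances) / len(instances) for k in keys}

def matches(concept, instance, tolerance=0.3):
    # An instance falls under the concept if it is near the prototype.
    return all(abs(concept[k] - instance[k]) <= tolerance for k in concept)

birds = [{"wings": 1.0, "flies": 1.0, "size": 0.2},
         {"wings": 1.0, "flies": 1.0, "size": 0.3}]
bird = learn_concept(birds)          # the concept, usable as a unit
print(matches(bird, {"wings": 1.0, "flies": 0.9, "size": 0.25}))  # True
```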

There are countless ways we could break down our many other subrational skills, with logical independence from each other and location in the brain being good ones. Harvard psychologist Howard Gardner identified eight types of independent “intelligences” in his 1983 book Frames of Mind: The Theory of Multiple Intelligences:3 musical, visual-spatial, verbal-linguistic, logical-mathematical, bodily, interpersonal, intrapersonal and naturalistic. MIT neuroscientist Nancy Kanwisher in 2014 identified specific brain regions that specialize in shapes, motion, tones, speech, places, our bodies, face recognition, language, theory of mind (thinking about what other people are thinking), and “difficult mental tasks”.4 As with qualia and memory, most of these skills interact with consciousness via their own kind of data channel.

Focus itself is a special subrational skill, the ability to weigh matters pressing on the mind for attention and then to give focus to those that it judges most important. Rather than providing an external data channel into consciousness, focus controls the data channel between conscious awareness and conscious attention. Focusing itself is subrational and so its inner workings are subconscious, but it appears to select the thoughts it sends to our attention by filtering out repetitive signals and calling attention to novel ones. We can only apply reasoning to thoughts under attention, though we can draw on our peripheral awareness of things out of focus to bring them into focus. While focus works automatically to bring interesting items to our attention, we have considerable conscious control to keep our attention on anything already there.
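
The filtering behavior described here can be sketched as habituation: repeated signals fade from attention while novel ones get through. A toy example, with an invented habituation threshold:

```python
from collections import Counter

# Focus as a novelty filter: repetitive signals habituate and drop out
# of attention, novel ones get through. The threshold is invented.

class Focus:
    def __init__(self, habituation=2):
        self.seen = Counter()
        self.habituation = habituation

    def filter(self, signals):
        attended = []
        for signal in signals:
            self.seen[signal] += 1
            if self.seen[signal] <= self.habituation:  # still novel
                attended.append(signal)
        return attended

focus = Focus()
print(focus.filter(["clock tick", "clock tick", "doorbell"]))
# everything is still novel, so all three reach attention
print(focus.filter(["clock tick", "clock tick", "doorbell"]))
# ['doorbell'] -- the ticking has habituated and is filtered out
```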

Drives are another special kind of subrational skill that can feed consciousness through data channels with qualia of their own. A drive is logically distinct from the other subrational skills in that it creates a psychological need, a “negative state of tension”, that persists until it is satisfied. Drives are a way of reducing psychological or physiological needs to abstractions that can be used to influence reasoning, that is, to motivate us:

A motive is classified as an “intervening variable” because it is said to reside within a person and “intervene” between a stimulus and a response. As such, an intervening variable cannot be directly observed, and therefore, must be indirectly observed by studying behavior.5

Just rationally thinking about the world using models or perspectives doesn’t by itself give us a preference for one behavior over another. Drives solve that problem. While some decisions, such as whether our heart should beat, are completely subconscious and don’t need motivation or drive, others are subconscious yet can be temporarily overridden consciously, like blinking and breathing. These can be called instinctive drives because we start to receive painful feedback if we stop blinking or breathing. Others, like hunger, require a conscious solution, but the solution is still clear: one has to eat. Emotions have no single response that can resolve them, but instead provide nuanced feedback that helps direct us to desirable objectives. Our emotional response is very context-sensitive in that it depends substantially on how we have rationally interpreted, modeled and weighed our circumstances. But the emotional response itself is not rational; it is beyond our conscious control. Since it depends on our rational evaluation of our circumstances, we can ameliorate it by reevaluating, but our emotions have access to our closely-held (“believed”) models and can’t be fooled by those we consider only hypothetically.
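
A drive’s “negative state of tension” can be sketched as a need level that accumulates until it crosses the threshold of feeling; the numbers and threshold below are invented for illustration:

```python
# A drive as a "negative state of tension": a need level that rises over
# time and motivates action once felt. Numbers are invented.

class Drive:
    def __init__(self, name, threshold=0.5):
        self.name, self.level, self.threshold = name, 0.0, threshold

    def build(self, amount):      # the physiological need accumulates
        self.level = min(1.0, self.level + amount)

    def tension(self):            # what consciousness feels, via its channel
        return max(0.0, self.level - self.threshold)

hunger = Drive("hunger")
for hour in range(6):
    hunger.build(0.15)
    if hunger.tension() > 0:
        print(f"hour {hour}: felt hunger motivates eating")
```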

We have more than just one drive (to survive) because our rational interactions with the world break down into many kinds of actions, including bodily functions, making a living, having a family, and social interactions.6 Emotions provide a way of encoding beneficial advice that can be applied by a subjective, i.e. conscious, mind that uses models to represent the world. In this way, drives can exert influence without simply forcing a prescribed instinctive response. And it is not just “advice”; emotions also insulate us from being “reasonable” in situations where rationality would hurt more than help. Our faces betray our emotions so others can trust us.7 Romantic love is a very useful subrational mechanism for binding us to one other person as an evolutionary strategy. It can become frustratingly out of sync with rational objectives, but it has to have a strong, irrational, even mad, pull on us if it is to work.8

Although our conscious awareness and attention exist to support rationality, this doesn’t mean people are rational beings. We are partly rational beings who are driven by emotions and other drives. Rather than simply prescribing the appropriate reaction, drives provide pros and cons, which allow us to balance our often conflicting drives against each other by reasoning out the consequences of various solutions. For any system of conflicting interests to persist in a stable way, it has to develop rules of fair play, or each interest will simply fight to the death, bringing the system down. Fair play, also known as ethics, translates to respect: interests should respect each other to avoid annihilation. This applies both to our own competing drives and to our interpersonal relationships. The question is, how much respect should one show, on a scale from me first to me last? Selfishness and cooperation have to be balanced in each system accordingly. The ethical choice is presumably one that produces a system that can survive for a long time. Living systems all embrace differing degrees of selfishness and cooperation, which bears this out. Since natural living systems have been around a long time, they can’t be unethical by this definition, so any selfishness they contain is justified by that fact. Human societies, on the other hand, may overbalance either selfishness or cooperation, leading to societies that fail, either by actually collapsing or by under-competing with other societies, which eventually leads to their replacement.

And so it is that our conscious awareness becomes populated with senses, memories, emotions, language, etc., which are then focused by our power of attention for the consideration of our power of reasoning. Of this Steven Pinker says:

The fourth feature of consciousness is the funneling of control to an executive process: something we experience as the self, the will, the “I.” The self has been under assault lately. The mind is a society of agents, according to the artificial intelligence pioneer Marvin Minsky. It’s a large collection of partly finished drafts, says Daniel Dennett, who adds, “It’s a mistake to look for the President in the Oval Office of the brain.”
The society of mind is a wonderful metaphor, and I will use it with gusto when explaining the emotions. But the theory can be taken too far if it outlaws any system in the brain charged with giving the reins or the floor to one of the agents at a time. The agents of the brain might very well be organized hierarchically into nested subroutines with a set of master decision rules, a computational demon or agent or good-kind-of-homunculus, sitting at the top of a chain of command. It would not be a ghost in the machine, just another set of if-then rules or a neural network that shunts control to the loudest, fastest or strongest agent one level down.9
The reason is as clear as the old Yiddish expression, “You can’t dance at two weddings with only one tuches.” No matter how many agents we have in our minds, we each have exactly one body.10

While it may only be Pinker’s fourth feature, it is the whole reason for consciousness. We have a measure of conscious awareness and control over our subrational skills only so that they can help with reasoning and thereby allow us to make decisions. This culmination in a single executive control process is a logical necessity given one body, but that it should be conscious or rational is not so much necessary as useful. Rationality is a far more effective way to navigate an uncertain world than habit or instinct. Perhaps we don’t need to create a model to put one foot in front of the other or chew a bite of food. But paths are uneven and food quality varies. By modeling everything in many degrees of detail and scope, we can reason out solutions better than the more limited heuristic approaches of subrational skills can. Reasoning brings power, but it can only work if the mind can manage multiple models and map them to and from the world, and that is a short description of what consciousness is. Consciousness is the awareness of our senses, the creation (modeling) of worlds based on them, and the combined application of rational and subrational skills to make decisions. Our decisions all have some degree of rational oversight, though we can, and do, grant our subrational skills (including learned behaviors) considerable free rein so we can focus our rational energies on the more novel aspects of our circumstances.

Putting the shoe on the other foot, could reasoning exist robotically without the inner life which characterizes consciousness? No, because what we think of as consciousness is mostly about running simulations on models we have created to derive implications and reactions, and measuring our success with sensory feedback. It would feel correct to us to label a robot doing those things as conscious, and it would be able to pass any test of consciousness we cared to devise. It, like us, would metaphorically have only one foot in reality while its larger sense of “self” would be conjecturing and tracking how those conjectures played out. For the conscious being, life is a game played in the head that somewhat incidentally requires good performance in the physical world. Of course, evolved minds must deliver excellent performance as only the fittest survive. A robot consciousness, on the other hand, could be given different drives to fit a different role.

To summarize, one can draw a line between conscious beings and those lacking consciousness by dividing thoughts into a conceptual layer and the support layers beneath it. In the conceptual layer, information has been generalized into packets called concepts, which are organized into models that gather together the logical relationships between concepts. The conceptual layer itself is an abstraction, but it connects back to the real world whenever we correlate our models with physical phenomena. This ability to correlate is another major subrational skill, though it can be considered a subset of our modeling ability. Beneath the conceptual layer are preconceptual layers or modules, which consist of both information and algorithms that capture patterns in ways that have proven useful. While the rational mind only sees the conceptual layer, some subrational modules use both preconceptual and conceptual data. Emotions are the most interesting example of a subrational skill that uses conceptual data: to arrive at an emotional reaction we have to reason out whether we should feel good or bad, and once we have done that, we experience the feeling so long as we believe the reasoning (though feelings will fade if their relevance does). Only if our underlying reasoning shifts will our feelings change. This can happen quickly if we discover a mistake, or slowly as our reasoned perspective evolves over time.
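
The two layers can be sketched as data structures, with concepts as generalized packets that point back into preconceptual data and models gathering the logical relationships between concepts. The structure and names here are invented for illustration:

```python
from dataclasses import dataclass, field

# Concepts point back into preconceptual (raw) data; models gather the
# logical relationships between concepts. All names invented.

@dataclass
class Concept:
    name: str
    preconceptual_refs: list  # pointers into raw sensory/statistical data

@dataclass
class Model:
    concepts: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)  # (subject, relation, object)

    def relate(self, a, rel, b):
        self.relations.append((a, rel, b))

bird = Concept("bird", ["feathered silhouette #1047", "wingbeat rhythm #88"])
flight = Concept("flight", ["motion trace #12"])

world = Model(concepts={"bird": bird, "flight": flight})
world.relate("bird", "is capable of", "flight")   # a conceptual-layer fact
print(world.relations)
```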

One can picture the mind then as a tree, where the part above the ground is the conceptual layer and the roots are the preconceptual layers. Leaves are akin to concepts and branches to models. Leaves and branches are connected to the roots and draw support from them. The above-ground, visible world is entirely rational, but reasoning without connecting back to the roots would be all form and no function. So, like a tree “feeling” its roots, our conscious awareness extends underground, anchoring our modeled creations back to the real world.

Key insights of my theory

The key insights as I see them:

1. Descartes was right about dualism.
2. We underappreciate the impact of evolution on the mind.
3. We underappreciate the computational nature of the mind.
4. Consciousness exists to facilitate reasoning.
5. Consciousness is a simplified, “cartoon”-like version of reality with its own feel.
6. Minds reason using models that represent possibilities.
7. Reasoning is fundamentally an objective process that manages truth and knowledge.
8. We really have free will.

Insight 1. Descartes was right about dualism – mind and body are separate kinds of substances. He made the false assumption that the mind is a substance in the physical sense, but then, he had no scientific basis for distinguishing the mental from the physical. We do have a basis now, but no one, so far as I can tell, has pointed it out as such. I will do so now. Mind and body, or, as I will refer to them, the mental (or ideal) and the physical, are not separate in the sense of being different physical substances, but in the sense of being different independent kinds of existence that don’t preclude each other, but can affect each other. The brain and everything it does has a physical aspect, but some of the things it does, e.g. relationships and ideas, have an ideal, or mental, aspect as well. The mechanics of how a brain thinks are physical, but the ideas it thinks, viewed abstractly, are mental. You could say the idea that 1+1=2 exists regardless of whether any brain thinks about it. So our experience of mind is physical, but to the degree that our minds use relationships and ideas as part of that experience (using a physical representation), those relationships and ideas are also mental (in that they have an abstract meaning). The brain leverages mental relationships analogously to the way life leverages chemicals that have different physical properties, except that mental relationships have no physical properties like chemicals do but instead impact the physical world through feedback and information processing as a series of physical events. As with chemicals, the net effect is that the complexity of the physical world increases.

Only abstract relationships count as mental, where “abstract” refers to the idea of indirect reference, which is a technique of using one thing to represent or refer to another. A physical system, like a brain or a computer, that implements such techniques has all sorts of physical limitations on the scope and power of those representations, but, like a Turing machine, any implementation capable of performing logical operations on arbitrary abstract relationships can in principle compute anything in the ideal world. In other words, there are no “mysterious” ideas beyond our comprehension, though some will exceed our practical capacity. The confusion between physical and mental that has dogged philosophy and science for centuries only continues because we have not been clearly differentiating the brain from what it does. The brain implements a biological computer physically, but what it does is represent relationships as ideas. Ideas are not dependent on the implementation, which is why an idea can be represented with words and shared by author and reader. The three forms – the idea in the author’s mind, the words, and the idea in the reader’s mind – are very different, but we know that they share important aspects.

All abstract relationships exist (ideally) whether any brain (or computer) thinks about them or not. So the imaginary world is a much broader space than the physical, if you will, as it essentially parameterizes possibility – thoughts are not locked down in all their specifics but generalize to a range of possibilities. Consider a computer program, which is a simple system that manipulates abstract relations. A program executing on a computer will go through a very real set of instructions and process specific data from inputs to outputs. But a program’s capability can be nearly infinite if it is capable of handling many kinds of inputs across a whole range of situations. The program “itself” doesn’t know this, but the programmer does. Our minds work like the programmer; they manage an immense range of possibilities. We see these possibilities in general terms, then add specifics (provide inputs) to make them more concrete, and ultimately a few of them are realized in the real world (i.e. match up to things there). In a very real sense, we live our lives principally in this world of possibilities and only secondarily in the physical world. I’m not speaking fancifully about daydreaming about dragons and unicorns, though we can do that, but about whether the mail has come yet or rain is likely. Whatever actually happens to us immediately becomes the past and doesn’t matter anymore, except in regards to how it will help us in the future. Of course, knowing that we have gotten the mail or that it has rained matters a lot to how we plan for the future, so we have to track the past to manage the future. But nostalgia for its own sake matters little, and so it is no big surprise that our memory of past events dissipates rather quickly (on the theory that our memory has evolved to intentionally “forget” information that would do more to distract effective decision making than to help it). My point, though, is that we continually imagine possibilities.
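
A tiny example of what I mean by a program parameterizing possibility: one short, finite definition covers an unbounded range of concrete inputs, and each call realizes just one of them. The function is hypothetical and purely illustrative:

```python
# A finite program text covering an unbounded space of possibilities:
# one short definition handles inputs its author never enumerated.

def change_due(price_cents: int, paid_cents: int) -> int:
    """Works for any of the infinitely many price/payment pairs."""
    if paid_cents < price_cents:
        raise ValueError("insufficient payment")
    return paid_cents - price_cents

# Each call realizes one concrete possibility from the general range.
print(change_due(499, 500))    # 1
print(change_due(1, 10**9))    # a case the author never considered
```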

Insight 2. We underappreciate the impact of evolution on the mind. Darwin certainly tried to address this. “How does consciousness commence?” Darwin wondered. It was, and is, a hard question to answer because we still lack any objective means of studying what the mind does (as the mind is only visible from within, subjectively). Pavlov and Skinner proposed that behaviorism could explain the mind as nothing more than conditioning, classical and operant, which sounded good at first but didn’t explain all that minds do. Chomsky refuted it in a rebuttal to Skinner’s Verbal Behavior by explaining how language acquisition leverages innate linguistic talents. And Piaget extended the list of innate cognitive skills by developing his staged theory of intellectual development. We now know that thinking is much more than conditioned behavior; it employs reasoning and subconscious know-how. But evolution tells us more still. The mind is the control center of the body, so the direct feedback from evolution is more on how the mind handles a situation than on the parts of the body it used to do it. The body is a multipurpose tool to help the mind satisfy its objectives. Mental evolution, therefore, leads somatic evolution. However, since we don’t understand the mechanics of the mind, we have done less to study the mind than the body, which is simply more tractable. Although understanding the full mechanics of mind is still a long way off, evolutionary psychologists can explain cognitive skills by looking at the selection pressures that created demand for them. Those explanations involve the evolution of both specialized and general-purpose software and hardware in the brain, with consciousness itself being the ultimate general-purpose coordinator of action.

Insight 3. We underappreciate the computational nature of the mind. As I noted in The Certainty Engine, what the mind does is computational, if computation is taken to be any information management process. But knowing that something is computed and knowing how it is done are very different things. We still have only vague ideas about the mechanisms, but we can deduce much about how the mind works just by knowing it is computational. We know the brain doesn’t use digital computing, but there are many approaches to information processing, and the brain leverages a number of them. Most of the deductions I will promote here center around the distinction between computations done consciously (and especially under conscious attention) and those done subconsciously. We know the brain performs much information processing of which we have no conscious awareness, including vision, associative memory lookup, language processing, and metabolic regulation, to name a few kinds. We know the subconscious uses massively parallel computing, as this is the only way such tasks could be completed quickly and broadly enough. Further, we know that the conscious mind largely feels like a single train of thought, though it can jump around a lot and can perceive many kinds of things at the same time without difficulty.

Insight 4. Consciousness exists to facilitate reasoning. Consciousness exists because we continually encounter situations beyond the range of our learned responses, and being able to reason out effective strategies works much better than not being able to. We can do a lot on “autopilot” through habit and learned behavior, but it is too limited to get us through the day. Most significantly, our overall top-level plan has to involve prioritizing many activities over short and long time frames, which learned behavior alone can’t do. Logic, inductive or deductive, can do it, but only if we come up with a way to interpret the world in terms of propositions composed of symbols. This is where a simplified, cartoon-like version of reality comes into play. To reason, we must separate relevant from irrelevant information, and then focus on the relevant to draw logical conclusions. So we reduce the flood of sensory inputs continually entering our brains into a set of discrete objects we can represent as symbols we can use in logical formulas (here I don’t mean written symbols but referential concepts we can keep in mind). The idea that hypothetical internal cognitive symbols represent external reality is called the Representational Theory of Mind (RTM), and in my view it is the critical simplification employed by reasoning, but it is not critical to much of subconscious processing, which does not have this need to simplify. Although we can generalize to kinds using logical buckets like bird or robin, we can also track all experience and draw statistical inferences without any attempt at representation at all, yielding bird-like or robin-like without any actual categories.
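
The contrast between logical buckets and category-free statistical inference can be sketched as follows; the features and exemplars are invented. The first route yields a discrete symbol reasoning can use in a proposition, the second only a graded “bird-like” score:

```python
# Two routes to "that's a bird": a symbolic category test versus a
# category-free similarity score. Features and exemplars are invented.

exemplars = [  # remembered experiences, never generalized into a category
    {"wings": 1.0, "sings": 1.0, "size": 0.2},   # something robin-ish
    {"wings": 1.0, "sings": 0.3, "size": 0.5},   # something crow-ish
]

def symbolic_is_bird(x):
    # Discrete bucket: in or out, usable as a proposition in logic.
    return x["wings"] == 1.0 and x["size"] < 1.0

def bird_likeness(x):
    # Statistical: graded resemblance to stored experience, no category.
    def sim(a, b):
        return 1 - sum(abs(a[k] - b[k]) for k in a) / len(a)
    return max(sim(x, e) for e in exemplars)

seen = {"wings": 1.0, "sings": 0.8, "size": 0.25}
print(symbolic_is_bird(seen))   # True   -> a symbol reason can use
print(bird_likeness(seen))      # ~0.92  -> merely "bird-like"
```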

Do we reason using language? Are these symbols words and are the formulas sentences? There is a debate about whether the mind reasons directly in our natural language (e.g. English) or in an internal language, sometimes called “mentalese”. Both views are partially right but mostly wrong; the confusion comes from failing to appreciate the difference between the conscious and subconscious minds. Language is part of the simplified world of consciousness that tries to turn a gray world into something more black and white that reason can attack (while also, not incidentally, aiding communication). From the conscious side, language-assisted reasoning is done entirely in natural language. We are also capable of reasoning without language, and much of the time we do, but language is not just an add-on capability; it is what pushes human reasoning power into high gear. Animals have had solid reasoning skills (and consequently consciousness) for hundreds of millions of years, so that they could apply cause and effect in ways that matter to them, creating a subjective version of the laws of nature and the jungle. But without language animals can only follow simple chains of reasoning. Language, which evolved in just a few million years, lengthens those chains and adds nesting of concepts. It gives us the ability to reason in a directed way over an arbitrarily abstract terrain. Without inner speech, the familiar internal monologue of our native tongue, we can’t ponder or scheme; we can only manage simple tasks. Sure, we can keep schemes in our heads without further use of language, but language so greatly facilitates rearranging ideas that we can’t develop abstract ideas very far without it. Helen Keller could remember her languageless existence and claimed to be a non-thinking entity during that time. By “thinking” I believe she only meant directed abstract reasoning and all the higher categories of thought that brings to mind.

I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect.1

She admits to consciousness, which I argue exists because animals need to reason, yet she also felt a kind of unconsciousness, in that large parts of her mind were absent, notably intellect (directed abstract reasoning) and will (abstract awareness of desire). So our ability to string ideas together with language vastly extends and generalizes our cognitive reach, accounting for what we think of as human intelligence.

While we could call the part of the subconscious that supports language mentalese, I wouldn’t recommend it, because this support system is not itself a language. This massively parallel process can’t be thought of as just a string of symbols; each word and phrase branches out into a massive web of interconnections in a deeply multidimensional space that joins back to the rest of the mind. It supports Universal Grammar (UG), the top-down set of language rules laid out by Noam Chomsky, not because it is an internal language but because it has to simplify its output into a form consciousness can use. So natural language is the outer layer of the onion, but it is the only layer we can consciously access, so it is fair to say we consciously reason with the help of natural language, even though we can also do simple reasoning without language. While the part of language-assisted reasoning of which we are consciously aware is entirely conducted in natural language, it is only partially right to say we think in our native tongue because most of the actual work behind that reasoning happens subconsciously. And the subconscious part doing most of the work is not using an internal language at all, though it does use innate mechanisms that support features common to all languages.

So what about linguistic determinism, aka the Sapir-Whorf hypothesis, which states that the structure of a natural language determines or greatly influences the modes of thought and behavior characteristic of the culture in which it is spoken? As with all nature/nurture debates, it is some of each, but with the lion’s share being nature. Natural language is just a veneer on our deep subconscious language processing capacity, but both develop through practice and what we are practicing is the native language our society developed over time. The important point, though, is that thinking is only superficially conscious and consequently only superficially linguistic, and hence only marginally linguistically determined. Words do matter, as do the kinds of verbal constructions we use, so to the extent we guide our thinking process linguistically with inner speech they have an influence. But language is only a high-level organizer of ideas, not the source of meaning, so it does not ultimately constrain us, even though it can sometimes steer us. We can coin new words and idioms, and phase out those that no longer serve as well. So again, just to clarify: while a digital computer can parse language and store words and phrases, this doesn’t even scratch the surface of the deep language processing our subconscious does for us. It is only a false impression of consciousness that the flow of words through our minds reveals anything about the information processing steps that we perform when we understand things or reason with language.

Insight 5. Consciousness is a simplified, “cartoon”-like version of reality with its own feel. We are not zombies or robots, pursuing our tasks with no inner life. Consciousness feels the way it does because the overall mind, which also includes considerable subconscious processing of which we are not consciously aware, cordons off conscious access to subconscious processing not deemed relevant to the role of consciousness. The fact that logic only works in a serial way, with propositions implying conclusions, and the fact that bodies can only do one thing at a time, put information management constraints on the mind that consciousness solves. To develop propositions on which one can apply logic that can be useful in making a decision, one has to generalize commonly encountered phenomena into categories about which one can track logical implications. So we simplify the flood of sensory data into a handful of concrete objects about which we reason. These items of reason, generically called concepts, are internal representations of the mind that can be thought of as pointers to the information comprising them. Our concept of a rock bestows a fixed physical shape, while a quantity of water has a fluid shape. The concept sharp refers to a capacity to cut, which is associated with certain physical traits. Freedom and French are abstract concepts only indirectly connected to the physical world, about which we each have acquired very detailed, personal internal representations. Consciousness is a special-purpose “application” (or subroutine) within the mind that focuses on the concepts most relevant to current circumstances and applies reason along with habit, learning and intuition to direct the body to take actions one at a time. The only real role of consciousness is to manage this top-level single-stream logic processing, so it doesn’t need to be aware of, and would only be distracted by, the details that the subconscious takes care of, including sensory processing, memory lookup/recognition, language processing and more. Consciousness needs access to all incoming information upon which it can be useful to apply reason. To do this in real time, the mind preprocesses concepts subconsciously where possible, which is often little more than a memory lookup service, but also includes converting 2-D images into known 3-D objects or converting concepts into linguistic form. We bypass concepts and reason entirely whenever habit, experience and intuition can manage alone, but we do so with conscious oversight. Consciousness needs to act continuously and “enthusiastically”, so it is pre-configured to pursue innate desires, and can develop custom desires as well.

I call the consciousness subroutine the SSSS, for single-stream step selection, because objectively that is what it is for: selecting the one coordinated action for the body to perform next. Our whole subjective world of experience is just the way the SSSS works, and its first-person aspect is just a consequence of the simplification of the world necessary to support reason, combined with all the data sources (senses, memory, emotion) that can help in making decisions. Our subjective perspective is only figuratively a projection or a cartoon; it is actually composed of a combination of nonrepresentational data that statistically correlates information and representational data that represents both the real and the imagined symbolically through concepts. This perspective evolved over millions of years, since the dawn of animal minds. Though reasoning ultimately leads to a single stream of digital decisions (ones that go one way or another), nothing constrains it from using analog or digital inputs or parallel processing along the way, and it does all these things and more to optimize performance. Conscious experience is consequently a combination of many things happening at once, which only feel like a seamless integrated experience because it would be very nonadaptive if they didn’t. For instance, we perceive a steady, complete field of vision as if it were a photograph because it would be distracting if we didn’t, but actually our eyes are only focused on a narrow circle of central vision, the periphery is a blur, and our eyes dart around a lot filling in holes and double-checking. The blind spots in our peripheral vision (which form where the optic nerve passes through the retina) appear to have the same color and even pattern as the area around them because it would be distracting if they disturbed the approximation to a photograph. So the software of consciousness tries very hard to create a smooth and seamless experience out of something much more chaotic. It is an intentional illusion. It seems like we see a photo, but as we recognize objects we note the fact and start tracking them separately from the background. We can automatically track their shading and lighting from different perspectives without even being aware we are doing it. Colors have distinct appearances both to provide more information we can use and to alert us to associations we have for each color.
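
A toy rendering of the SSSS: many data channels arrive at once, and exactly one coordinated action is selected per step because there is only one body. The channel contents and urgency weights are invented for illustration:

```python
# Single-stream step selection: parallel channels in, one action out.
# Channel contents and urgencies are invented.

def gather_channels():
    # Parallel inputs: senses, memory, emotion, each with an urgency.
    return {"vision: car approaching": 0.9,
            "memory: late for meeting": 0.6,
            "emotion: hunger pang":     0.4}

def select_step(channels):
    # The single stream: exactly one coordinated action at a time.
    return max(channels, key=channels.get)

for step in range(3):
    channels = gather_channels()
    print(f"step {step}: acting on -> {select_step(channels)}")
```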

While the only purpose of consciousness is to support reasoning, it carries this very rich subjective feel with it because that helps us make the best decisions very quickly. That it seems pleasurable or painful to us is in a way just a side effect of our internal controls that lead us to seek pleasure and avoid pain. This is because consciousness simplifies decision making by reducing complex situations into an emotional response or a preference. Such responses have no apparent rational basis, but presumably serve an adaptive purpose since we have them and evolved traits are always adaptive, at least originally (in the ancestral environment). We just respond emotionally or prefer things a certain way and then can reason starting with those feelings as propositions. Objectively, we can figure out why such responses could be adaptive. For example, hunger makes us want to eat, and, not coincidentally, eating fends off starvation. Libido makes us want sex, and reproduction perpetuates the species. Providing hunger and libido as axiomatic desires to the reasoning process eliminates the need to justify them on rational grounds. Is there a good reason why we should survive or produce offspring? Not really, but if we just happen to want to do things that have that outcome, the mandate of evolution is satisfied. Basically, if we don’t do it someone else will, and more than that, if we don’t do it better they will, in the long run, squeeze us out, so we had better want it pretty bad. So feelings and desires are critical to support reasoning, even though these premises are not based on reason themselves.

This perhaps explains why we feel emotions and desires, but it doesn’t explain why they feel just the way they do to us. This is both a matter of efficiency and logical necessity. From an efficiency standpoint, for an emotion or innate desire to serve its purpose we need to be able to process it immediately and appropriately, and simultaneously with all the other emotions, desires, sensory inputs, memories, and intuitions that apply in each moment. To accomplish this, all of these inputs have independent input channels into the conscious mind, and to help us tell them all apart, each has a distinct quale (the way it feels; the plural is qualia). From a logical necessity standpoint, for reasoning to work appropriately the quale should influence us directly and proportionally to the goal, independent of any internally processed factors. Our bodies require foods with appropriate levels of fat, carbohydrates, and protein, but subjectively we only know smell, taste and hunger (food labels notwithstanding). These senses and our innate preferences directly support reasoning where a detailed analysis (e.g. a list of calories, fat and protein) would not. Successful reproduction requires choosing a fit mate, ensuring that mates will stay together for a long time, and procreating. This gets simplified down to feelings of sex appeal, love, and libido. Based on any kind of subsidiary reasoning couples would never stay together; they need an irrational subconscious mandate, i.e. love.

Nihilists reject or disregard innate feelings and preferences, presumably on philosophical or rational grounds. While this is undoubtedly reasonable and consequently philosophical, we can’t change the way we are wired just by willing it so. We will want to heed our desires, i.e. to pursue happiness, although unlike other animals our facility with directed abstract thought gives us the freedom to reason our way out of it or, potentially, to reach any conclusion we can imagine. Evolution has done its best to keep our reasoning facility in thrall to our desires so that we focus more on surviving and procreating and less on contemplating our navels, but humans have developed a host of vices which can lead us astray, with technology creating new ones all the time. If vices represent a failure of our desires to keep us focused on activities beneficial to our survival, virtues oppose them by emphasizing desires or values that are a benefit, not just to ourselves but to our communities. We can consequently conclude that the meaning of life is to reject nihilism, because it is a pointless and vain attempt to supersede our programming, and to embrace virtuous hedonism as its opposite, to exemplify what reason and intelligence can add to life on earth.

Michael Graziano explains well how attention works within consciousness, but he says the motivation to simplify the world down to a single stream is that: “Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others.”2 His sentiment is right, but his reason is wrong; the brain has more than enough computational capacity to fully process all the parallel streams hitting the senses and all the streams of memory generated in response, but it does all this subconsciously. Massively parallel processing is the strength of the subconscious. The conscious subroutine is intentionally designed to produce a single stream of actions since there is only one body, and this is the motivation to simplify and focus attention and create a conscious “theater” of experience. A corollary key insight to my points on consciousness is that most of our minds are subconscious and contain specialized and generalized functions that do all the computationally-intensive stuff.

Insight 6. Minds reason using models that represent possibilities. The mind’s job is to control the body by analyzing current circumstances to compute an optimal response. No ordinary machine can do this; it requires a system that can collect information about the environment, compare it to stored information, and, based on matches, statistics, and rules of logical entailment, select an appropriate reaction. Matches and statistics are the primary drivers of associative memory, which not only helps us recognize objects on sight and remember information learned in the past given a few reminders, but also supports more general intuition about what is important or relevant to the present situation. While this information is useful, it is not predictive. Since the physical world follows pretty rigid laws of nature, it is possible to predict with near certainty what will happen under controlled circumstances, so an animal that had a way to do this would have a huge advantage over one that did not. Beyond that, once animals developed predictive powers, a mental arms race ensued to do it better. Nearly all animals can do it, and we do it best, but how?

The answer lies entirely in the words “controlled circumstances”. We create a mental model, which is an imaginary set of premises and rules. A physical model, in particular, contains physical objects and rules of cause and effect. Cause and effect is a way of describing laws of nature as they apply at the object level within a physical model. So gravity pulls objects down, object integrity holds objects together differently for each substance, and simple machines like ramps, levers and wheels can impact the effort required. And we recognize other animals as agents employing their own predictive strategies. Within a model, we can achieve certainty: rules that always apply and causes that always produce expected effects. The rules don’t have to be completely certain (deductive); they can be highly likely (inductive). But either way, they work, so once we decide to use a given model in a real-world situation, we can act quickly and effectively. And causal reasoning can be chained to solve complex puzzles. While we can control circumstances with models, the real world will never align precisely with an idealized model, so how we choose the models is as important as how we reason with them. Doubt will creep in if our results fall short of expectations, which can happen if we choose an inappropriate model, or if a model is appropriate but inadequately developed. For every situation, we select one or more models from a constellation of models, and we apply the rules and act with an appropriate degree of certainty based on our confidence in picking the model, its accuracy, and our ability to keep it aligned with reality.
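
A mental model of this kind can be sketched as premises plus cause-and-effect rules, with chained reasoning deriving everything the model makes certain. The scenario below is invented for illustration:

```python
# A mental model as premises plus cause-and-effect rules. Within the
# model, conclusions follow with certainty; chaining solves multi-step
# puzzles. The scenario is invented.

premises = {"lever pressed"}
rules = [  # (cause, effect) pairs: certain *within* the model
    ("lever pressed", "gate rises"),
    ("gate rises", "path is clear"),
    ("path is clear", "food reachable"),
]

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for cause, effect in rules:
            if cause in facts and effect not in facts:
                facts.add(effect)
                changed = True
    return facts

print(forward_chain(set(premises), rules))
# {'lever pressed', 'gate rises', 'path is clear', 'food reachable'}
```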

Mental models are mostly subrational. A full definition of subrational will be presented later in Concepts and Models, but for now think of it as a superset of everything subconscious plus everything in our conscious awareness that is not a direct object of reasoning. Models themselves need not be based in reason and need not enter into the focus of conscious reasoning. We can reason with models supported entirely by hunches, but we can also, if desired, take a step back mentally and use reason to list the premises, rules, and scope of a model we have been using implicitly up to that point. However, as we will see in the next insight, doing this only rationalizes the subrational, which is to say it provides another way of looking at it that is not necessarily better or even right (to the extent an interpretation of something can be said to be right or wrong).

Insight 7. Reasoning is fundamentally an objective process that manages truth and knowledge. Objective principally means without bias and agreeable to anyone based on a preponderance of the evidence, and subjective is everything else, namely that which is biased or not agreeable to everyone. Science tries to achieve objectivity by using instruments for measurements, checking results independently, and using peer review to establish a level of agreement. While these are great ways to eliminate bias and foster agreement, we have no instruments for seeing thoughts or checking them: all our thoughts are inherently subjective. This is an obstacle to an objective understanding of the mind. Conventionally, science deals with this by giving up: all evidence of our own thought processes is considered inadmissible, and science consequently has nothing to say. Consider this standard view of introspection:

“Cognitive neuroscientists generally believe that objective data is the only reliable kind of evidence, and they will tend to consider subjective reports as secondary or to disregard them completely. For conscious mental events, however, this approach seems futile: Subjective consciousness cannot be observed ‘from the outside’ with traditional objective means. Accordingly, some will argue, we are left with the challenge to make use of subjective reports within the framework of experimental psychology.”3

This wasn’t always so. The father of modern psychology, Wilhelm Wundt, was a staunch supporter of introspection, which is the subjective observation of one’s own experience. But its dubious objectivity caught up with it, and in 1912 Knight Dunlap published an article called “The Case Against Introspection” that pointed out that no evidence supports the idea that we can observe the mechanisms of the mind with the mind. I agree, we can’t. In fact, I propose the SSSS process supporting consciousness filters our awareness to include only the elements useful to us in making decisions and consequently blocks our conscious access to the underlying mechanisms. So we don’t realize from introspection that we are machines, albeit fancy ones. But we have figured it out scientifically, using evolutionary psychology, the computational theory of mind, and other approaches within cognitive science.

The limitations of introspection don’t make it useless from an objective standpoint; they only mean we need to interpret it in light of objective knowledge. So we can, for example, postulate an objective basis for desires and then test them for consistency using introspection. We should eventually be able to eliminate introspection from the picture, but most of our understanding of consciousness and the SSSS at this point comes from our use of it, so we can’t ignore what we can glean from that.

While our whole experience is subjective, because we are the subject, that doesn’t mean a subset of what we know isn’t objective. We do know some things objectively, and we know we know them because we are using proven models. And we know the degree of doubt we should have in correlating these models to reality because we have used them many times and seen the results. It is usually more important for consciousness to commit to actions without doubt than to suffer from analysis paralysis, though of course for complex decisions we apply more conscious reasoning as appropriate.

We have many general-purpose models we trust, and we generally know how our models match up with those other people use and how much other people trust them. Since objectivity is a property of what other people think, i.e. agreeable to all and not subjective, we need to have a good idea of what models we are using and the degree to which other people use the same models (i.e. similar models; we each instantiate models differently). If our models are subrational, how can we ever achieve this? For the most part, it is done through an innate talent called theory of mind:

Theory of mind (often abbreviated ToM) is the ability to attribute mental states — beliefs, intents, desires, pretending, knowledge, etc. — to oneself and others and to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own.

So we subrationally form models and can intuit much about the models of others using this subrational and largely subconscious skill. Our subconscious mind automatically does things for us that would be too laborious to work out consciously. Consciousness evolved more to support quick reactions than to think things through, though humans have developed quite a knack for that. But whether these skills are subconscious, subrational, or rational, the rational conscious mind directs the subrational and subconscious and thus takes credit for the capacities of the whole mind. This gives us the ability to distinguish objective knowledge from subjective opinion. Consider this: when a witness is asked to recount events under oath, the jury is expecting to hear objective truth. They will assess the evidence they hear using theory of mind on the witness to understand his models and to rule out compromising factors such as mental capacity, partiality, memory or motives. People read each other very well, and it is hard to lie convincingly because we have subconscious tells that others can read, a mechanism that evolved to allow trust to develop despite our ability to lie. The jury expects the witness can record objective knowledge similarly to a video camera and that they can acquire this knowledge from him. Beyond juries, we know our capacities, partiality, memory, and motives well enough to know whether our own knowledge is objective (that is, agreeable to anyone based on a preponderance of the evidence). So we have been objective since long before scientific instruments, independent confirmation or peer review came along. Of course, we know that science can provide more precision and certainty using its methods than we can with ours, but science is more limited in the phenomena to which it applies. People have always understood the world around them to a high degree of objectivity that would stand up to considerable scrutiny, despite also having many personal opinions that would not. We tend not to give ourselves much credit for that because we are not often asked to separate the objective and subjective parts.

Objectivity does not mean certainty. Certainty only applies to tautologies, which are concepts of the ideal world, not the physical world. A tautology is a proposition that is necessarily true, or, put another way, is true by definition. If one sets up a model, sometimes called a possible world, in which one defines what is true, then within that possible world, everything that is true is necessarily true, by definition. So “a tiger is a cat” is certain if our model defines tiger as a kind of cat. Often, to verify whether or not one has a tautology, one has to clarify definitions, i.e. clarify the model. This process will identify rhetorical tautologies. If we say that the rules of logic apply in our models, and we generally will, then anything logically implied is also necessarily true and a logical tautology. The law of the excluded middle, for example, demonstrates a logical necessity by saying that “A or not A” is necessarily true. Or, more famously, the hypothetical syllogism says that “if A implies B and B implies C, then A implies C”. While certainty is therefore ideally possible, we can’t see into the future in the physical world, so physical certainty is impossible. We also can never be completely certain about the present or past because we depend on observation, which is both indirect and unprovable. So physical objectivity is not about true (i.e. ideal) certainty, but it does relate to it. By reasoning out the connection, we can learn the definition and value of physical objectivity.
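
Both logical tautologies named above can be checked mechanically by enumerating every assignment of truth values, which is one way of seeing that they hold in every possible world:

```python
from itertools import product

# Verifying "A or not A" and "if (A implies B) and (B implies C)
# then (A implies C)" by exhausting all truth-value assignments.

def implies(p, q):
    return (not p) or q

excluded_middle = all(a or not a for a in (True, False))

hypothetical_syllogism = all(
    implies(implies(a, b) and implies(b, c), implies(a, c))
    for a, b, c in product((True, False), repeat=3)
)

print(excluded_middle, hypothetical_syllogism)  # True True
```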

To be physically objective means to establish one or more ideally objective models and to correlate physical circumstances to these models. Physical objectivity is limited by the quality of that correlation, both in how well the inputs line up and in how closely the output behavior of the ideal model(s) ends up matching physical circumstances. The innate strategy of minds is to presume perfect correlation as a basis for action and to mop up mistakes afterward. Also, importantly, minds will try to maximally leverage learned behavior first and improvised behavior second. That is, intuitive/automatic first and reasoned/manual second. So, for example, I know how to open the window next to me as I have done it before. I feel certain I can apply that knowledge now to open the window. But my certainty is not really about whether the window will open; it is about the model in my mind that shows it opening when I flip the latch and apply pressure to raise it. In that model, it is a certainty, because these actions necessarily cause the window to open. If the window is stuck or even permanently epoxied, it doesn’t invalidate the model; it just means I used the wrong model. So if the window is stuck, how do I mop up from this mistake? The models we apply sit within a constellation of models, some of which we hold in reserve from past experience, some of which we have consciously available from premeditation, and some of which we won’t consciously work out until the need arises. For every model we have a confidence level, the degree to which we think it correlates to the situation at hand. This confidence level is mostly a function of associative memory, as we subrationally evaluate how well the model premises line up with the physical circumstances. In the case of habitual or learned behavior, we do this automatically. So if this window doesn’t open as I thought it would, from learned behavior I will push harder and use quick thrusts if needed to try to unjam it. Whether or not it works, I will update my internal model of the behavior of stuck windows accordingly, but in this case, I didn’t directly employ reasoning; I just leveraged learned skills. But the mind will rather seamlessly maintain both learned and reasoned approaches for handling situations. This means it will maintain models about everything, because it will frequently encounter situations where learned behavior is inadequate but reasoning with models that include causation works.
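
The window example can be sketched as a constellation of models with confidence levels: act on the best match, then mop up by demoting models that fail and reinforcing the one that works. All model names and numbers here are invented:

```python
# A constellation of models with confidence levels; act on the best
# match, then "mop up" by updating confidences. Numbers invented.

models = {
    "flip latch and lift":        0.95,  # learned: windows usually open
    "push harder, quick thrusts": 0.60,  # learned: stuck windows
    "reason it out (epoxied?)":   0.20,  # improvised, a last resort
}

def act(models):
    for model, confidence in sorted(models.items(),
                                    key=lambda kv: -kv[1]):
        print(f"trying '{model}' (confidence {confidence})")
        succeeded = model == "push harder, quick thrusts"  # stuck window
        if succeeded:
            models[model] = min(1.0, confidence + 0.05)   # reinforce
            return model
        models[model] = confidence * 0.5                  # mop up: demote
    return None

act(models)
```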

Just as we don’t generally need to separate the objective and subjective parts of our knowledge, we don’t generally need to separate learned behavior from reasoned behavior. It is important to the unity of our subjective experience that we perceive these very different things as fitting seamlessly together. But this introduces another factor that makes it hard to identify our internal models: learned behavior doesn’t need models or a representational approach, at least not of the simplified form used for logical analysis. We can, potentially, remember every detail of every experience and pull up relevant information exactly when needed in an appropriate way without any recourse to logic or reason at all. So what kind of existence do our ideal models have? While I think we do persist many aspects of many models in our memories as premises and rules, we tend not to be too hard and fast about them, and they blend into each other and into our overall memory in ways that let us leverage their strengths without dwelling on their weaknesses. Even as we start to reason with them, we only require a general sense that their premises and rules are sound and well defined, and if pressed we may learn they are not, at which point we will fill them out until they are as certain as we like. We can, therefore, conclude that while we use objectivity all the time, it is usually in close conjunction with subjectivity and inseparable from it. To be convincing, we need to develop ways to isolate objective and subjective components.


Insight 8. We really have free will. We already know (intuitively) that we have free will, so I shouldn’t take any credit for this one. But I will because a preponderance of the experts believe we don’t, which is a consequence of their physicalist perspective. Yes, the universe is deterministic and everything happens according to fixed laws of nature, so we are not somehow changing those laws in our heads to make nature unfold differently. What happens in our heads is in fact part of its orderly operation; we are machines that have been tuned to change the world in the ways that we do. So far, that suggests we don’t have free will but are just robots following genetic programming. But several things happen to create genuine freedom. Freedom can’t mean altering the future from a preordained course to a new one because the universe is deterministic and each moment follows the preceding according to fixed laws of nature. But since the universe has always been this way and we nevertheless feel like we have free will, freedom must mean something else.

Freedom really has to do with our conception of possible futures. We imagine the different outcomes from different possible courses of action. These are just imaginary constructions (models with projected consequences) with no physical correlate, other than the fact that they exist in our brains. But we think of them as being possible futures even though there is really only one future for us. So our sense of free will is rooted in the idea that what we do changes the universe from its predetermined course. We don’t, but two factors explain why our perspective is indistinguishable from a universe in which we could change the future: unpredictability and optimized selection. Regarding unpredictability, neither we nor anyone could ever know for sure what we are going to do; only an approximate prediction is possible. Although thinking follows the laws of nature, the process is both complex and chaotic, meaning that any factor, even the smallest, could change the outcome. So no decision, even the simplest, could ever be predicted with certainty. The second factor is optimized selection, which is a mental or computational process that uses an optimization algorithm to choose a strategy that has the best chance of producing a physical effect. First, the algorithm collects information, which is data that has more value for some purpose than white noise does. For example, sensory information is very valuable for assessing the current environment. And our innate preferences, experience, state of mind, and whim (which is a possibly unexplainable preference) are fed to the algorithm as well. This mishmash of inputs is weighed, and an optimal outcome results. If the optimal action seems insufficiently justified, we will pause or reconsider as long as it takes until the moment of sufficient justification arrives, and then we invariably perform that action. At that moment the time for free will has passed; the universe is just proceeding deterministically. We exercised our free will just before that moment, but before I explain why, I have a few more comments on unpredictability and optimization algorithms.
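
As a rough illustration of optimized selection, the following Python sketch weighs a mishmash of hypothetical inputs (sensory evidence, a standing preference, and whim) into a score for each action, and defers acting until one option clears a justification threshold. The evidence functions, weights, and threshold are all invented for the example.

```python
# A sketch of "optimized selection" under the stated assumptions: hypothetical
# inputs are weighed into a score per action, and an action is only taken once
# its justification clears a threshold.
import random

def optimized_selection(actions, inputs, threshold=0.65, max_deliberation=10):
    for _ in range(max_deliberation):
        scores = {}
        for action in actions:
            # Weigh the mishmash of inputs; the weights are illustrative.
            scores[action] = sum(w * evidence(action) for evidence, w in inputs)
        best = max(scores, key=scores.get)
        if scores[best] >= threshold:
            return best        # sufficiently justified: act
        # Not yet justified: pause, gather fresh evidence, reconsider.
    return None                # no action ever felt justified

# Invented evidence sources; 'whim' makes the outcome unpredictable from
# outside, even though every step is lawful.
sensory    = lambda a: 0.8 if a == "take umbrella" else 0.3   # dark clouds
preference = lambda a: 0.6                                    # hate getting wet
whim       = lambda a: random.random()                        # unexplainable

choice = optimized_selection(
    ["take umbrella", "leave umbrella"],
    [(sensory, 0.5), (preference, 0.3), (whim, 0.2)],
)
print("decision:", choice)
```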

The weather is unpredictable but lacks optimized selection because it is undirected. A robot trained to use learned behavior alone to choose strategies that produce desired effects has an optimized selection algorithm, but might be entirely predictable. If its behavior dynamically evolves based on live input data, then it may become unpredictable. Viewed externally, the robot might appear to have “free will” in that its behavior would be unpredictable and goal-oriented like that of a human. However, internally the human is thinking in terms of selecting from possible futures, while the robot is just looking up learned behaviors. People don’t depend solely on learned behavior; we also use reason to contemplate general implications of object interactions. To do this, we set up mental models and project how different object interactions might play out if different events transpired.

The real and deep upshot of this is that our concept of reality is not the same thing as physical reality. It is a much vaster realm that includes all the possible futures and all the models we have used in our lives. Our concept of reality is really the ideal world, in which the physical world is just a special case. The exercise of free will, being the decisions we take in the physical world, does represent a free selection from myriad possibilities. Because our models correlate so well to the real world, we come to think of them as being the same, but they aren’t. Free will exists in our minds, but not in our hypothetical robot minds, because our minds project possible futures. A robot programmed to do this would then have all the elements of free will we know, and further would be capable of intelligent reasoning and not just learned behavior. It could pass a Turing test where the questioner moved outside the range of the learned behavior of the robot. Once we build such a robot, we will need to start thinking about robot rights. Could an equally intelligent optimization algorithm be designed that did not use models (and consequently had no consciousness or free will)? Perhaps, but I can’t think of a way to do it.

So our brain’s algorithm made an unpredictable decision and acted on it. The real question is this: Why do “we” take credit for the free will of our optimization algorithms? Aren’t we just passively observing the algorithms execute? This is simply a matter of perspective. We “are” our modeling algorithms. Ultimately, we have to mean something when we talk about the real “us”. Broadly, we mean our bodies, inclusive of our minds, but more narrowly, when we are referring just to the part of us that makes the decisions, we mean those modeling algorithms. In the final analysis, we are just some nice algorithms. But that’s ok. Those algorithms harbor all the richness and complexity that we, as humans, can really handle anyway. They are enough for us, and we are very much evolved to feel and believe that they are enough for us. Objectively, they are a patchwork of different strategies held together with scotch tape and baling wire, but we don’t see them that way subjectively. Subjectively the world is a clean, clear picture where everything has its place and makes sense in one organic whole that seems fashioned by a divine creator in a state of sheer perfection. But subjectively we’re wearing rose-colored glasses, and darkly-tinted ones at that, because objectively things are very far from clean or perfect.

So that explains free will, the power to act in a way we can fairly call our own. To summarize, our brains behave deterministically, but we perceive the methods they use to do it as selections from a realm of possibilities, and we quite reasonably identify with those methods so that we take both credit and responsibility for the decisions. More significantly, while we were dealt a body and mind with certain strengths and an upbringing with certain cultural benefits, this still leaves a vast array of possible futures for our algorithms to choose from. Since nobody can exercise duress on us inside our own minds, this means that no other entity but the one we see as ourself can take credit or blame for any decision we make. Do we have to take responsibility for our actions or can we absolve ourselves as merely inheriting our bodies and minds? We do have to take credit and blame because running the optimization algorithms is an active role; abstaining would mean doing nothing, which is just a different choice. Note that this physical responsibility is not the same as moral responsibility. How our thoughts, choices, and actions stand up from a societal standpoint is another question outside the scope of this discussion. But physically, if we perform an action then it is a safe bet that we exercised free will to do it. The only alternate explanations are mind control or some kind of misinterpretation, e.g. that it looked like we pressed the button but actually we were asleep and our hand fell from the force of gravity.

Sometimes free will is defined as “the ability to choose between different possible courses of action”. This definition is actually tautological because the word “choose” is only meaningful if you understand and accept free will. To choose implies unpredictability, an optimization algorithm, consciousness, and ownership of consciousness. Our whole subjective vocabulary (subjective words include feel, see, imagine, hope) implies all sorts of internal mechanisms we can’t readily explain objectively. And we are so prone to intermingling subjective vocabulary with objective vocabulary that we are usually unaware we are doing it.

One more point about free will: my position is a compatibilist position, meaning that determinism and free will are compatible concepts. Free will doesn’t undermine determinism, it just combines unpredictability, optimization algorithms, and the conscious modeling of future possibilities to produce an effect that is indistinguishable from the actual future changing based on our actions.

 

A very brief overview of TDTM

TDTM, for Top-Down Theory of Mind, principally combines a new philosophical stance with two scientific theories:

1. Physicoidealism
2. The theory of evolution
3. The computational theory of mind (CTM)

Any scientific discussion must first define what exists, which is called an ontology or theory of being. I am proposing a new ontology for TDTM that I call physicoidealism. Physicalism is an ontological monism, which means it says just one kind of thing exists. Specifically, it asserts that only the physical world exists, consisting of space, matter, and energy. Idealism asserts that only the mental world exists, consisting of immaterial ideas. Physicoidealism is just the union of these two monisms, eliminating the “only” from each. As the brain is now known to reduce to purely physical phenomena, science has concluded that the ideal does not exist, but this is a bit preposterous considering science is built out of hypotheses, which are ideal. Math, ideas, models, and theories are all nonphysical constructions of the ideal world. Nothing about them precludes the physical in any way, but they are not physical. Yes, of course, our access to them is entirely mediated through physical mechanisms like the brain, computers, and books, but any given mathematical law exists (in an ideal sense) independently of any physical system that uses or refers to it. Scientists do try to divine the “actual” laws of nature, but we can never know if there are any as such. All we can do is create idealized, non-physical models that correlate pretty well with nature. So although we have some confidence “actual” laws of nature do exist since the universe behaves so consistently, we have no way to find them or prove that the laws we come up with are right.

The more adamant physicalists among you will by now be thinking that since reductionism implies that everything is physical, this means anything I am calling ideal is just a convenient fiction or illusion with no real substance. All ideas are fictions and illusions with no physical substance, but that doesn’t mean they can’t impact the physical world. Physical systems like minds and computers can use math and programs and ideas to affect the physical world. How these systems affect the world can only be understood through the ideal concepts of information and algorithm. No amount of study of the mechanics of the brain will ever reveal these important aspects of its programming. Programming is the key that unlocks the ideal world, where logic, mathematics, representations and ideas live. Programs represent possibilities; they use one kind of simplified representation or another to describe bounded or unbounded sets of possibilities, and they describe logical operations that can be used to generate a limited or unlimited set of outputs given any inputs. We can discuss an abstract idea, like a pencil or a cat, independent of any physical implementation and inclusive of possibilities both bounded and unbounded. As Sean Carroll writes:1 “Does baseball exist? It’s nowhere to be found in the Standard Model of particle physics. But any definition of “exist” that can’t find room for baseball seems overly narrow to me.” Me too. Also, to be useful, programs must ultimately correlate back to reality, tie references back to referents, and allow us to change reality. The mind (and other programs) can do that. So minds add to the physical world a capability it lacked before, a capability that could only seem like magic to inanimate matter or even plants: the power to predict the future by dividing nature into causes and effects, and thus develop strategies and then act on them. It seems a bit ironic that we consider fortune tellers to be charlatans considering the purpose of minds is to “see” the future so as to better control it. Of course, the limitation is that minds never have certain knowledge of the future, but they are chock full of very good guesses.

Another question that tends to come up about now is “what about determinism?” Physicalism says the world is running on fixed laws and that the outcome is preordained. Now that we have quantum uncertainty, perhaps it is not preordained, but it is still not alterable by free will. I explained in Key Insights why free will exists despite determinism. Although the decisions we made could not have been decided otherwise, they seem to us as though they could have, because we imagine how they might have turned out otherwise, and since the physical world is too complex to predict, no one can tell us we didn’t pick one future out from among many. What makes us free is that our minds dwell in the ideal world of possibilities, and only secondarily in the physical world. In our world baseball and other generalizations exist, but they are not strict physical objects or events. When we are about to do things, and after we have done them, we don’t think only in terms of that specific instance; we generalize to all similar situations. So while actual decisions could not have been decided otherwise, we don’t see them in the context of a specific case but through that generalized lens. To function effectively, we have to view the world through this much larger lens of possibility rather than as a mundane physical world that ultimately lacks any possibility since only one path will unfold. Put another way, at the moment we take an action, we have no choice; it is done. The moment before that, the universe and our minds are simply too complex for it to be possible to predict what will happen, even though we know it must unfold deterministically. So, looking forward, minds treat all possibilities as open and interpret their action-optimizing algorithms as choosing from those possibilities, even though they are just making the generalization that situations similar to those in the past will play out in similar ways given similar reactions.

So are human choices actually shaping the world? Yes, because free will actually does exist as we think it does. The only illusion here is the idea that determinism implies the future is simple to predict. It doesn’t. Because it is possible for minds to exist and to gather information, model it, and compute and take actions, the physical world actually includes this slice of the ideal world, and so outcomes that leverage the world of possibility are entirely within the laws of determinism. In other words, determinism is not limited to the “direct” interactions of particle A hitting particle B; information processing and feedback vastly expand the range of complexity of what might happen “indirectly” (I use quotes because everything physical is necessarily direct, it is just that direct can become very convoluted). The physical universe does seem simple enough to predict if you leave out minds, but minds are part of the physical universe. When we change the world around us, it is the physical world changing itself. We are just the most complex cogs in the machine.

So brains create minds, and minds open a window into the ideal world of possibility that actually turns out to be an infinitely richer world than the physical world that spawned it. What do we know so far, scientifically, about how the mind came to be and how it works? Darwin discovered how it came to be with his theory of evolution in 1859 and the computational theory of mind (CTM), proposed in its modern form by Hilary Putnam in 1961, provides the basis of how it works. While Darwin wondered, “How does consciousness commence?”, he didn’t solve it, but he opened the door to evolutionary psychology in The Origin of Species with this comment: “Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation.” We now have a good overall sense of the evolutionary basis of all major mental capacities. The computational theory of mind (CTM) proposes mechanisms to support mental processes. The idea that thought is an exercise in information management and not a standalone substance is a major breakthrough, so far fully supported by the evidence. We now have reasonably good digital algorithms that approximate some mental functions, though we are still far short of artificial intelligence itself.

So far, the implications of these theories for understanding the mind have been best organized into one place in Steven Pinker’s 1997 book How the Mind Works. Pinker covers many implications of evolution and CTM for how the mind works in a very objective way, and I highly recommend it and will build on it. But we have a ways to go. Pinker doesn’t wade into the treacherous waters of metaphysics I’m in. I’ve introduced the idea of a computational idealism that forms an independent monism that has to be combined with physicalism to cover all that exists. From there I have developed the subjective perspective as a referential reality that funnels a cartoon of the world into a stream that can be analyzed logically to make decisions. And I explain free will as a consequence of the future being unknowable combined with action-optimizing algorithms that model possible worlds and pick from them. From where I sit, we need to expand the scope of objective science to include the ideal world, which is not a discovered world but a created one, an engineering project. Logic and math and models and ideas are built, not discovered, and the mind is a software engineering project. So much of how it works is not the simple outcome of scientific laws but the complex result of engineering decisions. The biological and social sciences, of course, accept that life is engineered and that understanding it better requires some reverse engineering, but I think they have historically undervalued the need to apply reverse engineering to psychology. Steven Pinker does an excellent job covering evolutionary psychology, and I will take that thinking further still.

An overview of computation and the mind

[Brief summary of this post]

I grew up in the 60’s and 70’s with a tacit understanding that thinking was computing and that artificial intelligence was right around the corner. Fifty years later we have algorithms that can simulate some mental tasks, but nothing close to artificial intelligence, and no good overall explanation of how we think. Why is that? It’s just that the problem is harder than we first thought. We will get there, but we need to get a better grasp of what the problem is and what would qualify as a solution. Our accomplishments over seventy or so years of computing include developing digital computers, making them much faster, demonstrating that they can simulate anything requiring computation, and devising some nifty algorithms. Evolution, on the other hand, has spent over 550 million years working on the mind. Because it is results-oriented, it has simultaneously developed better hardware and software to deliver minds most capable of keeping animals alive. The “hardware” includes the neurons and their electrochemical mechanisms, and the “software” includes a memory of things, events, and behaviors that supports learning from experience. If the problem we are trying to solve is to use computers to perform tasks that only humans have been able to do, then we have laid all the groundwork. More and more algorithms that can simulate many human accomplishments will appear as our technology improves. But my interest is in explaining how the brain manages information to solve problems and why and how brains have minds. To solve that problem, let’s take a top-down or reverse-engineered view of computation.

Computation is not just manipulation of numbers by rules or data by instructions. More abstractly, the functional conception of computation is a process performed on inputs that yields outputs that can be used to reduce uncertainty, which can be used in feedback loops to achieve predictable outcomes. Any input or output that can reduce uncertainty is said to contain information. White noise is data that is completely free of usable information. Minds, in particular, are computers because they use inputs from the environment and their own experience to produce output actions that facilitate their survival. If we agree that minds are processing information in this way, and exclude the possibility that supernatural forces assist them, then we can conclude that we need a computational theory of mind (CTM) to explain them.
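
A toy example may help pin down this functional view of computation. The thermostat-style loop below (every name in it is illustrative) takes noisy readings as input, reduces uncertainty about what to do next, and feeds its output back into the world to achieve a predictable outcome.

```python
# A toy feedback loop illustrating the functional view of computation above:
# noisy inputs carry information, a decision reduces uncertainty, and feeding
# outputs back into the world yields a predictable outcome.
import random

def sense(world):
    # Input: a noisy but informative reading (not white noise, because it
    # correlates with the true state of the environment).
    return world["temp"] + random.uniform(-0.5, 0.5)

def decide(reading, target):
    # Computation: turn the reading into an output that reduces uncertainty
    # about what to do next.
    return "heat" if reading < target else "idle"

world = {"temp": 15.0}
for step in range(40):
    if decide(sense(world), target=20.0) == "heat":
        world["temp"] += 0.5      # the output action changes the world
    world["temp"] -= 0.1          # ambient heat loss
print("final temperature:", round(world["temp"], 1))   # settles near 20
```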

Reverse engineering all the algorithms used by the brain down to the hardware and software mechanisms that support them is a large project. My focus is a top-down explanation, so I am going to concentrate just on the algorithms involved at the highest level of control, those that decide what we do. Most of the hard work, from a computational perspective, happens at the lower levels, so we will need to have a sense of what those levels are doing, but we won’t need to consider too closely how they do it. This is good because we don’t know much about the mechanics of brain function yet. What we do know doesn’t explain how it works so much as provide physical constraints with which any explanatory theory must be consistent. I will discuss some of those constraints later, in The process of mind.

At this point, I have to ask you to take a computational leap of faith with me. I will try to justify it as we go, but it is a hard point to prove, so once the groundwork has been laid we will have to evaluate whether we have enough evidence to prove it. The leap is this: the mind makes top-level decisions by reasoning with meaningful representations. This leap has a fascinating corollary: consciousness only exists to help this top-level representational decision-making process. Intuitively, this makes a lot of sense — we know we reason by considering propositions that have meaning to us, and we feel such reasoning directly supports many of our decisions. And our feeling of awareness or consciousness seems to be closely related to how we make decisions. But I need to explain what I mean by this from an objective point of view.

The study of meaning is called semantics. Semantics defines meaning in terms of the relationship between symbols, also called signifiers or representations, and what they refer to, also called referents or denotations. A model of the mind based on these kinds of relationships is called a representational theory of mind (RTM). I propose that these relationships form the meaningful representations that are the basis of all top-level reasoned decisions. I do not propose that everything in the mind is representational; most of what the mind does and experiences is not representational. RTM only applies to this top-level reasoning process. Some examples of information processing that are not representational include raw sensory perception, habitual behavior, and low-level language processing. To the extent we feel our senses without forming any judgments, those feelings are free of meaning; they just are. Instrumental music consequently has no meaning. And to the extent we behave in a rote fashion, performing an action based on intuition or learned behavior without forming judgments, those actions are free of meaning; they just happen. Perhaps when we learned those behaviors we did form judgments, and if we recall those judgments when we use the behaviors then there is some residual meaning, but the meaning has become irrelevant and can be, and often will be, forgotten. People who tie their shoelaces by rote with no notion as to why the actions produce a knot have no detailed representation of knots; they just know they will end up with a knot. So much “intelligent” or at least clever behavior can take place without meaning. Finally, although language is a critical tool, perhaps the critical tool, in supporting representational reasoning, as words and sentences can be taken as directly representing concepts and propositions, it only achieves this success through subconscious skills that understand grammar and tie concepts to words.

Importantly, just as we can tie knots by rote, we could, in theory, live our whole lives by rote without reasoning with represented objects (i.e. concepts) and, consequently, without conscious awareness. We would first need to be trained how to handle every situation we might encounter. While we don’t know how we could do that for people, we can train computers using an approach called machine learning. By training a self-driving car using feedback from millions of real-life examples, the algorithm can be refined to solve the driving problem from a practical standpoint without representation. Sure, the algorithm could not do as well as a human in entirely new conditions. For example, a human driver could quickly adapt to driving on the left side of the road in England, but the self-driving car might need special programming to flip its perspective. Also note that such algorithms do typically use representation for some elements, e.g. vehicles and pedestrians. But they don’t reason using these objects; they just use them to look up the best learned behaviors. So some algorithmic use of representation is not the same as using representation to support reasoning.
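
The sketch below illustrates this distinction under the assumptions above: representations serve only as lookup keys into a trained policy table, nothing is reasoned about, and a genuinely novel situation simply misses the table. The scene labels and the POLICY mapping are invented for the example; a real system would distill such a table from millions of training examples.

```python
# A sketch of representation used only for lookup, not for reasoning.

# Recognition produces representations (labels), e.g. from a vision system.
def recognize(scene):
    return frozenset(scene)   # e.g. {"pedestrian", "green-light"}

# Learned behavior: a trained mapping from recognized situations to actions.
POLICY = {
    frozenset({"pedestrian", "green-light"}): "wait",
    frozenset({"green-light"}): "proceed",
    frozenset({"red-light"}): "stop",
}

def act(scene):
    key = recognize(scene)
    # Pure lookup: the representations are never reasoned about, so a novel
    # situation (say, driving on the left in England) finds no entry.
    return POLICY.get(key, "no learned behavior -- would need retraining")

print(act(["green-light"]))                       # proceed
print(act(["red-light"]))                         # stop
print(act(["left-hand-traffic", "green-light"]))  # no learned behavior
```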

People can’t live their lives entirely by rote as we encounter novel situations almost continuously, so learned behavior is used more as another input to the reasoning process than as a substitute for it. Perhaps Mother Nature could have devised another way for us to solve problems as flexibly as reasoning can, but if so, we don’t know what that way might be. Furthermore, we appear quite unable to take actions that are the product of our top-level reasoning without explicit conscious awareness and attention in the full subjective theater of our minds. This experience, in surround-sound technicolor, is not at all incidental to the reasoning process but exists to provide reasoning with the most relevant information at all times.

Note that language is entirely representational by its very nature. Every word represents something. Just what words represent is more complex. Words are a window into both representational and nonrepresentational knowledge. They can be used in a highly formalized way to represent specific concepts and propositions about them in a logical, reasoned way. Or they can be used more poetically to represent feelings, impressions or mood. In practice, they will evoke different concepts and connotations in different people in different contexts as our minds interpret language at both conscious and subconscious levels. My focus on language will be more toward the rational support it provides consciousness to support reasoning with concepts, many of which represent things or relationships in the real world.

The top-down theory of mind (TDTM) I am proposing says that all mental functions use CTM, but only some processes use RTM. Further, I propose that consciousness exists to support reasoning, which critically depends on RTM while also seamlessly integrating with nonrepresentational mental capacities. While I am not going to review other theories at this time, conventional RTM theories propose that meaning ends with representation, while I say it is only the outer layer of the onion. Similarly, associative or connectionist theories explain memory and learned behavior with limited or no use of representation, as I have above, but do not propose a process that can compete with reasoning.

While the above provides some objective basis for reasoning as a flexible mental tool and consciousness as a way to efficiently present relevant information to the reasoning process, it does not say why we experience consciousness the way we do. We know we strive tenaciously and are fairly convinced, if we ask ourselves why, that it is because we have desires. That is, it is not because we know we have to survive and reproduce to satisfy evolution, but because the things we want to do usually include living and procreating. So apparently, working on behalf of our desires is a subjective equivalent to the objective struggle for survival. But why do we want things instead of just selecting actions that will best propagate our genes? Why does it feel like we’ve each got a quarter in the greatest video game ever made, a virtual reality first-person shooter that takes virtual reality to a whole new level — let’s call it “immersive reality” — in which we are not just playing the game, we are the game? Put simply, it is because life is a game, so it has to play like one. A game is an activity with goals, rules, challenges, and interaction. The imperatives of evolution create the goals and rules evolve around them. But the rules that develop are abstract ideas that don’t need a physical basis; they just need to get the job done.

The game of life has one main goal: survive. Earth’s competitive environment added a critical subgoal: perform at the highest level of efficiency and efficacy. And sexual selection, whose high evolutionary cost seems to be offset by the benefit of greater variation, led to the goal of sexual reproduction. But what could make animals pursue these goals with gusto? The truth is, animals don’t need to know what the goals are, they just need to act in such a way that they are attained. You could say theirs not to reason why, theirs but to do and die, in the sense that it doesn’t help animals in their mission to know why they eat certain foods, struggle relentlessly, or select certain mates. But it is crucial that they eat just the right amount of the foods that best sustain them and select the fittest mates. This is the basis of our desires. They are, objectively, nothing more than rules instructing us to prioritize certain behaviors to achieve certain goals. We are not innately aware of the ultimate goals, although as humans who have figured out how natural selection works, we now know what they are.

Our desires don’t force our hand; they only encourage us to prioritize our actions appropriately. We develop propositions about them that exist for us just as much as physical objects; they are part of our reality, which is a combination of physical and ideal. Closely held propositions are called beliefs. Beliefs founded in desires are subjective beliefs while beliefs founded in sensory perception are objective beliefs. Subjective beliefs could never be proven (as desires are inherently variable), but objective beliefs are verifiable. We learn strategies to fulfill desires. We learn many elements of the strategies we use to fulfill our most basic desires by following innate cues (instincts) for skills like mimicry, chewing, breathing, and so on. While we later develop conscious and reasoning control over these mostly innate strategies, it is only in a top-level supervisory capacity. So discussions of this top reasoning level are not intended to overlook the significance of well-developed underlying mechanisms that do the “real work”, much the way a company can function fairly well for a while without its CEO. With that caveat in mind, when we apply logical rules to propositions based on our senses, desires, and beliefs, the implications spell out our actions. After we have developed detailed strategies for eating and mating, we still need conscious reasoning all the time to prioritize among that cacophony of possibilities. We don’t need to know our evolutionary goals because our desires and the beliefs and strategies that follow from them are extremely well tuned to lead us to behaviors that will fulfill them.

Desires are fundamentally nonrepresentational in that they are experienced viscerally on a scale with greater or lesser force. They are not qualia themselves but the degree to which each quale appeals to us, which varies as a function of our metabolic state. So when we feel cold, we want warmth, and when we feel hunger, we want food. They are aids to prioritization and steer the decision-making process (both through reasoning and at levels below that). To reason with desires and subjective beliefs, we interpret them as weighted propositions using probabilistic logic. Because all relevant beliefs, desires, sensory qualia and memories are processed concurrently by consciousness, they all contribute to a continuous prioritization exercise that allows us to accomplish many goals appropriately despite having a serial instrument (one body). In other words, we have distinct qualia, and desires attached to them as needed, for the express purpose of ensuring all the relevant data is on the table before we make each decision.
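
Here is one way to picture that prioritization exercise computationally. In this hypothetical Python sketch, desire strengths are set by metabolic state, beliefs assign each candidate action an effectiveness for each desire, and a weighted tally ranks the actions; every name and number is illustrative.

```python
# A sketch of prioritization via weighted propositions, under the assumptions
# above: desires scale with metabolic state, and beliefs contribute weights
# that a tally turns into a single next action.

# Metabolic state drives the current force of each desire (0..1).
state = {"body_temp": "cold", "stomach": "half-full"}
desires = {
    "warmth": 0.9 if state["body_temp"] == "cold" else 0.1,
    "food":   0.8 if state["stomach"] == "empty" else 0.3,
}

# Candidate actions, each annotated with how strongly we believe it serves
# each desire (these weights stand in for learned beliefs).
actions = {
    "put on sweater": {"warmth": 0.9, "food": 0.0},
    "make soup":      {"warmth": 0.4, "food": 0.9},
    "open window":    {"warmth": -0.5, "food": 0.0},
}

def priority(serves):
    # Weighted sum: desire strength times believed effectiveness.
    return sum(desires[d] * w for d, w in serves.items())

ranked = sorted(actions, key=lambda a: priority(actions[a]), reverse=True)
for a in ranked:
    print(f"{a:15s} priority={priority(actions[a]):.2f}")
# With a cold body and a half-full stomach, "put on sweater" wins.
```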

So what is consciousness, exactly? Consciousness is a segregated aspect of the mind that uses qualia, memory, and expertise to make reasoned decisions. From a computational perspective, this means it is a subroutine fed data through custom data channels (in both nonrepresentational and representational ways) that has customized ways to process it. The nonrepresentational forms support whims or intuitions, and the representational forms support reasoned decisions. Importantly, reason has the last word; we can’t act, or at least not for very long, without support from our reasoning minds. Conversely, the conscious mind doesn’t exactly act itself, it delegates actions to the subconscious for execution, analogously to a manager and his employees. From a subjective perspective, the segregation of consciousness from the rest of the mind creates the subjective perspective or theater of mind. It seems to us to be a seamless projection of the external world into our heads because it is supposed to. We interpret what is actually a jumble of qualia as a smoothly flowing movie because the mandate to continuously prioritize and decide requires that we commit to the representation being the reality. To hesitate in accepting imagination as reality would lead to frequent and costly delays and mistakes. We consequently can’t help but believe that the world that floods into our conscious minds on qualia channels is real. It is not physically real, of course, but wetware really is running a program in our minds and that is real, so we can say that the world of our imagination, our conduit to the ideal world, is real as well, though in a different way.

Our objective beliefs, supported by our sensory qualia and memory, meet a very high objective standard, while our subjective beliefs, supported by our desires, are self-serving and only internally verifiable. Because our selfish needs often overlap with those of others and the ecosystem at large, they can often be fulfilled without stepping on any toes, but competition is an inescapable part of the equation. Our subjective beliefs give us a framework for prioritizing our interactions with others based entirely on abstracted preferences rather than literal evolutionary goals, preferences arising from desires tuned by evolution to achieve those goals. In other words, blindly following our subjective beliefs should result in self-preservation and the preservation of our communities and ecosystems. However, humans face a special challenge because we are no longer in the ancestral environment for which our desires are tuned, and we have free will and know how to use it. While this is a potential recipe for disaster, we will ultimately best satisfy our desires by artificially realigning them with evolutionary objectives. While our desires are immutable, the beliefs and strategies we develop to fulfill them are open to interpretation. In other words, we can use science and other tools of reasoning to help us adjust our subjective beliefs, through laws if necessary, to fulfill our desires in a way that is compatible with a sustainable future.

I call the portion of the conscious mind dedicated to reasoning the “single stream step selector”, or SSSS. While “just” a subprocess of the mind, it is the part of our conscious minds that we identify with most. The SSSS exercises free will in making decisions in both a subjective and objective sense. Subjectively, we feel we are freely selecting from among possible worlds. We are also objectively free in a few ways, first because our behavior is unpredictable, being driven by partially chaotic forces in our brains. Second, and more significantly to us as intelligent beings, our actions are optimized selections leveraging information management, i.e. computation, which doesn’t happen by chance or in simple natural systems. So without violating the determinism of the universe we nevertheless make things happen that would never happen without huge computational help.

The process of making decisions is much more involved than simply weighing propositions. Propositions in isolation are meaningless. What gives them meaning is context. Computationally, context is all the relationships between all the symbols used by the propositions. These relationships are the underlying propositions that set the ground rules for the propositions in question. Subjectively, a context is a model or possible world. Whenever we imagine a situation working according to certain rules, we have established a model in our minds. If the rules are somewhat flexible or not nailed down, this can be thought of as establishing a range of models. We keep a primary model (really a set of models covering different aspects) for the way we think the world actually is. We create future models for the ways we think things might go. We expect one of those future models to become real, in the sense that it should in time line up with the actual present within our limits of accuracy and detail. We keep a past model (again, really a set) for the way we think things were. Internally, our models support considerable flexibility, and we adapt them all the time as new information becomes available. Externally, at the moment we decide to do something, we have committed to a specific model and its implications. That model itself can be a weighted combination of several models that may be complementary or antagonistic to each other, but in any case, we are taking a stand. We have done an evaluation, either off the cuff or with deep forethought, of all the relevant information, using as many models as seem relevant to the situation and building new models we haven’t used before as we go if needed.
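
A small sketch may make this model bookkeeping concrete. In the toy Python below (all structures and propositions are invented placeholders), a primary model records how we think the world is, future models carry confidences about how it might go, deciding means committing to the best-weighted future, and feedback folds the realized present back in.

```python
# A toy version of the model bookkeeping described above.

class World:
    def __init__(self, propositions):
        self.propositions = dict(propositions)  # the context: symbol -> claim

past    = [World({"window": "opened easily"})]        # how things were
primary = World({"window": "closed", "room": "warm"}) # how things are
futures = {                                           # how things might go
    World({"window": "open", "room": "cooler"}): 0.7,
    World({"window": "stuck", "room": "warm"}):  0.3,
}

# Deciding means committing to the best-weighted future model.
expected = max(futures, key=futures.get)
print("acting as if:", expected.propositions)

# Learning is feedback: if the window sticks, the realized present replaces
# the primary model, and the old primary is filed away as a past model.
realized = World({"window": "stuck", "room": "warm"})
past.append(primary)
primary = realized
print("primary model now:", primary.propositions)
```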

Viewed abstractly, what the mind is creating inside is a different kind of universe, a mental one instead of a physical one. Our mental universe is a special case of an ideal universe, in which ideas comprise reality. One could argue that the conscious and subconscious realms comprise distinct ideal universes which overlap in places. And one could argue that mathematical systems and scientific theories and our own models each comprise their own ideal world, bound by the rules that define them. Ideal worlds can be interesting in their own right for reasons abstracted from practical application, but their primary role is to help us predict real world behavior. To do this, we have to establish a mapping or correlation between the model and what we observe. Processing feedback from our observations and actions is called learning. We are never fully convinced our methods are perfect, so we are always evaluating how well they work and refining them. This approach of continuous improvement was successfully applied by Toyota (where it is called kaizen), but we do it automatically. It is worth noting at this point that the above argument solves the classic mind-body problem of how mental states are related to physical states, that is, how the contents of our subjective lives relate to the physical world. The answer I am proposing, a union of an ideal universe and a physical one, goes beyond this discussion on computation, but I will speak more on it later.

We have no access to our own programming and can only guess how the program is organized. But that’s ok; we are designed to function without knowing how the programming works or even that we are a program. We experience the world as our programming dictates: we strive because we are programmed to strive, and our whole UI (user interface) is organized to inspire us to relentlessly select steps that will optimize our chances of staying in the game. “Inspire” is the critical word here, meaning both “to fill with an animating, quickening, or exalting influence” (as a subjective word) and “to breathe in” (as an objective word). Is it a mystical force or a physical action? It sits at the confluence of the mental and physical worlds, capturing the essence of our subjective experience of being in the world and our physical presence in the world that comes one breath at a time. The physical part seems easy enough to understand, but what is the subjective part?

How does a program make the world feel the way it does to us? Yes, it’s an illusion. Nothing in the mind is real, after all, so it has to be an illusion. But it is not incidental or accidental. It all stems from the simple fact that animals can only do one thing at a time (or at least their bodies can only engage in one coordinated set of actions at a time). Most of all, they can’t be in two places at the same time. The SSSS must take one step at a time, and then again in an endless stream. But why should this requirement lead to the creation of a subjective perspective with a theater of mind? It follows from the way logic works. Logic works with propositions, not with raw data from the real world. The real world itself does not run by reasoning through logical propositions; it works simply because a lot of particles move about on their own trajectories. Although we believe they obey strict physical laws, their movements can’t be perfectly foretold. First, it would violate the Heisenberg Uncertainty Principle to know the exact position and momentum of each particle, as that would eclipse their wave-like nature. And secondly, the universe is happening but not “watching itself happen”; nothing inside it stands apart to compute where it is headed. The classic statement of this is Laplace’s demon, the idea that someone (the demon) who knew the precise location and momentum of every atom in the universe could predict the future; it is now considered impossible on several grounds. But while the physical universe precludes exact prediction, it does not preclude approximate prediction, and it is through this loophole that the concept of a reasoning mind starts to emerge.

Think back to the computational leap I am trying to prove: the mind makes top-level decisions by reasoning with meaningful representations. I can’t prove that reasoning is the only way to control bodies at the top level, but I have argued above that it is the way we do it. But how exactly can reasoning help in a world of particles? It starts, before reasoning enters the picture, with generalization. The symbols we reason with don’t exist as such in the physical world. We represent physical objects with idealized representations (called concepts) that include the essential characteristics of those objects. Generalization is the ability to recognize patterns and to group things, properties, and ideas into categories reflecting that similarity. It is probably the most important and earliest of all mental skills. But it carries a staggering implication: it shapes the way minds interpret the world. We have a simplified, stripped-down view of the world, which could fairly be called a cartoon, that subdivides it into logical components (concepts, which include objects and actions) around which simple deductions can be made to direct a single body. While my thrust is to describe how these generalized representations support reason, they also support associative approaches like intuition and learned behavior. The underlying mechanism is feedback: generalized information about past patterns can help predict what patterns will happen again.
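
As a toy illustration of generalization, the sketch below strips observed objects down to feature sets and files each one under the concept whose prototype it most resembles. The prototypes and the similarity measure are invented for the example, not a claim about how brains actually categorize.

```python
# A sketch of generalization: group observations under the best-matching
# prototype, so different physical objects fall under one cartoon concept.

PROTOTYPES = {
    "chair": {"legs", "seat", "back"},
    "table": {"legs", "flat-top"},
    "cat":   {"fur", "tail", "whiskers"},
}

def categorize(observed_features):
    # Jaccard-style similarity between the observation and each prototype.
    def similarity(proto):
        return len(proto & observed_features) / len(proto | observed_features)
    return max(PROTOTYPES, key=lambda name: similarity(PROTOTYPES[name]))

# Two different physical objects fall under the same concept:
print(categorize({"legs", "seat", "back", "red-paint"}))   # chair
print(categorize({"legs", "seat", "back", "wheels"}))      # chair
print(categorize({"fur", "tail", "whiskers", "meowing"}))  # cat
```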

Reasoning takes the symbols formed from generalized representations and develops propositions about them to create logical scenarios called models or possible worlds. Everything I have written above drives to this point. A model is an idealization with two kinds of existence, ideal and physical, which are independent. For example, 1+1=2 according to some models of arithmetic, and this is objectively and unalterably true, independent of the physical world or even our ability to think it. Ideal existence is a function of relationships. On the other hand, a model can physically exist using a computer (e.g. biological or silicon) to implement it, or on paper or in another recorded form which could later be interpreted by a computer. Physical existence is a function of spacetime, which in this case takes the form of a set of instructions in a computer. To use models, we need to expect that the physical implementation is done well so that we can focus on what the model says ideally. In other words, we need a good measure of trust in the correlation from the ideal representation to the physical referent. While we are not stupid and we know that perception is not reality, we are designed to trust the theater we interact with implicitly, both because it spares us from excess worry and because that correlation is very dependable in practice.

The ideal and physical worlds are independent of each other and might always have remained so were it not for the emergence of the animal mind some 550 million years ago. The upgrades we received in the past 4 million years with the rise of the Australopithecus and Homo genera are the most ambitious improvements in a long time, but animal minds were already quite capable. We’re just version 10.03 or so in a long line of impressive earlier releases. Animal minds probably all model the world using representation, which, as noted, captures the essential characteristics of referents, as well as rules about how objects and their properties behave relative to each other in the model. Computationally, minds use data structures that represent the world in a generalized or approximate way by recording just the key properties. All representations are formed by generalizing, but while some remain general (as with common nouns), some are tracked as specific instances (and optionally named, as with proper nouns). For that matter, generalizations can be narrow or broad for detailed or summary treatments of situations. For any given circumstance the mind draws together the concepts (being the objects and their characteristics) that seem relevant to the level of detail at hand so it can construct propositions and draw logical conclusions in a single stream. We weigh propositions using probabilistic logic and consider multiple models for every situation, which improves our flexibility. This analysis creates the artificial world of our conscious experience, the cartoon. This simplified logical view seamlessly overlays with our sensory perceptions, which pull the quality of the experience up from a cartoon to photorealistic quality.
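
To suggest what such data structures might look like, here is a hypothetical Python sketch in which a general concept records just key properties (a common noun), an instance tracks a specific referent under a concept (a proper noun), and a broader link supports narrow or broad treatments of a situation.

```python
# A sketch of representational data structures: general concepts vs tracked
# instances. All names and properties are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Concept:                        # e.g. "cat" -- a generalization
    name: str
    key_properties: set
    broader: "Concept | None" = None  # narrow vs broad generalizations

@dataclass
class Instance:                       # e.g. "Felix" -- a tracked individual
    proper_name: str
    concept: Concept
    particulars: dict = field(default_factory=dict)

animal = Concept("animal", {"alive", "moves"})
cat    = Concept("cat", {"fur", "tail", "whiskers"}, broader=animal)
felix  = Instance("Felix", cat, {"color": "black", "owner": "me"})

# For a summary treatment we reason at the broad level; for detail, narrow:
print(felix.proper_name, "is a", felix.concept.name,
      "and more broadly an", felix.concept.broader.name)
```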

If the SSSS is the reason we run a simplified cartoon of the world in our conscious minds, that may explain why we have a subjective experience of consciousness, but it still doesn’t explain why it feels exactly the way it does. The exact feel is a consequence of how data flows into our minds. To be effective, the conscious mind must not overlook any important source of information when making a decision. For example, any of our senses might provide key information at any time. For this reason, this information is fed to the conscious mind through sensory data channels called qualia, and each quale (kwol-ee, the singular) is a sensory quality, like redness, loudness or softness. Some even blend to create, for example, the sensation of a range of colors. The channels provide a streaming experience much like a movie. While the SSSS focuses on just the aspects most relevant to making decisions, it has an awareness of all the channels simultaneously. So it is capable of processing inputs in parallel even though it must narrow its outputs to a single stream of selected steps.

But why do data channels “feel” like something? First, we have to keep in mind that as substantial as our qualia feel, it is all in our heads, meaning that it is ultimately just information and not a physical quality. There is no magic in the brain, just information processing. A lot of information processing goes into creating the conscious theater of mind; it is no coincidence that it seems beautiful to us. Millions of years went into tailoring our conscious experience to allow all the qualia to be distinct from each other and to inform the reasoning process in the most effective way. Any alteration to that feel would affect our ability to make decisions. How should hot and cold feel? It doesn’t really matter what they feel like so long as you can tell them apart. Surprisingly, out of context, people can confuse hot with cold, because they use the same quale channel and we use them in a highly contextual way. Specifically, if you are cold, warmth should feel good, and if you are hot, coolness should feel good. And lo and behold, they do feel that way. Much of the rich character we associate with qualia comes not from the raw sensory feel itself but from the contextual associations we develop from genetic predispositions and years of experience. So red and loud will seem a bit scarier and more alarming than blue or quiet, and soft will seem more soothing than rough. Ultimately, that qualia feel so rich and fit together seamlessly into a smooth movie-like experience attests to the extensive parallel subconscious computational support that goes into creating them.

Beyond sensory qualia, data channels carry other streams of information from subconscious processing into our conscious awareness. These streams enhance the conscious experience with emotion, recognition, and language. The subconscious mind evaluates situations, and if it finds cause for sadness (or other emotional content), it informs the conscious mind, which then feels that way. We feel emotional qualia as distinctly as sensory qualia, and the base emotions seem to have distinct channels, as we can feel multiple emotions at once. Recognition is a subconscious process that scans our memory, matching everything we see and experience to our store of objects and experiences (concepts). It provides us with a live streaming data lookup service that tells us what we are looking at along with many related details, all automatically. We think of language as a conscious process, but only a small part is conscious. A processing engine hidden to our conscious minds learns the rules of our mother tongue (and others if we teach it), and it can generate words and sentences that line up with the propositions flowing through the SSSS, or parse language we hear or read into such propositions. Language processing is a kind of specialized recognition channel that connects symbols to meanings. The goal is for the conscious mind to have a simple but thorough awareness of the world, so everything not directly relevant to conscious decision making is processed subconsciously so as not to be a distraction. Desires don’t have their own qualia but instead add color to sensory and emotional qualia. Computationally, this means additional information about prioritization comes through the qualia data channels. Desires also come through recognition data channels (memory) as beliefs. Beliefs are desires we have committed to memory in the sense that we have computed our level of desire and now remember it. As noted above, desires and beliefs are the only factors that influence how we prioritize our actions.

While we are born with no memories, and consequently all recognition and language are learned, we are so strongly inclined to learn to use our subconscious skills that all humans raised in normal environments will learn how without any overt training. We thus learn to recognize objects and experience appropriate emotions in context whether we like it or not. Similarly, we can’t suppress our ability to understand language. Interestingly, our lack of conscious control over our emotions has been theorized to help others “see” our true feelings, which greatly facilitates their ability to trust us and work for both parties’ best interests. Other subconscious skills include a facility with physics, psychology, face recognition, and more, which flow into our consciousness intuitively. We are innately predisposed to learn these skills, and once trained we use them, seemingly miraculously, without conscious thought. The net result of all these subconsciously produced data channels is that the conscious mind is fed an artificial but very informative and largely predigested movie of the world, so much so that our conscious minds can, if they like, just drift on autopilot enjoying the show with little or no effort.

Lots of information flows into the conscious mind on all these data channels. It is still too much for the SSSS to process using modeling and logical propositions. So while we have a conscious awareness of all of it, attention is a specialized ability to focus on just the items relevant to the decision-making process. Computationally, what attention does is fill the local variable slots of the SSSS process with the most relevant items from the many data channels flowing into the conscious mind. So just as you can only read words at the focal point of your vision, you can only do logic on items under conscious attention, though you retain awareness of all the data channels analogously to peripheral vision. Further, since those items must be representational, data from sensory or emotional qualia must first be processed into representations through recognition channels. We can shift focus to senses and emotions, e.g. to consciously control breathing, blinking, or laughing, through representations as well. It is similar for learned behaviors. We can not only walk and chew gum at the same time, we can also carry on a conversation that engages most of our attention. Same for when we are tying our shoes or driving. But to stay on the lookout for novel situations, we retain conscious awareness of them and can bring them to attention if needed. Conscious focus is how we flexibly handle the most relevant factors moment to moment. Deciding what to focus on is a complex mental task itself that is handled almost entirely subconsciously.
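
The sketch below illustrates this picture of attention under the stated assumptions: many data channels stream into awareness, a relevance score stands in for subconscious evaluation, and only the top few items fill the limited slots of the SSSS. The channel names, items, and scoring are all invented.

```python
# A sketch of attention: many channels stream into awareness, but only the
# most relevant items fill the limited "slots" of the SSSS.
import heapq

SLOTS = 4   # conscious attention is narrow; awareness is not

def attend(channels, goal):
    # A trivial keyword overlap stands in for subconscious relevance scoring.
    scored = []
    for channel, items in channels.items():
        for item in items:
            relevance = len(set(item.split()) & set(goal.split()))
            scored.append((relevance, channel, item))
    # Fill the slots with the top items; everything else remains in
    # peripheral awareness, ready to be promoted if it becomes relevant.
    return heapq.nlargest(SLOTS, scored)

channels = {
    "vision":  ["car approaching fast", "blue sky"],
    "hearing": ["horn honking", "birds singing"],
    "emotion": ["mild alarm about car"],
    "memory":  ["story about a car at a crosswalk"],
}

for relevance, channel, item in attend(channels, "car crossing the street"):
    print(f"[{channel}] {item} (relevance={relevance})")
```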

The loss of smell in humans probably follows from the value of maintaining a simple logical model. Humans, and to a lesser degree other primates, have lost much of their ability to smell, which has probably been offset by improvements in vision, specifically in color and depth. That primates benefit more from better vision makes sense, but why did we lose so much variety and depth from smell perception? Disuse alone seems unlikely to explain so much loss, considering rarely-used senses are still occasionally useful. The more likely explanation is that the sense of smell was a troublesome distraction from vision. That is, when forced to rely on vision, primates did better than they would have with both vision and smell. The analogy is to blind people, who develop their other senses more keenly to compensate: primates forced to develop keener visual senses could use them more effectively than those that continued to trust smell, which delivers less benefit for primates, and especially for humans. If you consider how much value we get from vision compared to smell, this seems like a good trade-off.

To summarize what I have said so far, the conscious mind has a broad subrational awareness of much sensory, emotional and recognition data. It can use intuition, learned behavior, and many subconscious skills but does so with conscious awareness and supervision. To consciously reason, the SSSS processes representations created by generalizing that data. The SSSS only reasons with propositions built on representations under conscious attention, i.e. those that are relevant. Innate desires are used to prioritize decisions, that is, they lead us to do things we want to do.

We know we are smarter than animals, but what exactly do we do that they can’t? Use of tools, language (and symbols in general), and self-awareness seem more like consequences of greater intelligence than its cause. The key underlying mental capacity humans have that other animals lack is directed abstract thinking. Only humans have the facility and penchant for connecting thoughts together in arbitrarily complex and generalized ways. In a sense, all other animals are trapped in the here and now; their reasoning capacities are limited to the problems at hand. They can reason, focus, imitate, wonder, remember, and dream but they can’t daydream, which is to say they can’t chain thoughts together at will just to see what might happen. If you think about it, it is a risky evolutionary strategy as daydreamers might just starve. But our instinctual drives have kept up with intelligence to motivate us to meet evolutionary requirements. Steven Pinker believes metaphors are a consequence of the evolutionary step that gave humans directed abstract thinking:

When given an opportunity to reach for a piece of food behind a window using objects set in front of them, the monkeys go for the sturdy hooks and canes, avoiding similar ones that are cut in two or made of string or paste, and not wasting their time if an obstruction or narrow opening would get in the way. Now imagine an evolutionary step that allowed the neural programs that carry out such reasoning to cut themselves loose from actual hunks of matter and work on symbols that can stand for just about anything. The cognitive machinery that computes relations among things, places, and causes could then be co-opted for abstract ideas. The ancestry of abstract thinking would be visible in concrete metaphors, a kind of cognitive vestige.

…Human intelligence would be a product of metaphor and combinatorics. Metaphor allows the mind to use a few basic ideas — substance, location, force, goal — to understand more abstract domains. Combinatorics allows a finite set of simple ideas to give rise to an infinite set of complex ones.1

Pinker believes the “stuff of thought” is sub-linguistic and is only translated to or from a natural language for communication with oneself or others. That is, he does not hold that we “think” in language. But we can’t discuss thinking without distinguishing conscious and subconscious thought. Consciously, we only have access to the customized data channels our subconscious provides to give us an efficient, logical interface to the world. In humans, a language data channel gives us conscious access to a subconscious ability to form or decompose linguistic representations of ideas. I agree with Pinker that the SSSS does not require language to reason, but language is a critical data channel integrally involved with advanced reasoning, i.e. directed abstract thinking. The SSSS processes many lines of thought across many models with many possible interpretations, which we can think of as being done in parallel (i.e. within conscious awareness) or in rotation (i.e. under conscious focus). But because language reduces thought to a single stream, it provides a very useful way to simplify the logical processing of the SSSS down to one stream that can be put to action or used to communicate with oneself or others. Language is also a memory aid that helps us construct more complex chains of abstract thought than could easily be managed without it, in much the same way writing amps up our ability to build longer and clearer arguments than can be sustained verbally. So the linguistic work of the SSSS, i.e. conscious thought, works exclusively with natural language, but most of the real work (computationally speaking) of language is done subconsciously by processes that map meaning to words and words to meaning. Pinker somewhat generically calls the subconscious level of thinking “mentalese”, but this word is misleading because it suggests a linguistic layer underlies reasoning when it doesn’t. Language processing is done by a specialized language center that feeds both natural language and its meaning to and from our conscious minds (the SSSS). And this center uses processing algorithms that can only process languages that obey the Universal Grammar (UG) Noam Chomsky described. But the language center does no reasoning; reasoning is a function of the SSSS, for which natural language is a tool that helps broker meanings.

So let’s consider metaphor again. The SSSS reasons with propositions built on representations that are themselves ultimately generalizations about the world. Metaphor is a generalized use of generalizations. It is a powerful tool of inductive reasoning in its own right that can help explain causation by analogy, independent of its use in language. But language does make extensive use of metaphorical words and idioms as a tool of reasoning, because a metaphor implies that explanations about the source will apply to the target. And more broadly, metaphors, like all ideas, are relational, defined in terms of each other, and ultimately joined to physical phenomena to anchor our sense of meaning. I agree with Pinker that metaphor provides a useful way to create words and idioms for ideas new to language, and that these metaphors become partly or wholly vestigial once the words or idioms are understood independent of their metaphorical origin. The words manipulate and handle derive from the skillful use of hands yet are also applied to skillful use of the mind; many mental words have physical origins like this and often retain their physical meanings, but we use them mentally without thinking of the physical meaning. Metaphorical reasoning is also well supported by language simply because it is a powerful explanatory device.

An important consequence of directed abstract thinking and language is that humans have a vastly larger inventory of representations or concepts with which they can reason than other animals. We have distinct words for a small fraction of these, and most words are overloaded generalizations that we apply to a range of concepts we can actually distinguish more finely. We distinguish many kinds of parts and objects in our natural and artificial environments and many kinds of abstract concepts like health, money, self, and pomposity.

But what about language, tool use, and self-reflection? No one could successfully argue that chimps could do these as well as we do if only they had directed abstract thinking. While directed abstract thinking is the underlying breakthrough that opened the floodgates of intelligence, it has co-evolved with language, manipulating hands and the wherewithal to use them, and a strong sense of self. Many genetic changes now separate our intellectual reach from that of our nearest relatives. Any animal can generalize from a strategy that has worked before and apply it again in similar circumstances, but only humans can reason at will about anything to proactively solve problems. Language magically connects words and grammar to meanings for us through subconscious support, but we are most familiar with how we consciously use it to represent and manipulate ideas symbolically. We can’t communicate most abstract ideas without using language, and even to develop ideas in our own minds through a chain of reasoning, language is invaluable. Though our genetic separation from chimps is relatively small and recent, the human mind takes a second-order qualitative leap into the ideal world that gives us unlimited access to ideas in principle, because all ideas are relationships.

An overview of evolution and the mind

[Brief summary of this post]

The human mind arose from three somewhat miraculous breakthroughs:

1) Natural selection, a process dating back about 2 billion years by which lifeforms change through adaptations in response to new environmental challenges

2) Animal minds, which opened up a new branch of reality: imagination. Feedback led to computation and representation, which enabled animals to flourish.

3) Directed abstract thinking, the special skill that lets people abstract away from the here and now to the anywhere and anywhen with great facility, giving us unlimited access to the world of imagination.

Of the four billion years we have spent evolving, about 600 million years (about 15%) have been as animals with minds, and at most 4 million years (about 0.1%) as human-like primates. That brief 4-million-year burst may have changed 1% to 5% of our genes, which numerically is just fine-tuning already well-established bodies and minds. Animals diverged into over a million animal species, but the appearance of directed abstract thinking in humans changed the playing field. Humans could survive not just in one narrow ecological niche but in many niches, potentially flourishing anywhere on earth and ultimately squeezing out nearly all animal competition our size or bigger. Other mental capacities coevolved with and help support directed abstract thinking, like 3-D color vision, face recognition, generalized use of hands, language, and sophisticated cognitive skills like reasoning with logic, causation, time, and space. Directed abstract thinking is a risky evolutionary strategy because it can be used for nonadaptive purposes, such as contemplating navels or spiraling into analysis paralysis. To keep us on the straight and narrow, we have been equipped with enhanced senses and emotions that command our attention more than those found in other animals, for things like love, hate, friendship, food, and sex. The more pronounced development of sexual organs and behaviors in humans relative to other primates is well known12, but the reasons are still speculative. I am suggesting one reason is to motivate us to pursue evolutionary goals (notably survival and reproduction) despite the distractions of “daydreaming”. Books, movies, TV, the internet, and soon virtual reality threaten our survival by fooling our survival instincts with simulacra of interactions with reality.

The mind is integrally connected to the mechanisms of life, so we have to look back at how life evolved to see why minds arose. While we don’t know the details of how life emerged, the latest theories fill some missing links better than before. Deep-sea hydrothermal vents3 may have provided the necessary precursors and stable conditions for early life to develop around 4 billion years ago, including at least these four:

(a) carbon fixation direct from hydrogen reacting with carbon dioxide,
(b) an electrochemical gradient to power biochemical reactions, which led to ATP (adenosine triphosphate) as the store of energy for those reactions,
(c) formation of the “RNA world” within iron-sulfur bubbles, where RNA replicates itself and catalyzes reactions,
(d) the chance enclosure of these bubbles within lipid bubbles, and the preferential selection of proteins that would increase the integrity of their parent bubble, which eventually led to the first cells

From this point, life became less dependent on the vents and gradually moved away. These steps came next:

(e) expansion of biochemical processes, including use of DNA, the ability of cells to divide and the formation of cell organelles by capture of smaller cells by larger,
(f) a proliferation of cells that led eventually to LUCA, the “Last Universal Common Ancestor” cell about 3.5 billion years ago,
(g) multicellular life, which independently arose dozens of times, but most notably in fungi, algae, plants and animals about 1 billion years ago, and
(h) the appearance of sexual reproduction, which has also arisen independently many times, as a means of leveraging genetic diversity in heterogeneous environments4 and of resisting parasites5. Whatever the reason, we have it.

The net result was the sex-based process of natural selection that Darwin identified. Lifeforms now had a biochemical capacity to encode feedback from the environment into genes that could express proteins that improve the chances of survival.

Larger multicellular life diverged along two strategies: sessile and mobile. Plants chose the sessile route, which is best for direct conversion of solar energy into living matter. Animals chose mobility, which has the advantage of invincibility if one is at the top of the food chain, but the disadvantage of requiring complex control algorithms. Animals further down the food chain are more vulnerable but require less sophisticated control. But how exactly did animals evolve the kind of control they need for a mobile existence? Sponges6 are the most primitive animals, having no neurons or indeed any organs or specialized cells. But they have animal-like immune systems and some capacity for movement in distress. Cnidarians (like jellyfish, anemones, and corals) feature diffuse nervous systems with nerve cells distributed throughout the body without a central brain, but often featuring a nerve net that coordinates movements of a radially symmetric body. What would help animals more, if it were possible, is an ability to move to food sources in a coordinated and efficient way. The radial body design seems limiting in this regard and may be why all higher animals are bilateral (though some, like sea stars and sea urchins, have bilateral larvae but radial adults). Among the bilateria, which arose about 550-600 million years ago, nearly all developed single centralized brains, presumably because central control helps them coordinate their actions more efficiently. A few invertebrates are exceptions, like the octopus, which has a brain for each arm and a centralized brain to loosely administer them. Independent eight-way control of arms comes in handy for an octopus; with practice, we can use our limbs independently in limited ways, but our attention can only focus on one at a time.

But how do nerves work, exactly? While we understand some aspects of neural function in detail, exactly how they accomplish what they do is still mostly unknown. Our knowledge of the mechanisms breaks down beyond a certain point, and we have to guess. But we can see the effects that nerves have: nerves control the body, and the brain is a centralized network of nerves that control the nerves that control the body. The existence of nerves and brains and indeed higher animals stands as proof that it is physically possible for a creature to move to food sources in a coordinated and efficient way, and indeed to enhance its chances of survival using centralized control. We can thus safely conclude, without any idea how they work, that the overall function of the brain is to provide centralized, coordinated control of the body.

For the most part, I will deal with the brain as a black box that controls the body and try to unravel its logical functions without too much regard as to its physical mechanisms. I will, however, try to be careful to take into account the constraints the brain’s structure entails. For example, we know brains must be fast and work continuously to be effective. To do this, they must employ a great deal of parallel processing to make decisions quickly. But let’s focus first on what they must do to control the body rather than how they do it.

To control a body so as to cause it to locate food sources, avoid dangers, and find mates requires that we start using verbs like “locate,” “avoid,” and “find”. We know minds can do these kinds of things while rocks and streams can’t, but how can we talk about them objectively, independent of the idea of minds? By observing their behavior. An animal’s body can move toward food as if it had a crystal ball predicting what it would find. It seems to have some way of knowing in advance where the food will be and animating its body so as to transport itself there. If rocks and streams can’t do it, how can animals?

The brain operates with a feedback loop of sensing, computing and acting. From an information standpoint, these steps correspond to data inputs, data processing, and data outputs. This is the crux of the computational theory of mind. When we speak of computation in this context, we are not referring to digital computation with 1’s and 0’s, but to any physical process that accomplishes information management. Information can be representational or nonrepresentational. Nonrepresentational information is just data that has value to the process that uses it. Raw sound or image data is nonrepresentational, as is much of the information supporting habitual behavior. Probably most of the information managed by the brain is nonrepresentational, but much of the information consciousness uses is representational. Representational information is grouped into concepts (e.g. objects) that describe essential and important characteristics of referents. Logical operations performed on the references are later applied back to the referents. For example, we recognize objects in raw image data by matching characteristics to our remembered representations of the objects.

At every moment the brain causes each part of the body to perform (or not perform) an action to produce the coordinated movement of the body toward a goal, such as locating a food source. Because there is only one body, and it can only be in one place at a time, the central brain must function as what I call a single-stream step selector, or SSSS, where a step is part of a chain of actions the animal takes to accomplish a goal. If the brain discerns new goals, it must prioritize them, though the body can sometimes pursue multiple goals in parallel. For example, we can walk, talk, eat, blink, and breathe at the same time. As I related in An overview of evolution and the mind, we prioritize goals in response to desires and subjective beliefs, which objectively and computationally are preference parameters that are well tuned to lead us to behaviors that coincide with the objectives of evolution (in the ancestral environment; they are not always so well tuned in modern times).
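A toy model of the single-stream constraint might look like the following sketch, where desire-derived priorities (hypothetical numbers) rank competing goals and exactly one coordinated step is emitted at a time:

```python
import heapq

# A toy single-stream step selector: desires assign priorities to goals,
# and because there is only one body, exactly one coordinated step is
# selected at a time. Goals and weights are invented for illustration.

goals = [(-0.9, "eat", "walk toward the food source"),
         (-0.6, "safety", "scan for predators"),
         (-0.3, "social", "groom a companion")]
heapq.heapify(goals)  # a min-heap on negated priority puts the top goal first

def next_step() -> str:
    _, _, step = goals[0]  # peek at the highest-priority goal
    return step            # the single stream emits one step

print(next_step())  # -> "walk toward the food source"
```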

While we know the whole brain must function as an SSSS to achieve top-level coordination of the body, this doesn’t mean the SSSS has to be a special-purpose subprocess of the brain. For example, we can imagine building a robot with one overall program that tells it what to do next. But evolution didn’t do it that way. In animal minds, consciousness is a small subset of overall mental processing that creates a subjective perspective that is like an internal projection of the external physical landscape. It is a technique that works very well, regardless of whether other equally good ways of controlling the body might exist. As of now, we know that we can build robots without such a perspective, such as self-driving cars, but their responses are limited to situations they have learned to handle, which is nowhere near flexible enough to handle the life challenges all animals face. Learned behavior and reasoning are the only two top-level approaches to control that have a good degree of flexibility that I know of, but I can’t preclude the possibility of others. But we do know that animals use reasoning, which I believe mandates a simplified proposition-based logical perspective/projection of the world into a top-level portion of the mind that acts as an SSSS.

Brains use a lot of parallel processing. We know this is true for sensory processing because it provides useful sensory feedback in a fraction of a second, yet we know computationally that a non-parallel solution would be terribly slow. Real-time vision, for example, processes a large visual field almost instantly. Evolution will tend to exploit tools at its disposal if they provide a competitive advantage, so many kinds of operations in the brain use parallel processing. Associative memory, for instance, throws a pattern against every memory we have looking for matches. Performed serially, the computational cost of all the mismatches generated in just a few seconds of recall would probably exceed our lifetimes, but that’s ok because our subconscious has nothing better to do, and it doesn’t bother our conscious minds with the mismatches. Control of the body is another subconscious task using massively parallel processing. So sensing, memory, and motor control are highly parallel. But what about reasoning?

The SSSS is a subprocess of the brain that causes the body to do just one (coordinated) thing at a time, i.e. a serial set of steps. But while it produces actions serially, this does not prove that reasoning is strictly serial. Propositional logic itself is serial, but we could, in principle, think several trains of thought in parallel and then eventually act on just one of them. My guess, weighing the evidence from my own mind, is that the SSSS and our entire reasoning capacity are in fact strictly serial. Drawing on an analogy to computers, the SSSS has one CPU. It is, however, a multitasking process that uses context switching to shuffle its attention between many trains of thought. In other words, we pursue just one train of thought at a time but switch between many trains of thought about different topics floating around in our heads. Each train of thought has a set of associated memories describing what has been thought so far, what is currently under consideration, and goals. For the most part, we are aware of the trains we are running. For example, I have trains now for several aspects of what I am writing about, the temperature and lighting of my room, what the birds are doing at my bird feeder, how hungry I am, what I am planning to eat next, what is going on in the news, etc. These trains float at the edge of my awareness competing for attention, but my attention process keeps me on the highest-priority task. To prioritize them, the attention process has to “steal cycles” from my primary task and cycle through the others to see if they warrant more attention. It does that at a low level that doesn’t disturb my primary train of thought too much, but enough to keep me aware of them. When we walk, talk, and chew gum at the same time, our learned behavior guides most of the walking and chewing, but we have to let these activities steal a few cycles from talking. We typically retain no memory of this low-level supervision the SSSS provides to non-primary tasks and may be so absorbed in our primary task that we don’t consciously realize we are lending some focus to the secondary tasks, but I believe we do interrupt our primary trains to attend to them. However, we are designed to prevent these interruptions from reducing our effectiveness at the primary task, for the obvious reason that quality attention to our primary task is vital to survival. The higher incidence of traffic accidents when people are using cell phones seems to confirm these interruptions. The person we are speaking to doesn’t expect us to be time-sharing them with another activity, which works out well so long as we can drive on autopilot (learned behavior). But when an unexpected driving situation requiring reasoning pops up, we naturally context switch to deal with it; the other party doesn’t realize this and continues to expect our full attention. We may consequently fail to divert enough reasoning power to driving.
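Here is a rough sketch of this serial multitasking with cycle stealing. The trains and urgency numbers are invented, and a real mind would update urgencies continuously (hunger rises, the birds fly off); only the control structure of one primary train plus periodic low-level polling is the point:

```python
# One primary train of thought gets nearly all the reasoning cycles, but
# the attention process periodically polls the others to see if any now
# deserves a context switch.

trains = {"writing": 0.8, "hunger": 0.3, "birds at the feeder": 0.2}

def work_on(train: str) -> None:
    pass  # stand-in for one increment of reasoning on this train

def run(steps: int, poll_every: int = 5) -> None:
    primary = max(trains, key=trains.get)
    for step in range(steps):
        work_on(primary)                       # full attention here
        if step % poll_every == 0:             # steal a few cycles...
            for train, urgency in trains.items():
                if urgency > trains[primary]:  # ...to recheck priorities
                    primary = train            # context switch

run(100)
```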

Why wouldn’t the mind evolve a capacity to reason about multiple tasks in parallel? I believe the benefits of serial processing with multitasking outweigh the potential benefits of parallel processing. First and foremost, serial processing allows for constant prioritization adjustments between processes. If processes could execute in parallel, deciding how to prioritize them would become much more complicated. Having the mind dedicate all of its reasoning resources to the task known to be the most important at that moment is a better use of resources than going off in many directions and trying to decide later which was better to act on. Secondly, there isn’t enough demand for parallel processing at the top level to justify it. Associative memory and other subconscious processes require parallel processing to be fast enough, but since we only need to do one thing at a time and our animal minds have been able to keep up with that demand using serial processing, parallel designs just haven’t emerged. While such a design has the potential to think much faster, achieving consensus between parallel trains is costly: it is the too-many-cooks-in-the-kitchen headache groups of people have when working together to solve problems. If the brain has a single CPU instead of many, then parts of that CPU must be centrally located, and since consciousness goes back to the early days of bilateral life, some of those parts must be in the most ancient parts of the brain.

The brain controls the body using association-based and decision-based strategies. Association-based approaches use unsupervised learning through pattern recognition. It is unsupervised in the sense that variations in the data sets alone are used to identify patterns which are then correlated to desired effects. The mind then recognizes patterns and triggers appropriate actions. In this way, it can learn to favor strategies that work and avoid those that fail. While the mind heavily depends on association-based approaches for memory and learning, they do not explain consciousness or the unique intelligence of humans, which results from decision-based strategies.

Reasoning is powered by a combination of association-based and decision-based strategies, but the association-based parts are subsidiary as the role of decision-based strategies is to override learned behavior when appropriate. Decision-based strategies draw logical conclusions from premises either with certainty (deduction) or probability (induction). Reasoning itself, the application of logic given premises, is the easy part from the perspective of information management. The hard part is establishing the premises. The physical world has no premises; it only has matter and energy moving about in different configurations. Beneath the level of reasoning, the mind looks for patterns and distinguishes the observed environment into a collection of objects (or, more broadly, concepts). The distinguished objects themselves are not natural kinds because the physical world has no natural kinds, just matter and energy, but there are some compelling practical reasons for us to group them this way. Lifeforms, in particular, each have a single body, and some of them (animals) can move about. Since animals prey on lifeforms for food, and also need to recognize mates and confederates, an ability to recognize and reason about lifeforms is indispensable. Physically, lifeforms have debatable stability, as their composition constantly changes through metabolism, but that bears little on our need to categorize them. Similarly, other aspects of the environment prove useful to distinguish as objects and by generalization as kinds of objects. Animals chunk data at levels of granularity that prove useful for accomplishing objectives. Grouping information into concepts this way sets the stage for the SSSS to use them in propositions and do logical reasoning. Concepts become the actors in a chain of events and can be said to have “cause and effect” relationships with each other from the “perspective” of the SSSS. That is, cause and effect are abstractions defined by the way the data is grouped and behaves at the grouped level that the SSSS can then use as a basis for decisions. In this way, the world is “dumbed down” for the SSSS so it can make decisions (i.e. select actions) in real time with great quality and efficiency despite having just one processing stream.

We experience the SSSS as the focal point of reasoning, the center of conscious awareness, where our attention is overseeing or making decisions. It sounds a bit surprising that we are nothing more than processes running in our brains, but unless magic or improbable laws of physics are involved, this is the only possible way to explain what we are, and it is also consistent with brain studies to date and with computer science theory. The way our conscious mind “feels” to us, more than anything, is information. The world feels amazing to us because consciousness is designed so that important information grabs our attention through all the distractions. Our conscious experiences of vision, hearing, body sense, other senses, and memory are all just ways of interpreting gobs of pure information to facilitate a continuous stream of decisions. The human conscious experience is a big step up from that of animals because directed abstract thinking enables us to potentially conceive of any relationship or system; in particular, it powers our ability to imagine possible worlds, including awareness of ourselves as abstract players in such systems.

The process of mind

[Brief summary of this post]

Let’s say the mind is a kind of computer. As a program, it moves data around and executes instructions. Herein I am going to consider the form of the data and the structure of the program. I have proposed that from the top down the mind is controlled by a process I call the SSSS, for single-stream step selector. I have argued that this process uses a single CPU, i.e. one thread or train of thought, but an unlimited number of multitasked processes, though it is only actively pursuing a handful of these at a time. And I have argued that top-level decisions use reason, either inductive or deductive logic, on propositions, which are simplifications or generalizations about the world, guided by desires, which are instinctive preferences understood consciously as preferential propositions. Propositions are represented using concepts framed by models, both of which we keep in our memory.

To decompose this further working from the top down let’s consider how a program works. First, it collects data, aka inputs. Then it does some processing on the data. Third, it produces outputs. And last, it repeats. For a service-oriented program, i.e. one that provides a continuous stream of outputs for a shifting stream of inputs, this endless iteration of the central processing loop, which for minds is heavily driven by outputs feeding back to inputs, forms the outer structure of the program. I call the loop used by the SSSS the RRR loop, for recollection, reasoning, and reaction.
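As a program skeleton, the RRR loop might be sketched like this. The function bodies are placeholders standing in for the three phases discussed below; the feedback-to-input wiring is the point:

```python
# A service-style process whose outputs feed back into its inputs.

def recollect(inputs, memory):
    """Recognition and association: distill inputs into propositions."""
    return ["propositions generalized from senses and memory"]

def reason(propositions, desires):
    """Weigh propositions against desires and choose the next step."""
    return "chosen step"

def react(step):
    """Delegate execution to subconscious control; return the feedback."""
    return ["sensory consequences of the step"]

def rrr_loop(senses, memory, desires):
    feedback = senses()
    while True:  # the loop repeats for as long as the mind runs
        propositions = recollect(feedback, memory)
        step = reason(propositions, desires)
        feedback = react(step)  # outputs feed back to inputs
```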

Before I discuss these in some detail, I want to say something about the data and instructions. If I say I’m losing my mind, I’m talking about my memory, not my faculties, which I can take for granted. All of the “interesting” parts are in the data, from our past experiences to our understanding of the present to our future plans. The instructions our brain and body follow are, by comparison, low-level and mostly hard-wired. The detailed plans that let us play the piano or speak a sentence are stored in memory. Built-in instructions support memory retrieval, logical operations, and transmission of instructions to our fingers or mouths, but any higher-level understanding of the mind relates to the contents of memory. Our memory is inconceivably vast. At any one time, we can consciously manage just a handful of data references and an impression of the data to which they refer. But that referenced data itself in turn ultimately refers to all the data in our minds, everything we have ever known, and to some degree everything everyone has ever known. Because “everything” means representations of everything, and since representations are generalizations that lose information, much has been lost. Most, no doubt. But it is still a massive amount of useful information, distilled from our personal experience, our interactions with others, culture, and a genetic heritage of instinctive impressions that develop into memory as we grow. Note that genetically-based “memory” is not yet memory at birth but a predisposition to develop procedural memory (e.g. breastfeeding, walking) or declarative memory (e.g. concepts, language).

One more thing before I go into the phases. We consciously control the SSSS process; making decisions is the part of our existence we identify with most strongly. But the SSSS process is supported by an incalculably large (from a conscious perspective) amount of subconscious thinking. Our subconscious does so much for us that we are already very smart before we consciously “lift a finger”. This predigested layer is what makes explaining the way the mind works so impenetrable: how can you explain what just appears by magic? Yes, subjectively it is magic: conscious awareness and attention form a subprocess of the mind that is constrained to see just the outer layers of thought that support the SSSS, without all the distraction of the underlying computations. But objectively we can deduce much about what the subconscious layers must be doing and how they must be doing it, and we now have machine learning algorithms that approximate some of what the subconscious does for the SSSS in a very rudimentary way. So from a computational standpoint, all three phases of the SSSS are almost entirely subconscious. All the conscious layer is doing is providing direction (recall this, reason that, react like so), and the subconscious makes it happen with a vast amount of hidden machinery.

Recollections can be either externally or internally stimulated, which I call recognition-based or association-based recall. Recognition means identifying things in the environment similar to what has been seen before, a process known in psychology as apperception. Sensory perception provides a flood of raw information that can only be put to use by the SSSS to aid in control if it can be distilled into a propositional form, which is done by generalizing the information into concepts. The mind first builds simplified generic object representations that require no understanding about what is being sensed. For example, vision processing converts the visual field into a set of colored 3-D objects adjusted for lighting conditions, without trying to recognize them. These objects must have a discrete internal representation headed by an object pointer and containing the attributes assigned to the object. For example, if we identify a red sphere, then a red sphere object pointer contains the attributes red, sphere, and other salient details we noticed. Such a pointer lets us distinguish a red sphere from a blue cube, i.e. that red goes with the sphere and blue goes with the cube, which is called the segregation problem in cognitive science, or sometimes the binding problem (technically subproblem BP1 of the binding problem). Being able to create distinct mental objects at will for anything we see that we wish to think about discretely is critical to making use of the information. Note that in this simplified example I have called out two idealized attributes, red and sphere, but this processing happens subconsciously, so it would be presumptuous (and wrong) to infer that it identifies the red sphere simply by using those two attributes. More on that below.
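A minimal sketch of such an object pointer follows, with the caveat just given that the real attributes are subconscious and far subtler than named ones like “red”:

```python
# A discrete object representation: a pointer (handle) that binds
# attributes together, which is what keeps "red" with the sphere and
# "blue" with the cube.

class PerceivedObject:
    def __init__(self, **attributes):
        self.attributes = attributes  # bound under this object's pointer

sphere = PerceivedObject(color="red", shape="sphere", finish="shiny")
cube = PerceivedObject(color="blue", shape="cube")

# Because each pointer owns its attributes, no cross-binding can occur:
assert sphere.attributes["color"] == "red"
assert cube.attributes["color"] == "blue"
```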

The next step of recognition is matching perceived objects to things we have seen before. This presupposes we have memories, so let’s just assume that for now. Memory acts like a dictionary of known objects. The way we associate perceived objects to memories, technically called pattern recognition, is solved by brute force: the object is simultaneously compared to every memory we have, trying to match the attributes of that object against the attributes of every object in memory. Technically, to do this comparison concurrently means doing many comparisons in parallel, which probably means many neural copies of the perceived object are broadcast across the brain looking for a match. Nearly all these subconscious attempts to match will fail, but if a match is found then consciously it will just seem to pop out. We know pattern recognition works this way in principle because it is the only way we could recognize things so quickly. Search engines and voice recognition algorithms use machine learning algorithms that function in a similar way, which is sometimes called associative memory. While we don’t know much yet about brain function, this explanation is consistent with brain studies and what we know about nerve activation.
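The following sketch runs this brute-force comparison serially (the brain, as argued above, would run it in parallel); the memory entries and the overlap threshold are invented for illustration:

```python
# Compare perceived attributes against every stored memory and surface
# only the hits; mismatches never reach consciousness.

memory = {
    "3-ball": {"red", "sphere", "shiny", "small", "white circle", "3"},
    "christmas ornament": {"red", "sphere", "shiny", "hook"},
    "blue cube": {"blue", "cube"},
}

def recognize(percept: set[str], threshold: float = 0.5):
    hits = []
    for name, attrs in memory.items():  # the brain does this in parallel
        overlap = len(percept & attrs) / len(percept | attrs)
        if overlap >= threshold:
            hits.append((overlap, name))
    return sorted(hits, reverse=True)

print(recognize({"red", "sphere", "shiny"}))
# -> [(0.75, 'christmas ornament'), (0.5, '3-ball')]
```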

After a match, our associative memory returns the meaning of the object, which is analogous to a dictionary definition, but while any given dictionary definition uses a fixed set of words, a memory returns a pointer connected to other memories. The meaning thus consists of other objects and the relationships from the given object to them. So when we recognize our wallet, the pointer for our wallet connects it to many other objects, e.g. to a generic wallet object, to all the items in it, and to its composition. Each of these relationships has a type, like “is a”, “is a part of”, “is a feature of”, “is the composition of”, “contains”, etc. This is the tip of the iceberg because we also have long experience with our wallet, more than we can remember, much of which is stored and can potentially be recalled with the right trigger.
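Sketched as data, the returned meaning might look like a set of typed relationships radiating from the object’s pointer; the entries here are invented stand-ins for the wallet example:

```python
# Meaning as typed relationships from a pointer to other memories.

meaning = {
    "my wallet": [
        ("is a", "wallet"),
        ("is the composition of", "leather"),
        ("contains", "driver's license"),
        ("contains", "credit card"),
        ("is a part of", "my pocket's contents"),
    ]
}

def related(obj: str, relation: str) -> list[str]:
    """Follow one relationship type out from an object's pointer."""
    return [target for rel, target in meaning.get(obj, []) if rel == relation]

print(related("my wallet", "contains"))  # -> ["driver's license", "credit card"]
```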

A single recognition event, the moment an object is compared against everything we know to find a match, is itself a simple hit or miss: our subconscious either finds relevant match(es) or it doesn’t. However, what we sense at the conscious level is a complex assembly of many such matches. There are many reasons for this, and I will list a few, but they stem from the fact that consciousness needs more than an isolated recognition event can deliver:
1. The attributes on which we base recognition are themselves often products of recognition. Our experience with substances leads us to evaluate the composition of an object based on texture, color, and pattern. Our experience with letters leads us to evaluate them based on lines, curves, and enclosed areas. Our experience with shapes leads us to evaluate them based on flatness or curviness, protuberances, and proportions. This kind of low-level recognition is based on a very large internal database of attributes comprehensible only to our internal subconscious matching process (beyond just “red” or “sphere”) that is built from a lifetime of experience and not from rational idealizations we concoct consciously. So size, luminosity, depth, tone, context and more trigger many subconscious recognition events from our whole life experience. These subconscious attributes derive from what is called unsupervised learning in machine learning circles, meaning that they result from patterns in the data and not from a qualitative assessment of what “should” be an attribute.
2. Each subset of the object’s attributes represents a potentially matchable object. So red spheres can also match anything red or any sphere. Every added attribute doubles the number of combinations and adds a new subset with all the attributes, so five attributes have 31 combinations and six have 63 (see the sketch after this list). A small shiny red sphere with a small white circle having a black “3” in it has six (named) attributes, and we will immediately recognize it as a pool ball, specifically the 3-ball, which is always red. Our subconscious does the 63 combinations for us and finds a match on the combination of all six attributes. Without the white circle with the “3”, the sphere could be a red snooker ball, a Christmas ornament, or a bouncy ball, so these possibilities will occur to us as we study the red sphere. As noted in my comments on machine learning above, the subconscious is not really using these six attributes per se but draws on a much broader and more subtle set of attributes generalized from experience. But it still faces a subset matching problem that requires more recognition events.
3. Reconsideration. We’re never satisfied with our first recognition; we keep refining and verifying it, quickly building up a fairly complex network of associations and likelihoods, which our subconscious distills down for us to the most likely recognized assembly. So a red sphere among numbered pool balls will be seen as the 3-ball even if the “3” is hidden because the larger context is taken into consideration. A red ball on a Christmas tree will be seen as an ornament. So long as objects fit into well-recognized contexts, the subconscious takes care of all the details, though this leaves us somewhat vulnerable to optical illusions.
Although the possible attribute combinations from approach (2) grow exponentially, our experience-based memory of encountered attributes using approach (1) constrains that growth. So familiar objects like phones and cars, composed of many identifiable sub-objects and attributes seen in countless related variations over the years, are instantly identified and confirmed using approach (3) even if they look slightly different from anything seen before.
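Here is the subset arithmetic from point (2) made explicit; the attribute names are the idealized ones from the pool-ball example:

```python
from itertools import combinations

# Each nonempty subset of an object's attributes is a potentially matchable
# object, so n attributes yield 2**n - 1 combinations (5 -> 31, 6 -> 63).

def matchable_subsets(attributes: list[str]):
    for size in range(1, len(attributes) + 1):
        yield from combinations(attributes, size)

ball = ["small", "shiny", "red", "sphere", "white circle", "3"]
subsets = list(matchable_subsets(ball))
print(len(subsets))  # -> 63; the full 6-attribute subset matches the 3-ball
```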

Our subconscious recognition algorithms are largely innate, e.g. they know how to identify 3-D objects and assemble memories. But some are learned. Linguistic abilities, which enable us to remember not only things but also the words that go with them and ways to compose them into sentences, are chief among these for humans. Generalization, mechanics (knowledge of motion), math (knowledge of quantity), psychology (knowledge of behavior), and focusing attention on what is important are other examples where innate talents make things easy for us. We can also train our subconscious procedural memory by learning new behaviors. In this case, we consciously work out what to do, practice it, and acquire the ability to perform it subconsciously with little conscious oversight. I allot both innate and learned algorithms to the recollection phase.

Beyond recognition, we recollect using what I call association-based recall. This happens when thoughts about one thing trigger recollection of related things. This is pretty obvious — our memory is stirred either by seeing something and recognizing it or because thinking about one thing leads to another. I already discussed how our subconscious does this to draw memories together through reconsideration, but here I am referring to when we consciously use it to elaborate on a train of thought. We can also conjure up seemingly random memories about topics unrelated to anything we have been thinking about. While subconscious and conscious free association are vital to maintaining our overall broad perspective, it is the conscious recognitions and associations that drive the reasoning process to make decisions. And in humans, our added ability to consciously direct abstract thinking lets us pursue any train of thought as far as we like.

The second phase, reasoning, is the conscious use of deductive and inductive logic. This means applying logical operations like and, or, not, and if…then to the propositions under attention. Deduction produces conclusions that necessarily follow from premises, while induction produces conclusions that likely follow from premises based on prior experience. Intuition (which I consider part of the recollection phase) is very much like a subconscious capacity for induction, as it reviews our prior experience to find good inferences. But that review uses subconscious logic hidden from us, which we can generally trust because it has been reliable before, but not trust too much because it is localized data analysis that doesn’t take everything into account the way reasoning can. Recollection and reasoning form an inner RR loop that cycles many times before generating a reaction, though if we need a very quick response we may jump straight from intuition to reaction. Although there is only one RRR loop, the mind multitasks, swapping between many trains of thought at once. This comes in handy when planning what to do next, as the mind pursues many possible futures simultaneously to find the most beneficial one. Those that seem most likely draw most of our attention while the least likely hover at the periphery of our awareness.
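A toy contrast between the two modes might look like this; the rule, facts, and observation history are invented, and real induction would of course draw on far richer experience:

```python
# Deduction: certain conclusions. Induction: probable ones.

rules = {("it is raining", "I am outside"): "I am getting wet"}  # if A and B, then C

def deduce(facts: set[str]) -> set[str]:
    """Conclusions that necessarily follow from the premises."""
    return {c for premises, c in rules.items() if all(p in facts for p in premises)}

history = [("dark clouds", True), ("dark clouds", True), ("dark clouds", False)]

def induce(observation: str) -> float:
    """How often did rain follow this observation before?"""
    outcomes = [rained for seen, rained in history if seen == observation]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

print(deduce({"it is raining", "I am outside"}))  # -> {'I am getting wet'}
print(induce("dark clouds"))                      # -> 0.666...
```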

Just as recollection is mostly subconscious but consciously steered, so too does reasoning leverage a lot of subconscious support, much of which itself leverages memory to hold the propositions and models behind all the work it is multitasking. For example, most of our more common deductions don’t need to be explicitly spelled out because plans used many times before let us blend learned behavior with step-by-step reasoning, spelling out only the details that differ from past experience. So intuition basically tells us, “I think you’ve done this kind of thing before, I’ve got this,” and we give it a bit more rope. But the top level, where reasoning occurs, is entirely conscious and is the central reason consciousness exists. A subprocess of the brain that pulls all the pieces together and considers the logical implications of all the parts is extremely helpful for handling novel situations. It turns out that nearly every situation has at least some novel aspects, so we are constantly reasoning.

The third phase of the RRR loop is reaction. Reaction has two components, deciding on the reaction and implementing it. The decision itself is the culminating purpose of the mind, and especially the conscious mind, which only exists to make such top-level decisions. The mind considers many possible futures before settling on an action that it hopes will precipitate one of them. The decision is simply the selection of the possible future (or, more specifically, one step toward that future) that the SSSS algorithm has ranked as the optimal one to aim for. That ranking process considers all the beliefs and desires the SSSS is monitoring, both from rational inputs and from irrational feelings and intuitions. Selecting the right moment to act is one of the factors managed by that consideration process, so it follows logically from the reasoning process. While there is some pressure to reconsider indefinitely to refine the reaction, there is also pressure to seize the opportunity before it slips away or hampers one’s ability to move on to other decisions. Most decisions are routine, so we are fairly comfortable using tried and true methods, but we spend more of our time on novel circumstances.
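As a sketch, the selection step might reduce to ranking imagined futures by desire-weighted value; the desires, weights, and candidate futures below are all hypothetical:

```python
# Rank imagined futures by desire-weighted value; act on one step toward
# the best one.

desires = {"food": 0.7, "safety": 0.9, "comfort": 0.3}

futures = [
    {"first_step": "cross the field", "food": 0.8, "safety": 0.4},
    {"first_step": "wait at the edge", "food": 0.3, "safety": 0.9},
]

def value(future: dict) -> float:
    return sum(desires[d] * future.get(d, 0.0) for d in desires)

best = max(futures, key=value)
print(best["first_step"])  # -> "wait at the edge" (safety weighs more)
```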

While the SSSS decides on, or at least finalizes, the reaction, it delegates the implementation or physical reaction to the subconscious to carry out, as this part doesn’t require further decision support. Even the simplest actions require a lot of parallel processing to control the muscles, and the conscious mind is just not up to that or even wired for it. So all of our reactions, in the last few milliseconds at least, leverage innate or habituated behavior. As we execute a related chain of reactions, we will continue to provide conscious oversight to some degree, but will largely expect learned behavior to manage the details. This is why studies show that the brain often commits to decisions before we consciously become aware of them, an argument that has been used to suggest we don’t have free will since the body acts “on its own”. All this demonstrates is that we have delegated our subconscious minds to execute plans we previously blessed. Of course, if we don’t like the way things are turning out, we just consciously override them. In this way, walking, for instance, becomes second nature and doesn’t require continual conscious focus. But while not in focus, all actions within conscious awareness remain under the control of the RRR loop of the SSSS process, as is necessary for overall coordinated action. Some actions not normally within the range of conscious control, like pulse rate and blood pressure, can be consciously managed to a degree using biofeedback. It is reasonable for us to lack conscious control over housekeeping tasks that don’t benefit from reason. This is why the enteric nervous system, or “gut brain”, can function pretty well even if the vagus nerve connecting it to the central nervous system is severed1.

Recollection, essential for all three phases of the RRR process, assumes we have the right kind of knowledge stored in our memory, but I did not say how it got there. Considering that our memory is empty when we begin life, we must be able to add to our store of memory very frequently early in life to develop an understanding of what we are doing. Once mature, the ability to add to our memory lets us keep a record of everything we do and to expand our knowledge to adapt to changes, which have become frequent in our fast-paced world. From a logical perspective, then, we can conclude that the brain would be well served by committing to memory every experience that passes through the RRR loop. However, one can readily calculate that the amount of information passing through our senses would fill any storage mechanism the brain might use in a few hours or days at most. So we can amend the strategy to this: attempt to remember everything, but prioritize remembering the most important things.

This is a pretty broad mandate. Without some knowledge of the brain’s memory storage mechanisms, it will be hard to deduce more details about the process of mind with much confidence. It is certainly not impossible, and I am prepared to go deeper, but now is a good time to introduce what we do know about how the memory works because brain research has produced some important breakthroughs in this area. While the history of this subject is fascinating and mostly concerns a few patients with short and long-term memory loss, I will jump to the most broadly-supported conclusions, which are mostly well-known enough now to be considered common knowledge. In particular, we have short-term and long-term memory, which differ principally in that short-term memory lasts from moments to minutes, while long-term memory lasts indefinitely. We don’t consciously differentiate the two because the smooth operation of the mind benefits from maintaining the illusion of remembering everything. We know gaps can develop in our memory quickly, but we come to accept them because they have a limited impact on our decisions going forward, which is the role of the conscious mind.

We understand long-term memory better. If you picture the brain, you see the wrinkled neocortex, most of which is folded up beneath the surface. But long-term memories are not formed in the neocortex. After all, every vertebrate can form long-term memories, but only mammals have a neocortex. Long-term memory comes in two forms stored very differently in the brain. Procedural memory (learned motor skills) is stored outside the cortex in the cerebellum and other structures, and is inaccessible to conscious thought, though we can, of course, employ it. Declarative memory (events, facts, and concepts) is created in the hippocampus, part of the archicortex (called the limbic system in mammals), which is the earliest evolved portion of the cortex. This kind of long-term memory is rehearsed by looping it via the Papez circuit from the hippocampus through the medial temporal lobe and back again. After some iterations, the memory is consolidated into a form that joins the parts together (solving the binding problem mentioned above) and is stored in the medial temporal lobe using permanent and stable changes in neural connections. Over the course of years the memory is gradually distributed to other locations in the neocortex, so that recent memories are mostly in the medial temporal lobe while older memories reach maximal distribution elsewhere by about twelve years2. For the most part, I will be focusing on declarative memory (aka explicit memory, as opposed to implicit procedural memory) as it is the cornerstone of reasoning, but we can’t forget that the rest of the brain and nervous system contribute useful impressions. For example, the enteric nervous system or “gut brain” (noted above) generates gut feelings. The knowledge conveyed from the gut is now believed to arise from its microbiome. This show of “no digestion without representation” is our gut bacteria chipping in their two cents toward our best interests.

What about short-term memory? It is sometimes called working memory because long-term memory must be brought into short-term memory to be consciously available for reasoning. In humans, we know it is mostly managed in the prefrontal lobe of the neocortex. Short-term memory persists for about 10 to 20 seconds but can be extended indefinitely by rehearsal, that is, by repeating the memory to reinforce it. In this way, it seems short-term memories can be kept for minutes without actually forming long-term memories. Active short-term memory is thought to hold about 4 to 5 items, but its effective capacity can be enlarged by chunking, which is grouping larger sets into subsets of three to four, and extended further by rehearsal, even though only 4 to 5 items are consciously available at once.
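Chunking is easy to illustrate with a standard example: ten digits exceed the 4-to-5-item limit, but grouped into chunks of three to four they fit comfortably:

```python
# Regrouping items into chunks of three to four lets a 4-to-5-slot working
# memory hold a longer sequence. The digit string is a stock example.

def chunk(items: str, sizes=(3, 3, 4)) -> list[str]:
    """Group a sequence into chunks of three to four items."""
    out, i = [], 0
    for size in sizes:
        out.append(items[i:i + size])
        i += size
    return out

digits = "8005551212"  # ten items: over the slot limit taken one by one
print(chunk(digits))   # -> ['800', '555', '1212']: three chunks fit easily
```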

While reasoning probably only considers propositions encoded in prefrontal short-term memory, the other data channels flowing into conscious awareness provide other forms of short-term memory. Sensory memory registers provide brief persistence of sensory data. Visible persistence (iconic memory) lasts a fraction of a second, one second at most, aural persistence (echoic memory) up to about four seconds, and touch persistence (haptic memory) for about two seconds. Senses are processed into information such as objects, sounds, or textures, and a short-term memory of this sensory information independent of prefrontal memory seems to exist but has not been extensively studied. Sensory and emotional data channels that provide a fairly constant message (like body sense or hunger) can also be thought of as a form of short-term memory because the information they carry is always available to be moved into prefrontal short-term memory.

Short-term and long-term memory were first distinguished in Atkinson and Shiffrin’s (1968) multi-store model. Baddeley and Hitch introduced a more complex model they called working memory to explain how auditory and visual tasks can be done simultaneously with nearly the same efficiency as if done separately. From a top-down perspective, the brain has great potential to process tasks in parallel but ultimately must reconcile any parallel processing into a single stream of actions. Processing sensory signals, however, is not the same as reacting to them, so it makes sense that we can process them in parallel and that some short-term memory capacity in each sense would facilitate that. If the mechanisms the brain uses to maintain short-term memories of sensory signals and prefrontal working memory involve closed loops that rehearse or cycle the memories, giving them enough longevity for the mind to manipulate them in various ways, then it makes sense that the brain would have just a handful of such closed loops working closely with prefrontal working memory to manage all short-term memory needs. Alan Baddeley proposed a central executive process that coordinates the different kinds of working memory, to which he added an episodic buffer in 2000. He based the central executive on the idea of the Supervisory Attentional System (SAS) of Norman and Shallice (1980).

Interestingly, we appear to be unable to form new long-term memories during REM sleep, nor do our dreaming thoughts pursue prioritized objectives. However, if we are awakened or disturbed from REM sleep we can recover our long-term storage capacity quickly enough to commit some of our dreams to memory. This suggests some mechanisms of the SSSS are disabled during dreaming while others still operate3.

Having established the basic outer process of the conscious mind as an RRR loop within an SSSS process, supported by algorithms and memory that largely operate subconsciously, the next question is how this framework is used to generate the content of the conscious mind: concepts and models.

Concepts and Models

[Brief summary of this post]

In The process of mind I discussed the reasoning process as the second phase of the RRR loop (recollection, reasoning, and reaction). That discussion addressed the procedural elements of reasoning, while this discussion will address the nature of the informational content. Information undergoes a critical shift in order to be used by the reasoning process, a shift from an informal set of associations to explicit relationships in formal systems, in which thoughts are slotted into buckets that can be processed logically into outcomes that are certain instead of just likely. Certainty is dramatically more powerful than guesswork. The buckets are propositions about concepts, and the formal systems are an aspect of mental models (which I will hereafter call models).

I have previously described this formal cataloging as a cartoon, which you can review here. So is that it then: consciousness is a cartoon and life is a joke? No, the logic of reasoning is a cartoon, but the concepts and models that feed it bridge the gap; they have an informal side that carries the real meaning and a formal side that is abstracted away from the underlying meaning. There is consequently a great schism in the mind between the formal or rational side and the informal or subrational side. Both participate in conscious awareness, but the reason for consciousness is to support the rational side. Reasoning requires that the world be broken down, as it were, into black and white choices, but to be relevant and helpful it needs to remain tightly integrated with both external and internal worlds, so the connections between the cartoon world and the real world must be as strong as possible.

So let’s define some terms in a bit more detail and then work out the implications. I call anything that floats through our conscious mind a thought, which includes anything from a sensory perception to a memory to a feeling to a concept. A concept is a thought about something, i.e. an indirect reference to it, and this indirect reference is the formal aspect that supports reasoning, the thought process that uses concepts to form propositions for logical analysis. (A concept may also be about nothing; see below.) What concepts refer to doesn’t actually matter to logical analysis; logic is indifferent to content. Of course, content ultimately matters to the value of an analysis, so reasoning goes beyond logic to incorporate meaning, context, and relevance. I therefore distinguish reasoning from rational thought in that reasoning leverages both rational and subrational thinking. Concepts likewise leverage both: though they may be developed or enhanced by rational thinking, they are first and foremost subrational. They are a way of grouping thoughts, e.g. sensory impressions or thoughts about other thoughts, into categories for easy reference.

We pragmatically subdivide our whole world into concepts. The divisions are arbitrary in the sense that the physical world has no such lines — it is just a collection of waves and/or particles in ten or so dimensions. But they are not arbitrary in the sense that patterns emerge that carry practical implications: wavelets clump into subatomic particles, which clump into atoms, which clump into molecules, which clump into earth, water, and air or self-organize into living things. These larger clumps behave as if causes produce effects at a given macro level, which can explain how lakes collect water or squirrels collect nuts. The power that organizes things into concepts is generalization, which starts from recognizing commonalities between two or more experiences. A fixed reaction to sensory information, e.g. to keep eating while hungry, is not a sufficiently nuanced response to ensure survival. No one reaction to any sight, sound, or smell is helpful in all cases, and in any case, one never sees exactly the same thing twice. Generalization is the differentiator that provides the raw materials that go into creating concepts. Our visual system contains custom algorithms to differentiate objects, based on hardwired expectations about the kinds of boundaries between objects that we most benefited from discriminating in our ancestral environment. Humans are adapted to specialize in binocular, high-resolution, 3-D color vision of slowly moving objects under good lighting, even to the point of being particularly good at recognizing specific threats, like snakes1. Most other animals do better than us with fast objects, poor lighting, and peripheral vision. My point here is just that there are many options for collecting visual information and for generalizing from it, and we are designed to do much of that automatically. But being able to recognize a range of objects doesn’t tell us how best to interact with them. Animals also need concepts that relate the value of those objects in order to make useful decisions.

Internally, a concept has two parts, its datum and a reference to the datum, which we can call a handle after the computer science term for an abstract, relocatable way of referring to a data item. A handle does two things for us. First, it says: I am here, I am a concept, you can move me about as a unit. Second, it points to its datum, which is a single piece of information insofar as it has one handle, but which connects to much more information, the generalizations, which together comprise the meaning of the concept. A datum uniquely collects the meaning of a given concept at a given time in a given mind, but other thoughts or concepts may also use that connected information for other purposes. This highly generalized representation is very flexible because a concept can hold any idea — a sensation, a word, a sentence, a book, a library — without restricting alternative formulations of similar concepts. And a handle with no datum at all is still useful in a discussion about generic concepts, such as the unspecified concept in this clause, which doesn’t point to anything!
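
Since I am borrowing the term handle from computer science, a small sketch may help make the structure concrete. This is only an illustration, not a claim about neural implementation; the names and data here (Concept, AssociativeStore, the apple associations) are invented for the example.

```python
class AssociativeStore:
    """Shared pool of experiences and generalizations (the informal side)."""
    def __init__(self):
        self.links = {}   # key -> set of associated keys

    def associate(self, a, b):
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

class Concept:
    """A handle: a movable unit that points into the shared store."""
    def __init__(self, name, store=None):
        self.name = name      # "I am here, I am a concept"
        self.store = store    # may be None: a handle with no datum

    def datum(self):
        # The datum is not a private copy; it is whatever the shared
        # store currently links to this concept's name.
        return self.store.links.get(self.name, set()) if self.store else set()

store = AssociativeStore()
apple = Concept("APPLE", store)
store.associate("APPLE", "red")
store.associate("APPLE", "taste:sweet-tart")
generic = Concept("SOME-CONCEPT")   # a handle with no datum at all
print(apple.datum())                # e.g. {'red', 'taste:sweet-tart'}
print(generic.datum())              # set()
```

Note that two concepts built over the same store share data without sharing handles, which is the point of the handle/datum split.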

To decompose concepts we need to consider what form the datum takes. This is where things start to get interesting, and is also the point where conventional theories of concepts start to run off the rails. We have to remember that concepts are fundamentally subrational. This means that any attempt to decompose them into logical pieces will fail, or at best produce a rationalization2, which is an after-the-fact reverse-engineered explanation that may contain some elements of the truth but is likely to oversimplify something not easily reducible to logic. For a rational explanation of subrational processes, we should instead think about the value of information more abstractly, e.g. statistically. The datum for the concept APPLE (discussions of concepts typically capitalize examples) might reasonably include a detailed memory of every apple we have ever encountered or thought about. If we were to analyze all that data we might find that most of the apples were red, but some were yellow or green or a mixture. Many of our encounters will have been with products made from apples, so we have a catalog of flavors as well. We also have concepts for prototypical apples for different circumstances, and we are aware of prototypical apples used by the media, as well as many representations of apples or idiomatic usages. All of this information and more, ultimately linking through to everything we know, is embedded in our concept for APPLE. And, of course, everyone has their own distinct APPLE concept.
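
To make the statistical point concrete, here is a toy sketch that treats APPLE’s datum as an aggregate over stored exemplars rather than a definition: the prototype emerges from the data instead of being stipulated. The exemplars and fields are invented for illustration.

```python
from collections import Counter

# Invented exemplars standing in for remembered apple encounters.
apple_exemplars = [
    {"color": "red",    "context": "grocery"},
    {"color": "green",  "context": "orchard"},
    {"color": "red",    "context": "pie"},
    {"color": "yellow", "context": "grocery"},
]

# The "prototype" is just the statistically dominant feature value,
# not a necessary or sufficient condition for being an apple.
color_counts = Counter(e["color"] for e in apple_exemplars)
prototype_color, _ = color_counts.most_common(1)[0]
print(color_counts)        # Counter({'red': 2, 'green': 1, 'yellow': 1})
print(prototype_color)     # 'red' (typical, but not defining)
```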

Given this very broad, even all-encompassing, subrational structure for APPLE, it is not hard to see why theories that seek to give concepts a logical structure might go awry. The classical theory of concepts3, widely held until the 1970s, holds that every concept has necessary and sufficient conditions that define it. It further says that concepts are either primitive or complex. A primitive concept, like a sensation, cannot be decomposed into other concepts. A complex concept either contains (is superordinate to) constituent concepts or implies (is subordinate to) less specific concepts, as red implies color. But actually, concepts are not composed of other concepts at all. Their handles are all unique, but their data are shared. Concepts are not primitive or complex; they are handles plus data. And concepts don’t have discrete definitions; a datum comprises a large amount of direct experience that ultimately links to everything else we know. Rationalizations of this complex reality may have some illustrative value but won’t explain concepts.

The early refinements to the classical theory, through about the year 2000, fell into two camps: revamp or rejection. Revamps included the prototype, neoclassical, and theory-theory approaches, while rejection was championed by the atomistic theory. I’m not going to review these theories in detail here; I am just going to point out that their approach limited their potential. Attempts to revamp still held out hope that some form of definitive logical rules ultimately supported concepts, while atomism covered the alternative by declaring that all concepts are indivisible and consequently innate. But we don’t have to go down either of those routes; we just have to recognize that there are at least two great strategies for information management: mental association and logic. Rationality and reasoning depend on logic, but there are an unlimited number of potentially powerful algorithmic approaches for applying mental associations. For example, our minds subconsciously apply such algorithms for memory (storage, recall, and recognition), sensory processing (especially visual processing in humans), language processing, and theory of mind (ToM, the ability to attribute mental states — beliefs, intents, desires, pretending, knowledge, etc. — to oneself and others). Logic itself critically depends on the power of association to create concepts and so is at least partially subordinate to it. So an explanation of reasoning doesn’t result in turtles (logic) all the way down. One comes first to logic, which can be completely abstracted from mental associations. One then gets to concepts, which may be formed purely by association but usually include parts (necessarily embedded in the concepts) built using logic as well. And finally one reaches associations, which are untouchable by direct logical analysis and can only be rationally explained indirectly via concepts, which in turn simplify and rationalize them, consequently limiting their explanatory scope to specific circumstances or contexts.

I have established that concepts leverage both informal information (via mental association) and formal information (via logic), but I have not yet said what it means to formalize information. To formalize means to dissociate form from function. Informal information is thoroughly linked or correlated to the physical world. While no knowledge can be literally “direct”, since direct implies physical and knowledge is mental (i.e. relational, being about something else), our sensory perceptions are the most direct knowledge we have. And our informal ability to recognize objects, say an APPLE, is also superficially pretty direct — we have clear memories of apples. Formalization means selecting properties from our experiences of APPLES that idealize, in a simple and generalized way, how they interact with other formalized concepts. On the one hand, this sounds like throwing the baby out with the bathwater, as it means ignoring the bulk of our apple-related experiences. But on the other hand, it represents a powerful way to learn from those experiences, as it gathers usable information about them into one place. I call that place a model; it goes beyond a single generalization to create a simplified or idealized world in our imagination that follows its own brand of logic. A model must be internally consistent but need not correspond to reality. Our goal is usually to align our models with reality, but we cognitively distinguish models from reality. We recognize, intuitively if not consciously, that we need to give our models some “breathing room” to follow the rules we set for them rather than any “actual” rules of nature, because we don’t have access to the actual rules. We only have our models (including models we learn from others), along with our associative knowledge (which we don’t throw out with the bathwater; it is the backbone beneath our models). Formally, models are formal systems or, in the context of human minds, mental models. Formal systems are dissociated from their content; they are just rules about symbols. But their fixed rules make them highly predictable, which is very helpful when those predictions can be applied back to the real world. The good news is that many steps can be taken to ensure that they do correlate well with reality, converting their form back into function.
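
A formal system in this bare sense, just rules about symbols, can be shown in a few lines. The sketch below is loosely based on Hofstadter’s MIU system (my choice of example, not one the post relies on); the rules manipulate strings with no regard for what the symbols mean, yet every derivation is perfectly predictable.

```python
def successors(s):
    """All strings derivable from s in one step, by pure symbol rules."""
    out = set()
    if s.endswith("I"):               # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):             # Rule 2: Mx -> Mxx
        out.add(s + s[1:])
    for i in range(len(s) - 2):       # Rule 3: replace any III with U
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])
    for i in range(len(s) - 1):       # Rule 4: delete any UU
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])
    return out

# Derive "theorems" from the axiom "MI". The symbols M, I, U mean nothing;
# the system's behavior comes entirely from its form.
theorems = {"MI"}
for _ in range(3):
    theorems |= {t for s in theorems for t in successors(s)}
print(sorted(theorems))
```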

But why do we formalize knowledge into models? Might not the highly detailed, associative knowledge remembered from countless experiences be better? No; the detailed view misses the forest for the trees, so we instead simplify reality down to bare-bones cartoon descriptions in models to create useful information. Generalization eliminates irrelevant detail to identify commonality. The mind isolates repetitive patterns over space and time, which inherently simplifies and streamlines. This initially creates a capacity for identification, but the real goal is a capacity for application. Not just news, but news you can use. So from patterns of behavior, the mind starts to generalize rules. It turns out that the laws of nature, whatever they may ultimately be, have enough regularity that patterns pop up everywhere. We start to find predictable consequences from actions at any scale. We call them cause and effect when the effect follows only if the cause precedes, presumably due to some underlying laws of nature. It doesn’t matter if the underlying laws of nature are ever fully understood, or even if they are known at all, which is good because we have no way of learning what the real laws of nature are. All that matters is the predictability of the outcome. And predictability does approach certainty for many things, which is when we nominate the hypothesized cause as a law. But we need to remember that what we are really doing is describing the rules of a model, and neither the underlying concepts in the model nor their rules can ever perfectly correspond to the physical world, even though they may appear to do so for all practical purposes. Where there is one model, there can always be another with slightly different rules and concepts that explains all the same phenomena. Both models are effectively correct until a test can be found to challenge them. This is how science vets hypotheses and the paradigms (larger-scale models) that hold them.

Having established that we have models and why, we can move on to how. As I noted above, while logic can be abstracted from mental associations, it is not turtles (i.e. logic) all the way down. Models are a variety of concept, and concepts are mostly subrational, the informal products of association: we divine rules and concepts about the world using pattern recognition without formal reasoning. We can and often do greatly enrich models (and all concepts) via reasoning, which ultimately makes it difficult or impossible to say where the subrational leaves off and the rational begins.4 As noted above, we can’t use reason to separate subrational from rational, because that is rationalizing, whose output is rational. Rational output has plenty of uses, but it can’t help but stomp on subrational distinctions. Yet although we can’t identify where the subrational parts of a model end and the rational parts begin, the division does exist, which means we can speak of an informal model that consists of both subrational and rational parts and a formal model consisting of only rational parts. When we reason, we are using only formal models, which implicitly derive their meaning from the informal models that contain them. This is a requirement of formal systems: the rules of logic operate on propositions, which are statements that affirm or deny something about a subject, and that subject must itself be a concept. So “apples are edible” and “I am hungry” are propositions about the concepts APPLE, EDIBLE, and HUNGRY (at least). Our informal model in this scenario consists of the relevant aspects of the data (plural of datum) of these concepts and all the related interactions we recall or have generalized about in the past. To create a formal model with which we can reason, we add propositions such as “hunger can be cured by eating” and “one must only eat edible items”. From here, logical consequences (entailments) follow. With this model, I can conclude as a matter of logical necessity that eating an apple could cure my hunger. So while our experience may remind us (by association) of many occasions on which apples cured hunger, reasoning provides a causal connection. Furthermore, anyone would reach that conclusion with this model, even though the data behind their concepts vary substantially. The conclusion holds even if we have never eaten an apple and even if we don’t know what an apple is. So chains of reasoning can provide answers where we lack first-hand experience.
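
The mechanical character of entailment can be shown with a tiny sketch: the formal model reduced to propositions and a rule, processed by naive forward chaining. The encoding below is invented for illustration, not a claim about how the mind implements reasoning.

```python
# Propositions are tuples; a rule fires when all of its premises are present.
facts = {("edible", "APPLE"), ("hungry", "I")}

# Rule: if I am hungry and APPLE is edible, eating it could cure my hunger.
rules = [
    (frozenset({("hungry", "I"), ("edible", "APPLE")}),
     ("could_cure_hunger", "eat APPLE")),
]

changed = True
while changed:                      # keep applying rules until nothing new
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(("could_cure_hunger", "eat APPLE") in facts)   # True
```

Nothing in the computation consults the datum behind APPLE; the conclusion follows from the propositions alone, which is exactly the sense in which logic is indifferent to content.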

So we form idealized worlds in our heads, called models, so we can reason and manage our actions better. But how much better, exactly, can we manage them than with mental association alone? At the core of formal systems lies logic, which makes everything true in the system necessarily true and so, in principle, confers the power of total certainty. Of course, reasoning is not completely certain, as it involves more than just logic. As Douglas Hofstadter put it, “Logic is done inside a system while reason is done outside the system by such methods as skipping steps, working backward, drawing diagrams, looking at examples, or seeing what happens if you change the rules of the system.”5 I would go a step beyond that. Hofstadter’s methods “outside the system” are themselves inside systems of rules of thumb or common sense that we develop and that are highly rational. We might not have formally written down when it is a good idea to skip steps or draw diagrams, but we could, so these are still what I call formal models. But that still only scratches the surface of the domain of reason. Reasoning more significantly includes accessing conscious capacities for subrational thought across informal models, and so is played on a vastly larger field than rational thought within formal models. In fact it must be played in this larger arena because logic alone is an ivory tower — it must be contextualized and correlated to the physical world to be useful. Put simply, we constantly rebalance our formal models using data and skills (e.g. memory, senses, language, theory of mind, emotion) from informal models, which is where all the meaning behind the models lies. I do still maintain that consciousness overall exists as a consequence of the simplified, logical view of rationality, but our experience of it also includes many subjective (i.e. irrational) elements that, not incidentally, also provide us with the will to live and thrive.