The Mind Matters: The Scientific Case for Our Existence

The mind occupies a very tenuous position scientifically. Probably no two scientists would agree on precisely what kind of entity the mind even is. At one extreme are the eliminative materialists (or eliminativists), who believe that although our states of mind form convincing illusions, they are really just aggregates of neurochemical phenomena rather than objectively real in their own right. At the other extreme are the cognitive idealists, who believe that our mental states are real but immaterial, that is, that mental states are ontologically distinct from (have a separate basis for existence from) and are not reducible to the physical. If science had to take an official stance, it would favor the former, noting the predominance and relative certainty of the hard sciences, which have successively eroded notions of human preeminence, such as our place in the universe and our divine creation. Probably most would say that the brain’s clear role as the mind’s mechanism settles the matter, and all that remains is to work out the details, at which point we will be able to relegate the mind to the dustbin of history along with the flat earth, geocentrism, and phlogiston. Just as chemistry was shown by Linus Pauling1 to be reducible to physics but is still worth studying as a special science, so, too, do social scientists tacitly accept that the mind is probably reducible to mechanisms but can still be useful to study as if aggregate conceptions of it meant something. The whole point of this book is to prove that this analogy is flawed and to establish a nonphysical yet scientific basis for the existence of the mind as we know it. Dualism, the idea of a separate existence of mind and matter, has long had a taint of the mystical or even the supernatural. But dualism is correct, scientific, and not at all mystical, at least when formulated correctly.
I did not originate the basis for dualism I will present, but I have refined it substantially so it can stand as a stronger foundation for cognitive science, and for science overall.

Any contemplation of existence starts with solipsism, the philosophical idea that only one’s own mind is sure to exist. As Descartes put it, I think, therefore I am. The next candidates for existence are the contents of our thoughts, which invariably include the raft of persistent objects that we think of as making up the physical world. In fact, objects persist so predictably that a whole canon of materialist scholarship, physical science, has developed. The rules of physical science hold that everything physical must follow physical laws. While these laws cannot be proven, their comprehensiveness and reliability suggest that we should trust them until proven otherwise. Applying physical science to our own minds has revealed that the mind is a process of the brain. That the brain is physical seems to imply that the mind is physical, too, which in turn suggests that any nonphysical sense we have of our thoughts is an illusion; it is just our mind looking at itself and interpreting physical processes as something more than they are. In the words of the neurophilosopher Paul Churchland, the activities of the mind are just the “dynamical features of a massively recurrent neural network”2. From a physical perspective, this is entirely true, provided one accepts the phrase “massively recurrent neural network” as a gross simplification of the brain’s overall architecture. The problem lies in the word “dynamical”, which takes for granted (incorrectly, as we will see) that all change in a physical world can be understood using a purely physical paradigm. However, because some physical systems use complex feedback loops to capture and then use information, and information itself is not physical, we need a paradigm that can explain processes that use it. Information is the basic unit of function, which is an entirely separate kind of existence. These two kinds of existence create a complete ontology (philosophy of existence), which I call form and function dualism.
While philosophers sometimes list a wider variety of categories of being (e.g. properties, events), I believe these additional categories reduce to either form or function and no further.

Because physical systems that use information (which I generically call information management systems) have an entirely physical mechanism, one can easily overlook that something more is happening. Let’s take a closer look at the features of functional existence. Functional existence is not a separate substance in the brain as Descartes proposed. It is not even anywhere in the brain because only physical things have a location. Any given thought I might have simultaneously exists in two ways, physically and functionally. Its physical form is arguably a subset of my neurons and what they are up to, and perhaps also involves to some degree my whole brain or my whole body. We can’t yet delineate the physical boundaries of a thought, but we know it has a physical form. The thought also has a function, being the role or purpose it serves. The thought’s purpose is to assist the brain in its overall function, which is to control the body in ways that stand to further the survival and reproduction of the gene line. “Control” broadly refers to predictive strategies that can be used to guide actions, hopefully to achieve desired outcomes more often than chance alone would. And even seemingly purposeless thoughts participate in maintaining the overall performance of the mind and can be thought of as practice thoughts if nothing else. For a predictive strategy to work better than chance, it needs to gather information, i.e. to correlate similar past situations to the current situation. The relationship between information and the circumstances in which it can be employed is inherently indirect, and abstractly so. While we establish an indirect reference whenever we use information, until it is used the information is strictly nonphysical, though it can be stored physically. 
By this, I mean that the information characterizes potential relationships through as-yet unestablished indirect references, and so its “substance” is essentially a web of connections untethered to the physical world, even though this web has been recorded in a physical way. Consequently, no theory of physical form can characterize this nonphysical function, which is why function must be viewed as a distinct kind of existence. The two kinds of existence never even hint at each other. Form doesn’t hint at purpose, only thinking about function does. The universe runs, some would say runs down, without any concept of purpose. And function doesn’t mandate form; many forms could perform the same function.

While form does not need function for its physical existence, function needs a form, an information management system, to exist physically. This raises the obvious question: if we were to understand the physical mechanism of the mental world, would that explain it? First, “understand” and “explain” are matters of function, not form, so one must presumably be willing to accept this use of function to explain itself. Second, given point one, we must also accept that there are always many functional ways to explain anything, each forming a perspective that covers different aspects (or the same aspects in different ways). In the case of the brain, I would distinguish at least three primary aspects: a mechanical understanding of neural circuitry to explain the form, an evolutionary understanding to explain the long-term (gene line) control function, and an experiential understanding to explain the short-term (single lifetime) control function. Physically, neural circuitry and biochemistry underlie all brain activity. Functionally, information management in the brain includes both instinctive inclinations encoded in DNA and lessons from experience encoded in memory. Both instinct and experience use feedback to achieve purpose-oriented designs, though evolution has no designer and the degree to which we design our behavior will be a subject of future discussion. These three kinds of understanding are the subjects of neurochemistry, evolutionary biology, and psychology respectively.

Neurochemistry and evolutionary biology will not be my primary angle of attack, even though they are fundamental and must support any conclusions I might reach. The reason is that the third category, psychology, encompassing our knowledge from a lifetime of experience and the whole veneer of civilization, has a greater relevance to us than the underlying mechanisms that make it possible, and, more significantly, is “under our control” in ways our wiring and instincts are not. We use the word artificial to distinguish the aspects and products of mind we can control, and humans appear to be able to control a whole lot more than other animal minds. To attack this memory-based, experiential side of the mind, I will at first draw what support I can from neurochemistry and evolutionary biology and from there propose new theory drawing on introspection, common knowledge, psychology and computer science. In other words, I am going to speculate, but within an appropriate framework, one as objectively defined as I believe anyone has yet attempted for this subject. To keep my presentation from getting bogged down in details, I will just state my positions at first, whether they are based on new theory or old, and I will gradually develop supporting arguments and sources as I go on.

To cut right to the chase, what stands out about the human mind relative to all others is its greater capacity for abstract thought (note that this is new theory). People use their minds to control themselves, but they can also freely associate seemingly any ideas in their heads with any others ad infinitum to consider potentially any situation from any angle. Of course, whether and how this association is really free is a matter for further discussion, but, practically speaking, we are flexible with our generalizations and so can find connections between any two things or concepts. This lets us leverage our knowledge in vastly more ways than if we only applied it narrowly, as is more generally the case with other animals. Language is taken by some to be the source and hallmark of human intelligence, and it did coevolve with the evolutionary expansion of abstract thought. Spatial and temporal thinking also expanded, but linguistic, spatial, and temporal thinking could not have evolved further without abstract thinking, which made them worthwhile and so can be thought of as the driving force. The combination of greater abstract thought with the linguistic, spatial, and temporal gifts that employ it collectively constitutes the human mental edge over other species.

Abstraction is the essence of what it means to be human (above and beyond being an animal). It is the light bulb that goes off in our heads to show that we “get it”. If you interviewed a robot that appeared to be able to make abstract connective leaps, you would have to begrudgingly grant it a measure of intelligence. But if it could not, then no matter how clever it was at other things you would still think of it as a dumb automaton. Abstraction was hardly a mandated or obvious development of evolution, so we could study neurochemistry and evolutionary biology for some time without ever suspecting it happened, as it didn’t happen in other animals. But we have a strong, vested interest in understanding human minds in particular, so we have to look for differences where we can and try to explain them. A remarkable thing about abstraction is that it gives us unlimited cognitive reach: we can understand anything, and no idea is beyond us. Our unlimited capacity to abstract gives us the ability to transcend our neurochemical and evolutionary limitations, at least in some ways. We can use mechanical means, e.g. books or computers, to extend our memory and processing power. We can ignore evolutionary mandates enforced by emotions and instincts to choose any path we like using abstract reasoning, a freedom evolution granted us with the expectation that we would think responsibly to meet our evolutionary goals and not thwart them. If we destroy the planet and ourselves in the process, that will reveal the drawback of this expectation.

There is a school of thought, new mysterianism, that holds that some problems are beyond human ability to solve and therefore to understand, possibly because of our biological limitations. This position was proposed to justify the insolubility of the so-called “hard problem of consciousness”, which is the problem of explaining why we have qualia (sensory experiences). The problem is considered hard because no apparent physical mechanism can explain them. I hold that it is nonsensical to suggest that any problem is beyond our ability to understand and solve because understanding comes from without, not from within. That is, understanding is a perspective outside that which is being understood. It consists of generalizations, which are essentially approximations with some predictive power. One can always generalize about anything, no matter how ineffable or complex, so one can always understand it, at least to a certain degree. “Perfect” understanding is always impossible because of the approximate nature of generalizations. In functional terms, to generalize means to extract information or useful patterns collected from different perspectives or observations, i.e. to serve different purposes. For example, we observe the universe, and although we cannot see the underlying mechanism, we have generalized many laws of nature to describe it, explain it, and solve any number of problems. So solutions to problems are just explanatory perspectives. If the point the new mysterians are making is that we can’t prove the true nature of the universe, then that point has to be granted, because laws of nature can’t be proven (and perfect explanations aren’t possible anyway because they are approximate). But a theory that explains, say, all relevant observations about qualia can be formulated, and I will present one later on.

But what about problems that are just too complex, with too many parts, for humans to wrap their heads around? By generalizing more we can always break the complex down into the simple, so we can definitely understand and solve them eventually. But I have to concede that some problems may be beyond our practical grasp because it would take us too long to properly understand all the parts and how they fit together. A classic example of this issue is four-dimensional (or higher) vision. We take our ability to visualize in 3D for granted, and yet we draw on highly customized subconscious hardware to do it. Unless we somehow find a way to add 4D hardware to our brains, we will never have an intuitive feel for higher dimensional thinking. We can always project slices down to 2D or 3D, and so in principle we can eventually solve problems in higher dimensions, but our lack of intuition is a serious practical hindrance. Another example is our “one-track mind”, which makes it hard for us to conceive of complex (e.g. biological) processes that happen in parallel. We instead track them separately and try to factor in knock-on effects, but this is a crude approximation. We have to accept that our practical reach has many constraints that can obscure deeper understanding from us.

Let me return from my digression into human-specific capacities to the existential nature of function. I initially said that information is the basic unit of function, and I defined information in terms of its ability to correlate similar past situations to the current situation to make an informed prediction. This strategy hinges on the likelihood that similar kinds of things will happen repeatedly. At a subatomic level the universe presumably never exactly repeats itself, but we have discovered consistent laws of nature that are highly repeatable even for the macroscopic objects we typically interact with. Lifeforms, as DNA-based information management systems, bank on the repeatable value of genetic traits when they use positive feedback to reward adaptive traits over nonadaptive ones. Hence DNA uses information to predict the future. Further adaptation will always be necessary, but some of what has been learned before (via genetic encoding) remains useful indefinitely, allowing lifeforms to expand their genetic repertoire over billions of years. Some of this information is provided directly to the mind through instincts. For example, the value of wanting to eat or to have kids has been demonstrated through countless generations. In the short term, however (that is, over a single organism’s lifetime), instincts alone don’t and can’t capture enough detailed information to provide the level of decision support animals need. For example, food sources and threats vary too much to ingrain them entirely as instincts. To meet this challenge, animals learn, and to learn an animal must be able to store experiential information as memory that it can access as needed over its lifetime. In principle, minds continually learn throughout life, always assessing effects and associating them with their causes. In practice, the minds of all animals undergo rapid learning phases during youth followed by the confident application of lessons learned during adulthood.
Adults continue to learn, but acting quickly is generally more valuable to adults than learning new tricks, so stubbornness overshadows flexibility. I have defined function and information as aids to prediction. This capacity of function to help us underlies its meaning and distinguishes it from form, even though we use mechanisms (form) to perform functions and store information. Form and function are distinct: the ability to predict has no physical aspect, and particles, molecules, and objects have no predictive aspect.
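To make this idea of prediction concrete, here is a toy sketch in Python. The situations and outcomes are invented for the example, and real minds are nothing like this tabular; the point is only that stored form (the tallies) acquires function the moment it is used to correlate past situations with the current one and guide a prediction:

```python
from collections import Counter, defaultdict

class ExperienceMemory:
    """A toy store of past (situation, outcome) pairs used to predict outcomes."""

    def __init__(self):
        # For each situation, tally how often each outcome followed it.
        self.outcomes = defaultdict(Counter)

    def record(self, situation, outcome):
        self.outcomes[situation][outcome] += 1

    def predict(self, situation):
        """Return the most common past outcome for this situation, or None."""
        seen = self.outcomes.get(situation)
        if not seen:
            return None  # no relevant experience: no better than chance
        return seen.most_common(1)[0][0]

memory = ExperienceMemory()
memory.record("red berry", "tasty")
memory.record("red berry", "tasty")
memory.record("green berry", "bitter")
print(memory.predict("red berry"))   # → tasty
```

Nothing in the physics of the stored tallies hints at this predictive role; the function exists only relative to the use the system makes of them.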

With that established we can get to the most interesting aspect of the mind, which is this: brains manage information in two ways, but only one of those ways is responsible for the existence of minds. I call these two kinds of information logical and associative, and minds exist because of logical information. I will get to why further down, but first let’s look at these two kinds of information. Logical information invokes formalizations and logic while associative information gleans patterns. The simplest kind of formalization is the indirect reference, which is the ability to reference something using a name, reference, or container instead of the value itself. In the brain, I will assume the existence of a low-level (i.e. subconscious) capacity to create such containers, which I will henceforth call concepts. We often use words to label concepts, though most are unnamed. Also, words come to refer to a host of concepts and not just one, including what we think of as their distinct dictionary entries, connotations, and also a constantly evolving set of variations specific to our experience. We manage associative information entirely through subconscious processing because that processing is done in parallel. We don’t consciously think in parallel; we consciously follow a single train of thought, but subconsciously all our neurons are working for us all the time behind the scenes, and any insights they glean from a variety of algorithms that survey the available data can be called associative information. Associative processing is a lot like inductive reasoning in that it is based on a preponderance of the evidence, but it differs in being subconscious rather than conscious, and while it may leverage or produce generalizations/concepts, it doesn’t have to. I collectively label associative information that bubbles up to conscious awareness as intuition. Intuition, an immediate or instinctive understanding without conscious reasoning, derives from our senses, instincts, emotions, and memory.
The classic example of associative information is recognition, in which an input image is simultaneously compared against everything the brain has ever seen and the best match just “pops” out consciously as identified. This also helps us connect ideas to similar ideas we have had before, which gives us our ready understanding of how things work. Brain processes are either logical or associative, but they can leverage each other. Associative information supports the formalization of concepts, and concepts, which are formed and extended by logical thinking, are often searched associatively once committed to memory.
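The contrast between the two kinds of information can be caricatured in a few lines of Python. This is purely illustrative (the concepts and features are invented for the example): the logical side manipulates named containers by rule, while the associative side simply lets the best match pop out of stored examples:

```python
# Logical information: a concept is an indirect reference, a name bound to a
# generalization, which rules can manipulate without touching any instance.
concepts = {
    "fruit": {"edible", "grows on plants"},
    "apple": {"edible", "grows on plants", "round", "red or green"},
}

def is_a(instance, category):
    """A concept-level (logical) check: the instance satisfies the category."""
    return concepts[category] <= concepts[instance]

# Associative information: the best match "pops out" of everything stored.
def recognize(features, memory):
    """Return the stored concept whose features overlap the input the most."""
    return max(memory, key=lambda name: len(memory[name] & features))

print(is_a("apple", "fruit"))                    # → True
print(recognize({"round", "edible"}, concepts))  # → apple
```

The logical check follows a rule step by step; the associative match just surveys everything at once and reports a winner, with no reasoning to inspect.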

Concepts are at the core of logical thought. Concepts are generalized, indirect references that can be applied to similar situations. “Baseball”, “three” and “red” can refer to any number of physical phenomena but are not physical themselves. Even when they are applied to specific physical things, they are still only references and not the things themselves. Churchland would say these mental terms are part of a folk psychology that makes sense to us subjectively but has no place in the real world, which depends on the calculations that flow through the brain but does not care about our high-level “molar” interpretation of them as “ideas”. Really, though, the mysterious, folksy, molar property he can’t quite put his finger on is function, and it can’t be ignored or reduced. The atomic embodiment of function in our logical thinking is the concept, and in our associative thinking it is intuition. The way that concepts and intuition guide our actions results from neurochemical mechanisms, but the mechanisms themselves can’t tell us why we chose an action; for that, we have to look at the information that was used from a functional perspective. Some eliminativists might say that using functional means to explain the physical is still physical, because function is an advanced aspect of complex physical systems, but this is a cheat: specific events are physical, while generalizations about functionality are not.

But why do minds only exist because of logical information? In this usage of mind I am distinguishing the mind from the totality of everything the brain does. Computers do a lot too, but they don’t have minds. We reserve the word “mind” to describe a certain subset of what the brain does, specifically our first-person capacity for awareness, attention, perception, feeling, thinking, will and reason. We currently lack a workable scientific explanation for why we seem to be agents experiencing these mental states. Consequently, we can’t define them except, irreducibly, in terms of each other. I have an explanation, one I will elevate to the standard of being scientific, and I will now summarize it. Our agency ultimately follows from the fact that concepts are generalizations, and generalizations are fictional, which literally means they are only figments of our imagination. These figments create worlds independent of the physical world, if by world we mean a functional realm or domain, composed of concepts (generalizations) defined in terms of other concepts. Causes and effects can be tracked in functional domains the same way they can in the physical world, but this tracking happens relative to the perspective of the functional domain, which is to say according to logical and associative operations. I am going to argue that this relative perspective creates the sense of consciousness as we know it specifically because logic must follow a single path or “train of thought”. Let’s look closer at the ways (i.e. the functional domains) that association and logic are used in the brain.

We don’t know all that much about associative functional domains because our brains perform associative operations subconsciously. Loosely speaking, we collect from a set of experiences a body of knowledge that groups similar experiences, and using this we can then decide whether or not a new example belongs to the set. This skill has been artificially replicated using machine learning algorithms that are “trained” using many examples that have been correlated to identifying characteristics of each example. This skill most obviously allows us to recognize things from memory, but it also lets us recall the correct word for a situation, it tells us what things to focus our attention on, and it inspires us to feel calm or scared in soothing or dangerous situations. We don’t have to reason these things out; we know them intuitively from expert associative functional domains.
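The trained-on-examples skill mentioned above can be caricatured with a toy nearest-neighbor classifier. The feature vectors and labels here are made up for illustration; the point is that membership in a set is decided by resemblance to stored examples, with no rule or causal model anywhere in sight:

```python
import math

# Hypothetical training examples: (feature vector, label) pairs, where the
# two features might stand for, say, "warmth of lighting" and "loudness".
training = [
    ((8.0, 1.0), "soothing"),
    ((7.5, 1.5), "soothing"),
    ((1.0, 9.0), "dangerous"),
    ((2.0, 8.5), "dangerous"),
]

def classify(features):
    """1-nearest-neighbor: the closest remembered example decides the label."""
    def distance(example):
        return math.dist(example[0], features)
    return min(training, key=distance)[1]

print(classify((7.0, 2.0)))   # → soothing
```

Real machine learning systems generalize far better than this, but the character of the answer is the same: a verdict backed by the weight of past evidence rather than by reasoning.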

While we also don’t know that much about logical functional domains because much of what they do is also done subconsciously, we do have considerable conscious awareness of them and so can study them somewhat directly through introspection. Doing this, we can see that (loosely speaking, again) we establish rules of cause and effect linking our concepts together. Causality is just a way of describing the predictive power of logical, but not associative, information. Associative information is predictive, but it can’t be said to be causative because while it can indicate likely outcomes, it is indifferent to whether a causative mechanism exists. It can advise our decisions by suggesting behaviors that have worked well in the past in similar circumstances, which is exactly how our subconscious behaviors work. Nearly everything our brains do follows this “mindless” but effective automated approach. The problem is that it doesn’t work very well in novel situations, and there is at least some novelty in nearly all our top-level interactions. This is where logic and reasoning come in. If we had rules that could tell us what was going to happen with certainty (or significant accuracy) by representing the situation in mechanical terms, then that would be not only extremely “predictive” but also causative. For example, if I let go of a book, gravity will cause it to fall to the ground. This conclusion is entirely independent of the associative knowledge I acquired from the thousands or millions of times I’ve seen things fall; it is based solely on the logical model in my head that describes how objects are pulled down to the ground by gravity. The true mechanism of the universe, which is unknowable anyway, doesn’t matter. Whether gravity is a force or results from the shape of spacetime, if we presume it is causative it can do much more than intuition alone.
The catch is that rules don’t operate on raw sensory inputs; they need to operate on nicely packaged pieces of information. In the brain these logical pieces are concepts. If I can generalize a hand and a book from my sensory data, I can map my gravity rule onto the book. I can then identify a cause (letting go) with an effect (the book falling). That the effect will follow the cause more often than when the cause did not happen demonstrates that the cause is at least partially responsible for it and justifies labeling them cause and effect. What this tells us is that there is more to abstraction than just forming concepts from generalizations. We form networks of interrelated concepts and rules about them called systems of abstract thought. Much of this book will focus on figuring out how these systems work in our minds, but we can also devise systems that work outside our own minds, which are easier to study. Language is, of course, the primary example of such a system. It partially standardizes a number of concepts and rules and can be used or extended to characterize any new concepts or rules that come along. A fully standardized, i.e. logical, abstract system is called a formal system. Formal systems, which often employ formal languages, can achieve much greater logical precision than natural language34.
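The way a rule operates on concepts rather than on raw sensory data can be sketched in Python. Everything here (the percepts, the properties, the gravity rule) is a deliberately crude invention for illustration: perception first packages raw input into a concept, and only then can the causal rule apply:

```python
# A causal rule operates on concepts, not raw sense data: once percepts have
# been generalized into concepts, the same rule applies to any of them.
def generalize(percept):
    """Stand-in for perception: map a raw input to a concept with properties."""
    concepts = {
        "book": {"unsupported_falls": True},
        "helium balloon": {"unsupported_falls": False},
    }
    return concepts[percept]

def predict_release(percept):
    """Apply the gravity rule: cause (letting go) leads to effect (falling)."""
    concept = generalize(percept)
    return "falls" if concept["unsupported_falls"] else "does not fall"

print(predict_release("book"))            # → falls
print(predict_release("helium balloon"))  # → does not fall
```

The rule never needed to see a single instance of falling; its predictive power comes entirely from the conceptual model, which is what makes it causative rather than merely associative.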

Systems of abstract thought employ conceptual models, which are sets of rules and concepts that work together to characterize some topic to be explained (the subject of prediction). Any system of abstract thought would use a variety of conceptual models to cover different topics, and then use additional associative or logical means to select and arbitrate between conceptual models for each application. Because abstract thought systems can use associative strategies to manage conceptual models, and because concepts themselves depend on associative support for their meaning, these systems are not as strictly logical as formal systems but instead include both associative and logical aspects. In fact, generally speaking, the mind fluidly uses both associative and logical information, each to their strengths, and so is often not aware of their distinction. But they are very different kinds of information, and we do recognize the difference on reflection.

“Conceptual model” is an abstract term that describes networks of concepts independently of the ways such models work in actual minds, or in our minds specifically. A conceptual model presented through language (natural or formal) doesn’t depend directly on the structure of the mind to be expressed, and so much potential complexity has been abstracted away. A conceptual model as it is held in our minds is called a mental model. As a conceptual model, a mental model represents only one logical model to explain its underlying topic(s). As noted in Wikipedia:

Mental models are based on a small set of fundamental assumptions (axioms), which distinguish them from other proposed representations in the psychology of reasoning (Byrne and Johnson-Laird, 2009). Each mental model represents a possibility. A mental model represents one possibility, capturing what is common to all the different ways in which the possibility may occur (Johnson-Laird and Byrne, 2002). Mental models are iconic, i.e., each part of a model corresponds to each part of what it represents (Johnson-Laird, 2006). Mental models are based on a principle of truth: they typically represent only those situations that are possible, and each model of a possibility represents only what is true in that possibility according to the proposition. However, mental models can represent what is false, temporarily assumed to be true, for example, in the case of counterfactual conditionals and counterfactual thinking (Byrne, 2005).

The article says, simplistically as it will turn out, that “People infer that a conclusion is valid if it holds in all the possibilities.” If the way we used mental models were as strictly logical as mental models themselves, then this would hold up, but the systems of abstract thought that we use to reason include associative knowledge and processing, which is not logical itself. So you could say that we use intuitive approaches to rate our mental models, lending appropriate levels of credence to each. But it is more than that; we also use intuitive powers to align mental models with observed circumstances, this time using our power of recognition to match models instead of objects (or other concepts). We can just feel when a model is a good fit, but this is not logical (rational); it is based on experience and the weight of the evidence our subconscious has considered for us. We will, of course, supplement associative knowledge with logical scrutiny when it seems appropriate, focusing our attention on details to make sure the models and concepts we are using make logical sense. But this logical oversight, and all of logical reasoning and consciousness, for that matter, are analogous to a CEO who indirectly oversees thousands of employees gathering information and doing things down below. The CEO has just one train of thought, while all those workers… well, they are not quite analogous to people because they are just associative, processing information without dwelling on the implications. To summarize, mental models are possible worlds, but it takes more than logic to match up possible worlds to actual worlds and to evaluate their worth.
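The strictly logical part of the article’s inference rule is easy to render in code. This sketch uses an invented premise (“if it is raining, the ground is wet”) purely for illustration: enumerate the possibilities consistent with the premise, then call a conclusion valid only if it holds in every one of them:

```python
from itertools import product

def possibilities():
    """All (raining, wet) worlds consistent with 'if raining then wet'."""
    return [(r, w) for r, w in product([True, False], repeat=2)
            if (not r) or w]

def valid(conclusion):
    """Mental-model style validity: the conclusion holds in every possibility."""
    return all(conclusion(r, w) for r, w in possibilities())

print(valid(lambda r, w: (not r) or w))   # → True  (the premise itself)
print(valid(lambda r, w: (not w) or r))   # → False (affirming the consequent)
```

This is what reasoning would look like if it were purely logical; the argument above is that our actual use of mental models also leans on associative matching and credence that no such enumeration captures.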

Now we can see why logic creates agency. Logic is all about dwelling on the implications. Consciousness and its attendant traits — awareness, attention, perception, feeling, thinking, will and reason — are all designed to efficiently reduce the deluge of information flowing from our senses to our brains into a single train of thought, a train that is single because logic can only draw one conclusion from premises at a time. The entity I think of as me is just a logical process in my brain doing what it has to do. If you think about it, it can’t help but create the traits of consciousness; they need to be that way to work. But why do they feel the way they do? This, too, is not a miracle; it is just a bias. We think of our sensations as being miraculous because we can distinguish them from each other so readily, and because each feels so appropriate to its task, but, then again, they have to feel different and appropriate. Their appropriateness is a simple consequence of their being highly integrated with the knowledge most relevant to them. Heat feels comfortable in moderation and painful when there is too much or too little. What else would we expect? Red seems bright and surprising while green is calm and soothing. This helps us distinguish fruits from leaves. Emotions are hardwired reactions that, big surprise, inspire us to react in exactly the ways that are most likely to be helpful. In short, our subjective feelings feel the way they do as a shorthand for the information they represent. It is the feel of information that has been packaged up by associative subconscious systems into easily digestible form for conscious consideration. Our exact feel of qualia evolved over billions of years, so our subjective experience is not a blank slate that is different for each of us but is extremely similar in everyone.
Furthermore, since the feel of qualia is an expression of their function, similar qualia must feel the same in different species, although with appropriate differences due to variations in function. It is ironic that qualia (especially emotion) are not themselves logical and yet they exist to support logic and consequently consciousness, which need to have complex inputs simplified down to a single stream for logical processing.

Finally, let’s circle back to see whether we really need a functional view or whether a physical view might suffice. Arguably, we can understand life processes using only physical (i.e. biochemical) and not functional (i.e. evolutionary) analysis. Biochemistry explains all the processes of metabolism and reproduction, including DNA’s mechanisms. We can consequently say that everything that has happened since life began was just a sequence of physical events without invoking functional notions like evolution. So the reason some traits and species survived while others perished is superficially nothing more than that the individuals survived, which in turn can be attributed to their mechanisms working well. Even adaptive change can be explained mechanically: the physical system is set up so that feedback selects mechanisms that will be more likely to be adaptive in future situations. One doesn’t have to jump to any conclusions or generalizations about “traits” these mechanisms “carry”. We don’t need a functional perspective until and unless we want to leverage logic, either as a way of explaining the design or doing logical design ourselves. Logic, the hallmark power of consciousness, brings capabilities to the table that a purely physical understanding of functional phenomena lacks. Logic lets us explain function with function, and in so doing lets us draw confident conclusions about things that will happen that a physical explanation couldn’t even hint at. Understanding the biochemistry gives us no sense of how well a mechanism will do in the future because physical perspectives don’t predict the future. When we look at biology from an evolutionary perspective, suddenly we see purposes for every trait, purposes which in turn can be said to have at least partially caused the traits to evolve. We develop an explanation that carries much more predictive power than a biochemical explanation alone.

The above argument applies to the mind as well, since it is one of the life processes under evolutionary management. But the mind is much more than that because it does not just leverage information stored in DNA, it leverages information collected from cultural and personal experience, and these kinds of information are arguably of greater interest to us than those that we acquire innately. Neurochemistry studies the mechanics of mental functionality at the “hardware” (physical) level. While highly dependent on this physical form, cognitive function introduces a whole new plane of existence which I have argued is not even hinted at by the hardware. My goal is to explain the cognitive function of the brain, also called our mental life or the mind. DNA encodes the subset of mental functionality that is purely biological, and evolutionary biology studies this perspective. I have argued that cognitive function in humans qualitatively exceeds that of other animals because we evolved greater powers of abstraction simultaneously with language. Psychology is the field directly charged with studying mental life. Psychology and other social sciences have had to take our mental existence as a prerequisite to do further work, but my goal here is to break that existence down into its component parts. But how can we objectively study something we can’t see or measure with instruments, but can only examine through introspection and observation of behavior? As I have said, neurochemistry and evolutionary biology are our most objective sources, so we have to start there. Evolutionary psychology is the branch of evolutionary biology concerned with cognitive function, and I will look very closely at what it has revealed. We like to give ourselves credit for creating a manmade “artificial” layer of existence, a veneer of civilization, over and above our natural endowment of mental gifts.
But it is not over and above; our capabilities are entirely genetic and our accomplishments are entirely consistent with and driven by our nature. And yet, the manmade parts — our culture, education and personal experience — have themselves through feedback participated in our evolution, while also creating worlds of information beyond anything genetics could have anticipated. Abstraction opened the door to a knowledge explosion, because once it is packaged abstractly, knowledge not only accumulates but can build on itself to create ever more complex artifacts and institutions. So although we are still physical creatures constrained by biochemistry and evolution, we have the freedom to abstract across the full range of functional existence.

The Mind from Behind the Scenes

We are all the stars of the movies that play in our own heads. We write our own parts and then act them out. Of course, we don’t literally write or act: we think about what we want, then we imagine ways to get it, and then we do things to achieve it. We know why we do it: to preserve our lives and lifestyle. But we don’t know how we do it. We don’t know, in a detailed, scientific sense, what is happening when we are wanting or imagining. While scientists are fairly certain our minds are the consequence of fantastically intricate but natural processes in the brain, from our first-person perspective it looks to us like magic, even supernatural. Thanks to the theory of evolution and the computational theory of mind we can now imagine how it might be natural, but we can only explain it in very broad strokes that leave most of the answers to the imagination. In a world where seemingly everything important is now well-understood, must we continue to accept that the core of our being has not been explained and won’t be anytime soon? I am bothered by this because I think we can do better, and more to the point, I think we already know enough to do better.

This shortcoming in our knowledge hasn’t gone completely unnoticed. It has interested philosophers for thousands of years and scientists for over a hundred, leading to the formation of cognitive science as a discipline in 1971 and to the 1990s being dubbed the “decade of the brain”. Much has been learned, but not much consensus has formed around explanations. Instead, the field is littered with half-baked theories and contentiously competing camps. The casual observer might wonder whether we have made any progress at all. Contrast this for a moment with the consensus on climate change, which has risen to nearly 100% in the forty-some years since Wallace Broecker first brought the matter into the public eye in 1975 with his paper “Climatic Change: Are We on the Brink of a Pronounced Global Warming?”1. What separates the two is that the theory of global warming rests on the well-proven warming effects of carbon dioxide and the human contribution of carbon dioxide to the atmosphere, while theories of the mind rest on an almost unlimited number of less certain factors. We only understand a tiny fraction of the phenomena at play in the brain and mind. This suggests that any explanatory theory will mostly be guesswork. Yes and no. Yes, we have to guess, i.e. hypothesize, first before we can see if those guesses hold up. But these can be very informed guesses. We know enough to establish a broad scientific consensus around an overall explanatory theory of the mind. Though it is still early days, and we should still expect a wide range of viewpoints, it is no longer so early that we can’t roughly agree on much of what is going on, as supported by an extensive body of common knowledge and established science. It is my objective to pull together what we already know into a single, coherent scientific explanation, i.e. an overarching scientific theory.

While some interesting and insightful books have been written that summarize what we know about how the mind works, e.g. Steven Pinker’s How the Mind Works2, their authors somewhat necessarily stay close to their area of expertise, which is where their knowledge and credibility are highest. From my perspective, Pinker covers a broader range than anyone else but still does not actually touch on what I consider to be the core issue, which is the existential character of thought. It is a bias of natural scientists to see things mostly in physical terms, which leaves them in the awkward situation of explaining functional phenomena in terms of physical processes. This leads ultimately to evolutionary psychology, in which scientists use evolutionary arguments to explain psychological traits as chemical consequences of adaptation. These arguments make sense, and they are even true, but they miss the forest for the trees, which is this: the physical mechanisms of the brain, which provide a neurochemical basis for instincts, emotions and thought, are not themselves the objects of existence under discussion. The mind is actually about function, information, and purpose, which is an alternate plane of existence with which we have intimate familiarity but which is invisible to physical science. I am going to correct this fundamental oversight herein, and develop its consequences into a unified approach to science, and especially to cognitive science.

Minds not Brains: Introducing Theoretical Cognitive Science

[Brief summary of this post]

I’m going to make a big deal about the difference between the mind and the brain. We take our concept of mind for granted, but it is completely without scientific basis. Conventionally, the mind is “our ability to feel and reason through a first-person awareness of the world”. This definition begs the question of what “feel”, “reason” and “first-person awareness” might be, since we can’t just define the mind by using terms that are only meaningful to the owner of one. While we can safely say they are techniques that help the brain perform its primary function, which is to control the body, we will have to dig deeper to figure out how they work. Our experience of mind links it strongly to our bodies, and scientists have long made the case that it resides in our nervous systems and the brain in particular. Steven Pinker says that “The mind is what the brain does.”1 This is only superficially right, because it is not what, but why. It is not the mechanism or form of the mind that matters as much as its purpose or function. But how can we embark on the scientific study of the mind from the perspective of its function? As currently practiced, the natural sciences don’t see function as a thing itself, but more as a side effect of mechanical processes. The social sciences start with the assumption that the mind exists but take no steps to connect it back to the brain. Finally, the formal sciences can study theoretical, abstract systems, including logic, mathematics, statistics, theoretical computer science, information theory, game theory, systems theory, decision theory, and theoretical linguistics, but leave it to natural and social scientists to apply them to natural phenomena like brains and minds. What is the best scientific standpoint to study the mind? Cognitive science was created in 1971 to fill this gap, which it does by encouraging collaboration between the sciences. 
I think we need to go beyond collaboration and admit that the existing three branches have practical and metaphysical constraints that limit their reach into the study of the mind. We need to lift these constraints and develop a unified and expanded scientific framework that can cleanly address both mental and physical phenomena.

Viewed most abstractly, science divides into two branches, the formal and experimental sciences, with the formal being entirely theoretical, and the experimental being a collaboration between theory and testing. Experimental science further divides into fundamental physics, which studies irreducible fields and/or particles, and special sciences (all other natural and social sciences), which are presumed to be reducible to fundamental physics, at least in principle. Experimental science is studied using the scientific method, which is a loop in which one proposes a hypothesis, then tests it, and then refines and tests it again ad infinitum. Hypotheses are purely functional while testing is purely physical. That is, hypotheses are ideas with no physical existence, though we think about and discuss them through physical means, while testing tries to evaluate the physical world as directly as possible. Of course, we use theory to perform and interpret the tests, so testing can’t escape some functional dependency. The scientific method tacitly acknowledges and leverages both functional and physical existence, even though it does not overtly explain what functional existence might be or attempt to explain how the mind works. That’s fine — science works — but we can no longer take functional existence and its implications for granted as we start to study the mind. It’s remarkable, really, that all scientific understanding, and everything we do for that matter, depend critically on our ability to use our minds, yet require no understanding of how the mind works or what it is doing. But we have to find a way to make minds and ideas into objects of study themselves to understand what they are.

The special sciences are broken down further into the natural and social sciences. The natural sciences include everything in nature except minds, and the social sciences study minds and their implications. The social sciences start with the assumption that people, and hence their minds, exist. They draw on our perspectives about ourselves and on patterns in our behavior to explain what we are and help us manage our lives better. Natural scientists (aka hard scientists) call the social sciences “soft sciences” because they are not based on physical processes bound by mathematical laws of nature; nothing about minds has so far yielded that kind of precision. Our only direct knowledge of the mind is our subjective viewpoint, and our only indirect knowledge comes from behavioral studies and evolutionary psychology. The study of behavior finds patterns in the ways brains make bodies behave that support the idea of mental states but doesn’t prove they exist. Evolutionary psychology also suggests how mental states could explain behavior, but can’t prove they exist. The differences in approach have opened up a gap between hard and soft sciences that currently can’t be bridged, but we have to bridge it to develop a complete explanation of the mind. This schism between our subjective and objective viewpoints is formally called the explanatory gap, referring specifically to the fact that we don’t know how physical properties alone could cause a subjective perspective (and its associated feelings) to arise. I closed this gap in The Mind Matters, but not rigorously. In brief, I said that the mind is a process in the brain that experiences things the way it does because creating a process that behaves like an agent and sees itself as an agent is the most effective way to get the job done. So perceptions are just the way our brains process information and “present” it to the process of mind. 
It is not a side effect; much of the wiring of the brain was designed to make this illusion happen exactly the way it does.

Natural science currently operates on the assumption that natural phenomena can be readily modeled by hypotheses which can be tested in a reproducible way. This works well enough for simple systems, i.e. those which can be modeled using a handful of components and rules. The mind, however, is not a simple phenomenon. Unlike a muscle in which every fiber works the same way for the same purpose, the mind is structured around complexity, with every thought being different for different purposes. More than just complexity, the mind has a different metaphysical nature. While thoughts are apparently implemented using nerve impulses through neurons, their significance is not in how they are implemented but in what they refer to; their physical instantiation is of less significance than their mental worth. Consequently, developing a hypothesis and testing it is vastly more complicated than for simple natural phenomena, so the prevailing practices and conventions in the natural sciences won’t work. Yet the attitude among natural scientists is that the mind will ultimately just boil down to brain chemistry; once we know all the details, it will reveal the workings of the mind like a cuckoo clock. This work will indeed reveal the physical mechanisms, but it can’t reveal the reasons for the design, the purposes to which the mechanisms are employed.

Of course, it is no big secret that the ultimate purpose of the mind is to facilitate survival, but because the brain manages information instead of muscle fibers, there are essentially an infinite number of hypotheses that might explain how it does it. The brain is a somewhat general-purpose information management platform that is biological instead of digital. But in a sense, the brain chemistry doesn’t matter because any mechanism that can manage information could get the job done; we could probably simulate minds on digital computers given more know-how (a contentious point I will address more later). What this means is that the primary problem we need to address to understand the mind is how the information is managed, and to do this we need to focus more on developing plausible hypotheses and algorithms than on analyzing brain structure. This is the part that falls outside natural science’s comfort zone, because the success of natural science has followed from hypotheses that are never more complex than ball-and-stick models and a few equations. But this approach doesn’t begin to address the complexities inherent to information management. What we need most are theories about the control of information, not theories about physical mechanisms.

How many programmers does it take to screw in a light bulb? None, it’s a hardware problem. How many natural (or social) scientists does it take to explain the mind? None, it’s a software problem. Of course, hardware and software are more interdependent than that, but I’m highlighting a systemic flaw in the structure of science: it can’t see the programmer’s perspective. Programs are a series of actions to accomplish desired tasks. Minds perform actions to accomplish desired tasks. From a functional standpoint, they are the same kind of thing. I will argue that this is the only standpoint that defines either programs or minds and that as their definition it both establishes and proves their existence. Their physical manifestation, be it on a computer or in a brain, is incidental (but not coincidental) to their functional existence because many similar physical mechanisms could perform the same function. Information, too, exists functionally but only secondarily physically. Information is patterns that help functional processes work, and the way it is represented is incidental to this objective. And what branch of science can even recognize functional existence? This is entirely the province of the formal sciences, chiefly theoretical computer science, which focuses on the algorithms and data structures that perform functions. The natural and social sciences are currently equipped to address functionality only in straightforward ways and not in the complex ways that programs and their data can create, and certainly not as a new form of existence. So we need to expand the scope of science to embrace functionality and the complexity that comes with it.
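The claim above — that programs and minds are defined by what they accomplish, with their physical manifestation incidental — is the familiar idea of multiple realizability, and it can be illustrated concretely. In this sketch (my own hypothetical example, in Python), two structurally different mechanisms compute the very same function; functionally, they are one thing:

```python
def insertion_sort(items):
    """One realization: shift each item into its place, one at a time."""
    result = []
    for x in items:
        i = len(result)
        while i > 0 and result[i - 1] > x:
            i -= 1
        result.insert(i, x)
    return result

def merge_sort(items):
    """A structurally different realization of the same function:
    split the list, sort the halves, and merge them."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [3, 1, 2]
# Different mechanisms, identical function:
print(insertion_sort(data) == merge_sort(data))  # True
```

From the outside, nothing distinguishes the two implementations; only the function they perform defines them, which is the sense in which a mind's physical substrate — neural or digital — is incidental but not coincidental to what it does.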

Having broadly established the need for change, I’m now going to look a little closer at what we know about the mind itself and how we know it. We have two sources of knowledge: introspection and investigation. In other words, personal and scientific. Our personal knowledge is what we gather from our own experience. This knowledge is enriched by a wealth of cultural knowledge about the mind ensconced in language, common knowledge and social institutions like school, work, and religion. However it comes to us, what we think about it is called introspection. Investigation, on the other hand, is the outward search for knowledge from more objective sources than personal whims. When it is practiced according to agreed-upon standards, it is called science. As I have said, the natural sciences study the brain but not the mind since it can’t be directly observed. Consequently, natural science purists think of the mind as a mysterious, emergent side effect of the brain that we will be able to explain away (i.e. by reductionism) as we learn more about the brain. This idea of “emergence,” or springing from nothing, is the way they acknowledge functional existence. As I will show later, it doesn’t spring from nothing, but it is a wholly other kind of existence that arises from physical feedback loops. Social scientists take the existence of minds as a starting point and study the patterns of behavior that result. They are thus wholly concerned with the study of function, but more with macroscopic effects than the algorithms and data that produce them. The formal sciences study tools that can bridge this gap but from a theoretical side and so don’t directly concern themselves with the methods of the brain or any physical system.

Nearly all our knowledge of our mind relates to using it, not understanding it. We are experts at using our minds. Our facility develops naturally and is helped along by nurture. Then we spend decades at schools to further develop our ability to use our mind. But despite all this attention on using it, we think little about what it is and how it works. Just as we don’t need to understand how any machine works to use it, we don’t need to know how our mind works to use it. And we can no more intuit how it works than we can intuit how a car or TV works. We consequently take it for granted and even develop a blindness about the subject because of its irrelevance. But it is the relevant subject here, so we have to overcome this innate bias. We can’t paint a picture of a scene we won’t look at. We can’t literally see it because it is a construct of information hiding in the brain, and we can’t mentally see it because we just use it without understanding how. And understanding the physical mechanisms of the brain won’t explain the mind any more than taking a TV apart can explain what’s on TV. The programming and structure of the mind derive primarily from the function it is trying to accomplish. The mind is a construct of that function, both shaped and defined by it. So the mind is not just what the brain does; it is why it does it. It is not about the way it physically accomplishes it; it is about what it is trying to accomplish. It is an abstract thing and home for all abstract things that exists to animate our bodies to keep them alive. This non-physical or functional kind of existence (which is a mentally functional existence in the case of functions of the mind) has as much claim to existence as a subject of conversation as physical things.
Actually, it has a better and prior claim to existence because our only direct knowledge of existence comes from the mind — I think therefore I am — and our knowledge of the physical world is secondary, being derived from observations we correlate using our minds.

Knowing that functional existence is real and being able to talk about it still doesn’t explain how it works. We take understanding to be axiomatic. We use words to explain it, but they are words defined in terms of each other without any underlying explanation. For example, to understand is to know the meaning of something, to know is to have information about, information is facts or what a representation conveys, facts are things that are known, convey is to make something known to someone, meaning is a worthwhile quality or purpose, purpose is a reason for doing something, reason is a cause for an event, and cause is to induce, give rise, bring about, or make happen — notions we understand from experience but which are not part of natural science, which says that things happen without causes as a consequence of particles hitting each other. To make progress explaining understanding itself, we will either have to stop using all the circular words that describe mental phenomena or define them better; otherwise everything we say will remain in a relativistic bubble. Science gives us a way to get past our subjective viewpoint with an objective toolkit. But it is not a toolkit that is well-equipped to address the mind itself, as natural science depends on physical observations, which we can’t make of the mind, and social science depends on behavioral observations, which don’t prove anything. So again it seems that expanding the scope of science to address functionality in a new way would help.

While science does not, in my opinion, attempt to study the mind head on, it does study it from a number of directions that I consider indirect. In the natural sciences, neuroscience focuses on the mechanical aspects of brain function. A few subfields of neuroscience merge into social science, like behavioral, cultural and social neuroscience. Social sciences all study the mind but from indirect perspectives. Anthropology, sociology, economics, law, politics, management, linguistics, and music study mental behavior and thus characterize the mind without trying to explain it. Psychology is the direct study of the mind and approaches the subject from a variety of subdisciplines, including neuropsychology, behavioral psychology, evolutionary psychology, cognitive psychology, psychoanalysis, and humanistic psychology. They each draw on a different objective source of information. Neuropsychology studies the brain for effects on behavior and cognition. Behavioral psychology studies behavior. Evolutionary psychology studies the impact of evolution. Cognitive psychology studies mental processes like perception, attention, reasoning, thinking, problem-solving, memory, learning, language, and emotion. Psychoanalysis studies experience (but with a medical goal). Humanistic psychology studies uniquely human issues, such as free will, personal growth, self-actualization, self-identity, death, aloneness, freedom, and meaning. Finally, the third branch of science, formal science, intersects with some capacities of mind. Mathematics addresses quantities and rules, computer science deals with algorithms, and formal systems can describe language and potentially other kinds of mental abstractions. I noted before that cognitive science is designated to be the field that directly studies the mind, but I have felt underwhelmed by its approach, which has mostly been to seek collaboration between the sciences rather than to formulate and complete a prevailing paradigm.

This brings me to the one social science I need to address separately: philosophy. Broadly, philosophy is the theoretical basis of a branch of knowledge, so every field needs its own philosophy, its own paradigm. Practiced as an independent field, general philosophy studies fundamental questions, such as the nature of knowledge, reality, and existence. From my perspective, general philosophy has failed as a field because it lacks focus, a way of assessing the value of any approach. Lacking such firm support, every perspective floats on quicksand, only able to appeal to an audience willing to buy into tacit assumptions that have no real support themselves. For example, universality is the notion that universal facts can be discovered and is therefore understood as being in opposition to relativism. Universality and relativism assume the concepts of facts, discovery, understanding, and perception, but these assumptions are undefined. A philosophy that builds on assumptions we share from language and culture doesn’t really say anything. This fault extends to all philosophies because they all fall back on unsupported assumptions we make about and with our minds. I contend that science provides the firmest kind of support we have found so far: objectivity. I will undertake a lengthy inquiry into the nature of objectivity later on, but for now we can think of it as precisely avoiding the kind of assumptions from our own minds to which I have been objecting. So we need to be objective in our approach to philosophy and to focus in particular on the philosophies we will need to study science and the mind.

As fields of study, philosophy of science and philosophy of mind have suffered significantly from a lack of objective focus, and consequently they exist more as surveys of perspectives with no consensus as to which is best. I believe we can cut through all that red tape and get to the philosophy we need by reasoning it out objectively from scratch using personal and scientific knowledge we confidently hold. But a brief summary of the fields is a good starting point to provide some orientation. Science was a well-established practice long before efforts were made to describe its philosophy. Auguste Comte proposed in 1848 that science proceeds through three stages, the theological, the metaphysical, and the positive. The theological stage is prescientific and cites supernatural causes. In the metaphysical stage, people use reason to derive abstract but natural forces such as gravity or nature. Finally, in the positive or scientific stage, we abandon the search for absolutes and embrace an ever-progressing refinement of facts based on empirical observations. So every theory must be guided by observed facts, which in turn can only be observed under the guidance of some theory. Thus arises the hypothesis-testing loop of the scientific method and the widely accepted view that science continually refines our knowledge of nature. Though this remains the prevailing paradigm used by practicing scientists, not everyone knows that this philosophy progressively collapsed in the 20th century, leaving science without a firm foundation today. I consider it a huge problem that needs to be repaired, but to fix it we will need to develop an expanded objective framework to support science.

In short, here is how the philosophy of science was undermined. Comte’s third stage developed further in the 1920s into logical positivism, the theory that only knowledge verified empirically (by observation) was meaningful. More specifically, logical positivism held that the meaning of logically defined symbols could mirror or capture, with mathematical precision, the lawful relationship between an effect and its cause2. Every term or symbol in a theory must correspond to an observed phenomenon, which then provides a rigorous way to describe nature mathematically. It was a bold assertion because it claimed that science derives the actual laws of nature, even though we know any given evidence can be used to support any number of theories, even if the simplest (i.e. by Occam’s razor) seems more compelling. In the middle of the 20th century, cracks began to appear in logical positivism as the sense of certainty promised by modernism began to be replaced by a postmodern feeling of uncertainty and continuous change. In the sciences, Thomas Kuhn published The Structure of Scientific Revolutions in 1962, which is remembered popularly for introducing the idea of paradigm shifts (though Kuhn did not coin that phrase specifically). Though Kuhn’s goal was to help science by unmasking the forces behind scientific revolutions, he inadvertently opened a door he couldn’t shut, forever ending dreams of absolutism and a complete understanding of nature and replacing them with a relativism in which potentially all truth is socially constructed. In the 1990s, postmodernists claimed all of science was a social construction in the so-called science wars. Because this seems to be true in many ways, science formally lost this battle against relativism and has yet to mount a response. And yet scientists today still consider conclusions made under prevailing paradigms to be effectively true.
They know that paradigms and truths will shift over time, but what matters seems to be that today’s truths are useful as far as they go.

The philosophy of mind as a discipline is mostly studied as an array of topics rather than as a problem in need of a unified approach. The principal topics are existence (the mind-body problem), theories of mental phenomena, consciousness/qualia/self/will, and thoughts/concepts/meaning. Because the field of philosophy of mind has so many contrasting viewpoints, it is hard to separate the helpful from the harmful. Rather than surveying the field, which would give airtime to any number of invalid viewpoints, I plan to approach the subject from a well-founded perspective in the first place so as to hit on the correct views. But before I get to that, I will reveal in brief (much more detail later!) just what I consider the correct views to be on some of these topics:

I endorse physicalism (i.e. minimal or supervenience physicalism), which is the position that everything we think of as mental has a physical basis, or, as philosophers say, the mental supervenes on the physical. This means that if we could physically duplicate the world, it would also be duplicated mentally and socially. Note that because quantum events in such duplicates would begin diverging immediately, they would not remain the same for long, but if somehow quantum events also stayed in sync then the mental would as well. This stance is also called physical monism: only physical things exist. Note that this position rejects the idea of an immortal soul and Descartes’ substance dualism in which mind and body are distinct substances.

I endorse non-reductive physicalism, which means I think there are non-physical aspects of mental phenomena. There is nothing mystical about these aspects; it is just that they are relational, like math or logic, and not tied to physical particulars. A given thought is a physical particular in a brain, but what it is about, its indirect meaning, is not. Thus, “three” and “above” are not physical particulars. This is exactly the point where the subject starts to become confusing because discussions on the subject don’t characterize mental things in a meaningful way. They might talk about mental properties or types or tokens or states, without defining these terms particularly well, in an apparent attempt to keep the discussion abstract, while at the same time they aim to connect the mental to the physical. But physical things can’t be abstractions because they are particles, or waves, or something concrete in spacetime. The key notion is that while mental things have a direct, physical nature, they usually also have an indirect, referential nature, what they are “about,” which has no physical description at all. I call the idea that something non-physical about the mental exists form & function dualism, where physical equates to form and mental equates to function. It is not exactly inconsistent with physical monism because everything mental is simultaneously physical, but it does point out that physicalism overlooks functional existence, which is clearly a mistake because “three” and “above” must have some kind of existence since we can talk about them.

I endorse functionalism, which is the theory of mental phenomena that says that mental things are identified by what they do rather than what they are made of. Put another way, mental things are about something rather than being something. This is why I call the dualism of physical and mental form & function dualism, because physical things are all form and mental things are all function.

I endorse the idea that consciousness is a subprocess of the brain designed to create a subjective theater from which the brain can efficiently exercise centralized control of the body. All the familiar aspects of consciousness such as qualia, self, and the will are just states managed by this subprocess.

Finally, I endorse the idea that thoughts, concepts, and meaning are information management techniques that have both conscious and subconscious aspects, where subconscious refers to subprocesses of the brain that are supportive of consciousness, which is the most supervisory subprocess.

While I have thus revealed much about where I am going, I have not yet revealed how I got there or why a properly unified philosophy of science and mind implies these things.

Deriving an Appropriate Scientific Perspective for Studying the Mind

[Brief summary of this post]

I have made the case for developing a unified and expanded scientific framework that can cleanly address both mental and physical phenomena. I am going to focus first on deriving an appropriate scientific perspective for studying the mind, from which I will extrapolate to science at large. I will follow these five steps:

1. The common-knowledge perspective of how the mind works
2. Form & Function Dualism: things and ideas exist
3. Pragmatism: to know is to predict
4. How does knowledge become objective and free of preferences, biases, and fallacies?
5. Orienting science (esp. cognitive science) with form & function dualism and pragmatism

1. The common-knowledge perspective of how the mind works

Before we get all sciency it makes sense to think about what we know about the mind from common knowledge, which is the subset of our shared knowledge with which we are most likely to agree. Common knowledge has much of the reliability of science in practice, so we should not discount its value. Much of it is uncontroversial and does not depend on explanatory theories or schools of thought, including our knowledge of language and many basic aspects of our existence. So what do we collectively think about how the mind works? This brief summary is just to characterize the subject and is not intended to be exhaustive. While some of the things I will assume from common knowledge are perhaps debatable, my larger argument does not hinge on common knowledge.

First and foremost, having a mind means being conscious, which is a first person (subjective) awareness of our surroundings through our senses and an ability to direct our interactions with the world using our body. We know our minds travel with our bodies and we feel that our minds are in our heads because of our eyes and ears and because feeling well in the head correlates to thinking well. We feel implicit trust in our sensory connection to the world, but we also know that our senses can fool us, so we’re always looking again and reassessing. Our sensations, more formally called qualia, are subjective mental states like redness, warmth, and roughness, or emotions like anger, fear, and happiness. Qualia have a persistent feel that occurs in direct response to triggering stimuli. Beyond direct sensing, we can simulate sensing using imagination and thought, but the feel of the qualia is diminished. We can also think about abstract groupings of things or ideas, which we can direct with or without the aid of language (though language provides our strongest connection to many abstract ideas). Thoughts lack the persistent feel of qualia but instead stimulate other, related thoughts. We can think about all the above ideas as abstractions, and also about our bodies and minds (the self). We have an ability called reasoning which lets us think logically about abstractions. We can preserve our interactions with the world and our thoughts in our memory as experience. We develop preferences and develop strategies to satisfy them by learning from feedback. We have free will to direct our thoughts, but only at the top (conscious) level. Thoughts I will call subconscious (because we are not aware of them unless we turn our attention to them) control learned, habitual behaviors. We perform them on “autopilot,” essentially delegating that they should happen with a little conscious oversight but without constant conscious handholding.
We can gain conscious access to some subconscious thought processes, but others, like how we remember or recognize things or words just work like magic to us. We have a subconscious gift for juggling many possibilities or possible worlds given just the slightest conscious effort. Some possibilities correspond well to the physical world and others are more imaginary. We think of those that correspond particularly well as being true, which relates mostly to their ability to successfully predict what will happen.

While we build much of our concept of the outside world on the “blank slate” of our minds, which are innately predisposed to make sense of it in human-oriented ways, culture contributes a huge pool of common knowledge. This includes what I am saying here, ideas embedded in words (though language is mostly an innate capacity), and many schools of thought from customs to explanatory theories to science. We gather much of this cultural knowledge from nonverbal interactions with people and artifacts, but most of it comes through language, either conversationally or from books and other media. Our vocabulary neatly divides into physical words about things and mental words about ideas, and never the twain shall meet (though some words can go both ways, like “up” for upward or happy). Physical words describe things and events, covering everything that unfolds in spacetime. All physical words have a kind of mental component to them because we group things and events in the physical world using generalizations that are mental constructions. So how we divide the parts of an object or the steps of a process has a subjective angle to it, but there is no doubt these words refer to the physical world. Mental words cover sensations (see, hear, hunger, feeling, etc.), emotions (anger, fear, feeling, etc.), and thinking (know, believe, like, design, etc.), none of which have physical counterparts. The mental and physical worlds can follow disjoint paths, with mental events developing independently of physical and physical independently of mental, but they can also bring about changes in each other. Our sensations are directly affected by physical changes, our emotions are indirectly affected by our interpretations of physical changes (with many emotional changes under subconscious and hence involuntary control), and our thoughts are also indirectly affected as we have good reason to focus many of our thoughts on our physical circumstances.
Our bodies perform actions that cause change to the physical world, and those actions are directed by our mental states. We join the physical and mental worlds through causality; we speak as if events in one world can simply cause events in the other, even though causation itself is a macroscopic generalization with no hard physical definition.

The current state of common knowledge is constantly evolving, and in a society that embraces objectivity, as ours has been doing increasingly for centuries, that change brings rapid progress in the depth and scope of that knowledge. Put approximately, the average educated person of today knows more about psychology (say) than the experts knew a generation ago. Put more accurately, while none of us is fully up to date in all fields, as our collective knowledge across all disciplines grows, our common knowledge (the most broadly shared parts) grows as well. This has led to a deeper shared familiarity with all fields and also to general truths that connect the fields together, many of which are not articulated formally through science. For example, we have a greater appreciation for the interconnectedness of life and both the resilience and fragility of biological systems. As an example regarding the mind specifically, we think more about what shapes our thought processes than people did a few generations back. So instead of just thinking that decisions have a “trust your gut” indivisibility, today we are aware of the impact of biases, data quality, and how to use decision trees and other algorithmic approaches. Beyond awareness, we have internalized political correctness (a term of dubious provenance) to the point where we automatically apply anti-biasing in our thinking and social interactions. We are less innocent and more mature, starting at an ever-earlier age. We know so much more than our forebears that we can’t easily conceive just how much weaker their worldview was. I’m not saying we are smarter or more capable; modern conveniences have cost us many of the skills of self-sufficiency and much of the patience for mastery. I’m just saying we carry the burden of more comprehensive knowledge.
For this discussion, what matters is that we know more than we think we know: a clear, scientific view of the world supported by common knowledge is right before us, and our bullshit meters are good enough that we can throw out theories that have overstayed their welcome.

2. Form & Function Dualism: things and ideas exist

We can’t study anything without a subject to study. What we need first is an ontology, a doctrine about what kinds of things exist. We are all familiar with the notion of physical existence, and so to the extent we are referring to things in time and space that can be seen and measured we share the well-known physicalist ontology. Physicalism is an ontological monism, which means it says just one kind of thing exists, namely physical things. But expert opinion is divided as to whether physicalism is a sufficient ontology to explain the mind. The die-hard natural scientists insist it is and must be, and that anything else is new age nonsense. I am sympathetic to that view as mysticism is not explanatory and consequently has no place in discussions about explanations. We can certainly agree from common knowledge that there is a physical aspect, being the body of each person and the world around us. But knowing that seems to give us little ability to explain our subjective experience, which is so much more complex than the observed physical properties of the brain would seem to suggest.

Alternatively, we are also familiar with the notion of mentally functional existence, as in Descartes’ “I think therefore I am.” We experience states of mind, feeling and thinking things, and this kind of existence does at least seem to us to be distinct from physical existence because thoughts have no length, width, or mass. Idealism is another monistic ontology, but it asserts that reality is entirely mental, and what we think of as physical things are really just mental representations. In other words, we dream up reality any way we like. Science, and our own experience, have provided overwhelming evidence of the persistent existence of physical reality, which has put idealism in a rather untenable position. But if one were to join the two together, one could imagine a dualism between mind and matter in which both the mental and physical exist without either being reducible to the other. All religions have seized on this idea, stipulating a soul (or equivalent) that is quite distinct from the body. And Descartes promoted it, but where he got into trouble was in trying to explain the mechanism: he supposed the brain had a special mental substance that did the thinking, a substance that could in principle be separated from the body. Descartes imagined the two substances somehow interacted in the pineal gland. But no such substance was ever found and the pineal gland is not the seat of the soul. Natural science gives us no evidence, reason, or inclination to suppose that anything in our brains transcends the rules of spacetime, so we need to try to figure out the mind within that constraint. While Descartes’ substance dualism doesn’t hold up, another form called property dualism has been proposed. Property dualism tries to separate the two by asserting that mental states are non-physical properties of physical substances.
This misses the mark as well because it suggests a direct relationship between the mental state and the physical substance, and as we will see it is precisely the point that the relationship is not direct. A third variety of dualism called predicate dualism proposes that intentional predicates like belief, desire, thought, and feeling can’t be reduced to physical explanations. This comes a bit closer to the truth, because these predicates are certainly only indirectly physical, but they are also high-level human skills and not a fundamental kind of existence in their own right. We just need to break them down a bit more to establish a proper basis for dualism.

Let’s consider why any purely physical explanation will come up short. There are aspects of our thoughts that cannot be reduced to the physical because they have no extent in space or time; they are generalizations. “Three” and “red” and “up” are not physical. As Sean Carroll writes,1 “Does baseball exist? It’s nowhere to be found in the Standard Model of particle physics. But any definition of “exist” that can’t find room for baseball seems overly narrow to me.” Me too. All our mental words for sensations, emotions, and thoughts have no physical counterpart. Although we can say that any given use of a word or thought at a point in time could be correlated to something about the physical state of the body (and probably the brain if we want to get anatomical), that can never be enough to define them, because their definition is not a function of physical form but of logical relation. It is exactly by focusing on form and not function that the physical sciences miss the forest for the trees and remain completely unable to see the mind at all. Form and function are differences of perspective that not only explain things in different ways but which cannot be reduced to each other, at least not while retaining a useful explanation. When we start to contemplate function we have actually opened the door to a different brand of existence, that of mental things. They have a kind of physical existence in the brain yet they can’t be understood physically. So what are they then?

The mental world is a product of information. Information, perhaps itself the simplest of mental words, is the patterns hiding in data, the wheat separated from the chaff. The value of these patterns is that they can be used to predict the future. Brains use a wide variety of customized algorithms, some innate and some learned, that find information and use it to turn events to their advantage. Information is not a physical thing because its value is not physical, it is functional. Yes, it is physically encoded as a state in a computer (biological or manmade), or written down, but a physical representation of information isn’t noteworthy in isolation. It is that the information can be put to use that matters, and minds and computers are configured so as to be able to apply the information itself without much regard for the form it takes. What technical trick allows information to behave in this transphysical way? The trick is indirection, known more commonly as representation. The brain can take a stream of data and distill information from it to represent something else, the referent. “Ceci n’est pas une pipe,” as Magritte said. 2
A thought about something is not the thing itself. The phenomenon is not the noumenon, as Kant would have put it: the thing-as-sensed is not the thing-in-itself. If it is not the thing itself, what is it? Its whole existence is wrapped up in its potential to predict the future; that is it. However, to us, as mental beings, it is very hard to distinguish phenomena from noumena, because we can’t know the noumena directly. Knowledge is only about representations, and isn’t and can’t be the physical things themselves. The only physical world the mind knows is actually a mental model of the physical world. So while Magritte’s picture of a pipe is not a pipe, the image in our minds of an actual pipe is not a pipe either: both are representations. And what they represent is a pipe you can smoke. What this critically tells us is that we don’t care about the pipe, we only care about what the pipe can do for us, i.e. what we can predict about it. Our knowledge was never about the noumenon of the pipe; it was only about the phenomena that the pipe could enter into. In other words, knowledge is about connecting things together, not the things themselves, which are in fact defined only in terms of their connections.
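This indirection can be sketched in code. The point is only that a representation’s value lies in what it lets us predict about its referent, not in its physical encoding; the `model` table and `predict` function below are hypothetical illustrations, not anything from the text:

```python
# A toy "mental model": representations of referents, not the referents
# themselves. "Knowing" a pipe means predicting what it can enter into.
model = {
    "pipe": {"smokable": True, "flammable": True},
    "rock": {"smokable": False, "flammable": False},
}

def predict(token, question):
    """Answer a question about the referent using only its representation."""
    return model[token][question]

# Two physically different forms (an ASCII string and UTF-16 bytes) carry
# the same information; the prediction depends only on what is represented.
form_a = "pipe"
form_b = "pipe".encode("utf-16")

print(predict(form_a, "smokable"))                   # True
print(predict(form_b.decode("utf-16"), "smokable"))  # True
```

The picture of a pipe, the word “pipe,” and the bytes above are all different forms; what they share is the function of predicting that the referent can be smoked.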

I argue that these two perspectives, physical and mental, are not just different ways of viewing a subject, but define different kinds of existences, different kinds of subjects. Physical things have form, e.g. in spacetime, or in any dimensional state in which they have an extent. Mental things have function. They have no extent, and they can be characterized entirely by their relationships to one another. They include ideas, but they also include mathematics. They are the rules that comprise any formal system. They are also the information comprising a statistical system, which is a degenerate kind of formal system with a lot of data and not much logic. We recognize things subconsciously using statistical matches against lots of stored information (where by statistical I mean finding correlating support from a volume of data as opposed to drawing logical conclusions). Through representation the mental can be used to model the physical, which is to say that it can predict, i.e. “know,” certain things about the physical world. So function is the “substance” of mental things.

On this view, the mind still runs in a brain using entirely physical processes. However, because physical processes can execute indirection, the brain can create abstractions that are not physical themselves which we call concepts, ideas, perspectives, or mental models, and these collectively constitute minds. So an analysis of the physical form won’t explain the function, which has been abstracted away from the physical using physical mechanisms that support indirection. And we do have to keep in mind that scientific theories themselves, and the whole idea of understanding, “exist” in this realm of indirection, where potential physical referents are referred to via generalities with no physical existence themselves. It is an existence worth talking about because mathematical objects and logical concepts, built and manipulated this way through the imagination, support all the mental accomplishments of man.

The two fundamental kinds of existence are therefore form and function. As the distinction of form is implied in the terms property dualism and predicate dualism, I will just call this new ontology form & function dualism. It is a recast of predicate dualism that changes the emphasis from the data itself (predicate) to what the data can do (function), i.e. from intentional states to information, whose existence derives from its value in prediction and not in the data itself. While the brain and all the processes within it are entirely physical, the function of the brain is something else, an alternative kind of existence (i.e. the mind) that in no way undermines the physical nature of the brain, but which does empower the brain to do things it could not otherwise do. Form & function dualism merges physicalism with idealism, eliminating the “only” requirement from each: form and function don’t reduce to each other, they coexist. Function exists regardless of whether it is ever implemented because function is abstract and its existence doesn’t depend on whether a physical brain or computer thinks of it. However, as physical creatures, our access to function and the ideal realm is strictly limited by the physical mechanisms used to implement abstraction in our brains. We can, in principle, build better computers and minds that can do more, but they will always be physically constrained, and so we will always have limited access to the infinite ideal domain.

Physicalism is not exactly wrong on its own: a purely physical explanation does exist for every physical phenomenon. However, even the concept of a phenomenon or its explanation comes from the ideal world, as does our own existence as mental beings, so physicalism without idealism won’t get us very far. And besides that, one can’t physically explain animal behavior or human civilization; one must resort to functional explanations. Idealism, too, is not exactly wrong on its own: idealizations exist regardless of whether the physical world does. But for people, at least, the connection back to the physical world matters. We can’t conceive of a purpose for a purely ideal existence independent of our physical existence. Even if we someday live simulated lives in virtual reality, and leave it to others to run the computers and pay the electric bill, those lives will simulate physical lives. I guess it comes down to having something to do. For people this means using our minds to control things, which is to say using our predictive abilities in an environment that challenges them. It is the purpose of our minds, and all potential minds, to manage information for functional purposes, even imaginary mathematical minds with no concept of physical existence.

The rise of physicalism has made us blind to idealism, even though science is built on ideas. The most physical of sciences — physics and chemistry — aim to be purely physical, and yet they invoke laws that are pure function with no form. Laws are predictive of natural phenomena but are not physical themselves, yet we seem unconcerned that physicalism has no place for them. We aren’t concerned about this gap so long as the subject doesn’t come up. The phenomena described by physics and chemistry have no functional aspect themselves, so our laws don’t overlap with what they describe. But everything beyond physics and chemistry is biological, and biological systems manage information and hence have an ideal or functional aspect which physicalism alone can’t explain. So we have to embrace form & function dualism to discuss biological systems; we need to consider what purpose genetic traits serve. And it is not just genetic; life encodes information in three reservoirs: genes (DNA), memory (neurons), and culture (artifacts, both physical and institutional). Memory is impossible without genes, and culture is impossible without genes and memory. Through feedback our genes and memory are also shaped over time by memory and culture, so the three influence each other in some ways. Of course, science has addressed these things, hence the biological and social sciences. But under what ontology? I propose the denigration of dualism has left them ontologically limited: they speak to biological or social systems without asking too many questions about what kinds of existence they are addressing. But this reticence becomes crippling as soon as we try to understand the mind itself. At every turn we face questions of both form and function, and so long as we overlook these ontological shortcomings we won’t be able to build an overall theory.

So the love-hate relationship that philosophy and science have had with dualism was just a misunderstanding. Physicalism and idealism have never been at odds with each other. Information processing is possible in the physical world, but the information processed is not physical and can’t be understood in physical terms. The brain is physical but the mind is mental. Unless and until one looks to the functional side of the mind one can’t explain it, which is to say one can’t predict how it will behave. So to move into biology or social science one must expand one’s ontology to include information, the functions it performs, and the ways it is processed. Form & function dualism gives cognitive science the clear ontological basis it needs to unify the sciences. Form is noumenal but known to us through observation (phenomena), and function is purely the generalization of behavior and properties of phenomena with the goal of separating predictive information from noise.

3. Pragmatism: to know is to predict

Given that we agree to break entities down into things and ideas, physical and mental, we next need to consider what we can know about them, and what it even means to know something. A theory about the nature of knowledge is called an epistemology. I described the mental world as being the product of information, which is patterns that can be used to predict the future. What if we propose that knowledge and information are the same thing? Charles Sanders Peirce called this epistemology pragmatism, the idea that knowledge consists of access to patterns that help predict the future for practical uses. As he put it, pragmatism is the idea that our conception of the practical effects of the objects of our conception constitutes our whole conception of them. So “practical” here doesn’t mean useful; it means usable for prediction, e.g. for statistical or logical entailment. Practical effects are the function as opposed to the form. It is just another way of saying that information or knowledge differs from noise to the extent that it can be used, i.e. for prediction. The ability to assist in prediction is nothing like the certainty of mathematics. It can be helpful but can never prove anything to be true.
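The pragmatist criterion, that knowledge differs from noise exactly to the extent it can be used for prediction, can be made concrete with a toy sketch. The streams and the extracted rule here are purely illustrative assumptions, not anything from the text:

```python
import random

random.seed(0)  # deterministic "noise" for the illustration

# A patterned stream (strict alternation) and a pure-noise stream.
patterned = [i % 2 for i in range(1000)]
noise = [random.randint(0, 1) for _ in range(1000)]

def accuracy(stream, rule):
    """Fraction of elements the rule predicts correctly from the index."""
    return sum(rule(i) == x for i, x in enumerate(stream)) / len(stream)

rule = lambda i: i % 2  # the pattern extracted from the data

print(accuracy(patterned, rule))   # 1.0: the rule carries information
print(accuracy(noise, rule))       # about 0.5: no better than chance
```

The same rule counts as knowledge relative to the first stream and as nothing at all relative to the second; its status is defined entirely by its predictive use.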

Pragmatism gets a bad rap because it carries the heavy connotation of compromise. The pragmatist has given up on theory and has “settled” for the “merely” practical. But the goal of theory all along was to explain what really happens. It is not the burden of life to live up to theory, but of theory to live up to life. When an accepted scientific theory doesn’t exactly match experimental evidence, it is because the experimental conditions fail to live up to the ideal model. After all, the real world is full of impurities and imperfections that throw off simple equations. Presumably, if one took all the actual circumstances into account and had an appropriate way to model them theoretically then the theory would accurately predict the future. But it is often not practical to take all circumstances into account, and in these situations one can either settle for the inexact or inappropriate results of theory or employ other methods. The pragmatist, whose goal is to achieve the best prediction possible, will combine all available approaches to do it. This doesn’t mean giving up on theory; on the contrary, a pragmatist will prefer well-proven theory to the limit of practicality, but will then supplement it with pre-theoretic analysis of the relevant data using innate algorithms and/or learned reasoning skills, possibly leading to ad hoc theories or new formal theories. One might say one falls back on hunches and half-baked theories to fill in the gaps. It sounds haphazard, but our subconscious minds are great at these kinds of analyses, and we could not build our formal understandings without a rich network of this kind of informal support. “Pragmatic” often carries the connotation of favoring practical over theoretical because it generally goes unspoken that pragmatism applies where theory is unavailable. Note that, ironically, much of my task of explaining the mind is to develop a formal theory that will describe how we use pre-theoretic methods to think.
I’m getting ahead of myself here because I haven’t formally described what a formal theory even is, but the common meaning is clear. But what I have said so far about the mind is that it manages information pragmatically, so it stands to reason that our approach for studying the mind should also start with pragmatism and refine from there to formal theory only as it proves appropriate.

Thinking is an effort to find information hiding in data. When a pattern emerges, we will call it a hunch, opinion, or theory if it seems uncertain, or knowledge, truth, or a fact if it does seem certain. Our primary basis for this categorical distinction is whether we have processed the pattern statistically or logically. Statistical patterns are supported purely by a predominance of the data. They can’t be certainties but do provide a statistical advantage over noise (data without pattern). Logical patterns are necessarily true because they follow from the rules of a formal system, aka a model. A model defines terms, how they are related, and rules of entailment from which one can connect causes to effects. If defined clearly enough, everything within the system is known and so is factual. Within the system, everything that is true is necessarily true; there are no doubts. Mathematics and physics define such logical systems. The standard rules of arithmetic and the Standard Model of particle physics are examples, and within these systems, one can know things with certainty. From the infinity of possible formal systems we choose systems that we find most helpful, often using Occam’s razor to pick the simplest (“Among competing hypotheses, the one with the fewest assumptions should be selected”). Does thinking really break down into either statistical or logical pattern analysis? Yes, it really does, but there is a lot of overlap between them, and the kind of logic we do in our minds is much more fluid than that of scientific formal systems. I will be discussing these two approaches in much more detail further on.
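The contrast between these two routes to prediction can be sketched in a few lines of code. This is my own toy illustration, not a model of how minds implement either technique: the first function predicts by the predominance of past data (a statistical advantage, never a certainty), while the second predicts by entailment inside a formal system (arithmetic), where the answer is necessarily true.

```python
def statistical_predict(history):
    """Predict the next symbol by the predominance of past data.

    The result is a best guess backed by frequency, never a certainty.
    """
    counts = {}
    for item in history:
        counts[item] = counts.get(item, 0) + 1
    return max(counts, key=counts.get)

def logical_predict(a, b):
    """Within the formal system of arithmetic, entailment is certain:
    a + b has exactly one answer, with no doubts."""
    return a + b

# Statistical: "rain" is merely the likeliest continuation of the pattern.
assert statistical_predict(["rain", "rain", "sun", "rain"]) == "rain"
# Logical: 2 + 3 = 5 is necessarily true within the model.
assert logical_predict(2, 3) == 5
```

Note the asymmetry: more data can strengthen or overturn the statistical prediction, while the logical one is fixed by the rules of the system regardless of experience.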

Because pragmatism is entirely reflective of the function of the mind, being the perspective of the practical effects of our ideas, it is a comprehensive epistemology for the mental world. Since our understanding of the physical world, i.e. physicalism, is also knowledge based on information, pragmatism must also be a comprehensive epistemology for the physical world. In other words, having a non-pragmatic knowledge of physical things would be nonsensical. But contrast it with rationalism, which holds that reason (i.e. logic) is the chief source of knowledge. Rationalism captures the logical side of knowledge but ignores knowledge gathered from statistical processes. So rationalism is a proper subset of pragmatism and is necessarily weaker. Also note that since statistical knowledge is vital to the mind, rationalism is overly glorified as the ultimate epistemology when it only solves half the problem. Also contrast pragmatism with empiricism, which holds that knowledge comes only or primarily from sensory experience, that is, what can be supported by experimental evidence. Empiricism has become the de facto epistemology of science because our ability to make physical predictions necessarily hinges on relevant physical evidence. The scope of empiricism includes all data we acquire about the physical world, whether in statistical or logical form, and does not preclude either statistical or rational (logical) use of that data. But while it has put science on a firm footing and has stood the test of time, empiricism is an inadequate epistemology because it ignores the existence of mental things. One can’t gather physical evidence of mental phenomena; one can only gather implementation details (e.g. neural wiring, equivalent to code and memory dumps). The physical evidence can reveal the form but never the function. This is enough to study systems that don’t leverage information (e.g. pre-biological physics and chemistry) but leaves us out in the cold in the study of systems that do manage information. Pragmatism is the epistemology that will get us there. So empiricism is a proper subset of pragmatism for dealing with observable phenomena, but again solves only half the problem. In practice, what does pragmatism bring to the table that rationalism and empiricism don’t? When we look at genes, memory, and culture, we learn nothing about the function from studying the physical structure, so we need to go beyond the evidence to consider the purpose. That a gene makes a bone a certain length or strength doesn’t matter; what matters is that these traits have a time-tested general value for survival. This is information you could never guess from physical considerations alone, even though it is encoded in the system using physical mechanisms in DNA and proteins. And we can’t limit our reasoning to provable logical consequences because we can never collect all the factors that caused them to develop the way they did and fit them into a strict formal model, so we need statistical reasoning.

4. How does knowledge become objective and free of preferences, biases, and fallacies?

Knowledge carries the expectation of a measure of certainty. Objectivity is a way of elevating knowledge to a higher standard, a way of increasing the certainty if you will, by removing any dependence on opinion or personal feelings, i.e. on subjectivity. Science tries to achieve objectivity by measuring with instruments, independently checking results, and using peer review. Statistical knowledge is never certain and is only as good as an observed correlation. But in principle, logical knowledge is always certain because a model’s workings are fully known. But the rub is that we don’t know what models run the physical world, or even if the world actually follows rules. All we know is that it seems to; all evidence points to an exact and consistent mechanism. More importantly, the models we have developed to explain it are very reliable. For example, the laws of motion, thermodynamics, gravity, and conservation of mass, energy, and momentum always seem to work for the systems they describe. But that doesn’t mean they are right; any number of other laws, perhaps more complex, would also work, and the probabilistic nature of quantum mechanics has made it pretty clear that the true mechanics are neither simple nor obvious. So logical knowledge can be certain, but how well it corresponds to nature will always be a source of doubt that can be summarized by statistical knowledge.

Logical models also get fuzzy at the edges because if you zoom in you find that the pieces that comprise them are not physically identical. No two apples are totally alike, nor are any two gun parts, though Eli Whitney’s success with interchangeable parts has led us to think of them as being so. They are close enough that models can treat them as interchangeable. Models sometimes fail because the boundaries between pieces become unclear as imperfections mount. Is a blemished or rotten apple still an apple? Is a gun part still a gun part if it doesn’t fit every time? At a small enough scale, our Standard Model of particle physics proposes that all subatomic particles slot into specific particle types (e.g. quarks, leptons, bosons), and that any two particles of the same type are completely identical except for occupying a different location in spacetime3. And maybe they are identical. But location in spacetime is a big wrinkle; the insolubility of the three-body problem suggests that a predictive model of how a large group of particles will behave is probably too complex to run even if we could devise one. So both at large scales and small, all models approximate, and in so doing they always pick up a degree of uncertainty. But in many situations this uncertainty is small, often vanishingly small, which allows us to build guns and many other things that work very reliably under normal operating conditions.

Our mastery over some areas of science does not grant us full objectivity. Subjectivity still puts the quality of knowledge at risk. This is due to preferences, biases, and fallacies, which are both tools and traps of the mind. Preferences are innate prioritization schemes that make us aim for some objectives over others. Without preferences, we would have no reason to do anything, so they are indispensable, but they can lead us to value wishes ahead of a realistic path to attain them (e.g. Gore’s “An Inconvenient Truth”). Biases are rules of thumb that often help but sometimes hurt. Examples include favoring existing beliefs (confirmation bias), favoring first impressions (anchoring), and reputational favoritism (halo effect). They are subconscious consequences of generalization whose risks can be damped through conscious awareness of them. Fallacies are mistakes of thinking that always compromise information quality, whether due to irrelevance, ambiguity, or false presumption. Biases and fallacies give us excuses to promote our preferences over those of others or over our lesser preferences.

How can we mitigate subjectivity and increase objectivity? More observations from more people help, preferably with instruments, which are much more accurate and bias-free than senses. This addresses evidence collection, but it is not so easy to increase objectivity over strategizing and decision-making. These are functional tasks, not matters of form, and so are fundamentally outside the physical realm and so not subject to observation. Luckily, formal systems follow internal rules and not subjective whims, so to the degree we use logic we retain our objectivity. But this can only get us so far because we still have to agree on the models we are going to use in advance, and our preference for one model over another ultimately has subjective aspects. To the degree we use statistical reasoning we can improve our objectivity by using computers rather than innate or learned skills. Statistical algorithms exist that are quite immune to preference, bias, and fallacy (though again, deciding what algorithm to use involves some subjectivity). But we can’t yet program a computer to do logical reasoning on a par with humans. So we need to examine how we reason in order to find ways to be more objective about it so we can be objective when we start to study it. It’s a catch-22: we have to understand the mind before we can figure out how to understand it. If we rush in without establishing a basis for objectivity, then everything we do will be a matter of opinion. While there is no perfect formal escape from this problem, we informally overcome this bootstrapping problem with every thought through the power of assumption. An assumption, logically called a proposition, is an unsupported statement which, if taken to be true, can support other statements. All models are built using assumptions. While the model will ultimately only work if the assumptions are true, we can build the model and start to use it on the hope that the assumptions will hold up.
So can I use a model of how the mind works built on the assumption that I was being objective to then establish the objectivity I need to build the model? Yes. The approach is a bit circular, but that isn’t the whole story. Bootstrapping is superficially impossible, but in practice is just a way of building up a more complicated process through a series of simpler processes: “at each stage a smaller, simpler program loads and then executes the larger, more complicated program of the next stage”. In our case, we need to use our minds to figure out our minds, which means we need to start with some broad generalizations about what we are doing and then start using those, then move to a more detailed but still agreeable model and start using that, and so on. So yes, we can only start filling in the details, even regarding our approach to studying the subject, by establishing models and then running them. While there is no guarantee it will work, we can be guaranteed it won’t work if we don’t go down this path. The approach is not provably correct, but nothing in nature can be proven. All we can do is develop hypotheses and test them. By iterating on the hypotheses and expanding them with each pass, we bootstrap them to greater explanatory power. Looking back, I have already done the first (highest level) iteration of bootstrapping by endorsing form & function dualism and the idea that the mind consists of processes that manage information. For the next iteration, I will propose an explanation for how the mind reasons, which I will then use to support arguments for achieving objectivity.

So then, from a high level, how does reasoning work? I presume a mind that starts out with some innate information processing capabilities and a memory bank into which experience can record learned information and capabilities. The mind is free of memories (a blank slate) when it first forms but is hardwired with many ways to process information (e.g. senses and emotions). Because our new knowledge and skills (stored in memory) build on what came before, we are essentially continually bootstrapping ourselves into more capable versions of ourselves. I mention all this because it means that the framework with which we reason is already highly evolved even from the very first time we start making conscious decisions. Our theory of reasoning has to take into account the influence of every event in our past that changed our memory. Every event that even had a short-term impact on our memory has the potential for long-term effects because long-term memories continually form and affect our overall impressions even if we can’t recall them specifically.

One could view the mind as being a morass of interconnected information that links every experience or thought to every other. That view won’t get us very far because it gives us nothing to manipulate, but it is true, and any more detailed views we develop should not contradict it. But on what basis can we propose to deconstruct reasoning if the brain has been gradually accumulating and refining a large pool of data for many years? On functional bases, of which I have already proposed two: logical and statistical, which I introduced above with pragmatism. Are these the only two approaches that can aid prediction? Supernatural prophecy is the only other way I can think of, but we lack reliable (if any) access to it, so I will not pursue it further. Just knowing that however the mind might be working, it is using logical and/or statistical techniques to accomplish its goals gives us a lot to work with. First, it would make sense, and I contend that it is true, that the mind uses both statistical and logical means to solve any problem, using each to the maximum degree they help. In brief, statistical means excel at establishing the assumptions and logical means at drawing out conclusions from the assumptions.

While we can’t yet say how neurons make reasoning possible, we can say that it uses statistics and logic, and from our knowledge of the kinds of problems we solve and how we solve them, we can see more detail about what statistical and logical techniques we use. Statistically, we know that all our experience contributes supporting evidence to generalizations we make about the world. More frequently used generalizations come to mind more readily than lesser-used ones and are sometimes also associated with words or phrases, as with the concept APPLE. An APPLE could be a specimen of fruit of a certain kind, or a reproduction or representation of such a specimen, or used in a metaphor or simile, which are situations where the APPLE concept helps illustrate something else. We can use innate statistical capabilities to recognize something as an APPLE by correlating the observed (or imagined) aspects of that thing against our large database of every encounter we have ever had with APPLES. It’s a lot of analysis, but we can do it instantly with considerable confidence. Our concepts are defined by the union of our encounters, not by dictionaries. Dictionaries just summarize words, and yet words are generalizations and generalizations are summaries, so dictionaries are very effective because they summarize well. But brains are like dictionaries on steroids; our summaries of the assumptions and rules behind our concepts and models are much deeper and were reinforced by every affirming or opposing interaction we ever had. Again, most of this is innate: we generalize, memorize, and recognize whether we want to or not using built-in capacities. Consciousness plays an important role I will discuss later, but “sees” only a small fraction of the computational work our brains do for us.
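The recognition step described above can be caricatured as nearest-neighbor matching. This sketch is my own construction under that assumption (the feature sets and concept names are invented for illustration): a new observation is classified by its similarity to every stored encounter, so the concept is defined by the union of encounters rather than by any dictionary-style definition.

```python
def similarity(a, b):
    """Count shared features between two feature sets (a crude correlation)."""
    return len(set(a) & set(b))

# A toy "database" of past encounters, each tagged with the concept it reinforced.
encounters = [
    ({"red", "round", "stem", "sweet"}, "APPLE"),
    ({"green", "round", "stem", "tart"}, "APPLE"),
    ({"yellow", "long", "peel"}, "BANANA"),
]

def recognize(observed):
    """Pick the concept whose stored encounter best matches the observation."""
    best_features, best_concept = max(
        encounters, key=lambda enc: similarity(observed, enc[0])
    )
    return best_concept

# A blemished apple still matches APPLE best, despite lacking a stem.
assert recognize({"red", "round", "sweet"}) == "APPLE"
```

Adding encounters sharpens the concept without redefining it, which is the point: the generalization is a running summary of experience, not a fixed rule.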

Let’s move on to logical abilities. Logic operates in a formal system, which is a set of assumptions or axioms and rules of inference that apply to them. We have some facility for learning formal systems, such as the rules of arithmetic, but everyday reasoning is not done using formal systems for which we have laid out a list of assumptions and rules. And yet, the formal systems must exist, so where do they come from? The answer is that we have an innate capacity to construct mental models, which are both informal and formal systems. They are informal on many levels, which I will get into, but also serve the formal need required for their use in logic. How many mental models (models, for short) do we have in our heads? Looked at most broadly, we each have one, being the whole morass of all the information we have ever processed. But it is not very helpful to take such a broad view, nor is it compatible with our experience using mental models. Rather, it makes sense to think of a mental model as the fairly small set of assumptions and rules that describe a problem we typically encounter. So we might have a model of a tree or of the game of baseball. When we want to reason about trees or baseball, we pull out our mental model and use it to draw logical conclusions. From the rules of trees, we know trees have a trunk with ever smaller branches branching off that have leaves that usually fall off in the winter. From the rules of baseball, we know that an inning ends on the third out. Referring back a paragraph, we can see that models and concepts are the same things: they are generalizations, which is to say they are assessments that combine a set of experiences into a prototype. Though built from the same data, models and concepts have different functional perspectives: models view the data from the inside as the framework in which logic operates, and concepts view it from the outside as the generalized meaning it represents.

While APPLE, TREE, and BASEBALL are individual concepts/models, no two instances of them are the same. Any two apples must differ at least in time and/or place. When we use a model for a tree (let’s call it the model instance), we customize the model to fit the problem at hand. So for an evergreen tree, for example, we will think of needles as a degenerate or alternate form of leaves. Importantly, we don’t consciously reason out the appropriate model for the given tree; we recognize it using our innate statistical capabilities. A model or concept instance is created through recognition of underlying generalizations we have stored from long experience, and then tweaked on an ad hoc basis (via further recognition and reflection) to add unique details to this instance. Reflection can be thought of as a conscious tool to augment recognition. So a typical model instance will be based on recognition of a variety of concepts/models, some of which will overlap and even contradict each other. Every model instance thus contains a set of formal systems, so I generally call it a constellation of models rather than a model instance.

We reason with a model constellation by using logic within each component model and then using statistical means to weigh them against each other. The critical aspect of the whole arrangement is that it sets up formal systems in which logic can be applied. Beyond that, statistical techniques provide the huge amount of flexibility needed to line up formal systems to real-world situations. The whole trick of the mind is to represent the external world with internal models and to run simulations on those models to predict what will happen externally. We know that all animals have some capacity to generalize to concepts and models because their behavior depends on being able to predict the future (e.g. where food will be). Most animals, but humans in particular, can extend their knowledge faster than their own experience allows by sharing generalizations with others via communication and language, which have genetic cognitive support. And humans can extend their knowledge faster still through science, which formally identifies objective models.
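The division of labor just described, logic within each model and statistics to weigh the models against each other, can be sketched concretely. This is a deliberately crude illustration of my own (the models, weights, and situations are invented): each model in the constellation draws its logical conclusion, and a statistical fit score decides which model best lines up with the real-world situation.

```python
def weigh(constellation, situation):
    """Apply each model's logic, then weigh verdicts by statistical fit."""
    votes = {}
    for model in constellation:
        conclusion = model["infer"](situation)  # logic inside the formal system
        weight = model["fit"](situation)        # statistical match to the world
        votes[conclusion] = votes.get(conclusion, 0.0) + weight
    return max(votes, key=votes.get)

# Two overlapping tree models: deciduous and evergreen. Each is internally
# logical; the fit scores are the statistical glue between them.
constellation = [
    {"infer": lambda s: "drops leaves in winter",
     "fit":   lambda s: 0.9 if "broad leaves" in s else 0.1},
    {"infer": lambda s: "keeps needles in winter",
     "fit":   lambda s: 0.9 if "needles" in s else 0.1},
]

# Observing needles statistically favors the evergreen model's conclusion.
assert weigh(constellation, {"needles"}) == "keeps needles in winter"
```

The point of the arrangement is visible in miniature: neither model is discarded when it fits poorly; it simply contributes less to the prediction.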

So what steps can we take to increase the objectivity of what goes on in our minds, which has some objective elements in its use of formal models, but which also has many subjective elements that help form and interpret the models? Devising software that could run mental models would help because it could avoid fallacies and guard against biases. It would still ultimately need to prioritize using preferences, which are intrinsically subjective, but we could at least try to be careful and fair setting them up. Although it could guard against the abuses of bias, we have to remember that all generalizations are a kind of bias, being arguments for one way of organizing information over another. We can’t write software yet that can manage concepts or models, but machine learning algorithms, which are statistical in nature, are advancing quickly. They are becoming increasingly generalized to behave in ever more “clever” ways. Since concepts and models are themselves statistical entities at their core, we will need to leverage machine learning as a starting point for software that simulates the mind.

Still, there is much we can do to improve our objectivity of thought short of replacing ourselves with machines, and science has been refining methods to do it from the beginning. Science’s success depends critically on its objectivity, so it has long tried to reject subjective biases. It does this principally by cultivating a culture of objectivity. Scientists try to put opinion aside to develop hypotheses in response to observations. They then test them with methods that can be independently confirmed. Scientists also use peer review to increase independence from subjectivity. But what keeps peers from being subjective? In his 1962 classic, The Structure of Scientific Revolutions4, Thomas Kuhn noted that even a scientific community that considers itself objective can become biased toward existing beliefs and will resist shifting to a new paradigm until the evidence becomes overwhelming. This observation inadvertently opened a door that postmodern deconstructionists used to launch the science wars, an argument that sought to undermine the objective basis of science, calling it a social construction. To some degree this is undeniable, which has left science with a desperate need for a firmer foundation. The refutation science has fallen back on for now was best put by Richard Dawkins, who noted in 2013 that “Science works, bitches!”5. Yes, it does, but until we establish why, we are blustering much like the social constructionists. The reason science works is that scientific methods increase objectivity while reducing subjectivity and relativism. It doesn’t matter that they don’t (and in fact can’t) eliminate them. All that matters is that they reduce them, which distinguishes science from social construction by directing it toward goals. Social constructions go nowhere, but science creates an ever more accurate model of the world.
So, yes, science is a social construction, but one that continually moves closer to truth, if truth is defined in terms of knowledge that can be put to use. In other words, from a functional perspective, truth just means increasing the amount and quality of useful information. It is not enough for scientific communities to assume best efforts will produce objectivity, we must also discover how preferences, biases, and fallacies can mislead the whole community. Tversky and Kahneman did groundbreaking work exposing the extent of cognitive biases in scientific research, most notably in their 1971 paper, “Belief in the law of small numbers.”67. Beyond just being aware of biases, scientists should not have to work in situations with a vested interest in specific outcomes. This can potentially happen in both public and private settings, but is more commonly a problem when science is used to justify a commercial enterprise.

5. Orienting science (esp. cognitive science) with form & function dualism and pragmatism

The paradigm I am proposing to replace physicalism, rationalism, and empiricism is a superset of them. Form & function dualism embraces everything physicalism stands for but doesn’t exclude function as a form of existence. Pragmatism embraces everything rationalism and empiricism stand for but also includes knowledge gathered from statistical processes and function.

But wait, you say, what about biology and the social sciences: haven’t they been making great progress within the current paradigm? Well, they have been making great progress, but they have been doing it using an unarticulated paradigm. Since Darwin, biology has pursued a function-oriented approach. Biologists examine all biological systems with an eye to the function they appear to be serving, and they consider the satisfaction of function to be an adequate scientific justification, but it isn’t under physicalism, rationalism or empiricism. Biologists cite Darwin and evolution as justification for this kind of reasoning, but that doesn’t make it science. The theory of evolution is unsupportable under physicalism, rationalism, and empiricism alone, but instead of acknowledging this metaphysical shortfall some scientists just ignore evolution and reasoning about function while others just embrace it without being overly concerned that it falls outside the scientific paradigm. Evolutionary function occupies a somewhat confusing place in reasoning about function because it is not teleological, meaning that evolution is not directed toward an end or shaped by a purpose but rather is a blind process without a goal. But this is irrelevant from an informational standpoint because information never directs toward an end anyway; it just helps predict. Goals are artifacts of formal systems, and so contribute to logical but not statistical information management techniques. In other words, goals and logic are imaginary constructs; they are critical for understanding the mind but can be ignored for studying evolution and biology, which has allowed biology to carry on despite this weakness in its foundation.

The social sciences, too, have been proceeding on an unarticulated paradigm. Officially, they are trying to stay within the bounds of physicalism, rationalism, and empiricism, but the human mind introduces a black box, which is what scientists call a part of the system that is studied entirely through its inputs and outputs without any attempt to explain the inner workings. Some efforts have been made to explain it. Pavlov and Skinner proposed that behaviorism could explain the mind as nothing more than operant conditioning, which sounded good at first but didn’t explain all that minds do. Chomsky refuted it in a rebuttal to Skinner’s Verbal Behavior by explaining how language acquisition leverages innate linguistic talents8. And Piaget extended the list of innate cognitive skills by developing his staged theory of intellectual development. So we now have good reason to believe the mind is much more than conditioned behavior and employs reasoning and subconscious know-how. But that is not the same thing as having an ontology or epistemology to support it. Form & function dualism and pragmatism give us the leverage to separate the machine (the brain) from its control (the mind) and to dissect the pieces.

Expanding the metaphysics of science has a direct impact across science and not just regarding the mind. First, it finds a proper home for the formal sciences in the overall framework. As Wikipedia says, “The formal sciences are often excluded as they do not depend on empirical observations.” Next, and critically, it provides a justification for the formal sciences to be the foundation for the other sciences, which are dependent on mathematics, not to mention logic and hypotheses themselves. But the truth is that there is no metaphysical justification for invoking formal sciences to support physicalism, rationalism, and empiricism. With my paradigm, the justification becomes clear: function plays an indispensable role in the way the physical sciences leverage generalizations (scientific laws) about nature. In other words, scientific theories are from the domain of function, not form. Next, it explains the role evolutionary thinking is already playing in biology because it reveals how biological mechanisms use information stored in DNA to control life processes through feedback loops. Finally, this expanded framework will ultimately let the social sciences shift from black boxes to knowable quantities.

But my primary motivation for introducing this new framework is to provide a scientific perspective for studying the mind, which is the domain of cognitive science. It will elevate cognitive science from a loose collaboration of sciences to a central role in fleshing out the foundation of science. Historically the formal sciences have been almost entirely theoretical pursuits because formal systems are abstract constructs with no apparent real-world examples. But software and minds are the big exceptions to this rule and open the door for formalists to study how real-world computational systems can implement formal systems. Theoretical computer science is a well-established formal treatment of computer science, but there is no well-established formal treatment for cognitive science, although the terms theoretical cognitive science and computational cognitive science are occasionally used. Most of what I discuss in this book is theoretical cognitive science because most of what I am doing is outlining the logic of minds, human or otherwise, but with a heavy focus on the design decisions that seem to have impacted earthly, and especially human, minds. Theoretical cognitive science studies the ways minds could work, looking at the problem from the functional side, and leaves it as a (big) future exercise to work out how the brain actually brings this sort of functionality to life.

It is worth noting here that we can’t conflate software with function: software exists physically as a series of instructions, while function exists mentally and has no physical form (although, as discussed, software and brains can produce functional effects in the physical world and this is, in fact, their purpose). Drew McDermott (whose class I took at Yale) characterized this confusion in the field of AI like this (as described by Margaret Boden in Mind as Machine):

A systematic source of self-deception was their common habit (made possible by LISP: see 10.v.c) of using natural-language words to name various aspects of programs. These “wishful mnemonics”, he said, included the widespread use of “UNDERSTAND” or “GOAL” to refer to procedures and data structures. In more traditional computer science, there was no misunderstanding; indeed, “structured programming” used terms such as GOAL in a liberating way. In AI, however, these apparently harmless words often seduced the programmer (and third parties) into thinking that real goals, if only of a very simple kind, were being modelled. If the GOAL procedure had been called “G0034” instead, any such thought would have to be proven, not airily assumed. The self-deception arose even during the process of programming: “When you [i.e. the programmer] say (GOAL… ), you can just feel the enormous power at your fingertips. It is, of course, an illusion” (p. 145). 9

This raises the million-dollar question: if an implementation of an algorithm is not itself function, where is the function, i.e. real intelligence, hiding? I am going to develop the answer to this question as the book unfolds, but the short answer is that information management is a blind watchmaker in both evolution and the mind. That is, from a physical perspective the universe can be thought of as deterministic, so there is no intelligence or free will. But the main thrust of my book is that this doesn’t matter, because algorithms that manage information are predictive, and this capacity is equivalent to both intelligence and free will. So if procedure G0034 is part of a larger system that uses it to effectively predict the future, it can fairly be called by whatever functional name describes this role. Such mnemonics are not actually wishful. It is no illusion that the subroutines of a self-driving car that get it to its destination in one piece wield enormous power and achieve actual goals. This doesn’t mean we are ready to program goals at the level human minds conceive them (and certainly not UNDERSTAND!), but function, i.e. predictive power, can be broken down into simple examples and implemented on today’s computers.
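This point can be made concrete with a toy sketch (all names and numbers here are my own invented illustration, not anyone’s real system): a procedure blandly named G0034 earns a functional name not from its label but from its predictive role in a larger control loop.

```python
# A procedure with a deliberately meaningless name, per McDermott's G0034.
def g0034(position, velocity, dt):
    return position + velocity * dt  # linear extrapolation of motion

# The name earns functional meaning only inside a system that uses the
# prediction to act: here, a controller that brakes to avoid overshooting.
def brake_decision(position, velocity, stop_at, dt=1.0):
    predicted = g0034(position, velocity, dt)
    return "brake" if predicted >= stop_at else "coast"

print(brake_decision(position=0.0, velocity=5.0, stop_at=4.0))  # brake
print(brake_decision(position=0.0, velocity=2.0, stop_at=4.0))  # coast
```

Renamed `predict_position`, g0034 would be no wishful mnemonic: within this loop it genuinely predicts, and the prediction genuinely steers behavior.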

What are the next steps? My main point is that we need to start thinking about how minds achieve function and stop expecting a breakthrough in neurochemistry to magically solve the problem. We have to solve the problem by solving the problem, not by hoping a better understanding of the hardware will explain the software. While the natural sciences decompose the physical world from the bottom up, starting with subatomic particles, we need to decompose the mental world from the top down, starting (and ending) with the information the mind manages.

An Overview of What We Are

[Brief summary of this post]

What are we? Are we bodies or minds or both? Natural science tells us with fair certainty that we are creatures, one type among many, who evolved over the past few billion years in an entirely natural and explainable way. I certainly endorse broad scientific consensus, but this only confirms bodies, not minds. Natural science can’t yet confirm the existence of minds; we can observe the brain, by eye or with instruments, but we can’t observe the mind. Everything we know (or think we know) about the mind comes from one of two sources: our own experience or hearsay. However comfortable we are with our own minds, we can’t prove anything about the experience. Similarly, everything we learn about the world from others is still hearsay, in the sense that it is information that can’t be proven. We can’t prove things about the physical world; we can only develop pretty reliable theories. And knowledge itself, being information and the ability to apply it, only exists in our minds. Some knowledge appears instinctively, and some is acquired through learning (or so it seems to us). Beyond knowledge, we possess senses, feelings, desires, beliefs, thoughts, and perspectives, and we are pretty sure we can recognize these things in others. All of these mental words mean something about our ability to function in the world, and have no physical meaning in and of themselves. And not incidentally, we also have physical words that let us understand and interact with the physical world even though these words are also mental abstractions, being generalizations about kinds or instances of physical phenomena. We can comfortably say (but can’t prove) that we have a very good understanding of a mentally functional existence that is quite independent of our physical existence, an understanding that is itself entirely mentally functional and not physical. It is this mentally functional existence, our mind, that we most strongly identify with. 
When we are discussing any subject, the “we” doing the discussing is our minds, not our bodies. While we can identify with our bodies and recognize them as an inseparable possession, they, including our brains, are at least logically distinct entities from our minds. We know (from science) that the brain hosts our mind, but that is irrelevant to how we use our minds (excepting issues concerning the care of our heads and bodies) because our thoughts are abstractions not bound (except through indirect reference) to the physical world.

Given that we know we are principally mental beings, i.e. that we exist more from the perspective of function than form, what can we do to develop an understanding of ourselves? All we need to do is approach the question from the perspective of function rather than form. We don’t need to study the brain or the body; we need to study what they do and why. Just as convergent evolution caused eyes to evolve independently perhaps 50-100 times, our brain functions evolved because of their value rather than because of their mechanism. Function drives evolution, not form, although form constrains what can be achieved.

But let’s consider the form for a moment before we move on to function. Observations of the brain will eventually reveal how it works in the same way dissection of a computer would. This will illuminate all the interconnections, and even which areas specialize in what kind of tasks. Monitoring neural activation alone could probably even get to the point where one could predict the gist of our thoughts with fair accuracy by correlating areas of neural activity to specific memories and mental states. But that would still be a parlor trick because such a physical reading would not reveal the rationale for the logical relationships in our cognitive models. The physical study of the brain will reveal much about the constraints of the system (the “hardware”), including signal speeds, memory storage mechanisms, and areas of specialized functions, but could it trace our thoughts (the “software”)? To extend the computer analogy, one can study software by doing a memory dump, so a similar memory reading ability for brains could reveal thoughts. But it is not enough to know the software or the thoughts; one needs to know what function is being served, i.e. what the software or thoughts do. A physical examination can’t reveal that; it is a mental phenomenon that can be understood only by reasoning out what it does from a higher-level (generalized) perspective and why. One can figure out what software does from a list of instructions, but one can’t see the larger purposes being served without asking why, which moves us from form to function, from physical to mental. So a better starting point is to ask what function is being served, from which one can eventually back out how the hardware and software do it. Since we are far from being able to decode the hardware or software of the brain (“wetware”) in much detail anyway, I will adopt this more direct functional approach.

From the above, we have finally arrived at the question we need to ask: What function do minds serve? The answer, for which I will provide a detailed defense later on, is that the function of the brain is to provide centralized, coordinated control of the body, and the function of the conscious mind is to provide centralized, coordinated control of the brain. That brains control bodies is, by now, not a very controversial stance. The rest of the body provides feedback to the brain, but the brain ultimately decides. The gut brain does a lot of “thinking” for itself, passing along its hungers and fears, but it doesn’t decide for you. That the conscious mind controls the brain is intuitively obvious but hard to prove given that our only primary information source about the mind is the mind itself, i.e. it is subjective instead of objective. However, if we work from the assumption that the brain controls the body using information management, which is to say the application of algorithms on data, then we can define the mind as what the brain is doing from a functional perspective. That is, the mind is our capacity to do things.

The conscious mind, however, is just a subset of the mind, specifically including everything in our conscious awareness, from sensory input to memories, both at the center of our attention and in a more peripheral state of awareness. We feel this peripheral awareness both because we can tell it is there without dwelling on it and because we often do turn our attention to it, at which point it happily becomes the center. The capacity of our mind to do things is much larger than our conscious awareness, including all things our brains can do for which we don’t consciously sense the underlying algorithm. Statistically, this includes almost everything our brains do. The things we use our minds to do which we can’t explain are said to be done subconsciously, by our subconscious mind. We only know the subconscious mind is there by this process of elimination: we can do it, but we are not aware of how we do it or sometimes that we are doing it at all.

For example, we can move, talk, and remember using our (whole) mind, but we can’t explain how we do them because they are controlled subconsciously, and the conscious mind just pulls the strings. Any explanations I might attempt of the underlying algorithms behind these actions sound like they are at the puppeteer level: I tell my body to move, I use words to talk, I remember things by thinking about them. In short, I have no idea how I really do it. The explanations or understandings available to the conscious mind develop independently of the underlying subconscious algorithms. Our conscious understanding is based only on the information available to conscious awareness. While we are aware of much of the sensory data used by the brain, we have limited access to the subconscious processing performed on that data, and consequently limited access to the information it contains. What ends up happening is that we invent our own view of the world, our own way of understanding it, using only the information we can access through awareness and the subconscious and conscious skills that go with it. What this means is that our whole understanding of the world (including ourselves) is woven out of information we derive from our awareness and not from the physical world itself, which we only know second-hand. Exactly like a sculptor, we build a model of the world, similar to it in as many ways as we can make it feel similar, but at all times just a representation and not the real thing. While we evolved to develop this kind of understanding, it depends heavily on the memories we record over our lifetimes (both consciously accessible and subconsciously not). As the mind develops from infancy, it acquires information from feedback that it can put to use, and it thinks of this information as “knowledge” because it works, i.e. it helps us to predict and consequently to control. To us, it seems that the mind has a hotline to reality. 
Actually, though, the knowledge is entirely contextual within the mind, not reality itself but only representative of it. But by representing it the contexts or models of the conscious mind arise: the conscious mind has no choice but to believe in itself because that is all it has.

Speaking broadly, subconscious algorithms perform specialized informational tasks like moving a limb, remembering a word, seeing a shape, and constructing a phrase. Consciously, we don’t know how they do it. Conscious algorithms do more generalized tasks, like thinking of ways to find food or making and explaining plans. We know how we do these things because we think them through. Conscious algorithms provide centralized, coordinated control of subconscious (and other conscious) algorithms. Only the top layer of centralized control is done consciously; much can be done subconsciously. For example, all our habitual behavior starts under conscious development and is then delegated to the subconscious going forward. As the central controller, though, the buck stops with the conscious mind; it is responsible for reviewing and approving, or, in the case of habitual behavior, preapproving, all decisions. Some recent studies impugn this decisive capacity of the conscious mind with evidence that we make decisions before we are consciously aware of having done so.1 But that doesn’t undermine the role of consciousness; it just demonstrates that, to operate with speed and efficiency, we can preapprove behaviors. Ideally, the conscious mind can make each sort of decision just once and self-program to reapply that decision as needed going forward without having to repeat the analysis. It is like a CEO who never pulls the trigger himself but has others do it for him, while continually monitoring to see that things are being done right.
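The preapproval idea can be sketched as decision caching (a minimal sketch with invented situations, not a claim about neural implementation): slow deliberation runs once per kind of situation, and the resulting decision is handed to a fast habit lookup thereafter.

```python
habits = {}  # decisions preapproved by earlier deliberation

def deliberate(situation):
    # Stand-in for slow, conscious analysis; here just a trivial rule.
    return "stop" if situation == "red_light" else "proceed"

def act(situation):
    # Fast path: reuse a preapproved decision when one exists.
    if situation not in habits:
        habits[situation] = deliberate(situation)  # decide once, "consciously"
    return habits[situation]  # thereafter, habit answers without analysis

act("red_light")   # deliberated, then cached as a habit
act("red_light")   # habitual: no deliberation this time
```

On this picture, a reaction preceding conscious awareness is no paradox; it is the cache answering a question the deliberator already settled.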

I thus conclude that the conscious mind is a subprocess of the mind that exists to make decisions, that it does so using perspectives called knowledge that are only meaningful locally (i.e. in the context of the information under its management), and that these contexts are distilled from information fed to it by subconscious processes. The conscious mind is separate from the subconscious mind for practical reasons. The algorithmic details of subconscious tasks are not relevant to centralized control. We subconsciously metabolize, pump blood, breathe, blink, balance, hear, see, move, etc. We have conscious awareness of these things only to the degree we need it to make decisions. For example, we can’t control metabolism and heartbeat (at least without biofeedback), and we consequently have no conscious awareness of them. Similarly, we don’t control what we recognize. Once we recognize something, we can’t see it as something else (unless an alternate recognition occurs). But we need to be aware of what we recognize because it affects our decisions. We breathe and blink automatically, but we are also aware we are doing it so we can sometimes consciously override it. So the constant stream of information from the subconscious mind that flows past our conscious awareness is just the set we need for high-level decisions. The conscious mind is unaware of how the subconscious does these things because this extraneous information would overly complicate its task, slowing it down and probably compromising its ability to lead. We subjectively know the limits of our conscious reach, and we can also see evidence of all the things our brains must be doing for us subconsciously. I suspect this separation extends to the whole animal kingdom, which consists almost entirely of bilateral animals with a single brain.
Octopuses are arguably an exception as they have separate brains for each arm, but the central octopus brain must still have some measure of high-level control over them, perhaps in the form of an awareness, similar to our consciousness. Whether each arm also has some degree of consciousness is an open question.2 Although a separate consciousness process is not the only possible solution to centralized control, it does appear to be the solution evolution has favored, so I will take it as my working assumption going forward.

One can further subdivide the subconscious mind along functional lines into what are called modules, which are specialized functions that also seem to have specialized physical areas of the brain that support them. Steven Pinker puts it this way:

The mind is what the brain does; specifically, the brain processes information, and thinking is a kind of computation. The mind is organized into modules or mental organs, each with a specialized design that makes it an expert in one arena of interaction with the world.3
The mind is a set of modules, but the modules are not encapsulated boxes or circumscribed swatches on the surface of the brain. The organization of our mental modules comes from our genetic program, but that does not mean that there is a gene for every trait or that learning is less important than we used to think.4

Positing that the mind has modules doesn’t tell us what they are or how they work. Machines are traditionally constructed from parts that serve specific purposes, but design refinements (e.g. for miniaturization) can lead to a streamlining of parts that are fewer in number but holistically serve more functions. Having been streamlined by countless generations, the modules of the mind can’t be as easily distinguished along functional boundaries as the other parts of the body because they all perform information management in a highly collaborative way. But if we accept that any divisions we make are preliminary, we can get on with it without getting too caught up in the details. Drawing such lines is reverse engineering: evolution engineered us, and explaining what it did is reverse engineering. Ideally one learns enough from reverse engineering to build a duplicate mechanism from scratch. But living things were “designed” through trillions of small interactions spread over billions of years. We can’t identify those interactions individually, and in any event, natural selection doesn’t select for individual traits but for entire organisms, so even with all the data one would be hard-pressed to be sure what caused what. However, if one generalizes, that is, if one applies statistical reasoning, one can distinguish functional advantages of one trait over another. And considering that all knowledge and understanding are the product of such generalizing, it is a reasonable strategy. Again, it is not the objective of knowledge to describe things “as they are,” only to create models or perspectives that abstract or generalize certain features. So we can and should try to subdivide the mind into modules and guess how they interact, with the understanding that there is more than one way to skin this cat and greater clarity will come with time.

Subdividing the mind into consciousness and a number of subconscious components will do much to elucidate how the mind provides its centralized control function, but the next most critical aspect to consider is how it manages information. Information derives from the analysis of data, the separation of useful data (the wheat) from noisy data (the chaff). Our bodies use at least two physical mechanisms to record information: genes and memory. Genes are nature’s official book of record, and many mental functions have extensive instinctive support encoded by genes. We have fully sequenced our genes and have identified some functions of some of them. Genes either code for proteins or help regulate those that do. Their function can be viewed narrowly as a biochemical role or more broadly as the benefit conferred to the organism. We are still a long way off from connecting the genes to their biochemical roles, and further still from connecting them to benefits. Even with good explanations for everything, questions will always remain because billions of years of subtlety are coded into genes, and models for understanding invariably generalize that subtlety away.

Memory is an organism’s book of record, responsible for preserving any information it gleans from experience, a process also called learning. We don’t yet understand the neurochemical basis of memory, though we have identified some of the chemicals and pathways involved. Nurture (experience) is often steered by nature (instinct) to develop memory. Some of our instinctive skills work automatically without memory but must leverage memory for us to achieve mastery of a learned behavior. We are naturally inclined to learn to walk and talk but are born with no memory of steps or words. So we follow our genetic inclinations, and through practice we record models in memory that help us perform the behaviors reliably.

Genes and memory store information of completely incompatible types and formats. Genetic information encodes chemical structures (either mRNA or proteins) which translate to function mostly through proteins and gene regulation. Memory encodes objects, events and other generalizations which translate to function through indirection, mostly by correlating memory with reality. Genetic information is physical and is mechanically translated to function. Remembered information is mental and is indirectly or abstractly translated to function. While both ultimately get the job done, the mind starts out with no memory as a tabula rasa (blank slate) and assembles and accumulates memory as a byproduct of cogitation. Many algorithmic skills, like vision processing, are genetically prewired, but on-the-job training leverages memory (e.g. recognition of specific objects). In summary, genes carry information that travels across generations while memory carries information transient to the individual.

I mentioned before that culture is another reservoir of information, but it doesn’t use an additional biological mechanism. While culture depends heavily on our genetic nature, significantly on language, we reserve the word culture for additions we make beyond our nature and ourselves. Language is an innate skill; a group of children with no language can create a complete vocabulary and grammar themselves in a few years. Therefore, cultural information is not stored in genes but only in memory, and it is also stored in artifacts as a form of external memory. Each of us forms a unique set of memories based on our own experience and our exposure to culture. What an apple is to each of us is a unique derivation of our lifetime exposure to apples, but we all share general ideas (knowledge) about what one can do with apples. We create memories of our experiences using feedback we ourselves collect. Our memory of culture, on the other hand, is partially based on our own experiences and partially on the underlying cultural information others created. Cultural institutions, technologies, customs, and artifacts have ancient roots and continually evolve. Culture extends our technological and psychological reach, providing new ways to control the world and understand our place in it. While cultural artifacts mediate much of the transmission of culture, most culture is acquired from direct interaction with other people via spoken language or other activities. Culture is just a thin veneer sitting on top of our individual memories, but it is the most salient part to us because it encodes so much of what we can share.

To summarize so far, we have conscious and subconscious minds that manage information using memory. The conscious mind is distinct from the subconscious as the point where relevant information is gathered for top-level centralized control. But why are conscious minds aware? Couldn’t our top-level control process be unaware and zombie-like? No, it could not, and the analogy to zombies or robots reveals why. While we can imagine an automaton performing a task effectively without consciousness, as indeed some automated machines do, we also know that they lack the wherewithal to respond to unexpected circumstances. In other words, we expect zombies and robots to have rigid responses and to be slow or ineffective in novel situations. This intuition we have about them results from our belief that simple tasks can be automated, but very general tasks require generalized thinking, which in turn requires consciousness. I’m going to explain why this intuition is sound and not just a bias, and in the process we will see why the consciousness process must be aware of what it is doing.

I have so far described the consciousness process as being a distinct subprocess of the mind which is supplied just the information relevant to high-level decisions from a number of subconscious processes, many of them sensory but also memory, language, spatial processing, etc. Its task is to make high-level decisions as efficiently and efficaciously as possible. I can’t prove that this design is the only possible way of doing things, but it is the way the human mind is set up. And I have spoken in general about how knowledge in the mind is contextual and is not identical to reality but only representative of it. But now I am going to look closer at how that representative knowledge causes a mind to “believe in itself” and consequently become aware. It is because we create virtual worlds (called mental models, or models for short) in our heads that look the same as the outside world. We superimpose these on the physical world and correlate them so closely that we can usually ignore the distinction. But they could not be more different. One of them is out there, and the other in here. One exists only physically, the other only mentally (albeit with the help of a physical computational mechanism, the brain). One is detailed down to atoms and then quarks, while the other is a network of generalizations with limited detail, but extensive association. For this reason, a model can be thought of as a simplified, cartoon-like representation5 of physical reality. Within the model, one can do simple, logical operations on this abridged representation to make high-level decisions. Our minds are very handy with models; we mostly manage them subconsciously and can recognize them much the same way we recognize objects. We automatically fit the world to a constellation of models we manage subconsciously using model recognition.

So the approach consciousness uses to make top-level decisions is essentially to run simulations: it builds models that correlate well to physical conditions and then projects the models into the future to simulate what will happen. Consciousness includes models of future possibilities and models of current and past experiences as we observed them. We can’t remember the actual past as it actually was, only how we experienced it through our models. All our knowledge is relative to these models, which in turn relate indirectly to physical reality. But where does awareness fit in? Awareness is just the data managed by this process. We are aware of all the information relevant to top-level decisions because our conscious selves are this consciousness process in the brain. Not all the data within our awareness is treated equally. Since much more information is sensed and recognized than is needed for decisions, the data is funneled down further through an attention process that focuses on just select items in consciousness.6 As I noted before, we can apply our focusing power to anything within our conscious awareness at will to pull it into attention, but our subconscious attention process continually identifies noteworthy stimuli for us to focus on, and it does this by “listening” for signals that stand out from the norm. We know from experience that although we are aware of a lot of peripheral sensory information and peripheral thoughts floating around in our heads at any given point in time, we can only actively think about one thing at a time, in what seems to us a train of thought where one thought follows another. This linear, plodding approach to top-level decision making ensures that the body will make just one coordinated action at a time, so we don’t have to compete with ourselves like a committee every time we do something.
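The simulate-and-choose idea can be sketched as follows (a toy cartoon model with invented states and actions, not a model of any actual neural process): project a simplified model of the world forward under each candidate action, then commit to the single action whose simulated future scores best.

```python
def simulate(state, action, steps=3):
    # Project a cartoon model of the world a few steps into the future.
    pos, goal = state
    for _ in range(steps):
        pos += {"left": -1, "stay": 0, "right": 1}[action]
    return -abs(goal - pos)  # score: closeness to the goal afterward

def decide(state, actions=("left", "stay", "right")):
    # One coordinated action at a time: pick the best simulated future.
    return max(actions, key=lambda a: simulate(state, a))

print(decide((0, 3)))   # moving right scores best in simulation
```

Note that the decision is made entirely inside the model; its usefulness in the world depends wholly on how well the model correlates with physical conditions.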

Let’s think again about whether minds could be robotic. Self-driving cars, for example, are becoming increasingly capable of executing learned behaviors, and even expanding their proficiency dynamically, without any need for awareness, consciousness, reasoning, or meaning. But even a very good learned behavior falls far short of the range of responses that animals need to compete in an evolutionary environment. Animals need a flexible ability to assess and react to situations in a general way, that is, by considering a wide range of past experience. The modeling approach I propose for consciousness can do that. If we programmed a robot to use this approach, it would both internally and externally behave as if it were aware of the data presented to it, which is wholly analogous to what we do. It would have been programmed with a consciousness process that considers access to data “awareness”. Could we conclude that it had actually become aware? I think we could, because it meets the logical requirements, although this doesn’t mean robotic awareness would be as rich an experience as our own. A lot goes into the richness of our experience from billions of years of tweaks that would take us a long time to replicate faithfully in artificial minds. But it is presumptuous of us to think that our awareness, which is entirely a product of data interpretation, is exclusive just because we are inclined to feel that way.

Let me talk for a moment about that richness of experience. How and why our sensory experiences (called qualia) feel the way they do is what David Chalmers has famously called the hard problem of consciousness. The problem is only hard if you are unwilling to see consciousness as a subroutine in the brain that is programmed to interpret data as feelings. It works exactly the way it does because it is the most effective way that has evolved to get bodies to take all the steps they need to survive. As will be discussed in the next section, qualia are an efficient way to direct data from many external channels simultaneously to the conscious mind. The channels and the attention process focus the relevant data, but the quality or feeling of the qualia results from subconscious influences the qualia exert. Taste and smell distill chemical analyses into a kind of preference for the conscious mind. Color and sound can warn us of danger or calm us down. These qualia seem almost supernatural, but they actually just neatly package up associations in our minds so we will feel like doing the things that are best for us. Why do we have a first-person experience of them? Here, too, it is nothing special. First-person is just the name we give to this kind of processing. If we look at our, or someone else’s, conscious process more from a third-person perspective, we can see that what sets it apart is just the flood of information from subconscious processes giving us a continuous stream of sensations and skills that we take for granted. First person just means being connected so intimately to such a computing device.

Now think about whether robots can be conscious. Self-driving cars use a specialized algorithm that consults millions of hours of driving experience to pick the most appropriate responses. These cars don’t reason out what might happen in different scenarios in a general way. Instead, they use all that experience to look up the right answer, more or less. They still use internal models for pedestrians, other cars, roads, etc., but once they have modeled the basic circumstances they just look up the best behavior rather than reasoning it out generally. As we start to build robots that need more flexibility, we may well design the equivalent of a conscious subprocess, i.e. a higher-level process that reasons with models. If we also use the approach of giving it qualia that color its preferences around its sensory inputs in preprogrammed (“subconscious”) ways to simplify the task at the conscious level, then we will have built a consciousness similar to our own. But while such a robot may technically meet my definition of consciousness, and may even sometimes fool people into thinking it is human (i.e. pass the Turing test), that alone won’t mean it experiences qualia anywhere near as rich as our own, because we have more qualia encoding more preferences in a highly interconnected and seamless way following billions of years of refinements. Brains and bodies are an impressive accomplishment. But they are ultimately just machines, and it is theoretically possible to build them from scratch, though not with the approaches to building we have today.
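The contrast between lookup and general reasoning can be sketched like this (toy policies with invented situations, offered only to mark the distinction): a lookup policy is fast but mute outside its experience table, while a model-based policy can respond to a novel situation by reasoning from its general features.

```python
# Lookup policy: map recorded circumstances directly to stored responses.
experience = {
    ("pedestrian", "near"): "brake",
    ("clear", "far"): "cruise",
}

def lookup_policy(situation):
    return experience.get(situation)  # None for anything never seen before

# Model-based policy: reason from general features of the situation,
# generalizing to obstacles never encountered in training.
def model_policy(obstacle, distance):
    return "brake" if obstacle != "clear" and distance == "near" else "cruise"

lookup_policy(("moose", "near"))  # no stored answer: outside its experience
model_policy("moose", "near")     # infers "brake" from the general rule
```

The second policy is a crude stand-in for the conscious subprocess described above: it trades the speed of lookup for the flexibility of reasoning over a model.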

The Certainty Engine

The Certainty Engine: How Consciousness Arose to Drive Decisions Through Rationality

The mind’s organization as we experience it revolves around the notion of certainty. It is a certainty engine, designed to enable us to act with the full expectation of success. In other words, we don’t act confidently just because we are brash, but because we are certain. It is a surprising capacity, given that we know the future is unknowable. We know we can’t be certain about the future, and yet at the same time we feel certain. That feeling comes from two sources, one logical and one psychological.

Logically, we break the world down into chunks which follow rules of cause and effect. We gather these chunks and rules into mental models (models for short) where certainty is possible because we make the rules. When we think logically, we are using these mental models to think about the physical world, because logic, and cause and effect, only exist in the models; they exist mentally but not physically. Cause and effect are just illusions of the way we describe things — very near and dear to our hearts — but not scientific realities. The universe follows its clockwork mechanism according to its design, and any attempt to explain what “caused” what after the fact is going to be a rationalization, which is not necessarily a bad thing, but it does necessarily mean simplifying down to an explanatory model in which cause and effect become meaningful concepts. Consequently, if something is true in a model, then it is a logical certainty in that model. We are aware on some level that our models are simplifications that won’t perfectly match the physical world, but on another level, we are committed to our models because they are the world as we understand it.

Psychologically, it wouldn’t do for us to be too scared to ever act for fear of making a mistake, so once our confidence reaches a given threshold we leap. In some of our models we will succeed while in others we will fail. Most of our actions succeed. This is because most of our decisions are habitual and momentary, like putting one foot in front of the other. Yes, we know we could stub our toe on any step, and we have a model for that, but we rarely think about it. Instead, we delegate such decisions to our subconscious minds, which we trust both to avoid obstacles and to alert us to them as needed, that is, whenever avoidance poses more of a challenge than the subconscious is prepared to handle. For any decision more challenging than habit can handle we try to predict what will happen, especially with regard to what actions we can take to change the outcome. In other words, we invoke models of cause and effect. These models stipulate that certain causes have certain effects, so the model renders certainty. If I go to the mailbox to get the mail and the mailman has come today, I am certain I will find today’s mail. Our plans fail when our models fail us. We didn’t model the situation well enough, either because there were things we didn’t know or because our conclusions were insufficiently justified. The real world is too complicated to model perfectly, but all that matters is that we model it well enough to produce predictions that are good enough to meet our goals. Our models simplify to imply logical outcomes that are more likely than chance to come true. This property, which separates information from noise, is why we believe a model, which is to say we are psychologically prepared to trust the certainty we feel about the model enough to act on it and face the consequences.
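The threshold idea can be put in schematic form. The model, the confidence numbers, and the threshold below are all invented; the point is only the structure: a model stipulates cause and effect with some felt certainty, and we act once that certainty is high enough.

```python
# A schematic sketch: invented model, confidences, and threshold.

ACT_THRESHOLD = 0.8  # hypothetical point at which felt certainty suffices

mail_model = {
    # cause: (predicted effect, confidence the model grants it)
    "mailman_has_come": ("mail_in_mailbox", 0.99),
    "sky_is_cloudy":    ("rain",            0.40),
}

def predict(model, cause):
    """A model stipulates that certain causes have certain effects."""
    return model.get(cause, (None, 0.0))

def decide(model, cause):
    """Leap once confidence clears the threshold; otherwise hold back."""
    effect, confidence = predict(model, cause)
    if confidence >= ACT_THRESHOLD:
        return f"act: expect {effect}"
    return "gather more information"
```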

What I am going to examine is how this kind of imagination arose, why it manifests in what we perceive as consciousness, and what it implies for how we should lead our lives.

To the fundamental question, “Why are we here?”, the short answer is that we are here to make decisions. The long answer will fill this book, but to elaborate some, we are physically (and mentally) here because our evolutionary strategy for survival has been successful. That mental strategy, for all but the most primitive of animals, includes being conscious with both awareness and free will, because those capacities help with making decisions, which translates to acting effectively. Decision-making involves capturing and processing information, and information is the patterns hiding in data. Brains use a wide variety of customized algorithms, some innate and some learned, to leverage these patterns to predict the future. To the extent these algorithms do not require consciousness I call them subrational. If all of them were subrational then there would be no need for subjective experience; animals could go about their business much like robots without any of the “inner life” which characterizes consciousness. But one of these talents, reasoning, mandates the existence of a subjective theater, an internal mental perspective which we call consciousness, that “presents” version(s) of the outside world to the mind for consideration as if they were the outside world. All but the simplest of animals need to achieve a measure of the certainty of which I have spoken and to do that they need to model worlds and map them to reality. This capacity is called rationality. It is a subset of the reasoning process, with the balance being our subrational innate talents, which proceed without such modeling (though some support it or leverage it). Rationality mandates consciousness, not as a side effect but because reasoning (which needs rationality) is just another way of describing what consciousness is. That is, our experience of consciousness is reasoning using the faculties we possess that help us do so.

At its heart, rationality is based on propositional logic, a well-developed discipline that consists of propositions and rules that apply to them. Propositions are built from concepts, which are references that can be about, represent, or stand for things, properties and states of affairs. Philosophers call this “aboutness” that concepts possess “intentionality”, and divide mental states into those that are intentional and those that are merely conscious, i.e. feelings, sensations and experiences in our awareness1. To avoid confusion and ambiguity, I will henceforth simply call intentional states “concepts” and conscious states “awareness”. Logic alone doesn’t make rationality useful; concepts and conclusions have to connect back to the real world. To accomplish this they are built on an extensive subrational infrastructure, and understanding that is a big part of understanding how the mind works.
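As a sketch of what propositional reasoning over concepts looks like mechanically, consider the following, where the propositions and rule chain are invented examples and the only inference rule is modus ponens:

```python
# A minimal sketch of propositional inference (invented propositions).

facts = {"mailman_has_come"}                        # propositions held true
rules = [("mailman_has_come", "mail_is_waiting"),   # if P then Q
         ("mail_is_waiting", "trip_is_worthwhile")]

def infer(facts, rules):
    """Apply modus ponens repeatedly until no new propositions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived
```

Note that nothing in the program connects "mailman_has_come" to any actual mailman; that grounding is exactly the job the text assigns to the subrational infrastructure.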

So let’s look closer at the attendant features of consciousness and how they contribute to rationality. Steven Pinker distinguishes four “main features of consciousness — sensory awareness, focal attention, emotional coloring, and the will.”2 The first three of these are subrational skills and the last is rational. Let’s focus on subrational skills for now, and we will get to the rational will, being the mind’s control center, further down. The mind also has many more kinds of subrational skills, sometimes called modules. I won’t focus too much on exact boundaries or roles of modules as that is inherently debatable, but I will call out a number of abilities as being modular. Subrational skills are processed subconsciously, so we don’t consciously sense how they work; they appear to work magically to us. We do have considerable conscious awareness and sometimes control over these subrational skills, so I don’t simply call them “subconscious”. I am going to briefly discuss our primary subrational skills.

First, though, let me more formally introduce the idea of the mind as a computational engine. The idea that computation underlies thinking goes back nearly 400 years to Thomas Hobbes, who said “by reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract.” Alan Turing and Claude Shannon developed theories of computability and information from the 1930s to the 1950s that led to Hilary Putnam formalizing the Computational Theory of Mind (CTM) in 1961. As Wikipedia puts it, “The computational theory of mind holds that the mind is a computation that arises from the brain acting as a computing machine, [e.g.] the brain is a computer and the mind is the result of the program that the brain runs”. This is not to suggest it is done digitally as our computers do it; brains use a highly customized blend of hardware (neurochemistry) and software (information encoded with neurochemistry). At this point we don’t know how it works except for some generalities at the top and some details at the bottom. Putnam himself abandoned CTM in the 1980s, though in 2012 he resubscribed to a qualified version. I consider myself an advocate of CTM, provided it is interpreted from a functional standpoint. It does not matter how the underlying mechanism works; what matters is that it can manipulate information, which as I have noted does not consist of physical objects but of patterns that help predict the future. So nothing I am going to say in this book is dependent on how the brain works, though it will not be inconsistent with it either. While we have undoubtedly learned some fascinating things about the brain in recent years, none of it is proven and in any case it is still much too small a fraction of the whole to support many conclusions.
So I will speak of the information management done in the brain as being computational, but that doesn’t imply numerical computations, it only implies some mechanism that can manage information. I believe that because information is abstract, the full range of computation done in human minds could be done on digital computers. At the same time, different computing engines are better suited to different tasks because the time it takes to compute can be considerable, so to perform well an engine must be finely tuned to its task. For some tasks, like math, digital computers are much better suited than human brains. For others, like assessing sensory input and making decisions that match that input against experience, they are worse (for now). Although we are a long way from being able to tune a computer as well to its task as our minds are to our task of survival, computers don’t have to match us in all ways to be useful. Our bodies are efficient, mobile, self-sustaining and self-correcting units. Computers don’t need to be any of those things to be useful, though it helps and we are making improvements in these areas all the time.

So knowing that something is computed and knowing how it is done are very different things. We still only have vague ideas about the mechanisms, but we can still deduce much about how it works just by knowing it is computational. We know the brain doesn’t use digital computing, but there are many approaches to information processing and the brain leverages a number of them. Most of the deductions I will promote here center around the distinction between computations done consciously (and especially under conscious attention) and those done subconsciously. We know the brain performs much information processing of which we have no conscious awareness, including vision, associative memory lookup, language processing, and metabolic regulation, to name a few kinds. We know the subconscious uses massively parallel computing, as this is the only way such tasks could be completed quickly and thoroughly enough. Further, we know that the conscious mind largely feels like a single train of thought, though it can jump around a lot and can sense different kinds of things at the same time without difficulty.
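The division of labor just described can be sketched structurally. This is not a brain model; the modules and their outputs are invented stand-ins, and a brain would run them concurrently rather than in sequence as Python does here.

```python
# A structural sketch, not a brain model: invented modules and outputs.

def vision(stimulus):   return f"saw {stimulus}"
def memory(stimulus):   return f"recalled something like {stimulus}"
def language(stimulus): return f"named it '{stimulus}'"

SUBCONSCIOUS_MODULES = [vision, memory, language]

def subconscious(stimulus):
    """Every module analyzes the input; in a brain these run in parallel."""
    return [module(stimulus) for module in SUBCONSCIOUS_MODULES]

def conscious_train_of_thought(stimulus):
    """A single serial process attends to one result at a time."""
    return [f"attending to: {result}" for result in subconscious(stimulus)]
```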

Looking at sensory awareness, we internally process sensory information into qualia (singular quale, pronounced kwol-ee), which is how each sense feels to us subjectively. This processing is a computation and the quale is a piece of data, nothing more, but we are wired to attach a special significance to it subjectively. We can think of the qualia as being data channels into our consciousness. Consciousness itself is a computational process that interprets the data from each of these channels in a different way, which we think of as a different kind of feeling, but which is really just data from a different channel. Beyond this raw feel we recognize shapes, smells, sounds, etc. via the subrational skills of recollection and recognition, which bring experiences and ideas we have filed away back to us based on their connections to other ideas or their characteristics. This information is fed through a memory data channel. Interestingly, the memory of qualia has some of the feel of first-hand qualia, but is not as “vivid” or “convincing”, though sometimes in dreams it can seem to be. This is consistent with the idea that our memory can hold some but not all of the information the data channels carried.

Two core subrational skills let us create and use concepts: generalizing and modeling. Generalization is the ability to recognize patterns and to group things, properties, and ideas into categories called concepts. I consider it the most important mental skill. Generalizations are abstractions, not of the physical world but about it. A concept is an internal reference to a generalization in our minds that lets us think about the generalization as a unit or “thing”. Rational thought in particular only works with concepts as building blocks, not with sensations or other kinds of preconceptual ideas. Modeling itself is a subrational skill that builds conceptual frameworks that are heavily supported by preconceptual data. We can take considerable conscious control of the modeling process, but still the “heavy lifting” is both subrational and subconscious, just something we have a knack for. It is not surprising; our minds make the work of being conscious seem very easy to us so that we can focus with relative ease on making top-level decisions.
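Generalization can be illustrated with a toy example, where the instances and properties are invented: a concept is formed by grouping instances that share properties, and thereafter it can be handled as a single unit.

```python
# A toy sketch of generalization over invented instances and properties.

observations = [
    {"name": "robin",   "flies": True,  "feathers": True},
    {"name": "sparrow", "flies": True,  "feathers": True},
    {"name": "dog",     "flies": False, "feathers": False},
]

def generalize(observations, properties):
    """Form a concept: the set of instances sharing the given properties."""
    return {obs["name"] for obs in observations
            if all(obs[p] for p in properties)}

# The resulting concept can now be used as a single unit or "thing".
bird = generalize(observations, ["flies", "feathers"])
```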

There are countless ways we could break down our many other subrational skills, with logical independence from each other and location in the brain being good ones. Harvard psychologist Howard Gardner identified seven independent “intelligences” in his 1983 book Frames of Mind: The Theory of Multiple Intelligences3: musical, visual-spatial, verbal-linguistic, logical-mathematical, bodily, interpersonal and intrapersonal, later adding an eighth, naturalistic. MIT neuroscientist Nancy Kanwisher in 2014 identified specific brain regions that specialize in shapes, motion, tones, speech, places, our bodies, face recognition, language, theory of mind (thinking about what other people are thinking), and “difficult mental tasks”.4 As with qualia and memory, most of these skills interact with consciousness via their own kind of data channel.

Focus itself is a special subrational skill, the ability to weigh matters pressing on the mind for attention and then to give focus to those that it judges most important. Rather than providing an external data channel into consciousness, focus controls the data channel between conscious awareness and conscious attention. Focusing itself is subrational and so its inner workings are subconscious, but it appears to select the thoughts it sends to our attention by filtering out repetitive signals and calling attention to novel ones. We can only apply reasoning to thoughts under attention, though we can draw on our peripheral awareness of things out of focus to bring them into focus. While focus works automatically to bring interesting items to our attention, we have considerable conscious control to keep our attention on anything already there.
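The novelty-filtering behavior attributed to focus can be sketched as follows. This is a deliberate oversimplification, since real habituation is graded rather than all-or-nothing, but it captures the filtering of repetitive signals:

```python
# A deliberately oversimplified sketch: novel signals pass, repeats are filtered.

def focus(signal_stream):
    """Promote each signal to attention only the first time it appears."""
    seen = set()
    attended = []
    for signal in signal_stream:
        if signal not in seen:   # novel: bring it to attention
            attended.append(signal)
            seen.add(signal)     # thereafter treated as repetitive
    return attended
```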

Drives are another special kind of subrational skill that can feed consciousness through data channels with qualia of their own. A drive is logically distinct from the other subrational skills in that it creates a psychological need, a “negative state of tension”, that must be satisfied to alleviate the tension. Drives are a way of reducing psychological or physiological needs to abstractions that can be used to influence reasoning, to motivate us:

A motive is classified as an “intervening variable” because it is said to reside within a person and “intervene” between a stimulus and a response. As such, an intervening variable cannot be directly observed, and therefore, must be indirectly observed by studying behavior.5

Just rationally thinking about the world using models or perspectives doesn’t by itself give us a preference for one behavior over another. Drives solve that problem. While some decisions, such as whether our heart should beat, are completely subconscious and don’t need motivation or drive, others are subconscious yet can be temporarily overridden consciously, like blinking and breathing. These can be called instinctive drives because we start to receive painful feedback if we stop blinking or breathing. Others, like hunger, require a conscious solution, but the solution is still clear: one has to eat. Emotions have no single response that can resolve them, but instead provide nuanced feedback that helps direct us to desirable objectives. Our emotional response is very context-sensitive in that it depends substantially on how we have rationally interpreted, modeled and weighed our circumstances. But emotional response itself is not rational; an emotional response is beyond our conscious control. Since it depends on our rational evaluation of our circumstances, we can ameliorate it by reevaluating, but our emotions have access to our closely-held (“believed”) models and can’t be fooled by those we consider only hypothetically.

We have more than just one drive (to survive) because our rational interactions with the world break down into many kinds of actions, including bodily functions, making a living, having a family, and social interactions.6 Emotions provide a way of encoding beneficial advice that can be applied by a subjective, i.e. conscious, mind that uses models to represent the world. In this way, drives can exert influence without simply forcing a prescribed instinctive response. And it is not just “advice”; emotions also insulate us from being “reasonable” in situations where rationality would hurt more than help. Our faces betray our emotions so others can trust us.7 Romantic love is a very useful subrational mechanism for binding us to one other person as an evolutionary strategy. It can become frustratingly out of sync with rational objectives, but it has to have a strong, irrational, even mad, pull on us if it is to work.8

Although our conscious awareness and attention exist to support rationality, this doesn’t mean people are rational beings. We are partly rational beings who are driven by emotions and other drives. Rather than simply prescribing the appropriate reaction, drives provide pros and cons, which allow us to balance our often conflicting drives against each other by reasoning out consequences of various solutions. For any system of conflicting interests to persist in a stable way, one has to develop rules of fair play or each interest will simply fight to the death, bringing the system down. Fair play, also known as ethics, translates to respect: interests should respect each other to avoid annihilation. This applies to our own competing drives and interpersonal relationships. The question is, how much respect should one show, on a scale of me first to me last? Selfishness and cooperation have to be balanced in each system accordingly. The ethical choice is presumably one that produces a system that can survive for a long time. And living systems all embrace differing degrees of selfishness and cooperation, proving this point. Since natural living systems have been around a long time, they can’t be unethical by this definition, so any selfishness they contain is justified by this fact. Human societies, on the other hand, may overbalance either selfishness or cooperation, leading to societies that fail, either by actually collapsing or by under-competing with other societies, which eventually leads to their replacement.
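The idea of drives supplying pros and cons that reason then balances can be put schematically. The drives and weights below are invented; the point is that no single drive dictates the outcome, and the balance shifts as weights change.

```python
# A schematic sketch with invented drives and weights.

def evaluate(option, drives):
    """Sum each drive's weighted appraisal of an option (its pros and cons)."""
    return sum(weight * appraise(option) for appraise, weight in drives)

# Each drive is (appraisal function, weight); both are invented here.
hunger  = (lambda option: 1.0 if option == "eat" else 0.0, 0.7)
fatigue = (lambda option: 1.0 if option == "sleep" else 0.0, 0.5)

def choose(options, drives):
    """Reason balances conflicting drives rather than obeying any one."""
    return max(options, key=lambda option: evaluate(option, drives))
```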

And so it is that our conscious awareness becomes populated with senses, memories, emotions, language, etc., which are then focused by our power of attention for the consideration of our power of reasoning. Of this Steven Pinker says:

The fourth feature of consciousness is the funneling of control to an executive process: something we experience as the self, the will, the “I.” The self has been under assault lately. The mind is a society of agents, according to the artificial intelligence pioneer Marvin Minsky. It’s a large collection of partly finished drafts, says Daniel Dennett, who adds, “It’s a mistake to look for the President of the Oval Office of the brain.”
The society of mind is a wonderful metaphor, and I will use it with gusto when explaining the emotions. But the theory can be taken too far if it outlaws any system in the brain charged with giving the reins or the floor to one of the agents at a time. The agents of the brain might very well be organized hierarchically into nested subroutines with a set of master decision rules, a computational demon or agent or good-kind-of-homunculus, sitting at the top of a chain of command. It would not be a ghost in the machine, just another set of if-then rules or a neural network that shunts control to the loudest, fastest or strongest agent one level down.9
The reason is as clear as the old Yiddish expression, “You can’t dance at two weddings with only one tuches.” No matter how many agents we have in our minds, we each have exactly one body. 10

While it may only be Pinker’s fourth feature, it is the whole reason for consciousness. We have a measure of conscious awareness and control over our subrational skills only so that they can help with reasoning and thereby allow us to make decisions. This culmination into a single executive control process is a logical necessity given one body, but that it should be conscious or rational is not so much necessary as useful. Rationality is a far more effective way to navigate an uncertain world than habit or instinct. Perhaps we don’t need to create a model to put one foot in front of the other or chew a bite of food. But paths are uneven and food quality varies. By modeling everything in many degrees of detail and scope, we can reason out solutions better than the more limited heuristic approaches of subrational skills can. Reasoning brings power, but it can only work if the mind can manage multiple models and map them to and from the world, and that is a short description of what consciousness is. Consciousness is the awareness of our senses, the creation (modeling) of worlds based on them, and the combined application of rational and subrational skills to make decisions. Our decisions all have some degree of rational oversight, though we can, and do, grant our subrational skills (including learned behaviors) considerable free rein so we can focus our rational energies on more novel aspects of our circumstances.

Putting the shoe on the other foot, could reasoning exist robotically without the inner life which characterizes consciousness? No, because what we think of as consciousness is mostly about running simulations on models we have created to derive implications and reactions, and measuring our success with sensory feedback. It would feel correct to us to label a robot doing those things as conscious, and it would be able to pass any test of consciousness we cared to devise. It, like us, would metaphorically have only one foot in reality while its larger sense of “self” would be conjecturing and tracking how those conjectures played out. For the conscious being, life is a game played in the head that somewhat incidentally requires good performance in the physical world. Of course, evolved minds must deliver excellent performance as only the fittest survive. A robot consciousness, on the other hand, could be given different drives to fit a different role.

To summarize, one can draw a line between conscious beings and those lacking consciousness by dividing thoughts into a conceptual layer and the support layers beneath it. In the conceptual layer, information has been generalized into packets called concepts, which are organized into models that gather together the logical relationships between concepts. The conceptual layer itself is an abstraction, but it connects back to the real world whenever we correlate our models with physical phenomena. This ability to correlate is another major subrational skill, though it can be considered a subset of our modeling ability. Beneath the conceptual layer are preconceptual layers or modules, which consist of both information and algorithms that capture patterns in ways that have proven useful. While the rational mind only sees the conceptual layer, some subrational modules use both preconceptual and conceptual data. Emotions are the most interesting example of a subrational skill that uses conceptual data: to arrive at an emotional reaction we have to reason out whether we should feel good or bad, and once we have done that, we experience the feeling so long as we believe the reasoning (though feelings will fade if their relevance does). Only if our underlying reasoning shifts will our feelings change. This will happen quickly if we discover a mistake, or slowly as our reasoned perspective evolves over time.

One can picture the mind then as a tree, where the part above the ground is the conceptual layer and the roots are the preconceptual layers. Leaves are akin to concepts and branches to models. Leaves and branches are connected to the roots and draw support from them. The above-ground, visible world is entirely rational, but reasoning without connecting back to the roots would be all form and no function. So, like a tree “feeling” its roots, our conscious awareness extends underground, anchoring our modeled creations back to the real world.

Key insights of my theory

The key insights as I see them:

1. Descartes was right about dualism
2. We underappreciate the impact of evolution on the mind
3. We underappreciate the computational nature of the mind
4. Consciousness exists to facilitate reasoning
5. Consciousness is a simplified, “cartoon”-like version of reality with its own feel
6. Minds reason using models that represent possibilities
7. Reasoning is fundamentally an objective process that manages truth and knowledge
8. We really have free will

Insight 1. Descartes was right about dualism – mind and matter have separate kinds of existence. He made the false assumption that the mind is a substance, but then, he had no scientific basis for distinguishing mental from physical. We do have a basis now, but no one, so far as I can tell, has pointed it out as such. I will do so now. Mind and body, or, as I will refer to them, the mental (or ideal) and physical, are not separate in the sense of being different physical substances, but in the sense of being different independent kinds of existence that don’t preclude each other, but can affect each other. The brain and everything it does has a physical aspect, but some of the things it does, e.g. relationships and ideas, have an ideal, or mental, aspect as well. The mechanics of how a brain thinks is physical, but the ideas it thinks, viewed abstractly, are mental. You could say the idea that 1+1=2 exists regardless of whether any brain thinks about it. So our experience of mind is physical, but to the degree that our minds use relationships and ideas as part of that experience (using a physical representation), those relationships and ideas are also mental (in that they have an abstract meaning). The brain leverages mental relationships analogously to the way life leverages chemicals that have different physical properties, except that mental relationships have no physical properties like chemicals but instead impact the physical world through feedback and information processing as a series of physical events. As with chemicals, the net effect is that the complexity of the physical world increases.

Only abstract relationships count as mental, where “abstract” refers to the idea of indirect reference, which is a technique of using one thing to represent or refer to another. A physical system, like a brain or a computer, that implements such techniques has all sorts of physical limitations on the scope and power of those representations, but, like a Turing machine, any implementation capable of performing logical operations on arbitrary abstract relationships can in principle compute anything in the ideal world. In other words, there are no “mysterious” ideas beyond our comprehension, though some will exceed our practical capacity. The confusion between physical and mental that has dogged philosophy and science for centuries only continues because we have not been clearly differentiating the brain from what it does. The brain implements a biological computer physically, but what it does is represent relationships as ideas. Ideas are not dependent on any one implementation, which is why an idea in an author’s head can be represented with words and reconstructed in a reader’s head. The three forms are very different, but we know that they share important aspects.

All abstract relationships exist (ideally) whether any brain (or computer) thinks about them or not. So the imaginary world is a much broader space than the physical, if you will, as it essentially parameterizes possibility – thoughts are not locked down in all their specifics but generalize to a range of possibilities. Consider a computer program, which is a simple system that manipulates abstract relations. A program executing on a computer will go through a very real set of instructions and process specific data from inputs to outputs. But a program’s capability can be nearly infinite if it is capable of handling many kinds of inputs across a whole range of situations. The program “itself” doesn’t know this, but the programmer does. Our minds work like the programmer; they manage an immense range of possibilities. We see these possibilities in general terms, then add specifics (provide inputs) to make them more concrete, and ultimately a few of them are realized in the real world (i.e. match up to things there). In a very real sense, we live our lives principally in this world of possibilities and only secondarily in the physical world. I’m not speaking fancifully about daydreaming about dragons and unicorns, though we can do that, but about whether the mail has come yet or rain is likely. Whatever actually happens to us immediately becomes the past and doesn’t matter anymore, except with regard to how it will help us in the future. Of course, knowing that we have gotten the mail or that it has rained matters a lot to how we plan for the future, so we have to track the past to manage the future. But nostalgia for its own sake matters little, and so it is no big surprise that our memory of past events dissipates rather quickly (on the theory that our memory has evolved to intentionally “forget” information that could do more to distract effective decision making than help it). My point, though, is that we continually imagine possibilities.
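The programmer analogy can be made concrete with a toy program, borrowing the mail and rain examples: one general procedure spans a whole space of possibilities, while any single run realizes just one of them.

```python
# A toy program: one general procedure over a space of possibilities.

def plan_trip(mail_has_come, raining):
    """A general plan covering every combination of circumstances."""
    if raining and not mail_has_come:
        return "stay inside"
    if mail_has_come:
        return "fetch the mail"
    return "wait"

# The procedure "parameterizes possibility": four situations are covered...
possibilities = {(m, r): plan_trip(m, r)
                 for m in (True, False) for r in (True, False)}

# ...but only one of them is ever realized on a given afternoon.
realized = plan_trip(mail_has_come=True, raining=False)
```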

Insight 2. We underappreciate the impact of evolution on the mind. Darwin certainly tried to address this. “How does consciousness commence?” Darwin wondered. It was, and is, a hard question to answer because we still lack any objective means of studying what the mind does (as the mind is only visible from within, subjectively). Pavlov and Skinner proposed that behaviorism could explain the mind as nothing more than conditioning, which sounded good at first but didn’t explain all that minds do. Chomsky refuted it in a rebuttal to Skinner’s “Verbal Behavior” by explaining how language acquisition leverages innate linguistic talents. And Piaget extended the list of innate cognitive skills by developing his staged theory of intellectual development. We now know that thinking is much more than conditioned behavior; it employs reasoning and subconscious know-how. But evolution tells us more still. The mind is the control center of the body, so the direct feedback from evolution is more on how the mind handles a situation than the parts of the body it used to do it. The body is a multipurpose tool to help the mind satisfy its objectives. Mental evolution, therefore, leads somatic evolution. However, since we don’t understand the mechanics of the mind we have done less to study the mind than the body, which is just more tractable. Although understanding the full mechanics of mind is still a long way off, by looking at which selection pressures created demand for which cognitive skills, evolutionary psychologists can explain those skills. Those explanations involve the evolution of both specialized and general purpose software and hardware in the brain, with consciousness itself being the ultimate general purpose coordinator of action.

Insight 3. We underappreciate the computational nature of the mind. As I noted in The Certainty Engine, what the mind does is computational, if computation is taken to be any information management process. But knowing that something is computed and knowing how it is done are very different things. We still only have vague ideas about the mechanisms, but we can still deduce much about how it works just by knowing it is computational. We know the brain doesn’t use digital computing, but there are many approaches to information processing and the brain leverages a number of them. Most of the deductions I will promote here center around the distinction between computations done consciously (and especially under conscious attention) and those done subconsciously. We know the brain performs much information processing of which we have no conscious awareness, including vision, associative memory lookup, language processing, and metabolic regulation, to name a few kinds. We know the subconscious uses massively parallel computing, as this is the only way such tasks could be completed quickly and broadly enough. Further, we know that the conscious mind largely feels like a single train of thought, though it can jump around a lot and can perceive many kinds of things at the same time without difficulty.
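The contrast between massively parallel subconscious processing and the single conscious train of thought can be sketched in code. The “modules” below are invented stand-ins for vision, memory, and language, each running concurrently while a single serial loop consumes their results one at a time:

```python
from concurrent.futures import ThreadPoolExecutor

# Invented stand-ins for specialized subconscious modules.
def vision(scene):
    return f"saw {scene}"

def memory(cue):
    return f"recalled {cue}"

def language(sound):
    return f"parsed {sound}"

inputs = [(vision, "a red door"),
          (memory, "where the key is"),
          (language, "'come in'")]

# Subconscious: many specialized processes run concurrently...
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(fn, arg) for fn, arg in inputs]
    results = [f.result() for f in futures]

# ...while the conscious "train of thought" takes up one item at a time.
attended = []
for finding in results:
    attended.append(finding)  # a single serial stream of attention
```

This is only an architectural cartoon, but it captures the asymmetry: the heavy lifting happens in parallel, out of sight, and only finished results reach the serial stream.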

Insight 4. Consciousness exists to facilitate reasoning. Consciousness exists because we continually encounter situations beyond the range of our learned responses, and being able to reason out effective strategies works much better than not being able to. We can do a lot on “autopilot” through habit and learned behavior, but it is too limited to get us through the day. Most significantly, our overall top-level plan has to involve prioritizing many activities over short and long time frames, which learned behavior alone can’t do. Logic, inductive or deductive, can do it, but only if we come up with a way to interpret the world in terms of propositions composed of symbols. This is where a simplified, cartoon-like version of reality comes into play. To reason, we must separate relevant from irrelevant information, and then focus on the relevant to draw logical conclusions. So we reduce the flood of sensory inputs continually entering our brains into a set of discrete objects we can represent as symbols we can use in logical formulas (here I don’t mean shaped symbols but referential concepts we can keep in mind). The idea that hypothetical internal cognitive symbols represent external reality is called the Representational Theory of Mind (RTM), and in my view is the critical simplification employed by reasoning, but it is not critical to much of subconscious processing, which does not have this need to simplify. Although we can generalize to kinds using logical buckets like bird or robin, we can also track all experience and draw statistical inferences without any attempt at representation at all, yielding bird-like or robin-like without any actual categories.
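The reduction described above, from a flood of sensory detail to discrete symbols that logic can manipulate, can be sketched as a two-step process. The percepts, the category test, and the single rule are all invented for illustration:

```python
# Step 1: collapse rich, messy percepts into a discrete symbol
# (a referential concept), discarding the irrelevant detail.
percepts = ["small brown feathered thing", "red breast", "hopping on lawn"]

def categorize(percepts) -> str:
    if "feathered" in " ".join(percepts):
        return "bird"
    return "unknown"

symbol = categorize(percepts)

# Step 2: reason over the symbol with a propositional rule
# (here just "bird implies can fly", stored as a lookup).
rules = {"bird": "can fly"}
conclusion = rules.get(symbol)
```

The statistical, nonrepresentational alternative mentioned at the end of the paragraph would instead score the percepts as more or less bird-like without ever committing to the category at all.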

Do we reason using language? Are these symbols words and are the formulas sentences? There is a debate about whether the mind reasons directly in our natural language (e.g. English) or an internal language, sometimes called “mentalese”. Both are partially right but mostly wrong; the confusion comes from failing to appreciate the difference between the conscious and subconscious minds. Language is part of the simplified world of consciousness that tries to turn a gray world into something more black and white that reason can attack (while not incidentally aiding communication). From the conscious side, language-assisted reasoning is done entirely in natural language. We are also capable of reasoning without language, and much of the time we do, but language is not just an add-on capability, it is what pushes human reasoning power into high gear. Animals have had solid reasoning skills (and consequently consciousness) for hundreds of millions of years, so that they could apply cause and effect in ways that matter to them, creating a subjective version of the laws of nature and the jungle. But without language animals can only follow simple chains of reasoning. Language, which evolved in just a few million years, lengthens those chains and adds nesting of concepts. It gives us the ability to reason in a directed way over an arbitrarily abstract terrain. Without inner speech, the familiar internal monolog of our native tongue, we can’t ponder or scheme, we can only manage simple tasks. Sure, we can keep schemes in our heads without further use of language, but language so greatly facilitates rearranging ideas that we can’t develop abstract ideas very far without it. Helen Keller could remember her languageless existence and claimed to be a non-thinking entity during that time. By “thinking” I believe she only meant directed abstract reasoning and all the higher categories of thoughts that brings to mind.

I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect.1

She admits to consciousness, which I argue exists because animals need to reason, yet also felt unconsciousness, in that large parts of her mind were absent, notably intellect (directed abstract reasoning) and will (abstract awareness of desire). So our ability to string ideas together with language vastly extends and generalizes our cognitive reach, accounting for what we think of as human intelligence.

While we could call the part of the subconscious that supports language mentalese, I wouldn’t recommend it, because this support system is not language itself. This massively parallel process can’t be thought of as just a string of symbols; each word and phrase branches out into a massive web of interconnections into a deeply multidimensional space that joins back to the rest of the mind. It follows Universal Grammar (UG), which as laid out by Noam Chomsky is a top-down set of language rules that the subconscious language module supports, but not because it is an internal language but because it has to simplify the output into a form consciousness can use. So natural language is the outer layer of the onion, but it is the only layer we can consciously access, so it is fair to say we consciously reason with the help of natural language, even though we can also do simple reasoning without language. While the part of language-assisted reasoning of which we are consciously aware is entirely conducted in natural language, it is only partially right to say we think in our native tongue because most of the actual work behind that reasoning happens subconsciously. And the subconscious part doing most of the work is not using an internal language at all, though it does use innate mechanisms that support features common to all languages.

So what about linguistic determinism, aka the Sapir-Whorf hypothesis, which states that the structure of a natural language determines or greatly influences the modes of thought and behavior characteristic of the culture in which it is spoken? As with all nature/nurture debates, it is some of each, but with the lion’s share being nature. Natural language is just a veneer on our deep subconscious language processing capacity, but both develop through practice and what we are practicing is the native language our society developed over time. The important point, though, is that thinking is only superficially conscious and consequently only superficially linguistic, and hence only marginally linguistically determined. Words do matter, as do the kinds of verbal constructions we use, so to the extent we guide our thinking process linguistically with inner speech they have an influence. But language is only a high-level organizer of ideas, not the source of meaning, so it does not ultimately constrain us, even though it can sometimes steer us. We can coin new words and idioms, and phase out those that no longer serve as well. So again, just to clarify: while a digital computer can parse language and store words and phrases, this doesn’t even scratch the surface of the deep language processing our subconscious does for us. It is only a false impression of consciousness that the flow of words through our minds reveals anything about the information processing steps that we perform when we understand things or reason with language.

Insight 5. Consciousness is a simplified, “cartoon”-like version of reality with its own feel. We are not zombies or robots, pursuing our tasks with no inner life. Consciousness feels the way it does because the overall mind, which also includes considerable subconscious processing of which we are not consciously aware, cordons off conscious access to subconscious processing not deemed relevant to the role of consciousness. The fact that logic only works in a serial way, with propositions implying conclusions, and the fact that bodies can only do one thing at a time, put information management constraints on the mind that consciousness solves. To develop propositions on which one can apply logic that can be useful in making a decision, one has to generalize commonly encountered phenomena into categories about which one can track logical implications. So we simplify the flood of sensory data into a handful of concrete objects about which we reason. These items of reason, generically called concepts, are internal representations of the mind that can be thought of as pointers to the information comprising them. Our concept of a rock bestows a fixed physical shape, while a quantity of water has a fluid shape. The concept sharp refers to a capacity to cut, which is associated with certain physical traits. Freedom and French are abstract concepts only indirectly connected to the physical world about which we each acquire very detailed, personal internal representations. Consciousness is a special-purpose “application” (or subroutine) within the mind that focuses on the concepts most relevant to current circumstances and applies reason along with habit, learning and intuition to direct the body to take actions one at a time.
The only real role of consciousness is to manage this top-level single-stream logic processing, so it doesn’t need to be aware of, and would only be distracted by, the details that the subconscious takes care of, including sensory processing, memory lookup/recognition, language processing and more. Consciousness needs access to all incoming information upon which it can be useful to apply reason. To do this in real time, the mind preprocesses concepts subconsciously where possible, which is often little more than a memory lookup service, but also includes converting 2-D images into known 3-D objects or converting concepts into linguistic form. We bypass concepts and reason entirely whenever habit, experience and intuition can manage alone, but do so with conscious oversight. Consciousness needs to act continuously and “enthusiastically”, so it is pre-configured to pursue innate desires, and can develop custom desires as well.
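The notion of concepts as pointers can be sketched directly: conscious reasoning passes around a small handle, and the subconscious “memory lookup service” dereferences it only when a detail matters. The store entries below are invented examples:

```python
# Concepts as lightweight handles ("pointers") into a richer store of
# information, consulted only when reasoning needs the detail.
concept_store = {
    "rock":  {"shape": "fixed", "affords": ["throwing", "hammering"]},
    "water": {"shape": "fluid", "affords": ["drinking", "washing"]},
    "sharp": {"capacity": "cuts", "cues": ["thin edge", "hard material"]},
}

def lookup(symbol: str, attribute: str):
    """Dereference a concept: fetch one detail behind the handle."""
    return concept_store.get(symbol, {}).get(attribute)

# Conscious reasoning manipulates the cheap symbol...
symbol = "rock"

# ...and pays for the detail only at the step that needs it.
shape = lookup(symbol, "shape")
```

The design point is that the serial conscious stream stays fast precisely because it shuttles small symbols around and leaves the bulk of the information behind the pointer.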

I call the consciousness subroutine the SSSS for single-stream step selection, because objectively that is what it is for, selecting the one coordinated action at a time for the body to perform next. Our whole subjective world of experience is just the way the SSSS works, and its first person aspect is just a consequence of the simplification of the world necessary to support reason, combined with all the data sources (senses, memory, emotion) that can help in making decisions. Our subjective perspective is only figuratively a projection or a cartoon; it is actually composed of a combination of nonrepresentational data that statistically correlates information and representational data that represents both the real and imagined symbolically through concepts. This perspective evolved over millions of years, since the dawn of animal minds. Though reasoning ultimately leads to a single stream of digital decisions (ones that go one way or another), nothing constrains it from using analog or digital inputs or parallel processing along the way, and it does all these things and more to optimize performance. Conscious experience is consequently a combination of many things happening at once, which only feel like a seamless integrated experience because it would be very nonadaptive if it didn’t. For instance, we perceive a steady, complete field of vision as if it were a photograph because it would be distracting if we didn’t, but actually our eyes are only focused on a narrow circle of central vision; the periphery is a blur, and our eyes dart around a lot, filling in holes and double-checking. The blind spots in our peripheral vision (that form where the optic nerve passes through the retina) appear to have the same color and even pattern of the area around them because it would be distracting if they disturbed the approximation to a photograph. So the software of consciousness tries very hard to create a smooth and seamless experience out of something much more chaotic.
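Objectively, the SSSS is a scheduler: many candidate actions arrive at once, but one body can perform only one coordinated step at a time. A minimal sketch, with invented actions and priorities standing in for the weighing of desires and circumstances:

```python
import heapq

# Single-stream step selection (SSSS) as a toy scheduler: candidate
# actions compete, but exactly one is selected at a time.
# Priorities and actions are invented; lower number = more urgent.
candidates = [(2, "answer the phone"),
              (1, "step away from the curb"),
              (3, "finish the sentence")]

heapq.heapify(candidates)

performed = []
while candidates:
    _, action = heapq.heappop(candidates)  # select exactly one next step
    performed.append(action)               # the single stream of actions
```

The internals (the heap) can be as parallel or as messy as needed; what matters is that the output is a strictly serial stream of steps, which is the constraint the body imposes.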
It is an intentional illusion. It seems like we see a photo, but as we recognize objects we note the fact and start tracking them separately from the background. We can automatically track their shading and lighting from different perspectives without even being aware we are doing it. Colors have distinct appearances both to provide more information we can use and to alert us to associations we have for each color.

While the only purpose of consciousness is to support reasoning, it carries this very rich subjective feel with it because that helps us make the best decisions very quickly. That it seems pleasurable or painful to us is in a way just a side effect of our internal controls that lead us to seek pleasure and avoid pain. This is because consciousness simplifies decision making by reducing complex situations into an emotional response or a preference. Such responses have no apparent rational basis, but presumably serve an adaptive purpose since we have them and evolved traits are always adaptive, at least originally (in the ancestral environment). We just respond emotionally or prefer things a certain way and then can reason starting with those feelings as propositions. Objectively, we can figure out why such responses could be adaptive. For example, hunger makes us want to eat, and, not coincidentally, eating fends off starvation. Libido makes us want sex, and reproduction perpetuates the species. Providing hunger and libido as axiomatic desires to the reasoning process eliminates the need to justify them on rational grounds. Is there a good reason why we should survive or produce offspring? Not really, but if we just happen to want to do things that have that outcome, the mandate of evolution is satisfied. Basically, if we don’t do it someone else will, and more than that, if we don’t do it better they will, in the long run, squeeze us out, so we had better want it pretty bad. So feelings and desires are critical to support reasoning, even though these premises are not based on reason themselves.

This perhaps explains why we feel emotions and desires, but it doesn’t explain why they feel just the way they do to us. This is both a matter of efficiency and logical necessity. From an efficiency standpoint, for an emotion or innate desire to serve its purpose we need to be able to process it immediately and appropriately, but also simultaneously with all other emotions, desires, sensory inputs, memory, and intuition that apply in each moment. To accomplish this, all of these inputs have independent input channels into the conscious mind, and to help us tell them all apart, each has a distinct quale (kwol-ee, the way it feels; the plural is qualia). From a logical necessity standpoint, for reasoning to work appropriately the quale should influence us directly and proportionally to the goal, independent of any internally processed factors. Our bodies require foods with appropriate levels of fat, carbohydrates, and protein, but subjectively we only know smell, taste and hunger (food labels notwithstanding). These senses and our innate preferences directly support reasoning where a detailed analysis (e.g. a list of calories, fat and protein) would not. Successful reproduction requires choosing a fit mate, ensuring that mates will stay together for a long time, and procreating. This gets simplified down to feelings of sex appeal, love, and libido. Based on any kind of subsidiary reasoning couples would never stay together; they need an irrational subconscious mandate, i.e. love.

Nihilists reject or disregard innate feelings and preferences, presumably on philosophical or rational grounds. While this is undoubtedly reasonable and consequently philosophical, we can’t change the way we are wired just by willing it so. We will want to heed our desires, i.e. to pursue happiness, although unlike other animals our facility with directed abstract thought gives us the freedom to reason our way out of it or, potentially, to reach any conclusion we can imagine. Evolution has done its best to keep our reasoning facility in thrall to our desires so that we focus more on surviving and procreating and less so on contemplating our navels, but humans have developed a host of vices which can lead us astray, with technology creating new ones all the time. If vices represent a failure of our desires to keep us focused on activities beneficial to our survival, virtues oppose them by emphasizing desires or values that are a benefit, not just to ourselves but our communities. We can consequently conclude that the meaning of life is to reject nihilism because it is a pointless and vain attempt to supersede our programming, and to embrace virtuous hedonism as its opposite, to exemplify what reason and intelligence can add to life on earth.

Michael Graziano explains well how attention works within consciousness, but he says the motivation to simplify the world down to a single stream is that: “Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others.”2 His sentiment is right, his reason is wrong; the brain has more than enough computational capacity to fully process all the parallel streams hitting the senses and all the streams of memory generated in response, but it does all this subconsciously. Massive parallel processing is the strength of the subconscious. The conscious subroutine is intentionally designed to produce a single stream of actions since there is only one body, and so this is the motivation to simplify and focus attention and create a conscious “theater” of experience. A corollary key insight to my points on consciousness is that most of our minds are subconscious and contain specialized and generalized functions that do all the computationally-intensive stuff.

Insight 6. Minds reason using models that represent possibilities. The mind’s job is to control the body by analyzing current circumstances to compute an optimal response. No ordinary machine can do this; it requires a system that can collect information about the environment, compare it to stored information, and based on matches, statistics, and rules of logical entailment select an appropriate reaction. Matches and statistics are the primary drivers of associative memory, which not only helps us recognize objects on sight and remember information learned in the past given a few reminders, but also supports more general intuition about what is important or relevant to the present situation. While this information is useful, it is not predictive. Since the physical world follows pretty rigid laws of nature, it is possible to predict with near certainty what will happen under controlled circumstances, so an animal that had a way to do this would have a huge advantage over one that could not. Beyond that, once animals developed predictive powers, a mental arms race ensued to do it better. Nearly all animals can do it, and we do it best, but how?

The answer lies entirely in the words “controlled circumstances”. We create a mental model, which is an imaginary set of premises and rules. A physical model, in particular, contains physical objects and rules of cause and effect. Cause and effect is a way of describing laws of nature as they apply at the object level within a physical model. So gravity pulls objects down, object integrity holds objects together differently for each substance, and simple machines like ramps, levers and wheels can impact the effort required. And we recognize other animals as agents employing their own predictive strategies. Within a model, we can achieve certainty: rules that always apply and causes that always produce expected effects. The rules don’t have to be completely certain (deductive); they can be highly likely (inductive). But either way, they work, so once we decide to use a given model in a real-world situation, we can act quickly and effectively. And causal reasoning can be chained to solve complex puzzles. While we can control circumstances with models, the real world will never align precisely with an idealized model, so how we choose the models is as important as how we reason with them. Doubt will creep in if our results fall short of expectations, which can happen if we choose an inappropriate model, or if a model is appropriate but inadequately developed. For every situation, we select one or more models from a constellation of models, and we apply the rules and act with an appropriate degree of certainty based on our confidence in picking the model, its accuracy, and our ability to keep it aligned with reality.
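Chained causal reasoning within a model can be sketched as forward chaining over premises and cause-and-effect rules. The facts and rules below are invented; the point is that within the model, causes always produce their effects, so conclusions follow with certainty that belongs to the model, not to the world:

```python
# A mental model: premises plus cause-and-effect rules.
premises = {"ramp in place", "barrel at top"}
rules = [
    ({"ramp in place", "barrel at top"}, "barrel rolls down"),
    ({"barrel rolls down"}, "barrel reaches bottom"),
]

# Forward chaining: apply every rule whose causes are satisfied,
# and repeat until no new effects appear (chained causal reasoning).
facts = set(premises)
changed = True
while changed:
    changed = False
    for causes, effect in rules:
        if causes <= facts and effect not in facts:
            facts.add(effect)
            changed = True
```

If the real barrel fails to roll, the derivation is not invalidated; as the text goes on to argue, it means the wrong model was selected, and confidence in that selection must be revised.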

Mental models are mostly subrational. A full definition of subrational will be presented later in Concepts and Models, but for now think of it as a superset of everything subconscious plus everything in our conscious awareness that is not a direct object of reasoning. Models themselves need not be based in reason and need not enter into the focus of conscious reasoning. We can reason with models supported entirely by hunches, but we can also if desired take a step back mentally and use reason to list the premises, rules, and scope of a model we have been using implicitly up to that point. However, as we will see in the next insight, doing this only rationalizes the subrational, which is to say it provides another way of looking at them that is not necessarily better or even right (to the extent an interpretation of something can be said to be right or wrong).

Insight 7. Reasoning is fundamentally an objective process that manages truth and knowledge. Objective principally means without bias and agreeable to anyone based on a preponderance of the evidence, and subjective is everything else, namely that which is biased or not agreeable to everyone. Science tries to achieve objectivity by using instruments for measurements, checking results independently, and using peer review to establish a level of agreement. While these are great ways to eliminate bias and foster agreement, we have no instruments for seeing thoughts or checking them: all our thoughts are inherently subjective. This is an obstacle to an objective understanding of the mind. Conventionally, science deals with this by giving up: all evidence of our own thought processes is considered inadmissible, and science consequently has nothing to say. Consider this standard view of introspection:

“Cognitive neuroscientists generally believe that objective data is the only reliable kind of evidence, and they will tend to consider subjective reports as secondary or to disregard them completely. For conscious mental events, however, this approach seems futile: Subjective consciousness cannot be observed ‘from the outside’ with traditional objective means. Accordingly, some will argue, we are left with the challenge to make use of subjective reports within the framework of experimental psychology.” 3

This wasn’t always so. The father of modern psychology, Wilhelm Wundt, was a staunch supporter of introspection, which is the subjective observation of one’s own experience. But its dubious objectivity caught up with it, and in 1912 Knight Dunlap published an article called “The Case Against Introspection” that pointed out that no evidence supports the idea that we can observe the mechanisms of the mind with the mind. I agree, we can’t. In fact, I propose the SSSS process supporting consciousness filters our awareness to include only the elements useful to us in making decisions and consequently blocks our conscious access to the underlying mechanisms. So we don’t realize from introspection that we are machines, albeit fancy ones. But we have figured it out scientifically, using evolutionary psychology, the computational theory of mind, and other approaches within cognitive science.

The limitations of introspection don’t make it useless from an objective standpoint; they only mean we need to interpret it in light of objective knowledge. So we can, for example, postulate an objective basis for desires and then test them for consistency using introspection. We should eventually be able to eliminate introspection from the picture, but most of our understanding of consciousness and the SSSS at this point comes from our use of it, so we can’t ignore what we can glean from that.

While our whole experience is subjective, because we are the subject, that doesn’t mean a subset of what we know isn’t objective. We do know some things objectively, and we know we know them because we are using proven models. And we know the degree of doubt we should have in correlating these models to reality because we have used them many times and seen the results. It is usually more important for consciousness to commit to actions without doubt than to suffer from analysis paralysis, though of course for complex decisions we apply more conscious reasoning as appropriate.

We have many general-purpose models we trust, and we generally know how our models match up with those other people use and how much other people trust them. Since objectivity is a property of what other people think, i.e. agreeable to all and not subjective, we need to have a good idea of what models we are using and the degree to which other people use the same models (i.e. similar models; we each instantiate models differently). If our models are subrational, how can we ever achieve this? For the most part, it is done through an innate talent called theory of mind:

Theory of mind (often abbreviated ToM) is the ability to attribute mental states — beliefs, intents, desires, pretending, knowledge, etc. — to oneself and others and to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own.

So we subrationally form models and can intuit much about the models of others using this subrational and largely subconscious skill. Our subconscious mind automatically does things for us that would be too laborious to work out consciously. Consciousness evolved more to support quick reactions than to think things through, though humans have developed quite a knack for that. But whether these skills are subconscious, subrational, or rational, the rational conscious mind directs the subrational and subconscious and thus takes credit for the capacities of the whole mind. This gives us the ability to distinguish objective knowledge from subjective opinion. Consider this: when a witness is asked to recount events under oath, the jury is expecting to hear objective truth. They will assess the evidence they hear using theory of mind on the witness to understand his models and to rule out compromising factors such as mental capacity, partiality, memory or motives. People read each other very well, and it is hard to lie convincingly because we have subconscious tells that others can read, which is a mechanism that evolved to allow trust to develop despite our ability to lie. The jury expects the witness can record objective knowledge similarly to a video camera and that they can acquire this knowledge from him. Beyond juries, we know our capacities, partiality, memory, and motives well enough to know whether our own knowledge is objective (that is, agreeable to anyone based on a preponderance of the evidence). So we were objective long before scientific instruments, independent confirmation or peer review came along. Of course, we know that science can provide more precision and certainty using its methods than we can with ours, but science is more limited in the phenomena to which it applies.
People have always understood the world around them to a high degree of objectivity that would stand up to considerable scrutiny, despite also having many personal opinions that would not. We tend not to give ourselves much credit for that because we are not often asked to separate the objective and subjective parts.

Objectivity does not mean certainty. Certainty only applies to tautologies, which are concepts of the ideal world, not the physical world. A tautology is a proposition that is necessarily true, or, put another way, is true by definition. If one sets up a model, sometimes called a possible world, in which one defines what is true, then within that possible world, everything that is true is necessarily true, by definition. So “a tiger is a cat” is certain if our model defines tiger as a kind of cat. Often to verify whether or not one has a tautology one has to clarify definitions, i.e. clarify the model. This process will identify rhetorical tautologies. If we say that the rules of logic apply in our models, and we generally will, then anything logically implied is also necessarily true and a logical tautology. The law of the excluded middle, for example, demonstrates a logical necessity by saying that “A or not A” is necessarily true. Or, more famously, a syllogism says that “if A implies B and B implies C, then A implies C”. While certainty is therefore ideally possible, we can’t see into the future in the physical world, so physical certainty is impossible. We also can never be completely certain about the present or past because we depend on observation, which is both indirect and unprovable. So physical objectivity is not about true (i.e. ideal) certainty, but it does relate to it. By reasoning out the connection, we can learn the definition and value of physical objectivity.
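Necessary truth within a possible world can be checked mechanically: a proposition is a logical tautology if it holds under every assignment of truth values. A small sketch verifying the two examples above (the helper function is invented; "X implies Y" is encoded as "not X or Y"):

```python
from itertools import product

def is_tautology(formula, variables: int) -> bool:
    """True if the formula holds under every truth assignment."""
    return all(formula(*values)
               for values in product([True, False], repeat=variables))

# Law of the excluded middle: A or not A.
excluded_middle = is_tautology(lambda a: a or not a, 1)

# Syllogism: (A implies B) and (B implies C) together imply (A implies C).
syllogism = is_tautology(
    lambda a, b, c: not ((not a or b) and (not b or c)) or (not a or c), 3)
```

Both come out necessarily true by exhausting the model’s possibilities, which is exactly the sense in which certainty is available ideally but never physically: the exhaustion is over defined assignments, not over future observations.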

To be physically objective means to establish one or more ideally objective models and to correlate physical circumstance to these models. Physical objectivity is limited by the quality of that correlation, both in how well the inputs line up and how closely the output behavior of the ideal model(s) ends up matching physical circumstances. The innate strategy of minds is to presume perfect correlation as a basis for action and to mop up mistakes afterward. Also, importantly, minds will try to maximally leverage learned behavior first and improvised behavior second. That is, intuitive/automatic first and reasoned/manual second. So, for example, I know how to open the window next to me as I have done it before. I feel certain I can apply that knowledge now to open the window. But my certainty is not really about whether the window will open, it is about the model in my mind that shows it opening when I flip the latch and apply pressure to raise it. In that model, it is a certainty because these actions necessarily cause the window to open. If the window is stuck or even permanently epoxied, it doesn’t invalidate the model, it just means I used the wrong model. So if the window is stuck how do I mop up from this mistake? The models we apply sit within a constellation of models, some of which we hold in reserve from past experience, some of which we have consciously available from premeditation, and some of which we won’t consciously work out until the need arises. For every model we have a confidence level, the degree to which we think it correlates to the situation at hand. This confidence level is mostly a function of associative memory, as we subrationally evaluate how well the model premises line up with the physical circumstances. In the case of habitual or learned behavior, we do this automatically. So if this window doesn’t open as I thought it would, from learned behavior I will push harder and use quick thrusts if needed to try to unjam it.
Whether it works or not, I will update my internal model of the behavior of stuck windows accordingly, but in this case I didn’t directly employ reasoning; I just leveraged learned skills. But the mind will rather seamlessly maintain both learned and reasoned approaches for handling situations. This means it will maintain models about everything because it will frequently encounter situations where learned behavior is inadequate but reasoning with models that include causation works.
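
The "constellation of models with confidence levels" can be caricatured in a few lines of code. This is my own toy sketch, with invented model names and numbers, showing the pattern: pick the highest-confidence model, act, and discount its confidence when it fails so a fallback model wins the next round:

```python
# Toy sketch (all names and numbers are hypothetical) of selecting a model
# from a constellation by confidence, then discounting confidence on failure.

models = {
    "flip latch and lift":    {"confidence": 0.95, "action": "lift gently"},
    "window is stuck":        {"confidence": 0.30, "action": "push harder, quick thrusts"},
    "window is painted shut": {"confidence": 0.05, "action": "stop and find tools"},
}

def choose_model(models):
    # Presume the best-correlating model as a basis for action.
    return max(models, key=lambda name: models[name]["confidence"])

def record_outcome(models, name, succeeded):
    # Crude associative update: reinforce on success, discount sharply on failure.
    factor = 1.1 if succeeded else 0.2
    models[name]["confidence"] = min(1.0, models[name]["confidence"] * factor)

first = choose_model(models)          # "flip latch and lift"
record_outcome(models, first, False)  # the window didn't open: mop up afterward
fallback = choose_model(models)       # now "window is stuck" has the most confidence
```

The update rule here stands in for the associative-memory evaluation described above; a real mind presumably does something far subtler, but the select-act-discount loop is the shape of the strategy.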

Just as we don’t generally need to separate the objective and subjective parts of our knowledge, we don’t generally need to separate learned behavior from reasoned behavior. It is important to the unity of our subjective experience that we perceive these very different things as fitting seamlessly together. But this introduces another factor that makes it hard to identify our internal models: learned behavior doesn’t need models or a representational approach, at least not of the simplified form used for logical analysis. We can, potentially, remember every detail of every experience and pull up relevant information exactly when needed in an appropriate way without any recourse to logic or reason at all. So what kind of existence do our ideal models have? While I think we do persist many aspects of many models in our memories as premises and rules, we tend not to be too hard and fast about them, and they blend into each other and our overall memory in ways that let us leverage their strengths without dwelling on their weaknesses. Even as we start to reason with them, we only require a general sense that their premises and rules are sound and well defined, and if pressed we may learn they are not, at which point we will fill them out until they are as certain as we like. We can, therefore, conclude that while we use objectivity all the time, it is usually in close conjunction with subjectivity and inseparable from it. To be convincing, we need to develop ways to isolate objective and subjective components.

Insight 8. We really have free will. We already know (intuitively) that we have free will, so I shouldn’t take any credit for this one. But I will because a preponderance of the experts believe we don’t, which is a consequence of their physicalist perspective. Yes, the universe is deterministic and everything happens according to fixed laws of nature, so we are not somehow changing those laws in our heads to make nature unfold differently. What happens in our heads is in fact part of its orderly operation; we are machines that have been tuned to change the world in the ways that we do. So far, that suggests we don’t have free will but are just robots following genetic programming. But several things happen to create genuine freedom. Freedom can’t mean altering the future from a preordained course to a new one because the universe is deterministic and each moment follows the preceding according to fixed laws of nature. But since the universe has always been this way and we nevertheless feel like we have free will, freedom must mean something else.

Freedom really has to do with our conception of possible futures. We imagine the different outcomes from different possible courses of action. These are just imaginary constructions (models with projected consequences) with no physical correlate, other than the fact that they exist in our brains. But we think of them as being possible futures even though there is really only one future for us. So our sense of free will is rooted in the idea that what we do changes the universe from its predetermined course. We don’t, but two factors explain why our perspective is indistinguishable from a universe in which we could change the future: unpredictability and optimized selection. Regarding unpredictability, neither we nor anyone could ever know for sure what we are going to do; only an approximate prediction is possible. Although thinking follows the laws of nature, the process is both complex and chaotic, meaning that any factor, even the smallest, could change the outcome. So every decision, even the simplest, could never be predicted with certainty. The second factor is optimized selection, which is a mental or computational process that uses an optimization algorithm to choose a strategy that has the best chance of producing a physical effect. First, the algorithm collects information, which is data that has more value for some purpose than white noise has. For example, sensory information is very valuable for assessing the current environment. And our innate preferences, experience, state of mind, and whim (which is a possibly unexplainable preference) are fed to the algorithm as well. This mishmash of inputs is weighed and an optimal outcome results. If the optimal action seems insufficiently justified, we will pause or reconsider as long as it takes until the moment of sufficient justification arrives, and then we invariably perform that action. At that moment the time for free will has passed; the universe is just proceeding deterministically. 
We exercised our free will just before that moment, but before I explain why, I have a few more comments on unpredictability and optimization algorithms.
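
Optimized selection as described above can be sketched computationally. The following is my own hypothetical illustration, not a model of actual cognition: candidate actions are scored from a mishmash of weighted inputs (senses, preferences, and whim modeled as noise), and action is taken only once some candidate clears a justification threshold:

```python
import random

# Hypothetical sketch of "optimized selection": weigh a mishmash of inputs,
# score candidate actions, and act only once one is sufficiently justified.
# All weights, inputs, and the threshold are invented for illustration.

def score(action, senses, preferences, whim):
    return (senses.get(action, 0.0) * 0.5
            + preferences.get(action, 0.0) * 0.4
            + whim.get(action, 0.0) * 0.1)

def decide(actions, senses, preferences, threshold=0.5, seed=None):
    rng = random.Random(seed)
    while True:
        # Whim: a possibly unexplainable preference, modeled here as noise.
        whim = {a: rng.random() for a in actions}
        best = max(actions, key=lambda a: score(a, senses, preferences, whim))
        if score(best, senses, preferences, whim) >= threshold:
            return best  # sufficiently justified: the moment of action arrives
        # Otherwise pause and reconsider on the next pass.

choice = decide(["open window", "leave it"],
                senses={"open window": 0.9, "leave it": 0.2},
                preferences={"open window": 0.7, "leave it": 0.3},
                seed=42)
```

Note that even this trivial algorithm is unpredictable from outside without knowing its internal state (the seed), which is the point: complexity and sensitivity to small factors make the decision impossible to call in advance with certainty.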

The weather is unpredictable but lacks optimized selection because it is undirected. A robot trained to use learned behavior alone to choose strategies that produce desired effects has an optimized selection algorithm, but might be entirely predictable. If its behavior dynamically evolves based on live input data, then it may become unpredictable. Viewed externally, the robot might appear to have “free will” in that its behavior would be unpredictable and goal-oriented like that of a human. However, internally the human is thinking in terms of selecting from possible futures, while the robot is just looking up learned behaviors. People don’t depend solely on learned behavior; we also use reason to contemplate general implications of object interactions. To do this, we set up mental models and project how different object interactions might play out if different events transpired.

The real and deep upshot of this is that our concept of reality is not the same thing as physical reality. It is a much vaster realm that includes all the possible futures and all the models we have used in our lives. Our concept of reality is really the ideal world, in which the physical world is just a special case. The exercise of free will, being the decisions we take in the physical world, does represent a free selection from myriad possibilities. Because our models correlate so well to the real world, we come to think of them as being the same, but they aren’t. Free will exists in our minds, but not in our hypothetical robot minds, because our minds project possible futures. A robot programmed to do this would then have all the elements of free will we know, and further would be capable of intelligent reasoning and not just learned behavior. It could pass a Turing test where the questioner moved outside the range of the learned behavior of the robot. Once we build such a robot, we will need to start thinking about robot rights. Could an equally intelligent optimization algorithm be designed that did not use models (and consequently had no consciousness or free will)? Perhaps, but I can’t think of a way to do it.

So our brain’s algorithm made an unpredictable decision and acted on it. The real question is this: Why do “we” take credit for the free will of our optimization algorithms? Aren’t we just passively observing the algorithms execute? This is simply a matter of perspective. We “are” our modeling algorithms. Ultimately, we have to mean something when we talk about the real “us”. Broadly, we mean our bodies, inclusive of our minds, but more narrowly, when we are referring just to the part of us that makes the decisions, we mean those modeling algorithms. In the final analysis, we are just some nice algorithms. But that’s ok. Those algorithms harbor all the richness and complexity that we, as humans, can really handle anyway. They are enough for us, and we are very much evolved to feel and believe that they are enough for us. Objectively, they are a patchwork of different strategies held together with scotch tape and baling wire, but we don’t see them that way subjectively. Subjectively the world is a clean, clear picture where everything has its place and makes sense in one organic whole that seems fashioned by a divine creator in a state of sheer perfection. But subjectively we’re wearing rose-colored glasses, and darkly-tinted ones at that, because objectively things are very far from clean or perfect.

So that explains free will, the power to act in a way we can fairly call our own. To summarize, our brains behave deterministically, but we perceive the methods they use to do it as selections from a realm of possibilities, and we quite reasonably identify with those methods so that we take both credit and responsibility for the decisions. More significantly, while we were dealt a body and mind with certain strengths and an upbringing with certain cultural benefits, this still leaves a vast array of possible futures for our algorithms to choose from. Since nobody can exercise duress on us inside our own minds, this means that no other entity but the one we see as ourselves can take credit or blame for any decision we make. Do we have to take responsibility for our actions or can we absolve ourselves as merely inheriting our bodies and minds? We do have to take credit and blame because running the optimization algorithms is an active role; abstaining would mean doing nothing, which is just a different choice. Note that this physical responsibility is not the same as moral responsibility. How our thoughts, choices, and actions stand up from a societal standpoint is another question outside the scope of this discussion. But physically, if we perform an action then it is a safe bet that we exercised free will to do it. The only alternate explanations are mind control or some kind of misinterpretation, e.g. that it looked like we pressed the button but actually we were asleep and our hand fell from the force of gravity.

Sometimes free will is defined as “the ability to choose between different possible courses of action”. This definition is actually tautological because the word “choose” is only meaningful if you understand and accept free will. To choose implies unpredictability, an optimization algorithm, consciousness, and ownership of consciousness. Our whole subjective vocabulary (subjective words include feel, see, imagine, hope) implies all sorts of internal mechanisms we can’t readily explain objectively. And we are so prone to intermingling subjective vocabulary with objective vocabulary that we are usually unaware we are doing it.

One more point about free will: my position is a compatibilist position, meaning that determinism and free will are compatible concepts. Free will doesn’t undermine determinism, it just combines unpredictability, optimization algorithms, and the conscious modeling of future possibilities to produce an effect that is indistinguishable from the actual future changing based on our actions.


A very brief overview of TDTM

TDTM, for Top-Down Theory of Mind, principally combines a new philosophical stance with two scientific theories:

1. Physicoidealism
2. The theory of evolution
3. The computational theory of mind (CTM)

Any scientific discussion must first define what exists, which is called an ontology or theory of being. I am proposing a new ontology for TDTM that I call physicoidealism. Physicalism is an ontological monism, which means it says just one kind of thing exists. Specifically, it asserts that only the physical world exists, consisting of space, matter, and energy. Idealism asserts that only the mental world exists, consisting of immaterial ideas. Physicoidealism is just the union of these two monisms, eliminating the “only” from each. As the brain is now known to reduce to purely physical phenomena, science has concluded that the ideal does not exist, but this is a bit preposterous considering science is built out of hypotheses, which are ideal. Math, ideas, models, and theories are all nonphysical constructions of the ideal world. Nothing about them precludes the physical in any way, but they are not physical. Yes, of course, our access to them is entirely mediated through physical mechanisms like the brain, computers, and books, but any given mathematical law exists (in an ideal sense) independently of any physical system that uses or refers to it. Scientists do try to divine the “actual” laws of nature, but we can never know if there are any as such. All we can do is create idealized, non-physical models that correlate pretty well with nature. So although we have some confidence “actual” laws of nature do exist since the universe behaves so consistently, we have no way to find them or prove that the laws we come up with are right.

The more adamant physicalists among you will by now be thinking that since reductionism implies that everything is physical, this means anything I am calling ideal is just a convenient fiction or illusion with no real substance. All ideas are fictions and illusions with no physical substance, but that doesn’t mean they can’t impact the physical world. Physical systems like minds and computers can use math and programs and ideas to affect the physical world. How these systems affect the world can only be understood through the ideal concepts of information and algorithm. No amount of study of the mechanics of the brain will ever reveal these important aspects of its programming. Programming is the key that unlocks the ideal world, where logic, mathematics, representations and ideas live. Programs represent possibilities; they use one kind of simplified representation or another to describe bounded or unbounded sets of possibilities, and they describe logical operations that can be used to generate a limited or unlimited set of outputs given any inputs. We can discuss an abstract idea, like a pencil or a cat, independent of any physical implementation and inclusive of possibilities both bounded and unbounded. As Sean Carroll writes,1 “Does baseball exist? It’s nowhere to be found in the Standard Model of particle physics. But any definition of “exist” that can’t find room for baseball seems overly narrow to me.” Me too. Also, to be useful, programs must ultimately correlate back to reality, tie references back to referents, and allow us to change reality. The mind (and other programs) can do that. So minds add to the physical world a capability it lacked before, one that could only seem like magic to inanimate matter or even plants: the power to predict the future by dividing nature into causes and effects, and thus develop strategies and then act on them.
It seems a bit ironic that we consider fortune tellers to be charlatans, given that the purpose of minds is to “see” the future so as to better control it. Of course, the limitation is that minds never have certain knowledge of the future, but they are chock full of very good guesses.

Another question that tends to come up about now is “what about determinism?” Physicalism says the world is running on fixed laws and that the outcome is preordained. Now that we have quantum uncertainty, perhaps it is not preordained, but it is still not alterable by free will. I explained in Key Insights why free will exists despite determinism. Although the decisions we made could not have been decided otherwise, they seem like they could have to us because we imagine how they could have turned out otherwise, and since the physical world is too complex to predict, no one can tell us we didn’t pick one future out from among many. What makes us free is that our minds dwell in the ideal world of possibilities, and only secondarily in the physical world. In our world, baseball and other generalizations exist, but they are not strict physical objects or events. When we are about to do things, and after we have done them, we don’t think only in terms of that specific instance; we generalize to all similar situations. So while actual decisions could not have been decided otherwise, we don’t see decisions in the context of a specific case; we generalize to all similar situations. To function effectively, we have to view the world through this much larger lens of possibility rather than as a mundane physical world that ultimately lacks any possibility since only one path will unfold. Put another way, at the moment we take an action, we have no choice; it is done. The moment before that, the universe and our minds are simply too complex for it to be possible to predict what will happen, even though we know it must unfold deterministically. So, looking forward, minds treat all possibilities as open and interpret their action-optimizing algorithms as choosing from those possibilities, even though they are just making the generalization that situations similar to those in the past will play out in similar ways given similar reactions.

So are human choices actually shaping the world? Yes, because free will actually does exist as we think it does. The only illusion here is the idea that determinism implies the future is simple to predict. It doesn’t. Because it is possible for minds to exist and to gather information, model it, and compute and take actions, the physical world actually includes this slice of the ideal world, and so outcomes that leverage the world of possibility are entirely within the laws of determinism. In other words, determinism is not limited to the “direct” interactions of particle A hitting particle B; information processing and feedback vastly expand the range of complexity of what might happen “indirectly” (I use quotes because everything physical is necessarily direct, it is just that direct can become very convoluted). The physical universe does seem simple enough to predict if you leave out minds, but minds are part of the physical universe. When we change the world around us, it is the physical world changing itself. We are just the most complex cogs in the machine.

So brains create minds, and minds open a window into the ideal world of possibility that actually turns out to be an infinitely richer world than the physical world that spawned it. What do we know so far, scientifically, about how the mind came to be and how it works? Darwin discovered how it came to be with his theory of evolution in 1859 and the computational theory of mind (CTM), proposed in its modern form by Hilary Putnam in 1961, provides the basis of how it works. While Darwin wondered, “How does consciousness commence?”, he didn’t solve it, but he opened the door to evolutionary psychology in The Origin of Species with this comment: “Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation.” We now have a good overall sense of the evolutionary basis of all major mental capacities. The computational theory of mind (CTM) proposes mechanisms to support mental processes. The idea that thought is an exercise in information management and not a standalone substance is a major breakthrough, so far fully supported by the evidence. We now have reasonably good digital algorithms that approximate some mental functions, though we are still far short of artificial intelligence itself.

So far, the implications of these theories for understanding the mind have been best organized into one place in Steven Pinker’s 1997 book How the Mind Works. Pinker covers many implications of evolution and CTM for how the mind works in a very objective way, and I highly recommend it and will build on it. But we have a ways to go. Pinker doesn’t wade into the treacherous waters of metaphysics I’m in. I’ve introduced the idea of a computational idealism that forms an independent monism that has to be combined with physicalism to cover all that exists. From there I have developed the subjective perspective as a referential reality that funnels a cartoon of the world into a stream that can be analyzed logically to make decisions. And I explain free will as a consequence of the future being unknowable combined with action-optimizing algorithms that model possible worlds and pick from them. From where I sit, we need to expand the scope of objective science to include the ideal world, which is not a discovered world but a created one, an engineering project. Logic and math and models and ideas are built, not discovered, and the mind is a software engineering project. So much of how it works is not the simple outcome of scientific laws but the complex result of engineering decisions. The biological and social sciences, of course, accept that life is engineered and that understanding it better requires some reverse engineering, but I think they have historically undervalued the need to apply reverse engineering to psychology. Steven Pinker does an excellent job covering evolutionary psychology, and I will take that thinking further still.

An overview of computation and the mind

[Brief summary of this post]

I grew up in the ’60s and ’70s with a tacit understanding that thinking was computing and that artificial intelligence was right around the corner. Fifty years later we have algorithms that can simulate some mental tasks, but nothing close to artificial intelligence, and no good overall explanation of how we think. Why is that? It’s just that the problem is harder than we first thought. We will get there, but we need to get a better grasp of what the problem is and what would qualify as a solution. Our accomplishments over seventy or so years of computing include developing digital computers, making them much faster, demonstrating that they can simulate anything requiring computation, and devising some nifty algorithms. Evolution, on the other hand, has spent over 550 million years working on the mind. Because it is results-oriented, it has simultaneously developed better hardware and software to deliver minds most capable of keeping animals alive. The “hardware” includes the neurons and their electrochemical mechanisms, and the “software” includes a memory of things, events, and behaviors that supports learning from experience. If the problem we are trying to solve is to use computers to perform tasks that only humans have been able to do, then we have laid all the groundwork. More and more algorithms that can simulate many human accomplishments will appear as our technology improves. But my interest is in explaining how the brain manages information to solve problems and why and how brains have minds. To solve that problem, let’s take a top-down or reverse-engineered view of computation.

Computation is not just manipulation of numbers by rules or data by instructions. More abstractly, the functional conception of computation is a process performed on inputs that yields outputs that can be used to reduce uncertainty, which can be used in feedback loops to achieve predictable outcomes. Any input or output that can reduce uncertainty is said to contain information. White noise is data that is completely free of usable information. Minds, in particular, are computers because they use inputs from the environment and their own experience to produce output actions that facilitate their survival. If we agree that minds are processing information in this way, and exclude the possibility that supernatural forces assist them, then we can conclude that we need a computational theory of mind (CTM) to explain them.
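
The contrast between information and white noise can be made concrete with Shannon’s entropy measure. The sketch below is mine, not the author’s, and it only measures zeroth-order symbol frequencies (it ignores sequence structure), but it shows the core idea: data with a uniform symbol distribution carries maximal uncertainty per symbol, while highly skewed data leaves little uncertainty to reduce:

```python
import math
from collections import Counter

def entropy_bits(data):
    # Shannon entropy: average uncertainty per symbol, in bits,
    # based only on symbol frequencies.
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniform = "abcd" * 4          # every symbol equally likely: 2.0 bits/symbol
skewed = "a" * 15 + "b"       # nearly certain what comes next: ~0.34 bits/symbol
print(entropy_bits(uniform))  # 2.0
print(entropy_bits(skewed))
```

In the functional conception above, an input "contains information" relative to a purpose: a stream that reduces a receiver’s uncertainty about something it needs to act on is informative, and a stream that reduces none (white noise) is not, regardless of how many bits it occupies.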

Reverse engineering all the algorithms used by the brain down to the hardware and software mechanisms that support them is a large project. My focus is a top-down explanation, so I am going to focus just on the algorithms involved at the highest level of control that decide what we do. Most of the hard work, from a computational perspective, happens at the lower levels, so we will need to have a sense of what those levels are doing, but we won’t need to consider too closely how they do it. This is good because we don’t know much about the mechanics of brain function yet. What we do know doesn’t explain how it works so much as provide physical constraints with which any explanatory theory must be consistent. I will discuss some of those constraints later in The process of mind.

At this point, I have to ask you to take a computational leap of faith with me. I will try to justify it as we go, but it is a hard point to prove, so once the groundwork has been laid we will have to evaluate whether we have enough evidence to prove it. The leap is this: the mind makes top-level decisions by reasoning with meaningful representations. This leap has a fascinating corollary: consciousness only exists to help this top-level representational decision-making process. Intuitively, this makes a lot of sense — we know we reason by considering propositions that have meaning to us, and we feel such reasoning directly supports many of our decisions. And our feeling of awareness or consciousness seems to be closely related to how we make decisions. But I need to explain what I mean by this from an objective point of view.

The study of meaning is called semantics. Semantics defines meaning in terms of the relationship between symbols, also called signifiers or representations, and what they refer to, also called referents or denotations. A model of the mind based on these kinds of relationships is called a representational theory of mind (RTM). I propose that these relationships form the meaningful representations that are the basis of all top-level reasoned decisions. I do not propose that everything in the mind is representational; most of what the mind does and experiences is not representational. RTM only applies to this top-level reasoning process. Some examples of information processing that are not representational include raw sensory perception, habitual behavior, and low-level language processing. To the extent we feel our senses without forming any judgments, those feelings are free of meaning; they just are. So instrumental music consequently has no meaning. And to the extent we behave in a rote fashion, performing an action based on intuition or learned behavior without forming judgments, those actions are free of meaning; they just happen. Perhaps when we learned those behaviors we did form judgments, and if we recall those judgments when we use the behaviors then there is some residual meaning, but the meaning has become irrelevant and can be, and often will be, forgotten. People who tie their shoelaces by rote with no notion as to why the actions produce a knot have no detailed representation of knots; they just know they will end up with a knot. So much “intelligent” or at least clever behavior can take place without meaning. Finally, although language is a critical tool, perhaps the critical tool, in supporting representational reasoning, as words and sentences can be taken as directly representing concepts and propositions, it only achieves this success through subconscious skills that understand grammar and tie concepts to words.
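
The signifier-referent relation, and the kind of top-level judgment that reasons over it, can be caricatured in code. This is my own invented illustration (the lexicon entries are made up), not a claim about how minds actually store referents; it shows meaning as a relation between a symbol and what it denotes, with a reasoned judgment layered on top:

```python
# Toy illustration of the semantic relation: a symbol (signifier) mapped to
# a referent, with a simple representational judgment built on the mapping.
# All lexicon entries here are hypothetical.

lexicon = {
    "tiger": {"kind": "cat", "legs": 4},
    "cat":   {"kind": "animal", "legs": 4},
}

def refers_to(symbol):
    # Meaning lives in the relation between the symbol and its referent.
    return lexicon.get(symbol)

def is_a(symbol, category):
    # A top-level, representational judgment: follow "kind" links.
    entry = refers_to(symbol)
    while entry is not None:
        if entry["kind"] == category:
            return True
        entry = lexicon.get(entry["kind"])
    return False

print(is_a("tiger", "cat"))     # True: "a tiger is a cat" by definition
print(is_a("tiger", "animal"))  # True, via the chain tiger -> cat -> animal
```

The point of the toy is the division of labor: the lookup table by itself is mere association, while `is_a` performs a judgment over representations, which is the layer RTM is meant to cover.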

Importantly, just as we can tie knots by rote, we could, in theory, live our whole lives by rote without reasoning with represented objects (i.e. concepts) and, consequently, without conscious awareness. We would first need to be trained how to handle every situation we might encounter. While we don’t know how we could do that for people, we can train computers using an approach called machine learning. By training a self-driving car using feedback from millions of real-life examples, the algorithm can be refined to solve the driving problem from a practical standpoint without representation. Sure, the algorithm could not do as well as a human in entirely new conditions. For example, a human driver could quickly adapt to driving on the left side of the road in England, but the self-driving car might need special programming to flip its perspective. Also note that such algorithms do typically use representation for some elements, e.g. vehicles and pedestrians. But they don’t reason using these objects; they just use them to look up the best learned behaviors. So some algorithmic use of representation is not the same as using representation to support reasoning.
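
The "lookup of learned behavior" contrasted with reasoning here is essentially nearest-neighbor retrieval. The sketch below (my own, with invented situations and features, far simpler than any real self-driving stack) shows the pattern: the agent finds the most similar remembered case and replays its action, with no model of why the action works:

```python
# Hypothetical nearest-neighbor sketch of pure learned behavior: look up the
# closest past situation and copy its action, with no representational
# reasoning about why the action succeeds. Training cases are invented.

training = [
    # (situation features: [speed, distance_to_obstacle], learned action)
    ([30.0, 50.0], "maintain speed"),
    ([30.0,  5.0], "brake"),
    ([10.0,  2.0], "brake"),
    ([60.0, 80.0], "maintain speed"),
]

def distance(a, b):
    # Euclidean distance in feature space.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def learned_action(situation):
    # Pure lookup: find the most similar remembered case and replay it.
    _, action = min(training, key=lambda case: distance(case[0], situation))
    return action

print(learned_action([28.0, 6.0]))  # "brake": close to a remembered case
# A genuinely novel situation just snaps to the nearest memory, right or
# wrong; there is no model available to reason about the new conditions.
```

This is why the left-side-of-the-road example above is telling: a human re-derives the behavior from a model of traffic, while a pure lookup system can only interpolate among the cases it was trained on.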

People can’t live their lives entirely by rote as we encounter novel situations almost continuously, so learned behavior is used more as another input to the reasoning process than as a substitute for it. Perhaps Mother Nature could have devised another way for us to solve problems as flexibly as reasoning can, but if so, we don’t know what that way might be. Furthermore, we appear quite unable to take actions that are the product of our top-level reasoning without explicit conscious awareness and attention in the full subjective theater of our minds. This experience, in surround-sound technicolor, is not at all incidental to the reasoning process but exists to provide reasoning with the most relevant information at all times.

Note that language is entirely representational by its very nature. Every word represents something. Just what words represent is more complex. Words are a window into both representational and nonrepresentational knowledge. They can be used in a highly formalized way to represent specific concepts and propositions about them in a logical, reasoned way. Or they can be used more poetically to represent feelings, impressions or mood. In practice, they will evoke different concepts and connotations in different people in different contexts as our minds interpret language at both conscious and subconscious levels. My focus on language will be more toward the rational support it provides consciousness to support reasoning with concepts, many of which represent things or relationships in the real world.

The top-down theory of mind (TDTM) I am proposing says that all mental functions use CTM, but only some processes use RTM. Further, I propose that consciousness exists to support reasoning, which critically depends on RTM while also seamlessly integrating with nonrepresentational mental capacities. While I am not going to review other theories at this time, conventional RTM theories propose that meaning ends with representation, while I say it is only the outer layer of the onion. Similarly, associative or connectionist theories explain memory and learned behavior with limited or no use of representation, as I have above, but do not propose a process that can compete with reasoning.

While the above provides some objective basis for reasoning as a flexible mental tool and consciousness as a way to efficiently present relevant information to the reasoning process, it does not say why we experience consciousness the way we do. We know we strive tenaciously and are fairly convinced, if we ask ourselves why, that it is because we have desires. That is, it is not because we know we have to survive and reproduce to satisfy evolution, but because the things we want to do usually include living and procreating. So apparently, working on behalf of our desires is a subjective equivalent to the objective struggle for survival. But why do we want things instead of just selecting actions that will best propagate our genes? Why does it feel like we’ve each got a quarter in the greatest video game ever made, a virtual reality first-person shooter that takes virtual reality to a whole new level — let’s call it “immersive reality” — in which we are not just playing the game, we are the game? Put simply, it is because life is a game, so it has to play like one. A game is an activity with goals, rules, challenges, and interaction. The imperatives of evolution create the goals and rules evolve around them. But the rules that develop are abstract ideas that don’t need a physical basis; they just need to get the job done.

The game of life has one main goal: survive. Earth’s competitive environment added a critical subgoal: perform at the highest level of efficiency and efficacy. And sexual selection, whose high evolutionary cost seems to be offset by the benefit of greater variation, led to the goal of sexual reproduction. But what could make animals pursue these goals with gusto? The truth is, animals don’t need to know what the goals are; they just need to act in such a way that they are attained. You could say theirs not to reason why, theirs but to do and die, in the sense that it doesn’t help animals in their mission to know why they eat certain foods, struggle relentlessly, or select certain mates. But it is crucial that they eat just the right amount of the foods that best sustain them and select the fittest mates. This is the basis of our desires. They are, objectively, nothing more than rules instructing us to prioritize certain behaviors to achieve certain goals. We are not innately aware of the ultimate goals, although as humans who have figured out how natural selection works, we now know what they are.

Our desires don’t force our hand; they only encourage us to prioritize our actions appropriately. We develop propositions about them that exist for us just as much as physical objects; they are part of our reality, which is a combination of physical and ideal. Closely held propositions are called beliefs. Beliefs founded in desires are subjective beliefs, while beliefs founded in sensory perception are objective beliefs. Subjective beliefs can never be proven (as desires are inherently variable), but objective beliefs are verifiable. We learn strategies to fulfill desires. We learn many elements of the strategies we use to fulfill our most basic desires by following innate cues (instincts) for skills like mimicry, chewing, breathing, and so on. While we later develop conscious and reasoning control over these mostly innate strategies, that control acts only in a top-level supervisory capacity. So discussions of this top reasoning level are not intended to overlook the significance of well-developed underlying mechanisms that do the “real work”, much the way a company can function fairly well for a while without its CEO. With that caveat in mind, when we apply logical rules to propositions based on our senses, desires, and beliefs, the implications spell out our actions. After we have developed detailed strategies for eating and mating, we still need to apply conscious reasoning all the time to prioritize that cacophony of possibilities. We don’t need to know our evolutionary goals because our desires, and the beliefs and strategies that follow from them, are extremely well tuned to lead us to behaviors that will fulfill them.

Desires are fundamentally nonrepresentational in that they are experienced viscerally on a scale of greater or lesser force. They are not qualia themselves but the degree to which each quale appeals to us, which varies as a function of our metabolic state. So when we feel cold, we want warmth, and when we feel hunger, we want food. Desires are aids to prioritization and steer the decision-making process (both through reasoning and at levels below it). To reason with desires and subjective beliefs, we interpret them as weighted propositions using probabilistic logic. Because all relevant beliefs, desires, sensory qualia, and memories are processed concurrently by consciousness, they all contribute to a continuous prioritization exercise that allows us to accomplish many goals appropriately despite having a serial instrument (one body). In other words, we have distinct qualia and, as needed, desires for them for the express purpose of ensuring all the relevant data is on the table before we make each decision.
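To make the computational reading concrete, here is a minimal sketch in Python. Everything in it — the action names, the desire strengths, the payoff numbers — is an invented illustration, not part of the theory: desires act as weights, tuned by metabolic state, that prioritize candidate actions.

```python
# Toy sketch: desires as weights that prioritize candidate actions.
# The desire strengths vary with metabolic state (e.g. hunger level);
# all names and numbers here are illustrative assumptions.

def prioritize(actions, desires):
    """Pick the action whose desire-weighted payoffs score highest."""
    def score(action):
        return sum(desires.get(d, 0.0) * payoff
                   for d, payoff in action["satisfies"].items())
    return max(actions, key=score)

# Current metabolic state: quite hungry, slightly cold.
desires = {"food": 0.8, "warmth": 0.3}

actions = [
    {"name": "eat",       "satisfies": {"food": 1.0}},
    {"name": "find_fire", "satisfies": {"warmth": 1.0}},
    {"name": "nap",       "satisfies": {"warmth": 0.5}},
]

best = prioritize(actions, desires)
print(best["name"])  # → eat  (the food desire currently dominates)
```

When the metabolic state shifts — say, warmth becomes the pressing need — the same rule picks a different action, which is the sense in which desires prioritize without forcing our hand.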

So what is consciousness, exactly? Consciousness is a segregated aspect of the mind that uses qualia, memory, and expertise to make reasoned decisions. From a computational perspective, this means it is a subroutine fed data through custom data channels (in both nonrepresentational and representational ways) that has customized ways to process it. The nonrepresentational forms support whims or intuitions, and the representational forms support reasoned decisions. Importantly, reason has the last word; we can’t act, or at least not for very long, without support from our reasoning minds. Conversely, the conscious mind doesn’t exactly act itself; it delegates actions to the subconscious for execution, analogously to a manager and their employees. Subjectively, the segregation of consciousness from the rest of the mind creates the first-person perspective, or theater of mind. It seems to us to be a seamless projection of the external world into our heads because it is supposed to. We interpret what is actually a jumble of qualia as a smoothly flowing movie because the mandate to continuously prioritize and decide requires that we commit to the representation being the reality. To hesitate in accepting imagination as reality would lead to frequent and costly delays and mistakes. We consequently can’t help but believe that the world that floods into our conscious minds on qualia channels is real. It is not physically real, of course, but wetware really is running a program in our minds and that is real, so we can say that the world of our imagination, our conduit to the ideal world, is real as well, though in a different way.

Our objective beliefs, supported by our sensory qualia and memory, meet a very high objective standard, while our subjective beliefs, supported by our desires, are self-serving and only internally verifiable. Because our selfish needs often overlap with those of others and the ecosystem at large, they can often be fulfilled without stepping on any toes, but competition is an inescapable part of the equation. Our subjective beliefs give us a framework for prioritizing our interactions with others based entirely on abstracted preferences rather than literal evolutionary goals, preferences arising from desires tuned by evolution to achieve those goals. In other words, blindly following our subjective beliefs should result in self-preservation and the preservation of our communities and ecosystems. However, humans face a special challenge because we are no longer in the ancestral environment for which our desires are tuned, and we have free will and know how to use it. While this is a potential recipe for disaster, we will ultimately best satisfy our desires by artificially realigning them with evolutionary objectives. While our desires are immutable, the beliefs and strategies we develop to fulfill them are open to interpretation. In other words, we can use science and other tools of reasoning to help us adjust our subjective beliefs, through laws if necessary, to fulfill our desires in a way that is compatible with a sustainable future.

I call the portion of the conscious mind dedicated to reasoning the “single stream step selector”, or SSSS. While “just” a subprocess of the mind, it is the part of our conscious minds that we identify with most. The SSSS exercises free will in making decisions in both a subjective and objective sense. Subjectively, we feel we are freely selecting from among possible worlds. We are also objectively free in a few ways. First, our behavior is unpredictable, being driven by partially chaotic forces in our brains. Second, and more significantly to us as intelligent beings, our actions are optimized selections leveraging information management, i.e. computation, which doesn’t happen by chance or in simple natural systems. So without violating the determinism of the universe we nevertheless make things happen that would never happen without huge computational help.

The process of making decisions is much more involved than simply weighing propositions. Propositions in isolation are meaningless. What gives them meaning is context. Computationally, context is all the relationships between all the symbols used by the propositions. These relationships are the underlying propositions that set the ground rules for the propositions in question. Subjectively, a context is a model or possible world. Whenever we imagine a situation working according to certain rules, we have established a model in our minds. If the rules are somewhat flexible or not nailed down, this can be thought of as establishing a range of models. We keep a primary model (really a set of models covering different aspects) for the way we think the world actually is. We create future models for the ways we think things might go. We expect one of those future models to become real, in the sense that it should in time line up with the actual present within our limits of accuracy and detail. We keep a past model (again, really a set) for the way we think things were. Internally, our models support considerable flexibility, and we adapt them all the time as new information becomes available. Externally, at the moment we decide to do something, we have committed to a specific model and its implications. That model itself can be a weighted combination of several models that may be complementary or antagonistic to each other, but in any case, we are taking a stand. We have done an evaluation, either off the cuff or with deep forethought, of all the relevant information, using as many models as seem relevant to the situation and building new models we haven’t used before as we go if needed.
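The idea of committing to a weighted combination of models can also be sketched in code. This is only a toy (the weather models, confidence weights, and probabilities are all invented for illustration): each model estimates how well an action turns out under its own rules, and we blend those estimates by how much we trust each model before taking a stand.

```python
# Toy sketch: deciding by blending several "possible world" models,
# weighted by our confidence in each. All models, weights, and
# probabilities below are illustrative assumptions.

def combined_estimate(action, weighted_models):
    """Blend each model's success estimate by our confidence in it."""
    total_weight = sum(w for w, _ in weighted_models)
    return sum(w * model(action) for w, model in weighted_models) / total_weight

# Two partially antagonistic models of how a day outdoors will go.
optimistic  = lambda action: {"picnic": 0.9, "stay_home": 0.5}[action]
pessimistic = lambda action: {"picnic": 0.2, "stay_home": 0.8}[action]

models = [(0.7, optimistic), (0.3, pessimistic)]  # we trust the first more

choice = max(["picnic", "stay_home"],
             key=lambda a: combined_estimate(a, models))
print(choice)  # → picnic
```

The point of the sketch is the last line: once the evaluation is done, off the cuff or with deep forethought, a single action is committed to even though the models behind it disagreed.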

Viewed abstractly, what the mind is creating inside is a different kind of universe, a mental one instead of a physical one. Our mental universe is a special case of an ideal universe, in which ideas comprise reality. One could argue that the conscious and subconscious realms comprise distinct ideal universes which overlap in places. And one could argue that mathematical systems and scientific theories and our own models each comprise their own ideal world, bound by the rules that define them. Ideal worlds can be interesting in their own right for reasons abstracted from practical application, but their primary role is to help us predict real world behavior. To do this, we have to establish a mapping or correlation between the model and what we observe. Processing feedback from our observations and actions is called learning. We are never fully convinced our methods are perfect, so we are always evaluating how well they work and refining them. This approach of continuous improvement was successfully applied by Toyota (where it is called kaizen), but we do it automatically. It is worth noting at this point that the above argument solves the classic mind-body problem of how mental states are related to physical states, that is, how the contents of our subjective lives relate to the physical world. The answer I am proposing, a union of an ideal universe and a physical one, goes beyond this discussion on computation, but I will speak more on it later.

We have no access to our own programming and can only guess how the program is organized. But that’s ok; we are designed to function without knowing how the programming works or even that we are a program. We experience the world as our programming dictates: we strive because we are programmed to strive, and our whole UI (user interface) is organized to inspire us to relentlessly select steps that will optimize our chances of staying in the game. “Inspire” is the critical word here, meaning both “to fill with an animating, quickening, or exalting influence” (as a subjective word) and “to breathe in” (as an objective word). Is it a mystical force or a physical action? It sits at the confluence of the mental and physical worlds, capturing the essence of our subjective experience of being in the world and our physical presence in the world that comes one breath at a time. The physical part seems easy enough to understand, but what is the subjective part?

How does a program make the world feel the way it does to us? Yes, it’s an illusion. Nothing in the mind is real, after all, so it has to be an illusion. But it is not incidental or accidental. It all stems from the simple fact that animals can only do one thing at a time (or at least their bodies can only engage in one coordinated set of actions at a time). Most of all, they can’t be in two places at the same time. The SSSS must take one step at a time, and then again in an endless stream. But why should this requirement lead to the creation of a subjective perspective with a theater of mind? It follows from the way logic works. Logic works with propositions, not with raw data from the real world. The real world itself does not run by reasoning through logical propositions; it works simply because a lot of particles move about on their own trajectories. Although we believe they obey strict physical laws, their movements can’t be perfectly foretold. First, it would violate the Heisenberg Uncertainty Principle to know the exact position and momentum of each particle, as that would eclipse their wave-like nature. And second, the universe is happening but not “watching itself happen”. The classic thought experiment here is Laplace’s demon: the idea that someone (the demon) who knew the precise location and momentum of every atom in the universe could predict the future. Such a demon is now considered impossible on several grounds. But while the physical universe precludes exact prediction, it does not preclude approximate prediction, and it is through this loophole that the concept of a reasoning mind starts to emerge.

Think back to the computational leap I am trying to prove: the mind makes top-level decisions by reasoning with meaningful representations. I can’t prove that reasoning is the only way to control bodies at the top level, but I have argued above that it is the way we do it. But how exactly can reasoning help in a world of particles? It starts, before reasoning enters the picture, with generalization. The symbols we reason with don’t exist as such in the physical world. We represent physical objects with idealized representations (called concepts) that include the essential characteristics of those objects. Generalization is the ability to recognize patterns and to group things, properties, and ideas into categories reflecting that similarity. It is probably the most important and earliest of all mental skills. But it carries a staggering implication: it shapes the way minds interpret the world. We have a simplified, stripped-down view of the world, which could fairly be called a cartoon, that subdivides it into logical components (concepts, which include objects and actions) around which simple deductions can be made to direct a single body. While my thrust is to describe how these generalized representations support reason, they also support associative approaches like intuition and learned behavior. The underlying mechanism is feedback: generalized information about past patterns can help predict what patterns will happen again.
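Generalization as described here — grouping instances into concepts by their essential characteristics — can be sketched as a toy nearest-prototype classifier. The feature names, prototypes, and numbers are all illustrative assumptions, not claims about how the brain actually encodes concepts:

```python
# Toy sketch of generalization: assign an instance to the concept
# whose prototype (essential characteristics only) it most resembles.
# Features, prototypes, and values are illustrative assumptions.

def generalize(instance, prototypes):
    """Return the name of the closest concept prototype."""
    def distance(proto):
        return sum((instance[f] - proto[f]) ** 2 for f in proto)
    return min(prototypes, key=lambda name: distance(prototypes[name]))

# Prototypes keep only "essential" features, stripping away detail.
prototypes = {
    "bird": {"wings": 1.0, "legs": 2, "fur": 0.0},
    "dog":  {"wings": 0.0, "legs": 4, "fur": 1.0},
}

sparrow = {"wings": 1.0, "legs": 2, "fur": 0.1}
print(generalize(sparrow, prototypes))  # → bird
```

Once an instance has been generalized into a concept, simple deductions about the concept (birds fly, dogs don’t) become available to the single stream of reasoning, which is the cartoon-making move the paragraph above describes.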

Reasoning takes the symbols formed from generalized representations and develops propositions about them to create logical scenarios called models or possible worlds. Everything I have written above drives to this point. A model is an idealization with two kinds of existence, ideal and physical, which are independent. For example, 1+1=2 according to some models of arithmetic, and this is objectively and unalterably true, independent of the physical world or even our ability to think it. Ideal existence is a function of relationships. On the other hand, a model can physically exist using a computer (e.g. biological or silicon) to implement it, or on paper or in another recorded form which could later be interpreted by a computer. Physical existence is a function of spacetime, which in this case takes the form of a set of instructions in a computer. To use models, we need to expect that the physical implementation is done well so that we can focus on what the model says ideally. In other words, we need a good measure of trust in the correlation from the ideal representation to the physical referent. While we are not stupid and we know that perception is not reality, we are designed to trust the theater we interact with implicitly, both because it spares us from excess worry and because that correlation is very dependable in practice.

The ideal and physical worlds are independent of each other and might always have remained so were it not for the emergence of the animal mind some 550 million years ago. The upgrades we received in the past 4 million years with the rise of the Australopithecus and Homo genera are the most ambitious improvements in a long time, but animal minds were already quite capable. We’re just version 10.03 or so in a long line of impressive earlier releases. Animal minds probably all model the world using representation, which, as noted, captures the essential characteristics of referents, as well as rules about how objects and their properties behave relative to each other in the model. Computationally, minds use data structures that represent the world in a generalized or approximate way by recording just the key properties. All representations are formed by generalizing, but while some remain general (as with common nouns), some are tracked as specific instances (and optionally named, as with proper nouns). For that matter, generalizations can be narrow or broad for detailed or summary treatments of situations. For any given circumstance the mind draws together the concepts (being the objects and their characteristics) that seem relevant to the level of detail at hand so it can construct propositions and draw logical conclusions in a single stream. We weigh propositions using probabilistic logic and consider multiple models for every situation, which improves our flexibility. This analysis creates the artificial world of our conscious experience, the cartoon. This simplified logical view seamlessly overlays with our sensory perceptions, which pull the quality of the experience up from a cartoon to photorealistic quality.

If the SSSS is the reason we run a simplified cartoon of the world in our conscious minds, that may explain why we have a subjective experience of consciousness, but it still doesn’t explain why it feels exactly the way it does. The exact feel is a consequence of how data flows into our minds. To be effective, the conscious mind must not overlook any important source of information when making a decision. For example, any of our senses might provide key information at any time. For this reason, this information is fed to the conscious mind through sensory data channels called qualia, and each quale (kwol-ee, the singular) is a sensory quality, like redness, loudness or softness. Some even blend to create, for example, the sensation of a range of colors. The channels provide a streaming experience much like a movie. While the SSSS focuses on just the aspects most relevant to making decisions, it has an awareness of all the channels simultaneously. So it is capable of processing inputs in parallel even though it must narrow its outputs to a single stream of selected steps.

But why do data channels “feel” like something? First, we have to keep in mind that as substantial as our qualia feel, it is all in our heads, meaning that it is ultimately just information and not a physical quality. There is no magic in the brain, just information processing. A lot of information processing goes into creating the conscious theater of mind; it is no coincidence that it seems beautiful to us. Millions of years went into tailoring our conscious experience to allow all the qualia to be distinct from each other and to inform the reasoning process in the most effective way. Any alteration to that feel would affect our ability to make decisions. How should hot and cold feel? It doesn’t really matter what they feel like so long as you can tell them apart. Surprisingly, out of context, people can confuse hot with cold, because they use the same quale channel and we use them in a highly contextual way. Specifically, if you are cold, warmth should feel good, and if you are hot, coolness should feel good. And lo and behold, they do feel that way. Much of the rich character we associate with qualia comes not from the raw sensory feel itself but from the contextual associations we develop from genetic predispositions and years of experience. So red and loud will seem a bit scarier and more alarming than blue or quiet, and soft will seem more soothing than rough. Ultimately, the fact that qualia feel so rich and fit together seamlessly into a smooth movie-like experience proves that extensive parallel subconscious computational support goes into creating them.

Beyond sensory qualia, data channels carry other streams of information from subconscious processing into our conscious awareness. These streams enhance the conscious experience with emotion, recognition, and language. The subconscious mind evaluates situations, and if it finds cause for sadness (or other emotional content), then it informs the conscious mind, which then feels that way. We feel emotional qualia as distinctly as sensory qualia, and the base emotions seem to have distinct channels as we can feel multiple emotions at once. Recognition is a subconscious process that scans our memory matching everything we see and experience to our store of objects and experiences (concepts). It provides us with a live streaming data lookup service that tells us what we are looking at along with many related details, all automatically. We think of language as a conscious process, but only a small part is conscious. A processing engine hidden to our conscious minds learns the rules of our mother tongue (and others if we teach it), and it can generate words and sentences that line up with the propositions flowing through the SSSS, or parse language we hear or read into such propositions. Language processing is a kind of specialized recognition channel that connects symbols to meanings. The goal is for the conscious mind to have a simple but thorough awareness of the world, so everything not directly relevant to conscious decision making is processed subconsciously so as not to be a distraction. Desires don’t have their own qualia but instead add color to sensory and emotional qualia. Computationally this means additional information about prioritization comes through the qualia data channels. Desires come through recognition data channels (memory) as beliefs. Beliefs are desires we have committed to memory in the sense that we have computed our level of desire and now remember it. 
As noted above, desires and beliefs are the only factors that influence how we prioritize our actions.

While we are born with no memories, and consequently all recognition and language are learned, we are so strongly inclined to learn to use our subconscious skills that all humans raised in normal environments will learn how without any overt training. We thus learn to recognize objects and experience appropriate emotions in context whether we like it or not. Similarly, we can’t suppress our ability to understand language. Interestingly, lack of conscious control over our emotions has been theorized to help others “see” our true feelings, which greatly facilitates their ability to trust us and work for both parties’ best interests. Other subconscious skills include facility with physics, psychology, face recognition, and more, which flow into our consciousness intuitively. We are innately predisposed to learn these skills, and once trained we use them miraculously without conscious thought. The net result of all these subconsciously produced data channels is that the conscious mind is fed an artificial but very informative and largely predigested movie of the world, so much so that our conscious minds can, if they like, just drift on autopilot enjoying the show with little or no effort.

Lots of information flows into the conscious mind on all these data channels. It is still too much for the SSSS to process using modeling and logical propositions. So while we have a conscious awareness of all of it, attention is a specialized ability to focus on just the items relevant to the decision-making process. Computationally, what attention does is fill the local variable slots of the SSSS process with the most relevant items from the many data channels flowing into the conscious mind. So just as you can only read words at the focal point of your vision, you can only do logic on items under conscious attention, though you retain awareness of all the data channels, analogously to peripheral vision. Further, since those items must be representational, data from sensory or emotional qualia must first be processed into representations through recognition channels. We can shift focus to senses and emotions, e.g. to consciously control breathing, blinking, or laughing, through representations as well. It is similar for learned behaviors. We can not only walk and chew gum at the same time, we can also carry on a conversation that engages most of our attention. The same goes for tying our shoes or driving. But to stay on the lookout for novel situations, we retain conscious awareness of them and can bring them to attention if needed. Conscious focus is how we flexibly handle the most relevant factors moment to moment. Deciding what to focus on is a complex mental task itself that is handled almost entirely subconsciously.
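The slot-filling picture of attention can be sketched as a toy selection over channels. Again, the channel names, items, relevance scores, and the number of slots are invented for illustration only; the sketch just shows the shape of the idea, that many channels stream in but only the top few items occupy the reasoning process at any moment:

```python
# Toy sketch of attention: many data channels stream into awareness,
# but only the few most relevant items fill the SSSS's limited slots.
# Channel names, items, and relevance scores are illustrative assumptions.
import heapq

SLOTS = 3  # reasoning operates on only a handful of items at a time

def attend(channels):
    """Fill the slots with the highest-relevance items across all channels."""
    items = [(relevance, f"{channel}:{item}")
             for channel, feed in channels.items()
             for item, relevance in feed.items()]
    return [label for relevance, label in heapq.nlargest(SLOTS, items)]

channels = {
    "vision":  {"oncoming_car": 0.95, "billboard": 0.20},
    "hearing": {"horn": 0.90, "radio": 0.30},
    "touch":   {"steering_wheel": 0.60},
}

print(attend(channels))
```

Everything that doesn’t make the cut (the billboard, the radio) stays in awareness, like peripheral vision, ready to be promoted to a slot if its relevance spikes.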

The loss of smell in humans probably follows from the value of maintaining a simple logical model. Humans, and to a lesser degree other primates, have lost much of their ability to smell, a loss that has probably been offset by improvements in vision, specifically in color and depth. That primates benefit more from better vision makes sense, but why did we lose so much variety and depth from smell perception? Disuse alone seems unlikely to explain so much loss, considering that rarely used senses are still occasionally useful. The more likely explanation is that the sense of smell was a troublesome distraction from vision: when forced to rely on vision, primates did better than they would have with both vision and smell. This parallels how blind people develop their other senses more keenly to compensate. Primates forced to develop keener visual senses could use them more effectively in many ways than those that continued to trust smell, which may deliver less benefit for primates, and especially humans. If you consider how much value we get from vision compared to smell, this seems like a good trade-off.

To summarize what I have said so far, the conscious mind has a broad subrational awareness of much sensory, emotional and recognition data. It can use intuition, learned behavior, and many subconscious skills but does so with conscious awareness and supervision. To consciously reason, the SSSS processes representations created by generalizing that data. The SSSS only reasons with propositions built on representations under conscious attention, i.e. those that are relevant. Innate desires are used to prioritize decisions, that is, they lead us to do things we want to do.

We know we are smarter than animals, but what exactly do we do that they can’t? Use of tools, language (and symbols in general), and self-awareness seem more like consequences of greater intelligence than its cause. The key underlying mental capacity humans have that other animals lack is directed abstract thinking. Only humans have the facility and penchant for connecting thoughts together in arbitrarily complex and generalized ways. In a sense, all other animals are trapped in the here and now; their reasoning capacities are limited to the problems at hand. They can reason, focus, imitate, wonder, remember, and dream but they can’t daydream, which is to say they can’t chain thoughts together at will just to see what might happen. If you think about it, it is a risky evolutionary strategy as daydreamers might just starve. But our instinctual drives have kept up with intelligence to motivate us to meet evolutionary requirements. Steven Pinker believes metaphors are a consequence of the evolutionary step that gave humans directed abstract thinking:

When given an opportunity to reach for a piece of food behind a window using objects set in front of them, the monkeys go for the sturdy hooks and canes, avoiding similar ones that are cut in two or made of string or paste, and not wasting their time if an obstruction or narrow opening would get in the way. Now imagine an evolutionary step that allowed the neural programs that carry out such reasoning to cut themselves loose from actual hunks of matter and work on symbols that can stand for just about anything. The cognitive machinery that computes relations among things, places, and causes could then be co-opted for abstract ideas. The ancestry of abstract thinking would be visible in concrete metaphors, a kind of cognitive vestige.

…Human intelligence would be a product of metaphor and combinatorics. Metaphor allows the mind to use a few basic ideas — substance, location, force, goal — to understand more abstract domains. Combinatorics allows a finite set of simple ideas to give rise to an infinite set of complex ones.1

Pinker believes the “stuff of thought” is sub-linguistic, and is only translated to/from a natural language for communication with oneself or others. That is, he does not hold that we “think” in language. But we can’t discuss thinking without distinguishing conscious and subconscious thought. Consciously, we only have access to the customized data channels our subconscious provides us to give us an efficient, logical interface to the world. In humans, a language data channel gives us conscious access to a subconscious ability to form or decompose linguistic representations of ideas. I agree with Pinker that the SSSS does not require language to reason, but language is a critical data channel integrally involved with advanced reasoning, i.e. directed abstract thinking. The SSSS processes many lines of thought across many models with many possible interpretations, which we can think of as being done in parallel (i.e. within conscious awareness) or in rotation (i.e. under conscious focus). But because language reduces thought to a single stream, it provides a very useful way to simplify the logical processing of the SSSS down to one stream that can be put to action or used to communicate with oneself or others. Also, language is a memory aid and helps us construct more complex chains of abstract thought than could easily be managed without it, in much the same way writing amps up our ability to build longer and clearer arguments than can be sustained verbally. So the linguistic work of the SSSS, i.e. conscious thought, works exclusively with natural language, but most of the real work (computationally speaking) of language is done subconsciously by processes that map meaning to words and words to meaning. Pinker somewhat generically calls the subconscious level of thinking “mentalese”, but this word is very misleading because it suggests a linguistic layer underlies reasoning when it doesn’t.
Language processing is done by a specialized language center that feeds both natural language and its meaning to/from our conscious minds (the SSSS). This center uses algorithms that can only handle languages that obey the Universal Grammar (UG) Noam Chomsky described. But the language center does no reasoning; reasoning is a function of the SSSS, for which natural language is a tool that helps broker meanings.

So let’s consider metaphor again. The SSSS reasons with propositions built on representations that are themselves ultimately generalizations about the world. Metaphor is a generalized use of generalizations. It is a powerful tool of inductive reasoning in its own right that can help explain causation by analogy independent of its use in language. But language does make extensive use of metaphorical words and idioms as a tool of reasoning because a metaphor implies that explanations about the source will apply to the target. And more broadly, metaphors, like all ideas, are relational, defined in terms of each other, and ultimately joined to physical phenomena to anchor our sense of meaning. I agree with Pinker that metaphor provides a useful way to create words and idioms for ideas new to language and that these metaphors become partly or wholly vestigial when words or idioms are understood independent of their metaphorical origin. The words manipulate and handle derive from the skillful use of hands and yet are also applied to skillful use of the mind. Many mental words have physical origins and often retain their physical meanings, but we use them mentally without thinking of the physical meaning. But metaphorical reasoning is also well supported by language just because it is a powerful explanatory device.

An important consequence of directed abstract thinking and language is that humans have a vastly larger inventory of representations or concepts with which they can reason than other animals. We have distinct words for a small fraction of these, and most words are overloaded generalizations that we apply to a range of concepts we can actually distinguish more finely. We distinguish many kinds of parts and objects in our natural and artificial environments and many kinds of abstract concepts like health, money, self, and pomposity.

But what about language, tool use, and self-reflection? No one could successfully argue that chimps could do these as well as we do if only they had generalized abstract thought. While generalized abstract thought is the underlying breakthrough that opened the floodgates of intelligence, it has co-evolved with language, with manipulating hands and the wherewithal to use them, and with a strong sense of self. Many genetic changes now separate our intellectual reach from that of our nearest relatives. Any animal can generalize from a strategy that has worked before and apply it again in similar circumstances, but only humans can reason at will about anything to proactively solve problems. Language magically connects words and grammar to meanings for us through subconscious support, but we are most familiar with how we consciously use it to represent and manipulate ideas symbolically. We can’t communicate most abstract ideas without using language, and even to develop ideas in our own minds through a chain of reasoning, language is invaluable. Though our genetic separation from chimps is relatively small and recent, the human mind takes a second-order qualitative leap into the ideal world that gives us unlimited access to ideas in principle, because all ideas are relationships.

An overview of evolution and the mind

[Brief summary of this post]

The human mind arose from three somewhat miraculous breakthroughs:

1) Natural selection, a process dating back about 2 billion years that reshapes lifeforms through adaptations in response to new environmental challenges

2) Animal minds, which opened up a new branch of reality: imagination. Feedback led to computation and representation, which enabled animals to flourish.

3) Directed abstract thinking, the special skill that lets people abstract away from the here and now to the anywhere and anywhen with great facility, giving us unlimited access to the world of imagination.

Of the four billion years we have spent evolving, about 600 million years (about 15%) have been as animals with minds, and at most 4 million years (about 0.1%) as human-like primates. That brief 4-million-year burst may have changed 1% to 5% of our genes, which numerically is just fine-tuning of already well-established bodies and minds. Animals diverged into over a million species, but the appearance of directed abstract thinking in humans changed the playing field. Humans could survive not just in one narrow ecological niche but in many niches, potentially flourishing anywhere on earth and ultimately squeezing out nearly all animal competition our size or bigger. Other mental capacities coevolved with and help support directed abstract thinking, like 3-D color vision, face recognition, generalized use of hands, language, and sophisticated cognitive skills like reasoning with logic, causation, time, and space. Directed abstract thinking is a risky evolutionary strategy because it can be used for nonadaptive purposes, such as the contemplation of navels, or even spiraling into analysis paralysis. To keep us on the straight and narrow, we have been equipped with enhanced senses and emotions that command our attention more than those found in other animals, for things like love, hate, friendship, food, sex, etc. The more pronounced development of sexual organs and behaviors in humans relative to other primates is well known12, but the reasons are still speculative. I am suggesting one reason is to motivate us to pursue evolutionary goals (notably survival and reproduction) despite the distractions of “daydreaming”. Books, movies, TV, the internet, and soon virtual reality threaten our survival by fooling our survival instincts with simulacra of interactions with reality.

The mind is integrally connected to the mechanisms of life, so we have to look back to how life evolved to see why minds arose. While we don’t know the details of how life emerged, the latest theories fill in some missing links better than before. Deep-sea hydrothermal vents3 may have provided the necessary precursors and stable conditions for early life to develop around 4 billion years ago, including at least these four:

(a) carbon fixation direct from hydrogen reacting with carbon dioxide,
(b) an electrochemical gradient to power biochemical reactions that led to ATP (adenosine triphosphate) as the store of energy for biochemical reactions,
(c) formation of the “RNA world” within iron-sulfur bubbles, where RNA replicates itself and catalyzes reactions,
(d) the chance enclosure of these bubbles within lipid bubbles, and the preferential selection of proteins that would increase the integrity of their parent bubble, which eventually led to the first cells

From this point, life became less dependent on the vents and gradually moved away. These steps came next:

(e) expansion of biochemical processes, including use of DNA, the ability of cells to divide and the formation of cell organelles by capture of smaller cells by larger,
(f) a proliferation of cells that led eventually to LUCA, the “Last Universal Common Ancestor” cell about 3.5 billion years ago,
(g) multicellular life, which independently arose dozens of times, but most notably in fungi, algae, plants and animals about 1 billion years ago, and
(h) the appearance of sexual reproduction, which has also arisen independently many times, as a means of leveraging genetic diversity in heterogeneous environments4 and resisting parasites5. Whatever the reason, we have it.

The net result was the sex-based process of natural selection that Darwin identified. Lifeforms now had a biochemical capacity to encode feedback from the environment into genes that could express proteins improving their chances of survival.

Larger multicellular life diverged along two strategies: sessile and mobile. Plants chose the sessile route, which is best for direct conversion of solar energy into living matter. Animals chose mobility, which has the advantage of invincibility if one is at the top of the food chain, but the disadvantage of requiring complex control algorithms. Animals further down the food chain are more vulnerable but require less sophisticated control. But how exactly did animals evolve the kind of control they need for a mobile existence? Sponges6 are the most primitive animals, having no neurons or indeed any organs or specialized cells. But they have animal-like immune systems and some capacity for movement in distress. Cnidarians (like jellyfish, anemones, and corals) feature diffuse nervous systems with nerve cells distributed throughout the body without a central brain, but often featuring a nerve net that coordinates movements of a radially symmetric body. What would help animals more, if it were possible, is an ability to move to food sources in a coordinated and efficient way. The radial body design seems limiting in this regard and may be why all higher animals are bilateral (though some, like sea stars and sea urchins, have bilateral larvae but radial adults). Among the bilateria, which arose about 550-600 million years ago, nearly all developed single centralized brains, presumably because this helps them coordinate their actions more efficiently, excepting a few invertebrates like the octopus, which has a brain for each arm and a centralized brain to loosely administer them. Independent eight-way control of arms comes in handy for an octopus; with practice, we can use our limbs independently in limited ways, but our attention can only focus on one at a time.

But how do nerves work, exactly? While we understand some aspects of neural function in detail, exactly how nerves accomplish what they do is still mostly unknown. Our knowledge of the mechanisms breaks down beyond a certain point, and we have to guess. But we can see the effects that nerves have: nerves control the body, and the brain is a centralized network of nerves that controls the nerves that control the body. The existence of nerves and brains, and indeed higher animals, stands as proof that it is physically possible for a creature to move to food sources in a coordinated and efficient way, and indeed to enhance its chances of survival using centralized control. We can thus safely conclude, without any idea how they work, that the overall function of the brain is to provide centralized, coordinated control of the body.

For the most part, I will deal with the brain as a black box that controls the body and try to unravel its logical functions without too much regard as to its physical mechanisms. I will, however, try to be careful to take into account the constraints the brain’s structure entails. For example, we know brains must be fast and work continuously to be effective. To do this, they must employ a great deal of parallel processing to make decisions quickly. But let’s focus first on what they must do to control the body rather than how they do it.

To control a body so as to cause it to locate food sources, avoid dangers, and find mates requires that we start using verbs like “locate,” “avoid,” and “find”. We know minds can do these kinds of things while rocks and streams can’t, but how can we talk about them objectively, independent of the idea of minds? By observing their behavior. An animal’s body can move toward food as if it had a crystal ball predicting what it would find. It seems to have some way of knowing in advance where the food will be and animating its body so as to transport itself there. If rocks and streams can’t do it, how can animals?

The brain operates with a feedback loop of sensing, computing and acting. From an information standpoint, these steps correspond to data inputs, data processing, and data outputs. This is the crux of the computational theory of mind. When we speak of computation in this context, we are not referring to digital computation with 1’s and 0’s, but to any physical process that accomplishes information management. Information can be representational or nonrepresentational. Nonrepresentational information is just data that has value to the process that uses it. Raw sound or image data is nonrepresentational, as is much of the information supporting habitual behavior. Probably most of the information managed by the brain is nonrepresentational, but much of the information consciousness uses is representational. Representational information is grouped into concepts (e.g. objects) that describe essential and important characteristics of referents. Logical operations performed on the references are later applied back to the referents. For example, we recognize objects in raw image data by matching characteristics to our remembered representations of the objects.
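The sensing, computing, and acting loop described above can be sketched in a few lines. This is only an illustrative toy under invented assumptions (a one-number world, placeholder functions `sense`, `decide`, and `act`); it claims nothing about neural mechanisms:

```python
# A toy sensing -> computing -> acting feedback loop. The world model,
# threshold, and function names are invented placeholders.

def sense(world):
    # Data input: distill raw environment data into representational form.
    return {"food_visible": world["food_distance"] < 5}

def decide(percept):
    # Data processing: map the percept to an action.
    return "approach" if percept["food_visible"] else "wander"

def act(world, action):
    # Data output: the action changes the world, feeding back into sensing.
    if action == "approach":
        world["food_distance"] -= 1
    return world

world = {"food_distance": 3}
for _ in range(3):  # three iterations of the loop close the distance
    world = act(world, decide(sense(world)))
print(world["food_distance"])  # -> 0
```

Each pass through the loop is one cycle of inputs, processing, and outputs, with the outputs altering what the next cycle senses.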

At every moment the brain causes each part of the body to perform (or not perform) an action to produce the coordinated movement of the body toward a goal, such as locating a food source. Because there is only one body, and it can only be in one place at a time, the central brain must function as what I call a single-stream step selector, or SSSS, where a step is part of a chain of actions the animal takes to accomplish a goal. If the brain discerns new goals, it must prioritize them, though the body can sometimes pursue multiple goals in parallel. For example, we can walk, talk, eat, blink, and breathe at the same time. As I related in An overview of evolution and the mind, we prioritize goals in response to desires and subjective beliefs, which objectively and computationally are preference parameters that are well tuned to lead us to behaviors that coincide with the objectives of evolution (in the ancestral environment; they are not always so well tuned in modern times).
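What "single-stream step selection" means computationally can be caricatured in a few lines: many goals compete, but only one step is emitted per cycle. The goal names and priority numbers are invented; in the text's terms, the priorities stand in for desires and subjective beliefs:

```python
# Toy single-stream step selector: many candidate goals, one step at a time.
# Lower priority number = more urgent. All names/numbers are illustrative.

def ssss_next_step(goals):
    """Emit the single next step: the most urgent goal wins."""
    priority, name = min(goals)  # tuples compare by priority first
    return name

goals = [(2, "find food"), (1, "avoid predator"), (3, "find mate")]
print(ssss_next_step(goals))  # -> avoid predator
```

Learned parallel behaviors (walking, blinking, breathing) would run outside this selector; only the reasoned stream is serialized.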

While we know the whole brain must function as an SSSS to achieve top-level coordination of the body, this doesn’t mean the SSSS has to be a special-purpose subprocess of the brain. For example, we can imagine building a robot with one overall program that tells it what to do next. But evolution didn’t do it that way. In animal minds, consciousness is a small subset of overall mental processing that creates a subjective perspective that is like an internal projection of the external physical landscape. It is a technique that works very well, regardless of whether other equally good ways of controlling the body might exist. As of now, we know that we can build robots without such a perspective, such as self-driving cars, but their responses are limited to situations they have learned to handle, which is nowhere near flexible enough to handle the life challenges all animals face. Learned behavior and reasoning are the only two top-level approaches to control that have a good degree of flexibility that I know of, but I can’t preclude the possibility of others. But we do know that animals use reasoning, which I believe mandates a simplified proposition-based logical perspective/projection of the world into a top-level portion of the mind that acts as an SSSS.

Brains use a lot of parallel processing. We know this is true for sensory processing because it provides useful sensory feedback in a fraction of a second, yet we know computationally that a non-parallel solution would be terribly slow. Real-time vision, for example, processes a large visual field almost instantly. Evolution will tend to exploit tools at its disposal if they provide a competitive advantage, so many kinds of operations in the brain use parallel processing. Associative memory, for instance, throws a pattern against every memory we have looking for matches. Performed serially, the comparisons behind all the mismatches generated in just a few seconds would probably take longer than our lifetimes, but that’s OK because our subconscious has nothing better to do, and it doesn’t bother our conscious minds with the mismatches. Control of the body is another subconscious task using massively parallel processing. So sensing, memory, and motor control are highly parallel. But what about reasoning?

The SSSS is a subprocess of the brain that causes the body to do just one (coordinated) thing at a time, i.e. a serial set of steps. But while it produces actions serially, this does not prove that reasoning is strictly serial. Propositional logic itself is serial, but we could, in principle, think several trains of thought in parallel and then eventually act on just one of them. My guess, weighing the evidence from my own mind, is that the SSSS and our entire reasoning capacity are in fact strictly serial. Drawing on an analogy to computers, the SSSS has one CPU. It is, however, a multitasking process that uses context switching to shuffle its attention between many trains of thought. In other words, we pursue just one train of thought at a time but switch between many trains of thought about different topics floating around in our heads. Each train of thought has a set of associated memories describing what has been thought so far, what is currently under consideration, and its goals. For the most part, we are aware of the trains we are running. For example, I have trains now for several aspects of what I am writing about, the temperature and lighting of my room, what the birds are doing at my bird feeder, how hungry I am, what I am planning to eat next, what is going on in the news, etc. These trains float at the edge of my awareness competing for attention, but my attention process keeps me on the highest-priority task. But to prioritize them, the attention process has to “steal cycles” from my primary task and cycle through them to see if they warrant more attention. It does that at a low level that doesn’t disturb my primary train of thought too much, but enough to keep me aware of them. When we walk, talk, and chew gum at the same time, our learned behavior guides most of the walking and chewing, but we have to let these activities steal a few cycles from talking.
We typically retain no memory of this low-level supervision the SSSS provides to non-primary tasks and may be so absorbed in our primary task that we don’t consciously realize we are lending some focus to the secondary tasks, but I believe we do interrupt our primary trains to attend to them. However, we are designed to prevent these interruptions from reducing our effectiveness at the primary task, for the obvious reason that quality attention to our primary task is vital to survival. The higher incidence of traffic accidents when people are using cell phones seems to confirm these interruptions. The person we are speaking to doesn’t expect us to be time-sharing them with another activity, which works out well so long as we can drive on autopilot (learned behavior). But when an unexpected driving situation requiring reasoning pops up, we naturally context switch to deal with it; the other party doesn’t realize this and continues to expect our full attention. We may consequently fail to divert enough reasoning power to driving.
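The cycle-stealing arrangement just described can be caricatured as a scheduler: one primary train of thought gets most cycles, while a supervision step periodically polls all trains and may switch the primary. Everything here (the poll interval, urgency numbers, train names) is an invented illustration of serial multitasking, not a model of actual neural timing:

```python
# Toy serial multitasker: one train runs at a time; every few cycles the
# attention process "steals" a cycle to re-prioritize. All values invented.

def run_cycles(trains, n_cycles, poll_every=4):
    """trains: dict of train name -> urgency. Returns a log of cycles."""
    log = []
    primary = max(trains, key=trains.get)  # most urgent train wins
    for cycle in range(n_cycles):
        if cycle % poll_every == poll_every - 1:
            # low-level supervision: re-check priorities, maybe switch
            primary = max(trains, key=trains.get)
            log.append(f"poll->{primary}")
        else:
            log.append(primary)  # one serial step of the primary train
    return log

trains = {"writing": 5, "hunger": 2, "birds": 1}
print(run_cycles(trains, 8))
```

If "hunger" rose above "writing" between polls, the switch would only happen at the next poll, which mirrors how an absorbing primary task delays our response to secondary ones.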

Why wouldn’t the mind evolve a capacity to reason about multiple tasks in parallel? I believe the benefits of serial processing with multitasking outweigh the potential benefits of parallel processing. First and foremost, serial processing allows for constant prioritization adjustments between processes. If processes could execute in parallel, this would greatly complicate deciding how to prioritize them. Having the mind dedicate all of its reasoning resources to the task known to be the most important at that moment is a better use of resources than going off in many directions and trying to decide later which was better to act on. Second, there isn’t enough demand for parallel processing at the top level to justify it. Associative memory and other subconscious processes require parallel processing to be fast enough, but since we only need to do one thing at a time, and our animal minds have been able to keep up with that demand using serial processing, parallel designs just haven’t emerged. While such a design has the potential to think much faster, achieving consensus between parallel trains is costly. This is the too-many-cooks-in-the-kitchen headache groups of people have when working together to solve problems. If the brain has a single CPU instead of many, then parts of that CPU must be centrally located, and since consciousness goes back to the early days of bilateral life, some of those parts must be in the most ancient parts of the brain.

The brain controls the body using association-based and decision-based strategies. Association-based approaches use unsupervised learning through pattern recognition. It is unsupervised in the sense that variations in the data sets alone are used to identify patterns which are then correlated to desired effects. The mind then recognizes patterns and triggers appropriate actions. In this way, it can learn to favor strategies that work and avoid those that fail. While the mind heavily depends on association-based approaches for memory and learning, they do not explain consciousness or the unique intelligence of humans, which results from decision-based strategies.

Reasoning is powered by a combination of association-based and decision-based strategies, but the association-based parts are subsidiary as the role of decision-based strategies is to override learned behavior when appropriate. Decision-based strategies draw logical conclusions from premises either with certainty (deduction) or probability (induction). Reasoning itself, the application of logic given premises, is the easy part from the perspective of information management. The hard part is establishing the premises. The physical world has no premises; it only has matter and energy moving about in different configurations. Beneath the level of reasoning, the mind looks for patterns and distinguishes the observed environment into a collection of objects (or, more broadly, concepts). The distinguished objects themselves are not natural kinds because the physical world has no natural kinds, just matter and energy, but there are some compelling practical reasons for us to group them this way. Lifeforms, in particular, each have a single body, and some of them (animals) can move about. Since animals prey on lifeforms for food, and also need to recognize mates and confederates, an ability to recognize and reason about lifeforms is indispensable. Physically, lifeforms have debatable stability, as their composition constantly changes through metabolism, but that bears little on our need to categorize them. Similarly, other aspects of the environment prove useful to distinguish as objects and by generalization as kinds of objects. Animals chunk data at levels of granularity that prove useful for accomplishing objectives. Grouping information into concepts this way sets the stage for the SSSS to use them in propositions and do logical reasoning. Concepts become the actors in a chain of events and can be said to have “cause and effect” relationships with each other from the “perspective” of the SSSS. 
That is, cause and effect are abstractions defined by the way the data is grouped and behaves at the grouped level that the SSSS can then use as a basis for decisions. In this way, the world is “dumbed down” for the SSSS so it can make decisions (i.e. select actions) in real time with great quality and efficiency despite having just one processing stream.

We experience the SSSS as the focal point of reasoning, the center of conscious awareness, where our attention is overseeing or making decisions. Though it may sound surprising that we are nothing more than processes running in our brains, unless magic or improbable laws of physics are involved, this is the only possible explanation of what we are, and it is consistent with brain studies to date and with computer science theory. The way our conscious mind “feels” to us, more than anything, is information. The world feels amazing to us because consciousness is designed so that important information grabs our attention through all the distractions. Our conscious experiences of vision, hearing, body sense, other senses, and memory are all just ways of interpreting gobs of pure information to facilitate a continuous stream of decisions. The human conscious experience is a big step up from that of animals because directed abstract thinking enables us to potentially conceive of any relationship or system; in particular, it powers our ability to imagine possible worlds, including awareness of ourselves as abstract players in such systems.

The process of mind

[Brief summary of this post]

Let’s say the mind is a kind of computer. As a program, it moves data around and executes instructions. Herein I am going to consider the form of the data and the structure of the program. I have proposed that from the top down the mind is controlled by a process I call the SSSS, for single-stream step selector. I have argued that this process uses a single CPU, i.e. one thread or train of thought, but an unlimited number of multitasked processes, though it is only actively pursuing a handful of these at a time. And I have argued that top-level decisions use reason, either inductive or deductive logic, on propositions, which are simplifications or generalizations about the world, guided by desires, which are instinctive preferences understood consciously as preferential propositions. Propositions are represented using concepts framed by models, both of which we keep in our memory.

To decompose this further working from the top down let’s consider how a program works. First, it collects data, aka inputs. Then it does some processing on the data. Third, it produces outputs. And last, it repeats. For a service-oriented program, i.e. one that provides a continuous stream of outputs for a shifting stream of inputs, this endless iteration of the central processing loop, which for minds is heavily driven by outputs feeding back to inputs, forms the outer structure of the program. I call the loop used by the SSSS the RRR loop, for recollection, reasoning, and reaction.
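As a rough illustration, the RRR loop can be written as three phase functions chained inside a service-style loop. The functions and the tiny "memory" here are invented placeholders; in the real mind each phase hides enormous subconscious machinery:

```python
# Toy RRR loop: recollection, reasoning, reaction. The memory contents
# and decision rule are invented placeholders for illustration only.

def recollect(stimulus, memory):
    # Recollection: recall what this stimulus means, if anything.
    return memory.get(stimulus, "unknown")

def reason(meaning):
    # Reasoning: draw a conclusion from the recalled meaning.
    return "eat" if meaning == "food" else "inspect"

def react(action):
    # Reaction: hand the selected step off to the body.
    return f"body performs: {action}"

memory = {"red berry": "food"}
for stimulus in ["red berry", "shiny pebble"]:  # outputs feed back as inputs
    print(react(reason(recollect(stimulus, memory))))
```

The loop structure matters more than the toy contents: each cycle's reaction changes the situation the next cycle recollects from.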

Before I discuss these in some detail, I want to say something about the data and instructions. If I say I’m losing my mind, I’m talking about my memory, not my faculties, which I can take for granted. All of the “interesting” parts are in the data, from our past experiences to our understanding of the present to our future plans. The instructions our brain and body follow are, by comparison, low-level and mostly hard-wired. The detailed plans that let us play the piano or speak a sentence are stored in memory. Built-in instructions support memory retrieval, logical operations, and transmission of instructions to our fingers or mouths, but any higher-level understanding of the mind relates to the contents of memory. Our memory is inconceivably vast. At any one time, we can consciously manage just a handful of data references and an impression of the data to which they refer. But that referenced data itself in turn ultimately refers to all the data in our minds, everything we have ever known, and to some degree everything everyone has ever known. Because “everything” means representations of everything, and since representations are generalizations that lose information, much has been lost. Most, no doubt. But what remains is still a massive amount of useful information, distilled from our personal experience, our interactions with others, culture, and a genetic heritage of instinctive impressions that develop into memory as we grow. Note that genetically-based “memory” is not yet memory at birth but a predisposition to develop procedural memory (e.g. breastfeeding, walking) or declarative memory (e.g. concepts, language).

One more thing before I go into the phases. We consciously control the SSSS process; making decisions is the part of our existence we identify with most strongly. But the SSSS process is supported by an incalculably large (from a conscious perspective) amount of subconscious thinking. Our subconscious does so much for us that we are already very smart before we consciously “lift a finger”. This predigested layer is what makes explaining the way the mind works so impenetrable: how can you explain what just appears by magic? Yes, subjectively it is magic: conscious awareness and attention form a subprocess of the mind that is constrained to see just the outer layers of thought that support the SSSS, without all the distraction of the underlying computations. But objectively we can deduce much about what the subconscious layers must be doing and how they must be doing it, and we now have machine learning algorithms that approximate some of what the subconscious does for the SSSS in a very rudimentary way. So from a computational standpoint, all three phases of the SSSS are almost entirely subconscious. All the conscious layer is doing is providing direction — recall this, reason that, react like so — and the subconscious makes it happen with a vast amount of hidden machinery.

Recollections can be either externally or internally stimulated, which I call recognition-based or association-based recall. Recognition means identifying things in the environment similar to what has been seen before, a process known in psychology as apperception. Sensory perception provides a flood of raw information that can only be put to use by the SSSS to aid in control if it can be distilled into a propositional form, which is done by generalizing the information into concepts. The mind first builds simplified generic object representations that require no understanding about what is being sensed. For example, vision processing converts the visual field into a set of colored 3-D objects adjusted for lighting conditions, without trying to recognize them. These objects must have a discrete internal representation headed by an object pointer and containing the attributes assigned to the object. For example, if we identify a red sphere, then a red sphere object pointer contains the attributes red, sphere, and other salient details we noticed. Such a pointer lets us distinguish a red sphere from a blue cube, i.e. that red goes with the sphere and blue goes with the cube, which is called the segregation problem in cognitive science, or sometimes the binding problem (technically subproblem BP1 of the binding problem). Being able to create distinct mental objects at will for anything we see that we wish to think about discretely is critical to making use of the information. Note that in this simplified example I have called out two idealized attributes, red and sphere, but this processing happens subconsciously, so it would be presumptuous (and wrong) to infer that it identifies the red sphere simply by using those two attributes. More on that below.
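The idea of an object pointer heading a bundle of attributes can be sketched as a data structure. This is a deliberately crude illustration of how discrete identity addresses the segregation problem (red stays bound to the sphere, blue to the cube); the class and attribute names are my own:

```python
# Toy "object pointer": a discrete handle binding attributes to one
# perceived object. Representation is illustrative, not a neural claim.

class ObjectPointer:
    _next_id = 0

    def __init__(self, **attributes):
        self.id = ObjectPointer._next_id  # discrete identity for this object
        ObjectPointer._next_id += 1
        self.attributes = attributes      # attributes bound to this object only

scene = [ObjectPointer(color="red", shape="sphere"),
         ObjectPointer(color="blue", shape="cube")]

# Red stays with the sphere and blue with the cube, because each
# attribute set hangs off its own pointer:
print([(o.attributes["color"], o.attributes["shape"]) for o in scene])
```

Without some such per-object binding, "red", "blue", "sphere", and "cube" would be a loose pool of features with no fact of the matter about which goes with which.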

The next step of recognition is matching perceived objects to things we have seen before. This presupposes we have memories, so let’s just assume that for now. Memory acts like a dictionary of known objects. The way we associate perceived objects to memories, technically called pattern recognition, is solved by brute force: the object is simultaneously compared to every memory we have, trying to match the attributes of that object against the attributes of every object in memory. Technically, to do this comparison concurrently means doing many comparisons in parallel, which probably means many neural copies of the perceived object are broadcast across the brain looking for a match. Nearly all these subconscious attempts to match will fail, but if a match is found then consciously it will just seem to pop out. We know pattern recognition works this way in principle because it is the only way we could recognize things so quickly. Search engines and voice recognition algorithms use machine learning algorithms that function in a similar way, which is sometimes called associative memory. While we don’t know much yet about brain function, this explanation is consistent with brain studies and what we know about nerve activation.
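Brute-force associative matching can be sketched as follows. A serial loop stands in for the brain's massively parallel broadcast, and the stored objects and attributes are invented examples:

```python
# Toy associative memory: the percept is compared against every stored
# object; only matches "pop out". Contents are invented examples.

memory = {
    "pool 3-ball": {"red", "sphere", "small", "shiny"},
    "christmas ornament": {"red", "sphere", "hook"},
    "blue cube": {"blue", "cube"},
}

def recognize(percept):
    """Return every stored object whose attributes all appear in the percept.
    (The loop stands in for parallel comparison against all memories at once.)"""
    return [name for name, attrs in memory.items() if attrs <= percept]

print(recognize({"red", "sphere", "small", "shiny", "numbered"}))  # -> ['pool 3-ball']
```

The mismatches ("christmas ornament" lacks its hook here, "blue cube" shares nothing) are silently discarded, just as our subconscious discards failed comparisons without troubling awareness.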

After a match, our associative memory returns the meaning of the object, which is analogous to a dictionary definition, but while any given dictionary definition uses a fixed set of words, a memory returns a pointer connected to other memories. So the meaning consists of other objects and relationships from the given object to them. So when we recognize our wallet, the pointer for our wallet connects it to many other objects, e.g. to a generic wallet object, to all the items in it, and to its composition. Each of these relationships has a type, like “is a”, “is a part of”, “is a feature of”, “is the composition of”, “contains”, etc. This is the tip of the iceberg because we also have long experience with our wallet, more than we can remember, much of which is stored and can potentially be recalled with the right trigger.
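The typed-relationship structure of meaning can be sketched as a small graph. The relation types ("is a", "contains", and so on) come from the text; the wallet's particular contents are invented placeholders:

```python
# Toy meaning graph: recognizing an object returns typed links to other
# objects, not a fixed definition. Entries are invented placeholders.

meaning_graph = {
    "my wallet": [
        ("is a", "wallet"),
        ("contains", "cash"),
        ("contains", "library card"),
        ("is the composition of", "leather"),
    ],
}

def meaning_of(obj, relation=None):
    """Follow the object's typed links; optionally filter by relation type."""
    links = meaning_graph.get(obj, [])
    if relation is None:
        return links
    return [target for rel, target in links if rel == relation]

print(meaning_of("my wallet", "contains"))  # -> ['cash', 'library card']
```

Unlike a dictionary definition's fixed wording, each target here is itself a node with its own links, so "meaning" fans out indefinitely through the graph.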

A single recognition event, the moment an object is compared against everything we know to find a match, is itself a simple hit or miss: our subconscious either finds relevant match(es) or it doesn’t. However, what we sense at the conscious level is a complex assembly of many such matches. There are many reasons for this, and I will list a few, but they stem from the fact that consciousness needs more than an isolated recognition event can deliver:
1. The attributes on which we base recognition are themselves often products of recognition. Our experience with substances leads us to evaluate the composition of an object based on texture, color, and pattern. Our experience with letters leads us to evaluate them based on lines, curves, and enclosed areas. Our experience with shapes leads us to evaluate them based on flatness or curviness, protuberances, and proportions. This kind of low-level recognition is based on a very large internal database of attributes comprehensible only to our internal subconscious matching process (beyond just “red” or “sphere”) that is built from a lifetime of experience and not from rational idealizations we concoct consciously. So size, luminosity, depth, tone, context, and more trigger many subconscious recognition events from our whole life experience. These subconscious attributes derive from what is called unsupervised learning in machine learning circles, meaning that they result from patterns in the data and not from a qualitative assessment of what “should” be an attribute.
2. Each subset of the object’s attributes represents a potentially matchable object. So red spheres can also match anything red or any sphere. Every added attribute doubles the number of combinations and adds a new subset with all the attributes, so five attributes have 31 combinations and six have 63. A small shiny red sphere with a small white circle having a black “3” in it has six (named) attributes, and we will immediately recognize it as a pool ball, specifically the 3-ball, which is always red. Our subconscious does the 63 combinations for us and finds a match on the combination of all six attributes. Without the white circle with the “3”, the sphere could be a red snooker ball, a Christmas ornament, or a bouncy ball, so these possibilities will occur to us as we study the red sphere. As noted from my comments on machine learning above, the subconscious is not really using these six attributes per se but draws on a much broader and more subtle set of attributes generalized from experience. But it still faces a subset matching problem that requires more recognition events.
3. Reconsideration. We’re never satisfied with our first recognition; we keep doing it and refining it and verifying it, quickly building up a fairly complex network of associations and likelihoods, which our subconscious distills down for us to the most likely recognized assembly. So a red sphere among numbered pool balls will be seen as the 3-ball even if the “3” is hidden because the larger context is taken into consideration. A red ball on a Christmas tree will be seen as an ornament. So long as objects fit into well-recognized contexts, the subconscious takes care of all the details, though this leaves us somewhat vulnerable to optical illusions.
Although the possible attribute combinations from approach (2) grow exponentially to infinity, our experience-based memory of encountered attributes using approach (1) constrains that growth. So familiar objects like phones and cars, composed of many identifiable sub-objects and attributes seen in countless related variations over the years, are instantly identified and confirmed using approach (3) even if they look slightly different from any seen before.
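The combinatorics in (2) can be made concrete with a short sketch. The attribute names and the table of “known objects” are illustrative assumptions; the arithmetic (2**n - 1 non-empty subsets) matches the figures above.

```python
# Each non-empty subset of an object's attributes is a potentially
# matchable object: n attributes yield 2**n - 1 subsets, so five give 31
# and six give 63. The known-object table below is purely illustrative.
from itertools import combinations

def nonempty_subsets(attrs):
    for r in range(1, len(attrs) + 1):
        for combo in combinations(sorted(attrs), r):
            yield frozenset(combo)

known = {
    frozenset(["red", "sphere"]): "red snooker ball",
    frozenset(["red", "sphere", "shiny"]): "Christmas ornament",
    frozenset(["red", "sphere", "shiny", "small",
               "white circle", "numeral 3"]): "3-ball",
}

seen = {"red", "sphere", "shiny", "small", "white circle", "numeral 3"}
subsets = list(nonempty_subsets(seen))
print(len(subsets))  # 63 combinations for six attributes

# Every matching subset surfaces as a possibility; the fullest match wins.
matches = [known[s] for s in subsets if s in known]
print(matches)
```

As the text notes, partial matches (red snooker ball, ornament) occur to us alongside the best match on all six attributes, the 3-ball.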

Our subconscious recognition algorithms are largely innate, e.g. they know how to identify 3-D objects and assemble memories. But some are learned. Linguistic abilities, which enable us to remember not only things but also the words that go with them and ways to compose those words into sentences, are chief among these for humans. Generalization, mechanics (knowledge of motion), math (knowledge of quantity), psychology (knowledge of behavior), and focusing attention on what is important are other examples where innate talents make things easy for us. We can also train our subconscious procedural memory by learning new behaviors. In this case, we consciously work out what to do, practice it, and acquire the ability to perform it subconsciously with little conscious oversight. I allot both innate and learned algorithms to the recollection phase.

Beyond recognition, we recollect using what I call association-based recall. This happens when thoughts about one thing trigger recollection of related things. This is pretty obvious — our memory is stirred either by seeing something and recognizing it or because thinking about one thing leads to another. I already discussed how our subconscious does this to draw memories together through reconsideration, but here I am referring to when we consciously use it to elaborate on a train of thought. We can also conjure up seemingly random memories about topics unrelated to anything we have been thinking about. While subconscious and conscious free association are vital to maintaining our overall broad perspective, it is the conscious recognitions and associations that drive the reasoning process to make decisions. And in humans, our added ability to consciously direct abstract thinking lets us pursue any train of thought as far as we like.

The second phase, reasoning, is the conscious use of deductive and inductive logic. This means applying logical operations like and, or, not, and if…then to the propositions under attention. Deduction produces conclusions that necessarily follow from premises while induction produces conclusions that likely follow from premises based on prior experience. Intuition (which I consider part of the recollection phase) is very much like a subconscious capacity for induction, as it reviews our prior experience to find good inferences. But that review uses subconscious logic hidden from us, which we can generally trust because it has been reliable before, but not trust too much because it is localized data analysis that doesn’t take everything into account the way reasoning can. Recollection and reasoning form an inner RR loop that cycles many times before generating a reaction, though if we need a very quick response we may jump straight from intuition to reaction. Although there is only one RRR loop, the mind multitasks, swapping between many trains of thought at once. This comes in handy when planning what to do next as the mind pursues many possible futures simultaneously to find the most beneficial one. Those that seem most likely draw most of our attention while the least likely hover at the periphery of our awareness.
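As a toy illustration only, the inner RR loop can be caricatured as a loop that cycles recollection and reasoning until some candidate reaction is confident enough, or until the cycle budget runs out. Every function name, score, and threshold here is hypothetical.

```python
# Toy caricature of the RRR loop: an inner recollection-reasoning (RR)
# cycle refines candidates until one is confident enough to act on.
# All names and numbers are hypothetical illustrations.
def recollect(situation, memory):
    # Recognition and association: return candidate interpretations.
    return [m for m in memory if m["cue"] in situation]

def reason(candidates):
    # Rank candidates; confidence grows with supporting evidence.
    if not candidates:
        return None, 0.0
    best = max(candidates, key=lambda c: c["support"])
    return best["action"], best["support"]

def rrr_loop(situation, memory, threshold=0.8, max_cycles=10):
    action, confidence = None, 0.0
    for _ in range(max_cycles):                   # inner RR loop
        candidates = recollect(situation, memory)
        action, confidence = reason(candidates)
        if confidence >= threshold:
            break
        situation = situation | {"reconsidered"}  # gather more context
    return action                                 # reaction phase

memory = [
    {"cue": "red sphere", "action": "treat as 3-ball", "support": 0.9},
    {"cue": "reconsidered", "action": "look closer", "support": 0.5},
]
print(rrr_loop({"red sphere"}, memory))
```

A confident match exits immediately, mirroring the quick jump from intuition to reaction; an unfamiliar situation cycles through reconsideration until time pressure forces whatever action ranks best.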

Just as recollection is mostly subconscious but consciously steered, so too does reasoning leverage a lot of subconscious support, much of which itself leverages memory to hold the propositions and models behind all the work it is multitasking. For example, most of our more common deductions don’t need to be explicitly spelled out because habitual use of plans used many times before lets us blend learned behavior with step by step reasoning to spell out only the details that differ from past experience. So intuition basically tells us, “I think you’ve done this kind of thing before, I’ve got this,” and we give it a bit more rope. But the top level, where reasoning occurs, is entirely conscious and the central reason consciousness exists. A subprocess of the brain that pulls all the pieces together and considers the logical implications of all the parts is extremely helpful for handling novel situations. It turns out that nearly every situation has at least some novel aspects, so we are constantly reasoning.

The third phase of the RRR loop is reaction. Reaction has two components, deciding on the reaction and implementing it. The decision itself is the culminating purpose of the mind and especially the conscious mind, which only exists to make such top-level decisions. The mind considers many possible futures before settling on an action that it hopes will precipitate one of them. The decision is simply the selection of the possible future (or, more specifically, one step toward that future) that the SSSS algorithm has ranked as the optimal one to aim for. That ranking process considers all the beliefs and desires the SSSS is monitoring, both from rational inputs and irrational feelings and intuitions. Selecting the right moment to act is one of the factors managed by that consideration process, so it follows logically from the reasoning process. While there is some pressure to reconsider indefinitely to refine the reaction, there is also pressure to seize the opportunity before it slips away or hampers one’s ability to move on to other decisions. Most decisions are routine, so we are fairly comfortable using tried and true methods, but we spend more time on novel circumstances.

While the SSSS decides on, or at least finalizes, the reaction, it delegates the implementation or physical reaction to the subconscious to carry out as this part doesn’t require further decision support. Even the simplest actions require a lot of parallel processing to control the muscles to perform the action, and the conscious mind is just not up to that or even wired for it. So all of our reactions, in the last few milliseconds at least, leverage innate or habituated behavior. As we execute a related chain of reactions, we will continue to provide conscious oversight to some degree, but will largely expect learned behavior to manage the details. This is why studies show that the brain often commits to decisions before we consciously become aware of them, an argument that has been used to suggest we don’t have free will since the body acts “on its own”. All this demonstrates is that we have delegated to our subconscious minds the execution of plans we previously blessed. Of course, if we don’t like the way things are turning out, we just consciously override them. In this way, walking, for instance, becomes second nature and doesn’t require continual conscious focus. But while not in focus, all actions within conscious awareness remain under the control of the RRR loop of the SSSS process, as is necessary for overall coordinated action. Some actions not normally within the range of conscious control, like pulse rate and blood pressure, can be consciously managed to a degree using biofeedback. It is reasonable for us to lack conscious control over housekeeping tasks that don’t benefit from reason. This is why the enteric nervous system, or “gut brain”, can function pretty well even if the vagus nerve connecting it to the central nervous system is severed1.

Recollection, essential for all three phases of the RRR process, assumes we have the right kind of knowledge stored in our memory, but I did not say how it got there. Considering that our memory is empty when we begin life, we must be able to add to our store of memory very frequently early in life to develop an understanding of what we are doing. Once mature, the ability to add to our memory lets us keep a record of everything we do and to expand our knowledge to adapt to changes, which have become frequent in our fast-paced world. From a logical perspective, then, we can conclude that the brain would be well served by committing to memory every experience that passes through the RRR loop. However, one can readily calculate that the amount of information passing through our senses would fill any storage mechanism the brain might use in a few hours or days at most. So we can amend the strategy to this: attempt to remember everything, but prioritize remembering the most important things.
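The amended strategy, attempt to remember everything but prioritize the most important, resembles a bounded store with importance-based eviction. A minimal sketch, with made-up experiences, importance scores, and capacity:

```python
# Sketch of "remember everything, but prioritize": a bounded store that
# admits every experience but evicts the least important when full.
# Importance scores and capacity are illustrative assumptions.
import heapq

class PrioritizedMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []  # min-heap of (importance, experience)

    def remember(self, experience, importance):
        heapq.heappush(self.heap, (importance, experience))
        if len(self.heap) > self.capacity:
            heapq.heappop(self.heap)  # forget the least important

    def contents(self):
        return sorted(self.heap, reverse=True)

mem = PrioritizedMemory(capacity=3)
for exp, imp in [("ate breakfast", 0.1), ("met a bear", 0.95),
                 ("tied shoes", 0.05), ("found water", 0.8),
                 ("heard birdsong", 0.2)]:
    mem.remember(exp, imp)

print([e for _, e in mem.contents()])  # the most important survive
```

Every experience is admitted, but a finite store forces the trivial ones out, which is the behavior the strategy above predicts.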

This is a pretty broad mandate. Without some knowledge of the brain’s memory storage mechanisms, it will be hard to deduce more details about the process of mind with much confidence. It is certainly not impossible, and I am prepared to go deeper, but now is a good time to introduce what we do know about how the memory works because brain research has produced some important breakthroughs in this area. While the history of this subject is fascinating and mostly concerns a few patients with short- and long-term memory loss, I will jump to the most broadly-supported conclusions, which are mostly well-known enough now to be considered common knowledge. In particular, we have short-term and long-term memory, which differ principally in that short-term memory lasts from moments to minutes, while long-term memory lasts indefinitely. We don’t consciously differentiate the two because the smooth operation of the mind benefits from maintaining the illusion of remembering everything. We know gaps can develop in our memory quickly, but we come to accept them because they have a limited impact on our decisions going forward, which is the role of the conscious mind.

We understand long-term memory better. If you picture the brain you see the wrinkled neocortex, most of which is folded up beneath the surface. But long-term memories are not formed in the neocortex. After all, every vertebrate can form long-term memories, but only mammals have a neocortex. Long-term memory comes in two forms stored very differently in the brain. Procedural memory (learned motor skills) is stored outside the cortex in the cerebellum and other structures, and is inaccessible to conscious thought, though we can, of course, employ it. Declarative memory (events, facts, and concepts) is created in the hippocampus, part of the archicortex (called the limbic system in mammals), which is the earliest evolved portion of the cortex. This kind of long-term memory is rehearsed by looping it via the Papez circuit from the hippocampus through to the medial temporal lobe and back again. After some iterations, the memory is consolidated into a form that joins the parts together (solving the binding problem mentioned above) and is stored in the medial temporal lobe using permanent and stable changes in neural connections. Over the course of years the memory is gradually distributed to other locations in the neocortex so that recent memories are mostly in the medial temporal lobe and memories within twelve years have been maximally distributed elsewhere2. For the most part, I will be focusing on declarative memory (aka explicit memory, as opposed to implicit procedural memory) as it is the cornerstone of reasoning, but we can’t forget that the rest of the brain and nervous system contribute useful impressions. For example, the enteric nervous system or “gut brain” (noted above) generates gut feelings. The knowledge conveyed from the gut is now believed to arise from its microbiome. This show of “no digestion without representation” is our gut bacteria chipping in their two cents toward our best interests.

What about short-term memory? It is sometimes called working memory because long-term memory needs to be put into short-term memory to be consciously available for reasoning. In humans, we know it is mostly managed in the prefrontal lobe of the neocortex. Short-term memory persists for about 10 to 20 seconds but can be extended indefinitely by rehearsal, that is, repeating the memory to reinforce it. In this way, it seems short-term memories can be kept for minutes without actually forming long-term memories. The capacity of active short-term memory is thought to be about 4 to 5 items, but can be enlarged by chunking, which is grouping larger sets into subsets of three to four. Short-term memory being kept available by rehearsal can extend this, even though only 4 to 5 items are consciously available at once.
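Chunking is easy to illustrate: ten digits exceed a 4-to-5-item span, but grouped into chunks of three or four they fit comfortably. A minimal sketch, using the rough span figures from the text:

```python
# Sketch of chunking: grouping items into subsets of three to four brings
# an over-long list back within the 4-to-5-item span of working memory.
def chunk(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

digits = "4155552671"       # ten items: beyond a 4-to-5-item span
chunks = chunk(digits, 3)
print(chunks, len(chunks))  # four chunks: within span
```

This is essentially how phone numbers are formatted: not to help the phone, but to help the rememberer.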

While reasoning probably only considers propositions encoded in prefrontal short-term memory, the other data channels flowing into conscious awareness provide other forms of short-term memory. Sensory memory registers provide brief persistence of sensory data. Visible persistence (iconic memory) lasts a fraction of a second, one second at most, aural persistence (echoic memory) up to about four seconds, and touch persistence (haptic memory) for about two seconds. Senses are processed into information such as objects, sounds, or textures, and a short-term memory of this sensory information independent of prefrontal memory seems to exist but has not been extensively studied. Sensory and emotional data channels that provide a fairly constant message (like body sense or hunger) can also be thought of as a form of short-term memory because the information they carry is always available to be moved into prefrontal short-term memory.

Short-term and long-term memory were first proposed in Atkinson and Shiffrin’s (1968) multi-store model. Baddeley and Hitch introduced a more complex model they called working memory to explain how auditory and visual tasks could be done simultaneously with nearly the same efficiency as if done separately. From a top-down perspective, the brain has great potential to process tasks in parallel but ultimately must reconcile any parallel processing into a single stream of actions. Processing sensory signals, however, is not the same as reacting to those signals, so it makes sense that we can process them in parallel and that some short-term memory capacity in each would facilitate that. If the mechanisms the brain uses to maintain short-term memories of sensory signals and prefrontal working memory involve closed loops that rehearse or cycle the memories, giving them enough longevity that the mind has time to manipulate them in various ways, then it makes sense that the brain would have just a handful of such closed loops working closely with prefrontal working memory to manage all short-term memory needs. Alan Baddeley proposed a central executive process that coordinates the different kinds of working memory, to which he added an episodic buffer in 2000. He based the central executive on the idea of the Supervisory Attentional System (SAS) of Norman and Shallice (1980).

Interestingly, we appear to be unable to form new long-term memories during REM sleep, nor do our dreaming thoughts pursue prioritized objectives. However, if we are awakened or disturbed from REM sleep we can recover our long-term storage capacity quickly enough to commit some of our dreams to memory. This suggests some mechanisms of the SSSS are disabled during dreaming while others still operate3.

Having established the basic outer process of the conscious mind as an RRR loop within an SSSS process supported by algorithms and memory that largely operate subconsciously, the next question is how this framework is used to generate the content of the conscious mind, concepts and models.

Concepts and Models

[Brief summary of this post]

In The process of mind I discussed the reasoning process as the second phase of the RRR loop (recollection, reasoning, and reaction). That discussion addressed procedural elements of reasoning, while this discussion will address the nature of the informational content. Information undergoes a critical shift in order to be used by the reasoning process, a shift from an informal set of associations to explicit relationships in formal systems, in which thoughts are slotted into buckets that can be processed logically into outcomes that are certain rather than just likely. Certainty is dramatically more powerful than guesswork. The buckets are propositions about concepts and the formal systems are an aspect of mental models (which I will hereafter call models).

I have previously described this formal cataloging as a cartoon, which you can review here. So is that it then, consciousness is a cartoon and life is a joke? No, the logic of reasoning is a cartoon, but the concepts and models that populate it bridge the gap — they have an informal side that carries the real meaning and a formal side that is abstracted away from the underlying meaning. There is consequently a great schism in the mind between the formal or rational side and the informal or subrational side. Both participate in conscious awareness, but the reason for consciousness is to support the rational side. Reasoning requires that the world be broken down, as it were, into black and white choices, but to be relevant and helpful it needs to remain tightly integrated with both the external and internal worlds, so the connections between the cartoon world and the real world must be as strong as possible.

So let’s define some terms in a bit more detail and then work out the implications. I call anything that floats through our conscious mind a thought. That includes anything from a sensory perception to a memory to a feeling to a concept. A concept is a thought about something, i.e. an indirect reference to it, and this indirect reference is the formal aspect that supports reasoning, a thought process that uses concepts to form propositions to do logical analysis. (A concept may also be about nothing; see below.) What concepts refer to doesn’t actually matter to logical analysis; logic is indifferent to content. Of course, content ultimately matters to the value of an analysis, so reasoning goes beyond logic to incorporate meaning, context, and relevance. So I distinguish reasoning from rational thought in that it leverages both rational and subrational thinking. And concepts as well leverage both: though they may be developed or enhanced by rational thinking, they are first and foremost subrational. They are a way of grouping thoughts, e.g. sensory impressions or thoughts about other thoughts, into categories for easy reference.

We pragmatically subdivide our whole world into concepts. The divisions are arbitrary in the sense that the physical world has no such lines — it is just a collection of waves and/or particles in ten or so dimensions. But it is not arbitrary in the sense that patterns emerge that carry practical implications: wavelets clump into subatomic particles, which clump into atoms, which clump into molecules, which clump into earth, water, and air or self-organize into living things. These larger clumps behave as if causes produce effects at a given macro level, which can explain how lakes collect water or squirrels collect nuts. The power that organizes things into concepts is generalization, which starts from recognizing commonalities between two or more experiences. Fixed reactions to sensory information, e.g. to keep eating while hungry, are not a sufficiently nuanced response to ensure survival. No one reaction to any sight, sound or smell is helpful in all cases, and in any case, one never sees exactly the same thing twice. Generalization is the differentiator that provides the raw materials that go into creating concepts. Our visual system contains custom algorithms to differentiate objects, based on hardwired expectations about the kinds of object boundaries in our ancestral environment that we benefited most from being able to discriminate. Humans are adapted to specialize in binocular, high-resolution, 3-D color vision of slowly moving objects under good lighting, even to the point of being particularly good at recognizing specific threats, like snakes1. Most other animals do better than us with fast objects, poor lighting, and peripheral vision. My point here is just that there are many options for collecting visual information and for generalizing from it, and we are designed to do much of that automatically. But being able to recognize a range of objects doesn’t tell us how best to interact with them. Animals also need concepts about those objects that relate their value to make useful decisions.

Internally, a concept has two parts, its datum and a reference to the datum, which we can call a handle after the computer science term for an abstract, relocatable way of referring to a data item. A handle does two things for us. First, it says I am here, I am a concept, you can move me about as a unit. Second, it points to its datum, which is a single piece of information insofar as it has one handle, but which connects to much more information, the generalizations, which together comprise the meaning of the concept. A datum uniquely collects the meaning of a given concept at a given time in a given mind, but other thoughts or concepts may also use that connected information for other purposes. This highly generalized representation is very flexible because a concept can hold any idea — a sensation, a word, a sentence, a book, a library — without restricting alternative formulations of similar concepts. And a handle with no datum at all is still useful in a discussion about generic concepts, such as the unspecified concept in this clause, which doesn’t point to anything!
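The handle/datum split maps naturally onto a tiny data structure. This is only an analogy rendered in code; the class and field names are my own illustration, not established theory.

```python
# Sketch of a concept as handle plus datum: the handle is a stable,
# movable reference; the datum is shared, linkable content, or absent
# entirely for a "generic" concept. All names are illustrative.
class Concept:
    _next_id = 0

    def __init__(self, datum=None):
        self.handle = Concept._next_id  # unique, relocatable reference
        Concept._next_id += 1
        self.datum = datum              # shared pool of associations, or None

shared_experience = {"memories": ["red apple", "green apple"],
                     "links": ["FRUIT", "EDIBLE"]}

apple = Concept(shared_experience)
fruit_snack = Concept(shared_experience)   # different handle, shared datum
generic = Concept()                        # a handle with no datum at all

print(apple.handle != fruit_snack.handle)  # handles are unique
print(apple.datum is fruit_snack.datum)    # data are shared
print(generic.datum is None)
```

Two concepts can draw on the very same connected information while remaining distinct, movable units, which is the point of the handle.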

To decompose concepts we need to consider what form the datum takes. This is where things start to get interesting, and is also the point where conventional theories of concepts start to run off the rails. We have to remember that concepts are fundamentally subrational. This means that any attempt to decompose them into logical pieces will fail, or at best produce a rationalization2, which is an after-the-fact reverse-engineered explanation that may contain some elements of the truth but is likely to oversimplify something not easily reducible to logic. For a rational explanation of subrational processes, we should instead think about the value of information more abstractly, e.g. statistically. The datum for the concept APPLE (discussions of concepts typically capitalize examples) might reasonably include a detailed memory of every apple we have ever encountered or thought about. If we were to analyze all that data we might find that most of the apples were red, but some were yellow or green or a mixture. Many of our encounters will have been with products made from apples, so we have a catalog of flavors as well. We also have concepts for prototypical apples for different circumstances, and we are aware of prototypical apples used by the media, as well as many representations of apples or idiomatic usages. All of this information and more, ultimately linking through to everything we know, is embedded in our concept for APPLE. And, of course, everyone has their own distinct APPLE concept.

Given this very broad and even all-encompassing subrational structure for APPLE, it is not hard to see why theories of concepts that seek to provide a logical structure for concepts might go awry. The classical theory of concepts3, widely held until the 1970s, holds that necessary and sufficient conditions defining the concept exist. It further says that concepts are either primitive or complex. A primitive concept, like a sensation, cannot be decomposed into other concepts. A complex concept either contains (is superordinate to) constituent concepts or implies (is subordinate to) less specific concepts, as red implies color. But actually, concepts are not composed of other concepts at all. Their handles are all unique, but their data is all shared. Concepts are not primitive or complex; they are handles plus data. Concepts don’t have discrete definitions; their datum comprises a large amount of direct experience which then links ultimately to everything else we know. Rationalizations of this complex reality may have some illustrative value but won’t help explain concepts.

The early refinements to the classical theory, through about the year 2000, fell into two camps, revamp or rejection. Revamps included the prototype, neoclassical and theory-theory, and rejection included the atomistic theory. I’m not going to review these theories in detail here; I am just going to point out that their approach limited their potential. Attempts to revamp still held out hope that some form of definitive logical rules ultimately supported concepts, while atomism covered the alternative by declaring that all concepts are indivisible and consequently innate. But we don’t have to go down either of those routes; we just have to recognize that there are at least two great strategies for information management: mental association and logic. Rationality and reasoning depend on logic, but there are an unlimited number of potentially powerful algorithmic approaches for applying mental associations. For example, our minds subconsciously apply such algorithms for memory (storage, recall and recognition), sensory processing (especially visual processing in humans), language processing, and theory of mind (ToM, the ability to attribute mental states — beliefs, intents, desires, pretending, knowledge, etc. — to oneself and others). Logic itself critically depends on the power of association to create concepts and so is at least partially subordinate to it. So an explanation of reasoning doesn’t result in turtles (logic) all the way down. One comes first to logic, which can be completely abstracted from mental associations. One then gets to concepts, which may be formed purely by association but usually include parts (that are necessarily embedded in concepts) built using logic as well.
And finally one reaches associations, which are completely untouchable by direct logical analysis and can only be rationally explained indirectly via concepts, which in turn simplify and rationalize them, consequently limiting their explanatory scope to specific circumstances or contexts.

I have established that concepts leverage both informal information (via mental association) and formal information (via logic), but I have not said yet what it means to formalize information. To formalize means to dissociate form from function. Informal information is thoroughly linked or correlated to the physical world. While no knowledge can be literally “direct” since direct implies physical and knowledge is mental (i.e. relational, being about something else), our sensory perceptions are the most direct knowledge we have. And our informal ability to recognize objects, say an APPLE, is also superficially pretty direct — we have clear memories of apples. Formalization means to select properties from our experiences of APPLES that idealize in a simple and generalized way how they interact with other formalized concepts. On the one hand, this sounds like throwing the baby out with the bathwater, as it means ignoring the bulk of our apple-related experiences. But on the other hand, it represents a powerful way to learn from those experiences as it gives us a way to gather usable information about them into one place. I call that place a model; it goes beyond a single generalization to create a simplified or idealized world in our imagination that follows its own brand of logic. A model must be internally consistent but does not necessarily correspond to reality. It is, of course, usually our goal to align our models to reality, but we cognitively distinguish models from reality. We recognize, intuitively if not consciously, that we need to give our models some “breathing room” to follow the rules we set for them rather than any “actual” rules of nature because we don’t have access to the actual rules. We only have our models (including models we learn from others), along with our associative knowledge (because we don’t throw our associative knowledge out with the bathwater; it is the backbone beneath our models). 
Formally, models are called formal systems, or, in the context of human minds, mental models. Formal systems are dissociated from their content; they are just rules about symbols. But their fixed rules make them highly predictable, which can be very helpful if those predictions can be applied back to the real world. The good news is that many steps can be taken to ensure that they do correlate well with reality, converting their form back into function.

But why do we formalize knowledge into models? Might not the highly detailed, associative knowledge remembered from countless experiences be better? No, we instead simplify reality down to bare-bones cartoon descriptions in models to create useful information. The detailed view misses the forest for the trees. Generalization eliminates irrelevant detail to identify commonality. The mind isolates repetitive patterns over space and time, which inherently simplifies and streamlines. This initially creates a capacity for identification, but the real goal is a capacity for application. Not just news, but news you can use. So from patterns of behavior, the mind starts to generalize rules. It turns out that the laws of nature, whatever they may ultimately be, have enough regularity that patterns pop up everywhere. We start to find predictable consequences from actions at any scale. We call these cause and effect if the effect follows only if the cause precedes, presumably due to some underlying laws of nature. It doesn’t matter if the underlying laws of nature are ever fully understood, or even if they are known at all, which is good because we have no way of learning what the real laws of nature are. All that matters is the predictability of the outcome. And predictability does approach certainty for many things, which is when we nominate the hypothesized cause as a law. But we need to remember that what we are really doing is describing the rules of a model, and both the underlying concepts in the model and their rules can never perfectly correspond to the physical world, even though they appear to do so for all practical purposes. Where there is one model, there can always be another with slightly different rules and concepts that explains all the same phenomena. Both models are effectively correct until a test can be found to challenge them. This is how science vets hypotheses and the paradigms (larger scale models) that hold them.

Having established that we have models and why, we can move on to how. As I noted above, while logic can be abstracted from mental associations, it is not turtles (i.e. logical) all the way down. Models are a variety of concept, and concepts are mostly subrational, the informal products of association: we divine rules and concepts about the world using pattern recognition without formal reasoning. We can and often do greatly enrich models (and all concepts) via reasoning, which ultimately makes it difficult or even impossible to say where subrational leaves off and rational begins.4 As noted above, we can’t use reason to separate subrational from rational, because that is rationalizing, whose output is rational. Rational output has plenty of uses, but it can’t help but stomp on subrational distinctions. But although we can’t identify where the subrational parts of the model end and the rational parts begin, the transition does happen, which means we can talk about an informal model that consists of both subrational and rational parts, and a formal model consisting of only rational parts. When we reason, we are using only formal models, which implicitly derive their meaning from the informal models that contain them. This is a requirement of formal systems: the rules of logic operate on propositions, which are statements that affirm or predicate something, true or false, about a subject, and the subject itself must be a concept. So “apples are edible” and “I am hungry” are propositions about the concepts APPLE, EDIBLE, and HUNGRY (at least). Our informal model in this scenario consists of the data behind these concepts and all the related interactions we recall or have generalized about in the past. To create a formal model with which we can reason, we add propositions such as “hunger can be cured by eating” and “one must only eat edible items”. From here, logical consequences (entailments) follow.
With this model, I can conclude as a matter of logical necessity that eating an apple could cure my hunger. While our experience may remind us (by association) of many occasions on which apples cured hunger, reasoning provides a causal connection. Furthermore, anyone would reach that conclusion with that model, even though the data behind their concepts varies substantially. The conclusion holds even if we have never eaten an apple, and even if we don’t know what an apple is. Chains of reasoning can thus provide answers where we lack first-hand experience.
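To make the apple scenario concrete, here is a minimal sketch in Python of that formal model: propositions as plain strings, one rule, and a naive forward-chaining loop that computes entailments. The string encoding, the variable convention (X), and the `derive` helper are all inventions of this sketch, not a standard library or anything the text prescribes.

```python
# A toy formal model: propositions are uninterpreted strings, and a
# rule is a (premises, conclusion) pair. Meaning lives outside the
# system, in the informal model that contains it.

RULES = [
    # "hunger can be cured by eating something edible"
    ({"X are edible", "I am hungry"}, "eating X could cure my hunger"),
]

FACTS = {"apples are edible", "I am hungry"}

SUBJECTS = ["apples", "rocks"]  # candidate bindings for the variable X

def derive(facts, rules):
    """Forward-chain: apply rules until no new propositions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for subject in SUBJECTS:
                bound = {p.replace("X", subject) for p in premises}
                entailment = conclusion.replace("X", subject)
                if bound <= facts and entailment not in facts:
                    facts.add(entailment)
                    changed = True
    return facts

entailed = derive(FACTS, RULES)
print("eating apples could cure my hunger" in entailed)  # True
print("eating rocks could cure my hunger" in entailed)   # False
```

Note that nothing about rocks is entailed, because "rocks are edible" is not among the propositions: the conclusion follows only from what the formal model contains, and any reasoner with this model reaches it, whatever their first-hand experience of apples.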

So we form idealized worlds in our heads called models so we can reason and manage our actions better. But how much better, exactly, can we manage them than with mental association alone? At the core of formal systems lies logic, which is what makes everything that is true in the system necessarily true, and which in principle can confer the power of total certainty. Of course, reasoning is not completely certain, as it involves more than just logic. As Douglas Hofstadter put it, “Logic is done inside a system while reason is done outside the system by such methods as skipping steps, working backward, drawing diagrams, looking at examples, or seeing what happens if you change the rules of the system.”5 I would go a step beyond that. Hofstadter’s methods “outside the system” are themselves inside systems of rules of thumb or common sense we develop that are themselves highly rational. We might not have formally written down when it is a good idea to skip steps or draw diagrams, but we could, so these are still what I call formal models. But that still only scratches the surface of the domain of reason. Reasoning more significantly includes accessing conscious capacities for subrational thought across informal models, and so is a vastly larger playing field than rational thought within formal models. In fact it must be played in this larger arena because logic alone is an ivory tower: it must be contextualized and correlated to the physical world to be useful. Put simply, we constantly rebalance our formal models using data and skills (e.g. memory, senses, language, theory of mind (ToM), emotion) from informal models, which is where all the meaning behind the models lies. I do still maintain that consciousness overall exists as a consequence of the simplified, logical view of rationality, but our experience of it also includes many subjective (i.e. irrational) elements that, not incidentally, also provide us with the will to live and thrive.