Key insights of my theory

The key insights as I see them:

1. Descartes was right about dualism
2. We underappreciate the impact of evolution on the mind
3. We underappreciate the computational nature of the mind
4. Consciousness exists to facilitate reasoning
5. Consciousness is a simplified, “cartoon”-like version of reality with its own feel
6. Minds reason using models that represent possibilities
7. Reasoning is fundamentally an objective process that manages truth and knowledge
8. We really have free will

Insight 1. Descartes was right about dualism – mind and body are separate kinds of substances. He made the false assumption that the mind is a substance in the same sense that physical things are, but then, he had no scientific basis for distinguishing the mental from the physical. We do have a basis now, but no one, so far as I can tell, has pointed it out as such. I will do so now. Mind and body, or, as I will refer to them, the mental (or ideal) and the physical, are not separate in the sense of being different physical substances, but in the sense of being different, independent kinds of existence that don’t preclude each other but can affect each other. The brain and everything it does has a physical aspect, but some of the things it does, e.g. relationships and ideas, have an ideal, or mental, aspect as well. The mechanics of how a brain thinks are physical, but the ideas it thinks, viewed abstractly, are mental. You could say the idea that 1+1=2 exists regardless of whether any brain thinks about it. So our experience of mind is physical, but to the degree that our minds use relationships and ideas as part of that experience (via a physical representation), those relationships and ideas are also mental (in that they have an abstract meaning). The brain leverages mental relationships analogously to the way life leverages chemicals with different physical properties, except that mental relationships have no physical properties as chemicals do but instead impact the physical world through feedback and information processing as a series of physical events. As with chemicals, the net effect is that the complexity of the physical world increases.

Only abstract relationships count as mental, where “abstract” refers to the idea of indirect reference, which is a technique of using one thing to represent or refer to another. A physical system, like a brain or a computer, that implements such techniques has all sorts of physical limitations on the scope and power of those representations, but, like a Turing machine, any implementation capable of performing logical operations on arbitrary abstract relationships can in principle compute anything in the ideal world. In other words, there are no “mysterious” ideas beyond our comprehension, though some will exceed our practical capacity. The confusion between physical and mental that has dogged philosophy and science for centuries only continues because we have not been clearly differentiating the brain from what it does. The brain implements a biological computer physically, but what it does is represent relationships as ideas. Ideas are not dependent on their implementation, which is why an idea can be represented with words and shared by author and reader. The three forms – the author’s thought, the written words, and the reader’s thought – are very different, but we know that they share important aspects.

All abstract relationships exist (ideally) whether any brain (or computer) thinks about them or not. So the imaginary world is a much broader space than the physical, if you will, as it essentially parameterizes possibility – thoughts are not locked down in all their specifics but generalize to a range of possibilities. Consider a computer program, which is a simple system that manipulates abstract relations. A program executing on a computer will go through a very real set of instructions and process specific data from inputs to outputs. But a program’s capability can be nearly infinite if it is capable of handling many kinds of inputs across a whole range of situations. The program “itself” doesn’t know this, but the programmer does. Our minds work like the programmer; they manage an immense range of possibilities. We see these possibilities in general terms, then add specifics (provide inputs) to make them more concrete, and ultimately a few of them are realized in the real world (i.e. match up to things there). In a very real sense, we live our lives principally in this world of possibilities and only secondarily in the physical world. I’m not speaking fancifully about daydreaming about dragons and unicorns, though we can do that, but about whether the mail has come yet or rain is likely. Whatever actually happens to us immediately becomes the past and doesn’t matter anymore, except in regard to how it will help us in the future. Of course, knowing that we have gotten the mail or that it has rained matters a lot to how we plan for the future, so we have to track the past to manage the future. But nostalgia for its own sake matters little, and so it is no big surprise that our memory of past events dissipates rather quickly (on the theory that our memory has evolved to intentionally “forget” information that could do more to distract effective decision making than help it). My point, though, is that we continually imagine possibilities.
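
To make the programmer analogy concrete, here is a minimal sketch (my own illustration; the rule and names like will_it_rain are invented) of how one general rule parameterizes a whole space of possibilities, while any single execution realizes just one of them:

```python
# Minimal sketch: a general rule spans a whole space of possible
# situations; a concrete call pins down just one of them.

def will_it_rain(cloud_cover: float, humidity: float) -> bool:
    """A toy predictive rule covering every combination of inputs."""
    return cloud_cover > 0.7 and humidity > 0.8

# The function "contains" indefinitely many possible cases,
# but any one run realizes a single concrete possibility.
print(will_it_rain(cloud_cover=0.9, humidity=0.85))  # True: rain is likely
print(will_it_rain(cloud_cover=0.2, humidity=0.85))  # False
```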

Insight 2. We underappreciate the impact of evolution on the mind. Darwin certainly tried to address this. “How does consciousness commence?” he wondered. It was, and is, a hard question to answer because we still lack any objective means of studying what the mind does (as the mind is only visible from within, subjectively). Pavlov and Skinner proposed that behaviorism could explain the mind as nothing more than conditioning (classical and operant, respectively), which sounded good at first but didn’t explain all that minds do. Chomsky refuted it in a rebuttal to Skinner’s “Verbal Behavior” by explaining how language acquisition leverages innate linguistic talents. And Piaget extended the list of innate cognitive skills by developing his staged theory of intellectual development. We now know that thinking is much more than conditioned behavior; it employs reasoning and subconscious know-how. But evolution tells us more still. The mind is the control center of the body, so the direct feedback from evolution bears more on how the mind handles a situation than on the parts of the body it used to do it. The body is a multipurpose tool to help the mind satisfy its objectives. Mental evolution, therefore, leads somatic evolution. However, since we don’t understand the mechanics of the mind, we have done less to study the mind than the body, which is just more tractable. Although understanding the full mechanics of mind is still a long way off, evolutionary psychologists can explain cognitive skills by looking at what selection pressures created demand for them. Those explanations involve the evolution of both specialized and general-purpose software and hardware in the brain, with consciousness itself being the ultimate general-purpose coordinator of action.

Insight 3. We underappreciate the computational nature of the mind. As I noted in The Certainty Engine, what the mind does is computational, if computation is taken to be any information management process. But knowing that something is computed and knowing how it is done are very different things. We still have only vague ideas about the mechanisms, but we can deduce much about how the mind works just by knowing it is computational. We know the brain doesn’t use digital computing, but there are many approaches to information processing, and the brain leverages a number of them. Most of the deductions I will promote here center on the distinction between computations done consciously (and especially under conscious attention) and those done subconsciously. We know the brain performs much information processing of which we have no conscious awareness, including vision, associative memory lookup, language processing, and metabolic regulation, to name a few kinds. We know the subconscious uses massively parallel computing, as this is the only way such tasks could be completed quickly and broadly enough. Further, we know that the conscious mind largely feels like a single train of thought, though it can jump around a lot and can perceive many kinds of things at the same time without difficulty.
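
As a toy illustration of this division of labor (my sketch, not a claim about neural mechanisms), the parallel “subconscious” tasks below run concurrently, while the “conscious” loop consumes their results one at a time as a single train of thought:

```python
# Sketch: massively parallel "subconscious" workers run at once, while a
# serial "conscious" consumer integrates their results one at a time.
from concurrent.futures import ThreadPoolExecutor

def vision(scene):      return f"recognized {scene}"
def memory_lookup(cue): return f"recalled facts about {cue}"
def language(phrase):   return f"parsed '{phrase}'"

with ThreadPoolExecutor() as pool:  # parallel, pre-conscious processing
    results = [
        pool.submit(vision, "a dog"),
        pool.submit(memory_lookup, "dog"),
        pool.submit(language, "nice dog"),
    ]

# serial, conscious integration: a single train of thought
for r in results:
    print(r.result())
```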

Insight 4. Consciousness exists to facilitate reasoning. Consciousness exists because we continually encounter situations beyond the range of our learned responses, and being able to reason out effective strategies works much better than not being able to. We can do a lot on “autopilot” through habit and learned behavior, but it is too limited to get us through the day. Most significantly, our overall top-level plan has to involve prioritizing many activities over short and long time frames, which learned behavior alone can’t do. Logic, inductive or deductive, can do it, but only if we come up with a way to interpret the world in terms of propositions composed of symbols. This is where a simplified, cartoon-like version of reality comes into play. To reason, we must separate relevant from irrelevant information, and then focus on the relevant to draw logical conclusions. So we reduce the flood of sensory inputs continually entering our brains into a set of discrete objects that we can represent as symbols and use in logical formulas (here I don’t mean shaped symbols but referential concepts we can keep in mind). The idea that hypothetical internal cognitive symbols represent external reality is called the Representational Theory of Mind (RTM), and in my view it is the critical simplification employed by reasoning, but it is not critical to much of subconscious processing, which does not have this need to simplify. Although we can generalize to kinds using logical buckets like bird or robin, we can also track all experience and draw statistical inferences without any attempt at representation at all, yielding bird-like or robin-like without any actual categories.
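
The contrast between those two modes can be sketched in a few lines of code (the names and numbers are mine, purely illustrative): a crisp symbolic category that logic can use, versus a graded, category-free similarity score that yields bird-like rather than bird:

```python
# Sketch of the two modes contrasted above (all names are invented).

# Symbolic/representational: crisp membership a logical formula can use.
BIRDS = {"robin", "sparrow", "penguin"}
def is_bird(x: str) -> bool:
    return x in BIRDS

# Statistical/nonrepresentational: a graded score over observed features,
# with no category anywhere.
PROTOTYPE = {"feathers": 1.0, "flies": 0.9, "sings": 0.6}
def bird_likeness(features: dict) -> float:
    return sum(PROTOTYPE.get(f, 0.0) * v for f, v in features.items()) / len(PROTOTYPE)

print(is_bird("robin"))                                # True: a logical bucket
print(bird_likeness({"feathers": 1.0, "flies": 0.2}))  # ~0.39: merely bird-like
```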

Do we reason using language? Are these symbols words and the formulas sentences? There is a debate about whether the mind reasons directly in our natural language (e.g. English) or in an internal language, sometimes called “mentalese”. Both views are partially right but mostly wrong; the confusion comes from failing to appreciate the difference between the conscious and subconscious minds. Language is part of the simplified world of consciousness that tries to turn a gray world into something more black and white that reason can attack (while, not incidentally, aiding communication). From the conscious side, language-assisted reasoning is done entirely in natural language. We are also capable of reasoning without language, and much of the time we do, but language is not just an add-on capability; it is what pushes human reasoning power into high gear. Animals have had solid reasoning skills (and consequently consciousness) for hundreds of millions of years, letting them apply cause and effect in ways that matter to them, creating a subjective version of the laws of nature and the jungle. But without language, animals can only follow simple chains of reasoning. Language, which evolved in just a few million years, lengthens those chains and adds nesting of concepts. It gives us the ability to reason in a directed way over an arbitrarily abstract terrain. Without inner speech, the familiar internal monologue in our native tongue, we can’t ponder or scheme; we can only manage simple tasks. Sure, we can keep schemes in our heads without further use of language, but language so greatly facilitates rearranging ideas that we can’t develop abstract ideas very far without it. Helen Keller could remember her languageless existence and claimed to have been a non-thinking entity during that time. By “thinking” I believe she meant only directed abstract reasoning and all the higher categories of thought that brings to mind.

I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect.1

She admits to consciousness, which I argue exists because animals need to reason, yet she also felt unconscious, in that large parts of her mind were absent, notably intellect (directed abstract reasoning) and will (abstract awareness of desire). So our ability to string ideas together with language vastly extends and generalizes our cognitive reach, accounting for what we think of as human intelligence.

While we could call the part of the subconscious that supports language mentalese, I wouldn’t recommend it, because this support system is not language itself. This massively parallel process can’t be thought of as just a string of symbols; each word and phrase branches out into a massive web of interconnections into a deeply multidimensional space that joins back to the rest of the mind. It follows Universal Grammar (UG), which, as laid out by Noam Chomsky, is a top-down set of language rules that the subconscious language module supports – not because it is an internal language, but because it has to simplify its output into a form consciousness can use. So natural language is the outer layer of the onion, but it is the only layer we can consciously access, so it is fair to say we consciously reason with the help of natural language, even though we can also do simple reasoning without language. While the part of language-assisted reasoning of which we are consciously aware is entirely conducted in natural language, it is only partially right to say we think in our native tongue because most of the actual work behind that reasoning happens subconsciously. And the subconscious part doing most of the work is not using an internal language at all, though it does use innate mechanisms that support features common to all languages.

So what about linguistic determinism, aka the Sapir-Whorf hypothesis, which states that the structure of a natural language determines or greatly influences the modes of thought and behavior characteristic of the culture in which it is spoken? As with all nature/nurture debates, it is some of each, but with the lion’s share being nature. Natural language is just a veneer on our deep subconscious language processing capacity, but both develop through practice and what we are practicing is the native language our society developed over time. The important point, though, is that thinking is only superficially conscious and consequently only superficially linguistic, and hence only marginally linguistically determined. Words do matter, as do the kinds of verbal constructions we use, so to the extent we guide our thinking process linguistically with inner speech they have an influence. But language is only a high-level organizer of ideas, not the source of meaning, so it does not ultimately constrain us, even though it can sometimes steer us. We can coin new words and idioms, and phase out those that no longer serve as well. So again, just to clarify: while a digital computer can parse language and store words and phrases, this doesn’t even scratch the surface of the deep language processing our subconscious does for us. It is only a false impression of consciousness that the flow of words through our minds reveals anything about the information processing steps that we perform when we understand things or reason with language.

Insight 5. Consciousness is a simplified, “cartoon”-like version of reality with its own feel. We are not zombies or robots, pursuing our tasks with no inner life. Consciousness feels the way it does because the overall mind, which also includes considerable subconscious processing of which we are not consciously aware, cordons off conscious access to subconscious processing not deemed relevant to the role of consciousness. The fact that logic only works in a serial way, with propositions implying conclusions, and the fact that bodies can only do one thing at a time, put information management constraints on the mind that consciousness solves. To develop propositions to which one can usefully apply logic in making a decision, one has to generalize commonly encountered phenomena into categories about which one can track logical implications. So we simplify the flood of sensory data into a handful of concrete objects about which we reason. These items of reason, generically called concepts, are internal representations of the mind that can be thought of as pointers to the information comprising them. Our concept of a rock bestows a fixed physical shape, while a quantity of water has a fluid shape. The concept sharp refers to a capacity to cut, which is associated with certain physical traits. Freedom and French are abstract concepts only indirectly connected to the physical world, about which we each acquire very detailed, personal internal representations. Consciousness is a special-purpose “application” (or subroutine) within the mind that focuses on the concepts most relevant to current circumstances and applies reason, along with habit, learning and intuition, to direct the body to take actions one at a time. The only real role of consciousness is to manage this top-level single-stream logic processing, so it doesn’t need to be aware of, and would only be distracted by, the details that the subconscious takes care of, including sensory processing, memory lookup/recognition, language processing and more. Consciousness needs access to all incoming information to which reason can usefully be applied. To do this in real time, the mind preprocesses concepts subconsciously where possible, which is often little more than a memory lookup service, but also includes converting 2-D images into known 3-D objects or converting concepts into linguistic form. We bypass concepts and reason entirely whenever habit, experience and intuition can manage alone, but we do so with conscious oversight. Consciousness needs to act continuously and “enthusiastically”, so it is pre-configured to pursue innate desires and can develop custom desires as well.

I call the consciousness subroutine the SSSS, for single-stream step selection, because objectively that is what it is for: selecting one coordinated action at a time for the body to perform next. Our whole subjective world of experience is just the way the SSSS works, and its first-person aspect is just a consequence of the simplification of the world necessary to support reason, combined with all the data sources (senses, memory, emotion) that can help in making decisions. Our subjective perspective is only figuratively a projection or a cartoon; it is actually composed of a combination of nonrepresentational data that statistically correlates information and representational data that represents both the real and the imagined symbolically through concepts. This perspective evolved over millions of years, since the dawn of animal minds. Though reasoning ultimately leads to a single stream of digital decisions (ones that go one way or another), nothing constrains it from using analog or digital inputs or parallel processing along the way, and it does all these things and more to optimize performance. Conscious experience is consequently a combination of many things happening at once, which only feel like a seamless integrated experience because it would be very nonadaptive if they didn’t. For instance, we perceive a steady, complete field of vision as if it were a photograph because it would be distracting if we didn’t, but actually our eyes focus only on a narrow circle of central vision; the periphery is a blur, and our eyes dart around a lot, filling in holes and double checking. The blind spots in our peripheral vision (which form where the optic nerve passes through the retina) appear to have the same color and even pattern as the area around them because it would be distracting if they disturbed the approximation to a photograph. So the software of consciousness tries very hard to create a smooth and seamless experience out of something much more chaotic. It is an intentional illusion. It seems like we see a photo, but as we recognize objects we note the fact and start tracking them separately from the background. We can automatically track their shading and lighting from different perspectives without even being aware we are doing it. Colors have distinct appearances both to provide more information we can use and to alert us to associations we have for each color.
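
Objectively, the SSSS can be caricatured as a selection loop (a hypothetical sketch; the weights, percepts, and action names are all invented): many inputs are scored together, but exactly one action is committed per step, because there is only one body:

```python
# Toy rendering of the SSSS idea: combine many parallel inputs, but
# commit to a single coordinated action per step.

def ssss_step(percepts: dict, desires: dict) -> str:
    """Score candidate actions against current percepts and desires,
    then commit to the single highest-scoring action."""
    candidates = {
        "eat":     desires.get("hunger", 0)    * percepts.get("food_visible", 0),
        "flee":    desires.get("fear", 0)      * percepts.get("threat", 0),
        "explore": desires.get("curiosity", 0),
    }
    # one coordinated action at a time: the single-stream step
    return max(candidates, key=candidates.get)

print(ssss_step({"food_visible": 1.0, "threat": 0.1},
                {"hunger": 0.8, "fear": 0.3, "curiosity": 0.2}))  # -> eat
```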

While the only purpose of consciousness is to support reasoning, it carries this very rich subjective feel with it because that helps us make the best decisions very quickly. That it seems pleasurable or painful to us is in a way just a side effect of our internal controls that lead us to seek pleasure and avoid pain. This is because consciousness simplifies decision making by reducing complex situations into an emotional response or a preference. Such responses have no apparent rational basis, but presumably serve an adaptive purpose since we have them, and evolved traits were adaptive when they arose, at least in the ancestral environment. We just respond emotionally or prefer things a certain way and then can reason starting with those feelings as propositions. Objectively, we can figure out why such responses could be adaptive. For example, hunger makes us want to eat, and, not coincidentally, eating fends off starvation. Libido makes us want sex, and reproduction perpetuates the species. Providing hunger and libido as axiomatic desires to the reasoning process eliminates the need to justify them on rational grounds. Is there a good reason why we should survive or produce offspring? Not really, but if we just happen to want to do things that have that outcome, the mandate of evolution is satisfied. Basically, if we don’t do it someone else will, and more than that, if we don’t do it better they will, in the long run, squeeze us out, so we had better want it pretty badly. So feelings and desires are critical to support reasoning, even though these premises are not based on reason themselves.

This perhaps explains why we feel emotions and desires, but it doesn’t explain why they feel just the way they do to us. This is both a matter of efficiency and logical necessity. From an efficiency standpoint, for an emotion or innate desire to serve its purpose we need to be able to process it immediately and appropriately, and simultaneously with all the other emotions, desires, sensory inputs, memories, and intuitions that apply in each moment. To accomplish this, all of these inputs have independent input channels into the conscious mind, and to help us tell them all apart, each has a distinct quale (kwol-ee: the way it feels; the plural is qualia). From a logical-necessity standpoint, for reasoning to work appropriately the quale should influence us directly and in proportion to the goal, independent of any internally processed factors. Our bodies require foods with appropriate levels of fat, carbohydrates, and protein, but subjectively we only know smell, taste and hunger (food labels notwithstanding). These senses and our innate preferences directly support reasoning where a detailed analysis (e.g. a list of calories, fat and protein) would not. Successful reproduction requires choosing a fit mate, ensuring that mates will stay together for a long time, and procreating. This gets simplified down to feelings of sex appeal, love, and libido. Based on subsidiary reasoning alone, couples would never stay together; they need an irrational subconscious mandate, i.e. love.

Nihilists reject or disregard innate feelings and preferences, presumably on philosophical or rational grounds. While this is undoubtedly reasonable and consequently philosophical, we can’t change the way we are wired just by willing it so. We will want to heed our desires, i.e. to pursue happiness, although unlike other animals our facility with directed abstract thought gives us the freedom to reason our way out of it or, potentially, to reach any conclusion we can imagine. Evolution has done its best to keep our reasoning facility in thrall to our desires so that we focus more on surviving and procreating and less on contemplating our navels, but humans have developed a host of vices which can lead us astray, with technology creating new ones all the time. If vices represent a failure of our desires to keep us focused on activities beneficial to our survival, virtues oppose them by emphasizing desires or values that benefit not just ourselves but our communities. We can consequently conclude that the meaning of life is to reject nihilism, a pointless and vain attempt to supersede our programming, and to embrace its opposite, virtuous hedonism, exemplifying what reason and intelligence can add to life on earth.

Michael Graziano explains well how attention works within consciousness, but he says the motivation to simplify the world down to a single stream is that: “Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others.”2 His sentiment is right, his reason is wrong; the brain has more than enough computational capacity to fully process all the parallel streams hitting the senses and all the streams of memory generated in response, but it does all this subconsciously. Massive parallel processing is the strength of the subconscious. The conscious subroutine is intentionally designed to produce a single stream of actions since there is only one body, and so this is the motivation to simplify and focus attention and create a conscious “theater” of experience. A corollary key insight to my points on consciousness is that most of our minds are subconscious and contain specialized and generalized functions that do all the computationally-intensive stuff.

Insight 6. Minds reason using models that represent possibilities. The mind’s job is to control the body by analyzing current circumstances to compute an optimal response. No ordinary machine can do this; it requires a system that can collect information about the environment, compare it to stored information, and, based on matches, statistics, and rules of logical entailment, select an appropriate reaction. Matches and statistics are the primary drivers of associative memory, which not only helps us recognize objects on sight and remember information learned in the past given a few reminders, but also supports more general intuition about what is important or relevant to the present situation. While this information is useful, it is not predictive. Since the physical world follows pretty rigid laws of nature, it is possible to predict with near certainty what will happen under controlled circumstances, so an animal that had a way to do this would have a huge advantage over one that could not. Beyond that, once animals developed predictive powers, a mental arms race ensued to do it better. Nearly all animals can do it, and we do it best, but how?

The answer lies entirely in the words “controlled circumstances”. We create a mental model, which is an imaginary set of premises and rules. A physical model, in particular, contains physical objects and rules of cause and effect. Cause and effect is a way of describing laws of nature as they apply at the object level within a physical model. So gravity pulls objects down, object integrity holds objects together differently for each substance, and simple machines like ramps, levers and wheels can impact the effort required. And we recognize other animals as agents employing their own predictive strategies. Within a model, we can achieve certainty: rules that always apply and causes that always produce expected effects. The rules don’t have to be completely certain (deductive); they can be highly likely (inductive). But either way, they work, so once we decide to use a given model in a real-world situation, we can act quickly and effectively. And causal reasoning can be chained to solve complex puzzles. While we can control circumstances with models, the real world will never align precisely with an idealized model, so how we choose the models is as important as how we reason with them. Doubt will creep in if our results fall short of expectations, which can happen if we choose an inappropriate model, or if a model is appropriate but inadequately developed. For every situation, we select one or more models from a constellation of models, and we apply the rules and act with an appropriate degree of certainty based on our confidence in picking the model, its accuracy, and our ability to keep it aligned with reality.
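
A mental model of this kind can be sketched as premises plus cause-and-effect rules that hold with certainty within the model, with causal reasoning chained forward (all names here are my invention, for illustration only):

```python
# Sketch of a "mental model": premises plus cause-and-effect rules that
# are certain inside the model, chained forward to a prediction.

premises = {"lever_pressed"}
rules = [  # (cause, effect) pairs that always hold *within* the model
    ("lever_pressed", "gate_opens"),
    ("gate_opens", "ramp_clear"),
    ("ramp_clear", "ball_rolls_down"),  # gravity pulls objects down
]

def forward_chain(facts: set, rules: list) -> set:
    """Apply every rule whose cause is known until nothing new follows."""
    changed = True
    while changed:
        changed = False
        for cause, effect in rules:
            if cause in facts and effect not in facts:
                facts.add(effect)
                changed = True
    return facts

print(forward_chain(set(premises), rules))
# {'lever_pressed', 'gate_opens', 'ramp_clear', 'ball_rolls_down'}
```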

Mental models are mostly subrational. A full definition of subrational will be presented later in Concepts and Models, but for now think of it as a superset of everything subconscious plus everything in our conscious awareness that is not a direct object of reasoning. Models themselves need not be based in reason and need not enter into the focus of conscious reasoning. We can reason with models supported entirely by hunches, but we can also if desired take a step back mentally and use reason to list the premises, rules, and scope of a model we have been using implicitly up to that point. However, as we will see in the next insight, doing this only rationalizes the subrational, which is to say it provides another way of looking at them that is not necessarily better or even right (to the extent an interpretation of something can be said to be right or wrong).

Insight 7. Reasoning is fundamentally an objective process that manages truth and knowledge. Objective principally means without bias and agreeable to anyone based on a preponderance of the evidence, and subjective is everything else, namely that which is biased or not agreeable to everyone. Science tries to achieve objectivity by using instruments for measurements, checking results independently, and using peer review to establish a level of agreement. While these are great ways to eliminate bias and foster agreement, we have no instruments for seeing thoughts or checking them: all our thoughts are inherently subjective. This is an obstacle to an objective understanding of the mind. Conventionally, science deals with this by giving up: all evidence of our own thought processes is considered inadmissible, and science consequently has nothing to say. Consider this standard view of introspection:

“Cognitive neuroscientists generally believe that objective data is the only reliable kind of evidence, and they will tend to consider subjective reports as secondary or to disregard them completely. For conscious mental events, however, this approach seems futile: Subjective consciousness cannot be observed ‘from the outside’ with traditional objective means. Accordingly, some will argue, we are left with the challenge to make use of subjective reports within the framework of experimental psychology.” 3

This wasn’t always so. The father of modern psychology, Wilhelm Wundt, was a staunch supporter of introspection, which is the subjective observation of one’s own experience. But its dubious objectivity caught up with it, and in 1912 Knight Dunlap published an article called “The Case Against Introspection” that pointed out that no evidence supports the idea that we can observe the mechanisms of the mind with the mind. I agree, we can’t. In fact, I propose the SSSS process supporting consciousness filters our awareness to include only the elements useful to us in making decisions and consequently blocks our conscious access to the underlying mechanisms. So we don’t realize from introspection that we are machines, albeit fancy ones. But we have figured it out scientifically, using evolutionary psychology, the computational theory of mind, and other approaches within cognitive science.

The limitations of introspection don’t make it useless from an objective standpoint; they only mean we need to interpret it in light of objective knowledge. So we can, for example, postulate an objective basis for desires and then test them for consistency using introspection. We should eventually be able to eliminate introspection from the picture, but most of our understanding of consciousness and the SSSS at this point comes from our use of it, so we can’t ignore what we can glean from that.

While our whole experience is subjective, because we are the subject, that doesn’t mean a subset of what we know isn’t objective. We do know some things objectively, and we know we know them because we are using proven models. And we know the degree of doubt we should have in correlating these models to reality because we have used them many times and seen the results. It is usually more important for consciousness to commit to actions without doubt than to suffer from analysis paralysis, though of course for complex decisions we apply more conscious reasoning as appropriate.

We have many general-purpose models we trust, and we generally know how our models match up with those other people use and how much other people trust them. Since objectivity is a property of what other people think, i.e. agreeable to all and not subjective, we need to have a good idea of what models we are using and the degree to which other people use the same models (i.e. similar models; we each instantiate models differently). If our models are subrational, how can we ever achieve this? For the most part, it is done through an innate talent called theory of mind:

Theory of mind (often abbreviated ToM) is the ability to attribute mental states — beliefs, intents, desires, pretending, knowledge, etc. — to oneself and others and to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own.

So we subrationally form models and can intuit much about the models of others using this subrational and largely subconscious skill. Our subconscious mind automatically does things for us that would be too laborious to work out consciously. Consciousness evolved more to support quick reactions than to think things through, though humans have developed quite a knack for that. But whether these skills are subconscious, subrational, or rational, the rational conscious mind directs the subrational and subconscious and thus takes credit for the capacities of the whole mind. This gives us the ability to distinguish objective knowledge from subjective opinion. Consider this: when a witness is asked to recount events under oath, the jury is expecting to hear objective truth. They will assess the evidence they hear using theory of mind on the witness to understand his models and to rule out compromising factors such as mental capacity, partiality, memory, or motives. People read each other very well, and it is hard to lie convincingly because we have subconscious tells that others can read, a mechanism that evolved to allow trust to develop despite our ability to lie. The jury expects the witness can record objective knowledge similarly to a video camera and that they can acquire this knowledge from him. Beyond juries, we know our capacities, partiality, memory, and motives well enough to know whether our own knowledge is objective (that is, agreeable to anyone based on a preponderance of the evidence). So we’ve been objective since long before scientific instruments, independent confirmation or peer review came along. Of course, we know that science can provide more precision and certainty using its methods than we can with ours, but science is more limited in the phenomena to which it applies. People have always understood the world around them to a high degree of objectivity that would stand up to considerable scrutiny, despite also having many personal opinions that would not. We tend not to give ourselves much credit for that because we are not often asked to separate the objective and subjective parts.

Objectivity does not mean certainty. Certainty only applies to tautologies, which are concepts of the ideal world, not the physical world. A tautology is a proposition that is necessarily true, or, put another way, is true by definition. If one sets up a model, sometimes called a possible world, in which one defines what is true, then within that possible world, everything that is true is necessarily true, by definition. So “a tiger is a cat” is certain if our model defines tiger as a kind of cat. Often to verify whether or not one has a tautology one has to clarify definitions, i.e. clarify the model. This process will identify rhetorical tautologies. If we say that the rules of logic apply in our models, and we generally will, then anything logically implied is also necessarily true and a logical tautology. The law of the excluded middle, for example, demonstrates a logical necessity by saying that “A or not A” is necessarily true. Or, more famously, a syllogism says that “if A implies B and B implies C, then A implies C”. While certainty is therefore ideally possible, we can’t see into the future in the physical world, so physical certainty is impossible. We also can never be completely certain about the present or past because we depend on observation, which is both indirect and unprovable. So physical objectivity is not about true (i.e. ideal) certainty, but it does relate to it. By reasoning out the connection, we can learn the definition and value of physical objectivity.
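
Written out formally, the two tautologies just mentioned look like this (standard logical notation, not specific to my theory):

```latex
% Law of the excluded middle: a logical tautology, true under every
% assignment of truth values to A.
A \lor \lnot A

% Hypothetical syllogism: chained implication, also necessarily true.
\bigl( (A \Rightarrow B) \land (B \Rightarrow C) \bigr) \Rightarrow (A \Rightarrow C)
```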

To be physically objective means to establish one or more ideally objective models and to correlate physical circumstance to these models. Physical objectivity is limited by the quality of that correlation, both in how well the inputs line up and how closely the output behavior of the ideal model(s) ends up matching physical circumstances. The innate strategy of minds is to presume perfect correlation as a basis for action and to mop up mistakes afterward. Also, importantly, minds will try to maximally leverage learned behavior first and improvised behavior second. That is, intuitive/automatic first and reasoned/manual second. So, for example, I know how to open the window next to me as I have done it before. I feel certain I can apply that knowledge now to open the window. But my certainty is not really about whether the window will open; it is about the model in my mind that shows it opening when I flip the latch and apply pressure to raise it. In that model, it is a certainty because these actions necessarily cause the window to open. If the window is stuck or even permanently epoxied, it doesn’t invalidate the model, it just means I used the wrong model. So if the window is stuck how do I mop up from this mistake? The models we apply sit within a constellation of models, some of which we hold in reserve from past experience, some of which we have consciously available from premeditation, and some of which we won’t consciously work out until the need arises. For every model we have a confidence level, the degree to which we think it correlates to the situation at hand. This confidence level is mostly a function of associative memory, as we subrationally evaluate how well the model premises line up with the physical circumstances. In the case of habitual or learned behavior, we do this automatically. So if this window doesn’t open as I thought it would, from learned behavior I will push harder and use quick thrusts if needed to try to unjam it. Whether it works or not, I will update my internal model of the behavior of stuck windows accordingly, but in this case I didn’t directly employ reasoning; I just leveraged learned skills. But the mind will rather seamlessly maintain both learned and reasoned approaches for handling situations. This means it will maintain models about everything because it will frequently encounter situations where learned behavior is inadequate but reasoning with models that include causation works.
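
The window example can be rendered as a toy “constellation of models” (the structure and confidence numbers are mine, purely illustrative): act on the model with the highest confidence, and if the action fails, lower that confidence and fall back to the next model rather than treating the model itself as invalidated:

```python
# Sketch of model selection from a "constellation of models" with
# confidence levels and a mop-up step (numbers and names are invented).

models = [
    # (name, confidence that this model matches the current situation)
    ("window_opens_when_unlatched_and_pushed", 0.95),
    ("window_is_stuck_use_quick_thrusts",      0.30),
    ("window_is_painted_or_epoxied_shut",      0.05),
]

def act_with_best_model(models):
    name, conf = max(models, key=lambda m: m[1])
    print(f"acting on '{name}' (confidence {conf:.2f})")
    return name

chosen = act_with_best_model(models)

# Mop-up: a failed action doesn't invalidate the model; it means this was
# the wrong model. Lower its confidence for this situation and pick again.
models = [(n, 0.10 if n == chosen else c) for n, c in models]
act_with_best_model(models)  # falls back to the "stuck window" model
```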

Just as we don’t generally need to separate the objective and subjective parts of our knowledge, we don’t generally need to separate learned behavior from reasoned behavior. It is important to the unity of our subjective experience that we perceive these very different things as fitting seamlessly together. But this introduces another factor that makes it hard to identify our internal models: learned behavior doesn’t need models or a representational approach, at least not of the simplified form used for logical analysis. We can, potentially, remember every detail of every experience and pull up relevant information exactly when needed in an appropriate way without any recourse to logic or reason at all. So what kind of existence do our ideal models have? While I think we do persist many aspects of many models in our memories as premises and rules, we tend not to be too hard and fast about them, and they blend into each other and our overall memory in ways that let us leverage their strengths without dwelling on their weaknesses. Even as we start to reason with them, we only require a general sense that their premises and rules are sound and well defined, and if pressed we may learn they are not, at which point we will fill them out until they are as certain as we like. We can, therefore, conclude that while we use objectivity all the time, it is usually in close conjunction with subjectivity and inseparable from it. To be convincing, we need to develop ways to isolate objective and subjective components.


Insight 8. We really have free will. We already know (intuitively) that we have free will, so I shouldn’t take any credit for this one. But I will because a preponderance of the experts believe we don’t, which is a consequence of their physicalist perspective. Yes, the universe is deterministic and everything happens according to fixed laws of nature, so we are not somehow changing those laws in our heads to make nature unfold differently. What happens in our heads is in fact part of its orderly operation; we are machines that have been tuned to change the world in the ways that we do. So far, that suggests we don’t have free will but are just robots following genetic programming. But several things happen to create genuine freedom. Freedom can’t mean altering the future from a preordained course to a new one because the universe is deterministic and each moment follows the preceding according to fixed laws of nature. But since the universe has always been this way and we nevertheless feel like we have free will, freedom must mean something else.

Freedom really has to do with our conception of possible futures. We imagine the different outcomes of different possible courses of action. These are just imaginary constructions (models with projected consequences) with no physical correlate, other than the fact that they exist in our brains. But we think of them as possible futures even though there is really only one future for us. So our sense of free will is rooted in the idea that what we do changes the universe from its predetermined course. We don’t, but two factors explain why our perspective is indistinguishable from a universe in which we could change the future: unpredictability and optimized selection. Regarding unpredictability, neither we nor anyone else could ever know for sure what we are going to do; only an approximate prediction is possible. Although thinking follows the laws of nature, the process is both complex and chaotic, meaning that any factor, even the smallest, could change the outcome. So no decision, even the simplest, could ever be predicted with certainty. The second factor is optimized selection, which is a mental or computational process that uses an optimization algorithm to choose a strategy that has the best chance of producing a physical effect. First, the algorithm collects information, which is data that has more value for some purpose than white noise has. For example, sensory information is very valuable for assessing the current environment. And our innate preferences, experience, state of mind, and whim (which is a possibly unexplainable preference) are fed to the algorithm as well. This mishmash of inputs is weighed, and an optimal outcome results. If the optimal action seems insufficiently justified, we will pause or reconsider for as long as it takes until the moment of sufficient justification arrives, and then we invariably perform that action. At that moment the time for free will has passed; the universe is just proceeding deterministically. We exercised our free will just before that moment, but before I explain why, I have a few more comments on unpredictability and optimization algorithms.
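
Here is one possible caricature of optimized selection as just described (everything in it, from the option names to the weights, is an invented assumption): inputs are weighed into scores, a tiny perturbation stands in for chaotic sensitivity to the smallest factors, and the algorithm commits only once some option is sufficiently justified:

```python
# Sketch of "optimized selection": weigh a mishmash of inputs, and commit
# only at the moment of sufficient justification. The small random term
# stands in for chaotic sensitivity, which makes the outcome unpredictable.
import random

def decide(options: dict, inputs: dict, threshold: float = 1.0) -> str:
    while True:
        scores = {
            name: sum(w * inputs.get(factor, 0.0) for factor, w in weights.items())
                  + random.gauss(0, 0.01)  # the smallest factor can tip the outcome
            for name, weights in options.items()
        }
        best = max(scores, key=scores.get)
        if scores[best] >= threshold:  # the moment of sufficient justification
            return best                # from here on, things unfold deterministically
        # otherwise pause and reconsider: further deliberation adds weight
        inputs["deliberation"] = inputs.get("deliberation", 0.0) + 0.1

options = {
    "carry_umbrella": {"rain_forecast": 1.5, "deliberation": 0.5},
    "leave_it":       {"sunny_sky": 1.2, "deliberation": 0.5},
}
print(decide(options, {"rain_forecast": 0.7, "sunny_sky": 0.4}))
```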

The weather is unpredictable but lacks optimized selection because it is undirected. A robot trained to use learned behavior alone to choose strategies that produce desired effects has an optimized selection algorithm, but might be entirely predictable. If its behavior dynamically evolves based on live input data, then it may become unpredictable. Viewed externally, the robot might appear to have “free will” in that its behavior would be unpredictable and goal-oriented like that of a human. However, internally the human is thinking in terms of selecting from possible futures, while the robot is just looking up learned behaviors. People don’t depend solely on learned behavior; we also use reason to contemplate general implications of object interactions. To do this, we set up mental models and project how different object interactions might play out if different events transpired.

The real and deep upshot of this is that our concept of reality is not the same thing as physical reality. It is a much vaster realm that includes all the possible futures and all the models we have used in our lives. Our concept of reality is really the ideal world, in which the physical world is just a special case. The exercise of free will, being the decisions we take in the physical world, does represent a free selection from myriad possibilities. Because our models correlate so well to the real world, we come to think of them as being the same, but they aren’t. Free will exists in our minds, but not in our hypothetical robot minds, because our minds project possible futures. A robot programmed to do this would then have all the elements of free will we know, and further would be capable of intelligent reasoning and not just learned behavior. It could pass a Turing test where the questioner moved outside the range of the learned behavior of the robot. Once we build such a robot, we will need to start thinking about robot rights. Could an equally intelligent optimization algorithm be designed that did not use models (and consequently had no consciousness or free will)? Perhaps, but I can’t think of a way to do it.

So our brain’s algorithm made an unpredictable decision and acted on it. The real question is this: Why do “we” take credit for the free will of our optimization algorithms? Aren’t we just passively observing the algorithms execute? This is simply a matter of perspective. We “are” our modeling algorithms. Ultimately, we have to mean something when we talk about the real “us”. Broadly, we mean our bodies, inclusive of our minds, but more narrowly, when we are referring just to the part of us that makes the decisions, we mean those modeling algorithms. In the final analysis, we are just some nice algorithms. But that’s ok. Those algorithms harbor all the richness and complexity that we, as humans, can really handle anyway. They are enough for us, and we are very much evolved to feel and believe that they are enough for us. Objectively, they are a patchwork of different strategies held together with scotch tape and baling wire, but we don’t see them that way subjectively. Subjectively the world is a clean, clear picture where everything has its place and makes sense in one organic whole that seems fashioned by a divine creator in a state of sheer perfection. But subjectively we’re wearing rose-colored glasses, and darkly-tinted ones at that, because objectively things are very far from clean or perfect.

So that explains free will, the power to act in a way we can fairly call our own. To summarize, our brains behave deterministically, but we perceive the methods they use as selections from a realm of possibilities, and we quite reasonably identify with those methods, so we take both credit and responsibility for the decisions. More significantly, while we were dealt a body and mind with certain strengths and an upbringing with certain cultural benefits, this still leaves a vast array of possible futures for our algorithms to choose from. Since nobody can exercise duress on us inside our own minds, this means that no other entity but the one we see as ourselves can take credit or blame for any decision we make. Do we have to take responsibility for our actions or can we absolve ourselves as merely inheriting our bodies and minds? We do have to take credit and blame because running the optimization algorithms is an active role; abstaining would mean doing nothing, which is just a different choice. Note that this physical responsibility is not the same as moral responsibility. How our thoughts, choices, and actions stand up from a societal standpoint is another question outside the scope of this discussion. But physically, if we perform an action then it is a safe bet that we exercised free will to do it. The only alternate explanations are mind control or some kind of misinterpretation, e.g. that it looked like we pressed the button but actually we were asleep and our hand fell from the force of gravity.

Sometimes free will is defined as “the ability to choose between different possible courses of action”. This definition is actually tautological because the word “choose” is only meaningful if you understand and accept free will. To choose implies unpredictability, an optimization algorithm, consciousness, and ownership of consciousness. Our whole subjective vocabulary (subjective words include feel, see, imagine, hope) implies all sorts of internal mechanisms we can’t readily explain objectively. And we are so prone to intermingling subjective vocabulary with objective vocabulary that we are usually unaware we are doing it.

One more point about free will: my position is a compatibilist position, meaning that determinism and free will are compatible concepts. Free will doesn’t undermine determinism, it just combines unpredictability, optimization algorithms, and the conscious modeling of future possibilities to produce an effect that is indistinguishable from the actual future changing based on our actions.

 

  1. Helen Keller, The World I Live In (London: Hodder and Stoughton, 1909), p. 141.
  2. Michael Graziano, “A New Theory Explains How Consciousness Evolved,” The Atlantic, 2015.
  3. “Introspection,” Scholarpedia, http://www.scholarpedia.org/article/Introspection
