3.1 Getting Objective About Control

In this chapter, I will discuss considerations that relate to how we can study strategies of control used by the mind scientifically, rather than just conversationally:

Approaching Control Physically
Approaching Control Functionally
Deriving Objective Knowledge
The Contribution of Nature and Nurture to the Mind
How to Study the Mind Objectively
Our Common-Sense Theory of Consciousness
Toward a Scientific Theory of Consciousness

Approaching Control Physically

I have argued that evolution pulls life forward like a ratchet toward ever higher levels of functionality. The ratchet doesn’t compel specific results, but it does take advantage of opportunities. A digestive tract is a specific functional approach for processing food found only in bilateral animals (which excludes animals like sponges and jellyfish). Jellyfish may be the most energy-efficient animals, but bilaterals can eat more, move faster, do more things, and consequently spread further. Beyond digestion, bilaterals developed a variety of nervous, circulatory, respiratory, and muscular systems across major lines like insects, mollusks, and vertebrates. These systems and their component organs and tissues are functionally specialized, not just because we choose to see them that way, but because they actually have specific functional roles. Evolution selects functions, which are abstract, generalized capabilities that don’t just do one thing at one instant but do similar kinds of things on an ongoing basis. Similarity means nothing to physicalism; it is entirely a property of functionalism. A digestive tract really does localize all food consumption at one end and all solid waste elimination at the other. Muscles really do make certain kinds of locomotion or manipulation possible. Eyes use a certain range of light to make images at a certain resolution that can be identified as certain kinds of objects. These are all fundamentally approximate functions, yet they are delineated to the very specific ranges that are most useful to each kind of animal.

Evolution often selects distinct functions we can point out. This is most often the case when the solution requires physical components, as with the body of an organism. For example, tracts, blood vessels, somatic nerves, bones, and muscles have to stretch from A to B to achieve their functions, and this places strong constraints on possible solutions. For example, tracts, vessels, and nerves are unidirectional because this approach works so well. I could find no examples where these systems or any part of them serve two distinct, non-overlapping biological functions.1 One possible example is the pancreas. 99% of the pancreas consists of exocrine glands that produce digestive enzymes delivered by the pancreatic duct to the duodenum next to the stomach, while 1% consists of endocrine glands that release hormones directly into the blood. The endocrine system consists of glands from around the body that use the blood to send hormonal messages, and is considered distinct from the digestive system. But the hormones the pancreas creates (in the islets of Langerhans) are glucagon and insulin, which modulate whether the glucose produced by digestion should be used for energy or stored as glycogen. So these hormones control the digestive system and are thus arguably part of it. But this example brings me to my main point for this part of the book: control functions.

Let’s divide the functions animal bodies perform into control functions and physical functions. Control functions are the high-order information processing done by brains, and physical functions are everything else. Physical functions also need to be controlled, but let’s count local control beneath the level of the brain as physical since my focus here is on the brain. The brain uses generalized neural mechanisms to solve control problems using some kind of connectivity approach. While it is true that each physical part of the brain tends to specialize in specific functions, which typically relates to how it is connected to the body and the rest of the brain, the neural architecture is also very plastic. When parts of the brain are damaged, other parts can often step in to take over. It is not my intention to explore neural wiring, but just to note that however it works, it is pretty generalized. Unlike physical functions, for which physical layouts provide clues, the function of the brain is not dictated by its physical structure. Instead, it is the other way around: the physical structure of the brain is designed to facilitate information processing.

If the physical layout is not revealing the purpose, we need to look to what the brain does to tell us how it works. We know it uses subconcepts to pull useful patterns from the data, but these patterns are not without structure; the more they fall into high-level pattern groups, the more bang you can get for the buck. What I mean is that function always looks for ways to specialize so it can do more with less, improving the efficient delivery of the function. The circulation system does this by subdividing arteries down to capillaries and then collecting blood back through ever-larger veins. So it stands to reason that if there were any economies of scale that would be useful in the realm of cognitive function, they would have evolved. What’s more, concepts are ideal for describing such high-level functions. This doesn’t mean a conceptual description will perfectly match what happens in the brain, and realistically it probably can’t, because these systems of functionality in the brain can interact across many more dimensions and to a much greater depth than can physical systems. But if we recognize we can only achieve approximate descriptions, we should be able to make real progress.

Although I am principally going to think about what the brain is doing to explain how it works, let’s just start with its physical structure to see what light it can shed on the problem. Control in the body is managed by the neuroendocrine system. This system controls the body through somatic nerves and hormones in the blood. Hormones engage in specific chemical reactions, from which we have determined their approximate functions and what conditions modulate them. Somatic nerves connect to every corner of the body, with sensory nerves collecting information and motor nerves controlling muscles, as I noted in Chapter 2.2, but knowing that doesn’t tell us anything about how they are controlled. In most animals, all non-hormonal control is produced by interneurons in the brain. Unfortunately, we understand very little about how interneurons achieve that control.

The brain is not just a blob of interneurons; it has a detailed internal anatomy that tells us some things about how it works. The brain has a variety of physically distinct kinds of tissues which divide it up into broad areas of functional specialization. The brain appears to have evolved the most essential control functions at the top of the spinal column, called the brainstem. These lowest levels of the brain are collectively called the “reptilian brain” as an approximate nod to having evolved by the time reptiles and mammals split. The medulla oblongata, which comes first, regulates autonomic functions like breathing, heart rate, and blood pressure. Next is the pons, which helps with respiration and connections to other parts of the brain. The midbrain is mostly associated with vision, hearing, motor control, sleep, alertness, and temperature regulation. Behind the brainstem is the cerebellum or hindbrain, which is critical to motor control, balance, and emotion. The remainder of the brain is called the forebrain. The forebrain has some smaller, central structures, collectively called the “paleomammalian brain” because all mammals have them. These areas notably include the thalamus, which is mostly a relay station between the lower and upper brain, the hypothalamus, which links the brain to the endocrine system through the pituitary gland, and the limbic system, which is involved in emotion, behavior, motivation, and memory.

The largest part of the forebrain is the cerebral cortex or “neomammalian brain”. Its outer surface or neocortex has many cortical folds and is thought to be the source of higher cognitive function. Parts of the neocortex are connected to sensory nerves. In particular, the sensory cortex is prewired to route input from sense organs to specific regions, most notably for sight, hearing, smell, taste, and touch. Touch maps each part of the body to corresponding areas of the primary somatosensory cortex. This creates a map of the body in the brain called the cortical homunculus. Most of this cortex is dedicated to the hands and mouth. The retina maps to corresponding areas of the visual cortex. The right hemisphere controls sight and touch for the left side of the body and vice versa. The rest of the cerebral cortex is sometimes called the association cortex to highlight its role in drawing more abstract associations, including relationships, memory, and thought. It divides into occipital, parietal, temporal, and frontal lobes. The occipital lobe houses the visual cortex and its functions are nearly all vision-related. The parietal lobe holds the primary somatosensory cortex and also areas that relate to body image, emotional perception, language, and math. The temporal lobe contains the auditory cortex and is also associated with language and new memory formation. The frontal lobe, which is the largest part of the neocortex, is the home of the motor cortex, which controls voluntary movements, and supports active reasoning capabilities like planning, problem-solving, judgment, abstract thinking, and social behavior. The frontal lobe receives dense projections from dopamine neurons (whose cell bodies lie in the midbrain), and dopamine is connected to reward, attention, short-term memory tasks, planning, and motivation.

Approaching Control Functionally

Does any of this detail really tell us much of anything about how interneurons control things? Not really, even though the above summary only scratches the surface of what we know. This kind of knowledge only tells us approximately where certain functions reside, not how they work. Also, no function is exactly in any specific place because the brain builds knowledge using many variations of the same patterns and so naturally achieves a measure of redundancy. To develop mastery over any subject, many brain areas learn the same things in slightly different ways, and all contribute to our overall understanding. Most notably, we have two hemispheres that are limited in how much they can communicate with each other, and so allow us to specialize similarly yet differently in each half. This separate capacity is so significant that surgically divided hemispheres seem to create two separate, capable consciousnesses (though only one half will generally be able to control speech). Live neuroimaging tracks blood flow to show how related functionality is widely distributed around the brain. Brain areas develop and change over time with considerable neuroplasticity or flexibility, and can actually become bigger and stronger like muscles, but with permanent benefit. But the brain is not infinitely plastic; each area tends to specialize based on its interconnections with the body and the rest of the brain, and the areas differ in their nerve architecture and neurochemistry. But each area still needs to develop, and how we use it will partially affect its development. The parts of the neocortex beyond the sensory cortex have a similar architecture of cortical folds and similar connectivity, and none seem to be the mandatory site of any specific mental function. Each subarea of each lobe does tend to be best at certain kinds of things, but the evidence suggests that different kinds of functionality can and do develop in different areas in different people. Again, this is most apparent in hemispheric specialization, called lateralization. The left half is wired to control the right side of the body and vice versa. The left brain is stereotypically the dominant half and is more logical, analytical, and objective, while the right brain is more creative and intuitive, but this generalization is at best only approximately true, and differences between the hemispheres probably matter less to brain function than their similarities. Still, it is true that about 90% of people process language predominantly in their left hemisphere, about 2% predominantly in the right, and about 8% symmetrically in each.2 Furthermore, while language processing was originally thought to be localized to Broca’s area in the frontal lobe and Wernicke’s area in the temporal lobe, we now know that language functionality occurs more broadly in these lobes and the chief language areas can differ somewhat in different people. It only makes sense that the brain can’t have specific areas for every kind of cultural knowledge, even though it will have strong predispositions for something like language, which has many innate adaptations.

The upshot is that the physical study of the brain is not going to give us much more detail about how and why it controls the body. So how can we explain the mind? The mind, like all living systems, is built out of function. Understanding it is the same thing as knowing what it does. So instead of looking at parts, we need to think about how it controls the body. We need to look at what it does, not at its physical construction. What would a science founded entirely on functional instead of physical principles even look like? The formal sciences like math and computer science are entirely functional, so they are a good place to start. By abstracting themselves completely from physical systems, the formal sciences can concern themselves entirely with what functional systems are capable of. The formal sciences isolate logical relationships and study their implications using streamlined models or axiomatic systems, covering things like fields (number systems), topological spaces (deformations of space into manifolds), algorithms, and data structures. They define logical ground rules and then derive implications from them. Formalisms are entirely invented, yet their content is both objective and true because their truth is relative to their internal consistency. Although functional things must be useful by definition, since the underlying nature of function lies in its ability to predict what will happen, within a formalism that capacity is entirely internal. Every formalism is its own little universe in which everything follows deterministically, and those implications are what “useful” means within each system. Conceptual thinking arose in humans to leverage the power of formal systems. Conceptual thinking uses formalisms, but just as significantly it requires ways to map real-world scenarios to formal models and back again. This mapping and use of formalisms is the very definition of application: putting a model to use.
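
To make this concrete, here is a minimal sketch in Python (my own illustration; the axioms and the single inference rule are invented for the example) of what a formal system amounts to: a few ground rules are declared, and every implication is then derived mechanically, so every “truth” in the system is purely a matter of internal consistency.

```python
# A toy formal system: a few axioms plus one inference rule (modus
# ponens). Everything derivable is "true" relative only to the
# system's internal consistency. The axioms here are invented.

axioms = {"A", "A -> B", "B -> C"}

def modus_ponens(facts):
    """Derive q from p and 'p -> q'; return only newly derived facts."""
    derived = set()
    for fact in facts:
        if "->" in fact:
            p, q = [s.strip() for s in fact.split("->")]
            if p in facts and q not in facts:
                derived.add(q)
    return derived

facts = set(axioms)
while True:
    new = modus_ponens(facts)
    if not new:          # nothing new derivable: the system is closed
        break
    facts |= new

print(facts - axioms)    # the derived truths: B and C
```

An experimental science, by contrast, must also map such a model back onto the world, which is where induction, and with it bias, re-enters.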

Of course, many sciences do directly study the mind from a functional perspective: the social sciences. Most work in the social sciences is concerned with practical and prescriptive ways to improve our quality of life, which is an important subject but not the point of this book. But much work in the social sciences, particularly in psychology, anthropology, linguistics, and philosophy, focuses on increasing our understanding of the mind and its use without regard to any specific benefits. Just like the formal sciences, they seek general understandings with the hope that worthwhile applications will ensue. The social sciences are reluctant to talk too much about their philosophical basis because it is not firmly established. They would like to tacitly ride the coattails of physicalism on the grounds that humans evolved and are natural phenomena in a deterministic universe, and so our understanding of them should ultimately be able to enjoy the same level of objective certainty that the physical sciences claim. But this is impossible because information is constructed out of approximations, namely correlations of similarity that have been extrapolated into generalizations. The information of life is processed by cells and the information that manages the affairs of animals is managed by minds. The social sciences need only acknowledge their dual foundation on form and function dualism. This enhanced ontology shifts the way we should think about the social sciences. Where we could not previously connect functional theories back to physical mechanisms, we can now see that computational systems create information that connects back to referents indirectly. It is all a natural consequence of feedback, but it gives nonphysical things a way to manifest in an otherwise physical world. DNA allows information to be captured and managed biologically, and minds allow it to be captured and managed mentally. Culture then carries mental information forward through succeeding generations, and the Baldwin effect gradually sculpts our nature to produce culture better in a mutual feedback loop.

Deriving Objective Knowledge

More specifically, what would constitute scientific or objective knowledge about function? Objectivity seeks to reveal that which exists independent of our mental conception of that existence. In other words, objectivity seeks to reveal noumena, either physical or functional. We know that minds can’t know physical noumena directly and so must settle for information about them, which is to say, percepts and concepts about them. But how can percepts or concepts be seen as independent of our mental conception if they are mental conceptions? Strictly speaking, information about something other than itself can’t be perfectly objective. But information that is entirely about itself can be perfectly objective. I said before that we can know some functional noumena through direct or a priori knowledge by setting them true by definition. If we establish pure deductive models with axiomatic premises and fixed rules that produce inescapable conclusions, then truths in these models can be said to be perfectly objective because they are based only on internal consistency and are thus irrefutable.

If deduction is therefore wholly objective, what does that mean for percepts and concepts? Percepts are created inductively, without using axioms written in stone, and are therefore entirely biased. They have a useful kind of bias because information backs them up, but their bias is entirely prejudicial. One can’t explain or justify perception using perception, but it is still quite helpful. Concepts are both inductive and deductive. We create concepts from the bottom up based on the inductive associations of percepts, but the logical meaning of concepts is defined from the top down in terms of relationships between concepts, and this creates a conceptual model. From a logical standpoint, concepts are axiomatic and their relationships support deduction. Viewed from this perspective, conceptual models are perfectly objective. However, to have practical value, conceptual models need to be applied to circumstances, and this application depends on inductive mappings and motivations, which are highly susceptible to bias. So the flat earth theory is objectively true from the standpoint of internal consistency, but it doesn’t line up with other scientific theories and the observations that support them, so it seems likely that flat-earthers are clinging to preexisting beliefs and prejudicially excluding contrary evidence or interpreting it as confirming their views (confirmation bias). We are capable of believing anything if we don’t know any better or if we choose to ignore contradicting information. It is risky to ignore our own capacity for reason, whether by tolerating inconsistent conceptual models or by disregarding available evidence that contradicts them, but we are allowed to go with our intuition instead if we like. But that route is not scientific. Experimental science must make an adequate effort to ensure that the theoretical model applies to appropriate experimental conditions, and this includes minimizing negative impacts from bias.

If maximizing objectivity means minimizing bias, then we need to focus on the biases most likely to compromise the utility of the results. Before I look at how we do that with science, consider how important minimizing bias is to us in our daily lives. If we worked entirely from percepts and didn’t think anything through conceptually, we could just proceed with knee-jerk reactions to everything based on first impressions, which is how animals usually operate (and sometimes people as well). As an independent human, if I am hungry, I can’t just go to my pantry and expect food to always be there. I have to buy the food and put it in the pantry first. To make things happen the way I want to an arbitrarily precise degree, I need lots of conceptual models. By going to the extra trouble of holding things to such rational standards, that is, by expecting causes to exist for all effects, I know I can achieve much better control of my environment. But I am not comfortable just having better control; I want to be certain. For this, I have to both have conceptual models and apply them correctly to relevant situations. Having done this, my certainty is contingent on my having applied the models correctly, but if I have minimized the risk of misapplication sufficiently, I can attach a special cognitive property called belief to the models, and through belief I can perceive this contingency as a certainty. The result is that I always have food on hand to the exact degree I expect.

As a lower-level example, we have many conceptual models that tell us what our hands can do. Our perception of our hands is very good and is constantly reinforced and never seems to be contradicted, so we are quite comfortable about what we can do with our hands. However, the rubber hand illusion demonstrates how easily we can be fooled into thinking a rubber hand is our own. When a subject’s real left hand is placed out of sight and a similar-looking rubber hand is placed in view next to their right hand, and then both the hidden real hand and the rubber hand are stroked synchronously with paintbrushes, the subject will start to feel that the rubber hand is their own. What this shows is that the connection between low-level senses and the high-level experience of having hands is not created by a fixed information pathway but triggers when given an adequate level of stimulation. In this case, seeing the rubber hand stroked while feeling synchronized strokes on the unseen real hand prompts us to feel that the visible hand is our own. That activated experience is all-or-nothing: we believe it is our hand. Of course, this illusion won’t hold up for long; we have only to move our arms or fingers or touch our hands together and we will see through it.
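
The threshold-and-trigger character of the illusion can be caricatured in a few lines of code (a toy model of my own devising, not a model from the perception literature): correlated seen-and-felt evidence accumulates, and the feeling of ownership switches on all-or-nothing once a threshold is crossed.

```python
# Toy all-or-nothing model of body ownership (illustrative only).
# Synchronous seen/felt strokes add evidence; asynchronous ones
# subtract it. Ownership is binary once a threshold is crossed.

THRESHOLD = 5.0

def feels_like_my_hand(strokes):
    """strokes: sequence of 'sync' or 'async' stroke events."""
    evidence = 0.0
    for stroke in strokes:
        evidence += 1.0 if stroke == "sync" else -1.0
        evidence = max(evidence, 0.0)   # evidence can't go negative
    return evidence >= THRESHOLD        # a belief, not a graded feeling

print(feels_like_my_hand(["sync"] * 10))           # True: illusion takes hold
print(feels_like_my_hand(["sync", "async"] * 10))  # False: never crosses threshold
```

In this caricature, moving a finger supplies one strongly contradicting observation that drops the total below threshold, and the belief vanishes as abruptly as it arrived.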

Conscious belief works much the same way to let us “invest” in conceptual models. Once a threshold of supporting stimuli is achieved, we believe the model properly applies to the situation, which carries all its deductive certainty with it. We do realize on some level that the application of this model to the situation could be inappropriate, or that our conceptual model may be simplified relative to the real-world situation and thus may not adequately explain all implications, but belief is a state of mind that makes the models feel like they are correct. Movies feel real, too, although we know they are made up. We know belief has limits, but because we can feel it, it lets us apply our emotions and intuitions to develop fast but holistic responses. We need belief to make our conceptions feel fully embodied so we can act on them efficiently.

Science proposes conceptual models. Formal sciences stay as rigorously deductive as possible and are thus able to retain nearly all their objectivity. Though some grounds for bias must always remain, bias is systematically eliminated from the formal sciences because it is so easily spotted. The explicit logic of formalisms doesn’t leave as much room for interpretation. For natural or real numbers to work consistently with the operations we define on them, we have to propose axioms and rules that can be tested for simplicity and consistency from many angles. That said, the axioms of formal systems have their roots in informal ideas that are arguably idiosyncratic to humans or certain humans. Other kinds of minds could conceivably devise number systems quite unlike the ones we have developed. However, I am concerned here with minds, which exist in physical brains and not simply as formalisms. We have to consider all the ways that bias can impact the experimental sciences.

The effect of bias on perception is generally helpful because it reflects knowledge gained through experience, but cognitive bias is detrimental because it subverts logical reasoning, leading to unsound judgments. Many kinds of cognitive biases have been identified and new ones are uncovered from time to time. Cognitive biases divide into two categories, hot and cold. Cold biases are inadvertent consequences of the ways our minds work. Illusory correlations lead us to think that things observed together are causally related. It is a reasonable bias because our experience shows that they often are. Neglect of probability is a bias which leads people to think very unlikely events are nearly as likely as common events. Again, this is reasonable because our experience only includes events that happened, so our ability to contemplate an event makes it feel like it could plausibly happen. This bias drives lottery jackpots sky high and makes people afraid that stories on the news will happen to them. Anchoring bias leads us to put too much faith in first impressions. It usually helps to establish a view quickly and then refine it over time as more information comes in, but this tendency gives stores an incentive to overprice merchandise and then run sales to give us the feeling we got a bargain. A related bias, insensitivity to sample size (which I previously mentioned was discovered by Kahneman and Tversky), relates to our difficulty in intuitively sensing how large a sample must be to be statistically significant. Combined with our faith that anyone doing a study must have done the math to pick the right size, it leads us to trust invalid studies. Hot biases happen when we become emotionally invested in the outcome, leading to “wishful” thinking. When we want a specific result, we can rationalize almost any behavior to reach it. But why would a scientist want to reach a specific result rather than learn the truth? It can either be to avoid cognitive dissonance or because the incentives to do science are skewed. Avoiding cognitive dissonance includes confirmation bias, which favors one’s existing beliefs; superiority bias, the sense that one knows better than others; and simply the bias of believing oneself unbiased. We all like to think we have been thinking clearly, which leads us at first to reject evidence that we have not been. Skewed scientific incentives are the biggest bias problem that science faces. Private research is skewed to bolster claims that maximize profit. Public research is skewed by political motivations, but even more so by institutional cognitive dissonance, which carries groups of people along on the idea that whatever they have been doing has been justified. Individual scientists must publish or perish, which skews their motivations toward quantity rather than quality.
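
Insensitivity to sample size is easy to demonstrate with a simulation of Kahneman and Tversky’s hospital problem (a sketch using only Python’s standard library; the birth counts are the ones from their original question): a small hospital records days with more than 60% boys about twice as often as a large one, though intuition says the two should be similar.

```python
import random

# Kahneman & Tversky's hospital problem: a small hospital has about 15
# births a day, a large one about 45. Which records more days on which
# over 60% of births are boys? Intuition says they are about the same;
# in fact small samples stray from the true 50% rate far more often.

def days_over_60_percent_boys(births_per_day, days=20_000):
    count = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.6:
            count += 1
    return count / days

random.seed(0)
print(days_over_60_percent_boys(15))   # about 0.15 (15% of days)
print(days_over_60_percent_boys(45))   # about 0.07 (7% of days)
```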

Knowing that cognitive bias is a risk, let’s consider how much it impacts the physical sciences. Most physical laws are pretty simple and are based on only a few assumptions, but because nature is quite regular this has worked out pretty well. The Standard Model of particle physics predicts what will happen to matter and energy with great reliability outside of gravitational influences, and general relativity predicts the effects of gravity. Chemistry, materials science, and earth science also provide highly reliable models which, in principle, reduce to the models of physics. Because the models in these sciences have such a strong physical basis and can be tested and replicated using precision instruments to high technical standards, there is not a lot of room for cognitive bias. For example, despite great political pressure to find that the climate is stable, nearly all scientists maintain that it is changing quickly due to human causes. Politics does not sway the science much, and the proven value of the truth seems higher than people’s desires to support fictions. More could be done to mitigate bias, but overall it is a minor problem in the physical sciences. But systemic bias is a bit more than a theoretical concern. In what is widely thought to be the most significant work in the philosophy of science, Thomas Kuhn’s The Structure of Scientific Revolutions argued that normal science moves along with little concern for philosophy until a scientific revolution undermines and replaces the existing paradigm(s). This change, which we now call a paradigm shift, doesn’t happen as soon as a better paradigm comes along, but only after it gains enough momentum to change the minds of the stubborn, close-minded majority. Kuhn inadvertently alienated himself from the scientific community with this revelation and never could repair the damage, but the truth hurts. And the truth is, there is a hole in the prevailing paradigm of physicalism that I’d like to drive a Mack truck through: physicalism misses half of reality, the functional half. Fortunately, functional existence doesn’t undermine physical theories; quite the opposite, it extends nature to include more kinds of phenomena than we thought it could support. That functional things can naturally exist using physical mechanisms makes nature broader than the physicalists thought it was. But putting this one point of ontological overreach aside, the physical sciences tend to be biased only in small ways.

The biological sciences are heavily functional but depend on physical mechanisms that can often be explained mechanically. Consequently, we have many biological models that are quite robust and uncontroversial, and this makes bias a minor issue for much of biology. Medicine in the United States, however, is strongly driven by the profit motive, and this often gets in the way of finding the truth. This bias leads to overmedication, overtreatment, and a reluctance in medicine to discover and promote a healthy lifestyle. Any scientific finding that was influenced by the profit motive is pretty likely to be skewed and may do more harm than good. Because of this, nearly all pharmaceuticals are still highly experimental and are likely more detrimental than beneficial. You have only to look at the list of side effects to see that you are playing with fire. Surgery is probably medicine’s greatest achievement because it provides a physical solution to a physical problem, so the risks and rewards are much more evident. However, its success has led to its overuse, because it always comes with real risks that patients may underappreciate.

Bias is a big problem for the social sciences because they have no established standard for objectivity, and yet there are many reasons for people to have partial preferences. The risk of wishful thinking driving research is great. To avoid overreach, the scope of claims must be constrained by what the data can really reveal. The social sciences can and do demonstrate correlations in behavior patterns, but they can never say with complete confidence what causes that behavior or predict how people will behave. The reason is pretty simple: the number of variables that contribute to any decision of the mind is nearly infinite, and even the smallest justification may rise up to become the determinant of action. An information processing system of this magnitude has a huge potential for chaotic behavior, even if most of its actions appear to be quite predictable. I can’t possibly tell you what I will be doing one minute from now. I will almost certainly be writing another sentence, but I don’t know what it will say. But I’m not just trying to find correlations; I am looking for explanations. If behavior can’t be precisely predicted, what can be explained? The answer is simply that we don’t need to predict what people will do, only explain what they can do. It is about explaining potential rather than narrowly saying how that will play out in any specific instance. Explaining how the mind works is thus analogous to explaining how a software program works: we need to delve into what general functions it can perform. What a program is capable of is a function of the hardware capabilities and the range of applications for which the algorithms can be used. What a mind is capable of is a function of species-specific or phylogenetic traits and developed or ontogenetic traits.
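
In software terms, the explanatory target is the interface rather than the implementation. Here is a sketch of the analogy (the class and method names are invented placeholders, not a claim about actual cognitive architecture):

```python
from abc import ABC, abstractmethod

# Explaining a mind by its capabilities is like documenting a program
# by its interface: it says what the system CAN do while staying
# silent on how any particular run will play out.

class Mind(ABC):
    @abstractmethod
    def perceive(self, stimulus):
        """Detect familiar patterns in raw input."""

    @abstractmethod
    def recall(self, cue):
        """Retrieve stored information related to a cue."""

    @abstractmethod
    def decide(self, options):
        """Rank possible actions against current priorities."""

# Any implementation satisfying this interface, whether neurons or
# silicon, "explains the potential" without predicting specific acts.
```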

The Contribution of Nature and Nurture to the Mind

This brings us to the subject of nature vs. nurture. Nature refers to inherited traits and nurture refers to environmental traits, meaning distinctions caused by nongenetic factors. While phylogeny is all nature, ontogeny is a combination of nature and nurture. How much is nature and how much is nurture? This question doesn’t really have an answer because it is like asking how much of a basketball game is the players playing and how much is the fans watching? From one perspective, it is entirely the players, but if the fans provide the interest and money for the game to take place, they arguably cause the game to happen, and if nearly all the minds participating are fans, then the game is mostly about the audience and the players are almost incidental. Nurture provides an add-on which is dramatic for humans. Most animals under normal environmental conditions can fairly be said to have entirely inherited traits with perhaps no significant environmental variation at all. Given normal environments, our bodies develop along extremely predictable genetic lines, rendering identical twins into almost identical-looking adults. Most of ontogeny proceeds from the differentiation of cells into tissues and organs, called epigenesis, but environmental factors like diet, disease, trauma, and even chance fluctuations can lead to differences in genetic twins. Brains, however, can create a whole new level of nurtured traits because they manage real-time information.

While genetic and epigenetic information is managed at the cellular level by nucleic acids and proteins, the behavioral information of animals is managed at a higher level by the neuroendocrine system, which, for simplicity, I will just call the brain. The mind is the conscious top-level control subprocess of the brain, which draws support as needed from the brain through nonconscious processes. Anything the nonconscious mind can contribute to the conscious mind can be thought of as part of the whole mind. We can first divide nurtured traits in the cognitive realm according to whether they are physical or learned. Physically, things like diet, disease, and trauma affect the development of the brain, which in turn can affect the capabilities of the mind. I am not going to dwell on these kinds of traits because most brains are able to develop normally. Learned traits all relate to the content of information, based on patterns detected and likelihoods established. While we create all the information in our minds, we act either as the primary creators of it or as secondary consumers of information from others. Put another way, primary intrinsic information is made by us and secondary intrinsic information is conveyed to us by others using extrinsic information; let’s call these modes of acquisition primary and secondary learning. Most nonhuman animals, especially outside of mammals and birds, have little or no ability to gather extrinsic information and so rely almost entirely on primary learning. They use phylogenetic (inherited) information processing skills like vision to learn how to survive in their neck of the woods. Their specific knowledge depends on their own experience, but all of that knowledge was created by their own minds using their natural talents.

Humans, however, are highly dependent on socialization. We can gather extrinsic information from others either through direct interactions or through media or artifacts they create. Our Theory of Mind (TOM) capacity, as I previously mentioned, lets us read the mental states (such as senses, emotions, desires, and beliefs) of others. We learn to use TOM itself through primary learning, as we don’t need to be taught how, but the secondary information we can learn this way is limited to what others’ body language reveals. Of greater interest here is not what we can pick up by this kind of “mind reading”, but what others try to convey to us intentionally. This communication is done most notably with language, but much can also be conveyed using pictures, gestures, etc. Language is certainly considered the springboard that made detailed communication possible. Language labels the world around us and makes us very practiced in specific ways of thinking about things. Language necessarily represents knowledge conceptually because words have meanings and meanings imply concepts, though the same word can have many meanings in different contexts. Language thus implies and depends on the existence of many shared conceptual models that outline the circumstances of any instance of language use. (Of course, this only relates to semantic content, which, as I previously noted, is only a fraction of linguistic meaning.)

Since we must store both primary and secondary information intrinsically, we each interpret extrinsic information idiosyncratically and thus derive somewhat different value from it. But if extrinsic information is different for each of us, in what sense does it even exist? It exists, like all information, as a conveyor of functional value. It indicates, in specific and general ways, how similarities can be detected and applied to do things similarly to how they have been done before. We may have our own ways of leveraging extrinsic learning, but it has been suitably generalized so that some functionality can be gleaned. In other words, our understanding of language overlaps enough that we usually have the impression we understand each other in a universal rather than an idiosyncratic way.

Except for identical twins, we all have different genes. Any two human genomes differ in about five million places, and while nearly all of those differences have little or no effect, and most of the rest are neutral most of the time, there are still many thousands of differences that are sometimes beneficial or detrimental. Many of these no doubt affect the operating performance of the brain, which in turn impacts the experience of the mind. Identical twins raised together or apart are often strikingly similar in their exact preferences and abilities, but they can also be quite different. If even identical twins can vary dramatically, we can expect that it will be difficult and often impossible to connect learned skills back to genetic causes. People just learn different things in different ways because of different life experiences and reactions to them. We know that genetic differences underlie many functional differences, but the range of human potential is so large that everyone is still capable of doing astonishing things if they try. But this book is not about genius or self-help. My goal is just to explain the mental faculties we all have in common.

I’m not going to concern myself much going forward with which mental functions are instinctive and which are learned, or which are universal by nature and which by nurture. It doesn’t really matter how we come by the universal functions we all share, but I am going to be focusing on low-level functions whose most salient similarities are almost certainly nearly all genetic. The basic features of our personalities are innate; we can only change our nature in superficial ways, which is exactly why it is called our nature. But many of those features develop through nurture, even if the outcome is probably largely similar no matter what path we take in life.

How to Study the Mind Objectively

Getting back to objectivity, how can we study the mind objectively? The mind is mostly function and we can’t study function using instruments. The formal sciences are all function but can’t be studied with instruments either. We create them out of whole cloth, picking formal models on hunches and then deriving the implications using logic alone. We change formal models to eliminate or minimize inconsistencies. We strip needless complexities until only relevant axioms and rules remain. However, the formal sciences only have to satisfy themselves, while the experimental sciences must demonstrate a correlation between theory and evidence. But if the mind is a functional system that can create formal systems that only have to satisfy themselves, can’t we study it as a formal science independent of any physical considerations? Yes, we can, and theoretical cognitive science does that.

The first serious attempts to discover the algorithms behind the mind were made by GOFAI, meaning “Good Old-Fashioned Artificial Intelligence”. This line of research ran from about 1956 to 1993, based mostly on the idea that the mind uses symbols to represent things and algorithms to manipulate the symbols. This is how deductive logic works and is the basis of both natural and computer programming languages, so it seemed obvious at the time that the formal study of such systems would quickly lead to artificial intelligence. But this line of thinking made a subtle mistake: it completely misses the trees for the forest. Conceptual thinking in isolation is useless; it must be built on a much deeper subconceptual framework to become applicable or useful for any purpose. The idea that intelligence is based on more generalized pattern processing, probably something akin to a neural network, was first proposed in the 1940s and ’50s, but GOFAI eclipsed it for decades, mostly because computers were slow and had little memory. Symbolic approaches were just more attainable until the late 1980s. Since then, neural network approaches that simulate a subconceptual sea have become increasingly popular, but efforts to build generalized conceptual layers on top of them have not yet, to my knowledge, borne fruit. It is only a matter of time before they do, and this will greatly increase the range of tasks computers can do well, but it will still leave them without the human motivation system. Our motivations channel our cognitive energies in useful directions, and without such a system one would have no way to distinguish a worthwhile task from a waste of time. We can, of course, provide computers with some of our motivations, such as to drive cars or perform internet tasks, and they can take it from there, but we couldn’t reasonably characterize them as sentient until they have a motivational system of their own. Sci-fi writers usually take for granted that the appropriate motivations to compete for survival will come along automatically with the creation of artificial conceptual thought, but this is quite untrue. Our motivational system has been undergoing continuous incremental refinement for four billion years, much longer than the distinctive features of human intelligence, which are at most about four million years old. That is a long time, and it reflects the amount of feedback one needs to collect to build a worthwhile motivational system. The idea that we could implant a well-tuned motivational system into a robot with, say, three laws of robotics, is preposterous (though intriguing). Perhaps the most egregious oversight of the three laws, just to press this point, is that they say nothing about cooperation but instead suppose all interactions can be black or white. Of course, the shortcomings of the laws were invariably the basis of Asimov’s plots, so this framework provided a great platform to explore conundrums of motivation.
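
The two traditions can be contrasted in miniature (an illustrative sketch of my own, not actual systems from either era): the symbolic approach manipulates hand-coded rules over tokens, while the connectionist approach learns weights from examples with no explicit rules anywhere.

```python
# GOFAI in miniature: explicit symbols and hand-written rules.
rules = {("bird", "can_fly"): True, ("penguin", "can_fly"): False}

def symbolic_query(subject, predicate):
    return rules.get((subject, predicate), "unknown")

# Connectionism in miniature: a perceptron learning logical OR from
# examples. The "knowledge" ends up as weights, not as rules.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(20):                     # a few passes suffice here
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out              # classic perceptron update
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

print(symbolic_query("penguin", "can_fly"))   # False, by explicit rule
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in data])              # [0, 1, 1, 1], by learned weights
```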

I do think that by creating manmade minds, artificial intelligence research will ultimately be the field that best proves how the mind works. But I’m not going to wait for that to materialize; I’d like to provide an explanation of the mind based on what we already know. To study the mind as it is, we can’t ignore the physical circumstances in which it was created and in which it operates. And yet, we can’t overly dwell on them either, because they are just a means to an end: they provide the mechanisms which permit functionality, but they are not functional themselves. How can we go about explaining the functionality if the physical mechanism (mainly the biochemistry) doesn’t hold the answers? All we have to do is look at what the mind does rather than what the brain is. The mind causes the body to do things; not randomly, but as a consequence of information processing. In the same way that we can decompose physical things into functional units connected physically, we can model the mind using functional units connected functionally. Where we can confirm physical mechanisms by measuring them with instruments, we can confirm functional mechanisms by observing their behavior. The behaviorists held that only outward physical behavior mattered and that mental processes were direct byproducts of conditioning. While we now know short, direct feedback loops can be very effective, most of our thinking involves very complex and indirect feedback loops that include a lot of thinking things through. We do need to examine behavior, but at the level of thoughts and not just actions. All the information management operations conducted by the mind are organized around the objective of producing useful (i.e. functional) results, so we need to think about what kinds of operations can be useful and why.

If we aren’t using instruments to do this, how can we do it objectively? Let’s recall that the goal of objectivity is to reveal noumena. Physical noumena are fundamentally tangible, so instruments offer the only way to learn about them. But if we interpreted those findings using just our feelings and intuitions about them, i.e. percepts (which include innate feelings and learned percepts, aka subconcepts), the results would be highly subjective. Instead, we conceive of conceptual models that seem to fit the observations and then distill them into more formal deductive models called hypotheses, which specify simplified premises and logical implications. These deductive models and instrumental observations will qualify as objective if they can be confirmed by anyone independently. However, to do that, we need to be able to connect the deductive model back to the real world. Concepts are built on percepts, so conceptual models are already well-connected to the world, but deductive models have artificial premises, so we need a strategy to apply them. The hypothesis does this by specifying the range of physical phenomena that can be taken to correspond adequately to the artificial premises. Actually doing this matching is a bit subjective, so some bias can creep in, but having more scientists check results in more ways can reduce the encroachment of subjectivity to a manageable level.

Functional noumena are fundamentally informational, which means that they exist to create predictive power that can influence behavior. Function isn’t tangible, so the only way we can learn about it is from observing what it does; not physically, but functionally. Just as we form deductive models to explain physical phenomena, we need to form deductive models to explain functional phenomena. Our feelings and intuitions (percepts) about our mind are not in and of themselves helpful for understanding the mind. To the extent introspection only gives us such impressions, it is not helpful. If we form a deductive model that explains the mind or parts of it, then the model itself and the conclusions it reaches will be quite objective because anyone can confirm them independently. But whether the model conforms to what actual minds are doing is another story. Without instruments, how can we make objective observations of the behavior of the mind? Consider that conceptual models of both physical and functional phenomena have a deductive component and a part where the concepts are built out of percepts. This latter part is where subjectivity can introduce unwanted bias.

We can’t ignore the perceptual origins of knowledge; it all starts from these inductive sources, and they are not objective because they are necessarily not sharable between minds. But concepts are sharable and are the foundation of language and the principal leg up that humans have over animals, cognitively speaking. Every word is a concept, or actually a concept for each dictionary entry, but inflection, context, and connotation are often used to shift word meanings to suit circumstances. Words and the conceptual models built out of them are never entirely rigid, both for practical reasons and because information necessarily has to cover a range of circumstances to be useful. But the fact that language exists and we can communicate with as much clarity as we wish by elaborating is proof that language permits objective communication. But if concepts derive from percepts, won’t they compromise any objectivity we might hope to achieve? Yes, but objectivity is never perfect. The goal of objectivity is to share information, and information is useful whenever it provides more predictive power than random chance. So the standard for objectivity should only be that the information can be made available to more than one person and that its degree of certainty is shared as well. The degree of certainty of objective knowledge is usually implied rather than explicitly stated, and can vary anywhere from slightly over 50% to slightly under 100%.

Let me provide a few examples of objectivity conveyed using words. First, a physical example. If I say that that light switch controls that light, then that is probably sufficient to convey objective information to any listener. The degree of certainty of most information we receive usually hovers just under 100%, but probably more significant is the degree of belief we attach to it. But belief is a separate subject I will cover in an upcoming chapter. Under what circumstances might my statement about the light switch fail to be objective? The following scenarios are possible: I might be mistaken, I might be lying, or I might be unclear. When I speak of a switch, a light, or control, and make gestures, commonly understood conventions fill in details, but if I either overestimate our common understanding or am just too vague, then this lack of clarity could compromise the objectivity of the message. For example, I might expect you to realize that the light will take a minute to warm up before it gives off any light, and you might flip the switch a few times and think I provided bad information. Or I might expect you to realize I only meant if the power were on or if the light had a bulb, even though I knew neither was the case. Mistaken information can still be objectively shared, but will eventually be discovered to be false. Lying completely undermines information sharing, so we should always consider the intentions of the source. Clarity can be a problem, so we need to be careful to avoid misunderstandings. But if we’ve taken these three risks into account, then the only remaining obstacle to objectivity is whether the conceptual model behind the information is appropriately applied to the situation. In this case, that means whether the switch can causally control the light and how. That, in turn, depends on whether the light is electric, gas, or something else, on the switch mechanism, and on our standard for whether light is being produced. We all have lots of experience with electric lights and standard light switches, so if this situation matches those familiar experiences, then our expectations will be clear. If the light fails to turn on, 99% of the time it will be a bulb failure rather than any of the other issues mentioned above.
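
That closing 99% judgment is implicitly Bayesian, and making the reasoning explicit shows how a conceptual model gets applied to a situation. Here is a sketch (all the prior and likelihood numbers below are invented for illustration, not measured failure rates):

```python
# Making the "99% of the time it's the bulb" judgment explicit with
# Bayes' rule. All numbers are invented for illustration.

priors = {"dead_bulb": 0.90, "bad_switch": 0.06, "power_out": 0.04}

# P(switch clicks normally | cause): a bad switch usually doesn't click.
clicks_ok = {"dead_bulb": 0.99, "bad_switch": 0.10, "power_out": 0.99}

# Observation: the light stays off, but the switch clicks normally.
evidence = sum(priors[c] * clicks_ok[c] for c in priors)
posterior = {c: priors[c] * clicks_ok[c] / evidence for c in priors}

for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {p:.3f}")   # dead_bulb comes out around 0.95
```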

Now let’s consider a functional example, specifically with a conceptual description of a mental phenomenon. If I say hearing gives me pretty detailed information about noises in my proximity, e.g. when a light switch is flipped, then that is probably sufficient to convey objective information to any listener. It’s not about the light switch; it is about the idea that my mind can process information about the noise it produces. Again, we have to control for mistakes, lies, and clarity, but taking those into account, we all have lots of experience with the sounds light switches make, so if this situation matches those familiar experiences, then our expectation of what objectively happened is about as clear as with the prior physical example. When we judge something to be objectively true even though it depends only on our experience, it is because of the overwhelming amount of confirming evidence. While not infinite, our experience with common physical and mental phenomena stretches back across thousands of interactions. While technically past performance is no guarantee of future results, we create deductive models based on past performance that absolutely guarantee future results, but with the remaining contingency that our model has to be appropriate to the situation. So if we fail to hear the expected sound when the switch is flipped, 99% of the time it will be because this switch makes a softer sound than we usually hear. And some switches make no sound, so this may be normal. Whatever the reason, we won’t doubt that “hearing” is a discrete conscious phenomenon. And like all conscious phenomena, the only way we can know if someone heard something is if they say so.

Our Common-Sense Theory of Consciousness

Our own conscious experience and reports from others are our only sources of knowledge about what consciousness is, so any theory of consciousness must be confirmed principally against these sources. Neuroscience, evolution (including paleontology, archaeology, anthropology, and evolutionary psychology), and psychology can add support to such theories, but quite frankly we would never guess that something like consciousness could even exist if we didn’t know it from personal experience. Still, although we believe our conversations lead to enough clarity to count as objective, is this enough to be the basis of science? For example, should we take hearing as a scientifically established feature of consciousness just because it seems that way to us and we believe it seems that way to others? To count as science, it is not enough for a feature to be felt, i.e. perceived; it has to be conceptualized and placed in a deductive model that can make predictions. Further, the support of the other sciences mentioned above is critical, because all evidence has to support a theory for it to become established. I propose, then, that the first step of outlining a theory of the mind is to identify the parts of the mind which we universally accept as components, along with the rules we ascribe to them. Not everyone holds a common-sense view of the mind that is consistent with science, and even those views that are may not be entirely up to date. But most of us believe science creates more reliable knowledge and so subscribe to at least the broadest scientific theories that relate to the mind, and I will assume a common-sense perspective that is consistent with the prevailing paradigms of science. My contention is that such a view is a good first cut at an objective and scientific description of how the mind works, though it has the shortcoming of not being very deep. To show why it is not very deep, I have put in bold below terms that we take for granted but are indivisible in the common-sense view. Here, then, is the common-sense theory of consciousness:

Consciousness is composed of awareness, feelings, thoughts, and attention. Awareness gives us an ongoing sense of everything around us. Feelings include senses and emotions, which I have previously discussed in some detail. We commonly distinguish a small set of primary senses and emotions, but we also have words for quite a variety of subtle variations. Thoughts most broadly include any associations that pass through our awareness, where an association is a kind of connection between things that only briefly lingers in our awareness. Attention is a narrow slice of awareness with a special role in consciousness. We can only develop conscious associations and thoughts about feelings and thoughts that are under our attention. We usually pay close attention to the focal point of vision, but can turn our attention to any sense (including peripheral vision), feeling, or thought if we believe it is worth paying attention to. We develop desires for certain things, which causes us to prioritize getting them. We also develop beliefs about what information can be acted on without dedicating further thought. We have a sense of intuition about many things, which is information we believe we can act on despite not having thought it through consciously. Broadly, we can take everything intuitive as evidence of a nonconscious mind. We also have reason, which is the ability to create associations we do process through conscious attention. We feel that we reason and reach decisions of our own free will, which is the feeling that we are in control of our mental experience, which we call the self. We know we have to learn language, but once we have learned it, we also know that words become second nature to us and just appear in our minds as needed. But words, and all things we know of, come to us through recollection, a talent that gives us ready access to our extensive memory when we present triggering information to it. Recognition, in particular, is the recollection of physical things based on their features. We don’t know why we have awareness, feelings, thoughts, attention, desires, beliefs, intuition, reason, free will, self, or recollection; we just accept them for what they are and act accordingly.

To understand our minds more deeply we need to know the meaning, purpose, and mechanism of these features of consciousness, so I will be looking more closely at them for the balance of the book. I am explicitly going to develop conceptual explanations rather than simply list my feelings about how my mind seems to work, because concepts are sharable while feelings and subconcepts are not (except to the degree that we can read other people’s feelings and thoughts intuitively). If stated clearly and if compatible with scientific theories, such explanations qualify as both objective and scientific, not because they are right but because they are the best we can do right now. The conceptual explanations I am going to develop concern the mechanisms that drive feelings, subconcepts, and concepts. Although some of these mechanisms are themselves conceptual, many of them, and all their underlying mechanisms, are based on feelings or subconcepts, so mostly I will be developing conceptual explanations of nonconceptual things. This sounds like it could be awkward, but it is not: physical things are not conceptual either, yet we have no trouble creating conceptual explanations of them. The awkward part is just that explanations categorize and simplify, but no categorization is perfect, and every simplification loses detail. But the purpose of explanation, and of information management in general, is to be as useful as possible, so we should not hesitate to start subdividing the mental landscape into kinds of features just because these subdivisions are biased by the very information from which they are derived. I can’t guarantee that every distinction I make can stand up to every possible objective perspective, only that the ones I make stand up very well to the perspectives I am aware of, which draw on a fairly broad study of the field. To the limits of practicality, I have reconsidered all the categorizations of the mental landscape I have encountered in order to develop the set that I feel is most explanatory while remaining consistent with the theories I consider best supported. Since I am not really going very deep here, I am hoping that my judgments are overwhelmingly supported by existing science and common sense. Even so, I don’t think many before me have undertaken a disciplined explanation of mental faculties using a robust ontology.

Toward a Scientific Theory of Consciousness

If we have to be able to detect something with instruments for it to objectively qualify as a physical entity, we need to be able to see the implications of function for something to objectively qualify as a functional entity. In mathematics, the scope of implications is carefully laid out with explicit rules, and we can just start applying them to see what happens. In exactly the same way, function is entirely abstract, and we need only think about it to see what results. The problem is that living systems don’t have explicit rules. To the extent they are able to function as information processors (IPs), they do so with rules that are refined inductively, not deductively. Yes, as humans we can start with induction to set up deductive systems like mathematics, which then operate entirely deductively. But even those systems are built on a number of implicit assumptions about why it is worthwhile to develop them the way they are, assumptions which relate to possible applications by inductively mapping them to specific situations. So we can’t start with the rules of thought and go from there, because there are no explicit rules of thought. We have to consider what physical mechanisms could result in IPs that operate on implicit rules of thought, and from there see if we can find a way to describe those implicit mechanisms explicitly. We know that living things evolved; we know quite a bit about how nerves work, even if not to the point of understanding consciousness; and we know much about human psychology, both from science and from personal familiarity. Together, these things put some hard constraints on the kinds of function that are possible and likely in living things and minds. So we aren’t going to propose models for the mind based solely on abstract principles of function, as we do with mathematics, because we have to keep our models aligned with what we know has physically happened. But we do know that our only direct source of what happens in minds comes from having one ourselves, and our only secondary source comes from what we glean from others. The other sciences provide a necessary supporting framework, but they don’t hint at minds as we know them, only at information processing capacity, much like what we imagine nonsentient robots might have.

So our hypotheses for what kinds of things minds can do, worded as a conceptual framework, must begin with what we think our own minds are doing. Any one thought we might have will be subjective in many ways, but objectivity grows as a thought is considered from more perspectives, or by more people, across a broader range of circumstances, to describe a more general set of happenings than the original thought. Once generalized and abstracted to a conceptual level, a deductive model can be manipulated apart from the percepts and subconcepts on which it is founded. In other words, it can be formulated as a free-standing hypothesis from which one can draw conclusions without knowing how or how well it corresponds to reality. It does matter, of course, that it correspond to reality, but demonstrating that correspondence can be undertaken as a separate validation process that will then elevate it from hypothesis to accepted theory. In practice, the scientific method is iterative, with reality informing hypothesis and vice versa to create increasing refinements to what counts as scientific truth. What this means is that our first efforts to delineate the mind should not come from an ivory tower but from an already heavily iterated refinement of ideas on the subject that draws on the totality of our experience. My views on the subject have been crystallizing gradually over the course of my life, as they have for everyone, but in writing this book, they have definitely become less jumbled and more coherent. It is asking too much of this present explanation to be as comprehensive as future explanations will be, given further evidence and iterations, but it should at least be a credible attempt to tie together all the available information.
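
The iterative cycle of hypothesis and validation can be illustrated with a deliberately tiny numerical sketch. Everything in it is invented for the illustration: the “evidence” is synthetic data, the “hypothesis” is just a number, and the update rule and threshold are arbitrary. The only point is the loop structure, in which reality informs the hypothesis and vice versa until the two agree well enough.

```python
import random

random.seed(0)
# "Reality": noisy evidence generated around a hidden truth of 7.0.
evidence = [random.gauss(7.0, 1.0) for _ in range(200)]

hypothesis = 0.0  # a poor ivory-tower starting guess
for iteration in range(1000):
    # Validation step: compare the hypothesis's prediction to the evidence.
    error = sum(x - hypothesis for x in evidence) / len(evidence)
    if abs(error) < 1e-6:
        break                  # consistent with the evidence: accept, provisionally
    hypothesis += 0.5 * error  # refinement step: reality informs the hypothesis

print(iteration, round(hypothesis, 3))  # converges near 7, the hidden truth
```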

I believe I have by now sufficiently justified the approach I will use, which is essentially the approach I have used up to this point as well. That approach is, in a nutshell, to draw on the experience I have gathered from my own mind, and on my study of what others have said about the mind or about the systems that support it, to put forth a hypothesis for how the mind works that is consistent with both the common-sense view of the mind and what science has to offer. This hypothesis doesn’t originate with either common sense or science, but rather draws iteratively on both, across my own life and back through the history of philosophy and science. Although I hope this approach simply sounds sensible, it sharply breaks with both philosophic and scientific tradition, which have been based principally on speculation. I would like to offer the words of Stephen S. Colvin from 1902 in defense of this approach, from his essay, The Common-Sense View of Reality3:

With whatever tenacity the common-sense view may have held its place in ordinary thinking, the history of philosophy shows that from the very beginning speculation broke away from the naive conception of reality in an attempt to harmonize the contradictions between logical thinking and perception. Even before systematic philosophy had developed in Greece, the Eastern sages had declared that the whole world of sense was illusion, that phenomena were but the veil of Maya, that life itself was a dream, and its goal was Nirvana. … Both Heraclitus and Parmenides speak of the illusion of the senses; while Zeno attempted with his refined logic to refute all assertion of the multiplicity and changeability of being. … The Sophists aimed at the destruction of all knowledge, and tried to reduce everything to individual opinion. Protagoras declared that man was the measure of all things, and denied universal validity. … The edifice which the Sophists destroyed Socrates, Plato, and Aristotle, tried to rebuild, but not with perfect success. Socrates does not attempt to gain insight into nature… Plato starts with an acknowledgment that perception can yield no knowledge, and turns his back on the world of sense to view the pure ideas. Aristotle returns in part to phenomena, yet he denies a complete knowledge of nature as such, and, since he considers matter as introducing something contingent and accidental, he finds in intuitive reason, not in demonstration, the most perfect revelation of truth.

Post-Aristotelian philosophy sought knowledge mainly for practical purposes, but was far from successful in this search. … Descartes begins with scepticism, only to end in dogmatism. … Locke’s polemic against the Cartesian epistemology, as far as the doctrine of innate ideas is involved, resulted in leaving the knowledge of the external world in an extremely dubious position; while Berkeley, following after, attempts to demolish the conception of corporeal substance; and Hume, developing Locke’s doctrine of impressions and ideas, removes all basis from externality and sweeps away without compunction both the material universe and the res cogitans, leaving nothing in their places but a bundle of perceptions. [Immanuel Kant tries] to restore metaphysics to her once proud position as a science, the Critique of Pure Reason is given to the world, and a new era of philosophy is inaugurated. … [but] few there are who today turn their attention to metaphysics, not because the questions raised are not still of burning interest, but because there is a general despair of reaching any result. Will this condition ever be changed? Possibly, but not until metaphysics has shaken off the incubus of a perverted epistemology, the pursuit of which leaves thought in a hopeless tangle; not until the common-sense view of the world in the form of a critical realism is made the starting point of a sincere investigation of reality.

Colvin’s conclusion that the common-sense view of the world must be the starting point for understanding the world as we know it has still not been embraced by the scientific community, mostly because the empirical approach to science has been successful enough to eclipse and discredit it. The idea that we can think logically about our own thought processes is starting to gain traction, but I have argued at length here that another stumbling block has been the limited ontology of physicalism. Not only should we allow ourselves to think about our own thought processes, but we also have to recognize that function and information are a completely different sort of thing than physical entities. The rise of the information sciences has made this pretty clear by now, but it was already apparent, because scientific theories themselves (and all explanations) are and always have been entirely functional things. It has always been somewhat self-defeating to say that everything that exists is physical, because that denies any kind of existence to the very explanation that says so.

The whole thrust of the above discussion is to justify my approach. I am going to break the mind down into pieces, largely the same pieces that comprise our common-sense view, and I am going to hypothesize how they work by drawing on common sense and science. There is some guesswork in this approach, and I regret that, but it is unavoidable. The chief advantage of doing things this way is that it lets me incorporate support from every source available. The chief disadvantage is that instead of strictly building my case from the ground up, as the physical sciences try to do, I also have to work backward from what we know and try to reverse-engineer explanations for it. Reverse engineering is highly susceptible to bias, but being aware of that and seriously taking all information into account will help mitigate its effects. As I say, any one thought can be subjective and biased, but a careful consideration of many perspectives, with the objective of reaching independently verifiable results, minimizes that slant.

We can’t test an individual thought, but we can test a conceptual model of how the mind works by seeing if it is consistent with the available evidence from science, from common sense, and from thought experiments. Sure, we can interpret our own thoughts any way we like, and this will lead to many incorrect hypotheses, but we can also conceive hypotheses that stand up to validation from many perspectives, and I posit that this is true for hypotheses about functional things in the same way it is true of hypotheses about physical things. Our idiosyncratic ways of seeing things bias us, but bias itself is not bad; only the prejudicial use of bias is bad. All information creates bias, but more carefully constructed generalizations minimize it and eventually pass as objective. While we could probably all recognize a prototypical apple as an apple, to the point where we would confidently say that it is objectively an apple, increasingly less prototypical apples become harder and harder to identify with the word apple and increasingly require further qualification. Wax apples, rotten apples, sliced apples, apple pears, and the Big Apple are examples, but they don’t undermine the utility of the apple concept; they just show that the appropriate scope of every concept decreases as its match to circumstances becomes less exact. I’m not going to assume that common-sense features or explanations of consciousness are correct, but I am going to seriously consider what we know about what we know.
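
The apple example can be made concrete with a small sketch of graded concept matching. The features, weights, and thresholds below are all invented for the illustration; real concepts are vastly richer. The point is only that the label applies confidently near the prototype and needs more and more qualification as the match degrades.

```python
# A toy model of graded concept membership. All features and cutoffs
# are invented for illustration.

PROTOTYPE_APPLE = {"round": 1.0, "edible": 1.0, "grows_on_tree": 1.0, "fist_sized": 1.0}

def match(candidate: dict, prototype: dict = PROTOTYPE_APPLE) -> float:
    """Weighted fraction of prototype features the candidate exhibits."""
    total = sum(prototype.values())
    shared = sum(w for feat, w in prototype.items() if candidate.get(feat, 0.0) >= 0.5)
    return shared / total

examples = {
    "fresh apple": {"round": 1.0, "edible": 1.0, "grows_on_tree": 1.0, "fist_sized": 1.0},
    "wax apple":   {"round": 1.0, "edible": 0.0, "grows_on_tree": 0.0, "fist_sized": 1.0},
    "Big Apple":   {"round": 0.0, "edible": 0.0, "grows_on_tree": 0.0, "fist_sized": 0.0},
}

for name, thing in examples.items():
    score = match(thing)
    if score > 0.9:
        label = "objectively an apple"
    elif score > 0.3:
        label = "an apple, with qualification"
    else:
        label = "not literally an apple"
    print(f"{name}: match={score:.2f} -> {label}")
```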

All of the above is just a long-winded way of saying that I am going to start in the middle. I already had a theory of the mind based on my own experience before I started this book, but that was only the starting point. The point of writing it down was to develop the network of intuitions behind my thinking into well-supported conceptual models. I have found weaknesses in my thinking and in the thinking of others, and I have found ways to fix those weaknesses. I realized that physicalism was founded on the untenable wishful thinking that function is ultimately a physical process, which led to my explorations into what function is. That science has been guilty of overreach doesn’t invalidate all of it, but we have to fix the foundation to build stronger structures. Ironically, though, conceptual models themselves have inherent instabilities because they are built on the shifting sands of qualia and subconcepts. But, at bottom, information really exists, where exists means it predicts better than chance alone, so the challenge for higher-level informational models is not to be perfect but to be informative: to do better than chance. We know from experience that conceptual models can, in many cases, come very close to the perfect predictive ability of deductive models. Their instabilities may be unavoidable, but they are definitely manageable. So by starting in the middle and taking a balanced view that considers as many perspectives as possible, I hope to emerge with a higher level of objectivity about the mind than we have previously achieved.
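
The standard that information “exists” when it predicts better than chance can be shown numerically. In the sketch below, the toy world (morning clouds raising the chance of afternoon rain), the crude model, and the baseline are all invented; only the comparison matters.

```python
import random

random.seed(1)
# A made-up world: mornings are cloudy half the time, and clouds raise
# the chance of afternoon rain from 20% to 70%.
days = [(cloudy, random.random() < (0.7 if cloudy else 0.2))
        for cloudy in (random.random() < 0.5 for _ in range(10000))]

def accuracy(predict) -> float:
    return sum(predict(cloudy) == rain for cloudy, rain in days) / len(days)

chance = accuracy(lambda cloudy: random.random() < 0.5)  # coin-flip baseline, ~0.50
model  = accuracy(lambda cloudy: cloudy)                 # "clouds mean rain", ~0.75

print(f"chance: {chance:.2f}, model: {model:.2f}")
# The crude model is far from perfect, but it beats chance, so the
# morning-cloud observation carries real, if imperfect, information.
```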

Objectivity is an evolving standard that ultimately depends on certain subjective inputs, because our preferences don’t really have an objective basis. Our underlying preference that life matters, with human life mattering the most, is embodied one way or another by all our actions. We can generalize this preference further to say that functionality matters, with greater respect given to higher levels of functionality, and in this way avoid anthropocentrism. Logically, this preference derives from life’s preference to continue living. I suggest we take this as a foundational “objectified” preference, if for no other reason than that a preference for no life or function would be stagnant and is thus already a known quantity. Choosing life over death doesn’t resolve all issues of subjectivity, however. Life competes with other life, which creates subjective questions about who should live and who should die. Objectively, the fittest should survive. Every living thing must be fit to have survived since the dawn of life, but it is not individual fitness that is selected; rather, it is the relative fitness of genes and genomes in different combinations and proportions. Yet this is a subjective standard, because it literally depends on each subject and what happens to them, while objective standards depend on making categorical generalizations that summarize all those individual events. Summarizing, explaining, and understanding, which underlie objectivity, are therefore built on a foundation of approximation; they can never precisely describe everything that happened to get to that point. But the case I have been making is that conceptual models can get closer and closer by taking more perspectives into account, and this is why we have to put all our concerns on the table from the top down rather than just specializing from the bottom up.
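
The claim that selection acts on relative rather than individual fitness can be sketched with a minimal replicator-style update. The fitness numbers below are invented, and both variants are perfectly viable in absolute terms; what drives the outcome is only the ratio of each variant’s fitness to the population average.

```python
# A minimal sketch of selection on relative fitness. Both gene variants
# are "fit" in absolute terms; selection only sees their proportions.

FITNESS = {"A": 1.05, "B": 1.00}  # invented fitness values
freq_a = 0.5                      # variant A starts at half the gene pool

for generation in range(300):
    mean_fitness = freq_a * FITNESS["A"] + (1 - freq_a) * FITNESS["B"]
    freq_a = freq_a * FITNESS["A"] / mean_fitness  # standard replicator update

print(round(freq_a, 4))  # A approaches fixation even though B remains viable
```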

  1. I mentioned in Chapter 1.2 that proteins sometimes moonlight, performing other functions.
  2. Jerzy P. Szaflarski et al., “Left-Handedness and Language Lateralization in Children,” Brain Research, Vol. 1433 (2012), pp. 85–97.
  3. Stephen S. Colvin, “The Common-Sense View of Reality,” The Philosophical Review, Vol. 11, No. 2 (Mar. 1902), pp. 139–151. (Colvin, 1869–1923, was professor of educational psychology at Brown University.)
