2.4 Civilization: 10,000 years ago to present

The Origins of Culture

Human civilization really began at about the same time that the human species started developing noticeably human traits, several million years ago. For our purposes here, I am going to define civilization in terms of the presence and reuse of informational artifacts. An artifact is a pattern not found in nature, or, more accurately, a pattern created by cognitive processes using real-time information. In other words, we can exclude instinctive behavior that “seems” arbitrarily clever but is not a customized solution to a specific problem. Language is certainly the most pervasive and greatest of the early artifacts of civilization. For language to work, patterns must be created on a custom basis to carry semantic content. Humans have probably been creating real-time semantic content using language for millions of years, as opposed, for example, to issuing genetically-driven warning calls. We have no real evidence of language or proto-language from back then; before written languages, the first artifacts from which a case for language can be made are about 100,000 years old, but I think language must have evolved rather steadily over our whole history.12 Homo erectus used a variety of stone tools, and probably also non-stone tools3, and was able to control fire about a million years ago. This suggests early humans were learning new ways of getting food that had to be discovered and taught, and were thus able to expand into new ranges. Huts have been found in France and Japan that date back 400,000 to 500,000 years.

While we can’t say just how capable very early humans were, by about 40,000–50,000 years ago humans had achieved behavioral modernity. While culture may take thousands of years to develop, it seems likely that some genetic breakthroughs facilitated later advancements. That said, I suspect that most people born in the past 100,000 years could probably pass for normal if born today. After all, all races of humans alive today seem cognitively equivalent, despite having been separated from each other for 10,000 to 40,000 years. The range of human genes produces people with a range of intelligence which has gradually increased, slowly pushing up both the normal and the genius ranges. So rather than a night-and-day difference between early and modern man, we see a shift in the bell curve toward greater intelligence. But whatever mix of genes and culture contributed to it, we usually demarcate the dawn of civilization at about the 10,000-year point because that is about when the first large civilizations seem to have arisen.

According to the evidence we have, the large-scale domestication of plants and animals did not begin until about 12,000 years ago in Mesopotamia, although the Ohalo people of Israel were cultivating plants 23,000 years ago, which suggests that small-scale domestication may go back much further. Beyond Mesopotamia, ancient India, China, Mesoamerica, and Peru formed independent cradles of civilization starting around 7,000 to 4,000 years ago. The rise of these civilizations collectively comprises the Neolithic or Agricultural Revolution, as they were founded principally on the stability of agricultural living and the cultural trappings that accompany it.

The Cultural Ratchet

A great deal has been made in recent years about the significance of memes to culture. While the word is most widely used now to refer to the catchiest of ideas, the idea that informational artifacts can be broken down into functionally atomic units called memes is useful for discussing the subject. After all, if culture has a ratchet, then there must be some teeth (memes) that click in and don’t want to slide back. The only protection culture has from sliding back is memory; we have to pass our culture on or it will be lost. Every idea continuously mutates and is held in different ways by every person, but culture critically depends on locking in gains through traditions, which standardize memes. If I had to list key memes in the development of civilization, summarizing at a high level, I would start with the idea of specialized tasks, especially those using the hands, which are central to nearly every task we perform. The sharing of tasks brought about the development of language. These two metamemes drove cultural progress for the first few million years of our evolution in countless ways that we now take for granted, and they completely overshadow everything we have done since. Long before the establishment of civilizations, people made many hand tools from stone, wood, and bone. People also learned to hunt, control fire, and build shelters, and they developed art and music. All of these things could be taught and passed down through mimicry; the power of language to describe events probably emerged only very gradually. An era of cognition in which people were learning to think about things likely preceded our era of metacognition in which reflections pervade most of our waking thoughts. As our innate ability to think abstractly gradually improved, mostly over the past million years, language kept up and let us share new thoughts.
Genes and culture coevolved, with the bell curve starting to overlap our present-day capacities between 200,000 and 50,000 years ago.

It becomes easier to call out the specific memes that were the great inventions of early civilization. Agriculture, and the more sedentary, communal living that it brought, is usually cited first. Other key physical inventions of early civilizations notably include textiles, water management, boats, levers, wheels, and metalworking, but those civilizations also depended on the purely functional inventions of commerce, government, writing, and timekeeping. Some of the most prominent physical inventions of modern civilization include gunpowder, telescopes, powered industrial machinery (first with steam, then with gas), electricity, steel, medicine, planes, and plastic. Increasingly relevant to technological civilization are physical inventions that manage information, like the printing press, phone, television, computer, internet, and smartphone. And perhaps most relevant of all, but often overlooked, are the concepts we have invented along the way, which from a high level, in roughly chronological order, include math, philosophy, literature, and science. Within these academic disciplines exist countless specialized refinements, creating an almost uncountably large pool of both detailed and generalized memes.

All of these products of civilization are memes struggling to stay relevant so they can survive another day. Each roughly has a time of origination and then spreads until its benefit plateaus. But memes also often have multiple points of origin and evolve dramatically over time, making it impossible to describe them accurately using rigid buckets. (Internet memes and fads are memes that spread only because they are novelties rather than because they provide any significant function. Ironically, it is often their abundant lack of functionality that drives fads to new heights; this perversion of the cultural ratchet is funny.) So while we can liken memes to genes as units that capture function, genes are physically constrained to a narrow range of change (though any one mutation could make a large functional difference), while memes can be created and updated quickly, potentially at the speed of thought. The cognitive ratchet improved our minds very quickly compared to the scope of evolutionary time but was still limited to the maximum rate of evolution physically possible. The cultural ratchet, however, has no speed limit, and, thanks to technology, we have been able to keep increasing the rate of change. Like the functional and cognitive ratchets before it, the cultural ratchet has no destination; it only has a method. That method, like that of the other ratchets, is to always increase functionality relative to the present moment as judged by the relevant information processors (IPs). The functional ratchet of genes always fulfills the objective of maximizing survival potential, but the cognitive ratchet maximizes the fulfillment of conscious desires. Our conscious desires stay pretty well aligned with the goal of survival because they are specified by genes that themselves need to survive, but, just as no engine is perfectly efficient, no intermediate level can exactly meet the needs of a lower level. Still, our desires do inspire us pretty effectively to stay alive and reproduce.
Though we tend to think of our desires as naturally meshing well with the needs of survival, the feedback loops that keep them aligned can also run amok, as seen with Fisherian runaway, for which the peacock’s excessive plumage is the paradigmatic example. This effect doesn’t override the need to survive, but it can amplify one selection pressure at the expense of others. Could some human traits, such as warlike aggression, have become increasingly exaggerated to the point where they are maladaptive? It is possible, but I will argue below that it is much more likely that human traits have been evolving to become more adaptive (by which I mean oriented toward survival). But if we act to fulfill our desires, and our desires are tuned to promote our survival, do we deserve credit for the creation of civilization, or is it just an expected consequence of genetic evolution? The surprising answer is yes to both, even though the two sound like opposing conclusions. Even though civilization in some form or other was probably inevitable given the evolutionary path humans have been on, we are entirely justified in taking credit for our role because of the way free will and responsibility work, which I will cover in Part 4.

From the beginning, I have been making the case that we are exclusively information processors, meaning that everything we do has to have value, i.e. function. The practical applications of cooperation and engineering are limitless and reach their most functional expression in the sciences. Science is a cooperative project of global scope that seeks to find increasingly reliable explanations for natural phenomena. It is purely a project of information processing that starts with natural phenomena and moves on to perceptions of them which, where possible, are made using mechanical sensors to maximize precision and accuracy and minimize bias. From these natural and mechanical perceptions and impressions, scientists propose conceptual models. When we find conceptual models that seem to work, we not only gain explanatory control of the world, but we also get the feeling that we have discovered something noumenal about nature (although our physical models are now so counterintuitive that reality doesn’t seem as real anymore). But in any case, our explanatory models of the physical world have enabled us to develop an ever-more technological society which has given us ever-greater control over our own lives. They have fueled the creation of millions of artifacts out of matter and information.

I have said little about art, but I am now going to make the case that it, too, is highly functional. Art is fundamental to our psychological well-being because beauty connects knowledge. It pulls the pieces of our mind together to make us feel whole. More specifically, the role of beauty is to reinforce the value of generalities over particulars. The physical world is full of particulars, so our mind is continuously awash in particulars. And we already have great familiarity with most of the particulars we recognize. When we are very young, everything is new and different, and we are forming new general categories all the time, but as we get older, everything starts to seem the same or like a minor variation of something we have seen before. We can’t stop knowing something we already know, but we need to stay excited by and engaged with the world. This is where art comes in. A physical particular, by which I mean its noumenon, is mundane (literally: of the world), but generalities are sublime (literally: uplifted or exalted) or transcendent (literally: beyond the scope of), both of which suggest an otherness that is superior to the mundane. Thus art is sublime and transcendent because it is abstract rather than literal. While any physical particular is only what it is, and so can be reduced to hard, cold fact, our imagination is unlimited. We can think about things from any number of perspectives, and we do like to engage our imagination, but not without purpose. To better satisfy the broad goal of gratifying our conscious desires, we have to understand what we want. We can’t depend on raw emotion alone to lead the way. So we project, we dream, we put ourselves in imaginary situations and let ourselves feel what that would be like. The dreams we like the best form our idealized view of the world and are our primary experience of art. We all create art in our minds just by thinking about what we want.
For any object or concept, we will develop notions about aesthetic ideals and ugly monstrosities. Although the world is ultimately mundane and becomes increasingly known to us, the ways we can think about it are infinitely variable, some of which will be more pleasing and some more displeasing.

When we produce art physically, we give physical form to some of our idealized notions. The creation is a one-way path; even the artist may not know or be able to reconstruct the associations behind each artistic choice, but if the artist is good, then many of their considerations will resonate with our own ideals. When we appreciate art, we are not looking at physical particulars; we are thinking about ideals or generalities, which are both the groupings in which we classify particulars and the way knowledge is interconnected. Generalities are all about patterns, which are the substance of information at both a high and a low level. Patterns of lines and colors charm or repel us based on very general associations they strike in us, which are not random but link to our ideals. Art can be very subtle that way, or it can directly reflect our strongest desires, for example for sex, fun, and security. By helping us better visualize or experience our ideals, art helps us stay interconnected and balanced and prioritize what matters to us. Art lets us know that the representational world in our heads is real in its own right, that our existence depends as much (or more) on generalities as it does on particulars. So its benefit is not direct; by glorifying patterns, ideas, and abstractions for their own sake, art validates all the mental energy we expend making sense of the world. Its appeal is both intellectual and visceral. Art stimulates gratitude, interest, and enthusiasm, emotions which keep us engaged in life. Art seems inessential because its value is indirect, but it keeps our cognitive machinery running smoothly.

In summary, while we could view civilization as a natural and perhaps expected consequence of evolution, we have expended great conscious effort creating the arts and sciences and all the works of man. Before I can consider further just how much credit we should give ourselves for doing all that, I need to dig a lot deeper into our innate human abilities. In Part 3, I will look in more detail at the evolution of our inborn talents, from rational to intuitive to emotional. Then, in Part 4, I will look at the manmade world to unravel what we are, what we have done, and what we should be doing.

Part 3: Strategies of Control

Now I’m going to start to explain the strategies the mind uses to control the body. This sounds at first like very speculative territory. After all, we can’t see what the mind is doing or inspect it using any form of physical monitoring. We know that it arises in the brain, but the only direct evidence comes from first-hand reports. We could guess at what the strategies might be, but if first-hand reports are the only direct evidence, what is to prevent any guess from being entirely subjective? The short answer is that it is okay to guess or hypothesize strategies based on subjective evidence provided we devise objective ways of testing them. After all, all hypotheses start out as subjective guesses. Still, there is an infinite variety of guesses we might make, so we need a strategy to suggest likely hypotheses. In this part of the book, I will devote some time to exploring how we can be objective about the mind, given that the mind is fundamentally subjective. But for now, let’s just assume that we can be objective. The first question we need to consider in our hunt for strategies of control is the question of what control itself is.

Living things persist indefinitely by exercising control. Rocks, on the other hand, are subject to the vagaries of the elements and can’t act so as to preserve themselves. The persistence of life is not entirely physical; it doesn’t simply depend on the same atoms maintaining the same bonds to other atoms indefinitely. The ship of Theseus is a thought experiment that asks whether a ship in which every piece has been replaced is still the same ship. Physically, it is not the same, but its functional integrity has been maintained. A functional entity exists based on our expectations of it. It is fair to label any collection of matter, whether a person or a ship, as a persistent entity if it undergoes only small, incremental changes. However, we can interpret those changes either physically or functionally. We can then track the degree of change to report on net persistence. Physical changes always decrease net persistence. Once every piece of the ship has been replaced, it is physically a completely different ship. Functional changes can be functionally equivalent, resulting in no decrease in net persistence. So once every piece of the ship has been replaced with functionally equivalent parts, the ship is functionally the same. The replacement parts may also be functional upgrades or downgrades, which can result in a net shift in the functionality of the ship. For example, if all the wooden parts are gradually replaced with relatively indestructible synthetic parts, the ship will perform identically but will no longer require any further maintenance. Metabolism replaces about 98% of the atoms in the human body every year, and nearly everything every five years.12 Nearly all of this maintenance has no functional effect, so even though we are physically almost completely different, nearly all our functionality remains the same. Of course, over their lifespans organisms mature and then decline, with corresponding functional changes.
Minds, and especially human minds, learn continuously, effectively causing nonstop functional upgrades. We also forget a lot, resulting in continuous functional downgrades. But these ongoing changes tend to be minor relative to overall personality characteristics, so we tend to think of people as being pretty stable. We can conclude that living things have only very temporary physical persistence because of continuous metabolic maintenance, but very good functional persistence over their lifetimes. Living things are thus primarily functional entities and only secondarily physical entities. They need a physical form to achieve functions, but as information processors, their focus is on preserving their functions and not their physical form.

Living things achieve this functional persistence because they are regulated by a control mechanism. Simple mechanisms like levers and pulleys, and even mathematical formulas, use feed-forward control, in which a control signal, once sent, cannot be further adjusted. Locally, most physical forces operate by feed-forward control. Their effects cascade like dominoes with implications that cannot be stopped once set in motion. But some forces double back. A billiard ball struck on an infinite table produces only feed-forward effects, but once balls can carom off bumpers, they can come back and knock balls that have been knocked before. These knock-on effects are feedback, and feedback makes regulation possible. Regulation, also called true control, uses feedback to keep a system operating within certain patterns of behavior without spiraling out of control. The way such a regulating system does this is by monitoring a signal and applying negative feedback to reduce that signal and positive feedback to amplify it. Living things monitor many signals simultaneously. Perhaps most notably, animals use feedback to eat when they need more energy and to sense their environment as they move in order to direct further movement. The mind is an intermediary in both cases, using appetite in the first case and body senses in the second.
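The contrast between the two kinds of control can be sketched in a few lines of code. This is my own illustration, not from the text; the function names and constants are invented for the example. A feed-forward mechanism computes its output once from its input, like a lever, while a regulator repeatedly monitors a signal and applies negative feedback to damp its deviation from a target:

```python
def feed_forward(signal: float) -> float:
    """A lever-like cascade: the output is fixed once the input is set,
    with no opportunity for later correction."""
    return 2.0 * signal  # an arbitrary fixed transformation


def regulate(level: float, target: float, gain: float = 0.5, steps: int = 50) -> float:
    """Negative feedback: repeatedly measure the current level and push it
    back toward the target, damping deviations instead of letting them
    cascade unchecked."""
    for _ in range(steps):
        error = target - level   # monitor the signal
        level += gain * error    # apply a corrective nudge proportional to the error
    return level


print(feed_forward(3.0))                          # cascades to 6.0, done
print(round(regulate(level=0.0, target=37.0), 3))  # settles near 37.0
```

However the starting level is perturbed, the regulator converges back to its target, which is the "staying within certain patterns of behavior" that the paragraph above describes; the feed-forward function has no such self-correcting property.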

We tend to think of control as changing the future, but it doesn’t actually do that. All natural causes are deterministic, and this includes the control mechanisms in minds. Much like billiard balls, minds just do what they are going to do based on their inputs and current configuration. Like computer algorithms, they would produce the same output given the same inputs, but the same inputs never happen twice. No time and place is identical to anything that came before, and this is particularly true with animals because they carry unique memories reflecting their entire experience. We think we encounter things and situations we have seen before, but they are really only similar to what we have seen before, and we make predictions based on that similarity. So although minds are technically deterministic, they don’t use knee-jerk, preprogrammed responses but instead develop strategies from higher-order interpretations based on general similarities. These higher-order interpretations are information (functional entities) indirectly abstracted from the underlying situation. This information from past patterns is used to regulate future patterns. Physically, the patterns, and hence the information, don’t exist. They are functional constructs that characterize similarity using a physical mechanism. Both the creation of the information and its application back to the physical world are indirect, so no amount of physical understanding of the mechanism will ever reveal how the system will behave. This doesn’t mean the system isn’t ultimately deterministic. At the lowest level, the physical parts of the information processor (IP) only use feed-forward logic, just like everything else in the universe. But this predictive capacity that derives from the uniformity of nature allows patterns to leverage other patterns to “self-organize” schemes that would otherwise not naturally arise.
There is nothing physically “feeding back” — the universe doesn’t repeat itself — but a new capacity called function based on information arises. The IP has separated the components of control (information and information processing) from what is being controlled (ultimately physical entities) using indirection. This is why I say that physical laws can’t explain how the system will behave; we need a theory of function for explanations, i.e. to connect causes to effects.

Although physical laws can’t explain functional systems, we need functional systems to explain anything, because explanation is a predictive product of information. Physical laws attach functional causes and effects to physical phenomena, not to reveal the underlying noumena but to help us predict what will likely happen. Some physical laws, such as those governing fundamental forces or chemical bonds, describe what will happen when particles meet. Other physical laws, like those that describe gases, create fictitious functional properties like pressure to describe how sextillions of gas particles behave when they bounce off each other. Physically, there is no pressure; it is purely a functional convenience that describes the approximate behavior of gases. At the chemical level, genes encode proteins which may catalyze specific chemical reactions, all feed-forward phenomena. But knowing this sequence doesn’t tell you why it exists. At a functional level, we need a theory like natural selection to explain why genes and proteins even exist.
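The gas example can be made concrete with a small sketch of my own (not from the text; all the numbers are arbitrary, chosen only to show the idea). In kinetic theory, pressure is recovered as a statistical summary of many individual particle motions via P = N·m·⟨v²⟩/(3V); no single particle "has" pressure:

```python
import random

# "Pressure" computed as a functional summary over many particle motions,
# using the kinetic-theory relation P = N * m * <v^2> / (3 * V).
# The particle count, mass, volume, and speed distribution are illustrative.

random.seed(0)
N = 100_000                  # number of simulated particles
m = 4.65e-26                 # kg, roughly the mass of one nitrogen molecule
V = 1.0                      # container volume in cubic meters
speeds = [random.gauss(500.0, 100.0) for _ in range(N)]  # m/s, arbitrary spread

mean_sq_speed = sum(v * v for v in speeds) / N  # <v^2>, a property of the ensemble
pressure = N * m * mean_sq_speed / (3 * V)      # a property no single particle has

print(f"pressure = {pressure:.2e} Pa")
```

The value printed is tiny because the simulated particle count is tiny compared to a real gas; the point is only that the quantity exists at the level of the ensemble, exactly the kind of "fictitious functional property" described above.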

It is lucky for us that feedback control systems are not only possible in this universe, but that they can also self-organize and evolve under the right conditions. Living things are holistic feedback systems that refine themselves through a functional ratchet. Life didn’t have to choose to evolve; it just followed naturally from the way feedback loops developed into IPs that could refine their own development. We don’t yet have any idea how likely or unlikely it was for life to evolve on Earth. However probable or improbable relative to other places in the universe, all that matters to us is that conditions were right and it happened, needing nothing more than the same laws of physics that apply everywhere else in the universe. In other words, the output, life, resulted entirely and (with hindsight) predictably given the inputs. We can’t prove it was an inevitable outcome because quantum events can be uncertain, but we can say it was completely plausible based on causes and effects as we understand them from the laws of physics as we know them. The feed-forward and feedback control systems involved were deterministic, meaning that the outputs were within expectations given the inputs. Whether or not we can have complete certainty about physical outcomes is debatable, but we know we can’t have complete certainty about functional outcomes. The whole concept of functionality is based on similarity and approximate predictability, not certainty, so we should not have any illusions that it is completely knowable. The strategies of feedback loops can be very likely to work in future similar situations, and therein lies their power, but no future situation is exactly the same as the past situations which generated the information, so “past performance is no guarantee of future results”. 
But the uniformity of nature is so dependable that we often find that functional methods can achieve 51%, 90%, 99%, or even 99.9999% effectiveness, and any success rate better than chance is helpful.

Where physical systems work in a feed-forward way through a sequence of causes and effects that cascade, information feeds back through functional systems, where it is interpreted, leading to decisions whose effect is to make things happen that are similar to things that have happened before. We call this decision-making or choice, but one is not choosing between two actual sequences of events; one is choosing between two hypothetical sequences, both of which are only similar to any actual sequence without being the same as it. In other words, our choices are only about simulations, so choice itself is a simulation or functional construction of our imagination with no physical correlate. In making a choice, we select causes that produce desired effects in our simulations and trust that the effectiveness rates of our methods will work in our favor. Our IPs predict the effects before they happen; the cart pulls the horse, in apparent violation of the laws of physics. But it is entirely natural because control just capitalizes on the uniformity of nature to make higher-order patterns.

Living things manage information in DNA to maximize multi-generational survival, while minds manage information neurochemically to maximize single-generational survival. Minds depend heavily on genetic mechanisms, so they can be said to use both kinds of information processing. Whether genes persist or are discarded depends on their functional contribution to survival. Physically, a gene may make a protein for which some cause-and-effect roles can be proposed, but such purposes can only be right to the extent their ultimate effect is to benefit survival, because that is the only feedback that can shape them. While genetic success must correlate directly to survival one way or another, mental success does not have to. This is because the mind uses an additional layer of real-time information processing on top of genetic mechanisms. The genetic mechanisms establish strategies of control that are heuristic or open-form rather than algorithmic or closed-form. Closed-form means deductive; open-form also encompasses inductive or intuitive means. Of course, this is no surprise, as I have already extensively discussed how the nonconscious parts of the brain are inductive and intuitive, while the conscious part adds a deductive layer that manipulates concepts. But it is important to note here that the mind is fundamentally open-form, because this means the mind is not formally constrained to aid in our survival; it is only informally constrained to do so through heuristics. These genetic heuristics, which I call strategies of control, are the subject of this part of the book.

In principle, we could “sit just quietly and smell the flowers” all day in the shade of a cork tree like the bull in The Story of Ferdinand. People or bulls who have made permanent provisions for all their survival needs can afford to live such a life of Riley, and those that have may seek such idle entertainments. But our minds are genetically predisposed to ensure our own survival and to propagate, and this is where strategies of control help out. Understanding our genetic predispositions is the first step to understanding our minds. The second step is understanding how real-time knowledge empowers our minds. Real-time knowledge spans all the strategies and information we have learned from our own experience and that of our predecessors, both inductive and deductive. This second-order level of understanding of the mind is, most broadly, all knowledge, because all knowledge is power and thus powers the mind in its own way. By that measure, it is entirely fair to say we all already have a very broad and useful understanding of the mind. But the purpose of this book is to take a narrower view by generalizing from that knowledge to organizing principles that create higher-order powers of the mind. Such a generalized framework, if laid out as a conceptual model and supported by prevailing scientific theories, will also constitute a scientific theory of the mind. I will attempt to construct such a second-order theory of mind in the last part of the book.

3.1 Getting Objective About Control

In this chapter, I will discuss considerations that relate to how we can study strategies of control used by the mind scientifically, rather than just conversationally:

Approaching Control Physically
Approaching Control Functionally
Deriving Objective Knowledge
The Contribution of Nature and Nurture to the Mind
How to Study the Mind Objectively
Our Common-Sense Theory of Consciousness
Toward a Scientific Theory of Consciousness

Approaching Control Physically

I have argued that evolution pulls life forward like a ratchet toward ever higher levels of functionality. The ratchet doesn’t compel specific results, but it does take advantage of opportunities. A digestive tract is a specific functional approach for processing food found only in bilateral animals (which excludes animals like sponges and jellyfish). Jellyfish may be the most energy-efficient animals, but bilaterals can eat more, move faster, do more things, and consequently spread further. Beyond digestion, bilaterals developed a variety of nervous, circulatory, respiratory, and muscular systems across major lines like insects, mollusks, and vertebrates. These systems and their component organs and tissues are functionally specialized, not just because we choose to see them that way, but because they actually have specific functional roles. Evolution selects functions, which are abstract, generalized capabilities that don’t just do one thing at one instant but do similar kinds of things on an ongoing basis. Similarity means nothing to physicalism; it is entirely a property of functionalism. A digestive tract really does localize all food consumption at one end and all solid food elimination at the other. Muscles really do make certain kinds of locomotion or manipulation possible. Eyes use a certain range of light to make images to a certain resolution that can be identified as certain kinds of objects. These are all fundamentally approximate functions, yet they are delineated to the very specific ranges that are most useful to each kind of animal.

Evolution often selects distinct functions that we can point out. This is most often the case when the solution requires physical components, as with the body of an organism. For example, tracts, blood vessels, somatic nerves, bones, and muscles have to stretch from A to B to achieve their function, and this places strong constraints on possible solutions. For example, tracts, vessels, and nerves are unidirectional because this approach works so well. I could find no examples where these systems, or any part of them, serve two distinct, non-overlapping biological functions.1 One possible example is the pancreas. 99% of the pancreas consists of exocrine glands that produce digestive enzymes delivered by the pancreatic duct to the duodenum next to the stomach, while 1% consists of endocrine glands that release hormones directly into the blood. The endocrine system consists of glands from around the body that use the blood to send hormonal messages, and is considered distinct from the digestive system. But the hormones the pancreas creates (in the islets of Langerhans) are glucagon and insulin, which modulate whether the glucose produced by digestion should be used for energy or stored as glycogen. So these hormones control the digestive system and are thus part of it. But this example brings me to my main point for this part of the book: control functions.

Let’s divide the functions animal bodies perform into control functions and physical functions. Control functions are the high-order information processing done by brains, and physical functions are everything else. Physical functions also need to be controlled, but let’s count local control beneath the level of the brain as physical since my focus here is on the brain. The brain uses generalized neural mechanisms to solve control problems through some kind of connectivity approach. While it is true that each physical part of the brain tends to specialize in specific functions, which typically relates to how it is connected to the body and the rest of the brain, the neural architecture is also very plastic. When parts of the brain are damaged, other parts can often step in to take over. It is not my intention to explore neural wiring, but just to note that however it works, it is pretty generalized. Unlike physical functions, for which physical layouts provide clues, the function of the brain is not revealed by its physical structure. Instead, it is the other way around: the physical structure of the brain is designed to facilitate information processing.

If the physical layout does not reveal the purpose, we need to look at what the brain does to tell us how it works. We know it uses subconcepts to pull useful patterns from the data, but these patterns are not without structure; the more they fall into high-level pattern groups, the more bang you can get for the buck. What I mean is that function always looks for ways to specialize so it can do more with less, improving the efficient delivery of the function. The circulatory system does this by subdividing arteries down to capillaries and then collecting blood back through ever-larger veins. So it stands to reason that if there were any economies of scale that would be useful in the realm of cognitive function, they would have evolved. What’s more, concepts are ideal for describing such high-level functions. This doesn’t mean a conceptual description will perfectly match what happens in the brain, and realistically it probably can’t, because these systems of functionality in the brain can interact across many more dimensions and to a much greater depth than physical systems can. But if we recognize that we can only achieve approximate descriptions, we should be able to make real progress.

Although I am principally going to think about what the brain is doing to explain how it works, let’s just start with its physical structure to see what light it can shed on the problem. Control in the body is managed by the neuroendocrine system. This system controls the body through somatic nerves and hormones in the blood. Hormones engage in specific chemical reactions, from which we have determined their approximate functions and what conditions modulate them. Somatic nerves connect to every corner of the body, with sensory nerves collecting information and motor nerves controlling muscles, as I noted in Chapter 2.2, but knowing that doesn’t tell us anything about how they are controlled. In most animals, all non-hormonal control is produced by interneurons in the brain. Unfortunately, we understand very little about how interneurons achieve that control.

The brain is not just a blob of interneurons; it has a detailed internal anatomy that tells us some things about how it works. The brain has a variety of physically distinct kinds of tissues which divide it up into broad areas of functional specialization. The brain appears to have evolved the most essential control functions at the top of the spinal column, in a region called the brainstem. These lowest levels of the brain are collectively called the “reptilian brain” as an approximate nod to having evolved by the time reptiles and mammals split. The medulla oblongata, which comes first, regulates autonomic functions like breathing, heart rate, and blood pressure. Next is the pons, which helps with respiration and connections to other parts of the brain. The midbrain is mostly associated with vision, hearing, motor control, sleep, alertness, and temperature regulation. Behind the brainstem is the cerebellum or hindbrain, which is critical to motor control, balance, and emotion. The rest of the brain is called the forebrain. The forebrain has some smaller, central structures, collectively called the “paleomammalian brain” because all mammals have them. These areas notably include the thalamus, which is mostly a relay station between the lower and upper brain, the hypothalamus, which links the brain to the endocrine system through the pituitary gland, and the limbic system, which is involved in emotion, behavior, motivation, and memory.

The largest part of the forebrain is the cerebral cortex or “neomammalian brain”. Its outer surface or neocortex has many cortical folds and is thought to be the source of higher cognitive function. Parts of the neocortex are connected to sensory nerves. In particular, the sensory cortex is prewired so that nerves from the sense organs project to specific regions, most notably for sight, hearing, smell, taste, and touch. Touch maps each part of the body to corresponding areas of the primary somatosensory cortex. This creates a map of the body in the brain called the cortical homunculus. Most of this cortex is dedicated to the hands and mouth. The retina maps to corresponding areas of the visual cortex. The right hemisphere controls sight and touch for the left side of the body and vice versa. The rest of the cerebral cortex is sometimes called the association cortex to highlight its role in drawing more abstract associations, including relationships, memory, and thought. It divides into occipital, parietal, temporal, and frontal lobes. The occipital lobe houses the visual cortex and its functions are nearly all vision-related. The parietal lobe holds the primary somatosensory cortex and also areas that relate to body image, emotional perception, language, and math. The temporal lobe contains the auditory cortex and is also associated with language and new memory formation. The frontal lobe, which is the largest part of the neocortex, is the home of the motor cortex, which controls voluntary movements, and supports active reasoning capabilities like planning, problem-solving, judgment, abstract thinking, and social behavior. Most dopamine neurons are in the frontal lobe, and dopamine is connected to reward, attention, short-term memory tasks, planning, and motivation.

Approaching Control Functionally

Does any of this detail really tell us much of anything about how interneurons control things? Not really, even though the above summary only scratches the surface of what we know. This kind of knowledge only tells us approximately where certain functions reside, not how they work. Also, no function is exactly in any specific place because the brain builds knowledge using many variations of the same patterns and so naturally achieves a measure of redundancy. To develop mastery over any subject, many brain areas learn the same things in slightly different ways, and all contribute to our overall understanding. Most notably, we have two hemispheres that are limited in how much they can communicate with each other, and so allow us to specialize similarly yet differently in each half. This separate capacity is so significant that surgically divided hemispheres seem to create two separate, capable consciousnesses (though only one half will generally be able to control speech). Live neuroimaging tracks blood flow to show how related functionality is widely distributed around the brain. Brain areas develop and change over time with considerable neuroplasticity or flexibility, and can actually become bigger and stronger like muscles, but with permanent benefit. But the brain is not infinitely plastic; each area tends to specialize based on its interconnections with the body and the rest of the brain, and the areas differ in their nerve architecture and neurochemistry. But each area still needs to develop, and how we use it will partially affect its development. The parts of the neocortex beyond the sensory cortex have a similar architecture of cortical folds and similar connectivity, and none seem to be the mandatory site of any specific mental function. Each subarea of each lobe does tend to be best at certain kinds of things, but the evidence suggests that different kinds of functionality can and do develop in different areas in different people.
Again, this is most apparent in hemispheric specialization, called lateralization. The left half is wired to control the right side of the body and vice versa. The left brain is stereotypically the dominant half and is more logical, analytical, and objective, while the right brain is more creative and intuitive, but this generalization is at best only approximately true, and differences between the hemispheres probably matter less to brain function than their similarities. Still, it is true that about 90% of people process language predominantly in their left hemisphere, about 2% predominantly in the right, and about 8% symmetrically in each.2 Furthermore, while language processing was originally thought to be localized to Broca’s area in the frontal lobe and Wernicke’s area in the temporal lobe, we now know that language functionality occurs more broadly in these lobes and the chief language areas can differ somewhat in different people. It only makes sense that the brain can’t have specific areas for every kind of cultural knowledge, even though it will have strong predispositions for something like language, which has many innate adaptations.

The upshot is that the physical study of the brain is not going to give us much more detail about how and why it controls the body. So how can we explain the mind? The mind, like all living systems, is built out of function. Understanding it is the same thing as knowing what it does. So instead of looking at parts, we need to think about how it controls the body. We need to look at what it does, not at its physical construction. What would a science founded entirely on functional instead of physical principles even look like? The formal sciences like math and computer science are entirely functional, so they are a good place to start. By abstracting themselves completely from physical systems, the formal sciences can concern themselves entirely with what functional systems are capable of. The formal sciences isolate logical relationships and study their implications using streamlined models or axiomatic systems, covering things like fields (number systems), topological spaces (deformations of space into manifolds), algorithms, and data structures. They define logical ground rules and then derive implications from them. Formalisms are entirely invented, yet their content is both objective and true because their truth is relative to their internal consistency. Although functional things must be useful by definition, since the underlying nature of function lies in its ability to predict what will happen, that capacity is entirely internal. Every formalism is its own little universe in which everything follows deterministically, and those implications are what “useful” means within each system. Conceptual thinking arose in humans to leverage the power of formal systems. Conceptual thinking uses formalisms, but just as significantly it requires ways to map real-world scenarios to formal models and back again. This mapping and use of formalisms is the very definition of application — putting a model to use.
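The self-contained character of a formalism can be made concrete with a minimal sketch in Python of an invented axiomatic system (the axiom “I” and the two rewrite rules below are made up purely for illustration). Every string derivable from the axiom counts as a theorem, and “truth” within the system is nothing more than derivability, a matter of pure internal consistency.

```python
# An invented formalism: one axiom and two fixed rewrite rules.
# A string is a "theorem" exactly when it can be derived from the
# axiom; its truth is purely internal to the system.

AXIOM = "I"

def successors(s):
    """Apply every rewrite rule to a theorem, yielding new theorems."""
    out = {s + "O"}            # Rule 1: any theorem may have "O" appended
    if s.endswith("O"):
        out.add(s + s)         # Rule 2: a theorem ending in "O" may be doubled
    return out

def theorems(max_len=8, steps=5):
    """Enumerate all theorems derivable within the given bounds."""
    known = {AXIOM}
    for _ in range(steps):
        for t in list(known):
            known |= {u for u in successors(t) if len(u) <= max_len}
    return known

print(sorted(theorems(), key=len))
```

Whether a string like “IOIO” is a theorem is settled entirely inside the system by the rules; nothing in the outside world bears on it. That is the sense in which formal truths can be perfectly objective, and also why a formalism only becomes useful once it is mapped onto a real-world scenario.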

Of course, many sciences do directly study the mind from a functional perspective: the social sciences. Most work in the social sciences is concerned with practical and prescriptive ways to improve our quality of life, which is an important subject but not the point of this book. But much work in the social sciences, particularly in psychology, anthropology, linguistics, and philosophy, focuses on increasing our understanding of the mind and its use without regard to any specific benefits. Just like the formal sciences, they seek general understandings with the hope that worthwhile applications will ensue. The social sciences are reluctant to talk too much about their philosophical basis because it is not firmly established. They would like to tacitly ride the coattails of physicalism on the grounds that humans evolved and are natural phenomena in a deterministic universe, and so our understanding of them should ultimately be able to enjoy the same level of objective certainty that the physical sciences claim. But this is impossible because information is constructed out of approximations, namely correlations of similarity that have been extrapolated into generalizations. The information of life is processed by cells and the information that manages the affairs of animals is managed by minds. The social sciences need only acknowledge their dual foundation on form and function dualism. This enhanced ontology shifts the way we should think about the social sciences. Where we could not previously connect functional theories back to physical mechanisms, we can now see that computational systems create information that connects back to referents indirectly. It is all a natural consequence of feedback, but it gives nonphysical things a way to manifest in an otherwise physical world. DNA allows information to be captured and managed biologically, and minds allow it to be captured and managed mentally. 
Culture then carries mental information forward through succeeding generations, and the Baldwin effect gradually sculpts our nature to produce culture better in a mutual feedback loop.

Deriving Objective Knowledge

More specifically, what would constitute scientific or objective knowledge about function? Objectivity seeks to reveal that which exists independent of our mental conception of that existence. In other words, objectivity seeks to reveal noumena, either physical or functional. We know that minds can’t know physical noumena directly and so must settle for information about them, which is to say, percepts and concepts about them. But how can percepts or concepts be seen as independent of our mental conception if they are mental conceptions? Strictly speaking, information about something other than itself can’t be perfectly objective. But information that is entirely about itself can be perfectly objective. I said before that we can know some functional noumena through direct or a priori knowledge by setting them true by definition. If we establish pure deductive models with axiomatic premises and fixed rules that produce inescapable conclusions, then truths in these models can be said to be perfectly objective because they are based only on internal consistency and are thus irrefutable.

If deduction is therefore wholly objective, what does that mean for percepts and concepts? Percepts are created inductively, without using axioms written in stone, and are therefore entirely biased. They have a useful kind of bias because information backs them up, but their bias is entirely prejudicial. One can’t explain or justify perception using perception, but it is still quite helpful. Concepts are both inductive and deductive. We create concepts from the bottom up based on the inductive associations of percepts, but the logical meaning of concepts is defined from the top down in terms of relationships between concepts, and this creates a conceptual model. From a logical standpoint, concepts are axiomatic and their relationships support deduction. Viewed from this perspective, conceptual models are perfectly objective. However, to have practical value, conceptual models need to be applied to circumstances, and this application depends on inductive mappings and motivations, which are highly susceptible to bias. So the flat earth theory is objectively true from the standpoint of internal consistency, but it doesn’t line up with other scientific theories and the observations that support them, so it seems likely that flat-earthers are clinging to preexisting beliefs and prejudicially excluding or interpreting other conceptual models as confirming their views (confirmation bias). We are capable of believing anything if we don’t know any better or if we choose to ignore contradicting information. When our conceptual models are inconsistent or available evidence contradicts them, it is risky to ignore our own capacity for reason, but we are free to go with our intuition instead if we like. But that route is not scientific. Experimental science must make an adequate effort to ensure that the theoretical model applies to appropriate experimental conditions, and this includes minimizing negative impacts from bias.

If maximizing objectivity means minimizing bias, then we need to focus on the biases most likely to compromise the utility of the results. Before I look at how we do that with science, consider how important minimizing bias is to us in our daily lives. If we worked entirely from percepts and didn’t think anything through conceptually, we could just proceed with knee-jerk reactions to everything based on first impressions, which is how animals usually operate (and sometimes people as well). As an independent human, if I am hungry, I can’t just go to my pantry and expect food to always be there. I have to buy the food and put it in the pantry first. To make things happen the way I want to an arbitrarily precise degree, I need lots of conceptual models. By going to the extra trouble of holding things to such rational standards, that is, by expecting causes to exist for all effects, I know I can achieve much better control of my environment. But I am not comfortable just having better control; I want to be certain. For this, I have to both have conceptual models and apply them correctly to relevant situations. Having done this, my certainty is contingent on my having applied the models correctly, but if I have minimized the risk of misapplication sufficiently, I can attach a special cognitive property called belief to the models, and through belief I can perceive this contingency as a certainty. The result is that I always have food on hand to the exact degree I expect.

As a lower-level example, we have many conceptual models that tell us what our hands can do. Our perception of our hands is very good and is constantly reinforced and never seems to be contradicted, so we are quite comfortable about what we can do with our hands. However, the rubber hand illusion demonstrates how easily we can be fooled into thinking a rubber hand is our own. When a subject’s real left hand is hidden from view and a similar-looking rubber hand is placed in a plausible position beside their right hand, then when the real and rubber hands are stroked synchronously with paintbrushes, the subject will start to feel that the rubber hand is their own. What this shows is that the connection between low-level senses and the high-level experience of having hands is not created by a fixed information pathway but triggers when given an adequate level of stimulation. In this case, providing visual but no tactile stimulation to the hand in question prompts us to feel the hand is our own. That activated experience is all-or-nothing — we believe it is our hand. Of course, this illusion won’t hold up for long; we have only to move our arms or fingers or touch our hands together and we will see through it.

Conscious belief works much the same way to let us “invest” in conceptual models. Once a threshold of supporting stimuli is achieved, we believe the model properly applies to the situation, which carries all its deductive certainty with it. We do realize on some level that the application of this model to the situation could be inappropriate, or that our conceptual model may be simplified relative to the real-world situation and thus may not adequately explain all implications, but belief is a state of mind that makes the models feel like they are correct. Movies feel real, too, although we know they are made up. We know belief has limits, but because we can feel it, it lets us apply our emotions and intuitions to develop fast but holistic responses. We need belief to make our conceptions feel fully embodied so we can act on them efficiently.

Science proposes conceptual models. Formal sciences stay as rigorously deductive as possible and are thus able to retain nearly all their objectivity. Though some grounds for bias must always remain, bias is systematically eliminated from the formal sciences because it is so easily spotted. The explicit logic of formalisms doesn’t leave as much room for interpretation. For natural or real numbers to work consistently with the operations we define on them, we have to propose axioms and rules that can be tested for simplicity and consistency from many angles. That said, the axioms of formal systems have their roots in informal ideas that are arguably idiosyncratic to humans or certain humans. Other kinds of minds could conceivably devise number systems quite unlike the ones we have developed. However, I am concerned here with minds, which exist in physical brains and not simply as formalisms. We have to consider all the ways that bias can impact the experimental sciences.

The effect of bias on perception is generally helpful because it reflects knowledge gained through experience, but cognitive bias is detrimental because it subverts logical reasoning, leading to unsound judgments. Many kinds of cognitive biases have been identified and new ones are uncovered from time to time. Cognitive biases divide into two categories, hot and cold. Cold biases are inadvertent consequences of the ways our minds work. Illusory correlations lead us to think that things observed together are causally related. It is a reasonable bias because our experience shows that they often are. Neglect of probability is a bias which leads people to think very unlikely events are nearly as likely as common events. Again, this is reasonable because our experience only includes events that happened, so our ability to contemplate an event makes it feel like it could plausibly happen. This bias drives lottery jackpots sky high and makes people afraid that the events they see on the news will happen to them. Anchoring bias leads us to put too much faith in first impressions. It usually helps to establish a view quickly and then refine it over time as more information comes in, but this tendency gives stores an incentive to overprice merchandise and then run sales to give us the feeling we got a bargain. A related bias, insensitivity to sample size (which I previously mentioned was discovered by Kahneman and Tversky), relates to our difficulty in intuitively sensing how large a sample needs to be to be statistically significant. Combined with our faith that anyone doing a study must have done the math to pick the right size, it leads us to trust invalid studies. Hot biases happen when we become emotionally invested in the outcome, leading to “wishful” thinking. When we want a specific result, we can rationalize almost any behavior to reach it. But why would a scientist want to reach a specific result rather than learn the truth?
It can either be to avoid cognitive dissonance or because the incentives to do science are skewed. Cognitive dissonance includes confirmation bias, which favors one’s existing beliefs; superiority bias, the sense that one knows better than others; or simply the bias of believing oneself unbiased. We all like to think we have been thinking clearly, which leads us at first to reject evidence that we have not been. Skewed scientific incentives are the biggest bias problem that science faces. Private research is skewed to bolster claims that maximize profit. Public research is skewed by political motivations, but even more so by institutional cognitive dissonance, which carries groups of people along on the idea that whatever they have been doing has been justified. Individual scientists must publish or perish, which skews their motivations toward quantity rather than quality.
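Insensitivity to sample size is easy to demonstrate with a short simulation. The sketch below (in Python; the 70% cutoff and trial count are arbitrary choices for illustration) flips a fair coin in samples of different sizes and measures how often a sample looks “strikingly” biased toward heads.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def extreme_rate(sample_size, trials=10_000, cutoff=0.7):
    """Fraction of fair-coin samples in which the proportion of heads
    reaches `cutoff`, i.e., a 'striking' result arising by chance."""
    hits = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        if heads / sample_size >= cutoff:
            hits += 1
    return hits / trials

# Small samples reach an extreme result far more often than large ones,
# even though the coin is fair in both cases (binomially, P is about
# 0.17 for 7+ heads in 10 flips but only ~0.00004 for 70+ in 100).
print(extreme_rate(10), extreme_rate(100))
```

Intuitively we treat a 10-flip study and a 100-flip study as comparably informative, but the small one produces an impressive-looking result by chance orders of magnitude more often, which is exactly the trap Kahneman and Tversky identified.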

Knowing that cognitive bias is a risk, let’s consider how much it impacts the physical sciences. Most physical laws are pretty simple and are based on only a few assumptions, but because nature is quite regular this has worked out pretty well. The Standard Model of particle physics predicts what will happen to matter and energy with great reliability outside of gravitational influences, and general relativity predicts the effects of gravity. Chemistry, materials science, and earth science also provide highly reliable models which, in principle, reduce to the models of physics. Because the models in these sciences have such a strong physical basis and can be tested and replicated using precision instruments to high technical standards, there is not a lot of room for cognitive bias. For example, despite great political pressure to find that the climate is stable, nearly all scientists maintain that it is changing quickly due to human causes. Politics does not sway science much, and the proven value of the truth seems higher than people’s desires to support fictions. More could be done to mitigate bias, but overall it is a minor problem in the physical sciences. But systemic bias is a bit more than a theoretical concern. In what is widely thought to be the most significant work in the philosophy of science, The Structure of Scientific Revolutions, Thomas Kuhn argued that normal science moves along with little concern for philosophy until a scientific revolution undermines and replaces the existing paradigm(s). This change, which we now call a paradigm shift, doesn’t happen as soon as a better paradigm comes along, but only after it gains enough momentum to change the minds of the stubborn, closed-minded majority. Kuhn inadvertently alienated himself from the scientific community with this revelation and never could repair the damage, but the truth hurts.
And the truth is, there is a hole in the prevailing paradigm of physicalism that I’d like to drive a Mack truck through: physicalism misses half of reality, the functional half. Fortunately, functional existence doesn’t undermine physical theories; quite the opposite, it extends nature to include more kinds of phenomena than we thought it could support. That functional things can naturally exist using physical mechanisms makes nature broader than the physicalists thought it was. But putting this one point of ontological overreach aside, the physical sciences tend to be biased only in small ways.

The biological sciences are heavily functional but depend on physical mechanisms that can often be explained mechanically. Consequently, we have many biological models that are quite robust and uncontroversial, and this makes bias a minor issue for much of biology. Medicine in the United States, however, is strongly driven by the profit motive, and this often gets in the way of finding the truth. This bias leads to overmedication, overtreatment, and a reluctance in medicine to discover and promote healthy lifestyles. Any scientific finding that was influenced by the profit motive is pretty likely to be skewed and may do more harm than good. Because of this, nearly all pharmaceuticals are still highly experimental and are likely more detrimental than beneficial. You have only to look at the list of side effects to see that you are playing with fire. Surgery is probably medicine’s greatest achievement because it provides a physical solution to a physical problem, so the risks and rewards are much more evident. However, its success has led to its overuse, because it always comes with real risks that patients may underappreciate.

Bias is a big problem for the social sciences because they have no established standard for objectivity, and yet there are many reasons for people to have partial preferences. The risk of wishful thinking driving research is great. To avoid overreach, the scope of claims must be constrained by what the data can really reveal. The social sciences can and do demonstrate correlations in behavior patterns, but they can never say with complete confidence what causes that behavior or predict how people will behave. The reason is pretty simple: the number of variables that contribute to any decision of the mind is nearly infinite, and even the smallest justification may rise up to become the determinant of action. An information processing system of this magnitude has a huge potential for chaotic behavior, even if most of its actions appear to be quite predictable. I can’t possibly tell you what I will be doing one minute from now. I will almost certainly be writing another sentence, but I don’t know what it will say. But I’m not just trying to find correlations; I am looking for explanations. If behavior can’t be precisely predicted, what can be explained? The answer is simply that we don’t need to predict what people will do, only explain what they can do. It is about explaining potential rather than narrowly saying how that will play out in any specific instance. Explaining how the mind works is thus analogous to explaining how a software program works — we need to delve into what general functions it can perform. What a program is capable of is a function of the hardware capabilities and the range of applications for which the algorithms can be used. What a mind is capable of is a function of species-specific or phylogenetic traits and developed or ontogenetic traits.

The Contribution of Nature and Nurture to the Mind

This brings us to the subject of nature vs. nurture. Nature refers to inherited traits, and nurture refers to environmental traits, meaning distinctions caused by nongenetic factors. While phylogeny is all nature, ontogeny is a combination of nature and nurture. How much is nature and how much is nurture? This question doesn’t really have an answer because it is like asking how much of a basketball game is due to the players playing and how much to the fans watching. From one perspective, it is entirely the players, but if the fans provide the interest and money for the game to take place, they arguably cause the game to happen, and if nearly all the minds participating are fans, then the game is mostly about the audience and the players are almost incidental. Nurture provides an add-on which is dramatic for humans. Most animals under normal environmental conditions can fairly be said to have entirely inherited traits with perhaps no significant environmental variation at all. Given normal environments, our bodies develop along extremely predictable genetic lines, rendering identical twins into almost identical-looking adults. Most of ontogeny proceeds from the differentiation of cells into tissues and organs, called epigenesis, but environmental factors like diet, disease, trauma, and even chance fluctuations can lead to differences between genetic twins. Brains, however, can create a whole new level of nurtured traits because they manage real-time information.

While genetic and epigenetic information is managed at the cellular level by nucleic acids and proteins, the behavioral information of animals is managed at a higher level by the neuroendocrine system, which, for simplicity, I will just call the brain. The mind is the conscious top-level control subprocess of the brain, which draws support as needed from the brain through nonconscious processes. Anything the nonconscious mind can contribute to the conscious mind can be thought of as helping to comprise the whole mind. We can first divide nurtured traits in the cognitive realm according to whether they are physical or learned. Physically, things like diet, disease, and trauma affect the development of the brain, which in turn can affect the capabilities of the mind. I am not going to dwell on these kinds of traits because most brains are able to develop normally. Learned traits all relate to the content of information based on patterns detected and likelihoods established. While we create all the information in our minds, we are either the primary creators or secondary consumers of information by others. Put another way, primary intrinsic information is made by us and secondary intrinsic information is conveyed to us by others using extrinsic information; let’s call this acquisition primary and secondary learning. Most nonhuman animals, especially outside of mammals and birds, have little or no ability to gather extrinsic information and so rely almost entirely on primary learning. They use phylogenetic (inherited) information processing skills like vision to learn how to survive in their neck of the woods. Their specific knowledge depends on their own experience, but all of that knowledge was created by their own minds using their natural talents.

Humans, however, are highly dependent on socialization. We can gather extrinsic information from others either through direct interactions or through media or artifacts they create. Our Theory of Mind (TOM) capacity, as I previously mentioned, lets us read the mental states (such as senses, emotions, desires, and beliefs) of others. We learn to use TOM itself through primary learning as we don’t need to be taught how, but the secondary information we can learn this way is limited to what their body language reveals. Of greater interest here is not what we can pick up by this kind of “mind reading”, but what others try to convey to us intentionally. This communication is done most notably with language, but much can also be conveyed using pictures, gestures, etc. Language is certainly considered the springboard that made detailed communication possible. Language labels the world around us and makes us very practiced in specific ways of thinking about things. Language necessarily represents knowledge conceptually because words have meanings and meanings imply concepts, though the same word can have many meanings in different contexts. Language thus implies and depends on the existence of many shared conceptual models that outline the circumstances of any instance of language use. (Of course, this only relates to semantic content, which, as I previously noted, is only a fraction of linguistic meaning.)

Since we must store both primary and secondary information intrinsically, we each interpret extrinsic information idiosyncratically and thus derive somewhat different value from it. But if extrinsic information is different for each of us, in what sense does it even exist? It exists, like all information, as a conveyor of functional value. It indicates, in specific and general ways, how similarities can be detected and applied to do things similarly to how they have been done before. We may have our own ways of leveraging extrinsic learning, but it has been suitably generalized so that some functionality can be gleaned. In other words, our understanding of language overlaps enough that we usually have the impression we understand each other in a universal rather than an idiosyncratic way.

Except for identical twins, we all have different genes. Any two human genomes differ in about five million places, and while nearly all of those differences usually have little or no effect and nearly all the rest are neutral most of the time, there are still many thousands of differences that are sometimes beneficial or detrimental. Many of these no doubt affect the operating performance of the brain, which in turn impacts the experience of the mind. Identical twins raised together or apart are often strikingly similar in their exact preferences and abilities, but they can also be quite different. If even identical twins can vary dramatically, we can expect that it will be difficult and often impossible to connect learned skills back to genetic causes. People just learn different things in different ways because of different life experiences and reactions to them. We know that genetic differences underlie many functional differences, but the range of human potential is so large that everyone is still capable of doing astonishing things if they try. But this book is not about genius or self-help. My goal is just to explain the mental faculties we all have in common.

I’m not going to concern myself much going forward with which mental functions are instinctive and which are learned, or which are universal by nature and which by nurture. It doesn’t really matter how we come by the universal functions we all share, but I am going to be focusing on low-level functions whose most salient similarities are almost certainly nearly all genetic. The basic features of our personalities are innate; we can only change our nature in superficial ways, which is exactly why it is called our nature. But many of those features develop through nurture, even if the outcome is probably largely similar no matter what path we take in life.

How to Study the Mind Objectively

Getting back to objectivity, how can we study the mind objectively? The mind is mostly function and we can’t study function using instruments. The formal sciences are all function but can’t be studied with instruments either. We create them out of whole cloth, picking formal models on hunches and then deriving the implications using logic alone. We change formal models to eliminate or minimize inconsistencies. We strip needless complexities until only relevant axioms and rules remain. However, the formal sciences only have to satisfy themselves, while the experimental sciences must demonstrate a correlation between theory and evidence. But if the mind is a functional system that can create formal systems that only have to satisfy themselves, can’t we study it as a formal science independent of any physical considerations? Yes, we can, and theoretical cognitive science does that.

The first serious attempts to discover the algorithms behind the mind were made by GOFAI, meaning “Good Old-Fashioned Artificial Intelligence”. This line of research ran from about 1956 to 1993 based mostly on the idea that the mind used symbols to represent things and algorithms to manipulate the symbols. This is how deductive logic works and is the basis of both natural and computer programming languages, so it seemed obvious at the time that the formal study of such systems would quickly lead to artificial intelligence. But this line of thinking made a subtle mistake: it completely missed the trees for the forest. Conceptual thinking in isolation is useless; it must be built on a much deeper subconceptual framework to become applicable or useful for any purpose. That intelligence was based on more generalized pattern processing and probably something akin to a neural network was first proposed in the 1940’s and 50’s, but GOFAI eclipsed it for decades mostly because computers were slow and had little memory. Symbolic approaches were just more attainable until the late 1980’s. Since then, neural network approaches that simulate a subconceptual sea have become increasingly popular, but efforts to build generalized conceptual layers on top of them have not yet, to my knowledge, borne fruit. It is only a matter of time before they do, and this will greatly increase the range of tasks computers can do well, but it will still leave them without the human motivation system. Our motivations channel our cognitive energies in useful directions, and without such a system one would have no way to distinguish a worthwhile task from a waste of time. We can, of course, provide computers with some of our motivations, such as to drive cars or perform internet tasks, and they can take it from there, but we couldn’t reasonably characterize them as sentient until they have a motivational system of their own.
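The contrast between the two approaches can be made concrete with a small sketch. Everything below is my own invented illustration (the "edible" task, the feature encoding, and all names are hypothetical, not drawn from any actual GOFAI system): a hand-authored symbolic rule table next to a minimal perceptron that induces the same distinction from examples.

```python
# Illustrative toy contrast between the symbolic and subconceptual
# approaches discussed above. Everything here is invented for illustration.

# GOFAI style: explicit symbols and hand-written rules. Brittle: it knows
# nothing at all about any symbol outside its table.
RULES = {"apple": True, "bread": True, "rock": False, "glass": False}

def symbolic_is_edible(thing):
    return RULES[thing]  # raises KeyError for any unlisted symbol

# Subconceptual style: a one-neuron perceptron that induces a boundary
# from examples encoded as features [softness, is_organic].
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # simple error-driven weight update
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

samples = [[0.9, 1], [0.8, 1], [0.1, 0], [0.2, 0]]
labels = [1, 1, 0, 0]  # soft organic things count as "edible" in this toy world
w, b = train_perceptron(samples, labels)

def learned_is_edible(x):
    # Unlike the rule table, the learned boundary also classifies
    # inputs it never saw during training, such as [0.7, 1].
    return w[0] * x[0] + w[1] * x[1] + b > 0
```

The structural point is the one made above: the rule table fails outright on anything outside its symbols, while the perceptron interpolates from patterns in its data. Real subconceptual systems are vastly larger, but the difference in kind is the same.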
Sci-fi writers usually take for granted that the appropriate motivations to compete for survival will come along automatically with the creation of artificial conceptual thought, but this is quite untrue. Our motivational system has been undergoing continuous incremental refinement for four billion years, much longer than the distinctive features of human intelligence, which are at most about four million years old. That is a long time, and it reflects the amount of feedback one needs to collect to build a worthwhile motivational system. The idea that we could implant a well-tuned motivational system into a robot with, say, three laws of robotics, is preposterous (though intriguing). Perhaps the most egregious oversight of the three laws, just to press this point, is that they say nothing about cooperation but instead suppose all interactions can be black or white. Of course, the shortcomings of the laws were invariably the basis of Asimov’s plots, so this framework provided a great platform to explore conundrums of motivation.

I do think that by creating manmade minds, artificial intelligence research will ultimately be the field that best proves how the mind works. But I’m not going to wait for that to materialize; I’d like to provide an explanation of the mind based on what we already know. To study the mind as it is, we can’t ignore the physical circumstances in which it was created and in which it operates. And yet, we can’t overly dwell on them either, because they are just a means to an end: they provide the mechanisms which permit functionality, but they are not functional themselves. How can we go about explaining the functionality if the physical mechanism (mainly the biochemistry) doesn’t hold the answers? All we have to do is look at what the mind does rather than what the brain is. The mind causes the body to do things; not randomly, but as a consequence of information processing. In the same way that we can model physical things as physical units connected physically, we can model the mind using functional units connected functionally. Where we can confirm physical mechanisms by measuring them with instruments, we can confirm functional mechanisms by observing their behavior. The behaviorists held that only outward physical behavior mattered and that mental processes were direct byproducts of conditioning. While we now know short, direct feedback loops can be very effective, most of our thinking involves very complex and indirect feedback loops that include a lot of thinking things through. We do need to examine behavior, but at the level of thoughts and not just actions. All the information management operations conducted by the mind are organized around the objective of producing useful (i.e. functional) results, so we need to think about what kinds of operations can be useful and why.

If we aren’t using instruments to do this, how can we do it objectively? Let’s recall that the goal of objectivity is to reveal noumena. Physical noumena are fundamentally tangible, so instruments offer the only way to learn about them. But if we interpreted those findings using just our feelings and intuitions about them, i.e. percepts (which include innate feelings and learned percepts, aka subconcepts), the results would be highly subjective. Instead, we conceive of conceptual models that seem to fit the observations and then distill them into more formal deductive models called hypotheses, which specify simplified premises and logical implications. These deductive models and instrumental observations will qualify as objective if they can be confirmed by anyone independently. However, to do that, we need to be able to connect the deductive model back to the real world. Concepts are built on percepts, so conceptual models are already well-connected to the world, but deductive models have artificial premises and so we need a strategy to apply them. The hypothesis does this by specifying the range of physical phenomena that can be taken to correspond adequately to the artificial premises. Actually doing this matching is a bit subjective, so some bias can creep in, but having more scientists check results more ways can minimize the encroachment of subjectivity to a manageable level.

Functional noumena are fundamentally informational, which means that they exist to create predictive power that can influence behavior. Function isn’t tangible, so the only way we can learn about it is from observing what it does; not physically but functionally. Just as we form deductive models to explain physical phenomena, we need to form deductive models to explain functional phenomena. Our feelings and intuitions (percepts) about our minds are not in and of themselves helpful for understanding the mind. To the extent introspection only gives us such impressions, it is not helpful. If we form a deductive model that explains the mind or parts of it, then the model itself and the conclusions it reaches will be quite objective because anyone can confirm them independently. But whether the model conforms to what actual minds are doing is another story. Without instruments, how can we make objective observations of the behavior of the mind? Consider that conceptual models of both physical and functional phenomena have a deductive component and a part where the concepts are built out of percepts. This latter part is where subjectivity can introduce unwanted bias.

We can’t ignore the perceptual origins of knowledge; it all starts from these inductive sources, and they are not objective because they are necessarily not sharable between minds. But concepts are sharable and are the foundation of language and the principal leg up that humans have over animals, cognitively speaking. Every word is a concept, or actually a concept for each dictionary entry, but inflection, context, and connotation are often used to shift word meanings to suit circumstances. Words and the conceptual models built out of them are never entirely rigid, both for practicality reasons and because information necessarily has to cover a range of circumstances to be useful. But the fact that language exists and we can communicate with as much clarity as we wish by elaborating is proof that language permits objective communication. But if concepts derive from percepts, won’t they compromise any objectivity one might hope to achieve? Yes, but objectivity is never perfect. The goal of objectivity is to share information, and information is useful whenever it provides more predictive power than random chance. So the standard for objectivity should only be that the information can be made available to more than one person and that its degree of certainty is shared as well. The degree of certainty of objective knowledge is usually implied rather than explicitly stated, and can vary anywhere from slightly over 50% to slightly under 100%.

Let me provide a few examples of objectivity conveyed using words. First, a physical example. If I say that that light switch controls that light, then that is probably sufficient to convey objective information to any listener. The degree of certainty of most information we receive usually hovers just under 100%, but probably more significant is the degree of belief we attach to it. But belief is a separate subject I will cover in an upcoming chapter. Under what circumstances might my statement about the light switch fail to be objective? The following scenarios are possible: I might be mistaken, I might be lying, or I might be unclear. When I speak of a switch, a light, or control, and make gestures, commonly understood conventions fill in details, but if I either overestimate our common understanding or am just too vague, then this lack of clarity could compromise the objectivity of the message. For example, I might expect you to realize that the light will take a minute to warm up before it gives off any light, and you might flip the switch a few times and think I provided bad information. Or I might expect you to realize I only meant if the power were on or if the light had a bulb, even though I knew neither was the case. Mistaken information can still be objectively shared, but will eventually be discovered to be false. Lying completely undermines information sharing, so we should always consider the intentions of the source. Clarity can be a problem, so we need to be careful to avoid misunderstandings. But if we’ve taken these three risks into account, then the only remaining obstacle to objectivity is whether the conceptual model behind the information is appropriately applied to the situation. In this case, that means whether the switch can causally control the light and how. That, in turn, depends on whether the light is electric, gas, or something else, and the switch mechanism, and our standard for whether light is being produced.
We all have lots of experience with electric lights and standard light switches, so if this situation matches those familiar experiences, then our expectations will be clear. If the light fails to turn on, 99% of the time it will be a bulb failure rather than any of the other issues mentioned above.

Now let’s consider a functional example, specifically with a conceptual description of a mental phenomenon. If I say hearing gives me pretty detailed information about noises in my proximity, e.g. when a light switch is flipped, then that is probably sufficient to convey objective information to any listener. It’s not about the light switch; it is about the idea that my mind can process information about the noise it produces. Again, we have to control for mistakes, lies, and clarity, but taking those into account, we all have lots of experience with the sounds light switches make, so if this situation matches those familiar experiences, then our expectation of what objectively happened is about as clear as with the prior physical example. When we judge something to be objectively true even though it depends only on our experience, it is because of the overwhelming amount of confirming evidence. While not infinite, our experience with common physical and mental phenomena stretches back across thousands of interactions. While technically past performance is no guarantee of future results, we create deductive models based on past performance that absolutely guarantee future results, but with the remaining contingency that our model has to be appropriate to the situation. So if we fail to hear the expected sound when the switch is flipped, 99% of the time it will be because this switch makes a softer sound than we usually hear. And some switches make no sound, so this may be normal. Whatever the reason, we won’t doubt that “hearing” is a discrete conscious phenomenon. And like all conscious phenomena, the only way we can know if someone heard something is if they say so.

Our Common-Sense Theory of Consciousness

Our own conscious experience and reports from others are our only sources of knowledge about what consciousness is, so any theory of consciousness must be confirmed principally against these sources. Neuroscience, evolution (including paleontology, archaeology, anthropology, and evolutionary psychology), and psychology can add support to such theories, but quite frankly we would never guess that something like consciousness could even exist if we didn’t know it from personal experience. Still, although we believe our conversations lead to enough clarity to count as objective, is this enough to be the basis of science? For example, should we take hearing as a scientifically established feature of consciousness just because it seems that way to us and we believe it seems that way to others? To count as science, it is not enough for a feature to be felt, i.e. perceived; it has to be conceptualized and placed in a deductive model that can make predictions. Further, the support of the other sciences mentioned above is critical, because all evidence has to support a theory for it to become established. I propose, then, that the first step of outlining a theory of the mind is to identify the parts of the mind which we universally accept as components, along with the rules we ascribe to them. Not all of us hold a common-sense view of the mind that is consistent with science or, if it is, one that is entirely up to date. But most of us believe science creates more reliable knowledge and so subscribe to at least the broadest scientific theories that relate to the mind, and I will assume a common-sense perspective that is consistent with the prevailing paradigms of science. My contention is that such a view is a good first cut at an objective and scientific description of how the mind works, though it has the shortcoming of not being very deep. To show why it is not very deep, I have put in bold below terms that we take for granted but are indivisible in the common-sense view.
Here, then, is the common-sense theory of consciousness:

Consciousness is composed of awareness, feelings, thoughts, and attention. Awareness gives us an ongoing sense of everything around us. Feelings include senses and emotions, which I have previously discussed in some detail. We commonly distinguish a small set of primary senses and emotions, but we also have words for quite a variety of subtle variations. Thoughts most broadly include any associations that pass through our awareness, where an association is a kind of connection between things that only briefly lingers in our awareness. Attention is a narrow slice of awareness with a special role in consciousness. We can only develop conscious associations and thoughts about feelings and thoughts that are under our attention. We usually pay close attention to the focal point of vision, but can turn our attention to any sense (including peripheral vision), feeling, or thought if we believe it is worth paying attention to. We develop desires for certain things, which causes us to prioritize getting them. We also develop beliefs about what information can be acted on without dedicating further thought. We have a sense of intuition about many things, which is information we believe we can act on despite not having thought it through consciously. Broadly, we can take everything intuitive as evidence of a nonconscious mind. We also have reason, which is the ability to create associations we do process through conscious attention. We feel that we reason and reach decisions of our own free will, which is the feeling that we are in control of our mental experience, which we call the self. We know we have to learn language, but once we have learned it, we also know that words become second nature to us and just appear in our minds as needed. But words, and all things we know of, come to us through recollection, a talent that gives us ready access to our extensive memory when we present triggering information to it. 
Recognition, in particular, is the recollection of physical things based on their features. We don’t know why we have awareness, feelings, thoughts, attention, desires, beliefs, intuition, reason, free will, self, or recollection; we just accept them for what they are and act accordingly.

To understand our minds more deeply we need to know the meaning, purpose, and mechanism of these features of consciousness, so I will be looking closer at them for the balance of the book. I am explicitly going to develop conceptual explanations rather than list all my feelings about how my mind seems to be working because concepts are sharable, but feelings and subconcepts are not (except to the degree we can read other people’s feelings and thoughts intuitively). If stated clearly and if compatible with scientific theories, such explanations qualify as both objective and scientific, not because they are right but because they are the best we can do right now. The conceptual explanations I am going to develop concern the mechanisms that drive feelings, subconcepts, and concepts. Although some of these mechanisms are themselves conceptual, many of them and all their underlying mechanisms are based on feelings or subconcepts, so mostly I will be developing conceptual explanations of nonconceptual things. This sounds like it could be awkward, but it is not, because physical things are not conceptual either, yet we have no trouble creating conceptual explanations of them. The awkward part is just that explanations categorize and simplify, but no categorization is perfect, and every simplification loses detail. But the purpose of explanation, and information management in general, is to be as useful as possible, so we should not hesitate to start subdividing the mental landscape into kinds of features just because these subdivisions are biased by the very information from which they are derived. I can’t guarantee that every distinction I make can stand up to every possible objective perspective, only that the ones I make stand up very well to the perspectives I am aware of, which is based on a pretty broad knowledge and study of the field.
To the limits of practicality, I have reconsidered all the categorizations of the mental landscape I have encountered in order to develop the set that I feel is most explanatory while remaining consistent with the theories I consider best supported. Since I am not really going very deep here, I am hoping that my judgments are overwhelmingly supported by existing science and common sense. Even so, I don’t think many before me have undertaken a disciplined explanation of mental faculties using a robust ontology.

Toward a Scientific Theory of Consciousness

If we have to be able to detect something with instruments for it to objectively qualify as a physical entity, we need to be able to see the implications of function to objectively qualify something as a functional entity. In mathematics, the scope of implications is carefully laid out with explicit rules and we can just start applying them to see what happens. In exactly the same way, function is entirely abstract and we need only think about it to see what results. The problem is that living systems don’t have explicit rules. To the extent they are able to function as information processors, they do so with rules that are refined inductively, not deductively. Yes, as humans we can start with induction to set up deductive systems like mathematics, which then operate entirely deductively. But even those systems are built on a number of implicit assumptions about why it is worthwhile to develop them the way they are, assumptions which relate to possible applications by inductively mapping them to specific situations. So we can’t start with the rules of thought and go from there because there are no rules of thought. We have to consider what physical mechanisms could result in information processors that operate by implicit rules of thought, and from there see if we can find a way to explicitly describe these implicit mechanisms. We know that living things evolved, and we know quite a bit about how nerves work, even if not to the point of understanding consciousness, and we know much about human psychology, both from science and personal familiarity. Together, these things put some hard constraints on the kinds of function that are possible and likely in living things and minds. So we aren’t going to propose models for the mind based solely on abstract principles of function as we do with mathematics, because we have to keep our models aligned with what we know of what has physically happened.
But we do know that our only direct source of what happens in minds comes from having one ourselves, and our only secondary source comes from what we glean from others. The other sciences provide a necessary supporting framework, but they don’t hint at minds as we know them, only at information processing capacity, much like we think nonsentient robots might have.

So our hypotheses for what kinds of things minds can do, worded as a conceptual framework, must begin with what we think our own minds are doing. Any one thought we might have will be subjective in many ways, but objectivity grows as a thought is considered from more perspectives, or by more people, across a broader range of circumstances, to describe a more general set of happenings than the original thought. Once generalized and abstracted to a conceptual level, a deductive model can be manipulated apart from the percepts and subconcepts on which it is founded. In other words, it can be formulated as a free-standing hypothesis from which one can draw conclusions without knowing how or how well it corresponds to reality. It does matter, of course, that it correspond to reality, but demonstrating that correspondence can be undertaken as a separate validation process that will then elevate it from hypothesis to accepted theory. In practice, the scientific method is iterative, with reality informing hypothesis and vice versa to create increasing refinements to what counts as scientific truth. What this means is that our first efforts to delineate the mind should not come from an ivory tower but from an already heavily iterated refinement of ideas on the subject that draws on the totality of our experience. My views on the subject have been crystallizing gradually over the course of my life, as they have for everyone, but in writing this book, they have definitely become less jumbled and more coherent. It is asking too much of this present explanation to be as comprehensive as future explanations will be, given further evidence and iterations, but it should at least be a credible attempt to tie together all the available information.

I believe I have by now sufficiently justified the approach I will use, which is essentially the approach I have used up to this point as well. That approach is, in a nutshell, to draw on experience I have gathered from my own mind, and my study of what others have said about the mind or about the systems that support it, to put forth a hypothesis for how the mind works that is consistent with both the common-sense view of the mind and what science has to offer. This hypothesis doesn’t originate with either common sense or science, but rather draws iteratively on both, across my own life and back through the history of philosophy and science. Although I hope this approach simply sounds sensible, it sharply breaks with both philosophic and scientific tradition, which are based principally on speculation. I would like to offer the words of Stephen S. Colvin from 1902 in defense of this approach, from his essay, The Common-Sense View of Reality3:

With whatever tenacity the common-sense view may have held its place in ordinary thinking, the history of philosophy shows that from the very beginning speculation broke away from the naive conception of reality in an attempt to harmonize the contradictions between logical thinking and perception. Even before systematic philosophy had developed in Greece, the Eastern sages had declared that the whole world of sense was illusion, that phenomena were but the veil of Maya, that life itself was a dream, and its goal was Nirvana. … Both Heraclitus and Parmenides speak of the illusion of the senses; while Zeno attempted with his refined logic to refute all assertion of the multiplicity and changeability of being. … The Sophists aimed at the destruction of all knowledge, and tried to reduce everything to individual opinion. Protagoras declared that man was the measure of all things, and denied universal validity. … The edifice which the Sophists destroyed Socrates, Plato, and Aristotle, tried to rebuild, but not with perfect success. Socrates does not attempt to gain insight into nature… Plato starts with an acknowledgment that perception can yield no knowledge, and turns his back on the world of sense to view the pure ideas. Aristotle returns in part to phenomena, yet he denies a complete knowledge of nature as such, and, since he considers matter as introducing something contingent and accidental, he finds in intuitive reason, not in demonstration, the most perfect revelation of truth.

Post-Aristotelian philosophy sought knowledge mainly for practical purposes, but was far from successful in this search. … Descartes begins with scepticism, only to end in dogmatism. … Locke’s polemic against the Cartesian epistemology, as far as the doctrine of innate ideas is involved, resulted in leaving the knowledge of the external world in an extremely dubious position; while Berkeley, following after, attempts to demolish the conception of corporeal substance; and Hume, developing Locke’s doctrine of impressions and ideas, removes all basis from externality and sweeps away without compunction both the material universe and the res cogitans, leaving nothing in their places but a bundle of perceptions. [Immanuel Kant tries] to restore metaphysics to her once proud position as a science, the Critique of Pure Reason is given to the world, and a new era of philosophy is inaugurated. … [but] few there are who today turn their attention to metaphysics, not because the questions raised are not still of burning interest, but because there is a general despair of reaching any result. Will this condition ever be changed? Possibly, but not until metaphysics has shaken off the incubus of a perverted epistemology, the pursuit of which leaves thought in a hopeless tangle; not until the common-sense view of the world in the form of a critical realism is made the starting point of a sincere investigation of reality.

Colvin’s conclusion that the common-sense view of the world must be the starting point to understanding the world as we know it has still not been embraced by the scientific community, mostly because the empirical approach to science has been successful enough to eclipse and discredit it. The idea that we can think logically about our own thought processes is starting to gain traction, but I have argued at length here that another stumbling block has been the limited ontology of physicalism. Not only should we allow ourselves to think about our own thought processes, but we also have to recognize that function and information are a completely different sort of thing than physical entities. The rise of the information sciences has made this pretty clear by now, but it was already apparent because scientific theories themselves (and all explanations) are and always have been entirely functional things. It has been sort of ridiculous to say that everything that exists is physical because that denies any kind of existence to the very explanation that says so.

The whole thrust of the above discussion is to justify my approach. I am going to break the mind down into pieces, largely the same pieces that comprise our common-sense view, and I am going to hypothesize how they work by drawing on common sense and science. There is some guesswork in this approach, and I regret that, but it is unavoidable. The chief advantage of doing things this way is that it lets me incorporate support from every source available. The chief disadvantage is that instead of strictly building my case from the ground up, as the physical sciences try to do, I also have to work from what we know and try to reverse engineer explanations for it. Reverse engineering is highly susceptible to bias, but being aware of that and seriously taking all information into account will help to mitigate its effects. As I say, any one thought can be subjective and biased, but a careful consideration of many perspectives with the objective of reaching independently verifiable results minimizes that slant.

We can’t test an individual thought, but we can test a conceptual model of how the mind works by seeing if it is consistent with the available evidence from science, from common sense, and from thought experiments. Sure, we can interpret our own thoughts any way we like, and this will lead to many incorrect hypotheses, but we can also conceive hypotheses that stand up to validation from many perspectives, and I posit that this is true for hypotheses about functional things the same way it is true of hypotheses about physical things. Our idiosyncratic ways of seeing things bias us, but bias itself is not bad; only the prejudicial use of bias is bad. All information creates bias, but more carefully constructed generalizations minimize it and eventually pass as objective. While we could probably all recognize a prototypical apple as an apple to the point where we would confidently say that it is objectively an apple, increasingly less prototypical apples become harder and harder to identify with the word apple and increasingly require further qualification. Wax apples, rotten apples, sliced apples, apple pears, and the Big Apple are examples, but they don’t undermine the utility of the apple concept; they just show that the appropriate scope of every concept decreases as its match to circumstances becomes less exact. I’m not going to assume common-sense features or explanations of consciousness are correct, but I am going to seriously consider what we know about what we know.

All of the above is just a long-winded way of saying that I am going to start in the middle. I already had a theory of the mind based on my own experience before I started this book, but that was only the starting point. The point of writing it down was to develop that theory and the network of intuitions behind it into well-supported conceptual models. I have found weaknesses in my thinking and in the thinking of others, and I have found ways to fix those weaknesses. I realized that physicalism was founded on the untenable wishful thinking that function is ultimately a physical process, which led to my explorations into what function is. That science has been guilty of overreach doesn’t invalidate all of it, but we have to fix the foundation to build stronger structures. Ironically, though, conceptual models themselves have inherent instabilities because they are built on the shifting sands of qualia and subconcepts. But, at the bottom, information really exists, where exists means it predicts better than chance alone, so the challenge of higher-level informational models is not to be perfect but to be informative — to do better than chance. We know from experience that conceptual models can, in many cases, come very close to the perfect predictive ability of deductive models. Their instabilities may be unavoidable, but they are definitely manageable. So by starting in the middle and taking a balanced view that considers as many perspectives as possible, I hope to emerge with a higher level of objectivity about the mind than we have previously achieved.

Objectivity is an evolving standard that ultimately depends on certain subjective inputs because our preferences don’t really have an objective basis. Our underlying preference that life matters, with human life mattering the most, is embodied one way or another by all our actions. We can generalize this preference further to say that functionality matters, with greater respect given to higher levels of functionality, and in this way avoid anthropocentrism. Logically, this preference derives from life’s preference to continue living. I suggest we take this as a foundational “objectified” preference, if for no other reason than that a preference for no life or function would be stagnant and is thus already a known quantity. Choosing life over death doesn’t resolve all issues of subjectivity. Life competes with other life, which creates subjective questions about who should live and who should die. Objectively, the fittest should survive. Every living thing must be fit to have survived since the dawn of life, but it is not individual fitness that is selected but rather the relative fitness of genes and genomes in different combinations and proportions. But this is a subjective standard because it literally depends on each subject and what happens to them, while objective standards depend on making categorical generalizations that summarize all those individual events. But summarizing, explaining, and understanding, which underlie objectivity, are therefore built on a foundation of approximation; they can never precisely describe everything that happened to get to that point. But the case I have been making is that conceptual models can get closer and closer by taking more perspectives into account, and this is why we have to put all our concerns on the table from the top down rather than just specializing from the bottom up.

3.2 Deduction is the Key to Intelligence

The Potential of Intelligence

Although humans are a relatively young species — just a few million years old out of over 600 million years of animal evolution — our intelligence makes us incomparable to all other animals. We also have other significant mental and physical differences, but I contend that intelligence drove them, so understanding humans comes down to understanding how our minds are intelligent. While we might loosely say that any information processor, be it a cell or a brain, is intelligent because it works purposefully, this is an inadequate characterization of intelligence. Intelligence refers to a real-time capacity to solve problems. Given enough time, good solutions to problems can evolve, but these solutions depend a lot on chance and have no conception that they are solving problems. Natural selection can and does do much to maximize what trial and error can accomplish, but it can’t figure things out.

Brains evolved to centralize control of animal bodies to provide coordinated movement and action around discrete purposes. This is a larger feedback control problem than cells face but can be solved using regulation mechanisms that evolve slowly through piecewise improvements that provide better control. In many situations, that control can be effectively accomplished using reflexes that directly respond to stimuli. Neurons are quite effective at this, but direct reactions are much less capable than indirect reactions that consider more factors and develop more nuanced responses. But how can neurons achieve more nuanced responses? Mechanically, it comes down to collecting more information and comparing that information for a more considered response. More afferent neurons can provide more and different kinds of information, but this by itself doesn’t help unless there is a way to integrate it. In principle, by using feedback loops, interneurons can inductively learn from experience how to best prioritize incoming information to achieve more effective outcomes. Although it is trial and error just like evolution, it learns much faster because it works in real time.
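As a rough illustration of how feedback can inductively re-weight incoming information, consider the following sketch. It is my own toy example, not a model of real neurons: two input channels feed a simple predictor, only one channel actually carries signal, and feedback from outcomes gradually strengthens the useful connection.

```python
import random

# Toy sketch of inductive learning from feedback (my illustration, not a
# neural model from the text): two input channels, only channel 0 actually
# predicts the outcome. Error feedback gradually re-weights the inputs
# toward the informative channel -- trial and error, but in real time.
random.seed(0)
weights = [0.5, 0.5]
learning_rate = 0.05

for _ in range(500):
    inputs = [random.choice([0, 1]), random.choice([0, 1])]
    outcome = inputs[0]                       # only channel 0 carries signal
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = outcome - prediction
    for i, x in enumerate(inputs):
        weights[i] += learning_rate * error * x   # strengthen useful inputs

print(weights)  # weight 0 climbs toward 1, weight 1 decays toward 0
```

Like evolution, this is pure trial and error; unlike evolution, the adjustment happens within a single lifetime, which is why it learns so much faster.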

The shortcoming of inductive thinking is that it can only guess based on patterns and associations seen before. What would be better than guessing is knowing, which implies certainty. But what we think of as certainty is really just the idea that premises can be connected by rules to conclusions. If specific premises always imply specific conclusions, then one has established certainty. This logical relationship between causes and effects is also called deduction. Deduction is the key to intelligence because it creates the capacity to solve problems. Mathematics is proof that deduction can solve problems provided one establishes a formal system based on axioms and rules. Many things can be shown to be necessary truths, i.e. implications that come with certainty, within any formal system. But this alone solves no problems. The word “problem” implies having the perspective of wanting to solve the problem, which implies having a mind capable of imagining why this would be preferable. However, if we instead think about what problem-solving is objectively, we can see that it matches deductive models up to actual circumstances based on similarities with the hope that conclusions implied by the models will also apply to those circumstances. In other words, deductive models can be taken as simulations of actual situations if they are similar enough. The concepts in the models are generalizations and not real, but physical things that are sufficiently similar will often behave similarly to predictions the models make. In fact, we usually find that we can make mental models that so precisely match the external physical world that we can “interact” with our mental conception of the world and find that the physical world responds exactly as our models have predicted.
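The idea that premises connected by rules yield certain conclusions can be made concrete with a toy forward-chaining engine. The facts and rules below are my own illustration; the point is only that conclusions reached this way follow with certainty from the premises, however long the chain.

```python
# Toy "formal system": known facts plus if-then rules, applied repeatedly
# until nothing new follows. Each conclusion is certain given its premises.
def deduce(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # follows with certainty
                changed = True
    return facts

rules = [({"raining"}, "ground is wet"),
         ({"ground is wet"}, "paths are slippery")]

# Chains two rules: raining -> ground is wet -> paths are slippery.
print(deduce({"raining"}, rules))
```

Induction could only report that slippery paths have tended to follow rain; deduction reaches the conclusion by chaining rules, and so can handle situations never seen before.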

Given this circumstance, it is no wonder that we take for granted both our minds and their capacity to imagine the world. We can generally consider our conception of the world and the world itself to be part and parcel of the same thing. We understand the difference; we all know perfectly well that our senses can be fooled, leading to a disconnect between mental representation and physical reality. But short of that, we have no real call to distinguish how we think about a physical thing from the thing itself; we take our thoughts about the thing to accurately describe the thing itself, even though our thoughts only capture properties of things that have special interest to us.

But what kind of mind can suppose models to describe reality and then align those models to circumstances to make them bear fruit? We now know that all advanced animals (mammals and birds) are capable of deduction, and especially those with the largest brains, but still none are as capable as humans. But put humans aside for a moment. It is evident from observations that the smarter mammals and birds want to have things and scheme for ways to get them. This way of thinking could only bear fruit if they have some capacity to model the implications of their actions, which is a deductive process. They are not just automata with preprogrammed preferences; they can make trade-offs between alternatives. And the smartest animals can devise custom solutions to problems that require them to move things and use tools to reach their objectives. They aren’t just getting inductive impressions about what is likely, they have ideas about what will happen next because they have models of cause and effect.

Ants and bees use multistep strategies to locate food and share this knowledge with others, but they don’t do it by conceiving causes and reasoning out their effects. Instead, they use instincts, which are genetically-encoded information processing techniques. Given enough time, evolution can encourage fairly complex behaviors in animals instinctively, but it is also rather limited. Each kind of encouraged behavior has to be direct enough to provide a measurable benefit to survival to be selected. Behaviors that only work in special circumstances won’t do this and so can’t evolve. But it has been demonstrated that bees can learn to associate custom markings with food sources. As small as their brains are, they can associate unique visual cues with food because this ability has probably helped every bee to survive in their specific locale. Still, it is a high-level mental association that has to be established by trial and error. There is no reason to assume they develop deductive models that project how things might be and every reason to assume that this would require much more neural capacity than they have.

But why should deduction have evolved in more advanced animals? It is because, by following possible chains of causes and effects, deduction can far outperform induction. Such chains can produce almost certain outcomes, while hunches can only improve the odds. All our technology derives from deduced solutions that make many things possible that educated guesswork could never have done. And our technology today has only scratched the surface because nanotechnology could add almost infinitely more power (though also infinitely more risk). While one-off problem-solving behaviors can’t evolve, the capacity to solve one-off problems deductively can. For such a capacity to work, a brain would need to be able to group subconceptual impressions into conceptual units which could then be strung together with rules indicating cause and effect. I have talked about this division between first-order perceptual information (which includes subconcepts) and second-order conceptual information (which includes metaconcepts) at length. I also mentioned that brain power increased dramatically in certain animal lines, especially through the emergence of the cerebral cortex. We know that better senses (which process information inductively) need more neurons, so it makes sense that better deductive capacities will as well.

Problem-solving is not accomplished with deduction alone. All our other feelings and skills contribute to our intelligence. The feelings and skills of humans are quite different from those of other animals because intelligence has shaped them. Deduction is the driving force because problem-solving depends on it, but simultaneous changes to our senses, emotions, intuition, and other mental traits are inseparably related to increased intelligence, so we need to look at their contributions as well. I’ve previously divided the mind into three parts, for innate perception, learned perception, and conception. Let me now relabel those three parts the qualitative mind, the intuitive mind, and the rational mind. The qualitative mind further subdivides into the aware mind, the attentive mind, the sensory mind, and the emotional mind. The intuitive and rational minds don’t “feel” like anything; rather, they make connections between ideas. From an early age, these three aspects of our minds start to work together to accomplish more than they could separately, to the point where it becomes hard to say where one leaves off and the other begins. But they draw on different capacities of the brain and remain fundamentally separate throughout life despite close cooperation. We can and do learn from experience and reason how to better manage our awareness, attention, senses, and emotions, but most of what they do remains innate. Reason guides most of our top-level actions and so secondarily shapes the kinds of knowledge that develop subconceptually into intuition, but intuition still mostly needs time to develop and can’t be built from reason. And reason has nothing to work with without the qualitative and intuitive minds to support it. Let’s remember that memory itself is managed subconceptually and needs time to become established.

The Rational and Intuitive Minds

Albert Einstein said, “The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift”. Einstein characterized the rational mind as a servant because it feels to us like it is entirely under our conscious control and so does our bidding. On the other hand, we don’t feel like we directly control our intuition, so we must instead simply hope that it will provide us with insight when we need it. If Einstein had thought about it, he would probably have categorized memory as part of the servant rather than the gift because it seems the most faithful part of us. But memory is just the most active part of the intuitive mind. We repeat things to ourselves in the hope that they will stick and be there when we try to recall them, but whether they come back to us is ultimately a gift, and some have received a larger gift of memory than others. The rational mind is a chain of floating islands that are our deductive models, and they float on a vast sea of intuition and qualia, which is all the processes of our mind which we don’t understand. We don’t know how memory works, or vision, or anything really except for the stories we tell ourselves up on the islands. Einstein’s point is that we have become focused on building bigger and better islands at the expense of the world they float on, which, as a sacred gift, must be recognized as having some kind of deep and irreplaceable significance. Of course, beyond religious metaphors, the qualitative and intuitive minds conduct a great deal of information processing to make island life readily comprehensible to us. Not only must we not take this gift for granted, but we also need to build some chains of islands to explain it in terms we can understand, and that is what I am doing here.

How fair is it to say that the rational mind is the same as the conscious mind and the qualitative and intuitive minds are the same as the nonconscious mind? I want to get this out of the way so it doesn’t bog us down. To a first order of approximation, this is true. Rational information processing is conducted consciously, meaning we are both aware of it and attentive to it, and our qualitative and intuitive information processing is conducted nonconsciously outside our awareness and attention. But they are not the same, so we need to look closer at what they mean to identify the differences. Moving to the next level of detail, we are conscious of all our rational thought because if we weren’t consciously aware of the causal links holding rational ideas together, then they wouldn’t qualify as rational. However, we are also fully conscious of what our qualitative and intuitive minds have to say to us. In fact, it seems to us consciously that they are only there to provide us with information because our conscious selves seem to us to be running the show. Indeed, the conscious mind does coordinate top-level control, but this is not at all the same as saying it is running the show. The top level does make key decisions, but it sees the world through rose-colored glasses, which is to say from a perspective where the things it gets to make decisions about are appropriate for it. At the top level, the mind doesn’t really need to know it exists to perpetuate the species; it only needs to interact with the world in such a way that that happens. Our actions clearly align with that goal, but it is hard to provide an entirely rational explanation for them. That we struggle for our own survival is arguably rational enough, but that we should struggle so mightily to see our offspring succeed, even to our own detriment, seems irrational. Why should we care what happens when we’re dead? 
Only rather convoluted deductive models such as those provided by religion and evolution can make propagation seem worthwhile. But whether we embrace a rational explanation for it or not (and other animals certainly don’t), we will procreate and protect our young. This suggests that much of what we do is not so much under conscious control as conscious assistance, as our deeper inclinations about what we should spend our time doing are provided by the qualitative and intuitive minds.

The above implies that the nonconscious mind may be using any number of qualitative and intuitive approaches to control behaviors for which the rational mind is not consulted, presumably because conscious oversight is not helpful. Blinking and breathing generally fall in this category. When we move about, we don’t need to dwell on how each muscle must behave. While we can consciously control these things, they work without conscious control and even conscious awareness until there is a problem or need for conscious intervention. Instead, we can think about what we want from the fridge while our bodies breathe, blink, and move us there essentially on autopilot. Even at the top level, our inclination to eat stems from the need to survive and our inclination to flirt stems from the need to procreate, which raises the question of whether the rational mind is merely observing and only “rationalizes” that it plays an active role while the qualitative and intuitive minds really pull the strings. But for humans, at least, the rational mind has an integral part to play, though each part can be said to contribute its own kind of control. While chance plays enough of a role in evolution that no outcome can be said to be inevitable, the functional and cognitive ratchets have strongly favored changes that bring greater functionality, and intelligence gives animals more effective ways to do things. But in order to empower reason to supersede instinct, evolution has had to gradually release humans from the hold of many instincts, which is a two-edged sword. On the plus side, it gives us the power to make “choices”, which are evaluations that draw on our learned experience to select among many possible courses of action. On the minus side, it forces us to spend years learning how to make good choices in the absence of instinct to guide us. Humans must learn much more than other animals to survive, which leaves us helpless for much longer as well.

Looked at from another perspective, conscious experience creates a simplified internal model of the world which I previously called the mind’s-eye world. The mind’s-eye world is like a cartoon relative to the physical world because it outlines only the features of interest to the mind and glosses over the rest. Concepts and deductive models are the high-level denizens of that world, but qualia and subconcepts also live there, though their presence has been “filtered” by nonconscious processes into a form consciousness can understand. Consciousness depends on good memory to store and retrieve conceptual thoughts just as it does subconceptual thoughts, but this reliance on memory, which can be guided but not fully controlled by conscious effort, passes some of our conscious control to the intuitive mind. But then, many other processes that support our conscious thought also depend on nonconscious mechanisms. Analogously, when we watch a movie (or a cartoon), we know that hundreds of people worked on details we don’t understand to present a story to us in a way that is seamless and comprehensible. Of course, it is not fully seamless, but consciousness tries to fit available information into familiar models to make the cartoon “feel” real. When enough models line up, we feel like we comprehend what is happening. Different people can experience vastly different levels of comprehension in the same situation, depending on how many models they can line up and how powerful those models are. Generally speaking, people with mastery from thousands of hours of experience will understand something much better than amateurs.

Objectively, we can conclude that the conscious mind is summarized into the mind’s-eye world so that it can do things rationally. This inevitably includes certain biases in interpretation and application as it must work with information from the qualitative and intuitive minds, which are not rational themselves, but the net result is that logic does help and “Science works, bitches”1. The rational mind exercises considerable control from its perspective, just as the qualitative and intuitive minds do from theirs, or the heart and kidneys do from theirs. Every functional construct of life masters certain useful processes, but consciousness is special in two critical ways. First, it is the only process of life with which we have intimate familiarity. Second, it is the process that sits logically on top of all the others in a master control role. For these two reasons, we can single out consciousness, and rationality in particular within human consciousness, as the highest form of functional existence on the planet.

How do the qualitative and intuitive minds communicate with the rational mind? Qualia are mostly one-way; information just appears unbidden in our conscious minds from nonconscious processes called awareness, attention, senses, and emotion. But conscious feedback moves our bodies, which changes what qualia we experience, and our conscious thoughts also impact how we interpret qualia. The intuitive mind, too, is mostly one-way. It provides us with a live-stream of general familiarity with our surroundings. We just know from experience how most of the world around us works. But we can also request information from our intuition, most notably as memory. Although most of our memory is subconceptual, the learned perceptions that create familiarity, a very important fraction is conceptual, and we depend on nonconscious processes to store it and retrieve it along with the rest of our memory. We do this by consciously “trying” to remember something by thinking of details associated with what we want to remember. When we see an apple, we recognize it both subconceptually and up to conceptual identification as an apple, without trying consciously at all. But when we just think of fruits and apples, we can also “recognize” or recall thoughts based on these cues, and this uses conscious effort instead. Memory is so associative that we develop a very strong sense of how far our memory reaches even without pulling up details of that memory. Because unused memory fades over time, we are sometimes surprised to discover we have forgotten or at least have trouble recalling things that we know we know. So although our requests for memory must put our trust into brain processes we don’t understand, we understand pretty well what we can expect from them, so we understand how to use our memory, which is the only part that matters to us. 
As with all functionality, the only part that matters is the ability to use it, so it doesn’t matter to us that we don’t understand how our own minds work. (Of course, I see value in understanding how the mind works, which is a theme I will develop in part four).

So far as we can tell, the chaining of causes to effects only happens under conscious effort. This is arguably just because that is the specialized role of the rational mind, but that only repeats what is happening without saying why. We know that rational thought under conscious control is helpful, but we need reasons why rational thought outside conscious control would be unhelpful to support an argument that it doesn’t happen. Consider that if we could reason nonconsciously, then we would reach conclusions without knowing why and without considering interactions from a high level with other thoughts, deductive or not. It could be unsafe to act on conclusions reached entirely intuitively without considering knock-on implications. The “safe” level for intuitive thought is to make associations based on lots of experience. This doesn’t mean intuition is entirely nonconceptual. Our conceptual memories are stored with our subconceptual ones, and intuition can access both equally well. So if we have a lot of experience with a given conceptual model and that experience suggests an inductively-supported conclusion, then that is the same kind of associative thinking we do entirely with subconcepts. High-level intuitions can thus be seen to be based on experience that is heavily conceptually-based. This intuition doesn’t need to chain causes and effects or think in terms of concepts and rules at all; it only needs to see likelihoods based on amassed evidence. We can trust intuition that works this way because we know it is only making single-step inferences from a preponderance of the evidence, and it has saved us time because it sifted through all that evidence without requiring conscious attention on it. Quick, decisive action is often or even usually more helpful than slow, excessive care, and our brains are set up to give us the ability to act this way as much as possible. 
Conceptual models can easily produce surprising results unforeseen by experience, which gives them both great power and great risk. Having the conscious mind mediate their use in one place greatly reduces the risk of unintended consequences.2

As an example, if we fall through the ice in a river when we are alone, we will intuitively try to climb out because lots of experience tells us that this one step would solve our problem. If that doesn’t work, we need a plan — a chain of causes and effects that ends with us on land. Never having needed such a plan before, our intuition will just keep screaming, “Get out!”. If we could break the ice bit by bit all the way to the shore, we could then walk out. This actually saved someone’s life but didn’t occur to him until he stopped trying to get out and thought it through.


The process of creating information from experience is called learning. I’ve talked about information and information processing as if they are the same thing. Since the existence of information is tied to what it can do, they are the same from a functional perspective, but physically they are different. Physically, information is stored with the hope that it will prove useful, while information processing uses stored information and learns in the process from feedback just how useful it is, storing that as new information. From this example, one can see that not all information processing is learning. If you compute 37 × 43, did you learn anything? You already knew how to multiply, so if you just went through the motions and got the answer, you have not really learned anything; at best, you have reinforced prior learning. However, it is hard to do anything without learning something. For example, so long as you remember the answer, 1591, you have learned that one fact and don’t need to compute it. If you also noticed that 40 × 40 = 1600 is about the same, you have learned that approximating this way is pretty accurate. Finally, if you noticed that the result differed by 9, which is 3², and 37 and 43 are both 3 different from 40, you may have wondered if (40 − 3)(40 + 3) = (40 × 40) − (3 × 3) is a mathematical rule of some kind. In doing this, you have either remembered or taught yourself a little algebra, because (a − b)(a + b) simplifies to a² − b² for all a and b.
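The mental-math shortcut in this example can be checked in a few lines of Python:

```python
# Verify the shortcut: 37 * 43 = (40 - 3)(40 + 3) = 40*40 - 3*3
a, b = 40, 3

exact = 37 * 43             # the answer computed directly
approx = a * a              # rounding both factors to 40
identity = a * a - b * b    # the difference-of-squares correction

print(exact)     # 1591
print(approx)    # 1600, off by 9 (which is 3 squared)
print(identity)  # 1591, exact
```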

Both first- and second-order information record patterns about patterns with the hope that they will prove useful, but hope is not enough. I initially said that information “provides the answer to a question of some kind or resolves uncertainty” or, alternately, is the capacity to predict what will happen with better than random odds. Hope has nothing to do with it; information provides, resolves, and predicts. While it does not come with a guarantee of success, it does come with a guarantee that it will make a difference statistically (unless it is false or mistaken information). Information has technically been abstracted from information processing because it is concerned only with the capacity to predict, and this capacity need not, in theory, be tested to see if it is borne out statistically. Even so, an information processor must have existed to create the information and must exist again to use it, because otherwise the meaning of the information, and hence its existence, is lost. The existence of information is ultimately tied to its use and hence its function, so while it can be bottled up and stored independent of its function, information processing must have occurred to create it and must occur again to use it.

I point this out to emphasize again the ultimate inseparability of information from information processing. We have to keep this in mind because it changes our understanding of what information is. Information is never just the pattern that has been discovered or encoded; it is also the ways that pattern can be used to predict what will happen. Subconcepts learned from experience are not just impressions we have collected from being out in the world; they are feedback-reinforced impressions. What matters to us about our sensory, emotional, and cognitive experiences is not the patterns themselves, which are first-order effects, but the feedback we connect with them, which comprises second- and higher-order effects. Put another way, first-order effects can never be information by themselves; they must be accompanied by second- and higher-order feedback effects to give them value. My vision creates a first-order impression of colors and shapes that, by itself, is meaningless and thus devoid of information. The information comes from associating what I see through recognition to higher-order interpretations based on long experience. Recognition itself is an act of learning about current conditions, and this is vital because all conditions, ultimately, are current conditions. We can thus conclude that perception and the subconceptual cognition based on it depend heavily on learning from experience and that this first-order information is built using second- and higher-order feedback effects.

My multiplication example above shows that deduction can proceed without learning, and we often like to think of the conceptual knowledge we have acquired about the world as being a reliable and invariant store. But information, and hence knowledge, never really sits still. Concepts exist to idealize the world, and ideals are invariant, ideally. But the physical world is not ideal. To apply any deductive model, we must make many assumptions to fit the model to the circumstances, which involves many inductive compromises. Applying deductive conclusions back to the world also requires compromising assumptions. While we can and do develop conceptual models to help us apply conceptual models better, this fitting ultimately relies on the first-order information of experience, which means we need learning and experience to use concepts. Second- and higher-order conceptual information is built on a foundation of first-order subconceptual information. In practice, just as old, disused library books are weeded out and replaced by new volumes, our ideally invariant store of conceptual models is weeded as well based on use. Our minds reinforce and keep models that keep providing value and forget those that don’t. Science works much the same way on a larger scale. It is not enough for theories to be expounded; they must continue to provide value or they will become irrelevant and forgotten.

The Structure of Thought

Qualia, subconcepts, and concepts collectively comprise “thoughts”. Thought is not a rigid term but encompasses any kind of experience passing through our conscious awareness. We count sufficiently reliable thoughts as knowledge (I will discuss later what qualifies as sufficient). Knowledge can be either specific or general, where specific knowledge is tied to a single circumstance and general knowledge is expected to apply to some range of circumstances. Qualia are always specific and take place in real time, but subconcepts and concepts (including the memory of qualia) can be either specific or general, and can either take place in real time as we think about them or be remembered for later use. Though qualia thus constitute much of our current knowledge, they comprise none of our learning, experience, or long-term knowledge. The interpretation of qualia does depend to some degree on experience rather than being entirely innate, but at that point subconcepts and concepts have taken over from qualia. Qualia are strictly immediate knowledge while subconcepts and concepts are learned knowledge. Usually when I use the words thoughts and knowledge I will be referring to learned knowledge, i.e. subconcepts and concepts, and I will instead use the word feelings to refer to qualia.

All feelings and thoughts create information from patterns discovered in real time, that is, through experience and not through evolution, even though they all leverage mental and cognitive mechanisms that evolved. Feelings are the “special effects” in the theater of consciousness that deliver awareness of phenomena through custom channels, while thoughts (i.e. subconcepts and concepts) have no sensation or custom feel. When we bring subconcepts and concepts to mind through our memory, we can also recall qualia we have stored with them. This “second-hand” feeling is much less vivid, but it likely sends signals through the same channels first-hand qualia use to generate some measure of feeling. Knowledge doesn’t feel like anything; all it does is bring other knowledge to mind. Thoughts connect to other thoughts. As we think about any thought, we will remember related thoughts connected to it through our memory. We can focus on any such aspect and then remember more about it via free-association, following one thought to another by intuition or whim. Much of our thinking is goal-oriented, because the purpose of the conscious mind is to provide more effective control of the body, and the special value of deductive thinking is to consider different chains of events that can help with that. So beyond free association, we will sift through and prioritize our goals and then think through ways to achieve them. Our whole minds are set up to prioritize by devoting more attention to matters that are considered more urgent, so this heavily influences the order in which we assemble deductive strategies. However, we can also contemplate matters by digging depth-first into deeper levels of detail or breadth-first by reflecting across top-level associations first.

Our thinking process must divide and conquer a problem using a practical chain of events that uses an overall deductive model that fits the current circumstances well. Language has been a useful tool for organizing thoughts and describing causal chains for so long that we use an inner voice to mediate many of our personal thoughts, especially those that deal with abstract subjects which would be hard to keep straight without words and sentences as anchoring points. Since language necessarily connects thoughts serially, it makes us quite expert at reducing our thoughts to serial chains. Since language arose at least in large part to facilitate cooperation, it is arguably used more to convince others to think the way you want them to than to convey information. Every communication is a balance of persuasion and explanation that pits pros and cons against causes and effects. As a scientist, I am more interested in developing good explanations than in selling them, but if I don’t sell them well then I can’t do much good. These are the two faces of information in another guise — the patterns and how to use them. The best deductive logic is useless if it can’t be practically applied to real situations, so, across different levels, roughly equal effort must be spent prioritizing and logically connecting.

I have made much of the distinction between subconcepts and concepts, but I haven’t defined them very well. It turns out that concepts can be defined pretty well, but subconcepts can’t. The reason is that definition is a conceptual notion; once something is well-defined, it ceases to be a subconcept and becomes a concept. Definition allows an idea to be placed into a conceptual model or framework connected by causes and effects. Everything we know that is outside such frameworks just floats in our vast neural sea of subconceptual experience. We can’t talk about specific subconcepts; we can only say that much of our understanding of trees or apples, for example, is based on having seen many of them and having developed many impressions about things that are tree-like or apple-like. These are vast networks of knowledge that would be severely compromised if they were reduced to a few words. To describe or define something does pick out a small number of fixed associations between it and other defined entities. Just as the mind’s-eye world contains a cartoon version of physical reality, so too are concepts cartoon versions of subconcepts, which go almost infinitely deeper and wider. That said, I am not going to delve much more into the nature of subconcepts but will move directly to concepts.


Conceptual thinking, as I discussed in the chapter on dualism, is based on deductive reasoning. Deduction establishes logical models, which are sets of abstract premises, logical rules, and conclusions one can reach by applying the rules to the premises. Logical models are closed, meaning we can assume their premises and rules are completely true, and also that all conclusions that follow from the rules are true (given binary rules, though the rules don’t have to be binary). In our minds, we create sufficiently logical frameworks called conceptual models, for which the underlying premises are concepts. Concepts are abstract entities which have two parts in the mind: a handle with which we refer to the concept and some content that tells us what it means. The concept’s handle is a reference we keep for it in our minds, like a container. Concepts are often named by words or phrases, but we know many more concepts than we have named with words, including, for example, a detailed conceptual memory of events. From the perspective of the handle, the concept is fully abstract and might be about anything.

The concept’s meaning is its content, which consists of one or more relationships to other concepts. At its core, information processing finds similarities among things and applies those similarities to specific situations. Because of this, the primary feature of every concept’s content is whether it is a generality or a particular. A generality or type embraces many particulars that can be said to be examples of the type. The generality is said to be superordinate to the subordinate example or instance across one or more variable ranges. Providing a value for one of those ranges creates an example or instance called a token of the type, and if all ranges are specified one arrives at a particular, which is necessarily unique because two tokens with the same content are indistinguishable and so are the same token. A possible neural-net implementation is for many possibly far-flung neurons to represent any given concept’s handle, and their connections to other neurons to represent that concept’s content. Generalities are always abstract, while particulars can be either concrete or abstract, which, in my terminology means they are either about something physical or something functional. A concrete or physical particular will correspond to something spatiotemporal, i.e. a physical thing or event. Each physical thing has a noumenon (or thing-in-itself) we can’t see and phenomena that we can. From the phenomena, we create information (feelings, subconcepts, and concepts) which can be linked as the concept’s content. Mentally, we catalog physical particulars as facts, which is a recognition that the physical circumstance they describe is immutable, i.e. what happened at any point in space and time cannot change. Note that concrete particulars are still generalities with respect to the time dimension, because we take physical existence as something that persists through time. 
However, since concrete particulars eventually change over time, we model them as a series of particulars linked generally as if the thing was the “same” or persisted over time. What happens at a given point in space and time is noumenal, but we only know of it by aligning our perceptions of phenomena with our subconcepts and concepts, which sometimes leads to mistaken conclusions. We reduce that risk and establish trust by performing additional observations to verify facts, and from the amount of confirming evidence we establish a degree of mathematical certainty about how well our thoughts characterize noumena. Belief is a special ability which I will describe later that improves certainty further by quarantining doubt.
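The type/token relationship described above can be sketched as a toy data structure. This is purely illustrative; the class and names are my own invention, not a claim about how minds or neurons implement concepts:

```python
# Toy sketch of the type/token relationship: a generality (type) leaves
# one or more variable ranges open; supplying a value for every range
# yields a particular (token). All names here are illustrative only.

class ConceptType:
    def __init__(self, name, open_ranges):
        self.name = name                # the concept's "handle"
        self.open_ranges = open_ranges  # dimensions left unspecified

    def instantiate(self, **values):
        missing = self.open_ranges - set(values)
        if missing:
            # Some ranges are still open: this is a narrower type, not a token.
            raise ValueError(f"still a generality; unspecified: {sorted(missing)}")
        return (self.name, tuple(sorted(values.items())))

# A concrete particular is pinned down spatiotemporally.
EVENT = ConceptType("EVENT", {"place", "time"})
token = EVENT.instantiate(place="kitchen", time="2020-01-01T12:00")

# Two tokens with the same content are indistinguishable -- the same token.
assert token == EVENT.instantiate(place="kitchen", time="2020-01-01T12:00")
```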

An abstract or functional particular is any non-physical concept that is specific in that it doesn’t itself characterize a range of possible concepts. The number “2” is an abstract particular, as it can’t be refined further. A circle is also an abstract particular until we introduce the concept of distance, at which point circle becomes a type whose radius can vary. A circle with a known radius is then a particular. If we introduce location within a plane, we would also need the coordinates of the circle’s center to make it into a particular again. So we can see that whether an abstract concept is particular or not depends on what relationships exist within the logical model that contains it. The number x such that x+2=4 is variable until we solve the equation, at which point we see it is the particular 2. The number x such that x^2=4 is variable even after we solve the equation because it can be either -2 or 2. So for functional entities, once all variability is specified within a given context one has an abstract particular. Mathematics lays out sets of rules that permit variability that then let us move from general to particular mathematical objects. Deductive thought employs logical models that permit variability and can similarly arrive at particulars. For example, we can conceive of altruism as a type of behavior. If I write a story in which I open a door for someone in some situation, then that is a fully specified abstract particular of altruism. So just as we see the physical world as a collection of concrete particulars that we categorize using abstract generalities about concrete things, we see the mental world as a set of abstract particulars categorized by abstract generalities about abstract things. Thus, both our concrete and abstract worlds divide nicely into particular and abstract parts. 
Concrete particulars can be verified with our senses (if we can still access the situation physically), but abstract particulars can only be verified logically, which means an appropriate logical model must be specified. In both cases, we can remember a particular and how we verified it.
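The equation examples above can be made concrete in a few lines of code, restricting attention to small integers for illustration:

```python
# The text's examples: "x such that x + 2 == 4" specifies a unique
# abstract particular, while "x such that x**2 == 4" still leaves a
# range of possibilities, so it remains variable within the model.
candidates = range(-10, 11)

linear = {x for x in candidates if x + 2 == 4}
quadratic = {x for x in candidates if x * x == 4}

print(linear)     # {2}: fully specified, a particular
print(quadratic)  # {-2, 2}: two values remain, still variable
```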

While our senses send a flood of information to our minds which inherently form concrete particulars, the process of recognition aligns those things based on similarities to many subconcepts and concepts, which are thus abstract types. Subconcepts are not further relevant here, because we only use innate associative means to manipulate them, but concepts link to other concepts through relationships described by conceptual models, which themselves vary across a range of possibilities. What does the content of a concept look like? The surprising fact we have to keep in mind is that concepts are what they can do — their meaning is their functionality. So we shouldn’t try to decompose concepts into qualities or relationships but instead into units of purpose. Deductive models can provide much better control than inductive models because they can predict the outcomes of multistep processes through causal chains of reasoning, but to do that their ranges of variability have to align closely with the variability experimentally observed in the situations where we hope to apply them. When this alignment is good, the deductive models become highly functional because their predictions tend to come true. Viewed abstractly then, the premises and rules of deductive models exist because they are functional, i.e. because they work. So concepts are not just useful as an incidental side effect; being useful is fundamental to their nature. This is what I have been saying about information all along — it is bundled-up functionality.

Given this perspective, what more can we say about content? Let’s start simply. The very generic concept in this clause’s use of the phrase “generic concept”, for example, is an abstract generality with no further meaning at all; it is just a placeholder for any concept. Or, the empty particular concept in this clause is an example of an abstract particular with no further meaning, since it is the unique abstract particular whose function is to represent an empty particular. But these are degenerate cases; almost every concept we think of has some real content. A concrete particular concept includes spatiotemporal information about it, as noted above, and all our spatiotemporal information comes originally from our senses as qualia. We additionally gain experience with an object which is generalized into subconcepts that draw parallels to similar objects. Much of the content of concrete particulars consists of links to feelings and subconcepts that remind us what it and other things like it feel like. Each concrete particular is also linked to every abstract generality for which it is a token. Abstract generalities then indirectly link to feelings and subconcepts of their tokens, with better examples forming stronger associations. What does it mean to link a concept to other feelings (sensory or emotional), subconcepts, or concepts? We suspect that this is technically accomplished using the 700 trillion or so synapses that join neurons to other neurons in our brains3, which implies that knowledge is logically a network of relationships linking subconcepts and concepts together and from there down to feelings. Our knowledge is vast and interconnected, so such a vast web of connections seems like it could be powerful enough to explain it, but how might it work? Simplistically, thinking about concepts could activate the feelings and thoughts linked by their contents by activating the linked neurons. 
Of course, it is more complicated than that; chiefly, activation has to be managed holistically so that each concept (and subconcept and feeling) contributes an appropriate influence on the overall control problems being solved. The free energy (surprise-minimization) principle is one holistic rule that helps provide this balance, but in more detail than that are attention and prioritization systems. But for now, I am trying to focus on how the information is organized, not how it is used.
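To make this organizational picture concrete, here is a deliberately simplistic spreading-activation sketch. The graph, weights, decay rule, and threshold are all hypothetical, not a model of real neural circuitry:

```python
# Simplistic spreading-activation sketch: concepts are nodes, a
# concept's content is its weighted links, and thinking about a concept
# partially activates what it links to, weakening with distance.
# The graph and all numbers below are hypothetical.

links = {
    "APPLE": {"FRUIT": 0.9, "RED": 0.6, "PIE": 0.5},
    "FRUIT": {"TREE": 0.7},
    "PIE":   {"BAKING": 0.8},
}

def activate(start, decay=0.5, threshold=0.05):
    """Spread activation outward from one concept node."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in links.get(node, {}).items():
            signal = activation[node] * weight * decay
            # Only propagate signals that are strong and improving.
            if signal > activation.get(neighbor, 0.0) and signal > threshold:
                activation[neighbor] = signal
                frontier.append(neighbor)
    return activation

print(activate("APPLE"))
```

Even this toy version shows why activation must be managed holistically: without the decay and threshold, every thought would eventually light up everything connected to it.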

Central to the idea of concepts is their top-down organization. To manage our bodies productively, we, that is our minds as the top-level control centers of our brains, have to look at the world as agents. When we first start to figure the world out, we learn simple categories like me and not-me, food and not-food, safe and not-safe. Our brains are wired to pick up on these kinds of binary distinctions to help us plan top-level behavior, and they soon develop into a full set of abstract generalities about concrete things. It is now impossible to say how much of this classification framework is influenced by innate preferences and how much was created culturally through language over thousands of generations, because we all learn to understand the world with the help of language. In any case, our framework is largely shared, but we also know how to create new personal or ad hoc classifications as the need arises. For categories and particulars to be functional we need deductive models with rules that tell us how they behave. Many of these models, too, are embedded in language and culture, and in recent centuries we have devised scientific models that have raised the scope and reliability of our conceptual knowledge to a new level.

Some examples of concepts will clarify the above points. The concept APPLE (all caps signifies a concept) is an abstract generality about a kind of fruit and not any specific apple. We have one reference point or handle in our minds for APPLE, which is not about the word “apple” or a thing like or analogous to an apple, but only about an actual apple that meets our standard of being sufficiently apple-like to match all the variable dimensions we associate with being an APPLE. From our personal experience, we know an APPLE’s feel, texture, and taste from many interactions, and we also know intuitively in what contexts APPLEs are likely to appear. We match these dimensions through recognition, which is a nonconscious process that just tells us whether our intuitive subconcepts for APPLE are met by a given instance of one. We also have deductive or causative models that tell us how APPLEs can be expected to behave and interact with other things. Although each of us has customized subconceptual and conceptual content for APPLE, we each have just one handle for APPLE and through it we refer to the same functionality for most purposes. How can this be? While each of us has distinct APPLE content from our personal experiences, the functional interactions we commonly associate with apples are about the same. Most generally, our functional understanding of them is that they are fruits of a certain size eaten in certain ways. In more detail, we probably all know and would agree that an APPLE is the edible fruit of the apple tree, is typically red, yellow or green, is about the size of a fist, has a core that should not be eaten, and is often sliced up and baked into apple pies. We will all have different feelings about their sweetness, tartness, or flavor, but this doesn’t have a large impact on the functions APPLEs can perform. 
That these interactions center around eating them is just an anthropomorphic perspective, and yet that perspective is generally what matters to us (and, in any case, not so incidentally, fruits appear to have evolved to appeal to animal appetites to help spread their seeds). Most of us realize apples come in different varieties, but none of us have seen them all (about 7500 cultivars), so we just allow for flexibility within the concept. Some of us may know that apples are defined to be the fruit of a single species of tree, Malus pumila, and some may not, but this has little impact on most functional uses. The person who thinks that pears or apple pears are also apples is quite mistaken relative to the broadly accepted standard, but their overly generalized concept still overlaps with the standard and may be adequate for their purposes. One can endlessly debate the exact standard for any concept, but exactness is immaterial in most cases because only certain general features are usually relevant to the functions that typically come under consideration. Generality is usually more relevant and helpful than precision, so concepts all tend to get fuzzy around the edges. But in any case, as soon as irrelevant details become relevant, they can simply be clarified for the purpose at hand. Suppose I have an apple in my hand which we can call APPLE_1 for the purposes of this discussion. APPLE_1 is a concrete particular or token of an APPLE, and we would consider its existence a fact based on just a few points of confirming evidence.

The fact that a given word can refer to a given concept in a given context is what makes communication possible. It also accounts for the high level of consistency in our shared concepts and the accelerating proliferation of new concepts through culture over thousands of years. The word “apple” is the word we use to refer to APPLE in English. The word “apple” is itself a concept, call it WORD_APPLE. WORD_APPLE has a spelling and a pronunciation and the content that it is a word for APPLE, while APPLE does not. We never confuse WORD_APPLE with APPLE and can tell from context what content is meant in any communication. Generally speaking, WORD_APPLE refers only to the APPLE fruit and the plant it comes from, but many other words have several or even many meanings, each of which is a different concept. Even so, WORD_APPLE, and all words, can be used idiomatically (e.g. “the Big Apple” or “apple of my eye”) or metaphorically to refer to anything based on any similarity to APPLE. We usually don’t name instances like APPLE_1, but proper nouns are available to name specific instances as we like. We don’t have specific words or phrases for most of the concepts in our heads, either because they are particulars or because they are generalities that are too specific to warrant their own words or names. A wax apple is not an APPLE, but it is meant to seem like an APPLE, and it matches the APPLE content at a high level, so we will often just refer to it using WORD_APPLE, only clarifying that it is a different concept, namely WAX_APPLE, if the functional distinction becomes relevant.
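The word/concept split can be illustrated with a toy lookup; the mapping and the concept names here are invented for illustration:

```python
# Toy illustration of the word/concept split: the word "apple"
# (WORD_APPLE) is one concept, with spelling and pronunciation; the
# fruit (APPLE) is another. Context selects which concept a word
# refers to. All structures and names below are illustrative.

lexicon = {
    "apple": {
        "default": "APPLE",                 # the fruit concept
        "the Big Apple": "NEW_YORK_CITY",   # idiomatic use
        "apple of my eye": "CHERISHED_ONE", # another idiom
    },
}

def refer(word, phrase=""):
    """Resolve a word to a concept handle, preferring idiomatic context."""
    senses = lexicon.get(word, {})
    return senses.get(phrase, senses.get("default"))

print(refer("apple"))                   # APPLE
print(refer("apple", "the Big Apple"))  # NEW_YORK_CITY
```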

Some tokens seem to be the perfect or prototypical exemplars of an abstract category, while others seem to be minor cases or only seem to fit partially. For example, if you think of APPLE, a flawless red apple probably comes to mind. If you think of CHAIR, you are probably thinking of an armless, rigid, four-legged chair with a straight back. Green or worm-eaten apples are worse fits, as are stools or recliners. Why does this happen? It’s just a consequence of familiarity, which is to say that some inductive knowledge is more strongly represented. All the subcategories or instances of a completely impartial deductively-specified category are totally equivalent, but if we have more experience with one than another, then that will invariably color our thoughts. Exemplars are shaped by the weighting of our own experience and our assessment of the experience of others. We develop personal ideals, personal conceptions of shared ideals, and even ideals customized to each situation at hand that balance many factors. Beyond ideals, we develop similar notions for rarities and exceptions. Examples that only partially fit categories only demonstrate that the category was not generalized with them in mind. Nothing fundamental can be learned about categories by pursuing these kinds of idiosyncratic differences. Plato famously conceived the idea that categories were somehow fundamental with his theory of Forms, which held that all physical things are imitations or approximations of ideal essences called Forms or Ideas which they in some sense aspire to. I pointed out earlier that William of Ockham realized that categories were actually extrinsic. They consequently differ somewhat for everyone, but they also share commonalities based on our conception of what we have in common.