The Science of Function

Contents

The Essence of Function
Function in Concepts
Function in the Sciences
Function in Non-living Things
Function in Living Things
Function in the Mind
Natural and Artificial Information

The Essence of Function

I’ve made a case for functional existence being independent of physical existence and said that it came into existence when living things began managing information through DNA. But speaking more abstractly, what is the essential character of a functional entity? The essence of a thing is also called the thing itself or its noumenon. The noumenon of a physical thing is its form in spacetime. The noumenon of a functional thing is its function, which is what it can do. Functional things are independent of space or time, but information management systems make it possible for some physical things to perform functions. I need to talk a bit more about those systems before we can talk about the functional entities they manage. Two different classes of information management systems exist that achieve functionality in two different ways. Biological organisms collect information in DNA. DNA can’t effect permanent functional changes more than once per generation because the whole organism uses a single set of DNA replicated from its first cell, the zygote. So DNA function is, in this sense, static over an individual’s lifetime (though how and when it is expressed varies over time). Animals also manage information using brains. The brain is built using genetic plans and is able to perform many skills innately. But using these skills, it also collects real-time information and stores it in memory, which lets the brain’s function grow from experience dynamically over an individual’s lifetime. This real-time identification and storage of patterns that improve functionality is called learning. In traditional evolutionary theory, DNA doesn’t learn but instead depends on random mutations to improve function by pure chance. I will review a new theory further down that proposes that DNA must learn and does, albeit slowly. But the more significant point regarding minds is that brains learn. 
Information is dynamically assessed neurochemically via different kinds of internal feedback loops, producing better quality information for use going forward. Sometimes these loops take the form of conscious simulations in mental models, and these comprise what we call an understanding of the underlying information. And, not incidentally, determining the essential character of functional entities means gaining an understanding of them from mental models. So we can conclude that biological organisms manage information genetically over multigenerational time and brains manage information neurochemically during a single generation.

We will use this dynamic capacity of our minds to learn to develop an understanding of physical and functional noumena. Our understanding is itself a functional entity, not to be confused with the objects under study. Understandings are models that describe aspects of the objects under study that are knowable from observations or considerations of those objects but are not the objects themselves. In other words, understanding can be said to be a phenomenal or observational account of a noumenal entity that characterizes it without being it. Understanding is fundamentally a phenomenal, indirect, “about” kind of functional entity that refers to another entity.

Functional entities that can work independently of other functional entities are autonomous, and those that are component functions of autonomous entities are dependent. Only organisms are completely autonomous entities, though in the long run they need an ecosystem including other organisms. Brains, minds, civilizations, and computer programs are also autonomous entities, though they build on each other and ultimately on organisms. The dependent functions of organisms include the jobs of everything from proteins to cell organelles to organs. For brains, they include a variety of nonconscious bodily functions such as heart rate, digestion, and rate of respiration, as well as the many nonconscious functions needed to support the mind, such as 3-D vision processing and our knack for language. For minds, they include the capacities created by feelings, subconcepts, and concepts. For civilizations, they include the functions served by laws, institutions, and conventions. And for computer programs, they include whatever functions the programs define. Any discussion we have about the dependent functions of any of these systems will itself be conducted using dynamic functions of our minds, which is to say ideas, which are composed of feelings, subconcepts, and concepts. So although function is not itself physical, functions do manifest physically in these kinds of information management systems by exploiting the uniformity of nature to gather information using feedback to achieve functional goals in an otherwise indifferent universe. This natural and perhaps inevitable consequence of the laws of nature means that both physical and functional existence are parts of the natural world, not just the more visible and measurable physical existence. While it is a given that information management systems are functional entities, most of my discussions will concern the dependent functions they perform.
Regardless of how much we know about the dependent functions of organisms, brains, minds, civilizations or computer programs, we can at least say that they are functional and that our understanding of them is functional, while rocks, streams and weather patterns are inanimate, are not under information management, and are not functional entities.

Our understanding of physical noumena is necessarily indirect because all knowledge of the physical world is mediated through our senses. Any understanding we develop of the mind as a natural phenomenon is also necessarily indirect because it must be mediated through our cognitive sense of it. Consequently, we can never know the “true” nature of natural noumena; we only know them through their phenomena. Physical phenomena can be observed using our senses, but can also be measured with much greater objectivity and accuracy using instruments. We can describe functional entities, too, by “observing” their phenomena, by which I mean considering their functions by thinking about them rather than using our senses. Since both kinds of observation are performed by our minds and hence are functional exercises, we do them and record them in much the same way. Specifically, the process involves collecting data and looking for patterns in the data that have been demonstrated by feedback to help in making useful predictions. The process has two feedback loops, one for forming hypotheses or generalizations and one for testing them. This generalized approach to understanding things is also the basis of the scientific method. The data and patterns we discern can be said to characterize or represent the object under observation but are quite different from it, even though we usually think of them as being the same thing. We do fully understand that our internal representation of something is a wholly different kind of thing than what it refers to, but references like this are only useful to us if we act on them as being equivalent, so we usually ignore the distinction. We have an innate capacity, called belief, for managing our degree of trust in this relationship, which I will discuss later.
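The two-feedback-loop process just described, one loop for forming a generalization and one for testing it, can be sketched in code. This is only an illustrative toy under invented assumptions: the observations, the threshold rule, and the function names are all made up for the sketch, not drawn from any real method.

```python
import random

random.seed(0)

def observe(n):
    """Collect observations: each pairs a measurement with an outcome.
    (Toy world: the outcome is True when the measurement exceeds 5.)"""
    data = []
    for _ in range(n):
        x = random.uniform(0, 10)
        data.append((x, x > 5))
    return data

def propose(data):
    """Loop 1 (generalization): hypothesize a dividing threshold
    halfway between the lowest positive and highest negative case."""
    positives = [x for x, y in data if y]
    negatives = [x for x, y in data if not y]
    return (min(positives) + max(negatives)) / 2

def accuracy(threshold, data):
    """Loop 2 (testing): score the hypothesis against fresh observations."""
    return sum((x > threshold) == y for x, y in data) / len(data)

hypothesis = propose(observe(20))           # form a generalization...
score = accuracy(hypothesis, observe(100))  # ...then test it on new data
assert score > 0.5  # "better than even odds": the mark of information
```

In a real inquiry the two loops alternate indefinitely, with failed tests prompting revised hypotheses; the sketch runs each loop once to keep the shape visible.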

Not all functional noumena are natural entities that can be understood only through observation. Some can be known directly, namely those that are true by definition or tautological. If one names a concept with a word or phrase, then the fact that the name references the concept is true by definition. When we define a formal system and the rules that drive it, we have established a number of necessary truths. The definitions and rules are true by definition, and conclusions we can logically prove within these systems are necessarily true by deduction, which gives us direct access to noumenal knowledge, unlike the inductive knowledge we create from observations or considerations. Our conceptual understanding of the world is much more powerful than our subconceptual understanding because it organizes our knowledge into a hierarchy or network of causal models that give us much deeper and more accurate expectations. But for this to work, we need to conjecture conceptual models that approximately describe how the world works and then align those models with observed phenomena. Our knowledge of our own models is direct and deductive, but it only approximates reality itself inductively. So when we think about dynamic functions, we need to distinguish whether we are talking about our deductive, conceptual models or about the way we apply deduction and induction to the physical world, which is a necessary step for function to be realized physically. It is probably most fair to say that the functional entities of the mind have both conceptual and subconceptual aspects. Conceptually, they are bound to definitions and logical models that clearly delineate relationships, and subconceptually they are bound to a large pool of impressions formed from experience that qualify the kinds of circumstances in which the entities are likely to be useful.
Once we have established a conceptual framework that is appropriately anchored to the world with subconceptual intuitions, we can reason with it deductively, quite abstracted from events in the physical world. We employ subconceptual and conceptual heuristics to keep our models aligned to physical-world circumstances to ensure our conclusions are relevant. For example, we consider how similar examples fared, and we look at how well each of our concepts fits (both on a subconceptual and conceptual level) to the situation at hand.

Let’s get back to our underlying question, “What is the essential character of a functional entity?,” by which we mean a dependent functional entity managed by an information management system. Most fundamentally, we know it has the character of employing information to achieve a function, and information is patterns in data that can be used to predict what will happen with better than even odds. But secondarily, and most critically to our calling it a function, a function must achieve a purpose. Functional things must have a telos, meaning a purpose or “final cause”. Aristotle rightly concluded that the purpose of an acorn is to grow into an oak tree. We now also know that the purpose of the oak tree is to create acorns, completing the cycle. Oak genes manage the information that propagates them forward in time by building trees to metabolize matter and energy and acorns to multiply. The trunk of the tree is tall and strong to provide a competitive edge over other plants. This and many other dependent functions are served by the oak organism. These purposes are intrinsic to the informational process that created the oak gene line. That genetic information confers many functional features to oak trees that have general-purpose utility across the range of challenges oaks have faced over time. Function or purpose is always general-purpose because function is a probabilistic enterprise. It is not about any one action, but about a general capability to do certain kinds of actions. Both the actual functions served (noumenally) and our understandings of them (phenomenally) only reflect likelihoods and have no ultimate meaning beyond that. However, although function is always general, that generality can become focused over a very small domain, which makes it very specific within that domain. For example, if we know a gene codes a certain protein, and we know a function that protein performs, we can say we understand the function of the gene. 
It doesn’t prove the function we found is the only function, but most proteins probably do serve one biological function, though this function might be used in a variety of ways. This is because proteins are broadly classified as fibrous, globular, or membrane proteins, and this classification indicates their likely function. Fibrous proteins, like collagen, generally provide structural support. Globular proteins generally engage in chemical activities like catalyzing, transporting, and regulating. Membrane proteins are embedded in or attached to membranes, typically to serve as receptors. A protein that doesn’t benefit the organism will quickly be lost because its gene won’t be preserved if it mutates. My point is that biological functions do become very specific, even though viewed closer they are necessarily still general.
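The rough three-way classification above amounts to a lookup from structural class to likely role. A trivial sketch of that heuristic (the class names and role summaries come from the text; the mapping is a guess about likelihood, not a biochemical claim):

```python
# Broad structural classes and their typical roles, as described in the text.
PROTEIN_CLASS_ROLES = {
    "fibrous":  "structural support (e.g. collagen)",
    "globular": "chemical activity: catalysis, transport, regulation",
    "membrane": "embedded in or attached to membranes, often as receptors",
}

def likely_role(protein_class):
    """Guess a protein's likely function from its broad class.
    Class suggests function with better than even odds; it does not prove it."""
    return PROTEIN_CLASS_ROLES.get(protein_class.lower(), "unknown class")

assert likely_role("fibrous").startswith("structural")
```

The point of the sketch is the epistemic shape: a coarse category narrows the space of likely functions, which is exactly the "general made specific over a small domain" pattern described above.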

Living things are principally functional entities whose bodies are their physical manifestations. Although DNA is physical and the proteins it codes are physical, their functions are not physical; functions are nonphysical strategies organisms use to survive well. The functions are produced and preserved by genes, which encode either functional proteins or control proteins. Traditionally, control genes were thought to be limited to regulating gene expression, which, akin to the executive branch of government, enforces a fixed set of regulatory rules to increase or decrease the production of functional proteins. More recent thinking suggests they may also create new genes for future generations, akin to a legislative branch. Natural selection is the judicial branch that evaluates how well the other branches do. Both functional and control proteins typically perform one chemical activity, which usually reveals the gene’s primary purpose. Functional proteins are either structural building blocks like collagen or intermediaries that facilitate other chemical reactions, like enzymes. We have identified control proteins that modulate many steps of gene expression, and we are inclined to consider such an identified control function as the gene’s primary purpose. We can’t rule out secondary or tertiary useful chemical activities for any protein, and until we discover such additional functions, we can’t be sure we know all the purposes of the gene. Much more significantly, control proteins form highly interdependent regulatory networks, which creates a combinatorial explosion of possible functions or functional effects from each gene. Functions can still be identified, but as the models become more complex and interdependent, our grasp or degree of certainty over all the implications necessarily goes down.
But an understanding is always possible, forming a link between physical mechanisms and functional consequences, even though its predictive range is limited by our knowledge of the physical and functional phenomena and the models we constructed to join them.

Consider locomotion, a strategy animals commonly need. Locomotion is complex enough that it can’t be explained simply by studying genes and proteins. Yes, the proteins that underlie the mechanics of movement can be explained, but the reason certain ways of moving evolve and others don’t depends on macro-level functional forces, that is, the strategic value of moving. A paramecium is covered with simple cilia, hairlike organelles which act like tiny oars to move the organism in one direction.1 A model to explain how and why the paramecium moves the way it does could be well-informed by knowledge of every gene and protein involved and knowledge of the whole life cycle, but it would still be approximate because no model can capture all the interdependent effects that created its genome. Those effects selected for movement capabilities that worked well across the range of situations paramecia have encountered. Evolution essentially operated directly on the movement strategy or function, for which the underlying mechanisms were just carriers. While any one feedback event is physical, its impact is to modify the mechanism in ways that only make sense in terms of their impact on generalized outcomes (functions). This has the effect of creating organizational structures out of physical matter that capture and use feedback loops in increasingly elaborate ways to produce ever more functional outcomes. Perhaps you can accept that feedback creates increasingly complex structures by providing a self-referential way of building on complexity, but don’t see it as being purposeful. But seeing purposeful as meaning goal-oriented is anthropocentric. Really, purposeful means using information in ways that are likely to produce results similar to results seen before. One never actually reaches a “goal”; one only achieves a state that is similar to a goal. So purposeful really just means using heuristics with the expectation that they will tend to help.
So I define function and purpose in terms of a process that uses information to predict and achieve useful outcomes, and whatever solutions result constitute the noumenal function. The ways we understand the noumenal function are phenomenal, both because we only see it through its phenomena and because understanding itself is a phenomenon. To develop a conceptual understanding, we group components into categories called kinds, and we ascribe categories of behavior called causes and effects to these kinds of similar components and behaviors. It is all just a grand approximation, but reasoning this way using concepts is frequently so accurate that we tend to forget that it is just educated guesswork. We come to think our logical interpretation of things is simply true: legs were designed for walking, eyes were designed for seeing. But they were actually only designed to give us a good chance of walking or seeing, and doing a few other things. Walking and seeing are concepts or kinds that group behaviors. Legs also help us stand, and eyes help others recognize us. Additional or alternate categorizations always remain. The important point here is that functions exist, and our understanding of them through their phenomena reveals them to varying degrees.

Evolution would be impressive enough if it could only fashion organisms that accomplished everything through innate skills, i.e. using static, instinctive functions. But evolution promotes competition, so if ways to outperform instinct were possible, then they would probably evolve. Innate behavior has had millions of years to evolve, so it has been fine-tuned to cover all the routine challenges organisms face (if one takes routine to mean covered by instinct). But animals face many non-routine challenges because they move about a continually changing landscape and must compete with other animals that are always evolving more adaptive strategies. Instinctive strategies are quick and dependable, but are not as flexible as strategies customized to the circumstances. Some animals started to collect real-time feedback to make on-the-fly decisions based on experience rather than instinct. Learning is this capacity to dynamically acquire, assess, and act on patterns rather than waiting many generations for that feedback to be incorporated into DNA. Also, while some very elaborate behaviors are instinctive in many animals, this approach puts limits on their ability to adapt to new situations. Learning gives an animal a way to develop a dynamic store of customized strategies during a single lifetime. Where DNA gives a common set of abilities to all individuals in a species, learning gives each individual the ability to develop abilities to handle the novel circumstances it encounters. Each learned function evolves from feedback provided by cognitive assessment of the function’s success, but this feedback can generalize so well that even a single experience can teach a lesson that can improve predictive power. But more experience helps: “practice makes perfect”. Learning is dynamic, but the capacity to learn is genetic. 
The benefits of learning are so great that probably all plants and animals have some capacity for it (more on this later), but learning is mostly managed by brains, with higher animals (birds and mammals) showing substantially more capacity to learn. The conventional wisdom says that all organisms are equally evolved, having had the same amount of time, but have simply gone down different paths. By this view, we are no “better” or “worse” than amoebas. While it is true that all organisms are highly evolved, having had plenty of time to develop a near perfect fit to their environment, it is not true that evolution is equal or directionless. First, the rate of evolution sometimes becomes very slow, seemingly even stopping, based on the fossil record. And second, as a consequence of the first point, some species develop more functional capacities than others. Although it is premature to draw too many specific conclusions at this stage because we are only beginning to understand the functions of genes, we can safely conclude that brains open the door to a range of functional capacities that plants don’t have, that higher animals enjoy a range of cognitive flexibility beyond that of lower animals, and that humans have a uniquely generalized degree of cognitive flexibility.
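The contrast drawn above between static genetic function and dynamic learned function can be caricatured as a running estimate revised by feedback. This is my own illustration, not a model of any neural mechanism; the class, the starting estimate, and the learning rate are all invented for the sketch.

```python
# A minimal sketch of learning as feedback-driven updating: an agent keeps
# a running estimate of how well a strategy works and revises it after
# each experience.

class LearnedStrategy:
    def __init__(self, innate_estimate=0.5, learning_rate=0.3):
        # The starting estimate and the capacity to learn are "genetic";
        # what the estimate becomes over a lifetime is learned.
        self.estimate = innate_estimate
        self.rate = learning_rate

    def experience(self, succeeded):
        """Single feedback event: nudge the estimate toward the outcome."""
        outcome = 1.0 if succeeded else 0.0
        self.estimate += self.rate * (outcome - self.estimate)

s = LearnedStrategy()
s.experience(True)           # even one experience shifts the prediction
assert s.estimate > 0.5
for _ in range(10):          # "practice makes perfect"
    s.experience(True)
assert s.estimate > 0.9
```

The single-update case mirrors the point that one experience can already improve predictive power; the loop mirrors the way repeated feedback refines the function further.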

Function in Concepts

My goal is to explain how consciousness and intelligence work in our minds, but explanations are products of minds themselves, so I must, to some degree, use the solution to explain itself. I have been doing that, and I will continue to do it, but I will also keep circling back to fill in holes. Let’s take a closer look at concepts to see how we can use them to understand functional entities better. As I noted above, concepts carry more explanatory power than feelings and subconcepts because they are well-organized, they form causal chains of reasoning, and they are not inherently subjective. Let’s take a look at some concepts to get a better idea of how they work. The concept APPLE (capitalization means we are referring to apple as a concept) refers to the generic idea of an apple and not to any specific apple. APPLE is not about the word “apple” or things like or analogous to apples, but about actual apples that meet our standard for being sufficiently apple-like to fall within our internal definition of the concept. We each arrive at our understanding of what APPLE means from our experience with apples. Even though we each have distinct apple experiences, our concept of what APPLE means is functionally the same for most purposes. How can this be? APPLE is generalized from all the objects we encounter which we learn are called apples. For example, we may come to know that the things we call apples are the fruit of the apple tree, are typically red, yellow or green, are about the size of a fist, have a core that should not be eaten, and/or are sliced up and baked into apple pies. Although each of us has an entirely unique set of interactions with apples, our functional understanding, namely that they are white-fleshed fruits of a certain size eaten in certain ways, holds for nearly all our experiences with apples. Some of us may think of them as sweet and others as sour or tart, but the functional interactions commonly associated with apples are about the same.
That these interactions center on eating them is clearly an anthropomorphic perspective, and yet that perspective is generally what matters to us; and anyway, fruits appear to have evolved expressly to appeal to animal appetites, lending further credence to this notion of their function. Most of us realize apples come in different varieties, but none of us have seen them all (about 7500 cultivars), so we allow for variations within the concept. Some of us may know that apples are defined to be the fruit of a single species of tree, Malus pumila, and some may not, but this has little impact on most functional uses. The person who thinks that pears or apple pears are also apples is quite mistaken relative to the broadly accepted standard, but their overly generalized concept still overlaps with the “correct” one and probably serves their needs well enough. One can endlessly debate the exact standard for any concept, but exactness is immaterial in most cases because only certain general features are usually relevant to the functions that typically come under consideration. The fact that a given word associates with a given concept in a given context, and that our conception of the concept is functionally equivalent for most intents and purposes, makes communication possible. Many words have multiple definitions, and each can fairly be called a distinct concept. We each develop temporary or permanent concepts for many things through our experience, but only common concepts are named with words or phrases, so until this happens, or some other cultural reference embodies the concept, we can’t share these concepts unless we explain them. A wax apple is not an APPLE, but we will generally call a look-alike by the same name if it is understood to be an imitation. Borrowing a word in this way often leads to the creation of additional definitions for the word. It is fine for a word to have many definitions provided the context reveals the definition of interest, e.g. “mouse” is usually clear as a short form of “computer mouse”.
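The way APPLE generalizes over varied experience while tolerating variation can be sketched as a feature summary with a matching rule. This is a toy only: the features, their names, and the matching standard are invented for illustration, not a model of real cognition.

```python
# A concept as a learned feature summary (illustrative only).
APPLE = {
    "is_fruit": True,
    "fist_sized": True,
    "colors": {"red", "yellow", "green"},  # allow for varieties we haven't seen
}

def matches_apple(obj):
    """Does an object meet our rough, functional standard for APPLE?"""
    return (obj.get("is_fruit", False)
            and obj.get("fist_sized", False)
            and obj.get("color") in APPLE["colors"])

granny_smith = {"is_fruit": True, "fist_sized": True, "color": "green"}
wax_apple    = {"is_fruit": False, "fist_sized": True, "color": "red"}

assert matches_apple(granny_smith)
assert not matches_apple(wax_apple)  # a look-alike is not an APPLE
```

Two people could hold different feature sets and thresholds and still agree on nearly every case they encounter, which is the sense in which our concepts are functionally equivalent without being identical.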

If we look closer at our concepts of physical things, we start to see how much they are colored by functional distinctions. For starters, all man-made artifacts are fashioned the way they are to serve a purpose, and so while they have a physical composition, their function is always paramount in our minds. Other living things have their own innate goals, but we also view them and the physical world as natural resources we can turn to our own purposes. This isn’t surprising; the role of concepts is to increase our capacity for functional interactions with the world. Even the purest theories of physics, aside from not being physical themselves, define physical spacetime from the perspective of what we can predict about it, which is the definition of functional, not physical. So all our concepts are not just colored by functional distinctions but actually are functional distinctions, whether they are about physical things or functional things.

About half the concepts we hold are about physical things, but that leaves half that are about functional things. Prototypically, nouns are physical forms and verbs, adjectives, etc., are functions, but it is not quite that simple. Nouns that are strictly physical are called concrete; functional nouns are called abstract. Many verbs, adjectives, etc., are concrete as well, e.g. TO ORBIT and TO RAIN, or FASTER/SLOWER if they refer to purely physical processes. Most abstract concepts in verb form also have noun forms, like the HUNT or the TALK, or, using gerunds, the HUNTING or the TALKING. Noun-forming suffixes change DECIDE, COMPLETE, and MANAGE into DECISION, COMPLETION, and MANAGEMENT. Adjectives and adverbs let us characterize concepts by attaching a single trait to a noun or verb, respectively, that varies along an implied scale (e.g. BAD/GOOD, BIG/SMALL, SHARP/DULL, IMPORTANT/INSIGNIFICANT, GENTLY/ROUGHLY), putting the concept in relation to other concepts. We can discuss adjectives using nouns via the noun-forming suffix “-ness”, e.g. GOODNESS. Prepositions juxtapose concepts with each other via a relationship, like WITH, FROM, and INTO. We tend not to characterize prepositional relationships using nouns unless they have some permanence, like MARRIAGE, CANADIAN, and AUDIOPHILE.

Parts of speech aside, then, what is the range of things for which we have functional concepts? First are the elements and rules of formal systems, i.e. logic and math. Numbers, subjects, objects, propositions, and rules for counting or deducing are all functional. They are completely abstracted from biological functions, but they still fall within the definition of function because they relate to capabilities for using information to achieve purposes. The purposes are implications within the systems, but, because of the uniformity of nature, math and logic are used as foundations of the physical sciences, allowing us to apply formal systems to the physical world to understand and control it better than we otherwise could.

The remaining functional concepts are all biological. As I described in the prior chapter, organisms capture information at four levels: genetic, instinctive (senses, drives, and emotions), subconceptual, and conceptual. While we can discuss genetic functions, we can’t feel them because they are not part of conscious awareness, but the other three levels are. SIGHT, HUNGER, and JOY are examples of concepts with instinctive support, and HUNCH and PREMONITION have subconceptual support. Our conceptual grasp of our instincts and subconcepts lets us talk about them and include them in our logical reasoning. We also have abstract concepts about concepts, e.g. THOUGHT, IMAGINATION, PLAN, and DECISION. Many abstract concepts express actions that are strategic or functional, e.g. CATCH, JUDGE, STEAL, LAUGHTER. Others express functional states, e.g. CHAOS, DEFEAT, FREEDOM, TRUTH, WEALTH. Others are qualities, either of people or behaviors, like BEAUTY, COMPASSION, and CURIOSITY. Some are high-level generalizations of actions, states, or qualities, like ADVENTURE, CRIME, and LUCK. And others are schools of thought encompassing formal or informal models, like CULTURE, DEMOCRACY, EDUCATION, HISTORY, MUSIC, RELIGION, and SCIENCE. There are no exact boundaries between these different kinds of abstractions.

Function in the Sciences

Function has been part of language and thought from the beginning, and it hasn’t been overlooked in science either, even if science has not yet granted it existential status. Viewed most abstractly, science divides into two branches, the formal and experimental sciences. Formal science is entirely theoretical but provides mathematical and logical tools for the experimental sciences, which study the physical world using a combination of hypotheses and testing. Testing gathers evidence from observations and correlates it with hypotheses to support or refute them. Superficially, the formal sciences are all creativity while the experimental sciences are all discovery, but in practice, most formal sciences need to provide some real-world value, and most experimental sciences require creative hypotheses, which are themselves wholly formal. Experimental science further divides into fundamental physics, which studies irreducible fields and/or particles, and the special sciences (all other natural and social sciences), which concern aggregate properties of spacetime that are presumed by materialists to be reducible in principle to fundamental physics. Experimental science is studied using the scientific method, a loop in which one proposes a hypothesis, tests it, then refines and tests it again ad infinitum.

Alternately, we can separate the sciences based on whether they study form or function. Physical forms are more objectively observable, given that we can use instruments, but as noted above, we can claim to observe functional entities through reflection on and consideration of their functions. Consider the formal sciences, which establish functional entities through premises and rules from which one draws implications (which are functional). The formal sciences include logic, mathematics, statistics, theoretical computer science, information theory, game theory, systems theory, decision theory, and theoretical linguistics. They are named after formal systems, in which “form” means well-defined or having a well-specified nature, scope, or meaning. However, that definition of form actually means function, because definition, specification, and meaning are functional. I will restrict my use of “form” to physical substance, in which form means a noumenon that can only be known through observed phenomena (though that knowledge itself is functional). While our knowledge of physical things must be mediated through phenomena, we know many functional noumena directly because we create them via definitions and logic. We don’t always immediately know the logical implications of a given formal system, but we can deduce proofs that demonstrate logical necessities, which further unveil the underlying functional noumena. We can study formal systems inductively by running simulations that test more complex implications than we can deduce from the rules. Some formal sciences (e.g. weather modeling) are arguably more experimental than formal because they depend so much on inductive simulations. The implications of a formal system are either provable deductively or likely inductively. But what is the correct foundation of formal systems, e.g. the right foundation of mathematics?
In Mathematics: Form and Function, Saunders Mac Lane proposed six possible foundations: Logicism, Set Theory, Platonism, Formalism, Intuitionism, and Empiricism. The answer is that formal systems are neither correct nor incorrect; they lay out assumptions and rules, which can be arbitrary, and work out implications from them. Their function or utility depends on how effective they are at solving problems. So any set of assumptions and rules can serve as a foundation, but some will be more effective than others. The most effective tend to be those which maximize simplicity, consistency, and applicability. In practice, set theory delivered these best, and so most of mathematics now sits on a set-theoretic foundation, ZFC (Zermelo–Fraenkel set theory with the axiom of choice) in particular. Other set theories and systems from logicism and formalism can make good foundations for specific purposes. Platonism, Intuitionism, and Empiricism are weak foundations for formal systems because they lack clear assumptions and rules.

While the experimental sciences (the physical, life, social, and applied sciences) are concerned with physical form, they are more focused on function. The physical sciences study form alone, specifically physical forms in our universe. However, formal models constitute the theoretical framework of the physical sciences, so their explanations are functional constructs. The life sciences principally study function, specifically functions that result from evolution. Of course, lifeforms have physical bodies as well, which must also be studied, but nearly all understanding of life derives from function and not form, so studying form mainly serves to further the understanding of function. The distinctly identifiable functions that living things perform gradually evolve through complex, interwoven, and layered feedback that causes physical mechanisms that further functional goals to “emerge”. Functional existence leverages physical existence, and depends on it to continue to exist physically, but is not the same as physical existence. The social sciences also focus on function, specifically functions produced by the mind, despite the lack of a theory that explains the mind itself. Nearly all the understanding in the social sciences was built by observing patterns in human behavior. Behavior is a complex consequence of how minds function. Finally, applied science is principally concerned with functions that help us live better, but uses both form and function to develop technologies to do so. All the experimental sciences depend heavily on the formal sciences for mathematical and logical rigor. To summarize, understanding physical forms is central to physical science but is only incidental to the other experimental sciences and is irrelevant to the formal sciences. Understanding function, however, is central to the formal, life, social, and applied sciences, but it also underlies the theories of the physical sciences. 
After all, the reason we try to understand physical forms is to gain functional mastery over them (via laws of physics).

Function in Non-living Things

Experimental science has had its greatest success in the physical sciences because (a) nature is very uniform, (b) we can measure it with impartial instruments of great accuracy and precision, and, most importantly, (c) relatively simple theories can predict what will happen. That nature is uniform and measurable is, thankfully, a given, but the last point is a subtle consequence of conceptual thinking. I mentioned above that all concepts are generalizations. A generalization is an identification of patterns shared by two or more things. Categorizing them as the same kind of thing lets us make predictions based on our knowledge of that kind. But there is a hidden challenge: every pattern has many features, and the number of combinations of features grows exponentially, resulting in an almost infinite number of kinds. How do we avoid a runaway proliferation of categories? The answer is Occam’s razor, which is both a deep-seated innate skill and a rule of thumb we can apply consciously. Occam’s razor says that “when different models of varying complexity can account for the data equally well, the simplest one should be selected”2. Or, as Thales of Miletus originally put it, “a maximum of phenomena should be explained by a minimum of hypotheses”. Simplicity trumps complexity for two reasons. First, the universe seems to follow a small set of fixed laws. While we can never know what they are with complete certainty, we can approximate them most effectively by trying to build as small a consistent set of laws as we can. And second, as the complexity of a pattern increases, it becomes less general and so works in fewer situations. Complexity is great for specific situations but bad for general ones. So again, to build a small but consistent set of laws, we should simplify wherever possible.
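The razor can be sketched as penalized model selection: when two models fit the data equally well, a complexity penalty decides in favor of the simpler one. The model names, the penalty weight, and the toy data below are all illustrative assumptions, not anything from the text:

```python
import math

# Toy illustration of Occam's razor as penalized model selection.
# Two models explain the same observations equally well; the score
# adds a small penalty per free parameter, so the simpler one wins.

observations = [(x, 2 * x + 1) for x in range(10)]  # data following y = 2x + 1

def mean_squared_error(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

candidates = [
    # (name, prediction function, number of free parameters)
    ("line",          lambda x: 2 * x + 1,                          2),
    ("line+epicycle", lambda x: 2 * x + 1 + 0.0 * math.sin(3 * x),  4),
]

def occam_score(model, n_params, data, penalty=0.01):
    # With equal fit, the parameter count decides.
    return mean_squared_error(model, data) + penalty * n_params

best = min(candidates, key=lambda c: occam_score(c[1], c[2], observations))
print(best[0])  # "line": the needless epicycle term costs two parameters
```

The same penalized-score idea appears formally in statistics as criteria like AIC and BIC, but the principle is just the razor: buy predictive power with as few assumptions as possible.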

Consider an example. Theories to explain the motions of celestial bodies across the sky have been refined numerous times with the help of Occam’s razor. Ancient astronomers attached moving celestial bodies to geocentric (Earth at the center) celestial spheres. Ptolemy then explained the apparent retrograde motion of Mars as seen from Earth by saying a point on Mars’ sphere revolves around the Earth, and Mars revolves around that point along a smaller circle called an epicycle. (Note that additional epicycles were used to make small corrections for what were later learned to be elliptical orbits.) Copernicus realized that with heliocentric (Sun at the center) celestial spheres, the epicycles used to explain the retrograde motion of Mars and the other planets would be unnecessary because this illusion is just a side effect of looking from one planet to another in a heliocentric model. Fewer epicycles were simpler to calculate and also made the model simpler to understand. Kepler’s discovery that orbits were ellipses replaced the celestial spheres with celestial ellipsoids and eliminated all the epicycles, again simplifying calculations and comprehension. Newton eliminated the need for celestial ellipsoids by introducing the universal law of gravitation, which not only reduced all matter interactions to one equation but also explained gravity on Earth. However, Newton’s gravity depended on action at a distance, which seemed supernatural to Newton, who felt that interactions should require direct contact. The influence of gravity creates an exception to the rule that objects in motion move in a straight line. Einstein’s theory of general relativity eliminated that exception, allowing planets and all things to travel in a straight line once again, but now through curved space. And we still have a ways to go to reach a deeper and unified understanding of spacetime, gravity, and matter, which I will revisit later. 
Suffice to say for now that Newton and Einstein contributed to the general thrust of modern physical science to find formal models with predictive power rather than to look for an actual mechanism of the universe. But the success of this approach speaks for itself; physical science can make nearly perfect predictions about many of the physical phenomena we have identified.
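Copernicus’s point, that retrograde motion is just a side effect of watching one orbiting planet from another, can be checked with a minimal sketch. It assumes circular, coplanar orbits and rounded textbook values for the orbital radii (in AU) and periods (in years):

```python
import math

# Heliocentric toy model: no epicycles, just two circular orbits.
# Mars's apparent (geocentric) longitude mostly drifts eastward, but
# around opposition it reverses, reproducing retrograde motion.

def position(radius, period, t):
    angle = 2 * math.pi * t / period
    return radius * math.cos(angle), radius * math.sin(angle)

def mars_longitude_from_earth(t):
    ex, ey = position(1.00, 1.00, t)   # Earth: 1 AU, 1 year
    mx, my = position(1.52, 1.88, t)   # Mars: 1.52 AU, 1.88 years
    return math.atan2(my - ey, mx - ex)  # apparent longitude (radians)

# Track day-to-day changes in apparent longitude over ~2.2 years.
dt = 1 / 365
deltas = []
for step in range(800):
    t = step * dt
    d = mars_longitude_from_earth(t + dt) - mars_longitude_from_earth(t)
    d = (d + math.pi) % (2 * math.pi) - math.pi  # unwrap across the ±pi seam
    deltas.append(d)

retrograde_days = sum(1 for d in deltas if d < 0)
# Motion is prograde most days but reverses near opposition:
print(0 < retrograde_days < len(deltas))  # True
```

No epicycle appears anywhere in the model, yet the reversal falls out of the geometry, which is exactly the simplification the razor rewarded.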

Function in Living Things

But life is far less tractable. Unlike non-living physical systems, whose complexity remains roughly constant, living things increase in complexity with each moment of life because the consequences of living are captured and incorporated through feedback. The amount of feedback is staggering, a constant onslaught of information. For example, every step an ant takes provides feedback about the effectiveness of its means of locomotion. The feedback is very indirect; more effective ant gene lines will outcompete less effective ones. Every beneficial trait helps a bit, but it would be hard to say how much. The gene line doesn’t evolve in isolation, but in conjunction with all the gene lines in its niche and ultimately with the whole biosphere. It is helpful to study traits in isolation and to look at survival at the level of the individual, but the feedback loops of evolution create information management systems that protect the long-term needs of the gene lines, whose interests are more complex than those of any one gene or body. Those systems have been refined over billions of years and we can safely say that by now they leave nothing to chance, except, somewhat ironically, that which is mathematically safest in the hands of chance. We know that chance plays a role in which sperm makes it to the egg, but, more significantly, we need to realize that it plays the absolute minimal role possible. Every bit of feedback the gene line has ever collected is funneled through the sexual reproduction process to create offspring with the greatest chance of having greater adaptive advantages than the parents had. The view that the gene line would simply perpetuate itself and leave beneficial mutations to random chance, as the Modern Synthesis (also called Neo-Darwinism) proposed, is unrealistic. Suppose our gene lines were smart enough to alter or build new genes using educated guesses and to test them out using individuals as lab rats. Would such an approach be worth it? 
New evidence suggests it not only would be worth it but that exactly such a genetic arms race created this kind of gene-building technology before multicellular life even arose and that it was necessary to propel life to the levels of complexity and functionality it enjoys today.3

Although natural selection is the only force at work in evolution, that doesn’t mean it must depend on random point mutations in genes to move life forward. With random point mutations alone, we would never get life as we know it, not just because it would take trillions of years, but because there is no way to form the complex genes we have through an unbroken chain of viable descendants using only point mutations. If you hope to see a monkey type out the works of Shakespeare, don’t sit him at a typewriter; sit him at a phrase-writer. If he starts with all of Shakespeare’s greatest hit phrases, it no longer seems like such a long shot that he might come up with something good. This is how genes are really built: our existing genes and all the supposed “junk” DNA comprising 80% of our genomes are really raw materials used by a genetic phrase-writer that tweaks and builds genes. Mutations are not random events but are triggered by natural selection stresses. And that’s not all: just as the best phrases (e.g. transposable elements, or TEs) are kept handy, so the most useful ways of employing them are part of the long-term know-how of the genes in charge of gene construction. This is not science fiction; we have all sorts of gene editing enzymes that can do real-time gene editing to prepare some genes for use. And much of our supposed “junk” DNA has TEs and even fully-formed or nearly fully-formed genes that have no apparent function. Though we have not yet found genes that direct the building of other genes, the biochemistry to do it is readily available. An opportunity arises to build and rearrange genes for the next generation at the moment of sexual reproduction, and mother nature would be foolish to simply roll the dice at this moment and let the chips fall where they may. It makes more sense to suppose that natural selection is also selecting mechanisms that can build the kinds of genes that are most likely to help. 
In times of low environmental stress, such built-in mutation mechanisms can slow way down or stop. Times of high stress should trigger more gene editing focused especially on the functional areas where deficiencies are detected. Such directed mutation could start effecting useful changes in a single generation for commonly needed adaptations, though it is more likely that most changes will take many generations to complete. Sometimes millions of years might pass before a fully-functional de novo gene appears and joins the other functioning genes in the genome. If this patient approach were the most efficient way to effect change, it would definitely outcompete less effective ways. It also explains sex, its near ubiquity in nature, and the differences between the sexes. Female and male, yin and yang, represent complementary forces in the evolutionary arms race, with yin representing stability and yang representing change. The egg is a large, stable investment that minimizes risk. The sperm is a small, more mutated investment that must prove its changes against millions of competitors. If sperm were as genetically stable as eggs, this extra step would not be necessary. But mutations aren’t random: they must appear in sperm either exclusively or with much greater frequency than in eggs, and the swim-off competition is their initial viability test. Miscarriage is a judgment call that is made to cut losses on an embryo with low viability. Then, finally, the individual has his or her chance to survive and propagate. Consistent with their large investment in a stable egg, females are more likely to invest more in having and raising offspring. Not similarly equipped (or burdened), males are more expendable and are more inclined to mate as often as possible. However, since reproduction requires one male and one female, the ESS (evolutionarily stable strategy) is for males and females to be produced in the same numbers. 
If male births became less common, it would become advantageous to produce more males as it would lead to more descendants.
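This equal-numbers argument is Fisher’s principle: every grandchild has exactly one father and one mother, so total reproductive credit through sons equals total credit through daughters, and whichever sex is rarer is worth more per head. The toy calculation below sketches it; the brood size, population ratios, and strategy values are illustrative assumptions:

```python
# Fisher's principle as a toy payoff calculation. A parent's expected
# grandchildren depend on what fraction of its brood are sons (p_sons)
# and how common males are in the population at large.

def expected_grandchildren(p_sons, pop_male_fraction, brood=2):
    # Each grandchild has one father and one mother, so the average
    # payoff per son scales as 1/male_fraction and per daughter as
    # 1/female_fraction: the rarer sex is worth more per individual.
    value_of_son = 1 / pop_male_fraction
    value_of_daughter = 1 / (1 - pop_male_fraction)
    return brood * (p_sons * value_of_son + (1 - p_sons) * value_of_daughter)

# When males are scarce (40% of the population), biasing toward sons pays:
biased = expected_grandchildren(0.9, 0.4)
even = expected_grandchildren(0.5, 0.4)
print(biased > even)  # True: the male-biasing strategy invades

# At a 1:1 ratio, no bias helps; every strategy earns the same payoff,
# which is what makes 1:1 the evolutionarily stable strategy (ESS):
print(expected_grandchildren(0.9, 0.5) == expected_grandchildren(0.1, 0.5))  # True
```

Any deviation from 1:1 creates a payoff gradient favoring the rarer sex, so selection pushes the population ratio back to equality.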

Carefully recombined, altered and constructed genes are tested first in the sperm as primary lab rats, then in embryos as secondary, and in individuals as tertiary. The germline continues despite these casualties among the foot soldiers. Maximal adaptive benefit derives from a combination of experience and experimentation. The important point from the perspective of function is that the information captured by DNA is deeply multifunctional in a fractal, self-referential way, encompassing both the observed traits of the organism and its long-term adaptive potential. Adaptive benefits are served in both the short and long term. Many mechanisms have evolved, only some of which we yet recognize, to capture feedback so as to produce an adaptive advantage. All the traits and the distribution of traits we see in populations were selected because they conferred the greatest survival potential to the germline and not because of random beneficial mutations or genetic drift, which are random effects. Attributing genetic change to random effects is like saying card sharks win by luck. In both cases, luck exists very locally but hardly at all in aggregate because highly competitive strategies are in play.

Function in the Mind

All the functional capacity of the mind depends on the functional capacity of the brain, which, like the body, stores its blueprint in the genes. Still, it is a mistake to say that one can build life given those blueprints; life fundamentally depends on membranes, which are maintained by the genes but not constructed by them. The original cell membranes most likely arose as natural bubbles and were gradually co-opted and altered by living metabolisms. Life has no whole blueprint; you need a working copy and the genes. Together, the genes, the proteins coded by the genes, and the cells they maintain and propagate comprise both a physical machine and an information management system. Because the physical machine is designed based on adaptive feedback, it can only be understood by interpreting that feedback. We can’t see or figure out all the feedback, but we can look at adaptations and imagine reasons for them that relate to survival pressures. Reasoning teleologically this way, from purpose back to design, is the only way to make sense of what information management systems do. Understanding the physical mechanisms is also necessary for a complete understanding and helps reveal details too mysterious to guess from observations of function alone.

All function in the physical world derives from living organisms, but information processing in life is not restricted to information captured in DNA. Most notably, animals use brains to manage and store information neurochemically. The functionality provided by the information managed in brains is quite different from genetic functionality because it is customized based on real-time experience. Where genetic assessment can only render an “opinion” once per lifetime, brains can make assessments on a continuous basis. These assessments create an information processing feedback loop that consists of inputs, processing/assessing, and outputs. Most metabolic control functions, such as heart rate, digestion, and breathing, can be done routinely with internal inputs and outputs, and so they require little if any integration with other control functions. But almost everything else bodies do requires top-level control because the body can only be in one place at a time. Since animals must engage in many activities, this means that their activities must be carefully prioritized and coordinated, and the solution to this problem that evolution arrived at is for brains to have a centralized top-level subsystem called the mind that performs this coordination. The distinction between the mind and the brain is arguably just one of perspective: the mind is consciously aware and only exists from the perspective of its own subjective awareness and agency, while the brain is an organ in the head that objectively exists. Most of the information processing in the brain is nonconscious and so not part of the mind (though I will use the term nonconscious mind to refer to it). It is hard to quantify the amount of processing that happens outside of our awareness, both because we have no way of measuring it and because awareness and processing could be defined in different ways, but I would put it at 95 to 99%. 
We are not aware of how our brains process vision into recognized objects, but we know it must take a lot of real-time processing. We are not aware of how our brains decide on an emotional reaction to a situation, but we know this, too, takes a lot of processing. And we don’t know how we understand or create language, but we know this is complicated as well.

Technically, we are each only sure that we alone have a mind, but we tend to accept that all people have minds because they say they do, their behavior meets our expectations, and we are all the same species and so probably work about the same way. But we also know people are very different, giving each person a very different perspective on life. Animals, of course, don’t claim to have minds, and we know that no other animal has even remotely our capacity for reasoning. But all animal behavior meets our expectation of how they would behave if they did have a mind; that is, they act as if they were aware. Also, all animals with central nervous systems are similar in that they collect sensory information, process it, and then act. So I am going to propose that all such animals have minds as subsystems of their brains that funnel decisions through special brain states called awareness, with the understanding that awareness is vastly different for different animals. We know that every animal has its own set of senses which, if we accept that they are conscious, produces a unique set of sensory feelings in the animal that make it “what it is like” to be that animal. Similarly, their bodies and their body sense and facility for controlling their bodies (experienced as agency) all contribute to that unique feeling as well. I don’t want to minimize the enormity of these differences in inputs and outputs, but they are ultimately only peripheral to the central processing and assessing that the mind does. People without sight or hearing are handicapped relative to those who have them, but their general ability to think is unaffected. So my focus in understanding the mind is really on processing and assessing. I propose that all information in the mind falls into the following four categories. I number the first one zero because it has to do with inputs and outputs, which are peripheral, while the other three have to do with assessing, which is more central:

0. Percepts
1. Instincts
2. Subconcepts
3. Concepts

Though any one piece of information in the mind is strictly one of these four kinds, newly created information of one type often draws on information of the other types, resulting in a great deal of overlap that lets us leverage each kind of information to its strengths.

I am going to describe each of these four kinds of information, but first I am going to address the questions how and why. How does information in the genes become information in the brain, and why just these four kinds? As physical devices, the genes know nothing about minds and their needs, but as functional entities, they know exactly what needs to be done. What brains need to do can only be understood from a functional perspective: they need to control the body, and the more predictively they can deliver that control, the more successful they will be. So the genes have been selected to enable ever greater degrees of control. The genes, and in turn the proteins they encode, are selected not for their physical form but for what they do. Physically, this means the brain needs to be a computer with a body that provides inputs and outputs. Low-level functions in a computer must be hardcoded, meaning that they operate in a fixed way that does not change depending on circumstances. Nonconscious functions in the brain are hardcoded, and percepts and instincts (which are conscious) are too. Percepts provide hardcoded inputs and outputs, while instincts provide hardcoded assessments. Higher-level functions in a computer need memory, which is data that can change at runtime. In practice, this data catalogs patterns detected in the inputs. Hardcoded algorithms can leverage memory to produce subconcepts, which give us predictive capabilities based on fixed logic and memory, while softcoded algorithms can leverage memory to produce concepts, which are predictive capabilities based on variable or generalized logic and memory. In the mind, using fixed logic and memory is called reasoning, and using variable or generalized logic and memory is called logical reasoning. 
Reasoning manipulates percepts, instincts, and subconcepts while logical reasoning manipulates concepts, but this wording underplays their interactions, because most concepts are based on the other kinds of information, and concepts, once formed, change our experience and hence impact our subconcepts. Finally, note that some percepts and instincts use memory in a number of ways to develop or train their “fixed” responses. Some percepts and instincts use memory entirely nonconsciously, while others initially require conscious guidance to become habituated. With practice, we can, in principle, recover some measure of conscious control over any nonconscious process that uses our memory because we can consciously change our memory and retrain habits. So, to summarize, genes were selected to build brains because brains could provide a sufficiently generalized framework for centralized control and memory, i.e. a computer. Brains can deliver better-centralized control with a subsystem like the mind that can leverage percepts, instincts, subconcepts, and concepts to deliver focused, real-time operation. While all animal brains use minds to achieve coordination, no other animal minds have evolved the applications of concepts as far as humans have.
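The computing distinctions drawn above (hardcoded responses, fixed logic over changeable memory, and variable or generalized logic) can be sketched in code. Every function name and data item below is invented purely for illustration:

```python
# Sketch of the three computing levels the text maps onto the mind.

def hardcoded_reflex(stimulus):
    # Like a percept or instinct: fixed logic, no memory involved.
    return "withdraw" if stimulus == "pain" else "ignore"

memory = {}  # runtime data: patterns accumulated from experience

def fixed_logic_over_memory(pattern):
    # Like a subconcept: the lookup algorithm never changes,
    # but the data it consults grows over a lifetime.
    return memory.get(pattern, "unknown")

def generalized_rule(items, predicate):
    # Like a concept: the logic itself is a variable that can be
    # swapped in, composed, and reasoned about.
    return [item for item in items if predicate(item)]

memory["red berry"] = "edible"
print(hardcoded_reflex("pain"))               # withdraw
print(fixed_logic_over_memory("red berry"))   # edible
print(fixed_logic_over_memory("blue berry"))  # unknown
print(generalized_rule([1, 2, 3, 4], lambda n: n % 2 == 0))  # [2, 4]
```

The analogy is loose, of course, but it captures the hierarchy the text describes: the first function can never adapt, the second adapts only through its data, and the third can adapt its logic.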

Percepts. Percepts are the sensory feelings that flow into our minds continuously from our senses. Their purpose is to connect the mind to the outside world via inputs and outputs. The five classic human senses are sight, hearing, taste, smell, and touch. Sight combines senses for color, brightness, and depth to create composite percepts for objects and movement. Smell combines several hundred independent smell senses. Taste is based on five underlying taste senses (sweet, sour, salty, bitter, and umami). Hearing combines senses for pitch, volume, and other dimensions. And touch combines senses for pressure, temperature, and pain. Beyond these five we have a number of somatic senses that monitor internal body state, including balance (equilibrioception), proprioception (limb awareness), vibration sense, velocity sense, time sense (chronoception), hunger and thirst, erogenous sensation, chemoreception (e.g. salt, carbon dioxide or oxygen levels in blood), and a few more4. We also have a sense of certain internal mental states that feel like feedback from the mind itself as opposed to the outside world or the body, including our spatial sense, awareness itself, and attention. They are sensory in that they provide continual input and we can assess them, just as if they were external to the mind. We feel all of these percepts without having to reflect on them; they are immediate and hard-wired via nonconscious mechanisms that bring them into our conscious awareness without any conscious effort. Percepts find patterns in input data and send signals that represent them to our conscious minds without any need for past experience with the perceived items. We can perceive green innately even without any past experience.

Instincts. Instincts are genetically-based control functions. Their purpose is to provide as much innate control to all members of a species as evolution can muster. For example, beavers that have never seen a dam can still build one, so we know that this behavior is instinctive.5 We don’t yet know which genes create instincts, and one instinct likely depends on many genes, but we know instincts are genetic because any member of the species can perform an instinctive function when the behavior is triggered. Spider webs seem pretty creative, but we know they are created instinctively. A group of deaf Nicaraguan children taught no language quickly created one6, demonstrating that language acquisition is instinctive in humans. It appears that the genes can encode arbitrarily complex behaviors, and in the lower animals nearly every behavior seems to be instinctive. I propose, but can’t prove, that instincts influence behavior in all animals by consciously creating an internal “preference” or “nudge” with a relative strength. Strong nudges outweigh weaker nudges, allowing the mind to prioritize competing instincts to select one action for the body at a time. Complex instinctive behaviors can thus be interrupted by more pressing needs and then resumed. What would one of these nudges feel like? If it’s like being elbowed or told to do something, we would probably just ignore it. Instead, we feel instinctive nudges in two ways, as drives or emotions. We feel drives based only on our metabolic state, for example when we are hungry or tired. We feel emotions based on our psychological state, which is a nonconscious assessment of our subconcepts and concepts. Because emotions are computed beneath our awareness and can see what we really think, we can’t control them directly. However, since anything can look better or worse when viewed differently, emotions can be controlled indirectly by shifting perspective.

I will use the word feelings to describe both percepts and instincts. Percepts are sensory feelings and instincts are internally-generated nonsensory feelings.

Subconcepts. Animals store patterns from specific experiences in their memory, and over time they will notice associations between the patterns. These associations are what I call subconcepts. The purpose of subconcepts is to give animals insight from patterns in data. For example, all animals must move to a food source to eat, and they will start to notice different common features linking places where food can be found and places where it can’t. One can’t describe the meaning of subconcepts; they are just patterns that can help predict other patterns. Co-occurring patterns are reinforced as subconcepts the more often they happen. One can’t put one’s finger on any one subconcept; all the associations from all of our experience form a large pool of subconcepts with useful relationship information which we can draw on. Subconcepts carry impressions without implying a logical connection or cause and effect. Subconcepts give us our feeling of familiarity. Subconceptual impressions about what will probably happen are called intuitions or hunches. We can trust our intuition to guide us through many of our actions without resorting to the next level of thinking, conceptual analysis. Subconcepts are analogous to machine learning. Machine learning algorithms recognize voices and drive cars using associative neural nets. Instead of logically reasoning out what was said or what to do, they are essentially looking up the answer with pattern matching. This works great provided you have enough experience but falters in novel situations.
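The "looking up the answer with pattern matching" idea can be sketched as a nearest-neighbor lookup, one of the simplest machine learning techniques. The features and labels below are invented for illustration, echoing the food-source example above:

```python
# A minimal nearest-neighbor sketch of subconceptual intuition:
# remembered experiences are (feature vector -> outcome) pairs, and a
# new situation is judged by its closest remembered pattern.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Remembered experiences: hypothetical (vegetation, moisture) features
# of places, and whether food was found there.
experience = [
    ((0.9, 0.8), "food"),
    ((0.8, 0.9), "food"),
    ((0.1, 0.2), "no food"),
    ((0.2, 0.1), "no food"),
]

def intuition(situation):
    # No reasoning, no cause and effect: just the nearest stored pattern.
    return min(experience, key=lambda e: distance(e[0], situation))[1]

print(intuition((0.85, 0.85)))  # "food": close to past successes
print(intuition((0.15, 0.15)))  # "no food": close to past failures
# A truly novel situation still gets an answer, but one no better than
# the accident of which remembered pattern happens to be least far
# away; this is where pure association falters.
```

The lookup works well exactly when the pool of experience densely covers the situations encountered, and degrades gracefully to a guess when it doesn't, which matches the strengths and limits the text ascribes to intuition.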

Concepts. Concepts turn associative memory into building blocks by adding labeling. Most generally, a concept is a bucket of associations to which one can refer as a unit, which makes it possible to build relationships between buckets in a logical chain of reasoning. We often use words or phrases to reference a concept, but language is a layer on top of concepts that only covers a small fraction of all the concepts in our minds. We can think about things without using language, for example spatially or abstractly using any mental model we devise. When we think conceptually, the concepts are mental placeholders which we manipulate with logical operations. We typically use concepts without explicitly defining them, but that doesn’t mean they don’t have definitions. We form a concept whenever we suspect that a pattern of associations we see would most usefully be managed as a group. Multiple exposures to that pattern over time iteratively imply appropriate boundaries for what falls within the concept and what does not. Some of the contained associations are subconcepts and some are concepts, but either way, they themselves have only implied boundaries. No matter how precisely we define a concept it always retains an inherent vagueness because concepts are buckets that collect similar items, and similarity is always a general grouping based on the vagaries of other subconcepts and concepts and so is never perfectly precise. The definition that matters most is the noumenal definition, which is the actual way the associations are hooked up in our brains. Dictionaries provide phenomenal definitions, which attempt to characterize those associations concisely using words. But no representational definition can match the complexity of the network underlying the real concept; it can only take one or more perspectives on the concept and describe it in terms of other concepts which must themselves be defined in an infinite regress. 
And even if we forgive that shortcoming, every concept can have different degrees of generality or detail or emphasis depending on the context in which we use it. We conceive a localized connotation of each concept relative to the mental model in which we employ it. This is a critical point because it means that concepts and words can’t really be defined independently of the mental models in which we use them. All concepts evolve to keep up with culture. While APPLE has not changed much since apple trees were first cultivated, except in the number of varieties and uses, TECHNOLOGY and FREEDOM mean quite different things now than in the past.

Instances. Now is a good time to introduce a special kind of concept, the instance. We generally exclude instances from the definition of the word concept, which is reserved for general notions, but instances work the same way in the mind except for one crucial detail: they denote a single noumenal entity, not a class of entities. Usually, an instance is a physical entity, such as a specific apple, person, place, or event. But functional instances exist as well, such as a hypothetical specific apple, person, place, or event, or a creative work such as a book or song. In English, we call concepts common nouns and write them in lower case, and we call instances proper nouns and capitalize them. All words that are not nouns are concepts because they have general applicability. Mentally, an instance is a refinement of a concept because it has all the normal associations of a concept plus a link to a unique entity. If we drop that link to a unique entity, it becomes a concept for which the unique entity is an example of the class. Our knowledge of many instances is detailed enough that only one entity would fit the class, but if we have a quarter, usually the only thing distinguishing that quarter from any other in our minds is our memory of where that quarter has been. This is why an instance is mentally just a special case of a concept.

In the most extreme sense, every pattern we detect is unique and is thus its own instance. But in practice, we use a continuous feedback cycle to confirm that phenomenal patterns align with noumenal entities. We can’t prove the existence of physical noumena; we can only keep testing for them empirically. But when several patterns confirm one another, the evidence is so overwhelming that we rarely doubt they exist. In any case, it doesn’t matter whether our understanding of the world is right in an absolute sense so long as things work the way we expect. Understanding is really a functional expectation. Further, we can say that, beyond a single moment, physical things exist over a span of time called their lifetime, during which they either don’t change or change in ways that fall within the scope of their conceptual meaning. This means we can distinguish two kinds of instances: those that are time-specific (events) and those that are time-invariant (items during their lifetime). We consequently each have an episodic memory of event instances and a time-invariant memory of item instances.

Arguably, animals would be most competitive if they had instincts for all the most adaptive behaviors they needed. And most animals, most of the time, do depend on instincts to lead them to the strategy they need. But the strategic arms race to maximize effectiveness opened the door for techniques to evolve that could figure out strategies customized to specific situations. Only strategies with general-purpose value in many situations can evolve, and although that can cover a wide range of behaviors given enough time, special situations will always arise for which a customized strategy would work better. Subconcepts provide a first line of attack to gain information from experience by giving animals common sense. Internal memory lookups against their large pool of experience give them an intuitive sense of what to do without any further reflection. Subconcepts are much older and more pervasive in animal brains than concepts, which provide a second line of attack. The value of concepts derives from the second-order analysis of relationships between concepts. While the relationships between subconcepts are only impressions supported by the data, relationships between concepts are nominated to be “true”. The way this works is that a mental model is hypothesized that proposes a set of relationships between a group of concepts because the data suggests the relationships might always hold. Although we never know whether those relationships do always hold, within the model they do by definition. The model is then further tested, and through iterative feedback it is refined until we develop confidence about which phenomena it applies to. Internally, the model is true, but externally it is only true to the degree it is applicable. However, through iteration we develop a very high level of confidence in judging the circumstances under which the model is applicable.
The value of establishing a mental model this way is that we can develop the internal relationships to an arbitrarily formal degree, and such relationships carry the power of logical implication. Logic can provide perfect foreknowledge, which means conceptual thinking can predict the future where subconceptual thinking can only give us good hunches and instinct can only act by rote.
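The hypothesize-test-refine cycle described above can be sketched as a toy feedback loop. Everything here is invented for illustration, including the update rule and the sample observations; the point is only the shape of the process: the model is true by definition internally (it always answers), while confidence tracks how well it applies externally:

```python
def refine_confidence(model, observations, prior=0.5, rate=0.2):
    """Toy feedback loop: nudge confidence in a model up when its
    predictions are confirmed by data, and down when they fail."""
    confidence = prior
    for inputs, outcome in observations:
        predicted = model(inputs)
        if predicted == outcome:          # model held: raise confidence
            confidence += rate * (1.0 - confidence)
        else:                             # model failed: lower confidence
            confidence -= rate * confidence
    return confidence

# Hypothesized relationship, true by definition within the model:
falls_when_dropped = lambda obj: "falls"
obs = [("stone", "falls"), ("book", "falls"), ("balloon", "floats")]
conf = refine_confidence(falls_when_dropped, obs)
print(round(conf, 3))  # 0.544
```

The balloon observation lowers confidence without touching the model itself, echoing the point that failures narrow a model's scope of applicability rather than falsifying its internal logic.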

In practice, conceptual thinking can directly solve problems quickly. Subconceptual thinking serves up solutions suggested by past experience. It improves with experience, so one can just practice more as a way of indirectly solving problems, provided the problems one wants to solve are the sort that tend to follow from trial and error. Instinctive thinking provides solutions in a way analogous to subconceptual thinking, but the accumulated experience takes much longer to become encoded genetically. Subconceptual and instinctive thinking don’t reveal why they work, they only carry the feeling that they will work. But conceptual thinking says exactly why, though the reasons hinge on the model being well-constructed and well-applied. Not only do logical models specify exact relationships, but logic can also be leveraged arbitrarily with chained causal reasoning. Subconcepts and instincts can also form chains given enough time, but only if there is an incremental benefit from each step in the chain. While problem-solving with concepts is not an exact science, it has unlimited potential because there are no limits on the number of models we can propose. While no model is perfect, many models can be found and then refined to become increasingly helpful, which is a proactive and fast approach relative to the alternatives.

Internally, a model can be said to be perfectly objective because its implications are completely independent of who uses it, but externally models must be applied judiciously, which invariably involves some subjective judgments. Scientific modeling requires extra precautions to maximize objectivity and minimize subjectivity, which I will discuss more later. But we are confident enough about most of the modeling we do to understand and interact with the world that we believe it would count as objective by most standards. We also all maintain a highly subjective view of the world, quite idiosyncratic to our own experience and perspective, but it still counts as conceptual modeling because we draw implications from it.

When we look at something and recognize it as an apple, the work is done for us nonconsciously. That nonconscious work requires many pattern comparisons beneath our awareness that cascade from percepts to subconcepts to concepts. We perceive red, round, and shiny and associate subconceptually to organic, plant, food, good, etc., not as named associations but as impressions. Helped by these associations, conceptual associations are triggered, including concepts for some of the percepts and subconcepts, and more discrete concepts like fruit and apple. Percepts give us information, subconcepts give us familiarity, and concepts give us models and hence understanding. In this case, once we understand what we are looking at, instinct may trigger hunger or happiness.
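The cascade just described, percepts fanning out into unnamed subconceptual impressions that in turn trigger discrete concepts, can be sketched as a toy two-stage lookup. The association tables below are invented for illustration (they loosely follow the apple example in the text):

```python
# Toy association tables: percept -> impressions, concept -> required impressions.
subconcept_links = {
    "red":   {"organic", "food"},
    "round": {"organic", "plant"},
    "shiny": {"good"},
}
concept_triggers = {
    "fruit": {"organic", "plant", "food"},
    "apple": {"organic", "plant", "food", "good"},
}

def recognize(percepts):
    # Stage 1: percepts fan out into subconceptual impressions,
    # felt as familiarity rather than named associations.
    impressions = set()
    for p in percepts:
        impressions |= subconcept_links.get(p, set())
    # Stage 2: a concept fires when all of its triggering
    # impressions are present, yielding understanding.
    return {c for c, needed in concept_triggers.items() if needed <= impressions}

print(sorted(recognize({"red", "round", "shiny"})))  # ['apple', 'fruit']
```

A real brain does nothing so tabular, of course; the sketch only shows why the intermediate subconceptual layer matters: concepts are triggered by the pooled impressions, not by the raw percepts directly.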

This has been a quick overview of function in the abstract and how it presents in the mind. What I have said so far is generally supported by common knowledge, but my goal is to develop a rigorous scientific basis. To get there, I am going to have to review the philosophy of science and then overhaul it to be more appropriate to the study of the mind.

Natural and Artificial Information

I divided information management systems above into two classes: life in general, which stores information in DNA, and brains, which store it neurochemically. Let me now make a further distinction of natural and artificial information, where the former derives from DNA and the latter goes above and beyond DNA in a crucial sense. All information in DNA is natural, and most of the information stored in brains is natural as well, but some is artificial. Information that is created nonconsciously or through percepts or instincts is a direct consequence of genetic mechanisms and so is natural. Information that is created consciously through subconceptual or conceptual thinking is artificial. Subconcepts are arguably a gray area because they are formed by nonconscious mechanisms, but I refer here to the conscious component of subconceptual thinking, which includes developing common sense and an intuitive feel for how things work. Sometimes intuition produces eureka moments, but mostly it just helps us integrate our knowledge in familiar and practical ways. Concepts are more obviously artificial. If we see a pattern and form a concept to represent it, that is a conscious decision. For example, when we are very young we notice that one often gets in and out of a room through a flap that swings open and closed, and we form the concept DOOR for that flap. We categorize all the flaps we have seen that provide access through openings as examples of doors. Later we learn the word for it. Our concept DOOR is artificial because it was created by real-time data analysis and is not a direct byproduct of genetic information. Our initial eureka moment that doors are a kind of thing one encounters repeatedly, sharing common traits, derives from subconceptual knowledge: we all remember all the doors we have ever seen based on the salient, functional properties we noticed about them at the time.
Our inclination to group them under the concept DOOR came entirely naturally to us, as the ability to form concepts is innate. But although we are designed to think this way, the creations of thought are not anticipated by the genes and so are artificial. Whether or not we have free will to form and use concepts as we like is a more subtle question which I will get to later.

Physically, every moment of our lives is completely unique, so from a physical standpoint we should have no idea what will happen next, and for that matter, we should not even have ideas. In theory, if someone (Laplace’s Demon) “knows the precise location and momentum of every atom in the universe, their past and future values for any given time are entailed; they can be calculated from the laws of classical mechanics.” We now know from quantum uncertainty that the world may run like a clock but still be unpredictable. Putting this aside, though, only the demon should be able to predict the future. But it turns out that we can predict the future of many things with near perfect accuracy, and have minds with ideas to do it. The broad reason is information management, but the narrow reason I am focusing on here is conceptual information management. We group the world into concepts, and the uniformity of nature allows those concepts to “reappear” many times in many ways, even though each instance was actually unique and is only related to the others functionally, not physically (because relations are always functional). So we look for patterns everywhere, and when we find one that seems to repeat in different contexts we automatically form a concept to group it. Most of our concepts have a very transient nature, but they group together into broader and broader categories and intersect with the concepts other people form through language. Each concept is continuously confirmed and revised by each instance we experience. And concepts work together in mental models to create logical frameworks that explain how the world works, which is the source of our more impressive predictive powers. Because we can reflect on our thoughts endlessly, we have unlimited potential to review and revise, and so can develop our concepts in new, unpredictable directions. Artificial information is chiefly characterized by logical frameworks that are continually revised through feedback loops.
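The automatic grouping described above, where a pattern that repeats across contexts gets promoted to a concept, can be sketched as a toy frequency threshold. The threshold, the feature bundles, and the promotion rule are all invented for illustration:

```python
from collections import Counter

def form_concepts(pattern_stream, threshold=3):
    """Toy concept formation: a recurring feature bundle is promoted
    to a concept once it has been observed `threshold` times."""
    counts = Counter(frozenset(p) for p in pattern_stream)
    return {bundle for bundle, n in counts.items() if n >= threshold}

# Each tuple is one unique encounter; only its feature bundle repeats.
stream = [("red", "round"), ("red", "round"), ("tall", "green"),
          ("red", "round"), ("tall", "green")]
concepts = form_concepts(stream)
print(frozenset({"red", "round"}) in concepts)  # True
```

Each element of the stream stands for a physically unique encounter; what repeats, and what the concept captures, is only the functional pattern, which is the point the paragraph makes about instances being related functionally rather than physically.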

While all advanced animals have some capacity to think subconceptually and conceptually, humans have a few strengths that let them leverage these capacities to produce accelerating change, an exponential growth caused by information. Exponential growth in nature usually results in an exhaustion of resources and a 99% die-off. Human civilization is quickly converging on such a catastrophe, but my point here is that accelerating change has been shaping human evolution for millions of years and civilization for thousands of years. Evolution improved our capacity for subconceptual and conceptual thinking, and then civilization helped us leverage artificial constructs functionally through knowledge and physically through artifacts. Artificial, meaning from an art or craft, is the conscious use of the imagination in a skillful way, which is to say functionally rather than randomly. While other advanced animals can use subconcepts and concepts, they can’t leverage them even remotely as well as we can to produce artifacts, though some may well develop significant knowledge.

The study of the mind includes its capacity to manage both natural and artificial information, but our stronger interest is not in the kinds of natural information we share with other animals but in the artificial information that makes humans unique. We don’t yet understand a great deal about either, but natural information is inherently more tractable to study because its feedback loops are less abstract. Our percepts and instincts have fixed, identifiable functions and so probably mostly have relatively fixed, discoverable physical mechanisms. For example, we have a fair idea how visual processing works in humans, even though we don’t yet know how the images appear in our conscious mind or what our conscious minds physically are. Another example is hunger. The hormones leptin and ghrelin are produced by the body, and their levels in the blood are read by receptors in the hypothalamus to produce the feeling of hunger. Emotions are produced entirely within the brain using feedback from subconcepts and concepts. The amygdala is a pair of almond-shaped clusters in the brain that are central to emotional response. While much is known, much more still is not; emotions involve a vastly more complex interplay of information feeding back on itself than hunger does. Even so, the genetic basis of emotion can eventually be teased out, even if the way we deal with our emotions remains subject to many artificial influences.

What makes humans unique is our capacity to think in ways that generate artificial information, which constitutes much of what we think of as understanding. It is something of a lie, because artificial information, much like the related word artifice, consists solely of models that bias the interpretation of what is happening towards our purposes. Even when we purport that our purpose is to expose the truth, our real purpose is to gain functional advantage. But it’s not deceit; maximizing functional advantage is effectively the real nature of truth. We can’t see physical noumenal truth directly, so we must settle for the most effective phenomenal approximations. So the question is, how do we generate artificial information? Our underlying capacity to think with subconcepts and concepts is entirely natural, provided by good genes. That natural ability generates artificial information, and that information includes new ways to think, some of which we dream up ourselves and some of which we learn from others. These new ways to think give us a boosted capacity to think that is artificial, and that artificial capacity can be leveraged to become more powerful in some ways than our native capacities. So the study of how we think needs to encompass both our natural talents and our learned methods.

Additionally, the study of how we think is itself necessarily an artificial informational construct, so I will have to devote some attention to the question of how to study the mind. What science studies how the mind works? The natural part is covered by the life sciences, which study natural information. Artificial information is studied by the social and applied sciences. The other sciences, including the physical and formal sciences, don’t study information, except that some formal sciences study abstract, non-living information systems. Psychology, which is the study of the mind, is a life science to the extent it is concerned with innate mechanisms, which are fixed and predictable, and it is a social science to the extent it is concerned with the effects of cultural or artificial feedback on the behavior of the mind. So psychology has sufficient scope to study the whole mind, making this work primarily a work of psychology. I have to note again here that an important new branch of psychology, cognitive science, was launched in the 1970s to take the computational aspects of mind more seriously than the existing subfields of psychology were doing. This push came mostly from the artificial intelligence community, but included linguistics and neuroscience. Where psychology is largely seen as a soft science, cognitive science sought more hard-science credibility. In my terminology, hard and soft correspond to natural and artificial. Yes, it is easier to study natural information because the mechanisms managed by DNA, although complicated, are relatively static and tend to be shaped by long-term adaptive pressures which we have some hope of identifying. In other words, we can mostly figure it out given enough time. Artificial information, on the other hand, is arbitrarily more complicated, very dynamic, and shaped by short-term pressures. Sciences that study artificial information are therefore intrinsically less exact.
Many branches of psychology necessarily encompass both natural and artificial information, which arguably compromises their explanatory reach. But quite a few branches, including behavioral neuroscience, cognitive psychology, developmental psychology, and evolutionary psychology, explicitly exclude artificial information (perhaps we should refactor cognitive science back into the psychology department). Note that most of our talent for thinking with artificial information is itself natural. As we mature, we will naturally learn to think better, improving our ability to prioritize, broaden our perspective, and think logically or rationally. No academic field studies how we think or has anything to say about it; nor do we attempt to teach people how to think. We have cultivated rhetoric, the persuasive use of language, and we have some tips and tricks to spur creativity, like brainstorming, thinking outside the box, and improvisation, and some strategies for prioritization and organization, like the 80/20 rule, top-down, bottom-up, breadth-first, depth-first, and flow charting, but we don’t know how we think and so can’t say what might help us do it better. I would contend that we know more than we think we know about how we think; we just need to organize what we know from the top down to see where we stand.
