Hey, Science, We’re Over Here

Scientists don’t know quite what to make of the mind. They are torn between two extremes: the physical view that the brain is a machine and the mind is a process running in it, and the ideal view that the mind is non-physical or trans-physical, despite operating in a physical brain. The first camp, called eliminative materialism (or just eliminativism, materialism, or reductionism), holds that only the physical exists, as supported by the hard scientific theories and evidence of physics, chemistry, and biology. At the far end of the second camp are the solipsists, who hold that only one’s own mind exists, along with a larger group of idealists, who hold that one or more minds exist but the physical world does not. Most members of the second camp, however, acknowledge physical existence but think something about our mental life can never be reduced to purely physical terms, though they can’t quite put their finger on what it is. Social science and our own intuition both assume that mental existence is something beyond physical existence. A materialist will hold either that (a) there are no mental states, just brain states (eliminativism), or that (b) there are mental states, but they can be viewed as (reduced to) brain states (reductionism). An idealist will hold that mental states can’t be reduced, which means they have some kind of existence beyond the physical, and that the physical world might exist, but, if so, its existence can never be conclusively proven.

I propose to resolve all these issues by reintroducing dualism, the idea of a separate kind of existence for mind and matter, but I will do so in a way that is compatible with materialist science. While I agree that there is a physical basis for everything physical, I don’t agree that all physical things are only physical. Some physical things (namely living things and artifacts they create) are also functional, which is a distinct kind of existence. While one can explain these things in physical terms, physical explanations can’t address function, which is a meaningless concept from a physical perspective. I will argue that functional existence transcends physical existence and is hence independent of it, even though our awareness of functional existence and use of it are mediated physically. This is because function is indirect; while it can be applied to physical circumstances, functional existence is essentially the realm of possibility, not actuality. The eliminativist idea that everything that exists is physical, aka physical monism, is only true so long as one is referring only to physical things that are not functional. But physical systems can become functional by managing information, which is a physical way of tracking possibilities indirectly. Once physical things start to manage information, physical monism breaks down. Function is an additional kind of existence that characterizes a system’s capabilities rather than its substance. Functional systems can be physical or fictional systems, but if they exist physically then their functional reach has physical limitations and their physical reach depends on their functional influence, which makes them quite unlike any other physical system because information is applied using feedback to predict and influence physical events. A capability can be thought of as a “power” to do something, which is a distinct kind of thing from the underlying physical mechanisms that help make it possible. Living things principally exist functionally and only secondarily physically because natural selection selects functions, not forms. Consequently, any attempt to understand living things must focus first on function and then on form. The brain is an organ that dynamically manages function using information, and the mind is a subprocess of the brain that manages high-level functionality through a first-person interface. How the brain manages function is so abstracted from physical mechanisms that we only marginally understand how it physically works, and the mind is still further abstracted so that we have almost no idea how it physically works or even what fraction of the brain constitutes the mind. What we even mean by the word “mind” is somewhat nebulous, but I think most people would agree that “mind” refers to our conscious awareness and its capacities, notably our continuous sense of self, sense of agency (control), memory, and capacity for thought. These features are but a small part of the brain’s responsibilities, so the mind can fairly be considered a subset of the brain’s functions. Finally, note that scientific explanations themselves (even those of materialists) are entirely functional as they speak to capabilities and have no fixed physical form. An eliminative materialist will hold that a scientific theory is ultimately a physical thing, having a physical form in each of our brains. I hold that this is silly and that all ideas and theories are functional things built out of a fabric of relationships between other functional things.

Reductionists reject downward causation[1] as a nonsensical emergent power of complex systems. The idea is that some mysterious, nonphysical power working at a “higher” level causes changes at a “lower” level. For example, downward causation might be taken to imply that a snowflake’s overall hexagonal shape causes individual water molecules to attach to certain places to maintain the symmetry of the whole crystal. But this is patently untrue; only the local conditions each molecule encounters affect its placement. Each new water molecule will most likely attach to the spots on the crystal most favored by the conditions that snowflake currently encounters, which constantly change as it forms. But those favored spots at any given instant during its formation are hexagonally symmetric, making symmetrical growth most likely. The symmetry only reflects the snowflake’s history, not an organizing force. But just because downward causation doesn’t exist in purely physical systems doesn’t mean it doesn’t exist in functional systems like living things. Any system capable of leveraging feedback can make adjustments to reach an objective, hence “causing” it, provided it also has an evaluative mechanism to prefer one objective over another. Mechanisms that record such objectives and have preferences about reaching them are called information management systems, and living things use several varieties of them to bring about downward causation through feedback. It is a misnomer to call this capacity of life an “emergent” property because it doesn’t appear from nothing; it is just that certain physical systems can manage information and apply feedback. So we can see that the “higher” levels of organization are these information management systems and the “lower” levels are the things under management. Living things use at least two and arguably four or more information management systems at different levels (more on these levels later). Reductionists hold that causation is the direct consequence of subatomic forces, which can be demonstrated for nonliving natural systems, and they further conclude that, since life is natural with nothing added (e.g. no mystical “spirit”), the mind is an epiphenomenon or an illusion and, in either case, is certainly not causing anything to happen but is merely observing. Although this strongly contradicts our experience, reductionists will just say that our perception of time and logic is biased, so we should ignore our feelings and accept that our existence as agents in the world is only a convenient way of describing matters really managed at the subatomic level.
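
To make this concrete, below is a minimal sketch (in Python, with all names invented for illustration) of the kind of loop I have in mind: a recorded objective, which is information, steers low-level changes toward itself, even though every individual step remains local and mechanical.

```python
# A minimal sketch of downward causation via feedback: a recorded
# objective (information) steers low-level state changes. All names
# here are invented for illustration.

def regulate(state: float, objective: float, gain: float = 0.5, steps: int = 20) -> float:
    """Drive `state` toward `objective` using negative feedback."""
    for _ in range(steps):
        error = objective - state   # evaluative mechanism: compare feedback to the recorded objective
        state += gain * error       # adjustment: an ordinary "low-level" physical step
    return state

# The objective (a piece of information) "causes" the final state,
# yet every individual step is purely local and mechanical.
print(regulate(state=10.0, objective=37.0))  # -> approximately 37.0
```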

Bob Doyle (the Information Philosopher) explains it like this:

Some biologists (e.g., Ernst Mayr) have argued that biology is not reducible to physics and chemistry, although it is completely consistent with the laws of physics. Even the apparent violation of the second law of thermodynamics has been explained because living beings are open systems exchanging matter, energy, and especially information with their environment. In particular, biological systems have a history that physical systems do not, they store knowledge that allows them to be cognitive systems, and they process information at a very fine (atomic/molecular) level.

Information is neither matter nor energy, but it needs matter for its embodiment and energy for its communication.

A living being is a form through which passes a flow of matter and energy (with low or “negative” entropy, the physical equivalent of information). Genetic information is used to build the information-rich matter into an overall information structure that contains a very large number of hierarchically organized information structures. Emergent higher levels exert downward causation on the contents of the lower levels.[2]

I wrote most of this book before I discovered Bob Doyle’s work, so I did not know that anyone else had proposed the full-fledged existence of function/information independent of physical matter. But I’m glad to see that someone else thinks along the same lines as me. Doyle focuses mostly on information while I focus mostly on explaining the mind, so we have taken this common starting point in different directions.

I am not proposing property dualism, the idea that everything is physical but has both physical properties and informational properties. No physical thing has an informational property. Rather, I agree with eliminative materialists that physical things are just physical so far as physical existence goes. But physical systems called information management systems can arise that can exploit feedback to arbitrary degrees by recording that feedback as information, and this information can be viewed from a different perspective as having a distinct kind of existence. It doesn’t exist if your perspective is physical, but it is useful to have more than one perspective about what constitutes existence if your goal is to explain things. Ironically, explanation itself is not physical but is a functional kind of thing, so eliminative materialists can only argue their case by ignoring the very medium they are using to do so. But functional existence is a bit harder to put your finger on because function and information are abstractions which can never be perfectly described. Perfect descriptions are impossible because they are ultimately constructed out of feedback loops, which can reveal likelihoods but not certainties. Tautological information can be considered perfect, but only because it is true by definition. Tautological information doesn’t actually become functional until it is applied in a useful way, and use implies that our system of interest contains at least some element that is not known by definition or construction. In other words, mathematical perfection is useful as far as you can push it, but will not solve all problems in the real world or in sufficiently complex imaginary worlds. So function and our capacity to understand function are both nonphysical perspectives about what feedback can tell us. They achieve a kind of physical existence through information, which persists independent of whether it is being processed actively in a brain or computer. This independent persistence constitutes a latent reserve of functional capacity and as such is a nonphysical entity existing in a physical world. We can prove that the transphysical aspect of their existence is real with a simple thought experiment. For any functional capacity we can think of, we can imagine that any person could have this capacity, or we can imagine that a computer or robot could be built with this capacity, or we can imagine that the capacity could exist theoretically without any physical manifestation. In so doing we have abstracted the function away from the physical and freed it as an independent kind of existence. The indirect nature of information makes it distinct from its possible applications or connections to the physical world. So we can say that existence can be broken down into direct forms (of which physical is the only form we know of) and indirect forms (of which the information that drives life and the mind is the only physically realized example we know of, but we can imagine more).

Though all functional things must take a physical form in a physical universe such as the one we are in, this doesn’t mean function “reduces” to the physical. The quest for reductionism is misguided and has been holding science back for entirely too long. We have to get past it before we can make meaningful progress in subjects where functional existence is paramount. To do that, let’s take a closer look at where and how function arises in a physical universe. The only place function has appeared is in living organisms, which achieved it through evolution, which applies feedback from current situations to improve the chances of survival in future situations. The biochemical mechanisms they employ matter more from a functional standpoint than a physical standpoint because they are only selected for what they can do, giving them a reason to exist, and not for how they do it. In the nonliving world, things don’t happen for a reason, they just happen. We can predict subatomic and atomic interactions using physics, and molecular interactions using chemistry. Linus Pauling’s 1931 paper “The Nature of the Chemical Bond” showed that chemistry could in principle be reduced to physics.[3][4] Geology and earth science generalize physics and chemistry to a higher level but reduce fully to them. However, while physical laws work well to predict the behavior of simple physical systems, they are not enough to help us predict complex physical systems, where complexity covers chaotic, complex, and functional factors, alone or in combination. Chaos arises when small changes in starting conditions have butterfly effects that eventually change the whole system. Complex factors are those whose intricate interactions exceed the predictive range of our available models, which necessarily simplify. We see both chaos and complexity in weather patterns, and yet we have devised models that are pretty helpful at predicting them. These models are based on physical laws but use heuristics to approximate how systems will behave over time. But the weather and all other nonliving systems don’t control their own behavior; they are reactive, not proactive. Living things introduce functional factors, aka capabilities. Organisms are complex adaptive systems (CAS) that exploit positive feedback to perpetuate changes through DNA. I can’t prove that complex adaptive systems are the only way functionality could arise in a physical universe, but I don’t see how a system could get there without leveraging cycles of positive and negative feedback. Over time, a CAS creates an abstract quantity called information, which is a pattern that has occurred before and so is likely to occur again. The system then exploits the information to alter its destiny. Information can never reveal the future, but it does help identify patterns that are more likely to happen than random chance, and anything better than random chance constitutes useful predictive power.
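
That last point can be shown in a few lines of code. Here is a minimal sketch (all names and the example sequence are invented for illustration) of information in exactly this sense: a pattern that occurred before is recorded and then used to predict better than chance.

```python
from collections import Counter, defaultdict

# A toy illustration of "information": patterns that occurred before are
# likely to occur again, giving predictions better than random chance.

def train(sequence):
    """Count which symbol tends to follow each symbol."""
    model = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, symbol):
    """Predict the most likely next symbol, or None if unseen."""
    followers = model.get(symbol)
    return followers.most_common(1)[0][0] if followers else None

history = "abcabcabcabx"      # feedback: observed past events
model = train(history)
print(predict(model, "a"))    # -> 'b': far better than a chance guess
```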

Functional systems, i.e. information management systems, must be physical in a physical universe. But because the mechanisms that control them are organized around what they can do (their capability) instead of how they do it (their physical mechanism), we must speak of their functional existence in addition to their physical existence. By leveraging feedback, these systems acquire a capacity to refer to something else, to be about something other than themselves that is not directly connected to them. This indirection is the point of detachment where functional existence arises and (in a sense) leaves physical existence behind. At this point, the specific link is broken and a general “link” is established. Such an indirect link refers only to the fact that the information can be applied appropriately, not that any link is stored in any way. How a functional system uses information about something else to influence it can be implemented physically in many ways, but understanding those ways is not relevant to understanding the information or the function it makes possible. At the point where the information detaches, it gains existential independence; it is about something without it much mattering how it accomplishes this. It has a physical basis, but that won’t help us explain its functional capabilities (though it does place some constraints on those functions). While every brain and computer has different processing powers (memory, speed, I/O channels, etc.), in principle they can manage the same information in completely different ways because the measure of information is the function it makes possible, not how it is implemented. That said, in practice, physical strengths and limitations make each kind of brain and computer stronger or weaker at different tasks, and so they must be taken into consideration.
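
Programmers see this independence of function from mechanism every day. A hedged sketch (names invented for illustration): the two routines below do the same thing by entirely different mechanisms, and the function they share belongs to neither mechanism.

```python
# Two implementations of the same function. The "information" each one
# manages (what it can do) is identical; the mechanism is not.

def sorted_iterative(xs):
    """Selection sort: repeatedly extract the minimum."""
    xs, out = list(xs), []
    while xs:
        out.append(xs.pop(xs.index(min(xs))))
    return out

def sorted_recursive(xs):
    """Quicksort: partition around a pivot and recurse."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (sorted_recursive([x for x in rest if x < pivot])
            + [pivot]
            + sorted_recursive([x for x in rest if x >= pivot]))

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert sorted_iterative(data) == sorted_recursive(data)  # same capability, different mechanism
```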

I call the new brand of dualism I am proposing form and function dualism. This stance says that while everything physical is strictly physical, everything functional is strictly functional. Physical configurations can act in functional ways via information management systems, and these systems can only be understood from a functional perspective (because understanding is itself functional). Consequently, both physical and functional things can be said to exist, never as different aspects of each other but as completely independent kinds of existence. Functional existence can be discussed on a theoretical basis independent of any physical information management system that might implement it. So, in this sense, mathematics exists functionally whether we know it or not. More abstractly, even functional entities entirely dependent on physical implementations for their physical existence, like the functional aspect of people, could potentially be replicated to a sufficient level of functional detail on another physical platform, e.g. a computer, or they could be spoken of on a theoretical basis. In fact, when we speak of other people, we are implicitly referring to their functional existence and not their physical bodies (or, at least, their bodies are secondary). So, entirely aside from religious connotations, we generally recognize this immaterial aspect of existence in humans and other organisms as the “soul.”

Information management systems that do physically exist include:

  • biological organisms, which store information in genes using DNA,
  • minds, which store information in brains neurochemically,
  • civilizations, which store information in institutions (rule-based practices) and artifacts (e.g. books), and
  • software, which stores information in computer memory devices.

Organisms, minds, civilizations, and software can be said to have functions, and it is meaningful to discuss such functions independently of the underlying physical systems they run on. More importantly, most of the meaning of these functions is quite independent of their underlying physical systems. Also note that minds heavily leverage the information stored in organisms, so one could not replicate a mind comprehensively on a computer without a functional understanding of both. Civilizations leverage minds and so would require an understanding of genes, minds, and culture to be replicated, but software stands alone. A software system employs an independent model to represent and process information. The extent to which the information in a software system can be correlated or applied outside that system is entirely subject to the interpretation of the user, at which point the combined system of software plus user acquires a dependence on the information management system of the user too.

Joscha Bach says, “We are not organisms, we are the side-effects of organisms that need to do information processing to regulate interaction with the environment.”[5] This statement presumes we define organisms as strictly physical and “we” as strictly functional, which are the senses in which we usually use these words. But saying that we are the side-effects is a bit of a joke, because it is really the other way around: the information processing (us) is primary and physical organisms are the side-effects. Bach points out that mind starts with pleasure and pain. This is the first inkling of the mind subprocess, a process within the brain, separate from all lower-level information processing, whose objective is to make top-level decisions. By summarizing low-level information into an abstract form, the behavior of the brain can be controlled more abstractly, specifically: “Pleasure tells you, do more of what you are currently doing; pain tells you, do less of what you are currently doing.” All pleasure and pain are connected to needs: social, physiological, and cognitive. In higher brains like ours, consciousness is like an orchestra conductor who coordinates a variety of activities by attending to them, prioritizing them, and then integrating them into a coherent overall strategy (his “conductor theory of consciousness”). You can perform a number of complex behaviors while sleepwalking, including going to the fridge or in some cases even making dinner or answering simple questions, but there is “nobody home”: there will be no reflection or goal-directedness. We collect that experience together as a memory of what just happened, and access to that memory gives us our experience of being conscious. Running simulations on that memory and further integrating it with the world makes our conscious experience seem meaningful to us. Bach identifies the dorsolateral prefrontal cortex as the brain region that does this in our brain, and this may well be where the lion’s share of advanced mental simulation happens, but our experience of consciousness draws other features from many other parts of the brain, some found only in higher animals and some found in almost all animals. Bach’s theory proposes that “You are not your brain, you are a story that your brain tells itself,” which is correct except for a small logical error — it puts physicalism ahead of functionalism by implying that it is meaningful to say that a brain has an “itself”; it doesn’t. The sentiment should be, “You are not your brain, you are a story that drove the evolution of your brain.” The brain is not the actor here; it is an instrument, just like the body.
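
Bach’s pleasure/pain rule is simple enough to state as code. Here is a hedged sketch (all names and numbers are invented for illustration) of a valence signal raising or lowering the tendency to repeat the current action, steering behavior without any model of why it works:

```python
import random

# Bach's pleasure/pain rule as a minimal reinforcement sketch: a signed
# valence signal adjusts the tendency to repeat the current action.

weights = {"forage": 1.0, "rest": 1.0}   # tendencies toward each action

def choose():
    """Pick an action with probability proportional to its weight."""
    return random.choices(list(weights), list(weights.values()))[0]

def feel(action: str, valence: float, rate: float = 0.2):
    """Pleasure (valence > 0): do more of this; pain (valence < 0): do less."""
    weights[action] = max(0.01, weights[action] * (1 + rate * valence))

for _ in range(100):
    action = choose()
    valence = 1.0 if action == "forage" else -1.0   # stand-in for a need being met or frustrated
    feel(action, valence)

print(weights)  # "forage" now dominates: behavior steered with no model of why
```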

I’m not going to go too deeply into the mechanisms of the brain, both because we only know superficial things about them and because my thesis is that function drives form, but I would like to talk for a moment about the default mode network (DMN). This set of interacting brain regions has been found to be highly active when people are not focused on the world around them, either because they are resting or because they are daydreaming, reflecting, or planning, but in any case not engaged in a task. It is analogous to a car engine idling with the clutch disengaged so that power isn’t going to the wheels. Probably more than any other animal, people maintain a large body of information associated with their sense of self, their sense of others (theory of mind), and planning, and so need to be able to comfortably balance using their minds for these activities versus using them for active tasks. We like to think we naturally maintain a balance between the two that is healthy for us, but our culture has come to prioritize task-oriented thinking over reflection. Excessive focus on tasks is stressful, and greater engagement of the default mode network is one remedy. This can be achieved through rest and relaxation, meditation and mindfulness exercises, and, perhaps most effectively of all, via psychotropic drugs like psilocybin and LSD. Even a single experience with psychedelic drugs can durably improve the balance, potentially relieving depression or improving one’s outlook, though more research needs to be done to establish good guidelines (and they also need to be decriminalized!).[6][7][8]

Socrates and Plato recognized that function stands qualitatively apart from the material world and explained it using teleology. Teleology is the idea that in addition to a form or material cause, things also have a function or purpose, their final cause. They understood that while material causes were always present and hence necessary, they were not sufficient or final to explain why many things were the way they were. Purposes humans imposed on objects like forks were called extrinsic, while purposes inherent to objects, like an acorn’s purpose to become a tree, were called intrinsic. Aristotle listed four causes to explain change in the world, not just two: material, formal, efficient, and final. The formal cause attaches meaning to the shape something will have, essentially a generalization or classification of it from an informational perspective. While this was an intrinsic property to the Greeks, nowadays we recognize that classification is extrinsically assigned for our convenience. The efficient cause is what we usually mean by cause today, i.e. cause and effect. Physicalism recognizes material, formal, and efficient causes as physical substance, how we classify it, and what changed it, but physicalism rejects the final, teleological cause because it sees no mechanism. After all, objects don’t sink to lower points because it is their final cause, but simply because of gravity. But for functional systems, teleology is both intuitively true and actually true, and the mechanism is information management.

Although Aristotle had put science on a firm footing by recognizing the need for teleological causes, he failed to recognize the source of purpose in natural systems. I contend that information management systems are the source; they accomplish purposes and functions whenever they apply information to guide their actions. The Scientific Revolution picked up four hundred years ago where Aristotle left off, but information as such would not be discovered until 1948,[9] which then led into systems theory,[10] also called cybernetics, in the following decades. Complex adaptive systems are complex systems that evolve, and living organisms are complex adaptive systems with autopoiesis, the ability to maintain and replicate themselves. Brains are dynamic information management systems that create and manage information in real time. Minds are subprocesses running in brains that create a first-person perspective to facilitate top-level decisions. Civilizations and software are human-designed information management systems that depend on people or computers to run them.
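
That 1948 discovery was Claude Shannon’s measure of information, which quantifies a source by its average surprise. A short sketch of the idea (the example strings are invented for illustration):

```python
from collections import Counter
from math import log2

# Shannon's 1948 measure: the average surprise (in bits) of a source.
# A predictable pattern carries little information per symbol; a random
# one carries more.

def entropy(message: str) -> float:
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(entropy("aaaaaaaa"))  # 0.0 bits: perfectly predictable
print(entropy("abababab"))  # 1.0 bit per symbol
print(entropy("abcdefgh"))  # 3.0 bits per symbol: maximally surprising here
```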

Gilbert Ryle famously put the nail in the coffin of Cartesian dualism in The Concept of Mind[11] in 1949. While we know (and knew then) that the proposed mental “thinking substance” of Descartes that interacted with the brain in the pineal gland does not exist as a physical substance, Ryle felt it still had tacit if not explicit “official” support. While our lives unfold in two arenas, one of “inner” mental happenings and one of “outer” physical happenings, each with a distinct vocabulary, he felt philosophy presumed more than this: “It is assumed that there are two different kinds of existence or status. What exists or happens may have the status of physical existence, or it may have the status of mental existence.” He disagreed, contending that the mind is not a “ghost in the machine,” something independent from the brain that happens to interact with it. To explain why, he introduced the term “category mistake” to describe a situation where one inadvertently assumes something to be a member of a category when it is actually of a different sort of category. His examples focused on parts not being the same sort of thing as wholes, e.g. someone expecting to see a forest but being shown some trees might ask, “But where is the forest?” In this sort of example, the mistake arises from a failure to understand that forest has a different scope than tree.[12] He then contended that the way we isolate our mental existence from our physical existence was just a much larger category mistake, which happens because we speak and think of the physical and the mental with two non-intersecting vocabularies and conceptual frameworks, yet we assume it makes sense to compare them with each other. As he put it, “The belief that there is a polar opposition between Mind and Matter is the belief that they are terms of the same logical type.” Ryle advocated the eliminativist stance: if we understood neurochemistry well enough, we could describe the mechanical processes by which the mind operates instead of saying things like “think” and “feel.”

But Ryle was more mistaken than Descartes. His mistake was in thinking that the whole problem was a category mistake, when actually only a superficial aspect of it was. Yes, it is true, the mechanics of what happens mentally can be explained in physical terms because the brain is a physical mechanism like a clock. But that is not the whole problem, and it is not the part that interested Descartes or that interests us, because saying how the clock works is not really the interesting part — the interesting part is the purpose of the clock: to tell time. The function of the brain cannot be explained physically because purpose has no physical correlate. The brain and the mind have a purpose — to control the body — but that function cannot be deduced from a physical examination. One can tell that nerves from the brain animate hands, but one must invoke the concept of purpose to see why. Ryle saw the superficial category mistake (forgetting that the brain is a machine) but missed the significant categorical difference (that function is not form). Function can never be reduced to form, even though it can only occur in a physical system by leveraging form. When we talk about the mind, we now know and appreciate that it is the product of processes running in the brain, but talking about the mind is not the same as talking about those processes, any more than talking about cogs is the same as talking about telling time. The subject matter of the brain and mind is functional and never the same as the physical means used to think about them. Ryle concluded, “It is perfectly proper to say, in one logical tone of voice, that there exist minds and to say, in another logical tone of voice, that there exist bodies. But these expressions do not indicate two different types of existence, for ‘existence’ is not a generic word like ‘colored’ or ‘sexed.’” But he was wrong. In this case they really do indicate two different types of existence. Yes, the mind has a physical manifestation as a subprocess of the brain, so it is physical in that sense. But our primary sense of the word mind refers to what it does, which is entirely functional. This is the kind of dualism Descartes was grasping for, but he overstepped his knowledge by attempting to provide the physical explanation. The true explanation is that functional things can have physical implementations, and they must for function to impact the physical world, but function and information are fundamentally not physical, and their existence is not dependent on space or time; they are pure expressions of hypothetical relationships and possibilities.

The path of scientific progress has influenced our perspective. The scientific method, which used observation, measurement, and experiment to validate hypotheses about the natural world, split the empirical sciences from the formal sciences like mathematics, which deal in immaterial abstractions. The empirical sciences then divided into natural sciences and social sciences because progress in the latter was only possible by making some irreducible assumptions about human nature, chiefly that we have minds and know how to use them. These assumptions implicitly acknowledge the existence of function in the life of the mind without having to spell it out as such. Darwin’s discovery of evolution then split the natural sciences into physical and biological sciences. Until that point, scientists considered living organisms to be complex machines operating under physical laws, but now organisms could only be explained through the general-purpose advantages of inherited traits. This shift from the specific to the general is the foundation of information and what distinguishes it from the physical. So both the biological and social sciences tacitly build on a foundation of the teleological existence of function, but they are reluctant to admit it because material science has rejected teleology as mystical. But a physical science that ignores the existence of natural information management systems can’t explain all of nature.

The social sciences presume the existence of states of mind which we understand subjectively but which objectively arise from neural activity. The idea that mental states are not entirely reducible to brain activity is called emergentism. An emergent behavior is one for which the whole is somehow more than the parts. Emergence is real, but what is actually “emerging” in information management systems is functionality. From a physical perspective the system is not doing some “new” kind of thing it could not do before; it is still essentially a set of cogs and wheels spinning. All that has changed is that feedback is being collected to let the system affect itself, a capacity I call functionality. The behavior that results builds on a vastly higher level of complexity which can only be understood or explained through paradigms like information and functionality. While there are an infinite number of ways one could characterize or describe information and functionality, all these ways have in common that they are detecting patterns to predict more patterns. Because one must look to information and function to explain these systems and not only to physical causes, it is as if something new emerged in organisms and the brain. Viewed abstractly, one could say that the simplistic causal chains of physical laws are broken and replaced by functional chains in functional systems. This is because in a system driven by feedback, cause itself is more of a two-way street in which many interactions between before-events and after-events yield functional relationships, which the underlying physical system leverages to achieve functional ends. The physical system is somewhat irrelevant to the capabilities of a functional system, which is in many ways independent of it. Ironically, the functional system could thus equally claim the physical system emerges from it, which is the claim of idealism. All of language, including this discussion and everything else the mind does, is a functional construct realized with the assistance of physical mechanisms but not “emerging” from them so much as from information and information management processes. A job does not emerge from a tool, but, through feedback, a tool can come to be designed to perform the job better. Thus, from an ideal perspective, function begets form.

Before life came along, the world was entirely physical; particles hit particles following natural physical and chemical rules. While we don’t know how the feedback loops necessary for life were first bootstrapped, we know conditions must have existed that allowed feedback and competition to take hold. I will discuss a scenario for this later, but the upshot is that DNA became an information store for all the chemicals of life, and it became embedded in single and later multicellular organisms that could replicate themselves. According to form and function dualism, the information in DNA is a nonphysical aspect that confers capabilities to the organism. We characterize those capabilities in biology as genetic traits that confer adaptive advantages or disadvantages relative to alternative versions (alleles) of the same genes. Chemically, a gene either codes for a protein or regulates other genes. It doesn’t matter to the feedback loops of natural selection what a given gene does chemically, just whether the organism survives. Survival or death is the way the function of a gene is measured. Chemistry is necessary for function, but survival is indifferent to chemistry. In many cases, the chemical basis of a genetic advantage seems clear, while in others the “purpose” of the gene can be hard to identify. It seems likely that each protein would fulfill one biological function (e.g. catalyzing a given chemical reaction), because heritability derives from selection events on one function at a time, and multiple functions would be challenging for natural selection to maintain since a mutation is unlikely to benefit two functions at once. However, cases of protein moonlighting, in which the same protein performs unrelated functions, are now well-documented. In the best-known case, different sequences in the DNA for crystallins code either for enzyme function or for transparency (as the protein is used to make lenses). A majority of proteins may moonlight, but, in any case, it is very hard to unravel all the effects of even a primary protein function. So any causal model of gene function will necessarily gloss over subtle benefits and costs. A gene’s real purpose is the consolidated sum of the roles it has played in facilitating life and averting death in every moment since it first appeared. The gene’s functionality is real but has a deep complexity that can only be partially understood. Even so, approximating that function through generalized traits works pretty well in most cases. Although evolution is not directed, competition preferentially selects effective mechanisms, which is such a strong pressure that it tends to make genes very good at what they do. Mutations create opportunities for new functionality, but can also disable genes and their traits when niches change. To recap, genes collect information using DNA from feedback that predicts that a competitive advantage for a forebear will yield a competitive advantage for a descendant. It is a slow way to collect information, but evolution has had plenty of time, and it has proven effective.
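
The indifference of selection to mechanism is easy to demonstrate. In the toy loop below (a hedged sketch; every name and parameter is invented for illustration), the only feedback is a survival score; nothing ever inspects how a genome produces it, yet the genes become very good at what they do:

```python
import random

# Selection "sees" only survival (function), never mechanism. Genomes
# here are opaque bit lists; fitness is the only feedback.

def fitness(genome):
    """Survival score. Selection never inspects *how* the genome works."""
    return sum(genome)  # stand-in for 'helped the organism survive'

def reproduce(genome, mutation_rate=0.05):
    """Copy with occasional mutation, the source of new functionality."""
    return [1 - g if random.random() < mutation_rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(100):
    # survival or death is the measure of a gene's function
    survivors = sorted(population, key=fitness, reverse=True)[:25]
    population = [reproduce(g) for g in survivors for _ in (0, 1)]

print(max(fitness(g) for g in population))  # near 20: competition hones the genes
```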

Beyond the information managed by the genes that form the blueprint of the body, organisms need to manage some information in real time, and instinct is a tool they all possess to do this. The mechanisms that regulate the flow of metabolites in plants via xylem and phloem are not called instincts because this feedback is applied directly without information processing (which only happens in animals via brains). Instinct covers all behavior based on information processing that doesn’t leverage experience or reasoning. Without experience or reasoning, an instinct will work the same “hardwired” way from birth to death. Instincts arise subconsciously but sometimes present themselves consciously as urges, covering all our hardwired inclinations for things like eating, mating, emotions, and drives. Instinct also includes sensory processing, such as smell, touch, body sense, and most notably vision, which creates high-fidelity 2D images and transforms them into representations of 3D objects, which are then recognized either as specific objects or as types of objects.

Instincts take ages to evolve and solve only the most frequently-encountered problems, but creatures face challenges every day where novel solutions would help. A real-time system that could tailor the solution to the problem at hand would provide a tremendous competitive advantage. Two approaches evolved to do this, and each makes use of experience and reasoning. I call the first conceptual thinking, which can also be called logical reasoning. Logical reasoning starts with premises, which are statements (predicates about subjects) taken to be true, and draws consequences from them. Subjects, predicates, and premises are concepts viewed from a logical perspective. The second approach is subconceptual thinking, which is a kitchen sink of innate data analysis capabilities plus a lookup ability. These capabilities run in parallel subconsciously and we have no conscious awareness of their inner workings, but our whole capacity for recall, which makes the “right” information pop into our minds when we need it, comes from accessing this subconceptual store. While instincts alone have fixed reactions, subconceptual thinking goes further to build on experience by accessing what has been stored before to produce customized responses. Subconceptual thinking includes common sense, pattern recognition, and intuition, but also includes much of our facility for math, language, music, and other largely innate talents that can be refined with experience. Much of what we learn from experience is subconceptual in that it is not dependent on conceptualizing or logical reasoning. Conditioning, for example, with or without reinforcement, is subconceptual. Much, or even most, of the data our brains gather about the world is subconceptual and can be trusted to help us even with little or no underlying conceptual understanding. One might loosely call all this knowledge “stereotypical” in that it all follows from patterns we have seen before, but the word stereotype connotes patterns that are applied more broadly than is justifiable. Subconcepts, however, are usually highly justifiable and give us great confidence that the world and our place in it are what we think they are. All our concepts are also stored in our subconceptual store because memory is a subconscious, parallelized capacity. But where subconcepts are just a nameless pool of associations, concepts are linked to each other by a network of relationships which can name them, give them properties, or specify logical connections. Building concepts out of subconcepts is a bit like building on quicksand in that the foundations are always weak (and arguably nonexistent). But concepts also benefit from great fluidity and can be quickly adapted to build conceptual frameworks to suit the task at hand. Conscious thinking with concepts and subconcepts to derive new patterns is called reasoning. Reasoning is the conscious capacity to “make sense” of things by producing useful information. We have just one stream of consciousness along which our attention is focused at any given moment, but we have innumerable lines of thought, which are thoughts (and the context that goes with them) that were previously in our stream of thought. We easily switch between many lines of thought from moment to moment, generally keeping track of what we are doing in the short, medium, and longer terms, both looking forward and back.
While reasoning is orchestrated consciously, it is like the ten (or one) percent of an iceberg that is visible, while most of the processing is done subconsciously. Reasoning draws heavily on memory, for which nearly all the work is subconscious, and on language, for which vocabulary, grammar, and semantics are largely subconscious, and on the weighing of preferences, which is also almost entirely subconscious. Much of reasoning boils down to iterating on variations of a line of thought until we are convinced we have found the one our subconscious likes the most. But a small part of it, logical reasoning, is more explicitly conscious. Logical reasoning (aka conceptual thinking) is restricted to concepts and firm relationships between them, and we look to logical reasoning to ensure many of our top-level decisions make sense.

Subconceptual thinking uses subconceptual data (subconcepts) while conceptual thinking uses concepts. The most primitive subconcepts, percepts, are drawn from the senses using internal processes to create a large pool of information akin to big data in computers. Subconcepts and big data are data collected without knowing its purpose. It is the sort of data that has been helpful in the past, so it is likely to be useful again. Over time we develop algorithms that mine subconcepts or big data to find useful patterns that lead to helpful actions, still without having a clear idea about what the data “means.” We don’t have to understand common sense, intuition, or music to be talented at them. Concepts, on the other hand, are akin to structured data in computers. A concept is an idealization of a pattern found in subconcepts into a generalized element with specific associated properties. While the patterns are primarily subconceptual, a network of relationships to other concepts also forms. A concept is a pure abstraction (i.e. having no correlate in the physical world) that is defined by its subconceptual properties and its relationships to other concepts. The patterns are frequently chosen so that the concept can be reliably correlated to a generalized class of entities in the physical world, but this connection is indirect and does not make the concept itself physical. Simple reasoning depends entirely on subconceptual pattern analysis, specifically recognition, intuition, induction (weight of evidence), and abduction (finding the simplest explanation). But deduction (aka entailment or cause and effect) cannot be done subconceptually, because by construction entailment requires discrete premises, not diffuse, unstructured data. The other kinds of reasoning can also leverage concepts, but deduction specifically requires them. Also, deduction can’t be done subconsciously (according to the theory I am presenting) because it requires an iterated interaction of the stream of consciousness with subconscious memory and capabilities. Logical reasoning principally means deduction, though it arguably also includes logical treatments of induction and abduction, but I will use the term logical reasoning to refer specifically to our conscious conceptual thinking capacity. Note that while all reasoning, as the top level or final approval of our decision-making process, is strictly conscious, concepts and subconcepts are stored subconsciously, where subconscious capabilities (like recognition and language) can work with them.
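
The big-data analogy can be made literal. Below is a hedged sketch (all names and numbers are invented for illustration) contrasting a subconceptual store, a nameless pool of weighted associations, with a concept, a pattern objectified by reference with named properties and explicit relationships:

```python
from collections import Counter
from dataclasses import dataclass, field

# Subconceptual store: weighted associations collected without knowing
# what the data will be for, akin to unstructured "big data."
associations = Counter({
    ("furry", "barks"): 42, ("furry", "purrs"): 37,
    ("barks", "fetches"): 19, ("purrs", "aloof"): 23,
})

# A concept: a pattern objectified by reference, with named properties
# and explicit relationships to other concepts (structured data).
@dataclass
class Concept:
    name: str
    properties: list = field(default_factory=list)
    relations: dict = field(default_factory=dict)

dog = Concept("dog", properties=["furry", "barks", "fetches"])
pet = Concept("pet", relations={"includes": ["dog", "cat"]})

# The pool supports recognition (which pattern is strongest?); only the
# concept supports deduction (discrete premises about a referenced whole).
print(associations.most_common(1))
print(pet.relations["includes"])
```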

Relationships that bind subconcepts and concepts together form mental models, which constitute our sense for how things work or behave. Mental models have strong subconscious support that lets them appear in our heads with little conscious effort. The subconceptual aspects of these models give them a very “real,” sensory feel to us, while the conceptual aspects that overlay them connect things at a higher level of meaning. Although subconceptual thinking supports much of what we need to do with these models (akin to an autopilot), logical reasoning is much more effective for solving problems than pattern recognition, common sense, and intuition, which are often poor at solving novel problems. Logical reasoning, by contrast, gives us an open-ended capacity to chain causes and effects in real time. As we mature we build a vast catalog of mental models to help us navigate the world. Though we may remember the specific times they were applied, we mostly remember how to use them in a general sense. Note that although logic helps hold mental models together, it doesn’t follow that understanding is a consequence of logic. John von Neumann once said, “Young man, in mathematics you don’t understand things. You just get used to them.”[13] But it’s not just mathematics; all of understanding is really a matter of getting used to things. We feel naturally inclined to get used to things our subconscious tells us “make sense,” but all of knowledge is ultimately relative: logical systems are internally consistent but are based on premises whose support is ultimately subjective. The important thing about understanding is that it is functional; it is news you can use.

The physical world lives in our minds via mental models. Our minds hold an overall model of the current state of the physical world that I call the mind’s real world. Whatever the physical world might actually be, we only know it consciously through the mind’s real world. The mind’s real world leverages the countless mental models that have helped us understand everything we have ever seen. These models don’t have to be right or mutually exclusive; whatever models provide us with our most accurate view of physical reality comprise our conception of it. The mind’s real world “feels” real to us, although it is purely a mental construct, because the mind is inclined to interpret its sensory connections to the physical world that way instinctively, subconceptually, and conceptually. But we don’t just live in the here and now. Because the mind’s primary task (and the whole role of information and function) is to predict the future, mental models flexibly apply to a range of circumstances. We call the alternative ways things could have been or might yet be possible worlds. In principle, the mind’s real world is a single possible world, but in practice our knowledge of the physical world is imperfect, so our model of it in the past, present, and future is always a constellation of possible worlds.
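
One way to picture such a constellation (a hedged sketch; the worlds and weights are invented for illustration) is as a set of weighted candidate worlds that observation narrows but rarely collapses:

```python
# The mind's real world as a constellation of weighted possible worlds,
# sharpened by evidence.

worlds = {"keys on desk": 0.6, "keys in coat": 0.3, "keys lost": 0.1}

def observe(compatible: set):
    """Zero out worlds incompatible with the observation; renormalize."""
    for w in worlds:
        if w not in compatible:
            worlds[w] = 0.0
    total = sum(worlds.values())
    for w in worlds:
        worlds[w] /= total

observe({"keys in coat", "keys lost"})   # the desk was empty
print(worlds)  # the constellation narrows but rarely collapses to one world
```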

In summary, all behavior results from instinct, subconceptual thinking, and conceptual thinking. Our mental models combine these approaches to leverage the strengths of each. Genetic data is a first-order bearer of information that is collected and refined on an evolutionary timescale. Instincts (senses, drives, and emotions) are second-order bearers of information that process patterns in real time whose utility has been predetermined by evolution. Subconcepts are third-order bearers of information in which the exact utility of the patterns has not been predetermined by evolution, but which do tend to turn out to be valuable in general ways. Finally, concepts are fourth-order bearers of information that are fundamentally symbolic; concepts are pure abstractions that represent a block of related information distilled from patterns in the feedback. Some subconscious thought processes (e.g. vision and language processing) manipulate concepts in customized ways without applying general-purpose logical reasoning, which can only be done consciously. Logic finds reasons, i.e. rules, that work reliably or even perfectly in mental models. The utility of logical reasoning ultimately depends on correlating models back to the real world, and for this we depend on mostly subconscious but conceptual reverse recognition mechanisms that fit our models back to reality. Recognition and reverse recognition are complex problems requiring massive parallel computation, for which present-day computers are only recently developing some facility, but for us they just happen with no conscious effort. This not only lets us think about more important things; it makes our simplified, almost cartoon-like conceptual representation of the world feasible.

Our three real-time thinking talents — instinct, subconceptual thinking, and conceptual thinking — are distinct but can be very hard to cleanly separate. We know instinct influences much of our behavior, but we are quite unsure where instinct leaves off and tailored information management begins because they integrate very well. And even complex behavior, most notably mating, can be driven by instincts, so we can’t be too sure instinct isn’t behind any given action. While subconceptual and conceptual thinking can be readily separated based on the presence of concepts, it can be difficult or impossible to say at exactly what point a concept has coalesced from subconcepts. In theory, though, I believe there must be a logical and physical point at which a concept comes to exist: the moment a set of information is referenced as a collective. This suggests that conceptual processes differ from subconceptual ones because they involve objectification of data by reference. Logical reasoning introduces the logical form, which abstracts logical operations from their content, making it possible to devise internally consistent logical models within which everything is necessarily true. Reasoning, and especially logical reasoning, can only be done consciously; it is considered a conscious activity even though some parts of it, e.g. intuition, happen subconsciously. The decision to act does not need to be strictly conscious. Habitual or snap decisions are sometimes made subconsciously before conscious awareness, but such decisions can be considered to have been “preapproved” by prior conscious thought. We do always have the conscious prerogative to override “automated” behavior, though it may take us some time to decide whether to do so. But only consciousness is equipped to pursue a chain of reasoning, so habitual responses can only replay stored sequences. Our capacity to reason logically works when we dream and daydream using our stream of consciousness, even though consciousness is reduced at those times.

Dam building is a complex behavior in beavers that seems like it needed to be reasoned out and taught from generation to generation, and yet “young beavers, who had never seen or built a dam before, built a similar dam to the adult beavers on their first try.”[14] So it is entirely instinctive. We know language acquisition is similarly innate in humans because humans with no language will create one.[15] But we know that all the artifacts of civilization (except perhaps the hand axe, which may have been innate), including all its formal institutions, are primarily the products of thinking, both subconceptual and conceptual, and of the experience they created. Our experience of our own minds is both natural (i.e. instinctive) and artificial (i.e. extended by thinking and experience), but these aspects are so intertwined in our perspective that they can be difficult or impossible to distinguish in many cases. For example, we know our sense of color is innate, and yet our reactions to colors are heavily influenced by culture. Or, we sometimes have the impression we have made a logically reasoned argument when all we have done is rationalized an instinctive impulse or an intuitive hunch. But although the three capacities always work together and overlap in their coverage, I believe they arise from fundamentally different kinds of cognitive processes that can be examined separately.

While nearly all animals learn from experience, demonstrating subconceptual thought, not all can think conceptually. Birds and mammals (let’s call them advanced animals for short) demonstrate problem-solving behavior, including novel planning and tool use, that indicates the conceptual use of cause and effect, which goes beyond what instinct and subconceptual thinking could achieve. Other animals do not, and I suspect all others lack even a rudimentary conceptual capacity. We only know we are conscious and that our logical reasoning is conscious from introspection, so we can’t prove it in advanced animals, but observations and shared evolution make it very likely for mammals and pretty likely for birds as well. Still, we know humans are “smarter,” but what is it that distinguishes us? It is our greater capacity for abstract logical reasoning. Abstraction is the ability to decouple information from physical referents, to think in terms of concepts and mental models in logical terms independent of physical reality. We consequently don’t need to constrain our thoughts to the here and now; we can dream in any direction. This greater facility and impetus to abstraction has coevolved with a better ability to think spatially, temporally, logically, and especially linguistically than other animals have. Loosening this tether back to reality began with small changes in our minds, but these changes opened a floodgate of increased abstraction because it provided greater adaptive power. Though we must ultimately connect generalities back to specifics, most words are generic rather than specific, meaning that language is based more on possible worlds than on the mind’s real world specifically. I call our ability to control our thoughts in any direction we choose directed abstract thinking, and I maintain animals can’t do it. Advanced animals can logically reason, focus, imitate, wonder, remember, and dream, but their behavior suggests they can’t pursue abstract chains of thought very far or at will. Perhaps the ecological niches into which they evolved did not present them with enough situations where directed abstract thinking would benefit them to justify the additional costs such abilities bring. But why is it useful to be able to decouple information from the world to such a degree? The greater facility a mind has for abstraction, the more creatively it can develop causal chains that can outperform instinct and subconceptual thinking.

Abstraction opens the door to an unlimited range of possibilities, but evolution has kept this facility practical. The range of what is functionally possible is the domain of philosophy, and the range of what is functionally actual (and hence practical) is the domain of psychology, the study of mental life. My explanation of the mind starts by enlarging our philosophical framework to include functional existence and moves from there to explain our psychology. Psychology spans neuropsychology, behavioral psychology, evolutionary psychology, cognitive psychology, psychoanalysis, humanistic psychology, introspection, and a portion of cognitive science, and so brings a lot of perspectives to bear on the problem. They each constrain the possible to the actual in a different way depending on their functional objectives. Before I look at what psychology tells us, I’d like to develop a philosophy of information management. I described above how information is managed in the mind through instinct and thinking. Instinct imposes ingrained behaviors while thinking customizes behaviors in novel circumstances. With ingrained behaviors, the ends (essentially) justify the means, which makes the means a gestalt indivisible into explanatory parts. Explanation is irrelevant to the feedback loops that create instinct, which produce supporting feedback based on overall benefit to survival. Subconceptual thinking is also a gestalt approach that applies innate algorithms to subconcepts (big data) and uses feedback to collect useful patterns. Conceptual thinking (logical reasoning) creates the criteria it uses for feedback. A criterion is a functional entity, a “standard, ideal, rule or test by which something may be judged.” What this implies is that reasoning depends both on representation (which brings that “something” into functional existence) and entailment (so rules can be applied). Philosophically, reasoning can never work in a gestalt way; it requires that the pool of data be broken down into generalized elements called concepts that interact according to logical rules. Logical reasoning operates in self-contained logical models, which lets it be perfectly objective (repeatable), whereas subconceptual thinking is a subjective gestalt and hence may not be repeatable. Objective, repeatable models can build on each other endlessly, creating ever more powerful explanatory frameworks, while subjective models can’t. There may be other ways to manage information in real time beyond instinct and thinking, but I believe these are sufficient to explain minds. To summarize, functional existence arises in some complex physical systems through feedback loops to create information, which is a pattern that has predictive power over the system. The feedback loops of instinct use natural selection over millennia to create gestalt mechanisms that “work because they work” and not because we can explain how they work. The feedback loops of thinking use neural processing over seconds to minutes. Subconceptual thinking works because life is repetitive, so we have developed general-purpose skills to find certain kinds of patterns. Conceptual thinking adds more power because self-contained logical models are internally true by design and can build on each other to explain the world better.

I’ve made a case for the existence of functional things, which can be holistic, as with genetic traits and subconceptual thinking, or differentiated, as with the elements of reason. But let’s consider physical things, whose existence we take for granted. Do physical entities also have a valid claim to existence? It may be that we can only be sure our own minds exist, but our minds cling pretty strongly to the idea of a physical world. Sensory feedback, along with accurate scientific measurement and experimentation, all but establishes that the physical world exists independent of our imagination. So we have adequate reason to grant the status of existence to physical things, but we must keep in mind that our knowledge of the physical world is indirect and our understanding of it is mediated through thoughts and concepts. Ironically, considering it has a secondary claim to existence, physical science has made much more definitive, precise, and arguably useful claims about the world than biology and the social sciences have. Even worse for the cause of the empirical functional sciences, the existence of function has (inadvertently) been discredited. Once an idea, like phlogiston or a flat earth, has been cast out of the pantheon of scientific respectability, it is very hard to bring it back. So dualism, the idea of a separate existence of mind and matter, has acquired the taint of the mystical or even the supernatural. But dualism is correct, scientific, and not at all mystical when one formulates function as a distinct kind of existence, one that becomes possible in a physical universe given enough feedback.

The laws of physical science provide reliable explanations for physical phenomena. But even though living systems obey physical laws, those laws can’t explain functionality. Brains employ very complex networks of neural connections. From a purely physical standpoint, we could never guess what they were up to, no matter how perfectly we understood the neurochemistry. And yet the mind arises as a consequence of brain activity, which is to say it is a process in the brain. The success of physical science, coupled with the physical nature of the brain, has led many to jump to the conclusion that the mind is physical, but the mind is just the functional aspect of the brain and not itself physical per se. Pursuing an eliminativist stance, the neurophilosopher Paul Churchland says the activities of the mind are just the “dynamical features of a massively recurrent neural network”16. From a physical perspective, this is entirely true, provided one takes the phrase “massively recurrent neural network” as a simplified representation of the brain’s overall architecture. The problem lies in the word “features,” which is an inherently non-physical concept. Features are ideas, packets, or groupings of abstract relationships about other ideas, which, as I have been saying, are the very essence of non-physical, mental existence. These features are not part of the mechanism of the neural network; they are signals or information that travel through it. This “traveling” is a consequence of complex feedback loops in the brain that capture patterns as information to guide future behavior. Any one feature can be thought about in different ways at different times by different people and still fundamentally refer to the same feature, that is, to the functions it makes possible.
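
A software analogy may sharpen this point (the analogy is mine, not Churchland’s, and the code is only a toy). The same functional feature, here “sorting,” can be realized by two mechanisms that share no physical description:

    def sort_by_comparison(xs):
        """Mechanism 1: repeatedly compare and swap neighbors (bubble sort)."""
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    def sort_by_counting(xs):
        """Mechanism 2: tally occurrences, then replay them in order (counting sort)."""
        out = []
        for value in range(min(xs), max(xs) + 1):
            out.extend([value] * xs.count(value))
        return out

    data = [3, 1, 4, 1, 5, 9, 2, 6]
    assert sort_by_comparison(data) == sort_by_counting(data)  # same feature, different forms

A physical account of the comparisons tells us nothing about the tallies, and vice versa, yet both mechanisms realize the same feature; the feature lives at the functional level, not in either mechanism.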

Philosophers have at times over the ages proposed other categories of being than form and function, but I contend they were misguided. Attempts to conceive categories of being all revolve around ways of capturing the existence of functional aspects of things. Aristotle’s Organon listed ten categories of being: substance, quantity, quality, relationship, place, time, posture, condition, action, and recipient of action. But these are not fundamental, and they conflate intrinsic properties with relational ones. Physically, we might say all particles have substance, place, and time (to the extent we buy into particles and spacetime), so these are inherent aspects of physical objects. All the other categories characterize aspects of aggregates of particles. But we only know particles have substance, place, and time based on theories that interpret observed phenomena. Independent of observations, we can posit that a particle or object itself exists as a noumenon, or thing-in-itself. Any information about the object is a phenomenon, or thing-as-sensed. We have no direct knowledge of noumena; we only know them through their phenomena. Noumena, then, are what I call form, while the interpretation of phenomena is what I call function. Some aspects of noumena are internal and can never be known, while others interact with other noumena to propagate information about them, called phenomena. We can only learn about noumena by analyzing their phenomena using information management processes. We further need to break phenomena down into three aspects. A tree that falls in a forest produces a sound, a shockwave of compressed air; that is the transmitted phenomenon. If we hear it, that is the received phenomenon. If we interpret that sound into information, that is the true phenomenon or thing-as-sensed, since sensing means “making sense of,” or interpreting. The transmitted and received phenomena are actually noumena, being a shockwave and a vibration of eardrums in this case, so I will reserve the word phenomenon for the interpretation or information processing itself. Interpretation is strictly functional, so all phenomena are strictly functional. Similarly, all function is strictly phenomenal in the sense that information is based on patterns, and patterns are received phenomena. Summarizing, form and function dualism could also be called noumenon and phenomenon dualism. This implies that phenomena are not simply the observed aspects of noumena but are functional constructs in their own right and are consequently not ontologically dependent on their noumena. One could also say that all knowledge is phenomenal/functional while the objects of all knowledge are noumena/forms. Finally, I’d like to note that while form usually refers to physical form, when we make a functional entity the object of observation or description, it becomes a functional (non-physical) noumenon. For example, “justice” is an abstract concept about which we can make observations and develop descriptions. Our interpretations or understanding of it are phenomenal and functional, but justice itself (to the extent it can abstractly exist by itself) is noumenal. While we can’t know anything for certain about noumena, a convenient way to think about them is that exhaustive, detailed phenomenal knowledge of a noumenon would effectively reveal it to us. Such knowledge is not the same as the noumenon, because it is descriptive rather than the thing itself, but it would eliminate mysteries about it.
Put another way, our knowledge of (noumenal) nature grows more accurate all the time through (phenomenal) theories. We like to think we know our own minds, but our conscious stream can only access small parts at a time, which gives us a phenomenal view into our own mind, whose noumena are only partially known to us. In other words, we know our minds have functions, but our knowledge of those functions is indirect and imperfect. We tend not to think that way, though, because we can study our minds indefinitely until we are quite confident we have gotten sufficiently “close” to the noumena. So nouns like “apple,” naming a physical noumenon, and “justice,” naming a functional one, both define noumenal concepts that we feel we understand well even though we can only define and describe them approximately using words. While the full noumenal nature of “apple” and “justice” varies from person to person, depending on all their experience with the concepts and the assessments they have made about them, our phenomenal understanding of them intersects well enough at a high level that we can agree on basic definitions and applications in conversation.

Because information management systems physically exist, one can easily overlook that something more is happening. Let’s take a closer look at the features of functional existence. It is not that the functional exists as a separate substance in the brain, as Descartes proposed, or that it is even anywhere in the brain, because only physical things have a location. Instead, any thought I might have simultaneously exists in two ways, physically and functionally. The physical form of a thought is the set of neurons thinking the thought, including the neurochemical processes they employ. While we can pinpoint certain neurons or brain areas that are more active when thinking certain thoughts, we also know that the whole brain (and all the control systems of the body) participates in forming the thought, because everything is connected. Everything is connected because instinct and subconceptual thinking are gestalts that draw on all our knowledge (including concepts), and logical reasoning uses closed models based on concepts, which are in turn built on instincts and subconcepts. The functional form of a thought is the role or purpose it serves. When we reflect on this purpose logically, we form a concept of it that can be the subject or the predicate of a proposition with features that relate it to all other concepts. Functionality in minds has a practical purpose (even function in mathematics must be practical for some context, but here I refer to evolutionary practicality). A thought’s purpose is to assist the brain in its overall function, which is to control the body in ways that stand to further the survival and reproduction of the gene line. “Control” broadly refers to predictive strategies that can be used to guide actions, hopefully achieving desired outcomes more often than chance alone would. Even seemingly purposeless thoughts participate in maintaining the overall performance of the mind and can be thought of as practice thoughts if nothing else. For a predictive strategy to work better than chance, it needs to gather information, i.e., to correlate past situations to the current situation based on similarities. The relationship between information and the circumstances in which it can be employed is inherently indirect because information is indirect; it is necessarily about something else. So we can conclude that function acts in our physical world to control the actions of living things or, by extension, in information management systems we create, to control whatever we like.
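
As a minimal sketch of that last point (my own toy example; the feature vectors and outcomes are invented), a strategy can beat chance simply by recalling the remembered situation most similar to the current one and betting that its outcome will recur:

    def most_similar(memory, situation):
        """Return the remembered (situation, outcome) pair closest to the current one."""
        def distance(entry):
            past, _ = entry
            return sum((a - b) ** 2 for a, b in zip(past, situation))
        return min(memory, key=distance)

    # Remembered situations as feature vectors (sky darkness, humidity) and outcomes.
    memory = [
        ((0.9, 0.8), "rain"),
        ((0.2, 0.3), "sun"),
        ((0.8, 0.7), "rain"),
    ]
    now = (0.85, 0.75)
    _, prediction = most_similar(memory, now)
    print(prediction)  # "rain": information about past situations guides present action

The prediction is indirect in just the sense described above: the feature vectors are information about past situations, not the situations themselves.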

  1. “Downward Causation,” Principia Cybernetica Web, 1995
  2. “Downward Causation,” The Information Philosopher
  3. “Chemical bonding model,” Wikipedia
  4. Linus Pauling, “The Nature of the Chemical Bond. IV. The Energy of Single Bonds and the Relative Electronegativity of Atoms,” Journal of the American Chemical Society 54 (9): 3570–3582, 1932, doi:10.1021/ja01348a011
  5. Joscha Bach, “From Computation to Consciousness: Can AI reveal the nature of our minds?,” TED Talk
  6. Sean Illing, “Why psychedelic drugs could transform how we treat depression and mental illness: A conversation with author Michael Pollan on becoming a ‘reluctant psychonaut,’” Vox
  7. Michael Pollan, How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence, 2018
  8. I did have a few experiences with psilocybin and LSD in college, and I do consider myself permanently improved by them. I have always been heavily into introspective thought, but I felt I was able to reach a sufficient state of epiphany that I could take things easier going forward.
  9. Claude Shannon, “A Mathematical Theory of Communication,” 1948
  10. Warren Weaver, “Science and Complexity,” 1948
  11. Gilbert Ryle, The Concept of Mind, University of Chicago Press, 1949
  12. Ryle’s examples are more involved, e.g. that colleges and libraries comprise universities and that battalions, batteries, and squadrons comprise divisions.
  13. John von Neumann, Wikiquote
  14. “Dam Building: Instinct or Learned Behavior?,” Beaver Economy Santa Fe D11, Feb 2, 2011
  15. Nicaraguan Sign Language
  16. Paul Churchland, Neurophilosophy at Work, Cambridge University Press, 2007, p. 2
