Introduction: The Mind Explained in One Page or Less

“If we do discover a complete theory, it should in time be understandable in broad principle by everyone, not just a few scientists. Then we shall all, philosophers, scientists, and just ordinary people, be able to take part in the discussion of the question of why it is that we and the universe exist. If we find the answer to that, it would be the ultimate triumph of human reason—for then we would know the mind of God.” — Stephen Hawking, A Brief History of Time

We exist. We don’t quite know why we or the universe exists, but we know that we think, therefore we are. The problem is that we no longer know that we know. Worse still, we have convinced ourselves we don’t. It is a temptation of modern physical science to explain ourselves right out of existence. First, since the Earth is a small and insignificant place, certainly nothing that happens here can have any cosmic significance. But, more than that, the laws of physics have had such explanatory success that surely they must explain us as well, reducing the phenomenon of us to a dance of quarks and leptons. Well, I am here to tell you that Descartes was right, because we are here, and that science took a left turn at Francis Bacon and needs to get back on the right track. The problem is that we’ve been waiting for science to pull the mind down to earth, to dissect it into its component nuts and bolts, but we’ve had it backward. What we have to do first is use our minds to pull science up from the earth into the heavens, to invest it with the explanatory reach it needs to study imaginary things. Because minds aren’t made of nuts and bolts; brains are. Minds — and imagination — are made of information, function, capacity, and purpose, which are all well-established nonphysical things from the human realm that science can’t see under a microscope.

In the above quote, Stephen Hawking is, of course, referring only to a complete theory of how we physically exist. Explaining the physical basis of the universe has been the driving quest of the physical sciences for four hundred years. It has certainly produced some fascinating and unexpected theories, but despite that, it matters much less to us than a complete theory of how we mentally exist, or, for that matter, of how we and the universe both physically and mentally exist. Can one theory embrace everything we can think of as existing, and if so, could we even understand it? Yes, it can and we can. All we have to do is take a few steps back and approach the subject again confidently. Everything we know, we learned from experience, and at no point in human history have we had more scientific knowledge and personal experience based on fairly robust scientific perspectives. In short, we know more than we think we know, and if we just think about the subject while keeping all that science in mind, we should be able to ferret out a well-supported scientific explanation of the mind.

I am going to go back to first principles to reseat the foundation of science and then use its expanded scope over both real and imaginary things to approach the concept of mind from the bottom up and the top down to develop a unified theory. The nature of the mind was long the sole province of philosophers, who approached it with reason but lacked any tools for uncovering its mechanisms. Immanuel Kant, probably the greatest philosopher of mind, held that the mind could only be studied through deductive reasoning, i.e. from an a priori stance, which (it turns out) makes the incorrect assumption that deduction in the mind stands apart from observation and experience. He disputed that psychology could ever be an empirical (experimental) science because mental phenomena could not be expressed mathematically, individual thoughts could not be isolated, and any attempt to study the mind introspectively would itself change the object being studied and would, moreover, introduce innumerable opportunities for bias.1 Wilhelm Wundt, the “father of modern psychology”, nevertheless took on the conscious mind as a subject of experimental scientific study in the 1870s, founding experimental psychology, and he remained a staunch supporter of introspection, provided it was done under strict experimental control. Introspection’s dubious objectivity caught up with it, and in 1912, Knight Dunlap published an article called “The Case Against Introspection” pointing out that no evidence supports the idea that we can observe the mechanisms of the mind with the mind. This set the stage for a fifty-year reign of behaviorism, which, in its most extreme forms, denied that anything mental was real and held that behavior was all there was.2 Kant had made the philosophical case, and the behaviorists the scientific case, that the inner workings of the mind could not be studied by any means.

A cognitive revolution slowly started to challenge this idea starting in the late 1950s. In 1959, Noam Chomsky famously refuted B.F. Skinner’s 1957 Verbal Behavior, which sought to explain language through behavior, by claiming that language acquisition could not happen through behavior alone.3,4 George Miller’s 1956 article “The Magical Number Seven, Plus or Minus Two” proposed a mental capacity that was independent of behavior. Ideas started to emerge from computer science that the mind might be computational and from neuroscience that neurons could implement that computation. The idea that the nature of the mind could be studied scientifically was reborn, but it was clear to everyone that psychology was not broad enough to tackle it alone. Cognitive science was conceived in the 1970s as a new field to study how the mind works, driven mostly at first by the artificial intelligence community. But because the overall mission would need to draw support from many fields, cognitive science was intentionally chartered as a multidisciplinary effort rather than a research plan. It was to be about the journey rather than the destination. Instead of a firm foundation of one or more prevailing paradigms, as one finds in every other science, it floats on a makeshift interdisciplinary boat that lashes together rafts from psychology, philosophy, artificial intelligence, neuroscience, linguistics, and anthropology. Instead of a planned city, we now have an assorted collection of flotsam and jetsam covering every imaginable perspective, with no means or mandate to sort it all out. Detailed work is being done on each raft, with assistance from the others, but there is no plan to fit everything together. While this has been productive, the forest can’t be seen for the trees. That it is relatively open-minded is a big improvement over the closed-mindedness of behaviorism, but cognitive science desperately needs to establish a prevailing paradigm as one finds in other fields.
To pull these diverse subfields together, we need to develop a philosophical stance that finds some common bedrock.

We need to roll the clock back to when things started, to the beginning of minds and the beginning of science. We need to think about what really happened and why, about what went right and what went wrong. What we will find is that the essence of more explanatory perspectives was there all along, but we didn’t develop them past the intuitive stage to form an overall rational explanation. With a better model that can bridge that gap, we can establish a new framework for science that can explain both material and immaterial things. From this new vantage point, everything will fit together better and we can explain the mind just using available knowledge. I don’t want to hold you in suspense for hundreds of pages until I get to the point, so I am going to explain how the mind works right here on the first page. And then I’m going to do it again in a bit more detail over a few pages, and then across a few chapters, and then over the rest of the book. Each iteration will go into more detail, will be better supported, and will expand my theory further. I’m going to start on firm ground and make it firmer. My conclusions should sound obvious, intuitive, and scientific, pulling together the best of both common sense and established science. My theory should be comprehensive if not yet complete, and should be understandable in broad principle by everyone and not just by scientists.

From a high level, it is easy to understand what the mind does. But you have to understand evolution first. Evolution works by induction, which means trial and error. It keeps trying. It makes mistakes. It detects the mistakes with feedback and tries another way, building a large toolkit of ways that work well. Regardless of the underlying mechanisms, however, life persists. It is possible for these feedback structures to keep going, and this creates in them a logical disposition to do so. They keep living because they can. Living things thus combine physical matter with a “will” to live, which is really just an opportunity. This disposition or will is not itself physical; it is the stored capacities of feedback loops tested over long experience. These capacities capture the possibilities of doing things without actually doing them. They are the information of life, but through metabolism they have an opportunity to act, and those actions are not coincidentally precisely the actions that keep life alive another day. The kinds of actions that worked before are usually the kinds that will work again because that is how natural selection works: it creates “kinds” as statistical averages of prior successes that by induction are then statistically likely to work again. And yet, ways that can work better still can be found, and organisms that can find such better ways outcompete those that don’t, creating a functional arms race called evolution that always favors sets of capacities that have been, and thus are likely to continue being, more effective at perpetuating themselves. Survival depends on many such capacities, each of which competes against its variants in separate arms races but which are still selected based on their contribution to the overall fitness of the unit of selection, which is most significantly the individual organism because the genes and cells of an individual live or die together. 
As Richard Lewontin put it in his seminal 1970 paper “The Units of Selection”,5 “any selective effect of variation” in biomolecules, genes, ribosomes, and everything “under the control of nuclear genes” “will be at the level of the whole organisms of which they are a part.” So genes are not selfish; it is all for one and one for all.6 Lewontin also pointed out that any subpopulation of individuals can also act as a unit of selection relative to other subpopulations.
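
The trial-and-error loop just described can be sketched as a toy program. This is a hedged illustration only: the function names, the single-peaked fitness landscape, and the cull-half selection scheme are invented for this example, not claims about real genetics.

```python
import random

def evolve(fitness, population, generations=200, mutation=0.1):
    """Toy selection loop: vary, test against feedback, keep what works."""
    for _ in range(generations):
        # Trial: rank each candidate "way of doing things" by how well it works.
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: len(scored) // 2]  # error detection: the rest are culled
        # The surviving "kinds" are copied forward with small random variations.
        population = [s + random.gauss(0, mutation) for s in survivors for _ in (0, 1)]
    return max(population, key=fitness)

# Hypothetical fitness landscape whose single peak sits at 3.0.
best = evolve(lambda x: -(x - 3.0) ** 2, [random.uniform(0, 6) for _ in range(20)])
```

After a few hundred generations the population clusters near the peak: the “kinds” that remain are statistical averages of prior successes, which is the sense in which past success makes future success statistically likely.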

Freedom of motion created a challenge and an opportunity for some living things to develop complex behavioral interactions with their environment, if only they could make their bodies pursue high-level plans. Animals met this challenge by evolving brains as control centers and minds as high-level control centers of brains. At the level the mind operates, the body is logically an agent, and its activities are not biochemical reactions but high-level (i.e. abstract) tasks like eating and mating. Unlike evolution, which gathers information slowly from natural selection, brains and minds gather information in real time from experience. Their primary strategy for doing that is also inductive trial and error. Patterns are detected and generalized from feedback into abstractions like one’s body, the seen world, friends, foes, and food sources. Most of the brain’s inductive work happens outside of conscious awareness in what I call the nonconscious mind (I am going to avoid the word unconscious as it also implies consciousness is turned off. I am also avoiding subconscious as it implies only that which is just below the surface.) The nonconscious mind is like the staff of a large corporation and the conscious mind is like the CEO. Only the CEO makes top-level decisions, but she also delegates habituated tasks to underlings who can then take care of them without bothering the CEO. Another good analogy is to search engines. The conscious mind enters the search phrases and the nonconscious mind brings back the results in the form of memories and intuitions.7 Nearly all the processing effort is nonconscious, but the nonconscious mind needs direction from the top to know what to do. The nonconscious mind packages up instincts, senses, emotions, common sense, and intuition, and also executes all motor actions. The conscious mind sits in an ivory tower in which everything has been distilled into the high-level, logical perspective that we think of as the first-person. 
This mind’s-eye view of the world (or mind’s-eye world for short) is analogous to a cartoon vs. a video, and for the same reasons: the cartoonist isolates relevant things and omits irrelevant detail. For minds to be effective at their job, the drive to survive needs to be translated into preferences that appropriately influence the high-level agent to choose beneficial actions. Minds therefore experience a network of feelings, ranging from pain and other senses through emotions that shape complex social interactions, which motivate them to act in their agent’s best interests. Where DNA evolves based on its contribution to survival, knowledge in minds evolves based on its contribution to satisfying or frustrating conscious desires. The subjective lens of consciousness is only accountable to survival indirectly. We do always try to protect our interests, but we protect the interests we feel as desires rather than the ultimate reasons those desires evolved. In the long run, those selections must result in increased rates of survival, but this requires tuning over time, and the possibility thus exists that some survival advantages may become selected for at the expense of others, eventually reducing net fitness. For example, some birds select mates for the appeal of their plumage, and because mate selection confers a direct reproductive advantage, this can eclipse the physical fitness of the birds, making them less competitive against other species. I will argue that this effect, called Fisherian runaway, has not happened to humans and is not a factor in gender differences, our concepts of beauty, war, or anything else. For humans, conscious desires are well-tuned to survival benefit.
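
The CEO/search-engine analogy can be caricatured in code. All names and stored “memories” here are invented for illustration; nothing about actual neural implementation is implied.

```python
# The nonconscious "staff" holds associative knowledge gathered from experience.
NONCONSCIOUS_STORE = {
    "food": ["berries near the river", "smell of bread means a bakery"],
    "danger": ["rustling grass may hide a snake"],
}

def nonconscious_search(cue: str) -> list[str]:
    """Bulk retrieval: returns memories and intuitions matching the cue."""
    return NONCONSCIOUS_STORE.get(cue, [])

def conscious_decide(cue: str) -> str:
    """Top-level decision: the CEO only picks among results the staff brings back."""
    results = nonconscious_search(cue)
    return results[0] if results else "no intuition; explore"

choice = conscious_decide("food")
```

The division of labor matches the analogy: nearly all the work is in the retrieval step, but the retrieval step does nothing until the top level poses a query.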

Humans and some of the most functional animals also think a lot, making special use of the brand of logic we call deduction. Where induction works from the bottom up (from specifics to generalities), deduction works from the top down (generalities to specifics). Deduction is a conscious process that builds discrete logical models out of the high-level abstractions presented to consciousness. First, it takes the approximate kinds suggested by bottom-up induction and packages them up into abstract buckets called concepts. Concepts are the line in the sand that separates deduction from induction, but they are built entirely out of bottom-up inductive knowledge I call “subconcepts”. We feel around in our subconceptual stew to establish discrete relationships and rules between concepts, like causes and effects. Although concepts and the deductive models we build out of them only form simplified high-level views that only approximately line up with the real world, they comprise the better part of our knowledge and understanding because they are explanatory. Deduction makes explanation possible. It is mostly what separates human intelligence from animal intelligence. Chimps, dolphins, elephants, and crows have a basic sense that objects can have general properties which lets them be used in different ways, and so can fashion and/or use simple tools, but they can’t go very far down this path, even though their brains are genetically very similar to ours.
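
The two directions of inference can be contrasted in a toy example (the “birds fly” rule and its instances are invented for illustration):

```python
# Induction (bottom up): generalize an approximate kind from observed specifics.
observations = [("robin", "flies"), ("sparrow", "flies"), ("crow", "flies")]
birds_fly = all(trait == "flies" for _, trait in observations)  # approximate rule

# Deduction (top down): apply the general rule to a new specific case.
def deduce(rule_holds: bool, is_bird: bool) -> bool:
    return rule_holds and is_bird

prediction = deduce(birds_fly, is_bird=True)  # expect a newly met bird to fly
```

The inductive rule is only a statistical generalization (penguins break it), which is exactly why the concepts that deduction manipulates line up with the world only approximately.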

How did conceptual thinking become easy for us? The leap from the jumble of subconcepts to cleanly delineated concepts required a sufficiently worthwhile motive to make it worth the effort. It was language that forced us to put concepts into neat little buckets called words (or, more strictly, into morphemes or semantic units, which include roots, prefixes, suffixes, whole words, and idioms). To communicate, we have to respect the implied lines around each word. After we learn how to delineate concepts using language, we then develop a conceptual approach to thinking, creating concepts around every little thing we think about to a much finer level of granularity than language alone supports. This raises the question, “Where did language come from?” Humans have been communicating in increasingly complex ways for over two million years, probably first with hand gestures and later with improved vocalizations. These efforts gave an edge to early humans who could think more conceptually and verbalize better, gradually making us naturally inclined to create and learn language. In any case, once we have learned a language and are thinking more conceptually, our nonconscious mind can develop habits and intuitions that leverage both subconceptual and conceptual knowledge. Creating concepts and deriving implications from them is a conscious activity because it requires forming new long-term memories, which seems to require conscious participation. Most higher-order thinking and deduction need capacities that are only available to consciousness, most notably the ability to create long-term memories of concepts, models, and their intermediate and final results.8 Consciousness uses many very short cognitive cycles that we perceive as a non-overlapping train of thought, but which do overlap somewhat.
These cycles have privileged access to different kinds of memory and processing abilities that distinguish conscious from nonconscious processing and account for how consciousness feels and for its ability to make top-level decisions.

The real world is not a conceptual place, so conceptual models never perfectly describe it. But they can come arbitrarily close, and as we shape the world into a collection of manmade artifacts that are each conceptual in nature, this mapping gets even better. Still, all conceptual models retain some measurable degree of uncertainty because they must be shoehorned into any given application. However, perfect is the enemy of good: dwelling on that uncertainty would be crippling. To function well using deductive models, we need a way to establish a threshold above which we won’t care about uncertainties, and we have a built-in mechanism for doing this called belief or trust. When either inductive or deductive knowledge feels good enough to act on, we invest it with our trust and push nagging second thoughts aside. Having pushed the second thoughts aside, we proceed from that moment forward as if the deductive model were actually true, even though all models are generalizations, which means all sorts of compromises have been made both in their definition and their application. Trust is not permanent; ironically, we establish a level of trust in everything, which helps us manage exceptions and decide when to review our beliefs. The important benefit, however, is that we can act confidently when trust has been conferred. Trust is a 100/0 rule rather than an 80/20 rule because doubts have been swept aside as irrelevant, even though they could become relevant at any moment given sufficient reason.
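
The 100/0 character of trust can be sketched as a threshold mechanism. The `Belief` class, the threshold value, and the bridge scenario are all invented for illustration.

```python
class Belief:
    """A claim held with some confidence; trusted once confidence clears a bar."""

    def __init__(self, claim: str, confidence: float, threshold: float = 0.8):
        self.claim = claim
        self.confidence = confidence
        self.threshold = threshold

    def trusted(self) -> bool:
        # 100/0 rule: above the threshold, doubts are swept aside entirely.
        return self.confidence >= self.threshold

    def revise(self, new_confidence: float) -> None:
        # Trust is not permanent; sufficient reason can reopen the question.
        self.confidence = new_confidence

b = Belief("the bridge will hold my weight", confidence=0.95)
acted_on = b.trusted()      # above threshold: proceed as if simply true
b.revise(0.4)               # new evidence, e.g. visible cracks
reconsidered = b.trusted()  # below threshold: the belief is up for review
```

The point of the sketch is the discontinuity: action is all-or-nothing even though the underlying confidence is graded.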

The mind is more to us than just the capacity to experience feelings and thoughts. We also have the feeling that our existence as mental entities is both meaningful and consequential. While we also think cats and all feeling creatures have a claim to some level of mental existence, we think a human’s concept of self, with a robust inner life full of specific memories, knowledge, and reflections, makes us more worthy. This is fair: the capacity to feel and think more deeply because we have been through more makes human experience more compelling. That said, all animals should be treated with respect proportional to their mental capacity, a standard we often ignore because it is inconvenient. While anything about our inner life could be called an aspect of the self, what we primarily mean by the term is our tendency to think of our integrated sense of agency in the world as an entity in its own right. The stability of our desires and memories creates permanence and makes the self undeniable. This self identifies with consciousness as the active part of its existence, but its memories and plans for the future are inactive parts that it frequently consults. Its memories, plans, and current state of mind are data constructs which it consciously processes to make decisions. The initiative of consciousness derives from attention, short-term memory, and consideration, which is the capacity to reflect on ideas in short-term memory. The nonconscious mind gathers patterns from memory, but I don’t think it can iteratively turn ideas over in a concerted fashion to derive associations systematically and develop perspectives. Consideration uses attention and short-term memory to form concepts and conceptual models and then derives logical implications from them. Nonconscious thought is a first-order use of neural networks and conscious thought is a second-order use.
Nonconsciously, we can push data through the net to recognize or intuit things, and we can also find new patterns and add to the net. Consciously, we build a secondary net out of concepts, models, and perspectives, and we derive logical ways of processing these higher-level entities. This second-order network, which is mostly what we mean by intelligence, is entirely grounded in the first-order nonconscious network but can achieve cognitive feats well beyond it. This is what makes humans more capable than other animals and what makes our self more significant.

As an agent with a self, we believe we act with free will to choose our path into the future. If our minds make the top-level decisions that control our bodies, then this makes sense. But cognitive science increasingly suggests, and I agree, that we always and only pick the best plan we can given our constitution and background — our nature and nurture. How can we say we have any choice or free will if our choice must be the best one? It is just because picking the best plan is hard. It requires a staggering amount of information processing. Every decision we make draws indirectly on everything we know, and the outcome of that process can’t be predicted correctly by any imaginable science. Yes, if you know someone well, you can predict what they will do a good part of the time, but not always. And if real-time brain scanning technology improves, it could become arbitrarily good at knowing in advance what we are thinking about and our most likely actions. But it will always fall short of making perfect predictions because our decisions draw on the whole network, and in an iterative feedback system, even the tiniest differences can eventually produce big effects. In any case, whether someone else knows what you are going to do or not, you still have free will because you don’t know what you are going to do, but you have to decide. Quite the opposite of lacking free will, you don’t have any choice but to exercise free will so long as you have the mental capacity to make evaluations. The mentally incompetent are rightly recognized as being without free will, but everyone else is a mental entity whose only real job is to consult their nature and nurture to see what they say. Although the decisions you reach are deterministic, you are the machine and are obeying yourself, which is not such a bad thing. Neither you nor anybody else knows for sure what you are going to do until you do it, so for all practical purposes, you have chosen your path into the future. 
You can’t double-think your way out of going through the motions, because all of that falls within your nature and nurture. You have to take responsibility for the outcome because you own yourself; your mind has to identify with the brain and body to which it is assigned because that is its job.

How could aliens studying people decide whether they were automatons or were exercising free will? In other words, at what point does a toaster or a robot develop a capacity for “free choice”? The answer just depends on the kind of algorithm it uses to initiate action. If that algorithm consults a knowledge network that was gathered from real-time experience and that can be extended when making a decision, then the choice is free. Because different minds have different amounts of knowledge and ability to consider implications, they differ in their degree of free will. The mentally incompetent retain some measure of free will, but the scope falls far enough below the normal range that they can’t grasp the implications of many decisions adequately. Insects act mostly from instinct and not gathered knowledge, but they can learn some things, and actions attributed to such knowledge must be considered acts of free will, even if the scope of their free will is shockingly limited relative to our own. When we make a decision, bringing all our talent, insight, and experience to bear, we give ourselves a lot of credit for marshaling all the relevant information, but from a more omniscient perspective we have brought little more to the table than an ant. In the end, we are just acting on our best hunches and hoping it will work out. The measure of whether any of our decisions “work out” is not physical; it is functional. Physically, atoms will still be atoms regardless of whether they are locally organized as rocks or living things. Life “works out” if the patterns and strategies it has devised continue to function through some physical mechanism. The mind “works out” if our conception of the world continues to function through some physical mechanism. We each develop our own conception of adequate, degraded, and improved functionality, and we struggle to always maintain and improve functionality from our own perspective.
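
The criterion proposed above can be caricatured in code: the automaton runs a fixed rule, while the agent consults knowledge gathered from its own experience and extends that knowledge in the very act of deciding. All scenario names and payoffs are invented for illustration.

```python
def automaton_decide(stimulus: str) -> str:
    """A toaster-style chooser: a fixed program, the same output forever."""
    return "pop up" if stimulus == "timer done" else "wait"

class Agent:
    """A free chooser in the sense defined above: it consults and extends
    a knowledge network gathered from real-time experience."""

    def __init__(self):
        self.knowledge: dict[str, float] = {}

    def decide(self, options: dict[str, float]) -> str:
        # Consult everything learned so far, then pick the best-looking plan.
        scored = {o: v + self.knowledge.get(o, 0.0) for o, v in options.items()}
        choice = max(scored, key=scored.get)
        # Extending the network while deciding is what makes the choice "free".
        self.knowledge[choice] = self.knowledge.get(choice, 0.0) + 0.1
        return choice

a = Agent()
first = a.decide({"forage east": 1.0, "forage west": 1.0})
```

On this criterion the degree of free will scales with the scope of the knowledge consulted, which is why an insect's free will is real but shockingly limited relative to our own.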

That the mind exists as its own logical realm independent of the physical world is thus not an ineffable enigma; it is an inescapable consequence of the high-level control needs of complex mobile organisms. Our inner world is not magic; it is computed. My goal here is to work from the top down, starting with the knowledge of which we are the most certain, to explain how I came to the above conclusions. By sticking to solid ground in the first place, I hope to avoid intuitive leaps to unwarranted conclusions. In other words, everything I say should be well-supported scientifically and nothing I say should be controversial. This is very hard to do, because I am wading into very controversial waters, but bear with me and challenge me if I go too far. What I will be presuming from the outset is that the mind is a real natural phenomenon that can be explained and is not fundamentally ineffable or an illusion. All our direct knowledge of the mind comes from our own experience of having one, so we have to seek confirming evidence from our own minds for any theory we might put forth. In other words, our hypothesis must not only be supported by all relevant scientific disciplines, it has to pass the smell test of our own intuitive and deductive assessment of its legitimacy. Conventionally, science rules out subjective assessment as an acceptable measure, which is a very good rule for the study of non-subjective subjects. But the mind, like the subjects of all the social sciences, is deeply subjective. To exclude subjectivity on the grounds that it can be biased eliminates the possibility of achieving much of an explanation at all. Instead, we need to be looking for a scientifically credible approach to explain the mind on its own terms. We have to participate in the process, finding ways to pull objective meaning from subjective experience. The explanation that emerges should not be my opinion but should be an amalgam of the best-supported philosophical and scientific theories.
Ideally, weaker or incorrect scientific theories will not find a way into this discussion, but I will sometimes refer to the more prominent of these if they have significantly distracted science from its mission.

Although I promise a full explanation, that fullness refers to breadth and not depth. Explanation provides a deductive model that comes at a subject from the top down, but in so doing it necessarily glosses over much detail that is visible from the bottom up. Also, there is an infinite number of possible deductive models one can construct to explain any phenomenon, but I am only presenting one. More accurately, I am going to present a set of models that fit together into a larger framework.
Science strives to find the one model that best explains each phenomenon, which it accomplishes in some measure by finding the simplest models that cover just certain aspects. Once one is constrained by simplicity (Occam’s razor), the number of possible models that correctly explain things to a certain level of detail goes way down, though one still needs a framework of such models. The concepts that underlie these models are not completely rigid; they allow some room for interpretation. They are, as I say, full of breadth but a little uncertain in depth. We have to remember that the sea of inductive subconceptual knowledge is vast and can never be explained except in this compromised way offered by deductive explanation. So although I will deliver a comprehensive explanation of the mind, most of our experience of having a mind is complex beyond explanation. The mind is far more amazing in the details than any theory can adequately convey.

Beyond the theoretical limits of explanation, I am also constrained by some practical limits. The theory I will develop here is derived from our existing knowledge using the most objective approach I could devise. Although I think we know enough right now to provide a pretty good overall explanation of the mind, it is still very early days in the history of civilization and future explanations will provide better breadth and depth. However, if I have done my job right, such explanations will still fundamentally concur with my conclusions. Science is safe from revision to the extent it doesn’t overreach, so I will be taking great pains to avoid claims that can be faulted. Physicalists overreach when they claim all phenomena are physical and social scientists overreach when they claim behavior is narrowly predictable. These claims, on inspection, are untrue and impossible. But Descartes, in saying, “I think, therefore I am”, stated an inescapable and irreducible conclusion, as much as physicalists have tried to suggest it can be reduced. Quite the contrary, what science has been lacking is a periodic table of the elements of existence, elements that can’t be reduced to each other. I will propose here that two such elements exist: form, including all things physical, and function, including all things informational, which includes the mind. The physical sciences strictly describe forms but use function to do it, while the social sciences mostly describe functions, also using function to do it. Biological sciences describe both forms and functions, and formal sciences strictly describe functions using function. Form and function have been the bedrock of science from the beginning, but they have never been called out before as its fundamental elementary building blocks.
This starts to create real problems in the study of subjects heavily dependent on both form and function, like biology, because functional theories like evolution can’t be supported by purely physical theories like chemistry. The mind is a much more functional realm still, for which the underlying chemistry is almost beside the point, so looking to chemistry for answers is like examining the Library of Congress with a microscope. To understand the mind, we will need a framework of explanations that spans both physical and functional aspects, so I am going to start out by establishing the nature of form and function and how they pertain to the mind.

  1. “Wilhelm Maximilian Wundt”, Stanford Encyclopedia of Philosophy, 2006, revised 2016
  2. “The American Response: Behaviorist Iconophobia and Motor Theories of Imagery”, Stanford Encyclopedia of Philosophy
  3. Chomsky, Noam (1959), “A Review of B. F. Skinner’s Verbal Behavior”, Language, 35 (1): 26–58. doi:10.2307/411334. JSTOR 411334. Repr. in Jakobovits, Leon A.; Miron, Murray S. (eds.), Readings in the Psychology of Language, New York: Prentice-Hall, pp. 142–143.
  4. Ironically, even as Chomsky challenged us to take the mind’s computational capacity seriously, he also slammed the door shut by claiming that grammar was instinctive. The neural connections that drive thought all develop after conception, so nothing as specific as a grammar rule is instinctive. Behavior creates and is an output of these connections, and so is only an aspect of a larger system. More on Chomsky and language in Chapter 2.3.
  5. Lewontin, R. C. (1970), “The Units of Selection”, Annual Review of Ecology and Systematics, 1: 1–18
  6. Richard Dawkins arguably confused a whole generation six years later with The Selfish Gene. His characterization of natural selection was misleading, to say the least. One could generously say he was really making the case for the selfish genome, an abstract level of selection which is selfishly concerned with the propagation of its genes rather than the welfare of any individuals. Dawkins is correct that evolution is not concerned with individuals; Darwin never said it was. But even though individuals are just statistical units fighting to create the most fit genome, they are still the unit of selection. Any one gene can be lost in as little as a single generation the moment its functionality is either usurped or superseded by other functionality in the genome, and neither the individuals nor the genome will experience any ill effects (as demonstrated by the loss of the SRY gene (and the Y chromosome) in Transcaucasian mole voles, which acquired a different genetic mechanism to determine the sex of offspring in a single generation). It is the combined capacity of the genome that is selfish.
  7. Robert A. Burton M.D., A Skeptic’s Guide to the Mind: What Neuroscience Can and Cannot Tell Us About Ourselves, St. Martin’s Griffin (April 22, 2014)
  8. Much more on this later. Stan Franklin, Bernard J. Baars, Uma Ramamurthy, Matthew Ventura, “The Role of Consciousness in Memory”
