2.3 Humans: 4 million to 10,000 years ago

How Cooperation and Engineering Evolved Through Niche Pressure and the Baldwin Effect

People are much more capable than our nearest animal relatives, but why? Clearly, something significant happened in the seven million years since we diverged from chimpanzees. To help figure out what mental functions are uniquely human, let’s first take a look at the most advanced capabilities of animals. Apes and another line of clever animals, the corvids (crows, ravens, and rooks), can fashion simple tools from small branches, which requires cause-and-effect thinking using conceptual models. Most apes and corvids have complex social behaviors. Like many other animals, they benefit from group living, which helps them defend against predators and extend foraging opportunities, but they also groom each other, share care of offspring, share knowledge, and communicate vocally and visually for warning, mating, and other purposes.1,2 Apes and corvids also have a substantial capacity to attribute mental states (such as senses, emotions, desires, and beliefs) to themselves and others, an ability called Theory of Mind (TOM).3,4 In particular, if they see food being hidden and they are aware of another animal (agent) observing it being hidden, this knowledge of the other animal’s knowledge will affect their behavior. Mammals and birds evolved all these capabilities independently, which indicates both that a functional ratchet is at work and that there is some universality to the kinds of functions that it is useful for animals to achieve.

But while apes, corvids, and a few other smart animals can do some clever things in isolation, they can’t abstract beyond the here and now to more generalized applications. If they devise a customized (non-instinctive) strategy to solve a problem, the goal needs to be very obvious. And while their social interactions can be mutually beneficial, they can’t cooperate in novel ways to solve problems. Rather than exhibiting full cooperation, they are “co-acting”: each acts only on private motivations, which happen to benefit the group when they act in concert. Social insects, for example, seem to work cooperatively to solve problems, but their strategies are instinctive, even to the point of having specialized roles. Apes and corvids can’t achieve that level of cooperation because they can’t devise plans beyond themselves. Humans, however, can engineer solutions for which the goal, the means to it, and the benefits are abstracted to any degree. Because of this, we can easily imagine a group of people working together, using specialized talents, to accomplish a project that one person alone could not. Only humans can both create and refine tools (engineering) and then communicate their plans to each other and execute them in a coordinated fashion. These coordinated activities depend critically on language. The semantic content of language is conceptual in that it uses words to call out generalizations in a top-down way. Gestures contribute to semantic content, and language may even have been mostly gestural at first, but except for sign languages, most semantic content now depends on verbal language.5 Language also conveys nonconceptual content through connotation, including emotional and intuitive undertones. There are two basic reasons why language carries this nonconceptual content. First, people will only be willing to cooperate if they trust each other, which develops mostly from our theory of mind ability to pick up on what others are thinking from their words, actions, emotions, etc.
But to read people well, we need to interact with them a lot, and small talk creates many opportunities for people to develop trust in each other. Second, our thoughts connect to other thoughts in a vast network, but language forces us to distill that down to a single stream of primary meaning that follows an overt or implied conceptual model shared by the speaker and the listener. We use connotation and emotion to imply all sorts of secondary meaning, either subtly to persuade without being too direct, or subliminally as we draw on associations that have helped in the past and so are likely to help again. Language is inherently metacognitive because it represents shared generalizations with words (or, most granularly, with morphemes, the smallest grammatical units in a language), and this means we need to devote some thought to what each word means. The meaning of a morpheme is established by how people use it, so it can never be completely rigid because usage patterns both vary and shift.

While evolution could not realize it, and early man did not either, cooperation and engineering uncorked an entirely new kind of cognitive ratchet that quickly drove human evolution toward our present capabilities. The reason is that cooperation and engineering can be used together in an unlimited number of ways to improve our chances of survival. They also come with costs which must be carefully managed to produce net benefit, so the right balance of ability and restraint of ability had to be struck over the course of human evolution. Perhaps most notably, cooperation and engineering quickly lead to the need for specialized service and manufacturing roles. Hunting is an oft-cited activity requiring both kinds of specialization, namely the creation of spears and other weapons and the teamwork to use them, but if we could do this we could also cooperate to engineer housing, clothing, and food production, and we have probably been doing these things for more than a million years. Starting with rudimentary uses of semantic communication and tools, bands of humans established roles for group-level strategies that slowly evolved into elaborate cultural heritages. Other animals have an intuitive sense of time, but culture greatly expanded our need to contemplate the persistence of artifacts and practices both in our past and into our future. Remembering specific details of past exploits or social interactions matters much more to us, and our much greater ability to plan makes the future matter more as well. By comparison, animals live mostly in the here and now, while humans more substantially live in the past and future. This expansion in time is accompanied by an expansion into possible worlds. Our capacity to project our thoughts abstractly is so great that we can think of ourselves as having two distinct lives, one in the real world and a second, virtual life in our own imaginations.

Of course, just because something is possible doesn’t mean it will happen, let alone happen quickly. Assuming conditions were finally right for a species to start deriving new functional benefit from cooperation and engineering, what accounts for such seemingly significant evolutionary changes happening so quickly, considering how long evolution usually seems to take? It was well known by Darwin’s time that the fossil record shows organisms staying relatively unchanged for millions of years. Darwin said of this: “the periods during which species have undergone modification, though long as measured in years, have probably been short in comparison with the periods during which they retain the same form.”6 Niles Eldredge and Stephen Jay Gould published a paper in 1972 that named this phenomenon punctuated equilibrium and contrasted it with the more widely held notion of phyletic gradualism, which held that evolutionary change was gradual and constant. Evolutionary theory, from Darwin through the Modern Synthesis, holds that only the mutation rate affects the rate of evolution, and since that rate is roughly constant, evolutionary change should be gradual. To date, nobody has explained why punctuated equilibrium happens. But I propose a simple explanation, which I call niche pressure. In brief, change happens quickly when the slope toward a local maximum of potential functionality is steepest, and then slows down and nearly stops when the local maximum is reached.

Niche pressure causes genetic change to decline the longer an organism has lived in the same niche. This is because the species gradually exhausts the range of physically reachable advantages from small functional changes to the existing genome, causing it to climb up to a local maximum in the space of all possible functionality. Humans that could fly might be more functional, but flight is not physically reachable from small changes. Evolutionary change always happens fastest when the fit of a species to its niche is worst and slows as that fit is perfected. This is not because mutation is any faster; it is just that mutations can make bigger strides when the range of reachable functional possibilities is largest and make less difference when all they can do is make subtle refinements. In other words, evolution is a function of environmental stability. If the environment changes, evolution will be spurred to make species fit better. If the environment stays the same, each interbreeding population will approach stasis as its gene pool comes to represent an optimal solution to the challenges presented by the niche. However, if that population is separated geographically into two subpopulations, then this divides the niche as well, and differences which were previously averaged now impact the two populations in different ways. Each population will quickly evolve to fit its new subniche. Rapid evolution can happen either when the environment changes quickly or when a niche is divided in two, but the difference is that in the latter case a new species will form. In both cases, however, a single interbreeding population changes rapidly because mutants survive better than standards when they fit the new niche better.
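This hill-climbing picture can be illustrated with a toy simulation (a minimal sketch; the trait values, fitness function, and parameters are all invented for the example, not drawn from any biological data). A population starts far from its niche’s optimum, and the mean trait change per generation is large at first and shrinks toward stasis as the local maximum is approached, even though the mutation rate never changes:

```python
import random

def fitness(x, optimum):
    # Toy fitness landscape: a single peak at the niche optimum.
    return -(x - optimum) ** 2

def evolve(pop, optimum, generations, mut_sd=0.05):
    """Truncation selection plus constant-rate mutation;
    records the change in the population's mean trait each generation."""
    deltas = []
    for _ in range(generations):
        old_mean = sum(pop) / len(pop)
        pop.sort(key=lambda x: fitness(x, optimum), reverse=True)
        survivors = pop[: len(pop) // 2]          # fitter half survives
        pop = survivors + [x + random.gauss(0, mut_sd) for x in survivors]
        deltas.append(abs(sum(pop) / len(pop) - old_mean))
    return pop, deltas

random.seed(1)
pop = [random.gauss(0.0, 0.1) for _ in range(200)]   # badly fitted to optimum 1.0
pop, deltas = evolve(pop, optimum=1.0, generations=60)

early = sum(deltas[:10]) / 10    # rapid change while the fit is worst
late = sum(deltas[-10:]) / 10    # near-stasis at the local maximum
print(f"early change {early:.3f} per generation, late change {late:.3f}")
```

Note that the mutation step is identical in every generation; the slowdown comes entirely from how much reachable improvement remains, which is the essence of niche pressure.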

It is usually sufficient to view functional potential from the perspective of environmental opportunity, but organisms are also information processors, and sometimes entirely new ways of processing information create new functional opportunities. This was the case with cooperation and engineering. Together they launched a new cognitive ratchet because they greatly extended the range of what was physically reachable from small functional changes. Michael Tomasello identified differences in ape and human social cognition using comparative studies that show just what capacities apes lack. Humans do more pointing, imitating, teaching, and reassessing from different angles, and our Theory of Mind goes deeper, so we not only realize what others know, but also what they know we know, and so forth recursively. These features combine to establish group-mindedness, or what he calls “collective intentionality”: a sense of belonging with associated expectations. Though our early cooperating ancestors Australopithecus four million years ago and Homo erectus two million years ago didn’t know it, they were bad fits for their new niche, because they had barely begun to explore the range of tools and tasks now possible. (We are still bad fits for our new niche because more tools and tasks are possible than ever, so we nervously await the arrival of the technological singularity, when everything possible will be attainable.) In fact, we were the worst fit for our niche that the history of life had ever seen because the slope toward our potential achievements was steepest (and growing steeper). Of course, we were also the only creatures yet to appear that could attempt to fill that niche.

Even given niche pressure, the idea that humans could evolve from something chimp-like to human-like in just a few million years seems pretty fast based on random mutations. The extended evolutionary synthesis and other attempts to update evolutionary theory include a variety of mechanisms that could “speed up” evolution. Taken together, these mechanisms basically all leverage the idea that most of the genetic sequences that comprise our genomes today predate the chimp-human split. They can consequently be thought of as a reserve of genetic potential which niche pressure drew on to shape us. Most of these mechanisms are still hypothetical, and since I am trying to stick to established science, I am not going to describe or defend them in detail. But some of these mechanisms will likely pan out and show how feasible it is for niche pressure to work through punctuated equilibrium. First, it is well established that the genetic variation of each gene in a population provides a large reserve of adaptability to new circumstances. We used this diversity to domesticate animals and crops over a much shorter timespan than humans have been evolving. Less demonstrable are proposed mechanisms that could pull inactive genetic sequences into active use. Because DNA replication must primarily produce error-free copies, it might seem sensible to assume that mechanisms that allow or encourage certain kinds of genetic changes (which is what I am suggesting) could not evolve. But this is a bad assumption. Consider this: organisms that could promote useful genetic changes more often than those that could not would quickly come to predominate. This adaptive advantage alone creates constant demand for such (currently unknown) mechanisms that could produce useful changes more frequently than blind chance alone, even if they are heavily dependent on chance themselves. Consequently, any such mechanisms that are physically possible are likely to have evolved.
In fact, such mechanisms are unavoidable because inaction is also a kind of action. A mechanism that guaranteed perfect replication would be an evolutionary dead end, which means that selection pressures exist on the kinds of errors that can happen during replication. Over time, mechanisms can arise that allow certain kinds of “mistakes” to happen, namely those that have historically been more helpful than chance. The existence of transposons, sometimes called jumping genes, demonstrates that active gene editing can occur through highly evolved mechanisms, and at least suggests we have barely scratched the surface of their potential. Considering that nearly all organisms have transposons and that they comprise 44% of the human genome, the possibility that they participate in tinker-toy mechanisms of new gene creation is significant. In any case, a tendency for helpful new genes to come together by “lucky” accidents through a variety of subtle mechanisms is more likely than not. While we don’t yet know just how important active gene editing is to evolution, it has seemed likely for some time that random pointwise mutation alone is not enough to account for what we see.

Language is often cited as the critical evolutionary development that drove human intelligence, and while I basically agree with this, there is more to the story than that. First, because language is a universal skill found in all human societies and seems so deeply entrenched in our thought processes, it has been proposed that it is an instinct, sometimes called the language acquisition device. This is completely untrue, as language is entirely a manmade communication system that must be learned through years of exposure and use. The innate (instinctive) skills we possess that enable us to learn language are all fairly general-purpose in nature, but it is true that only humans have a sufficient set of such skills to learn language readily. I will be making a considerable effort as the book proceeds to identify these and other skills that contribute to the more general intelligence of humans, but it must be understood at the outset that none of our intelligence or linguistic ability derives from specialized modules of the brain that process grammar or give us a “language” of thought (aka “mentalese”). The additional innate skills of humans relative to other animals are best thought of as subtly shifting our interests and focus rather than providing qualitatively different abilities. All animals use general-purpose neural networks that study sensory inputs to create a mental model of their bodies and the world around them by finding and reinforcing patterns in the data. Language is unique in that the patterns must be coordinated with patterns in the minds of others, and their underlying content is consequently relevant to higher-order actions as well. But each mind makes sense of these signals in its own way by making neural connections from its own experience and nothing else. We understand and control our bodies through long habituated processing of feedback from senses.
We understand our own conscious thoughts only because of our long habituated processing of feedback from thoughts. And we also understand language and all other learned skills only because of our long habituated processing of feedback from using those skills. The mind has one general systematic approach that it uses for everything: look for patterns and attach more significance to the ones you see and use the most.

Now, this said, we do have innate talents that work at a low level, and because every species has a unique set of such talents, they are all cognitively different. As we cooperated and engineered more, we were both literally and figuratively playing with fire, because we initiated a cognitive ratchet that led to the development of a wide variety of characteristically human cognitive talents. None of these are entirely human-specific; they have just been weighted and refined a bit, so we can see them in slightly different forms in other animals. I’m not going to be able to describe any of them from a genetic perspective because we just don’t have that kind of knowledge yet. However, the good news is that this doesn’t really matter at this stage because function drives form. We will, in time, be able to provide genetic explanations, but to do that we first have to know what we are looking for and why. We need to unravel what functions the cognitive ratchet was clicking into place, which means we have to know what traits were providing cognitive benefit. It’s a lot harder to contemplate the genetic basis of these traits because, unlike eye color, the brain is very holistic, with every innate talent potentially providing subtle benefits and costs across the whole system. The value of cooperation and engineering cascaded, which led to the selection of untold innate talents that facilitated doing them better. We can test for specific mental talents to see how humans vary. For example, a simple memory test shows that chimps surprisingly have much better short-term working memory than humans.7 But it is very hard to assess most cognitive skills from such tests, which still leaves us knowing next to nothing about why humans seem to be more intelligent.

While we don’t know much about our mental skills from experiments, we can still ferret out information about them in other ways, most notably from general considerations of how we think, which I will take up in Part 3. But first I’d like to point out that while all our mental talents are broadly general-purpose, they can and have become specifically selected for how well they help us with language through the Baldwin effect. The Baldwin effect, first mentioned by Douglas Spalding in 1873 and then promoted by American psychologist James Mark Baldwin in 1896, proposes that the ability to learn new behaviors will lead animals to choose behaviors that help them fit their niche better, which will in turn lead to natural selection making them better at those behaviors. As Daniel Dennett put it, learning lets animals “pretest the efficacy of particular different designs by phenotypic (individual) exploration of the space of nearby possibilities. If a particularly winning setting is thereby discovered, this discovery will create a new selection pressure: organisms that are closer in the adaptive landscape to that discovery will have a clear advantage over those more distant.” The Baldwin effect is Lamarckian-like in that offspring tend to become better at what their ancestors did the most. It is entirely consistent with natural selection and is an accepted part of the Modern and Extended Syntheses because it in no way causes anything parents have learned to be inherited by their offspring. All it does is slowly cause behavior that is learned in every generation to become increasingly natural and innate, as those that can do naturally what they are doing anyway will prosper more. Language has likely been evolving for millions of years, which makes it very likely that a number of instincts that help us with language are Baldwin instincts. None of them are language itself, but they probably help us to make and recognize sounds.
As language starts to help us convey meaning better, those who can use it more effectively will be selected more, leading to natural talents that help us manage words and concepts better. For the capacity to conduct a conversation to evolve, the participants need to have enough attention span and memory to participate. Many small Baldwin refinements to general innate talents evolved as people communicated more, which led to us being highly predisposed to learning language specifically without it becoming an instinct per se.
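This dynamic can be sketched in the spirit of Hinton and Nowlan’s classic 1987 computer simulation of the Baldwin effect (a toy model; the genome length, population size, learning budget, and fitness numbers here are all invented for illustration). Genotypes mix innately correct bits (‘1’), innately wrong bits (‘0’), and learnable bits (‘?’); learning lets an organism fill in its ‘?’s during its lifetime at a fitness cost, and over generations selection backfills the learned bits into innate ones:

```python
import random

L, POP, GENS, TRIALS = 10, 200, 40, 200

def fitness(geno):
    """Hinton & Nowlan-style fitness: hardwired wrong bits ('0') prevent
    the behavior entirely; '?' bits can be filled in by lifetime learning,
    but each learning trial spent costs fitness. Innate '1's cost nothing."""
    if '0' in geno:
        return 1.0
    unknowns = geno.count('?')
    # Expected trials to guess all unknowns scales like 2**unknowns.
    guessed = random.randint(1, 2 ** unknowns) if unknowns else 1
    if guessed > TRIALS:
        return 1.0          # ran out of learning budget
    return 1.0 + 19.0 * (TRIALS - guessed) / TRIALS

def offspring(pop, w):
    # Fitness-proportional parent choice, one-point crossover, rare mutation.
    a, b = random.choices(pop, weights=w, k=2)
    cut = random.randrange(L)
    child = list(a[:cut] + b[cut:])
    if random.random() < 0.1:
        child[random.randrange(L)] = random.choice('01?')
    return ''.join(child)

random.seed(0)
pop = [''.join(random.choices('01?', weights=[1, 1, 2], k=L)) for _ in range(POP)]
start_innate = sum(g.count('1') for g in pop) / POP

for _ in range(GENS):
    w = [fitness(g) for g in pop]
    pop = [offspring(pop, w) for _ in range(POP)]

end_innate = sum(g.count('1') for g in pop) / POP
print(f"innately correct bits per genome: {start_innate:.1f} -> {end_innate:.1f}")
```

Nothing learned is ever inherited here; parents pass on only their genotypes. The learning step merely smooths the fitness landscape so that genotypes closer to the winning configuration gain an advantage, which is exactly the Baldwin mechanism described above.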

Could these talents evolve far enough to make an ability for Universal Grammar (UG) innate, as Noam Chomsky proposes through his principles and parameters approach to generative grammar? While anything could evolve to an instinctive level given enough time and the right pressures, UG would be wildly maladaptive because it would be a drastic overspecialization. The whole power of thought and language derives from their not being written in stone; to legislate any aspect of them would quickly paint us into a corner we could not get out of. Rather than demonstrating that grammar is universal, the study of the world’s languages shows that they are highly idiosyncratic, with highly specific words and word orders for everything. Nothing about them springs into our minds; whether in the primary or secondary languages we learn, everything must be memorized from long exposure and use. Languages only generalize along very narrow paths, and one can find exceptions that defy any generalization one might make about them. That they have similarities only reflects common origins and ultimately the common need to communicate, not any underlying regularity. That said, although nothing is guaranteed, languages do have lots of regularities because people want communication to be easy, and so they will use similar patterns to express similar ideas. In many languages, you can categorize words into well-defined parts of speech and describe well-defined rules of grammar, but such descriptions are, as is the nature of description, simplifications that only characterize predominant patterns of usage, not hard and fast rules. Language can also be used poetically to an arbitrary degree, as James Joyce did in Finnegans Wake, in which the obfuscation of direct meaning helps to highlight the significance of indirect meaning.
In any case, grammar is a superficial aspect of language, which itself offers a superficial window into our thought processes, which, far from being linear, symbolic, or syntactic as mentalese proponents argue, float in the vast network of information held in the brain.

The Baldwin effect shaped many, and perhaps most, complex animal behaviors. I consider dam building in beavers to be a Baldwin instinct. It seems like it might have been reasoned out and taught from parents to offspring, but actually “young beavers, who had never seen or built a dam before, built a similar dam to the adult beavers on their first try.”8 Over the long period of time when this instinct was developing, beavers were gnawing wood and sometimes blocking streams. Those that blocked streams more did better. Beavers, like all animals, have some capacity for learning and so, in any given generation, will learn a few tricks on their own or from their parents that make a dam-oriented lifestyle more effective. This little bit of learning, over many generations, could have nudged evolution more toward making dam building instinctive. We now know that the instinct to block water is triggered by the sound of running water: it can be turned on and off with a recording of the sound. This is a great trigger because it leaves much of the logistics up to general beaver intelligence, which, in addition to using logs, can also use twigs, mud, and debris to block the flow of water. Consequently, without having originally conceived that dams would be a good idea, beavers’ ability to learn translated over time into a complex instinctive behavior. Chance mutations that incline them to do the kinds of things they are already doing through learning let the Baldwin effect backfill learned behaviors into instincts.

Children raised without language will not simply speak fluent Greek. Both Holy Roman Emperor Frederick II and King James IV of Scotland performed such experiments, in the 13th and 15th centuries respectively.9 In the former case, the infants died, probably from lack of love, while in the latter they did not speak any language, though they may have developed a sign language. The critical period hypothesis strongly suggests that normal brain development, including the ability to use language, requires adequate social exposure during the critical early years of brain development. Children with very limited exposure to language who interact with other similar kids will often develop an idioglossia, or private language, which is not a full-featured language. Fifty deaf children, probably possessing idioglossia or home sign systems, were brought together in Nicaragua in a center for deaf education in 1977. Efforts to teach them Spanish had little success, but in the meantime, over a nine-year period, they developed what became a full-fledged sign language now called Idioma de Señas de Nicaragua (ISN).10 Languages themselves must be created through a great deal of human interaction, but our facility with language, and our inclination to use it, is so great that we can quickly create complete languages given adequate opportunity. While every fact and rule about any given language must be learned, and while our general capacity for learning includes the ability to learn other complex skills as well, language has been with humans long enough to be heavily influenced by the Baldwin effect. A 2008 study using computer simulations to assess the feasibility of the Baldwin effect influencing language evolution found that it was quite plausible.11 I think human populations have been using proto-languages for millions of years and that the Baldwin effect has been significant in preferentially selecting traits that help us learn them.

While linguists tend to focus on grammar, which relates only to the semantic content of language, much of language is nonverbal. Consider that Albert Mehrabian famously claimed in 1967 that only 7% of the information transmitted by verbal communication was due to words, while 38% was tone of voice and 55% was body language. This breakdown was based on two studies in which nonverbal factors could be very significant and does not fairly represent all human communication. While other studies have shown that 60 to 80% of communication is nonverbal in typical face-to-face conversations, in a conversation about purely factual matters most of the information is, of course, carried by the semantic content of the words. Still, this tells us that information carried nonverbally usually matters more to us than the facts of the matter. Cooperation depends more on goodwill and trust than good information, and that is the chief contribution of nonverbal information. Reading and writing are not interactive and don’t require a relationship to be established, so they work well without body language. But written language also conveys substantial nonverbal content through wording that evokes emotion or innuendo.

Reason and Responsibility: The Hidden Prerequisites of Greater Intelligence

Even though the cognitive ratchet created new opportunities for adaptive success, it also expanded the number of ways we could fail. Superficially, it seems that cooperating and planning better would translate to surviving better. But being able to do these things better also creates more ways to use them disadvantageously. Without safeguards, we will use these talents either counterproductively or antagonistically. Just being able to think better doesn’t motivate us to succeed. We might just play more, and this risk actually does keep many people from achieving their potential. To inspire us to apply ourselves productively, we have dispositional qualia like pain, temperature, some smells, hunger, thirst, and sex that give us subjective incentives to protect our adaptive self-interests. Those usually protect our physical well-being adequately, but we also have emotions to protect our social well-being. On the positive side, strong relationships with others build affection, love, trust, confidence, empathy, pride, and social connection, while erosion of these relationships leads to hostility, distrust, guilt, envy, jealousy, resentment, embarrassment, and shame. How emotions affect us is at least partially a social construct, but our difficulty in controlling them consciously suggests a strong innate component as well. We don’t decide what emotions we will feel; rather, the nonconscious mind reviews our conscious thoughts and computes what emotions we should feel, which it presents to our consciousness as emotional qualia. Some emotions can be easily read from our faces and behavior, which would be maladaptive if revealing that information to others gave them an advantage over us, but adaptive if it fostered support and trust. Since we need people to work with us, and the rewards of double-crossing are great, mechanisms that telegraph our honest feelings were distinctly adaptive.

We don’t make our decisions based on what will maximize the survival of our gene line; we decide things based on our conscious desires. Conscious desire is the indirect approach minds use to translate low-level needs into high-level actions. Any high-level decision needs to balance the relevant factors, and to help us balance a wide variety of factors efficiently, the mind has dispositional qualia that make us feel like doing desirable things and feel like avoiding undesirable things. We don’t eat because we need energy to survive; we eat because it tastes good and it starts to hurt if we don’t. We don’t do good deeds because they will build our reputation and lead to more money and procreative success; we do them because they build positive emotions like pride and deflect negative ones like shame. The filter of consciousness acts as an intermediary, roughly translating physical needs to psychological ones. In the very long run, the basis of mental decisions needs to sync up fairly well with physical needs, but over a shorter time frame, traits can evolve that are preferred by minds even though they are detrimental to survival. For example, we seem to prefer junk food to healthy food, which was not a problem over the time frame our taste buds evolved because we didn’t have access to junk food. Alternatively, many female birds prefer their mates to have ostentatious plumage rather than, say, physical strength.12 Eating badly negatively impacts your survival, so a preference to eat healthier food will start to evolve. Selecting mates for plumage, however, positively impacts reproductive success up to a point, because reproduction is such a critical part of the life cycle. My point here is just that conscious desires are pushed by evolution to line up with adaptive needs.

While awareness, attention, and qualia are all computed by nonconscious processes and fed to our conscious minds, much and probably most of our thought processes are also computed nonconsciously. One significant part, rational thinking, does appear to be entirely conscious so far as I can tell, but let me briefly list some important nonconscious parts. First, and most significantly, long-term memory is an entirely nonconscious process that works quite conveniently from a conscious perspective in retrieving memories based on associations. Somehow, we are not only able to recall many things in significant detail, but we also know approximately in advance what we will find. Our knowledge is indexed in a fractal way such that we have an approximate sense of all of it, and as we start to consider special areas of our knowledge, we recall first our approximate sense of the range of our knowledge in that area, and so forth. A considerable part of our memory itself summarizes how much more memory we have. Forgetting is likely an adaptive feature of memory, since nearly everyone forgets nearly everything they have seen eventually. The existence of eidetic (sometimes called photographic) memory and hyperthymesia (the ability to remember one’s own life in almost perfect detail) demonstrates that the brain is capable of keeping nearly all long-term memories. When we forget, have we just lost conscious access to memories which may still be used for nonconscious purposes, or are they gone? We don’t yet know, but it does seem likely that our memories are good enough in general for the purposes for which we need them.

The next most significant nonconscious thought process is belief. However helpful planning can be, with or without the cooperation of others, we have to be able to deploy it decisively and parsimoniously, leveraging our past experience to produce a quick and confident response. Otherwise, we could fall into analysis paralysis, either hesitating a bit too long or freezing entirely when quick action is needed. Belief is how the brain keeps this from happening. We catalog all our knowledge with an appropriate degree of belief, and then belief gives us the green light to act on that knowledge without further consideration about whether it is right or not. I believe chairs will support my weight, so I sit on them without further thought.

The third most significant nonconscious talent is the habituation of thought processes. Belief is a way of habituating our use of long-term memory so we can trust it and act on it safely and efficiently. In a similar way, we develop ways of thinking over a lifetime of practice which conveniently become habituated so we can use them without thinking about how or why they work. This applies most significantly to language, which is complex enough that we can spend a lifetime improving our language skills. Beyond language, our whole conception of cause and effect and each of the life skills we have learned across thousands of areas become second nature to us, effectively letting us think without active effort.

And the last nonconscious talent I want to mention at this time is our facility with mental models. It is easy to take this one for granted; the only way we can have any conception of the outside world is through mental models that represent it to us. But we need those models, and they need to seamlessly integrate awareness, attention, and qualia with knowledge structures that tell us what we are seeing and experiencing. Mental models do that for us; we only have to wish for them and they appear.

Finally, given the right nonconscious supports, the crown jewel of intelligence, rational thinking, can flourish. In the next part of the book, I’m going to look more closely at how rational thought works and at the nonconscious systems that support it, which will lead into the final part that examines how the whole system works together.

  1. The social life of corvids, Current Biology, Volume 17, Issue 16, pR652–R656, August 21, 2007
  2. Larissa Swedell, Primate Sociality and Social Systems, Queens College, City University of New York; New York Consortium in Evolutionary Primatology, Nature Education, 2012
  3. Katharina Friederike Brecht, A multi-facetted approach to investigating theory of mind in corvids, University of Cambridge, April 2017
  4. Jeremy I. Skipper et al., Speech-associated gestures, Broca’s area, and the human mirror system, Brain Lang. 2007 Jun; 101(3): 260–277
  5. Darwin’s theory; Punctuated equilibrium, Wikipedia
  6. Chimps outperform humans at memory task, New Scientist, December 3, 2007
  7. Dam Building: Instinct or Learned Behavior?, Beaver Economy Santa Fe, February 2, 2011
  8. Language deprivation experiments, Wikipedia
  9. Nicaraguan Sign Language, Wikipedia
  10. Yusuke Watanabe et al., Language Evolution and the Baldwin Effect, Graduate School of Information Science, Nagoya University, Japan, 2008
  11. Fisherian runaway, Wikipedia
