A Functional Look at the History of the Philosophy of Science

I have made the case that it is reasonable to use the mind to study the mind and I have outlined how minds, and functional systems in general, are variable and open instead of fixed and closed. This has implications for how we should study them which I am going to consider now.

First, let’s take a closer look at how we study fixed systems to see what we can learn. I have noted that science stands by the scientific method as the best way to approach experimental science. In addition to the basic loop — observe, hypothesize, predict, and test — the method now attempts to control bias through peer review, preregistration of research, and more, but it needs to go further to ensure that scientific research is motivated only by the quest for knowledge and not corrupted by money and power. Still, the basic scientific method, which iterates these steps as often as needed to improve the match between model (hypothesis) and reality (as observed), works pretty well.

In general, feedback loops are the source of all information and function, but the scientific method aims for more than just information — it is after truth. Scientific truth is pursued through the quest for a single, formal model that accurately describes the salient aspects of reality. General-purpose information we gather from experience and access through memory uses a mixture of data and informal models; it is more like a big data approach that catalogs impressions and casual likelihoods. And when we do reason logically, we usually do it quite informally with a variety of approximate models. But science recognizes the extra value that rigorous models can provide. Although we can’t prove that a scientific model is correct because our knowledge of the physical world is limited to sampling, all particles of a given type do seem to behave identically, which makes near-perfect predictions in the physical sciences possible. While the exact laws of nature are still (and may always be) a bit too complex for us to nail down completely, the models we have devised so far work well enough that we can take them as true for all the intents and purposes to which they apply. We regard scientific theories as laws once their robustness has been demonstrated by an overwhelming amount of experimental evidence. If one of these laws seems to fail, we still won’t doubt the law but can safely conclude that physical reality didn’t live up to the model, meaning imperfections in the materials or our grasp of all the forces in play were to blame.
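The iterative loop described above can be sketched in a few lines of code. This is only a toy illustration (all names and numbers here are hypothetical, and the "hidden law" stands in for reality): a model is repeatedly tested against noisy observations and revised until its predictions are close enough to take as true for practical purposes.

```python
import random

def observe(x):
    # Stand-in for reality: a hidden law (y = 2x) plus measurement noise.
    return 2.0 * x + random.gauss(0, 0.01)

def scientific_method(max_iterations=1000, tolerance=0.05):
    """Iterate observe -> hypothesize -> predict -> test, revising the
    hypothesis until prediction matches observation within tolerance."""
    slope = 0.0  # initial hypothesis: y = slope * x
    for _ in range(max_iterations):
        x = random.uniform(1, 10)
        predicted = slope * x       # predict from the current model
        actual = observe(x)         # test the prediction against "reality"
        error = actual - predicted
        if abs(error) / abs(actual) < tolerance:
            return slope            # close enough: hold the model as true
        slope += 0.1 * error / x    # revise the hypothesis and loop again
    return slope

random.seed(0)
print(scientific_method())  # settles near the hidden slope of 2.0
```

The loop never proves the hidden law; it only converges on a model whose predictions are good enough, which mirrors the point that scientific truth is contextual rather than absolute.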

This pretty straightforward philosophy of science is sufficient for physical science. We accept a well-supported theory as completely true until an exception can be rigorously demonstrated, at which point we start to look for a theory that covers the exception. Scientific knowledge is not intended to be absolute but is meant to be contextual within the scope of situations the laws describe. This is a very workable approach and supports a lot of very effective technology. It is also serviceable for studying the functional sciences, but it can only take us so far. Using it, we can lay out any set of assumptions we like and then test theories based on them. If the theories hold up reasonably well, that means we can make somewhat reliable predictions, even if the assumptions have no foundation. This is how the social sciences are practiced, and while nobody would consider any of their conclusions definitive, we do assume that a reputable study should sway our conception of the truth. The shortcomings of science as practiced are still large enough that we know to doubt any one study, but we still hope that anything demonstrated by a preponderance of studies has some truth to it. But couldn’t we do better? The social sciences should not be built out of unsupported assumptions about human nature but from the firm foundation of a comprehensive theory of the mind. My objective here is to expand the philosophy of science to encompass the challenges of studying functional systems, and minds in particular.

I’m going to build this philosophy from first principles, but before I start, I’m going to quickly review the history of the philosophy of science. Not all philosophy is philosophy of science, but perhaps it should be, because philosophy that is not scientific is just art: pretty, but of dubious value.1 I’m going to discuss just a few key scientists and movements, first listing their contributions and then interpreting what they did from a functional stance.

Aristotle is commonly regarded as the father of Western philosophy, along with Plato and Socrates, whose tradition he inherited. Unlike them, Aristotle also extensively studied natural philosophy, which we have renamed science. Aristotle was an intuitive functionalist. He focused his efforts on distinctions that carried explanatory power, aka function, and from careful observations almost single-handedly discovered the uniformity of nature, which contrasted with the prevailing impression of an inherent variability of nature. Through many detailed biological studies, he established the importance of observation and the principle that the world followed knowable natural laws rather than unknowable supernatural ones at the whims of celestial spirits.

Francis Bacon outlined the scientific method in the <a href="https://en.wikipedia.org/wiki/Novum_Organum">Novum Organum</a> (1620), emphasizing the value of performing experiments to support theories with evidence. Bacon intentionally expanded on Aristotle’s Organon with a prescriptive approach to science that insisted that only a strict scientific method would build a body of knowledge based on facts instead of conjectures. Controlled induction and experiments would accurately reveal the rules behind the uniformity of nature if one were careful to avoid generalizing beyond what the facts demonstrate. In practice, most scientists today adopt this attitude and don’t think too much about the caveats that arose in the centuries that followed, which I will get to next.

René Descartes established a clear role for judgment and reason in his Discourse on the Method of Rightly Conducting One’s Reason and of Seeking Truth in the Sciences (1637). His method had four parts: (a) trust your judgment, while avoiding biases, (b) subdivide problems into as many parts as possible, (c) start with the simplest and most certain knowledge and then build more complex knowledge, and (d) conduct general reviews to ensure that nothing was omitted. Further, Descartes concluded, while thinking about his own thoughts, “that I, who was thinking them, had to be something; and observing this truth, I am thinking therefore I exist”2, which is known popularly as Cogito ergo sum or I think, therefore I am. He felt that whatever other doubts he might have about the world, this idea was so “secure and certain” that he “took this as the first principle of the philosophy I was seeking.” He further concluded that “I was a substance whose whole essence or nature resides only in thinking, and which, in order to exist, has no need of place and is not dependent on any material thing. Accordingly this ‘I’, that is to say the Soul by which I am what I am, is entirely distinct from the body and is even easier to know than the body; and would not stop being everything it is, even if the body were not to exist.”3 Descartes also attempted a physical explanation of the soul, based on the observation that most brain parts were duplicated in each hemisphere. He believed that since the pineal gland “is the only solid part in the whole brain which is single, it must necessarily be the seat of common sense, i.e., of thought, and consequently of the soul; for one cannot be separated from the other.”4 In this, he was quite mistaken, and it ultimately undermined his arguments, but it was a noble effort! Looking at Descartes functionally, he recognized the role our own minds play in scientific discovery and simply implored us to use good judgment.
His assertion that some methods are more effective for science than others was a purely functional stance (because it does all come down to what is effective). He further recognized the preeminence of mind and reason, to the point of proposing substance dualism to resolve the mind-body problem, which I have reformulated into form and function dualism. Descartes was entirely correct in his cogito ergo sum statement, if we interpret it from a form and function dualism perspective. In this view, the function of our minds requires no place or time to exist but can be thought of as existing in the abstract by virtue of the information it represents. Although Descartes’ fascination with brain anatomy and assumption of the irreducibility of the soul (no doubt derived from a desire to align Catholicism with science) led to some unsupported and false conclusions, he was on the right track. The mind arises entirely from physical processes but is more than just physical itself, because information has a functional existence that transcends physical existence: it is referential and so can be detached from the physical. It is not that there is a “nonphysical” substance connected to the physical brain; it is that function is a different kind of thing than form. Physical mechanisms leverage feedback to create the mind, but the function and behavior of these mechanisms can’t be explained by physical laws alone because information generalizes function into abstract entities in their own right. Descartes’ anatomical conclusion that the soul could not be distributed across the brain and so had to be concentrated in the one part that was not doubled was wrong. His assertion that common sense, thought, and the soul cannot be separated is similarly wrong; our sense of self is an aggregation of many parts, including the sense that it is unified and not aggregate.

David Hume anticipated evolutionary theory in his A Treatise of Human Nature (1739), which saw people as a natural phenomenon driven by passions more than reason. Hume divided knowledge into ideas (a priori) and facts (a posteriori). One studies ideas through math and the formal sciences and facts via the experimental sciences. As we ultimately only know of the physical world through our senses, all our knowledge of it must ultimately come from the senses. He further recognized, via the problem of induction, that we could never prove anything from experience or observation; we could only extrapolate from it. This meant we have no rational basis for belief in the physical world, though we have much instinctive and cultural basis. Hume expanded on Descartes’ “cogito ergo sum” by showing that knowledge from induction could not be proven and that we must therefore remain perpetually skeptical of science. Hume is arguably the founder of empiricism, the idea that knowledge comes only or primarily from sensory experience. While empiricism is a cornerstone of scientific inquiry, this focus on the source of knowledge may have inadvertently moved science away from functionalism, which focuses on the use of knowledge.

Though principally a sociologist, and the inventor of the word sociology, Auguste Comte also lifted empiricism to another level called positivism, which asserted that all knowledge we know for sure, or positively, must be a posteriori from experience and not a priori from reason or logic. He proposed in 1822 in his book Positive Philosophy that society goes through three stages in its quest for truth: the theological, the metaphysical, and the positive (though different stages could coexist in the same society or in the same mind). The theological or fictitious stage is prescientific and cites supernatural causes. In the metaphysical or abstract stage, people use reason to derive abstract but natural forces such as gravity or nature. Finally, in the positive or scientific stage, we abandon the search for absolutes or causes and embrace the power of science to reveal nature’s invariant laws through an ever-progressing refinement of facts based on empirical observations5. Comte did not insist that this progression was necessarily sequential or singular; it could happen at different times in different societies, institutions, or minds. Still, he broadly proposed that the world entered the positivistic stage in 1800 and used this generalization to support his reactionary authoritarian agenda, which sought to elevate scientists to elite technocrats who governed according to the findings of the new science of sociology that he founded. In Comte’s mind, skepticism of science was unnecessary; instead, we should embrace it as proven knowledge that could be refined further but not overturned. Although Hume may have been technically right, empiricism moved progressively toward positivism because it just worked so well, and by the end of the 19th century, many thought the perfect mathematical formulation of nature was nearly at hand.

In 1878, Charles Sanders Peirce wrote a paper called “How To Make Our Ideas Clear,” which distinguished three grades of clarity we can have of a concept. The first grade was visceral, the understanding that comes from experience without analysis, such as our familiarity with our senses and habitual interactions with the world. The second grade was analytic, as evidenced by an ability to define the concept in general terms abstracted from a specific instance. The third grade was pragmatic, being a conception of “practical bearings” the concept might have. While Peirce had considerable difficulty grappling with whether a general scientific law could be taken to imply practical bearings, in the end he did endorse such scientific implications even in instances where one could not test them. Peirce’s first grade of clarity describes what I call instinctive and subconceptual knowledge. The second grade characterizes conceptual knowledge. While being able to provide a definition is good evidence of conceptual knowledge, it is not actually necessary to provide a definition to use a concept. Peirce put great stock in language as the bearer of scientific knowledge, but I don’t; language is a layer above the knowledge which helps us characterize and communicate it, but which also inevitably opens the door for much to be lost in translation. I would describe the third grade of clarity as actually being the function. Instincts, subconcepts, and concepts all have functions, and the functions of the former contribute to the functions of the latter as well. Where empiricism tied meaning to the source of information, i.e. to empirical evidence, pragmatism shifted meaning to the destination, i.e. its practical effects. The power of science is that it focuses on the practical effects at the conceptual level as carefully and rigorously as we can manage.
By construction, all information is pragmatic, but scientific information uses methods and heuristics to find the most widely useful information. While pragmatism has been slowly gathering support, it had little impact on science at the time.

Positivism made another big leap forward in the 1920s and ’30s when a group of scientists and philosophers called the Vienna Circle proposed logical positivism, which held that only scientific knowledge was true knowledge and, brashly, that knowledge from other sources was not just false and empty, but meaningless. These other sources included not just tradition and personal sources like experience, common sense, introspection, and intuition, but also the whole metaphysics of academic philosophy. Logical positivism sought to perfect knowledge through reason and, from there, all of civilization. It all hinged on the hope that physical science (and by extension natural and social science) was “proving things” and “getting somewhere” to attain “progress”. To this end, they sought to unify science under a single philosophy that captured meaning and codified all knowledge into a standardized formal language of science. They maintained the empirical view that knowledge about the world ultimately derived from sensory experience but further acknowledged the role of logical reasoning in organizing it. Perhaps more accurately, logical positivism was part of a broader movement called logical empiricism, spanning several decades and continents, whose leading scholars were intent on improving scientific methodology and the role of science in society rather than espousing any specific tenets; but logical positivism as I have described it approximates the philosophies of circle members Rudolf Carnap and Moritz Schlick. Logical positivism attempted to formalize what science seemed to do best: to package up knowledge perfectly. But even at the time, this idealized modernist dream was starting to crack at the seams. Instead of progressively adding detail, physics had revealed that reality was more nebulous than expected with wave-particle duality, curved space and time, and more.
Gödel’s incompleteness theorems proved that no sufficiently powerful formal system could be both complete and consistent, but must be inherently limited in its reach. Willard Van Orman Quine famously wrote in Two Dogmas of Empiricism in 1951 that “a boundary between analytic and synthetic statements simply has not been drawn. That there is such a distinction to be drawn at all is an unempirical dogma of empiricists, a metaphysical article of faith.” Analytic statements are a priori logical conclusions, while synthetic statements are a posteriori statements based on experience. The flaws Quine cited relate to the fact that statements are linguistic, and a linguistic medium is intrinsically synthetic because it is not itself physical. Logical positivism invested too much in the power of language, which is descriptive of function but not the same as function, and so it was left behind, along with the rest of modernism, to be replaced by the inherent skepticism of postmodernism. From my functional perspective, I would say that the logical positivists correctly intuited that science creates real knowledge about the world, but they grasped for an overly simplified means of describing that knowledge.

If positivist paths to certainty were now closed, where could science look for a firm foundation? Thomas Kuhn provided an answer to this question in The Structure of Scientific Revolutions in 1962, which is remembered popularly for introducing the idea of paradigm shifts (though Kuhn did not coin the phrase himself). Without exactly intending to do so, Kuhn created a new kind of coherentist solution. An epistemology or theory of knowledge must provide a solution to the regress problem, which is this: if a belief is justified by providing a further justified belief, then how do you reach the base justified beliefs? There are two traditional theories of justification: foundationalism and coherentism. Aristotle and Descartes were foundationalists because they sought basic beliefs that could act as the foundation for all others, eliminating the perceived problem of infinite regress. Coherentists hold that ideas support each other if they are mutually consistent, much like the words in a language can all be defined in terms of each other. The positivists were struggling to make foundationalism work, and in the end it just didn’t because Hume was right: knowledge from induction could not be proven, so the logical base was just not there. Into this relative vacuum, Kuhn claimed that normal science consisted of observation and “puzzle solving” within a paradigm, which was a coherent set of beliefs that mutually support each other rather than depending on ultimate foundational beliefs. He further, somewhat controversially, proposed that revolutionary science occurred when an alternate set of beliefs incompatible with the normal paradigm overtook it in a paradigm shift. 
While Kuhn’s conclusions are right as far as they go, which helps explain why this was the most influential book on the philosophy of science ever written, he inadvertently alienated himself from most physical scientists because his account made it look as if science was purely a social construction, which was not his intent at all. But once he had let the cat out of the bag, he could not put it back in again. With the door open for social constructionists to undermine science as an essentially artistic endeavor, scientific realists took on the challenge of restoring certainty to science.

Scientific realism (~1980-present) has supplanted logical positivism as the leading philosophy of science by looking to fallibilism for epistemological support. Fallibilism is not a theory of justification but rather an excuse for claiming justification is unnecessary. Instead of looking to axioms, or mutual support, or support from an infinite chain of reasons, fallibilism just acknowledges that no beliefs can be conclusively justified, but asserts that “knowledge does not require certainty and that almost no basic (that is, non-inferred) beliefs are certain or conclusively justified”. Fallibilists recognize that claims in the natural sciences, in particular, are “provisional and open to revision in the light of new evidence”. The difference between skepticism and fallibilism is that while skeptics deny we have any knowledge, fallibilists claim that we do, even though it might be revised following further observation. Knowledge can be said to arise because while “a theory cannot be proven universally true, it can be proven false (test method) or it can be deemed unnecessary (Occam’s razor). Thus, conjectural theories can be held as long as they have not been refuted.”6 This suggests that until a theory has been proven false or redundant, it can be taken as effectively true. Realists further propose that this mantle of scientific truth should not be extended to every scientific claim not yet disproven but should be reserved for those satisfying a quality standard, which is generally taken to include maturity and not being ad hoc. Maturity suggests having been established for some time and been well tested, and not being ad hoc suggests not being devised just to satisfy known observations without having undergone suitable additional testing.

With this philosophical underpinning, scientific realists feel justified in thinking that the observed uniformity of nature and success of established scientific laws can be taken to mean that the physical world described by science exists and is well characterized by those laws. Put another way, “The scientific realist holds that science aims to produce true descriptions of things in the world (or approximately true descriptions, or ones whose central terms successfully refer, and so on).”7 In a nutshell, Richard Dawkins summarized the realist sentiment in 2013 by noting that “Science works, bitches!”8 It sounds pretty plausible, but is it enough? The determination of what is mature enough and not too ad hoc is ultimately subjective, and a function of the paradigms of the day, which suggests that the social constructivist view still permeates scientific realism. Furthermore, it takes for granted that the idealized models of science can be objectively applied to reality but specifies no certain way to do that. The methods and approaches that have become mature and established, though also subjective, are taken as valid ways to match theory to reality. So the question remains: is scientific realism actually justified, and if so, how?

Superficially, the central idea of scientific realism is that the physical world described by science exists. But I would claim that this is irrelevant and incidental; the deeper idea of scientific realism is that it works, where “works” means that it provides functionality. We do engage in science because we want to know the truth about nature, both because the knowledge brings functional power and just because it is cool — the potential power that elegant explanations bring is very satisfying to our function-seeking brains. Scientific laws are general; beyond specific situations, they specify general functionality or capacity for a range of possible situations. But none of this changes the fact that we can never prove that the physical world really exists. Its actual existence is not the point. The point is what science has to say about it, which is a functional existence that we experience through the approximate but strong sense of consistency between our theories and observations. As I will explore later, our minds are wired to think about things as being certain even though deep down we can appreciate that nothing is certain. That deeper reality (that nothing is certain) just doesn’t impress our mental experience as much as the feeling of certainty does. So scientific realism is just an accommodation to human nature and our desire to feel certainty. The real philosophy of science has to be functionalism, which isn’t concerned with certainty, only with higher probabilities for desired outcomes. I am OK with scientific realism so long as we understand it is a slightly misleading shorthand for functionalism.9

“Epistemologically, realism is committed to the idea that theoretical claims (interpreted literally as describing a mind-independent reality) constitute knowledge of the world.”10 We can see what realism is after: it seems intuitive that since the scientific laws work we should just be able to think of them as knowledge. But was Newton’s law of gravity knowledge? We know it was not right; because of relativistic effects it is never 100% accurate, and because his model proposed action at a distance, even Newton felt it was unjustifiably mystical. Einstein later corrected gravity for relativity and also formulated it as a field effect and not an “interaction” between objects, but we know that general relativity is not the whole story about gravity either. So, if the models aren’t right, on what basis are we entitled to think we have knowledge? Is it our willingness to “commit” to it? Willingness to believe is not good enough. I interpret realism as an incomplete philosophy that takes the important step of affirming aspects of science we know intuitively make sense, without being too demanding about providing the ontological and epistemological basis for those aspects.

In the 1990s, postmodernists pushed the claim that all of science was a social construction in the so-called science wars. Scientific realism alone was inadequate to fight off postmodern critiques, so, formally speaking, science has been losing the battle against relativism. I contend that the stronger metaphysical support of functionalism is enough to push the postmodernists back into the hills, but only if science embraces it. The Sokal affair, in which a bogus and meaningless paper was accepted and published by an academic journal, highlights a fundamental flaw in science as practiced: it becomes divorced from its foundational roots. The foundation must never be taken for granted but must always be spelled out to some level of detail in every scientific paper. The current convention is for a scientific paper to presume some level of innate acceptance of unspoken paradigms, and the greater the presumption, the more authoritative the paper sounds. But this is the wrong path; papers should start from nothing and introduce the assumptions on which they build, with a critical eye. This philosophical backdrop doesn’t need to take over the paper, but without it, the paper is only of use to specialists, which undermines generalism, which is ultimately as important to functionalism as specialization.

Now I can reveal the real solution to the regress problem. The answer is not in the complete support of foundationalism or the mutual support of coherentism, or any other theories put forth so far. It is in “bootstrapism”. Information is captured by living information systems through four levels: genetic, instinct, subconcept, and concept. Only the last level leverages logic, and only a small part of that logic is based on logical systems we have thought up, e.g. the three traditional laws of thought. Furthermore, there is a “fifth” level, the linguistic level, that is not really a level of information but a level of representation of information from the other four levels. Also, note that these four to five interacting information management systems are not the only levels; we create virtual levels with every model that builds on other models and lower-level information. So the regress problem boils down to bootstrapping, which is done by building more powerful functional systems with the help of simpler ones. The solution to the seeming paradox of infinite regress doesn’t require infinite support (though feedback can cycle endlessly); it just requires a few levels of information that build on each other. The levels also interact with each other to become mutually supporting, which can create the illusion that the topmost, conceptual level, or even more absurdly, the linguistic level, might be keeping the whole boat afloat by itself. It just isn’t like that; the levels depend on each other, and language just renders a narrow slice of that information. The idea that well-formed sentences of a language have meaning is flawed; the sentences of languages, formal or natural, have no meaning in and of themselves, though they may stimulate us to think of things with meaning. The Vienna Circle inadvertently put too much faith in formal logic (which is one-leveled) and conflated it with thought (which is multi-leveled).
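The bootstrapping idea can be made concrete with a small sketch. This is a hypothetical toy, not a model of real cognition: each "level" is a function built from the one below it, so no level needs an infinite chain of justification beneath it, and the top-level abstraction works only because the simpler levels support it.

```python
# Hypothetical stand-ins for the four levels of information named above;
# the specific behaviors are illustrative only.

def genetic(stimulus):
    # Level 1: a fixed, inherited response.
    return stimulus * 2

def instinct(stimulus):
    # Level 2: builds directly on the genetic response.
    return genetic(stimulus) + 1

subconcept_cache = {}

def subconcept(stimulus):
    # Level 3: learned associations that reuse instinctive results.
    if stimulus not in subconcept_cache:
        subconcept_cache[stimulus] = instinct(stimulus)
    return subconcept_cache[stimulus]

def concept(stimuli):
    # Level 4: an abstraction that generalizes over subconceptual knowledge.
    return sum(subconcept(s) for s in stimuli) / len(stimuli)

print(concept([1, 2, 3]))  # 5.0: (3 + 5 + 7) / 3, each level resting on the one below
```

Each function is justified by the level beneath it, and the chain bottoms out after a few simple, non-inferential layers rather than regressing forever; the levels together do the work that no single level could do alone.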

Science works because scientific methods increase objectivity while reducing subjectivity and relativism. It doesn’t matter that science doesn’t (and, in fact, can’t) eliminate them; all that matters is that it reduces them. This distinguishes science from social construction by directing it toward goals. Social constructions go nowhere, but science creates an ever more accurate model of the world. So one could fairly say that science is a social construction, but it is one that continually moves closer to the truth, if truth is defined in terms of knowledge that can be put to use. In other words, from a functional perspective, truth just means increasing the amount, quality, and levels of useful information.

It is not enough for scientific communities to assume their best efforts will produce objectivity; we must also discover how preferences, biases, and fallacies can mislead the whole community. Tversky and Kahneman did groundbreaking work exposing the extent of cognitive biases in scientific research, most notably in their 1971 paper, “Belief in the law of small numbers.”11,12 Beyond just being aware of biases, scientists should not have to work in situations with a vested interest in specific outcomes. This can potentially happen in both public and private settings but is more commonly a problem when science is used to justify a commercial enterprise. Scientists must not be put in the position of having a vested interest in supporting a specific paradigm. To ensure this, they must be encouraged and required to mention both the paradigm they support and its alternatives, at least to a sufficient degree to fend off the passive coercion that failing to do so creates psychologically.


As practiced, physical science (arguably) starts with these paradigmatic assumptions:

(a) the physical world exists independent of our conception of it,

(b) its components operate only via natural causes and with perfect consistency,

(c) evidence from the physical world can be used to learn about that consistency, and

(d) logical models can describe that consistency, making near-perfect prediction possible.

I have explained why assumption (a) is ultimately irrelevant since knowledge derives from phenomena and not noumena themselves. Point (b) is relevant but not strictly necessary, because functionalism doesn’t require perfect consistency, only enough consistency to be able to make useful predictions. Assumption (c) forms the practical basis for functionalism; the creation of information relies exclusively on feedback. We start with our senses and move on to instruments for greater accuracy and independence from subjectivity. And point (d) simply goes to the power of information management systems to build more powerful information from simpler information, though the physical sciences only scratch the surface by sticking to logical models and near-perfect prediction. Statistical analyses can reveal useful patterns without logic or perfection and are essential tools of the mind and any comprehensive information management system. So functionalism is largely consistent with science as practiced and vice versa. But as we look to explain purely functional phenomena, like the mind itself, we need to move beyond these simplified assumptions to the broader and stronger functional base, because they won’t get us very far.

The stronger functional base is simply that function as an entity exists; i.e. that information and its management exist, both theoretically and via physical manifestations of information management systems. The concept of information is that patterns exist and can be detected (observed) and represented to predict future patterns. Information can be about physical things, or not, and can be represented using physical means, or not. Either way, it is abstracted from the physical via indirect reference and consequently is not physical itself, despite the assistance physical mechanisms provide.

  1. Art does, of course, have real value, which I will discuss much later on. Art addresses subjective needs, but such needs objectively exist, and so a philosophy of art can be objective and hence scientific itself. These subjective needs do fall within the purview of many social sciences, so their philosophy will need to consider the value of art.
  2. René Descartes, Discourse on the Method, Oxford University Press, 2006, part four, p 28
  3. René Descartes, Discourse on the Method, Oxford University Press, 2006, part four, p 29
  4. Descartes and the Pineal Gland, Stanford Encyclopedia of Philosophy, from Descartes writings
  5. Law of Three Stages: The Corner Stone of Auguste Comte’s
  6. Scientific Realism, Stanford Encyclopedia of Philosophy, 2011
  7. Scientific Realism, Stanford Encyclopedia of Philosophy, 2011
  8. Aaron Souppouris, “Richard Dawkins on science: ‘it works, bitches’”, The Verge, at Oxford’s Sheldonian Theater, 2013
  9. The Stanford article hints at other ways scientific realism aspires to functionalism: “There is a weak implication here to the effect that if science aims at truth, and scientific practice is at all successful, the characterization of scientific realism in terms of aim may then entail some form of characterization in terms of achievement.” In other words, that science aims for or achieves a function is thought by some to be a critical part of this realism. While there are purist realists who are unconcerned whether scientific knowledge is useful or not, “most scientific realists commit to something more in terms of achievement”. Scientific Realism, Stanford Encyclopedia of Philosophy, 2011
  10. Scientific Realism, Stanford Encyclopedia of Philosophy, 2011
  11. Amos Tversky and Daniel Kahneman, “Belief in the law of small numbers.”, Psychological Bulletin, 1971
  12. Michael Lewis, The Undoing Project: A Friendship That Changed Our Minds, W. W. Norton & Company, 2016
