
The London Libertarian


Reply to Huemer on Popper

Uncategorised Posted on Sun, September 13, 2020 00:45:13

Are Critical Rationalists Completely Out of Their Minds?

Michael Huemer has published a brief and blistering attack on the thinking of Karl Popper. He proclaims that Popper’s ideas are “insane.” He maintains that if you actually agree with Popper, “you are completely out of your mind,” which implies that Popper himself was completely out of his mind.

Huemer titles his piece, posted on his blog, “You Don’t Agree with Karl Popper.” The point of this title is that, in Huemer’s opinion, a lot of people (he mentions libertarians, many of whom admire Popper’s ideas) think they agree with Popper, only because they don’t fully understand what Popper is saying. If they did understand it, as Huemer does, they would find they wouldn’t agree with it at all, unless they were completely out of their minds.

Huemer gives a summary of some of the ideas people attribute to Popper, and which he is willing to concede they can accept without being completely out of their minds. He then goes on to impute to Popper additional views which he considers to be seriously wrong, and which people can’t accept without being completely out of their minds.

Here’s his summary of the first set of views, the ones Huemer acknowledges that sane people, in his judgment, can readily accept:

“It’s impossible to verify a theory, with any number of observations. Yet a single observation can refute a theory. Also, science is mainly about trying to refute theories. The way science proceeds is that you start with a hypothesis, deduce some observational predictions, and then see whether those predictions are correct. You start with the ones that you think are most likely to be wrong, because you’re trying to falsify the theory. Theories that can’t in principle be falsified are bad. Theories that could have been falsified but have survived lots of attempts to falsify them are good.”

Is this, as far as it goes, a correct statement of Popper’s views? It’s not wildly off, but there are some things which are not strictly accurate.

Most obviously, Popper did not hold that theories that can’t in principle be falsified are necessarily “bad,” merely that they don’t belong to empirical science. He held, for example, that metaphysical theories, such as whether or not the universe is deterministic, can’t be empirically falsified and are therefore not scientific, though they can be, and should be, seriously argued about. Popper wrote an entire book arguing against determinism and for indeterminism (The Open Universe), an issue which he insists does not belong to science. In his view there is no empirical test which could conceivably falsify determinism or indeterminism, and so this is an issue which belongs to metaphysics.

Another example is realism. Are trees, mountains, and stars things which exist independently of our awareness, or are they products of our minds and senses? Popper argues strongly for realism (Realism and the Aim of Science, 80–158). At the same time, he maintains that there is no way of empirically testing realism. So, this is not a scientific question but a metaphysical one. It can’t be tackled by empirical research but only by philosophical argument.

Popper holds that with some metaphysical theories, it may be possible to reframe them so that they become falsifiable. This may then lead to an advance in scientific knowledge. For example, the atomic theories of some of the ancient Greeks were not falsifiable. They were therefore, in Popper’s terminology, metaphysical rather than scientific, though this does not imply that they were not meaningful, important, or interesting. Nor does it imply that they were not objectively true or false.

John Dalton’s theory of the atom published in 1807 made numerous claims about atoms which could be tested by experiment: we could attempt to falsify them. Thus theories about atoms moved from metaphysics into science. Popper does not think this can happen to all metaphysical theories; he does not seek or expect the elimination of metaphysics. His view is that metaphysics will never be disposed of, and some metaphysics will always be required, even for and by science.

There are some non-falsifiable theories that may present themselves as scientific, so in that sense we might say that they’re “bad,” that’s to say, not really what they claim to be. Examples include some of the propositions of psychoanalysis, such as the theory that all dreams represent fulfillment of wishes (Realism and the Aim of Science, 163–174), or that all infants undergo an Oedipal phase of wanting to kill their fathers so that they can have sex with their mothers (Edelstein et al., Therapy Breakthrough, 255–266).

Another possibility is that something might look superficially like a scientific theory, but on examination might turn out to be tautologous, and therefore not empirically refutable. And, Popper points out, truly scientific theories can always be turned into tautologies by interpreting them in such a way that they are immune to empirical refutation. Thus, ‘All swans are white’ becomes a tautology if we add that anything not white cannot be a swan.

That “science is mainly about trying to refute theories” is a bit misleading as a statement of Popper’s logic of science. Popper holds that science is about trying to find good and better explanations of the world, which involves comparing and evaluating theories according to several criteria, one of which is whether they agree with observations.

When Huemer says, “You start with the ones that you think are most likely to be wrong”, I’m not sure what he means. You might perhaps be inclined to start with the theories that have the widest acceptance, since then you will make the most progress by showing them to be false. More generally, I suppose, you’re most likely to start with theories which you find unsatisfactory. You may criticize a scientific theory in ways other than empirical testing, for example by arguing that the theory does not address a problem it was claimed to address, has hidden ambiguities or inconsistencies, or has components that can be eliminated without loss of empirical content (Realism and the Aim of Science, 55–56).

Having looked at the elements of Popper which Huemer thinks we can accept without being completely out of our minds, let’s now turn to those which he considers so terribly mistaken that agreeing with them is tantamount to insanity.

Huemer does not seriously attempt to give an outline of what he takes to be Popper’s philosophy and then explain what he thinks is wrong with it. Instead, he identifies specific assertions he attributes to Popper, and he appears to think that these assertions are so outrageous, ridiculous, and self-evidently “insane,” that it’s enough to cite them and pull faces at them to convince his readers that Popper is completely out of his mind. Because of this procedure of Huemer’s, I will list each of the distressing propositions Huemer imputes to Popper and try to say something helpful about each of them.

These assertions are a mixed bag: some of them are roughly the same as each other or overlap somewhat. Some of them accurately reproduce Popper’s thinking; others don’t. I list them here in the order Huemer gives them, without any attempt to sort them, rank them, or make sense of them. Double quotation marks indicate that Huemer is directly quoting Popper; single quotation marks indicate that Huemer is reporting in his own words what he takes to be Popper’s position. No quotation marks indicates this is my paraphrase of something Huemer attributes to Popper.
I identify seventeen of these statements, which all express contentions Huemer attributes to Popper, and all of which allegedly constitute evidence that Popper is completely out of his mind.

1. ‘. . . the only legitimate kind of reasoning is deduction. Induction is completely worthless. . . . His [Popper’s] point is that there is not the slightest reason to think that any scientific theory is true, or close to true, or likely to be true, or anything else at all in this neighborhood that a normal person might want to say’.

2. ‘There’s no reason to think it’s any more likely that we evolved by natural selection than that God created us in 4004 B.C. The Theory of Evolution is just a completely arbitrary guess’.

3. ‘. . . the goal of science must be to refute theories’.

4. “We must regard all laws and theories as guesses.”

5. “There are no such things as good positive reasons.”

6. “Belief, of course, is never rational: it is rational to suspend belief.”

7. “I never assume that by force of ‘verified’ conclusions, theories can be established as ‘true’, or even as merely ‘probable’.”

8. “Of two hypotheses, the one that is logically stronger, or more informative, or better testable, and thus the one which can be better corroborated, is always less probable—on any given evidence—than the other.”

9. “In an infinite universe . . . the probability of any (non-tautological) universal law will be zero.”

10. ‘Popper is not just denying that we can be certain of these theories, and not just denying that they are likely to be true; he claims that they are absolutely certain to be false’.

11. ‘When you get done testing your scientific theory, and it survives all tests, you can’t say that it’s likely to be correct; it’s less likely to be correct, even after you’ve gathered all the evidence, than some unfalsifiable, unscientific theory’.

12. ‘We have no reason to believe in science, and pseudoscience is more likely to be correct, and in fact the paradigmatic scientific theories are definitely wrong. . . .’

13. ‘. . . you can’t logically deduce the falsity of the probability claim from observations. And again, that’s the only thing you’re allowed to appeal to. So, on Popper’s view, quantum mechanics must be unscientific’.

14. The existence of vestigial legs in the bodies of some snakes is evidence for evolution, and this ‘isn’t a matter of deduction’.

15. ‘. . . Popper’s philosophy entails that the Theory of Evolution and the asteroid-impact theory are unscientific, besides that we have no evidence at all for either of them’.

16. ‘Of course, the obvious problem is that it’s absurd to say that we don’t have any reason to think any scientific theory is true’.

17. ‘. . . scientific theories are less likely to be correct than unscientific ones, even after they survive stringent tests’.

By my count, nine of these seventeen propositions are false—in these nine cases Huemer attributes a view to Popper which Popper most definitely does not hold. A couple of the propositions are a bit indeterminate, so only six of the seventeen are definitely correct. In those six cases where Huemer gives Popper’s view correctly, I will generally defend what Popper says. The one case where I don’t defend what Popper himself says, or at least the way he expresses it, is #6.

I will comment specifically on each of Huemer’s seventeen allegedly Popperian positions, but before I do that, I will now give a very brief account of that part of Popper’s philosophy which Huemer judges to be insane and which, in Huemer’s view, you cannot accept unless you are completely out of your mind.

Critical Rationalism in a Nutshell

Before Popper, it was generally accepted by philosophers that science, as well as ordinary common-sense knowledge, relied upon a procedure known as induction. Induction is supposedly the way we get from the particular to the general, from the individual to the universal, from a limited number of observations to a universal law. For example, it is held that objects near the Earth will fall toward the Earth with an acceleration of 9.8 meters/second/second. Certain conditions have to be imposed. The object must be in a vacuum, otherwise the atmosphere will make a difference to the acceleration because of air resistance. But these conditions don’t matter for our discussion here. We assume an acceleration of 9.8 m/s/s, then make an added calculation to allow for air resistance. Several additional and very minute adjustments have to be made for good accuracy. The point is that apart from changes in these stated conditions, the acceleration is always the same.
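The constant-acceleration law just stated can be turned into a short calculation (a sketch in Python; the 9.8 m/s/s figure and the vacuum assumption are taken from the text above, and the two-second example is invented for illustration):

```python
# Free fall near the Earth's surface, ignoring air resistance (vacuum assumed).
# Under the stated law, speed and distance depend only on elapsed time.

G = 9.8  # acceleration in m/s/s, as in the text

def velocity(t):
    """Speed in m/s after falling for t seconds from rest: v = G * t."""
    return G * t

def distance(t):
    """Distance fallen in meters after t seconds from rest: d = (1/2) * G * t^2."""
    return 0.5 * G * t ** 2

print(velocity(2.0))  # 19.6 m/s
print(distance(2.0))  # 19.6 m
```

The universal claim is precisely that these formulas hold for every falling body near the Earth, observed or not, past or future; the question of what entitles us to that claim is the subject of what follows.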

The question is: what entitles us to say that because we have observed a number of instances of falling bodies which conform to an acceleration of 9.8 m/s/s, the same will be true of all falling bodies near the Earth, for example, a thousand miles away from where we have made our observations, or a thousand years in the past or the future?

In 1739, David Hume pointed out that there is no logical operation which enables us to make this leap: to say that what we have observed of a certain finite range of instances will apply to other instances which we have not observed. We observe today in Edinburgh that sodium chloride dissolves in water whereas iron filings do not dissolve in water. There is no logically sound way to infer from today’s findings that next week in Edinburgh, or today in London, we will not find that sodium chloride does not dissolve in water whereas iron filings do. Nor does the fact that things have behaved a certain way in the past, or in our neighborhood, even make it more probable that they will behave the same way in the future, or in a distant location—not an eentsy weentsy bit more probable. These are elementary logical points. They hold equally for the Aristotelian logic with which Hume was acquainted and for the modern logic developed by Peano, Frege, and Russell at the end of the nineteenth century. Consequently, there can be no valid method of induction.

Both science and everyday common sense require that we do form conclusions about what happens invariably, in all times and places. Therefore Hume’s insight looks as if it must undermine both science and everyday common sense—as long as we cling to the notion of induction, the assumption that we can logically derive statements with a boundless range from a limited set of observations.

As a result of this discovery of Hume’s, the philosophy known as empiricism was put in question. Empiricism holds that we get knowledge of the world only through the evidence of our senses, through observation, and through logical deductions from our observations, but since logic does not permit us to extend our conclusions beyond a list of our past observations, much of our knowledge (which is of a general or law-like character) can never be logically obtained by following empiricism.

Philosophers have attempted to tackle this problem in a number of ways, but nothing has shaken Hume’s finding that, according to logic, valid induction is impossible and therefore gaining knowledge of the world by a purely empiricist approach is impossible.

Skipping over the history of the many failed attempts to find a reasonable basis for induction, which have always necessarily tended to move in the direction of saying that we know some things prior to any experience, we can now explain Popper’s radically different approach.

Popper accepts Hume’s conclusion that, in our search for general laws, we cannot support these laws by induction. The fact that some regularity has been observed on all occasions, around here and up to now, cannot logically offer any support for the proposition that the same regularity will continue to be observed in other places or other times. So, Popper entirely agrees with Hume’s rejection of the possibility of valid induction.

Unlike Hume, who considered that we do use induction, even though it is logically indefensible, Popper goes on to say that we never do use induction; if we think we have arrived at a conclusion by induction, we are victims of a kind of optical illusion, whereby we falsely reconstruct the actual steps of our reasoning (Realism and the Aim of Science, 35). Consequently, Huemer’s #1, that ‘Induction is completely worthless’, is a little misleading. Rather, in Popper’s view, induction does not exist; there is no such thing as induction. It would be odd to say, ‘Levitation is completely worthless’.

Against Hume, Popper says that we do not have to conclude that we cannot gain knowledge of the world and extend and improve our knowledge. We can do this by the method of conjecture and refutation. First, we come up with a conjecture (a surmise, guess, or hypothesis) about some apparent regularity in the world. Then we test that conjecture by comparing it with subsequent experience. Sometimes we find that our conjecture is contradicted by an experience or observation, and then we may decide to abandon that conjecture and replace it with a second conjecture. If we find that experience contradicts our first conjecture while not contradicting our second conjecture, we may conclude that our first conjecture has to be scrapped, while our second conjecture can survive, at least for the moment.
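The cycle just described can be sketched as a loop (a schematic in Python; the conjectures and the observations are invented for illustration, not Popper's own examples):

```python
# Schematic conjecture-and-refutation loop: hold a conjecture until an
# observation contradicts it, then move on to the next conjecture.

conjectures = [
    lambda x: x < 10,   # first guess: every observed value is below 10
    lambda x: x < 100,  # second guess, tried if the first is refuted
]

observations = [3, 7, 42]  # invented data; 42 refutes the first guess

current = None
for guess in conjectures:
    if all(guess(obs) for obs in observations):
        current = guess  # keep the first guess not contradicted so far
        break
# If the loop ends without a break, every conjecture on hand is refuted
# and 'current' stays None: we need a new guess.

# Surviving is not proof: 'current' may yet be refuted by a future observation.
```

Note that the loop never verifies anything; it only eliminates guesses that clash with the observations and retains, provisionally, one that has not yet clashed.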

So, returning to the question of what entitles us to say that objects always fall to Earth with an acceleration of 9.8 m/s/s, the answer is that we cannot soundly deduce this conclusion from our observations, but that since no observation has been found to contradict this guess, we have decided to stick with it. Obviously, this does not mean that it is true or likely to be true, though, as far as we can tell, given our present stock of knowledge, including all our past observations, it might be true. (However, in this case we have found a better theory, in which 9.8 m/s/s occurs as a special case: a general theory of gravitational attraction between two bodies, which enables us to say that the acceleration will be different on other planets and moons.)

Unlike induction, which must always be logically unsound, conjecture and refutation is logically impeccable. It would not be logically impeccable if we were to claim of our first guess that we have proved it from observation or experience, or even if we were to claim that it had a probability greater than zero. We have not proved it, in the sense of logically deriving it from observations, nor have we shown it to be more probable because it is consistent with all observations so far. As the controversy over ‘grue’ should have reminded us, an infinite number of false theories are always entirely consistent with all observations so far.

We can remain logically correct if we simply say that we have made a guess at the truth, that our guess might be true, and that we’re going to stick with it for the time being. That’s what we can say about our first theory, then about a second theory which might replace our first theory, then about a third theory which replaces our second theory, and so on indefinitely. We stick to each guess up to a point, and abandon it when it apparently becomes incompatible with experience or observation. We then move on to a new guess, and the same process repeats itself, possibly without end.

Having come up with a guess, we at first stick to that guess, since we have nothing better. Our minds are pre-programmed by millions of years of evolution to search for patterns and generalities in the incoming flood of experiences. Now, in practice it may be that we become emotionally committed to our guess—we may believe in our guess—and we have in fact evolved to be prone to believe stuff (perhaps because it was advantageous to the survival of humans to stick to their guesses quite tenaciously, or perhaps because there is something in the very nature of consciousness that induces us to believe). But from a logical point of view, belief is extraneous, redundant, and immaterial. Logic is not psychology. Scientific method has nothing to do with belief, just as the proof or disproof of a mathematical theorem has nothing to do with belief.

As we move from one guess to another, our currently accepted guesses tend to get better—better in the sense of seeming to us, given the totality of our knowledge, to be more promising stabs at the truth. When we replace theory A with theory B because we have refuted theory A by finding a counter-example to it, we can conclude that theory A is false, whereas theory B might be true but is not necessarily true. We prefer theory B to theory A because theory B has not yet been refuted, while theory A has been refuted. If someone proposes theory C, we can look for some way in which the predictions of B and C are different, and then perform the ‘crucial experiment’ which will tell us which of the two predictions comes true. We can never demonstrate the truth of our theory, but we can demonstrate that one theory is better than another because it tests out better, and we may be able to say that the theory we have provisionally accepted is the best we have been able to come up with so far. We therefore prefer it to any of its known rivals, even though we have no guarantee, and can never have any guarantee, that it is true.

Conjecture and Refutation in the Crib

Popper’s process of conjecture and refutation, or trial and error, is the method used by human babies as they learn about the world. The research of Alison Gopnik and her associates, which has reconstructed, through very close attention to infant behavior, what babies believe even in the first few weeks of life outside the womb, shows that babies adopt a theory about the world, revise or replace it when it clashes with their subsequent experience, and do this sequentially and progressively, getting gradually closer to what we regard as the common-sense knowledge of grown-ups—which will then be further revised and in large part discarded by those grown-ups who pursue an education in science.

By the way, it seems that Gopnik did not know about Popper and did not realize how well her research fit the Popper conception. She refers to a baby’s first guess as “induction.” Since reading Gopnik, and being impressed by her congruence with Popper, I have found that other psychologists have argued along somewhat similar lines, notably Robert Siegler.

It was reading and pondering Gopnik’s exciting results (The Scientist in the Crib) which prompted me to understand that, not only is the critical rationalist theory of conjecture and refutation the correct account, but all other accounts are absurd and preposterous. There is no other way in which conscious animals could conceivably have developed a culture involving progressive accumulation of knowledge, except the Popperian system of conjecture and refutation. But even if I were to be wrong about that, it would remain true that conjecture and refutation is in fact the only method by which knowledge does accumulate.

Over many years of thinking and talking about Popper, I have noticed some common misinterpretations which tend to lead people astray. Here I will just point out one of these: the assumption that Popper’s philosophical account of the logic of science offers a recipe for doing science. According to Popper, there can be no recipe for doing science successfully, any more than there can be a recipe for creating a great work of art. The logic of science is not a cookbook for doing science, any more than a textbook of logic is a handbook for winning debates.

In particular, we ought to keep clear the distinction between the purely logical and the practical applications of falsification. Whereas no accumulation of observing white swans and only white swans can substantiate or even make slightly probable the universal statement that all swans are white, a single observation of a black swan automatically and necessarily refutes the statement that all swans are white. That is a simple truth of logic. But in the actual practice of science, it may be that a theory continues to gain acceptance despite the existence of a falsifying observation (Schilpp, 1021, 1035).
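The logical asymmetry between verification and falsification is easy to state in code (a sketch in Python; the swan data are invented):

```python
# A universal statement over an open-ended domain can be refuted by a single
# counter-example, but no finite run of confirmations can verify it.

swans_seen = ["white"] * 10_000  # ten thousand confirming instances

def refuted(observations):
    """'All swans are white' is refuted iff some observed swan is not white."""
    return any(color != "white" for color in observations)

print(refuted(swans_seen))              # False: not refuted, but not verified either
print(refuted(swans_seen + ["black"]))  # True: one black swan settles it
```

No matter how large we make the list of white swans, the universal statement remains unverified; the single black swan, by contrast, refutes it conclusively as a matter of logic (though, as the next paragraphs explain, scientific practice may still defer acting on a falsifying observation).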

A particular observation may be dismissed after failure to reproduce it. But even a reproducible falsifying observation may be acknowledged without abandoning the theory it contradicts. This will be recognized as a troubling anomaly, something that ought to be resolved somehow, but scientists will not always feel that they ought to immediately make the problem go away by abandoning the theory. Our preferred theory may have such merits that we’re prepared to put on hold the problem that some observation conflicts with it—that, in simple terms, it appears to have been refuted.
Another qualification to the simplest model of conjecture and refutation is that certain methodological conventions must be adhered to if science is to work at all, the best-known of these being that, wherever possible, an experiment or observation should be reproducible by many different researchers on many different occasions. A different convention or set of conventions is required to cope with the fact that measurement is never perfectly precise, so we need practical rules to determine what degree of approximation of a result will agree or disagree with a prediction. And, as I will explain shortly, another convention is required to enable us to treat some predictions of probabilities as falsifiable.

Huemer’s Seventeen Propositions

One thing which will immediately strike anyone who has read Popper and then looks at Huemer’s seventeen propositions is that Huemer sometimes attributes to Popper views which Popper very definitely rejects or which contradict things Popper explicitly and frequently states. Nine of Huemer’s seventeen propositions are false accounts of Popper’s thinking: they are never stated by Popper and directly contradict what Popper repeatedly and emphatically asserts.

Huemer might reply that this just shows that Popper contradicts himself. But I’m not sure that would really harmonize with the tone of Huemer’s polemic. The impression he gives is that Popper takes an unambiguous and consistent position which Huemer says is crazy. He doesn’t convey the impression that he thinks Popper occasionally makes crazy remarks which are at odds with the main body of his less crazy philosophy.
In any event, it can be shown that these points where Popper’s statements contradict Huemer’s account of Popper’s thinking cohere quite naturally with the rest of Popper’s theory of science. They are not isolated departures from Popper’s general account.

I will now comment on Huemer’s seventeen points, moving from examples where Huemer incorrectly attributes some position to Popper to examples where Huemer reproduces Popper’s view accurately and where I will defend what Popper says.

Huemer’s #1 (that science has nothing to do with truth) is contradicted by Popper’s numerous statements that science is trying to get at the truth. There are hundreds of these statements; you can hardly read a few pages of Popper without tripping over them. For example, Popper writes (combatting instrumentalism, the view that scientific theories are merely tools for prediction rather than claims about objective reality): “in the search for knowledge, we are out to find true theories, or at least theories which are nearer than others to the truth—which correspond better to the facts” (Conjectures and Refutations, 226).

Huemer’s #3 (the assertion that the goal of science is to refute theories) contradicts Popper’s assertion that the aim of science is “to find satisfactory explanations,” which means explanations “in terms of testable and falsifiable universal laws and initial conditions” (Realism and the Aim of Science, 132–35). Popper sees the growth of knowledge, the progress of science, as the overthrow of currently accepted theories by better theories, which occurs by criticizing existing theories and offering competing alternative theories, which may turn out to be preferable.

Popper adds that it is reasonable, sensible, and sound policy to prefer the best theories we have, and we keep trying to improve our theories by a process of critical discussion and debate. In the realm of empirical science, refuting a theory empirically by showing that it contradicts observations is one, but not the only, important means to that end. We never arrive at a point where we can demonstrate that we have the truth, though we can hope to make progress towards the truth. Thus, in Popper’s account, we can say that Einstein’s relativity theory is better than Newton’s theory of gravitation. Newton’s theory was, in Popper’s words, “a splendid approximation” (The Open Universe, 47), yet it was supplanted by Einstein’s theory, also a splendid approximation, which has advantages over Newton’s theory. Newton is a good approximation to Einstein in a range of circumstances, but not in all circumstances. Observation corroborates Einstein better than it does Newton, because many singular observations contradict Newton without contradicting Einstein.

In his #4, Huemer gives Popper’s claim that all theories and laws are guesses as one of Popper’s self-evidently insane statements. Huemer does not explain why he supposes this is insane or what else theories and laws might be. The craziness is not self-evident to everyone. Consider the following quotation from a talk by the outstanding physicist Richard Feynman:

“Now I’m going to discuss how we would look for a new law. In general, we look for a new law by the following process. First, we guess it [audience laughter]. No, don’t laugh, that’s the truth. Then we compute the consequences of the guess, to see what, if this is right, if this law we guess is right, to see what it would imply and then we compare the computation results to nature or we say compare to experiment or experience, compare it directly with observations to see if it works. If it disagrees with experiment, it’s wrong. In that simple statement is the key to science.” [Feynman 2020. Ungrammatical expressions in the original.]

Another of the Popperian contentions which Huemer apparently takes to be self-evidently insane is his #5, that there are no such things as good positive reasons. (The context makes clear that Popper is talking about good positive reasons for accepting a theory.) Again, Huemer does not explain why he disagrees, or what he thinks might be a good positive reason. Maybe he’s thinking of examples where a piece of evidence clearly supports a particular theory. What Popper would say is that, in such a case, the piece of evidence is incompatible with rival theories. It is therefore, despite what we may at first suppose, a negative reason; it supports a theory insofar as it contradicts that theory’s competitors.

Huemer’s #10 and #12 both have Popper claiming that scientific theories “are absolutely certain to be false.” This is a serious misreading of Popper. Popper contradicts this proposition over and over again. Popper consistently maintains the view, advanced by the ancient philosopher Xenophanes, that we might very well arrive at a true theory, but we could never be in a position to demonstrate conclusively (or to ‘know’) that it was true (Realism and the Aim of Science, 33; Conjectures and Refutations, 114–16, 151–53).

It seems to be a difficulty for some readers of Popper that he combines the objectivity of truth with fallibilism. Popper thinks (and I agree) that the truth or falsity of a theory is absolute and objective, while our being able to determine its truth or falsity can be very difficult, and in many cases impossible—especially when we’re looking at general, law-like theories, which are the most fertile and useful. We should not confuse truth with guaranteed truth.

Huemer does not quote Popper as asserting anything like #10 or #12, and he doesn’t try to show that #10 or #12 can be inferred from anything Popper says. Why then does Huemer make the totally ludicrous claim that Popper asserts that our best scientific theories are “absolutely certain to be false”? It appears that Huemer simply assumes that a true theory cannot have zero probability. Since Popper says that theories have zero probability (Huemer’s #9), Huemer thinks he must be saying that those theories are false. If Huemer supposed that a theory with zero probability must be false, then he would think that his #10 and #12 followed from his #9. Huemer’s supposition here is no doubt the supposition of many people. But it is incorrect, where infinity is involved (Cthaeh 2017; 3blue1brown 2020).

An impossible event necessarily has a probability of zero, but the converse is false. Where the sample space is infinite, an individual outcome can have zero probability and yet still occur. What applies to events applies to propositions specifying events, and so Popper has not made an illegitimate leap in extending this conclusion of probability theory from events to theories.

If you possess a lottery ticket, one of a thousand tickets, your probability of winning (stipulating that precisely one ticket has the winning number) is one in a thousand; if your ticket is one of a million tickets, your probability of winning is one in a million, and so on. If your ticket is one of an infinity of tickets, your probability of winning is zero. This does not mean you cannot win, because, after all, you do have a ticket, one ticket must win, and your ticket is just as likely as any other to be the winner.
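The arithmetic of the lottery example can be sketched in a few lines (a minimal illustration of my own; the code and its numbers are not Popper’s or Huemer’s):

```python
from fractions import Fraction

def win_probability(n_tickets: int) -> Fraction:
    """Chance of holding the single winning ticket among n_tickets."""
    return Fraction(1, n_tickets)

# The probability shrinks toward zero as the number of tickets grows...
for n in (10**3, 10**6, 10**9):
    print(n, float(win_probability(n)))

# ...and in the limit of infinitely many tickets it is exactly zero.
# Yet one ticket must still win: zero probability is not impossibility.
```

The same point holds in the continuous case: a dart thrown at a line segment hits some point, though each individual point has probability zero.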

Since critical rationalism does not require that its adherents don’t have beliefs—it just says that belief is a subjective psychological quality immaterial to the logic of science—a critical rationalist may very well believe that a theory is true, while acknowledging that it has zero probability. (But what if the universe isn’t infinite? Well, it’s still quite big.) As I inspect the contents of my own mind, I find that I do believe in conservation of momentum. So I believe, simultaneously, that conservation of momentum is true and that the probability of its being true is zero. But, remember, these are just my beliefs, which, like your beliefs, or anyone else’s beliefs, always count for nothing.

By the way, if we adopt the metaphysical principle that nature is strictly governed by universal laws which can be grasped by humans, we may (with a few more steps) hope to be able to avoid the ‘zero probability’ conclusion, but we don’t know a way to demonstrate the truth of any such metaphysical principle. We can’t validly deduce from observations that we live in a universe governed by universal laws, let alone by laws we could possibly discover. I’m guessing, though, that a lot of people do tacitly hold to some such metaphysical principle, which may help to explain why they seem to find it intuitively obvious that a well-corroborated theory like relativity or quantum electrodynamics must have a greater than zero probability of being true.

But can’t we reasonably say that the current theories of scientific cosmology are more probably true (or closer to the truth) than, say, a literal reading of the first chapter of Genesis? Of course we can! But we should clarify what is meant by such a claim.

Popper points out that we often use words like ‘probable’ and ‘likely’ in ways that are not defined in terms of the calculus of probabilities—the branch of mathematics which we learn when we study probability theory. Confusion can arise when we start supposing that a use of the term ‘probable’ which owes nothing to the calculus of probabilities is an application of the calculus of probabilities (Realism and the Aim of Science, 282–83). We sometimes use ‘probable’ or ‘likely’ as a synonym for ‘rationally preferable’ or ‘promising as a candidate for truth’. We cannot meaningfully say that the probability that Genesis is correct is one number, and the probability that the Big Bang happened is a different and presumably somewhat higher number. That would be a strange thing to try to do, since we judge that the Genesis theory has been refuted.

But what about when one theory replaces another, as Einstein replaced Newton? Can’t we compare the probability of Newton’s being right with the probability of Einstein’s being right? Well, although Popper himself took a dim view of arguments from the history of science, I can point out that this is just not what happens. There was a crucial experiment (precession of the perihelion of Mercury) which corroborated Einstein and falsified Newton. Other crucial observations have been consistent with Einstein and not with Newton. So, most scientists, some more quickly than others, came around to the view that Newton was false and Einstein possibly true. No one, in 1916, thought about the probability of Newton or Einstein being correct; they thought about making observations to determine which one was correct (or closer to being correct).

Huemer apparently presents #8 (that more informative theories are always less probable than less informative theories) as one of the things you can only accept if you are insane or completely out of your mind. And yet this statement by Popper is not really controversial. The point Popper is making is that theories with more content are to be preferred to theories with less content, and the theories with more content must be less probable than the theories with less content.

For example, the theory that all persons named Huemer are color-blind is more probable than the theory that all persons named Huemer are color-blind and left-handed, which is in turn more probable than the theory that all persons named Huemer are color-blind, left-handed, and bald. The theory that all cyclists in Chicago have antibodies for SARS-CoV-2 is more probable than the theory that all cyclists in the Midwest have antibodies for SARS-CoV-2, which is in turn more probable than the theory that all cyclists in the US have antibodies for SARS-CoV-2. These are elementary applications of a fundamental truth of probability theory.
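These comparisons are just the conjunction rule of probability theory: each added conjunct multiplies in a factor no greater than one. A sketch with invented numbers (the individual probabilities below are assumptions for illustration, not data):

```python
# Invented, illustrative probabilities for three independent properties.
p_colorblind = 0.08
p_lefthanded = 0.10
p_bald = 0.25

# Each extra conjunct multiplies in a factor <= 1, so the probability
# of the conjunction can only stay the same or fall.
p1 = p_colorblind
p2 = p_colorblind * p_lefthanded
p3 = p_colorblind * p_lefthanded * p_bald

assert p1 >= p2 >= p3  # more content, lower probability
print(p1, p2, p3)
```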

The more informational or empirical content a theory has, the more it claims, the more improbable it is. We want theories that claim as much as possible, and the more they claim, the less probable they must be. A theory with more content prohibits more; it is bolder; it takes more risks. If we find we can adopt it, it is more useful because it yields more information. It follows that we look for theories that are as improbable as possible, since these, ceteris paribus, must be the best theories.

There is therefore a special sense in which scientific theories may often be less probable than pseudoscientific theories. If the pseudoscientific theories are vacuous, impossible to pin down, compatible with an unlimited range of observations, then the pseudoscientific theories will be more probable; in fact their probability of being true will be 1. They don’t commit themselves to anything in the world because they are vague and waffly. They say so little that it is impossible to find counter-examples, and therefore impossible to disprove them.

Astrology is an example. Not counting the serious statistical work of Michel Gauquelin (‘neo-astrology’), the problem with traditional astrology is that it’s compatible with an indefinite range of imaginable observations; we can’t think of any conceivable observation which could refute it. The same is true of Freudian or Jungian analysis. ‘Pseudoscientific’ systems, like traditional astrology, Freudianism, and Jungianism, are generally characterized by including centrally important theories which are compatible with any conceivable observable events; they therefore have a maximally high probability of being unrefuted by observations. It’s characteristic of genuinely scientific theories that they imply that many conceivable observable events will never, in fact, be observed, and therefore these theories could easily be refuted by observing one of these events.

Aside from that kind of example, due to a difference in content, I don’t see that Popper’s theory implies that a scientific theory must always be less probable than a pseudoscientific theory, as Huemer seems to be suggesting in his #11 and #17. Is it always true that a pseudoscientific theory is more probable than a scientific theory? That’s an interesting question which might be worth pursuing, but, as far as I can recall, Popper has made no claim anent it. On quick reflection, it’s not obvious to me that, say, Velikovsky’s theory, normally considered pseudoscience, is more probable, because more vacuous, than standard cosmology.

Non-Lawlike Theories

According to Huemer, Popper holds that “The Theory of Evolution is just a completely arbitrary guess.” This is an astounding misreading. In Popper’s view, all theories are guesses but these guesses are rarely arbitrary. In science, they are usually attempts to solve problems. Perhaps the main problem Darwin attempted to solve was ‘How can we account for the diversity of life forms, along with the varying degrees of similarity of some of them?’ And there are various related problems, such as ‘How can we account for the fact that different geological strata bear the fossils of different life forms?’

Guesses arise as solutions to problems, and different guesses compete with each other. Guesses which are better at solving problems tend to win out in competition with less successful guesses. So guesses which survive are far from arbitrary, though they always remain guesses.

According to Huemer’s #15, Popper holds that evolution and the asteroid-impact theory of the extinction of dinosaurs are “unscientific.” Popper didn’t write about the asteroid-impact theory, but he wrote quite a bit about Darwin’s theory and accepted it as a momentous achievement. (And he often pointed out that the growth of human knowledge—conjecture and refutation—is a form of natural selection, in which theories compete and die.)

These two theories, evolution and asteroid-impact, do have the peculiarity, however, that they are each accounts of a unique chain of events that happened just once—the evolution of life on Earth and (a subset of that) the extinction of the dinosaurs. These are theories which do not take the form of universal laws. They do require the application of a number of universal laws, from physics, chemistry, and biology. Much of what Popper says about some scientific theories would not apply in these cases, since they do not possess a law-like logical structure, and Popper tends to be focused on theories which do possess a law-like logical structure.

Huemer asserts that “Real scientific theories . . . are not normally of the form ‘All A’s are B’ (as in philosophers’ examples).” This perhaps carries the innuendo that philosophers are making some sort of mistake by giving so much attention to theories of the form ‘All A’s are B’ (or ‘All swans are white’). But all scientific theories having the form of laws (or putative laws) do indeed take the form ‘All A’s are B’. All other theories, such as a theory, or story, of what happened at a particular time and place, involve applications of these law-like or universal theories. Philosophers are not slipping up when they give so much attention to theories of the form ‘All A’s are B’.

Nonetheless, theories of what happened historically, such as the theory of evolution, are still subject to conjecture and refutation. We can compare the theory of what happened with observations. The creationist theory in its most popular form can be refuted by many observations, including the existence of datable fossils gradually changing over billions of years.

In both the general account of evolution and the asteroid extinction theory, the theory in question has competed with alternative theories. The best-known alternative to evolution is special creation. Given evolution, the best-known alternative to natural selection is the Lamarckian theory involving the inheritance of acquired characteristics. In both these examples, we decide by refutation. Special creation, especially special creation less than ten thousand years ago, has been falsified. As an explanation for complex adaptations, inheritance of acquired characteristics has been falsified.

Alternatives to the asteroid extinction theory still have some following among the relevant scientific specialists: continental movement, volcanic activity, climate change, and competition from mammals are among the contenders. Here we don’t look at probabilities, but at the possibility of refuting one or more of the competing theories. (We accept there was an asteroid impact at approximately the right time, because of the evidence of worldwide iridium deposits, but that doesn’t prove that the impact was the cause of the extinction.)

Theories about a unique historical succession of events have the quality that in these theories the ‘problem of induction’ does not arise. Since these theories take numerous universal laws for granted, and try to establish what happened in some specific instance, they don’t make any attempt to proceed from the particular to the general, from the singular to the universal. They may make use of probability, but the use of probability is purely deductive. Probability is involved, but it is not involved in the process of arriving at a universal law. These theories are—like the theories of Sherlock Holmes in solving crimes—all a matter of observation and deduction, with no place for induction.

Huemer gives the observation that some snakes have vestigial legs as evidence in favor of Darwinian evolution and against creationism. As a somewhat facetious aside, let me point out that the most popular form of creationism derives from the first few chapters of Genesis, where we are indeed informed that snakes originally had limbs (as well as being highly intelligent and fluent in Hebrew), which they lost after a snake persuaded Eve to eat the forbidden fruit (Genesis 3). So a fundamentalist Jew or fundamentalist Christian might not be fazed by the vestigial legs in some snakes.

A better example would be the feet on some fossilized whales. Creationists scoffed at the evolution story that whales descended from land-dwelling animals and ridiculed the claim that whales had ever possessed feet. After many decades of arguments about evolution, fossil whales with feet were dug up in Egypt, one of many examples where new observations have strikingly corroborated—in a highly ‘improbable’ way—Darwin’s theory.

Huemer seems to reason like this. Snakes’ vestigial legs are not required by the theory of evolution; the absence of vestigial legs would not contradict evolution nor would it contradict the specific example of an evolutionary pathway, that snakes are descended from animals with limbs. Therefore, no refutation is involved in the finding that snakes have vestigial legs, nor would it be involved in the opposite finding.

There are numerous potential falsifiers to the prevailing account of evolution—to take the most popular example, if a fossil of a rabbit were to be found in pre-Cambrian strata. But Huemer’s point is that the vestigial legs on snakes do help to support the theory that evolution occurred, and that snakes are descended from animals with limbs, simply because this is the kind of thing you might expect to find if evolution were true, even though such a finding is not required. Hence, we have here a case of the strengthening of a theory because of something that increases the probability of the theory’s being correct.
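Huemer’s reasoning here is standard Bayesian confirmation: evidence a theory makes likely, without entailing it, raises the theory’s probability. A toy sketch, in which every number is invented for illustration and none is drawn from Huemer’s post:

```python
# H = "snakes are descended from limbed animals";
# E = "some snakes have vestigial legs".
# All probabilities below are invented for illustration.
p_h = 0.5               # prior probability of H
p_e_given_h = 0.3       # E is expected, though not required, under H
p_e_given_not_h = 0.05  # E would be surprising if H were false

# Total probability of E, then Bayes's theorem.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e

assert posterior > p_h  # E confirms H without being entailed by H
print(round(posterior, 3))
```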

All this seems quite persuasive, but I don’t accept that it contradicts Popper’s account, because we are not dealing here with the attempt to arrive at a universal law. ‘Probability’ here has no inductive implication. I might add that we’re not compelled to approach even this matter in anything like the Bayesian manner. We can simply say that the vestigial legs in fossil whales constitute an observation that has to be explained; a theory which offers a good explanation has an advantage; a theory which predicted precisely this would be even better (the theory did not predict that fossils of whales with vestigial feet would be found, but it did assert that there were once whales with feet; finding precisely that kind of fossil is a good corroboration of that very prediction).

To avoid a possible misunderstanding, I should mention something that Huemer does not raise: that Popper at one time explicitly stated that the Darwinian theory was not a scientific theory, but a “metaphysical research programme” (Schilpp, 133–143) because, he argued, it was not falsifiable. In taking this position he did not dispute the fact that it might very well be a true account of evolution, nor did he dispute that it had a productive role to play in guiding scientific research. Nor did he dispute that many narrower components of the evolutionary story would be falsifiable (no rabbits in the pre-Cambrian). Later Popper reversed himself on this issue, and accepted Darwinism as falsifiable and therefore scientific (Radnitzky and Bartley, 143–47).

Deduction and Probability

Huemer reports Popper as holding that “Induction is completely worthless; probabilistic reasoning is worthless” (Huemer’s #1). Most of what we might want to call ‘probabilistic reasoning’ is not inductive. From the fact that a coin has a one-half chance of landing heads, plus the fact that a die has a one-sixth chance of landing 4, it follows that the chance of the coin landing heads and the die landing 4 is one-twelfth. This is presumably a case of probabilistic reasoning, yet there is nothing inductive about it. It is purely deductive, with the multiplication law for the joint probability of independent events as a premiss. And then there is the whole of inferential statistics: I assume Huemer would call statistics probabilistic reasoning, yet it involves no attempt at induction; it is all purely deductive.
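The coin-and-die calculation is purely deductive, and can even be checked by exhaustively enumerating the joint sample space:

```python
from fractions import Fraction
from itertools import product

# Deduction from the multiplication law for independent events.
p_heads = Fraction(1, 2)
p_four = Fraction(1, 6)
assert p_heads * p_four == Fraction(1, 12)

# The same result by brute enumeration of all equally likely outcomes:
# 2 coin faces x 6 die faces = 12 outcomes, exactly one of which
# is (heads, 4).
outcomes = list(product(["H", "T"], range(1, 7)))
favourable = [o for o in outcomes if o == ("H", 4)]
assert Fraction(len(favourable), len(outcomes)) == Fraction(1, 12)
```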

Huemer gives quantum theory as an area where the predictions are probabilities, and he says that probabilities cannot be falsified, which is true. Therefore, the implication seems to be, quantum theory is not scientific by Popper’s definition. He then says that quantum theory is “weird” so he will not rely on it as a counter-example to Popperian falsifiability, and goes on to his examples of evolution and the asteroid extinction theory.

Yet quantum theory also makes some predictions which are not probabilities and are extremely precise. In fact, famously, the most extraordinarily precise prediction in the entire history of science, the anomalous magnetic moment of the electron, is made by quantum electrodynamics. So, in addition to its predictions of probabilities, quantum theory makes amazingly precise non-probabilistic predictions, and would therefore still qualify as scientific.

However, long before quantum theory, physics relied heavily on predictions of probabilities, notably in statistical mechanics—and there is surely nothing less weird than statistical mechanics. And even in quantum theory, we don’t want to say that the predictions of probabilities, taken by themselves, are always unscientific.

From the beginning, Popper confronted the obvious problem for the falsifiability criterion that predictions of probability cannot be refuted. He pointed out that physicists themselves have never seen this as a practical problem, and have routinely viewed their statistically-based theories as refutable. Popper’s theoretical solution to this problem is essentially along the lines developed by practical scientists: we adopt a methodological convention which requires us to disregard extremely low probabilities. (The solution is more precise and elegant than I can unpack here; see The Logic of Scientific Discovery, 190–97).
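The flavour of such a methodological convention can be sketched as follows. This is my own toy reconstruction, not Popper’s formal treatment, and the cutoff value is an arbitrary choice for illustration:

```python
from math import comb

def binomial_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probabilistic claim under test: "this coin is fair" (p = 0.5).
# Observation: 90 heads in 100 tosses.
tail = binomial_tail(100, 90, 0.5)

# Methodological convention: treat the claim as refuted if the observed
# outcome, or a more extreme one, falls below the agreed cutoff.
ALPHA = 1e-9  # arbitrary conventional cutoff, chosen for illustration
print(tail, tail < ALPHA)
```

Strictly, 90 heads in 100 tosses is logically compatible with a fair coin; the convention is what licenses treating the claim as refuted in practice.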

Duhem’s Argument Against Falsifiability

Huemer advances the “Duhem-Quine Thesis” as a reason for dismissing Popper out of hand. In this case Huemer doesn’t claim that Popper is insane or completely out of his mind. But since he does maintain that this argument very simply disposes of Popper’s account of science, I will touch upon it here.

Pierre Duhem pointed out in 1906 that when we try to test a theory by deriving a prediction from it, a failure of the prediction can’t conclusively refute the theory, because when we use a theory to predict, we always rely on other assumptions or hypotheses, assumptions not contained in the theory but necessary to derive a prediction. So, we never test the theory alone, but only the theory in conjunction with other propositions. Therefore, we can’t be sure that an observation refutes our theory. It could be that if we changed one of those other propositions, the prediction would be confirmed rather than contradicted. These auxiliary assumptions may include such things as the reliability of our instruments as well as our assumed initial conditions or other background knowledge that we might take for granted.

Popper holds that, typically, a theory is not abandoned after one contradictory observation or even many. It is usually abandoned when there is an alternative theory which is able to prove its mettle by making correct predictions where its rival made false predictions. In Popper’s view, the process of conjecture and refutation typically takes place in the context of two or more competing, rival theories. If two theories are accompanied by the same auxiliary hypotheses, these cancel out, and a crucial experiment will be a pure test of one theory as against the other.

Popper’s view is that every effort should be made to make the theory falsifiable. It’s always open to anyone to challenge the crucial experiment with a new interpretation which places the onus for the refutation on one of the accompanying assumptions and thus rescues the theory itself from refutation. Here, Popper says that it would be preferable if the person ‘saving’ the theory in this way would offer a new formulation to make the theory, along with the changed assumption, independently falsifiable.

Huemer also claims that Newtonian dynamics is not falsifiable because it does not say anything about the “total forces” acting on bodies. I’m puzzled by what Huemer is getting at here. Newton’s theory doesn’t rule out the possibility of non-gravitational forces which would have to be explained. If there is, for instance, a supernova (an exploding star), the shock wave pushes nearby bodies away. So this might be another force, a non-gravitational part of ‘total forces’. The supernova itself is not part of Newtonian dynamics but does not contradict Newtonian dynamics. The effects of the explosion on nearby bodies would follow Newton. The same applies to any major non-gravitational forces not yet known.

When applying Newton’s theory, it’s normally part of the stated or assumed initial conditions and background assumptions that other forces which might move bodies, such as magnetic attraction and repulsion, or the propulsive effect of volcanic eruptions, are negligible. Within a planetary atmosphere, Newton has no trouble coping with the modification of falling motion by atmospheric drag. It’s always open to anyone to assert that some heretofore overlooked force has been partly responsible for some motion not accounted for by Newton. The mere existence of forces other than gravity does not contradict Newton. In the absence of motions inexplicable by gravity and requiring the introduction of other, non-gravitational forces, Newton’s theory explains all routine motions of bodies.

On the other hand, there’s an infinity of conceivable observations which would clearly refute Newtonian dynamics. If, for instance, it were found that the orbits of bodies were indeed elliptical but their orbital speeds were constant (or if this were found to be true, say, for orbiting bodies outside the Milky Way, or above a certain mass), this would refute Newtonian dynamics. The same applies if, say in some distant region, gravitational attraction declined as the cube of the distance. That does not necessarily mean that Newtonian dynamics would be rejected. In Popper’s account, refutation is never beyond criticism and does not automatically lead to rejection. And then, of course, there are the Einsteinian predictions which did contradict Newton and led to the rejection of Newton. It didn’t seem to bother anyone that this refutation and rejection didn’t account for unknown and unspecified non-gravitational forces. So I can’t see the point of Huemer’s assertion that Newton’s theory does not predict the total forces acting on a body.

Duhem understood that laws of nature can never be arrived at by induction. He was a conventionalist; he did not consider major scientific theories, laws of nature, to be literally true or false, but rather unquestioned assumptions or definitions used to guide our thinking, and not subject to empirical proof or disproof. Duhem is part of the intellectual movement, beginning with Kant, that tried to come to terms with Hume’s disproof of the possibility of induction by supposing that fundamental physical laws are imposed on nature by the human mind. Popper accepted that conventionalism can never be logically refuted. Any theory can always be saved from refutation by holding the theory true by definition. But Popper insisted that a theory like Newton’s should be treated as objectively true or false.

Einstein had shown that we could conceive of Newton’s theory being false, and we could conceive of Kant’s ‘categories’ as being false (because, among other things, space doesn’t have to be Euclidean). Popper therefore embraces a metaphysical and methodological commitment to the view that no physical theory may be decreed immune from attempts to falsify it. Popper believes that without this commitment, science would eventually die, becoming ossified into an apodictic scholasticism.

Huemer presents the Duhem argument as a refutation of the possibility of falsifying a theory by observation, and therefore a refutation of Popper’s theory of science. Yet there’s surely something strange about Huemer’s claim that Duhem’s argument shows the impossibility of ever falsifying a theory. For every imaginable conception of how science operates, including whatever conception Huemer would defend, must make some logical link between theory and observations.

If no observation could ever contradict a theory, then neither could any observation ever confirm a theory; any logical link between theory and observations would be severed. Any theory of the relation between observation and theory, not just Popperian falsification, would be dismissed by Huemer’s employment of the Duhemian argument. Indeed, Duhem saw his argument as specifically combating Francis Bacon’s ‘inductive’ approach to laws of nature. If Duhem’s argument disposes of falsificationism, it must equally dispose of any brand of inductivism.

This is all the more relevant because Huemer apparently does accept Popper’s claim that a theory ought to be falsifiable (even though Huemer supposes that other criteria are also required for theory selection). According to Huemer:

“There really is something important about falsifiability. Intuitively, there is something bad about unfalsifiable theories, and we have Popper to thank for drawing attention to this.”

If falsifiability can be shown to be out of the question, so that all theories without exception are just not falsifiable, then how could it be the case that there really is something important about falsifiability and why would we want to thank Popper for drawing attention to it?

Quine’s account is not exactly the same as Duhem’s and is more complex. Since Huemer doesn’t go there, neither will I. It’s worth pointing out, though, that Quine’s argument arrives at the conclusion that no scientific theory can be tested. Only the whole of science (let’s assume this means physics) can be tested. This would not be compatible with Huemer’s evident view that individual scientific theories can be tested by some version of Bayesianism.

The Irrelevance of Belief

Huemer’s #2 and #6 refer to beliefs (#2 to what someone “thinks” is true). Popper has no great interest in the philosophy of belief, and the methodology of science does not need to say anything about beliefs. Science is not about beliefs. We should bear in mind that neither Newton nor Einstein believed their theories to be true—Newton because he could not accept action at a distance, and Einstein because he felt general relativity was not complete and would eventually be supplanted by a better theory, a unified field theory.

Although neither Newton nor Einstein believed their theories to be true, they did believe their theories were objectively superior to their predecessors. Any account of scientific discovery has to make room for the fact that we can prefer one theory to another, we can even say that one theory is objectively better than another, without believing the preferred or better theory to be true.

Belief is a subjective feeling of conviction that something is true. It is a fact about human psychology that people have a need, or at least a strong tendency, to believe. But we make no appeal to belief when we try to explain scientific methodology.

We might hit upon a theory which is actually true though we would not be able to demonstrate its truth, since any putative universal law might turn out to have exceptions and thus to be false. For critical rationalists, science is not about subjective feelings but about what can be demonstrated logically, to explain events in the world, in light of the observational evidence.

As a Bayesian, Huemer must be a pure subjectivist, who must suppose that science is all about beliefs, and this no doubt helps to explain why he finds the Popperian theory so hard to fathom. A Gestalt switch is needed to abandon the paradigm of subjective knowledge and embrace the paradigm of objective knowledge.

Now of course, scientists often do believe their theories, and scientific debates often display passionate commitment to beliefs, just as much as religious or political debates. Popper’s aim, however, in most of his writing, is not to give a history of science, accounting for all the aspects of science, including the psychological ones, as they actually played out, but to reconstruct the ‘logic’ of scientific research. Similarly, the theory of probability tells us nothing about the emotional states of gamblers, and the subject known as logic ignores what happens psychologically when people make conflicting assertions in debate. If you look at a textbook on decision theory, you will probably not find an index entry for ‘agony’.

Belief as a motivating force is generally ignored in the great majority of Popper’s discussion of the logic of empirical science, though it is a perfectly legitimate field of study, no doubt belonging mainly to psychology.

Popper sometimes mentions that he believes such and such a theory to be true (Realism and the Aim of Science, 72, 75), but when he does this, his belief is not offered as a reason to accept that theory, and is not logically compelled by the evidence supporting that belief. Such remarks by Popper are informal and illustrative. They do incidentally refute Huemer’s assertions that Popper considered all scientific theories to be false. Popper also volunteers that he believes certain metaphysical propositions, for example realism and indeterminism, but also such metaphysical theories as “There exists at least one true law of nature,” a metaphysical, and therefore non-scientific, claim which can never be empirically tested, but which Popper argues for on philosophical lines (Realism and the Aim of Science, 79).

This brings me to Popper’s statement, Huemer’s #6, that “belief is never rational.” I think I understand what Popper was driving at here, and to that extent I agree with it, but I would never word it that way. For that matter, if we take it with pedantic literalness and ignore context, it is flatly contradicted by what Popper says elsewhere (“But this belief, I assert, is rational.” Realism and the Aim of Science, 57).

Popper’s central position is that science is not about belief at all. The relation between a theory and the observational evidence is not a matter of belief, nor is the relation between a theory and other theories. So the ‘rationality’ of science owes nothing to belief—and belief is therefore non-rational in the sense that it is a psychological phenomenon which intrudes into science from somewhere other than ‘scientific rationality’. But in most cases, when people accept that a theory is a good theory, they tend to believe that theory, and that tendency to believe, while not probative in any way, and irrelevant to the validity of scientific reasoning, is usually heavily influenced by what the believer perceives to be the available evidence. As a matter of fact, I agree with Ray Scott Percival (The Myth of the Closed Mind) that all belief is rational, but that’s a revelation for which I guess most of the world, including Michael Huemer, is not yet quite ready.

I think I have said enough to show that Michael Huemer has misunderstood and mischaracterized Karl Popper. You do not need to be completely out of your mind to agree with Popper and me—though I don’t deny that it might help.



References

Cthaeh, The. 2017. Not All Zero Probabilities Are Created Equal.

Duhem, Pierre. 1991 [1906]. The Aim and Structure of Physical Theory. Princeton University Press.

Edelstein, Michael R., Richard K. Kujoth, and David Ramsay Steele. 2013. Appendix: Is Psychoanalysis Falsifiable? In Therapy Breakthrough: Why Some Psychotherapies Work Better than Others. Open Court.

Feynman, Richard. 2020. Richard Feynman Teaches You the Scientific Method. Farnam Street.

Gopnik, Alison. 2009. The Philosophical Baby: What Children’s Minds Tell Us about Truth, Love, and the Meaning of Life. Farrar, Straus and Giroux.

Gopnik, Alison, and Andrew N. Meltzoff. 1997. Words, Thoughts, and Theories. MIT Press.

Gopnik, Alison, Andrew N. Meltzoff, and Patricia K. Kuhl. 2001 [1999]. The Scientist in the Crib: What Early Learning Tells Us about the Mind. Harper Perennial.

Huemer, Michael. 2020. You Don’t Agree with Karl Popper. Fake Nous (25th January).

Hume, David. 1986 [1739]. A Treatise of Human Nature: Being an Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects. Penguin.

Miller, David. 1994. Critical Rationalism: A Restatement and Defence. Open Court.

Percival, Ray Scott. 2012. The Myth of the Closed Mind: Understanding Why and How People Are Rational. Open Court.

Popper, Karl R. 1968 [1962]. Conjectures and Refutations: The Growth of Scientific Knowledge. Harper and Row.

———. 1979 [1972]. Objective Knowledge: An Evolutionary Approach. Oxford University Press.

———. 1982. The Open Universe: An Argument for Indeterminism. Rowman and Littlefield.

———. 1982. Quantum Theory and the Schism in Physics. Rowman and Littlefield.

———. 1983. Realism and the Aim of Science. Rowman and Littlefield.

———. 2002 [1935]. The Logic of Scientific Discovery. Routledge.

Radnitzky, Gerard, and W.W. Bartley III, eds. 1987. Evolutionary Epistemology, Rationality, and the Theory of Knowledge. Open Court.

Schilpp, Paul Arthur, ed. 1974. The Philosophy of Karl Popper. Two volumes. Open Court.

Siegler, Robert S. 1996. Emerging Minds: The Process of Change in Children’s Thinking. Oxford University Press.

3blue1brown. 2020. Why ‘Probability of Zero’ Does Not Mean ‘Impossible’.


I first posted this piece on September 13th 2020. At that time I said nothing about Huemer’s appeal to the Duhem argument, because he did not use it to claim that Popper was insane or out of his mind, which was the assertion I was responding to. On further reflection, I concluded that Huemer had appealed to Duhem to argue that falsification of a theory is always impossible, and that Popper’s account can therefore be summarily dismissed, even granting Popper’s mental competence. So, on September 25th 2020, I added a few paragraphs about the Duhem argument.


Is It a Fact that Facts Don’t Matter? (2018)

Uncategorised Posted on Thu, May 14, 2020 07:04:53

Is It a Fact that Facts Don’t Matter?
David Ramsay Steele

Facts don’t matter, or so Scott Adams keeps telling us.
This looks like an outrageous claim. He sometimes qualifies it by saying that “Facts matter for outcomes but not for persuasion” and sometimes seems to back away from it by saying that “Facts are over-rated” (implying they do matter at least a little bit).
And despite his flat assertion that facts don’t matter, Scott spends much of his time on his blog and on Periscope disputing matters of fact. He tells us that he was one of the few to predict Trump’s victory—he assures us that this is a fact, and that it matters a lot. More generally, he tells us that Persuasion is “a good filter because it predicts well”—he tells us that this is a fact, and that it matters a lot. And of course he repeatedly informs us that “facts don’t matter,” which if true must be a fact that matters a lot (and that would be a performative contradiction, but hey, Scott’s impatient with technicalities so we’ll steer clear of them).
In fact, Scott can’t talk for five minutes or write for two pages without making his argument depend on matters of fact which really do matter for his argument. So how can it possibly be that facts don’t matter?
Well, maybe he thinks that facts don’t matter for most people, though they quite obviously do matter to him? Or maybe we can make some sense of his strange claim that “facts matter for outcomes but not for persuasion”? Or perhaps he means only that politicians sometimes win elections despite making a lot of factually inaccurate claims? Or perhaps he’s practicing what he sees as Donald Trump’s “anchor” strategy—making a seemingly outlandish claim to attract attention and situate the negotiation, a claim which he will later dial back to a more moderate statement?

The Two Meanings of “Facts”

What are facts? Dictionaries give several alternative (and sometimes incompatible) definitions of the word “fact.” However, these alternative definitions can be grouped into two basic ideas:

1. “Facts” are the way things really are (or were), independent of what anyone thinks.

2. “Facts” are statements which have been certified as true, either by common consent or by some authority, such as a consensus of experts.

It can be confusing that there are these two common uses of the word “fact,” as they are often contrary in meaning. In sense #1, it’s possible for everyone to be wrong about a fact, or just to be totally unaware of it, whereas in sense #2, nothing can be a fact until someone has become aware of it and considered it to be a fact.
A little thought shows, in fact, that the vast majority of facts in sense #1 can never be known by anyone—for example, think about such facts as the precise configuration of molecules inside a distant star, or how many beans were in that can I opened a year ago. The universe contains an infinity of facts in sense #1, and very nearly all of them are forever unknowable.
Furthermore, a fact in sense #2 may not be a fact in sense #1, because common consent or the judgments of experts may be mistaken. Facts in sense #2 sometimes change. It used to be a “fact” in sense #2 that continents do not move, that homosexuality is a mental illness, and that it’s hazardous to your health to go swimming immediately after a meal. None of these are “facts” in sense #2 any longer.
Assuming that we’ve now got these facts right, then the sense #2 facts we now possess always were sense #1 facts, and the older sense #2 facts were never sense #1 facts, though people thought they were. Sense #1 facts never change, as long as we stipulate the date—a sense #1 fact may stop being a fact at a point in time, but then it’s still a fact that this fact was a fact before that point in time.
Although the two senses are sometimes opposed to each other, there is an intimate connection between them. We’re concerned about sense #2 facts because we think that they’re generally likely to give us sense #1 facts, at least a lot of the time. If we thought that a sense #2 fact had only a fifty-fifty chance of being a sense #1 fact, we would lose interest in sense #2 facts.
Confusion may arise if we don’t keep the distinction between sense #1 and sense #2 clear. When Kellyanne Conway said that she would look for some “alternative facts,” the phrase went viral and was taken by many to imply that she thought we could pick and choose our reality, like O’Brien in Nineteen Eighty-Four. But close attention to the actual exchange between Kellyanne and Chuck Todd, and to the other comments by President Trump and Sean Spicer, reveals that Kellyanne, Sean, and the president were very definitely talking about sense #2 facts. They weren’t disputing for a moment that sense #1 facts are objective and independent of what anyone believes, though in this particular disputed case, whether Trump’s Inaugural crowd was bigger than Obama’s, it looks to me as though the Trump people were sincerely mistaken.
The attribution to Trump and his supporters of the view that facts in sense #1 can be chosen at will is not only wrong (not a fact); it’s extremely weird, because there are indeed a lot of people who deny the objectivity or absoluteness of truth (post-modernists, social constructivists, anti-realists, and truth-relativists) and these people are all on the left. This is a characteristic belief of leftist intellectuals, and is never found on today’s right.

Cognitive Dissonance

Scott talks a lot about “Cognitive Dissonance,” a concept which plays a big role in his theory of how people form their ideas. In Win Bigly (p. 48), he introduces Cognitive Dissonance by citing the Wikipedia definition. The basic idea is that Cognitive Dissonance is the discomfort or mental stress people have when they find a conflict between one thing they believe and something else they have come to believe.
The first thing to notice here is that this phenomenon of Cognitive Dissonance does not arise in most everyday cases where we find we have been mistaken. I was sure I had left my keys on the coffee table, but when I look, they’re not there. I start to search in the other likely places, and soon find them in my coat pocket. I had made a mistake; my memory was slightly faulty; no big deal. I’m not distressed. People revise their beliefs and acknowledge their mistakes all the time. Scott is demonstrably wrong when he says that Cognitive Dissonance “often” happens in “daily experience.” It almost never happens in daily experience.
But there certainly are cases (a small minority of cases) where a major assumption is challenged by events, leading to emotional distress and sometimes to the production of what Scott calls “hallucinations,” highly fanciful stories which reconcile the person’s prior assumption with what has unexpectedly happened. Scott, in fact, soon forgets the Wikipedia definition and then begins to use his own definition of Cognitive Dissonance, in which “your brain automatically generates an illusion to solve the discomfort” (pp. 48–49).
So, for Scott, the crux of Cognitive Dissonance is an illusion. This presupposes a distinction between illusion and reality, and therefore presupposes that facts matter a whole darn freaking lot. Exhibit A for Scott’s argument is, of course, the election of Donald Trump on November 8th 2016. Many people had thought the election of Trump, though an appalling hypothetical, was practically impossible, but it happened, and so these people experienced mental discomfort, and some of them began to believe very fanciful stories.
As Scott reminds us, these “hallucinations” (a term he extends to include any belief in tall tales) are more common among the party out of power. In the time of Obama, some Republican voters believed that Obama was a Muslim, while in the time of Trump some Democratic voters believed that Trump had “colluded with the Russians.”
When we look at these exceptional cases of what Scott calls “Cognitive Dissonance,” what do we see?
The first thing we notice is that this Cognitive Dissonance is brought about by the realization that something is seriously wrong: we find ourselves inclined to believe in two things which can’t both be true, and we know that this can’t be right. Sometimes, as with the election of Donald Trump to the presidency, the contradiction arises because we have to accept that something has happened which our prior beliefs implied could not happen.
A standard example would be a religious sect which preaches that the world is going to end on a particular day. That day comes and goes without any obvious disruption, and the sect has to decide what to make of this—they may begin to preach that the world did end on that date, despite superficial appearances, or they may conclude they got their calculations wrong, and fix on a new, future date when the world will end.
The awareness that something is seriously wrong arises because of our acceptance of facts. What it shows is that facts are tremendously important. Facts matter more than almost anything else could possibly matter! There is (as a matter of fact) just one thing—only one!—that matters more than facts, and I’ll tell you what it is in a moment.
Without our acceptance of facts, this Cognitive Dissonance could not arise. It’s only because we accept that Trump did in fact become president-elect that we perceive a clash between this acceptance and our prior theory which told us it could not happen. This Cognitive Dissonance also requires that we recognize the law of logic which states that we can’t simultaneously accept a statement and its negation. So, we can’t accept that “Trump was elected president” and “Trump was not elected president.” The understanding that elementary logic is supreme is innate in all competent humans, in all cultures and social classes, at all historical times.
When we come up with what Scott calls an “illusion” to reconcile the new facts with our prevailing assumptions, what we’re doing is to accept the newly discovered facts while trying to preserve as much of our prevailing assumptions as we can, without self-contradiction, especially those assumptions we see as most fundamental. This is a rational response.

Coming to Terms with the Reality of Trump

After Trump had been elected but before the Inauguration, Scott predicted that Trump’s opponents in the first year of the Trump presidency would go through the following stages:

1. They would at first say that “Trump is Hitler.”

2. About halfway through the year, they would concede that Trump is not Hitler, but would say he was incompetent, perhaps even crazy.

3. By the end of the year, they would concede he was highly competent and therefore effective, but would assert that they didn’t like his policies.

Scott is justly very proud of this series of predictions, which have broadly come true (though he didn’t foresee the eruption of the “Russian collusion” story, nor did he foresee the brief revival of the “Trump is crazy” theory following the release of Michael Wolff’s book Fire and Fury in January 2018). Scott’s latest prediction is that people will soon start talking about America’s new “Golden Age.”
However, as Scott’s account makes clear (but Scott himself apparently doesn’t notice), the fulfillment of these predictions depended on the over-arching importance of brute facts. According to Scott’s account:

1. The disappearance of the claim that Trump is Hitler results from unavoidable awareness of the fact that Trump has not done any Hitler-like things.

2. The disappearance of the claim that Trump is incompetent results from unavoidable awareness of the fact that he accomplished more than most presidents in his first year.

By Scott’s own account, then, in these two cases, the facts are absolutely decisive. He just takes for granted, without any hesitation, that people had no alternative but to acknowledge these facts.
When Trump was elected, we can imagine the anti-Trump believers “hallucinating” that Hillary had been declared winner, that Trump had conceded, that Hillary gave the Inaugural Address on 20th January 2017, and that Hillary was now in the Oval Office, carrying out the duties of president, no doubt superbly. But not one of the millions of Hillary supporters reacted in this way. Quite the opposite, they wept and wailed, bemoaning the undeniable fact that Hillary had lost the election. Clearly, facts are sometimes decisive, according to Scott’s own account.
Another way the Hillary supporters could have failed to accept the demonstrated fact of Trump’s election victory would have been to “hallucinate” that on November 8th 2016 the world was occupied by space aliens who abolished the United States of America along with its constitution and election procedures. These space aliens now directly governed what had been the US and we all became subject to their edicts. Not one of the millions of Hillary supporters opted for that theory!
Why did all the millions of Hillary supporters, without exception, fail to adopt one of these theories, or any of numerous other fanciful yarns we could dream up? According to Scott’s own account, there was just one explanation for this: all these millions of people had to accept the facts. The facts were irresistible.
Having accepted the unwelcome fact that Trump was now president, the Hillary supporters responded to this unwelcome fact by claiming that Trump was Hitler. Although inaccurate, this was not entirely arbitrary. It was essentially a continuation of what many of them had been saying before the election. They had been saying that if you elected Trump you would be electing Hitler. No doubt to some of them this was hyperbole, but they didn’t mind taking the risk that many others would interpret it literally, and now they found themselves hoist by their own hyperbole.
As the months went by, Trump did nothing remotely Hitler-like. He did not set up concentration camps, outlaw all political parties except his own, or murder his critics or rivals, nor did he act in any way outside the previously existing law. He criticized Obama for having usurped the legislative role of Congress, complied with the decisions of courts, and did not propose that judicial review be abolished. Nor did he grow a mustache.
The involuntary acceptance of facts caused changes in ideas. We can easily imagine that the Hillary supporters might have “hallucinated” that concentration camps were under construction, that all political parties except the Republicans had been outlawed, that Hillary, Bill, Barack, Michelle, Elizabeth Warren, John McCain, and Michael Moore had been assassinated in a “June Purge.” But not one of the Hillary supporters reacted like this. Instead, they all accepted that Trump was not Hitler after all, and moved on to the theory that he was “incompetent” or even “crazy” and that the White House was “in chaos.”
This was also factually inaccurate, but again, it was not entirely arbitrary. It returned to charges made against Trump during the election campaign. Trump’s decisive management style, his plebeian bluntness of speech, and his readiness to let people go who hadn’t worked out could easily be represented as someone just flailing around. His tweets could be described as impulsive, ill-considered responses to immediate provocations. It took a while before perceptive people, with the help of Scott Adams, came to understand that the Trump tweets were essentially strategic and adroitly crafted: Trump was counterbalancing the hostile propaganda pouring out from CNN and MSNBC; he was reaching a hundred million followers several times a day, and he was doing so (as he occasionally pointed out) for free.
The “incompetent or crazy” theory was killed by the demonstrable fact that Trump was effective; more than most presidents he was getting things done. Of course, we may not like some of the things he was getting done (and when it comes to the Wall, protective tariffs, and the wars in Syria and Afghanistan, I don’t), but, as Scott rightly insists, that’s a separate matter. More than half the country does like them.
Notice that, once again, acceptance of the fact that Trump was fully competent was involuntary. It was thrust upon the reluctant Hillary supporters by factual evidence that could hardly be contested, culminating in the successful passage of the Tax Cuts and Jobs Bill in December 2017, which all experienced observers attributed in large part to Trump’s management skills and capacity for hard work. By the time Trump achieved a rare perfect score on a standard test of cognitive ability, most people had already abandoned the theory that he was incompetent.

What Kind of a Genius?

Scott tells us that Trump is a Master Persuader. He goes so far as to claim that Trump could have taken a different policy agenda and won with it, because of his persuasive skills (Win Bigly, pp. 92–93). He even says Trump could have won by persuasion if his and Hillary’s policies were simply switched.
While Trump’s persuasive skills are certainly extraordinary, and Scott has helped me and thousands of others to appreciate that, I believe we can explain Trump’s political success differently, and I very much doubt that Trump could have won with a substantially different agenda. I believe his choice of agenda was part of a shrewdly calculated political strategy. A linchpin of this strategy is the traditional working class in the Rust Belt states. These people had seen their real wages reduced, they had seen mining and manufacturing decimated as companies moved offshore, and they had seen that the Democratic Party would do nothing for them, not even to the extent of paying lip-service to their interests or having candidates visit their neighborhoods.

Trump, Hillary, and the Issues

In the 2016 election campaign, Trump constantly hammered away at the issues, while Hillary ran away from the issues. This was obvious to all those who followed the speeches and the TV ads on both sides, but if anyone had any doubts, there was a scholarly study of precisely this point, conducted in March 2017 by the Wesleyan Media Project. This study corroborated what was evident to anyone who followed both sides of the campaign.
All of Trump’s many rally speeches were densely focused on the policies he advocated. Only briefly would he make a nasty remark about Hillary’s personality or past misdeeds, then he would swiftly return to his advocacy of very specific policies. The same was even more true of the TV ads for Trump. On Hillary’s side, both speeches and TV ads gave very little attention to policy issues—far less than any other presidential candidate in living memory—and put all the emphasis on Trump’s horrible and frightening personality. As the Wesleyan study cautiously put it, “Clinton’s message was devoid of policy discussions in a way not seen in the previous four presidential contests.”
Trump’s rally speeches never wandered far from the specific issues, so that anyone following the campaign even casually became acutely conscious of Trump’s policy proposals, whereas most voters had little idea of Clinton’s policies. Trump made many commitments, broad and narrow, about tightening up immigration, whereas Clinton rarely spelled out her own policy on immigration, and most voters had no idea what it was. Voters might assume that Clinton favored doing nothing to change immigration controls or even that she favored moving to “open borders.” Dedicated policy wonks might be able to ascertain that actually Clinton also favored tightening up immigration controls, though perhaps slightly less severely than Trump, but voters who merely watched the news would never have guessed this.
It’s clear that Clinton simply could not afford to talk much about immigration policy, for to do so would be to concede, in effect, that she shared a lot of common ground with Obama and with Trump. She could hardly boast about the steep increase in deportations of aliens under Obama, while denouncing Trump for his proposed deportations, much less could she promise voters that deportations would be accelerated once she was in the Oval Office. That would tend to go against the claim that Trump was uniquely evil for wanting to deport aliens. For similar reasons, she could hardly brag about Obama’s facilitation of oil and gas pipelines and promise to continue or escalate this policy.
There has probably never been a previous election in American history where one candidate’s numerous policy proposals have been so familiar to the general electorate, while the other candidate’s proposals were almost unknown. Clinton based everything on the proposition that she was personally superior to Trump—more specifically that Trump was a monster and at least fifty percent of his supporters (she meant a quarter of the population, the working class) were “deplorable” monsters.
Everyone who followed the campaign, even superficially, would know that Trump was advocating:

1. A tightening up of controls on immigration, especially more effective enforcement of existing laws restricting immigration.

2. Repeal or renegotiation of trade agreements such as NAFTA and TPP.

3. Revival of manufacturing in the Rust Belt, partly because of #2 but also because of targeted protectionist measures such as penalties for companies which opened up plant abroad, tariffs on imports, and a general government policy of “Buy American, Hire American.”

4. Defense of the Second Amendment—Americans’ constitutional right to own and carry guns.

5. Appointment of conservative Judges who would follow the Original Intent of the Constitution.

6. A “pro-life” stance which in effect meant giving abortion policy back to the democratic process in the states, rather than a court-imposed “pro-choice” policy.

7. Repeal and replacement of Obamacare.

8. Abstention from wars (like Iraq and Libya) which don’t yield any net benefit to the US.

9. Major reforms in the treatment of veterans.

10. Increased military spending.

11. A major drive to repair and modernize infrastructure.

Everybody knows that these were Trump’s policies. Now, quick, what was Hillary’s policy on each of these issues? You see? You don’t have the foggiest notion. You might guess that she would keep Obamacare, though she said she would overhaul it, and in politics the line between overhaul and replacement is fuzzy.
Trump vacillated between extreme and moderate versions of these policies, but he never reversed them during the campaign. What was, in effect, Clinton’s reply to these proposals? First, Trump is an evil person and we are not Trump. Second, we are entitled to be president because we are a woman. However, according to Clinton’s leftist supporters, anyone who decides to be a woman becomes a woman, and therefore Trump could at any time become the first woman president simply by announcing “I’m a woman!”
Most of the time, Clinton avoided responding to Trump’s policy proposals with her own. She did her best to avoid any comparison of the opposing policies, and to keep the focus on Trump’s personality, a risky strategy as many people found her own personality unendearing and her own past conduct questionable. But don’t forget that if she had won, this strategy would have been hailed as awesomely clever.
The thing that most caused me to rapidly revise my very dismissive view of Trump shortly after the election was not just that he won, but that he won in precisely the way he said he would win. He knew what he was doing; he had better intelligence about the voters. TV interviews with personnel of his polling firm, Cambridge Analytica, corroborated this interpretation.
My guess is that Trump, years before the election, had already seen that a dramatic comeback for American manufacturing and mining was inevitable—indeed, was already in its early stages—contrary to the ignorant conventional view that manufacturing and mining were in permanent decline. He could therefore not only make political capital from the plight of the Rust Belt but also, once elected, ride the wave of manufacturing and mining revival. In business circles, people were already talking about “reshoring,” the phenomenon of companies bringing their plant back into the United States. This talk originated at the beginning of the century but had mostly not yet trickled down into the popular media; now that it is doing so, it will be difficult to separate from the achievements of Trump, especially as Trump has admittedly done a number of things to give it a boost.
The inevitable comeback for American manufacturing was a commonplace among business analysts years before the election (see for instance the 2012 study, The US Manufacturing Renaissance: How Shifting Global Economics Are Creating an American Comeback). Reshoring has several causes, including the spectacular and continuing rise of Chinese wages and the development of fracking, which guarantees amazingly cheap American energy for many generations to come. During the campaign, anti-Trump commentators often showed their ignorance by proclaiming that the decline of manufacturing and mining was irreversible, even as both were already rebounding robustly.
Obama did occasionally try to explain what was going on, but the one line that resonated was “Some of these jobs are just not going to come back.” Oops. There go several thousand Michigan votes. And Hillary: “Because we’re going to put a lot of coal miners and coal companies out of business, right?” Oh, dear. There go several thousand Pennsylvania and West Virginia votes. The fact that these lines were taken out of context and hurt the Democratic campaign shows that there is cunning in Trump’s apparent crudeness in making bold assertions and almost never qualifying them.
The Obama administration officially began measures to promote reshoring in 2011, but Hillary didn’t make much of this during the campaign. This was in keeping with her avoidance of policy talk and her haughty disdain for the working class, those dumb rednecks, who, just like Blacks and Hispanics, could be relied upon to vote Democratic without being offered any serious incentive to do so. And while Hillary knew enough to understand that fracking is a tremendous boon to humanity and a guarantee of economic growth, she was no doubt afraid to drive voters to Bernie Sanders and then to Jill Stein by enthusiastically embracing cheap energy, underwritten by fracking. Obama had celebrated fracking but Hillary didn’t dare to do so.
A general theme of Clintonism is that it relied on harnessing the energies of leftists while favoring ruling-class privilege. Hillary was embarrassed by any shining of the light on specific policies, because she wanted both the votes and the activist work of “progressives” and the financial donations of “neo-liberals” and “globalists,” and she feared that frank talk about specifics could only scare away one or the other.
Scott occasionally mentions Hillary’s discussions of “policy details” (p. 164), implying that this was a boring and fact-oriented preoccupation by contrast with Trump’s nebulous and exciting “Persuasion.” Nothing could be further from the truth. The Hillary campaign was simply astounding and unprecedented in its avoidance of any talk about policies, as the Wesleyan study proves. As far as most voters could tell, Hillary had just one policy: hatred for Trump’s personality. This avoidance of policy issues is connected with another feature of the Hillary campaign, familiar from the book Shattered. Hillary never came up with a story as to why she was running. Trump was running to “Make America Great Again,” and he would sometimes unpack it: “Make America Rich Again, Make America Strong Again, Make America Safe Again”—tightly linked to all eleven policy proposals mentioned above.
The Democrats made things worse for themselves by talking about Trump’s appeal to the “white working class.” Plenty of Blacks and Hispanics had lost manufacturing jobs in the Rust Belt. Trump picked up unexpected Black, Hispanic, and Muslim votes, and among white workers he did especially well with former Obama and Sanders voters, beginning his long-term plan of permanently detaching the working class of all races from the Democratic Party.
Trump plays a long game. A tightening of immigration controls is popular with voters, including those Hispanics and Muslims who are already here legally. Purely from the standpoint of political opportunism, what’s even better than being elected to tighten up immigration controls and then doing so? What’s better is being publicly opposed at every step in struggling to tighten up immigration controls. This continually reminds voters that there are forces at work plotting to frustrate the president and the popular will, and therefore constantly broadcasts the urgency of continuing to support the president. The Sanctuary City–Sanctuary State movement might have been engineered to guarantee Trump’s re-election by a landslide in 2020.
Trump finds issues where the majority is on his side, and where he’s therefore likely to win in the long term, yet where he has to visibly battle against opposition. Even before he won in 2016—and he knew he was going to win—he was thinking of how he would manage his first term to ensure his re-election in 2020. As I have learned from my own earlier blunders in this area, the biggest mistake you can make about Trump is to suppose that he ever acts on impulse. Trump is a supremely self-controlled person who always acts methodically according to a long-range plan. Ignore this fact, and you may already have lost against Trump.

“People Are Not Rational”

As Scott repeatedly tells us, his contention that facts don’t matter arises from his fundamental conviction that people are not rational. According to Scott, “humans are not rational. We bounce from one illusion to another, all the while thinking we are seeing something we call reality” (Win Bigly, p. 37).
The theory that people are fundamentally irrational is the fashionable one. We are constantly bombarded by books and articles from a wide range of sources telling us that people don’t make decisions rationally but emotionally, and then invent false reasons for why they decided the way they did.
However, as we’ve seen, when Scott is not intoning the fashionable dogma that people are irrational, he keeps forgetting it, and keeps reminding us, unintentionally, that people do change their beliefs in accordance with facts and logic.
So what about the rare exceptional cases which Scott calls “Cognitive Dissonance”? What about the theory held by Hillary supporters in January through June 2017 that Trump was Hitler? Or the theory held after June 2017 that Trump was incompetent or crazy?
Though both these beliefs were seriously mistaken, I wouldn’t call them irrational. The view that humans are rational doesn’t require that they never make mistakes—quite the contrary: only a rational being can make a mistake.
So, can I defend the “hallucinations” of Cognitive Dissonance as rational? I believe I can. The first thing to note is that such illusions are generally short-lived. Scott’s ideas about Persuasion focus on the short-range and the short-term. Theories about Trump as Hitler or Trump as mentally defective, as well as theories about “Russian collusion,” have now largely evaporated.
What happens when something occurs that people’s previous ideas had been telling them could not possibly occur? They adjust their previous ideas, and their first stab at adjusting their ideas may not be the long-range adjustment.
Karl Popper has explained how people develop their ideas through conjecture and refutation, in other words by making unjustified guesses and then disproving those guesses, and moving on to new and better (but still unjustified) guesses. That’s how human rationality works. That’s the only way it could work. That’s what happens in the examples offered by Scott.

Can We Handle the Truth?

A recurring theme in Scott’s writing and speaking is that we’re not equipped to get at the truth. Remarks like this are scattered throughout his written and oral output: “The human brain is not capable of comprehending truth at a deep level” (p. 28).
Scott often talks about the fact that people of different opinions can be watching “two movies on the same screen.” Another metaphor he uses is that of “filters.” He says that he prefers to use the “Persuasion” filter, while other people may use other filters.
But can’t we say that one movie or filter is to be preferred to another because it is more accurate? Here Scott equivocates. At times he implies that any such preference is a matter of taste. But, naturally, he doesn’t want to let go of the notion that his Persuasion movie or filter has something to recommend it! If he did that, there would be no reason to pay any attention to his arguments.
What Scott repeatedly says is that we can never really know the truth, but we can prefer one “movie” or “filter” to another because

1. It makes us happy and

2. It is predictive.

So, Scott argues, we adopt a point of view not because we think it’s true, but because it makes us happy to think about it and it gives us good predictions (pp. 38–47).
But if a theory (what Scott calls a “filter”) makes us happy and makes good predictions, is that so different from being true? These are not exactly the same, but they do seem to overlap quite a bit—especially because a theory most often makes us “happy” by making sense to us, by striking us as a reasonable explanation. So, if someone had said in 2015 that a powerful coven of witches in Kazakhstan had cast a spell to ensure that Trump would win the Republican Party nomination and go on to win the US presidency, this would have been predictive, but would not have made us “happy,” only because we don’t believe that witches can influence the outcome of elections by casting spells.
What makes us happy is largely a matter of our existing theories about the world. A new theory tends to make us happy when it fits with the totality of our existing theories—and this, I claim, is perfectly rational (though, of course, not infallible).
As well as Cognitive Dissonance, Scott talks a lot about Confirmation Bias. He sees this as an example of irrationality. But confirmation bias is rational! As Karl Popper pointed out, our theories would be useless if we gave them up too easily. If the power goes out in my apartment, I don’t immediately abandon my belief in Coulomb’s Law or Ohm’s Law. I automatically save my most fundamental beliefs and give up more minor beliefs: in this case, my belief that the fuses were not overloaded.
While facts do matter, theories matter more. Our preconceived assumptions—our theories—tend to dominate our thinking, and that’s rational, but sometimes these theories can be tested against facts, and sometimes the facts are decisive in causing us to change our theories. That’s rational too.
If facts matter and theories matter, what about Scott’s exalted idea of persuasion? Everyone knows that persuasion can have some independent effect. Philosophers have always known that persuasion has a role, complementary to theories and facts. Two and a half thousand years ago, Aristotle wrote a textbook of logic, his Prior Analytics. He also wrote a textbook of persuasion, his Rhetoric.
As Ray Scott Percival has argued (in The Myth of the Closed Mind), persuasion, advertising, and propaganda can all be explained within the theory that humans are rational. Here I will just throw out one hint. When he claims that “facts don’t matter” and that “people are irrational,” Scott always focuses his attention on the very short run. He looks at people’s immediate responses to “Cognitive Dissonance.” When he considers events lasting more than a few months, he always, in practice though not explicitly, acknowledges that facts can be decisive and usually are.
Election campaigns are comparatively brief events which take place within a framework of prevailing ideas that can’t be challenged without political loss, and these ideas are often the outcome of influences working slowly over decades or centuries. For example, who was the first newly elected US president to be openly in favor of gay marriage? The answer (surprising to some) is: Donald J. Trump. When Barack Obama was elected in 2008, he presented himself as a most emphatic and deeply committed opponent of gay marriage. If he had come out in favor of gay marriage in that year, it would have been too risky.
Between 2008 and 2016, public opinion changed so that it became more of an electoral liability than an advantage to oppose gay marriage. And this change was itself the culmination of slow changes in opinion over many decades.

(This first appeared in Scott Adams and Philosophy: A Hole in the Fabric of Reality, edited by Daniel Yim, Galen Foresman, and Robert Arp. Open Court, 2018.)

Comments on “Public and private confusion” by Robert Henderson.

Politics Posted on Tue, May 05, 2020 08:25:39

Below begins a revised series of posts criticising an interesting contributor to the Libertarian Discussion mailing list, Robert Henderson. He feels he has found many shortcomings in the market system, and that it vitally needs a state; this is just one essay where he expresses that idea.

I feel that to foster human flourishing, we need to get rid of the injustice of the state, as the state was aptly designed for war.

Below Robert will be prefaced by RH and myself by McD.

RH: I have never formally joined the LA. I contribute to this group because Sean [Gabb] asked me to include it on my Daily List. However, don’t read into that that I am a reluctant member. I have strong libertarian instincts.

I am not statist, as you put it, but I am practical. My political views are based on human psychology and sociology, or if you prefer, human nature at the individual and social levels. My ends are libertarian but I have to arrive at those ends by convincing myself that they are practical. For example, I am absolutist when it comes to free expression but tribal when it comes to the composition of any society.

1. Unquestioned ideas

Because they have the word free in them, the terms “Free markets” and “free trade” have seduced those of all political colours to treat them uncritically as ideas. They are considered good or bad but their intellectual coherence is rarely questioned.

McD: Well, I do feel that you are a de facto statist, Robert, but so were Hayek and Milton Friedman, so I welcome the fact that you are basically also a liberal.

No, LAers do question the reforms of the 1980s and they nearly always call them freer trade or freer markets rather than completely free markets. A completely free market requires no state. “Free” here means free of the state. It could only be complete in anarchy.

Two of the main reasons liberals prefer the market to the state are, first, that the market is just and honours social liberty, whereas the state scotches liberty and is thereby unjust, and, second, that the state is wasteful, whereas the market attempts to cut down on waste and, indeed, is the main source of social economy in the mass urban society.

RH: Neo-liberals believe in a childlike quasi-religious fashion in the workings of Adam Smith’s “invisible hand”, which, moved by enlightened self-interest, supposedly creates the best of all possible material worlds through the operation of the market. 

McD: Neo-liberals today means non-statist liberals, or rather less-statist liberals than has been the case since the 1870s. Nearly all who called themselves liberal were less statist before the 1870s. In that decade, the UK Liberal Party itself went from freer trade and what was then seen as the same thing, laissez faire, towards welfare statism, such that by 1880 the Liberal Party was no longer a liberal party in the pristine sense but more like a one-nation Tory Party. Since then, the ideal of free trade between countries has been largely retained by that statist UK Liberal Party, and by others, but laissez faire has been used for free trade within each country and condemned utterly, as the new objective has been redistribution and welfare. The pristine liberals held the market was good on distribution, and Adam Smith held it led to ever greater equality in the long run, as the price system itself tends to even out prices. This doctrine was questioned by J. E. Cairnes but not effectively refuted.

No one is truly religious in the sense that many think of that being so today. Every vicar knows the text “ye of little faith” and, occasionally, some vicars give sermons on apathy to the few eccentrics who still attend church. We might even say that most of them attend services by Pastor Sellbydate.

But to get the truth, the Pastor needs to say “ye of no faith whatsoever”, for no animal ever had any faith: the idea is a null set. To think is to re-think, and belief is usually improvised for human action whenever we look where we are going. No one dumps this checking with our senses for the sort of trust that faith is supposed to be. Nor has anyone in the past.

People are educated into Christianity and there is no faith involved, as they need to think and to re-think, to learn the doctrines, and when they adopt such doctrines it is usually a matter of value or acceptance rather than of belief or truth.

What guides self-interest to serve that of one and all on the market needs no hidden hand of Jupiter but is there in the division of labour, where anyone can further himself by training himself to serve others. People say it is simply getting a job. They all thereby specialise, to some extent, to serve others.

RH: Socialists see “free markets” and “free trade” as economic “state of natures” which must be ameliorated by the state before a civilised society can be realised.

McD: No, they do not. They are not like Hobbes and Locke. Few think of a pristine anarchy or of a state of nature.  

Most of them have next to no idea of what they mean by socialist (we might all agree that it is a great-looking word), and anyway they are mainly just students conforming to what they find in the colleges, but they are usually more interested in other things.

Few doctrinaires can be found in the workplace. Nor in the pubs and clubs. Politics bores most people, and the government is a thing they sometimes read about or hear on the mass media news. But they work in the marketplace for some firm and they usually use some shops every day; they tend to like the market as customers, though it is far from ideal, and tend to dislike it as workers, owing to the normal disutility of labour. But a few do like their job.

RH: Conservatives in the traditional sense no longer exist as a recognisable political force in the West, but when they did exist they opposed “free markets” and “free trade” primarily on the grounds of national security and the general disruption to society that they caused.

McD: You are conflating conservatism with the Tories, Robert. Most people are conservative, and they always will be. Socialism is a new nineteenth century name for Toryism. It is not very conservative. What Peel renamed the Conservative Party in 1834 has still nationalised more entities than the Labour Party.

RH: Nationalists of the fascistic kind have traditionally opposed the ideas because they see the nation as a single organism which can only be strong if it is master of its own destiny, something which can only be achieved (they believe) through state direction of both the internal  market and of external trade.

McD: Historically, fascists have been disillusioned Marxists, as D. R. Steele held on the LA website and in his new book The Mystery of Fascism: David Ramsay Steele’s Greatest Hits (2019).

As the author explains, Mussolini was a top Marxist up till 1914. 

RH: There are varying quantities of truth in all these ideological responses, but their utility is seriously tainted by the lack of any objective or even properly defined and permanent prescriptive truth in the concepts of “free markets” or “free trade”. The reality of these ideas is that they are arbitrarily chosen bundles of behaviours which are excluded or included at the will of their proponents. Moreover, the bundles of behaviours are not static.

McD: No, we are objectively more or less free of the unjust anti-social wasteful state. The pristine liberal ideology is coherent.  

RH: The widespread negligence in examining the coherence of these ideas is all the more remarkable because their incoherence as theories and the arbitrary and dishonest nature of their practical realisation is not only readily apparent but fundamentally undermining of the claims made for them by their champions.

McD: That seems to be not one whit true, Robert.

Where are those dishonest liberal ideologues? Are there any in the Libertarian Alliance? I doubt it. I know of not even one. But I do know many who regularly ponder over the pristine liberal paradigm, so at any time they might well notice an error.

New Trump Initiative

Uncategorised Posted on Sun, April 19, 2020 06:27:52

Today President Trump announced his latest measure to fight the Covid-19 infection: the amputation of the hands of every person residing in the United States and its territories. “I have taken this bold and decisive action because any cost is worth it to save lives by fighting the virus,” he stated.

The president’s proposal calls for the setting up of hand amputation centers in every community, staffed by medical professionals. Dr. Anthony Fauci, of the National Institutes of Health, applauded the measure as salutary and appropriate. “There is a clear scientific consensus that the virus is spread by people using their hands to touch things.” Dr. Fauci added that the plan would be phased in by occupational groups, with medical professionals, White House staff, and members of Congress given a temporary deferral.

Senator Charles Schumer slammed the president’s plan as “too little, too late.” “Why have we had to wait two months for hand amputation?” queried Senator Schumer, who went on to explain that the president’s measure didn’t go far enough. “People can also spread the infection by touching things with their feet, yet the president’s disgraceful, half-hearted, and ill-thought-out policy provides not a penny for foot amputation.”

Presidential contender Bernie Sanders said that while hand amputation of the entire population of the world is long overdue, Trump’s proposal is a scheme to reward his billionaire friends with juicy government contracts. A new department of the federal government should immediately be established to carry out all amputations, the money to be raised by taxing billionaires.

Speaker of the House Nancy Pelosi declared that the first candidates for hand amputation should be the blue-collar working class, because these disgusting racists do nothing useful and their work can easily be replaced by robots. And without hands, they might find it more difficult to vote.

Presidential contender Joe Biden commented that hand amputation is “way, way overdue” but added, “This president has once again shown his criminal incompetence and disregard for the Constitution. George Washington didn’t die on Iwo Jima so that President Trump could be three-fifths of a person.” Biden recalled the time when he personally had to amputate the hand of an infected soldier during the Spanish flu epidemic, thereby saving millions of lives and ending World War II.

Congressperson Alexandria Ocasio-Cortez denounced the Trump plan as racist and misogynist because it didn’t single out the loathsome gringos in the woodpile who had invented the virus and were now profiting from it.

Representative Ilhan Omar pointed out that because of Sharia law, Muslim practitioners have a traditional expertise in hand amputation, and called for the recruitment of thousands of experts in this occupational field who are ready to move from the Middle East to the United States. “Hand amputation is a delicate matter which should not be carried out by the inexperienced,” she stated. Asked who would amputate the hands of people in the Middle East, she responded that this was not a priority, because the racist and imperialist United States was entirely responsible for the epidemic.

Some critics raised the fake issue of whether hand amputations might hurt America’s international competitiveness, but these ill-informed science deniers were soon put in their place. Nobel prize-winning economist Paul Krugman explained that a handless population would in fact be super-productive, which could be proved by second-order differential equations. Hand amputation would be a much-needed stimulus, as long as it was accompanied by other appropriate measures such as a hundred-percent marginal tax rate.

Political commentator Tucker Carlson pointed out that the Chinese might try to take advantage of a handless America by slyly using their temporary advantage in handedness to flood the American market with manufactured goods, forcibly compelling us to buy all kinds of stuff that we don’t want. To level the playing field, the US should bomb all Chinese industrial installations. “The one great principle to which I have devoted my life is the complete destruction of the Chinese people, whose rapid economic growth is an existential threat to America,” said Carlson. “The Chinese hate us: I have no idea why.”

The Enlightenment paradigm, the Romantic reaction to it and liberty

Liberty Posted on Sat, April 18, 2020 14:03:41

There is a clash between the Enlightenment paradigm that dominated in the eighteenth century and the newer Romantic paradigm that emerged with Rousseau’s reaction to it, especially in his exchanges with Voltaire. The Romantic outlook sees enemies everywhere whilst the Enlightenment sees a basic harmony in society, with the biggest problem being the massive ignorance that even the most knowledgeable people are bound to retain.

For example, the Enlightenment sees common economic interests and basic harmony most of the time, and maybe even all of the time, but the Romantics tend to think trade is a sort of rip-off somewhere, even if they cannot quite see exactly where. So the Romantics hold that some get rich, but usually only at the expense of the poor, whereas the Enlightenment outlook sees the mutual gains of trade, even though one side never gets to be in the richest class. Here the Enlightenment gets the basic economic facts right, and Romance fosters the common zero-sum delusion, which is sadly retained by current common sense.

And it is similar with debate too. That debate is competitive is a common Romantic idea but quite false, as eristic motivation is harmless enough and cannot alter the logic of the common aim of any debate itself being the truth, even though that might well be distinct from the motivation of the debaters. So debate is basically co-operation rather than zero-sum competition. And it tends to aid Enlightenment. Polemical or eristic motivation will be a social boon if ever it motivates debate, as debate fosters both enquiry and the testing of ideas. In any debate, there are both sides and the case each side might have, plus the quest for truth that they both have in common. The morality of debate should include what Karl Popper held to be the duty of self-criticism, or honesty, just to aid Enlightenment all round, otherwise the truth might be neglected.

Both the Enlightenment, and the reaction to it in Romance, tend to apply to all other paradigms. We can expect any Romantic to have enemies so, for example, a Green Romantic will think in terms of “us and them”, but an Enlightenment Green will most likely not, and will just think non-Greens are mainly ignorant. Both paradigms can err. As Pope says, “to err is human.” He adds, “to forgive is divine”. But if a Romantic converts to the older Enlightenment he will think the problem is only ignorance, rather than how some people basically are, and maybe are almost bound to be.

We think many things are natural that are merely habitual. Aristotle said habit was second nature. But any insight might show us that we have a bad habit. I think the Enlightenment paradigm of “one and all” is more efficient than the Romantic one, of “us and them”, but they are both mere paradigms for liberal propagandists.

It is part of Romance to imagine that the party differences in politics are tribal rather than a matter of the mere information people might have, and that the Enlightenment thinkers are naive to think many people, if any, can be converted over from one side to the other. Psychologists like Jordan Peterson take all that for granted and talk of the political party positions as if they were stable individuals. The Romantic psychologists base a lot of what they see as their science on mere passing ephemera, for the parties run on current affairs rather than on principle. The psychologists feel they are non-ideological, but they embrace the Romantic outlook.

Most philosophers are confused about freedom and liberty. We have individual liberty, where we do whatever we like, given the situation we are in. As Hobbes said, all we do is what we want to do, either as a means or as an end. This is not particularly social and only death ever ends it. Even in prison, we can do this or that. Our liberty is restricted but not thereby ended in gaol. If we are attacked in prison we can fight back, if we are still conscious and have some fitness, but it might be unwise to do so. Duress does not remove individual liberty. We are free to at least attempt to rob or to murder others, if ever we are bad men and so want to do so. The good man, as even Aristotle agreed with Socrates and Plato, will not want to do so, but Aristotle did not agree with them that mere knowledge of the good ruled out immoral options by making men good. Common sense never agreed with them on that, and Aristotle defended common sense. However, I think Socrates and Plato were quite right.

So people were free in the Hobbesian individual sense even in the late USSR. But they clearly lacked social liberty there. We can all use our individual liberty to try to achieve social liberty.

Social liberty is the liberal, or libertarian, ideal. This is just the above individual liberty but with some respect for the liberty of all others; so we do not rob, do not murder, etc., but rather we treat all others with respect regarding their liberty.

The very opposite of social liberty is war. War always destroys social liberty, and that, rather than the truth, is the first casualty of war. War cannot similarly affect much truth [or belief, i.e. what we see as true, as that will be as vital as ever and needed for all that anyone does], though truth, unlike the facts, can be destroyed. Books can be burnt, for example. We need to make fresh truth whenever we look to see what we are doing. Our other senses do fresh empirical “research” too, to reformulate our needed fresh beliefs so we can see to do whatever we want to do. Opportunity cost is the only cost and it applies to all we do.

Hobbes thought that liberty would be a war of all against all and that the state might end this anarchy by forcing peace onto people, but there he got it exactly the wrong way round, as the state is a war machine aimed at war, and the affairs of state, politics, are to do with war; indeed, politics is cold war. So if we rolled back or even removed the state we would get not only more liberty but also more peace.

Winning the Popular Vote

Politics Posted on Sun, June 30, 2019 15:56:40

Whoever Wins the Electoral College Would Have Won the Popular Vote

Since they lost the 2016 election to Donald Trump, many Democrats have become preoccupied with abolishing the electoral college. They point out that Hillary Clinton “won the popular vote”—a misleading turn of phrase, as we shall see. They conclude that there’s something both undemocratic and un-Democratic about the electoral college, that it’s unfair, and more importantly, biased against the left, and should therefore be abolished.

Sometimes they even assert that if the election had been decided by the popular vote, Hillary Clinton would have won. A different point of view comes from President Donald J. Trump, who remarked just after his election victory that he would have won more easily if the election had been decided by the popular vote.

I think in this case President Trump is right and most of the Democrats wrong. In fact, I would like to propose the following fundamental rule:

Whoever wins a presidential election under the electoral college system would also have won if the election had been decided instead by popular vote.

Quick-witted readers will have already figured out why this might well be true. But for the benefit of the sleepy-heads, I will now unpack this proposition at more length.

The obvious rationale for the claim that Hillary would have won if the election had been decided by the popular vote is that she certainly got more votes than Trump, and getting more votes wins you the popular vote.

The equally obvious rebuttal is that many people would certainly not have voted the same way if the election had been decided by popular vote. Knowing that the election was being decided by popular vote, not by the electoral college system, many people would have voted differently, including some people who would have voted for Trump instead of not voting at all.

Abolishing the Electoral College Would Probably Not Help the Democrats

There’s no going back to the electoral college as conceived by the Framers. People who vote in presidential elections think of themselves as voting for one or another candidate for president. They don’t think of themselves as voting for wise persons who will later pick the president.

Although there is no popular demand for abolition of the electoral college, it does appear that any future attempt by states to depart radically from a popular-vote system within each state (something they would be entitled to do under the Constitution) would be met by widespread outrage. And so, the electoral college is tolerated because it produces a result which approximates quite closely to a nationwide popular vote. And inasmuch as it departs from a nationwide popular vote, it does so in a way which is easy to understand and chimes with the idea that the United States is a federation of states. People easily comprehend that the president is chosen state by state.

Arguments for the electoral college include the claim that some autonomous role for the states is a good thing, and the claim that the electoral college tends to favor political tendencies which are more evenly spread across the country, as against tendencies concentrated in some areas. I am not going into these arguments here. On balance, I wouldn’t shed any tears if the electoral college system were replaced by direct election of the president, along the lines of the system in France.

What I do want to point out is that abolishing the electoral college would not help the Democrats, or at least that it’s not clear it would help the Democrats and might just as likely help the Republicans.

People who want to abolish the electoral college, currently mostly Democrats, typically say things like this: “If the 2016 election had been decided by popular vote, then it follows that Hillary Clinton would now be president.” This assertion is blatantly false. The people who make this claim appeal to the fact that Clinton got more votes—a bigger popular vote—than Trump in 2016. But, of course, the conclusion most certainly does not follow! There’s all the difference in the world between “winning the popular vote” in a system where no one who matters gives a flying freak about the popular vote, and winning the popular vote in a system where the popular vote is the all-important decider.

If the 2016 election had been decided by popular vote, then the popular vote would not have been the same as it was in the actual election. Many people would have voted differently than they did. Many people would have been caught up in the campaign who in fact ignored it almost completely, while voters in certain counties, who in the actual election became centers of attention, would have gone unnoticed.

The campaign would have been, in some conspicuous ways, unrecognizable compared with what actually occurred: absolutely no one, for instance, would have cared who “won” Florida or Pennsylvania, a virtually meaningless concept under a popular-vote-decided system. A few thousand more or less Republican or Democratic votes in California, which would have counted for absolutely nothing in the actual 2016 election, would have been exactly as important as a few thousand more or less Republican or Democratic votes in Michigan.

The Rules of the Game

One point to be clear about is that under the electoral college system, no candidate ever tries to win the popular vote. The popular vote is just an incidental outcome which no one is aiming for. It follows (though this is a bit harder to see, and I will get to it shortly) that every candidate deliberately aims to reduce their popular vote below what it might have been. (When I say “every candidate,” I mean every candidate who is seriously trying to win the presidency; this doesn’t necessarily apply to candidates who are running merely in order to “send a message.”)

Hillary Clinton was not trying to win the popular vote in 2016; she was trying to win the electoral college. She would never knowingly have sacrificed a single vote for state electors for the sake of any number of popular votes. And she would have sacrificed any number of popular votes to get one more vote for the electoral college. Of course, many things a candidate might do to increase their electoral-college vote total would also incidentally increase their popular vote, but where there is any conflict between these two objectives, then increasing the popular vote counts for absolutely nothing.

This is why it’s misleading to talk about “winning” or “losing” the popular vote under a system of rules where everyone trying to win views the popular vote as irrelevant to the capture of power. It’s like saying that someone who lost a game of chess by being checkmated “won” the piece-taking score because he captured more pieces than his opponent. This is just not the way chess games are scored. And if it were the way chess games were scored, then both players would have played very differently, and very likely the same player would have won (because skill in one game is transferable to skill in a somewhat similar game).

Most people with little interest in politics probably suppose that a state is given electors in proportion to its population. In fact, states are allocated electors according to their total number of House representatives plus senators. While this is roughly in line with population, it does give a definite built-in advantage to voters in low-population states, each of which has two senators just like the high-population states. While my impression is that currently this way of determining electoral college votes probably favors Republicans slightly, it does not favor them hugely—we can all think of some low-population states which are solidly Democratic (Hawaii, Rhode Island, Delaware).

Incidentally, I have not seen an arithmetic breakdown of the causes of the phenomenon where a candidate “wins the popular vote but loses the electoral college.” In Hillary Clinton’s case, how much (if any) of this disparity was due to low-population states having higher electoral college representation because of their two senators, and how much was due simply to the heavy concentration of Democratic voters in states like California and New York? No doubt someone has analyzed this, but I haven’t come across their conclusions.

I’m not going to do the math here, but common-sense guesswork suggests that the heavy concentration of Democratic support in some states is much more consequential than the built-in advantage to low-population states (though that would not necessarily remain true if all or nearly all low-population states happened to be of the same party).

Thwarting the People’s Will

A typical argument by Alex Cohen for the abolition of the electoral college (in this case, back-door abolition by means of a compact between states) asserts that when a candidate wins the presidency despite “losing” the popular vote, this “thwarts the people’s will.”

Now, first of all, as a pedantic and strictly irrelevant yet irresistible observation in passing, anyone who favors Roe v. Wade accepts that the people’s will should and must often be thwarted. That is the entire point of Roe v. Wade and other Supreme Court decisions beloved of Democrats. Thwart the people’s will, dammit! The Democrats are passionately devoted to using the Constitution to thwart the people’s will, and in this I completely agree with them.

But, more to the point, the way the popular vote turns out, in a system where the popular vote is not the decider, will be very different from the way it would turn out in a system where it is the decider. And therefore, if the latter would express the people’s will, the former cannot express the people’s will. Going only on the facts and arguments assembled by Alex Cohen, his conclusion that the electoral college thwarts the people’s will simply does not follow.

Alex Cohen also throws in the remark that the electoral college “potentially lowers voter turnout,” with a link that strongly suggests the lowering is more than merely potential. Mr. Cohen apparently says this because a high voter turnout is considered a good thing and so this is one more strike against the electoral college. He doesn’t notice that this explicitly recognizes that many people don’t vote the same way under the electoral college as they would have done under decision by popular vote, and that therefore it removes an essential premiss for his conclusion that the electoral college thwarts the people’s will.

A Game of Skill

Republican voters in California, New York, and Illinois would be more inclined to vote if we switched to decision by popular vote. Republicans know that under the electoral college system, as long as the state is heavily Democratic, their votes count for nothing. Under a popular vote system, they would know that their votes count the same as any other votes anywhere in the country.

It’s not quite as simple as that, because most people, having made it to the polling place, vote for a number of candidates, national, state, and local, as well as for referenda (in states like California that allow them). Having made the effort to get to the polling place, the additional cost of voting for one more item, such as president, is very slight. On the other hand, many California voters will see themselves as on the losing side of state and local contests as well as the presidential choice, so there will be some disincentive to show up to vote at all.

You might think that the number of Republicans who don’t bother to vote for president in California because they know they have no chance of winning will be balanced by the number of California Democrats who won’t bother to vote for president because they have no chance of losing. However, it’s a recognized fact of voter behavior that voters do like to vote for the side which wins, and assuming that to be true, it seems reasonable to infer that the disincentive effect of knowing that the state’s vote is a foregone conclusion will more severely affect California Republicans than California Democrats.

So, we see that people will vote differently under the two systems, regardless of anything the candidates do. But it’s additionally true that the candidates will campaign differently, and this will affect how people vote. That’s the entire point of campaigning, after all. So, not only would many voters with the same attitudes and preferences vote differently under the two systems, but also, many voters’ attitudes and preferences would be changed in different ways by campaigning under the two systems.

Trump won the 2016 election largely by superior strategy. He campaigned heavily in rust-belt states like Wisconsin, Michigan, and Pennsylvania. (There were other components to his strategy. He continually pounded away at a dozen key policy issues, whereas Clinton avoided talk about policy in favor of painting Trump as an evil monster.)

Great campaigns, like great battles and great chess games, tend to look easy in retrospect. Some people have second-guessed Napoleon at the Battle of Jena. If things had gone a bit differently, they conclude, Napoleon would have lost Jena . . . No! If things had gone a bit differently, Napoleon would, in all probability, have won Jena in a different way. We have to add “in all probability” because there’s always an element of luck.

Before the election, many conventional experts scoffed at Trump’s decision to campaign so heavily in the rust belt. Couldn’t this amateur, this dolt, see that he had no chance in those states? But Trump had superior intel (Cambridge Analytica) and superior strategic vision. He had been pondering, developing, and honing his working-class, protectionist, America-first electoral strategy for over thirty years. Trump did not win because Hillary was “a bad candidate,” as so many people now like to intone. Her supposed “badness” sits oddly with the conventional wisdom of all the accredited cognoscenti before the election, who confidently expected her to win. Trump won because he was an extraordinarily capable candidate. He out-generaled the highly competent yet conventionally-minded staff of Hillary Clinton. Trump beat Clinton by better science and deeper thought.

Under a popular-vote system, the campaign would have played out very differently. But a brilliant strategist is a brilliant strategist. No doubt both Trump and Clinton would have spent a lot of time in California, New York, and Illinois, places where, in the actual campaign, they did next to nothing. Voter turnout in California, New York, and Illinois would have been higher—and the increased turnout would have been higher among Trump voters than Clinton voters. Trump would, in any knowledgeable judgment, have won the popular vote, probably by a bigger margin than he actually won the electoral college vote.

If you see that winning elections is a contest of abilities, a game of skill, you will appreciate the point that a better campaigner under one system will also be a better campaigner under a different system, just as Napoleon was usually a better general than his opponents, whatever the terrain or the weather. Add to that the fact that under the present system, candidates routinely sacrifice the popular vote to winning the electoral vote. There is a definite trade-off between the two, and the only reason we don’t hear more about this trade-off is because everyone is so thoroughly aware that the popular vote just doesn’t count.

There are parts of California and other solidly Democratic states which are “natural Trump country,” but where Trump did no campaigning, because it would have been a complete waste. An hour’s campaigning, or a million dollars’ worth of campaign spending in California would have netted Trump more popular votes than similar expenditures in Michigan, but this kind of move could have lost Trump the election (while winning him the popular vote), and Trump understood this perfectly.

Any presidential candidate deliberately makes decisions which he or she believes will reduce his or her popular vote (compared with what it would have been, given different decisions), in order to maximize his or her electoral college vote. This must always occur, whether or not the actual aggregate outcome is to “lose” the popular vote. A candidate will allocate each unit of campaign resources where it yields the biggest return in electoral college votes alone, and there will always be many alternative ways to allocate that unit which would increase the popular vote by more while being less effective at increasing the electoral college vote. These alternatives, as long as they are accurately perceived as such, are instantly dismissed from consideration, so we tend to overlook the fact that serious candidates always deliberately sacrifice their popular vote to their electoral college vote.

The Complication of Runoffs

If the United States were ever to be converted to the popular-vote system for the presidency, it’s certain that provision would be made for a runoff second election. In France, for example, if the first presidential election does not give more than fifty percent to one candidate, then a second, runoff election is held, with only the two top-scoring candidates from the first election competing.

A runoff system for US elections under a new popular-vote system introduced by constitutional amendment would be certain, because the people designing the new system would want to rule out the possibility that anyone could be elected with a minority of the votes. If there were only one round of voting, with victory going to the candidate who got the most votes, it would be possible for a candidate with a minority of the votes to gain the presidency. Indeed, it would be more than possible; since a single-round plurality system would also encourage minor-party candidacies, it would very likely happen in the great majority of presidential elections.

Now, it could also happen under the electoral college system that a candidate could get more electoral votes than any other, yet still get a minority of all the electoral votes—even, with several strong candidates, a fairly small minority. The Constitution provides that in that situation, the choice of president goes to the House of Representatives.

It doesn’t look very likely at first, as a matter of practical politics, that the Constitution will be amended to replace the decision by the House of Representatives with a runoff election, while otherwise preserving the present electoral college system. Referring the decision to the House strikes most people as strange and unsatisfactory, yet it happens quite rarely—only twice so far in US history. If it were to happen again, the House might feel obliged to give the presidency to the candidate who had won a plurality of electoral college votes, or might even arrange a runoff election of the top two candidates as its way of deciding who would be awarded the presidency.

If for some reason neither of these expedients worked, and if there were a long period of three big parties (for example because the Democratic Party went into chronic decline and the Republicans split between traditional Republicans and Trumpists), then a runoff election for the electoral college might become a real possibility. Why didn’t the Framers hit upon this obvious solution? Calling a nationwide election was not such a simple matter in the eighteenth century. Many voters would require a journey of a day or two to get to the nearest polling place, assuming they could get ahold of a horse. But more significantly, presidential elections were not originally intended to be democratic. The members of the electoral college were not expected to follow the wishes of the voters, as they are now.

The mechanics of the electoral college system favor an outright majority for one candidate, whereas it’s quite common for the candidate who “wins the popular vote” to get a minority of the popular vote—this happens in about fifty percent of presidential elections. Hillary Clinton, for example, though she “won the popular vote,” received a minority of the popular vote in 2016. More people voted against Hillary Clinton than voted for her, and the same, of course, is true for Donald Trump.

Hillary Clinton got 48.18 percent, compared with Trump’s 46.09 percent. Just to keep the numbers in perspective, and not because it has any profound significance, note that Donald Trump plus Gary Johnson got slightly more votes than Hillary Clinton plus Jill Stein (Johnson got more than three times as many votes as Stein). If Johnson’s votes had gone to Trump and Stein’s votes to Hillary, Trump would have “won the popular vote.”
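To keep the arithmetic straight, here is a minimal sketch of that tally in Python. The Clinton and Trump shares are the ones quoted above; the Johnson and Stein figures are not in the text and are approximate additions for illustration.

```python
# 2016 popular-vote shares (percent). The Clinton and Trump figures are
# quoted in the text; the Johnson and Stein figures are approximate and
# added here for illustration.
clinton, trump = 48.18, 46.09
johnson, stein = 3.28, 1.07  # approximate third-party shares

# Pair each major candidate with the minor candidate nearer to them.
trump_plus_johnson = trump + johnson
clinton_plus_stein = clinton + stein

print(f"Trump + Johnson: {trump_plus_johnson:.2f}%")   # 49.37%
print(f"Clinton + Stein: {clinton_plus_stein:.2f}%")   # 49.25%
```

On these figures, Trump plus Johnson does narrowly exceed Clinton plus Stein, and Johnson’s share is indeed more than three times Stein’s, as the paragraph above notes.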

You might point out that very likely, in a runoff, a higher percentage of Stein’s votes would have gone to Clinton than the percentage of Johnson’s votes which would have gone to Trump. This is probably true—if there were no campaigning between the first and second elections. And yet, between the first and the second elections, there would have been a second campaign, and it can’t be ruled out that Trump would have won over more Johnson voters, and even captured some Stein voters and disgruntled former Sanders voters in the second campaign, not to mention possibly a few other Clinton voters!

This kind of exercise doesn’t prove anything at all about the real world, because if the election had been run under popular-vote rules, the actual vote totals, for the first round of voting, would have been quite different from the actual vote totals in 2016. But it does serve to illustrate some of the numerical issues.

A Test of Steele’s Rule

If you’ve followed me this far, you’ll be able to see that the so-called “loser” of the popular vote who wins the electoral college might easily have won the popular vote under a system decided by the popular vote. This is entirely elementary and indisputable.

You’ll probably also agree that who wins the electoral college is a much better indicator of who “would have” won the popular vote under a popular-vote-decided system than is the popular vote under the electoral college system.

These two propositions will be quickly accepted by most people who give any serious thought to the matter. But I’m going further. I am saying that whoever wins the electoral college would have won the popular vote (under a nationwide popular-vote system).

My rule refers to an abstract and simplified model of the world. (So does the contrary claim that the winner of the popular vote under the electoral-college system would have won the election under a hypothetical popular-vote system.) It compares an actual situation where someone wins the electoral college with a hypothetical situation where the election would be decided by nationwide popular vote and where the attitudes and preferences of all the voters would (to start with, let’s say one year before the election) be identical to what they are in the actual situation. It says that the gross outcome—who wins the presidency—would be the same.

In practice we can never make this comparison directly. There can’t be two worlds where the attitudes and preferences of the voters are identical but the electoral systems are different. So we’re dealing with an abstract model which sheds light on reality, rather than a direct observation of reality.

But there is an indirect empirical test of the rule, which goes as follows.

There is a positive association between votes for a presidential candidate and votes for House representatives of that candidate’s party (all House seats are up for election on the same day as the presidential election). If the party of the winning presidential candidate gets the majority of House votes when that candidate “loses” the popular vote, this tends to corroborate my rule and to refute the rule tacitly appealed to by those Democrats who suppose that Hillary would have won a hypothetical election decided by popular vote in 2016. If those Democrats are right and I am wrong, you would expect the party of the presidential candidate who won the election but “lost” the popular vote to get fewer House votes than the other party.

We have two recent cases where the presidential candidate who won the election “lost the popular vote”: 2000 and 2016. In both these cases, the victorious presidential candidate’s party “won the popular vote” for the House of Representatives. (We’re looking at the total votes cast in elections for House candidates, not how many House seats were won or lost.)

In 2000 the Republicans got 47.6 percent of the popular vote for House seats, as against the Democrats’ 47.1 percent; in 2016, they got 49.1 percent, as against the Democrats’ 48.9 percent.
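The check itself is mechanical. A minimal sketch, using only the percentages quoted above, confirming that in both years the party that won the electoral college while “losing” the presidential popular vote nevertheless led the House popular vote:

```python
# House popular-vote shares (percent) from the text, for the two recent
# elections in which the electoral-college winner "lost" the
# presidential popular vote. The Republicans won the presidency in both.
house_vote = {
    2000: {"Republican": 47.6, "Democratic": 47.1},
    2016: {"Republican": 49.1, "Democratic": 48.9},
}

for year, shares in house_vote.items():
    # Positive lead = the presidential winner's party led the House vote.
    lead = shares["Republican"] - shares["Democratic"]
    print(f"{year}: Republican House lead of {lead:.1f} points")
```

In both years the lead is positive (0.5 points in 2000, 0.2 points in 2016), which is the corroboration the argument relies on.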

In elections for the House, there’s a comparatively large number of minor-party candidates (twenty-five in 2016, plus those classed as “Independent” and those classed as “Others”). If we count only the Republicans, Democrats, Libertarians, and Greens, the Republican plurality in 2000 rises to 49.28 percent and in 2016 to 49.70 percent.

So, on both occasions the party which won the electoral college while “losing” the popular vote for president “won” the popular vote for House seats. This corroborates my rule and lends support to the view that the electoral college vote has a fair claim to be taken as an expression of the people’s will.

The rule I am proposing is not a theorem, logically derived from axioms, but a generalization about two alternatives, one of them purely hypothetical, a “counterfactual conditional.” It can therefore never be conclusively “proved” nor even directly tested. But if we think seriously about what’s going on in presidential elections, it seems to hold up well, in terms of both its internal coherence and its agreement with the data.

We’re in a good position to conclude that Steele’s Rule is worth accepting as at least highly probable, as the best practical guide to thinking about presidential elections, and as maybe even true: Any candidate who wins the presidency under the present electoral college system, even if he or she “loses” the popular vote under that system, would have won the popular vote and therefore the presidency, if the election had been conducted and decided according to a popular-vote system.

Is Naive Realism the Cure for Postmodernism?

Book Reviews Posted on Tue, March 12, 2019 07:05:22

Is Naive Realism the Cure for Postmodernism?

Quee Nelson, The Slightest Philosophy. Indianapolis: Dog Ear Publishing, 2007.

Quee Nelson is a classical liberal who has written a book on metaphysics and epistemology, well received by many libertarians; several of them told me it was a book I ought to read. It has received numerous compliments but, as far as I know, no serious discussion. I very much disagree with the main thrust of Quee’s argument, and in this review I will try to explain why.

Quee’s book is entertaining, humorous, unpretentious, readable, and displays evidence of a lot of reading and thought. It’s a book about philosophy but one that clearly aims to include among its readership people who have not read much philosophy. This present review is the same kind of thing; it’s not the sort of review you would expect to find in a philosophy journal, but rather one on about the same level as Nelson’s book, that is, one which takes nothing for granted, which is inclined to ramble, and which does not hesitate to ‘naively’ address some elementary points.

I see this review as a kind of cognitive therapy for libertarians who think like Quee, but unlike most therapy, I am offering it for free. So, you have a real bargain here, and libertarians are supposed to love a bargain. I also see this review as an encouragement to readers to get involved in the marvelous intellectual adventure of philosophy, which entails coming to grips with real arguments at their strongest, and understanding why these arguments do convince intelligent people, rather than pulling faces at conclusions you don’t like.

Nelson claims to adhere to an unpopular, minority view in philosophy—naive realism. I adhere to a different unpopular, minority view in philosophy—critical rationalism. Critical rationalism may be better known as Popperianism, though there is a well-known law of nature that all Popperians have at least one major disagreement with Popper, so we should prefer the more impersonal term, ‘critical rationalism’.

However, on most of the issues covered by Nelson’s book, I adhere to the conventional, consensus, or default position among present-day English-speaking philosophers—representative realism. So, most of the time, I will be defending a fairly mainstream philosophical position against Quee Nelson, though occasionally I will come up with a critical rationalist twist.

Here are the main points I will be making in what follows:

1. Nelson calls herself a naive realist but never lets us know what ‘naive realism’ is.

2. Nelson misrepresents mainstream academic philosophy by claiming it is completely dominated by anti-realism.

3. Nelson mistakenly claims that postmodernism is rooted in skepticism about perception (and that it is derived from Hume and Kant).

4. Nelson doesn’t understand the force of the arguments of Hume and Kant.

5. Nelson mistakenly claims that idealism is a powerful movement in present-day English-language philosophy.

6. Nelson relies upon an argument against representative realism which is purely semantic and therefore inconclusive.

7. Nelson advances a theory about the historical causation of political outcomes by philosophical ideas, which is full of holes.

Naive and Representative Realism

Nelson tells us that she is arguing for “naive” or “vulgar” realism (pp. 2–3). She says she prefers the term ‘naive realism’ to ‘direct realism’, because she thinks that the latter could be taken to deny the complex causal chain involved in perception (pp. 10–12). But other philosophers who advocate what they call ‘direct realism’ don’t deny this at all.

David M. Armstrong argues in favor of direct realism in his 1961 book, Perception and the Physical World. As far as I know, this is the best case ever made for direct or naive realism, but although Nelson mentions this work (p. 9), she strangely does not say how much of it she agrees with, or whether Armstrong’s direct realism is the same, or roughly the same, as her naive realism. This is part of a general problem, that Nelson’s actual position, the delineation of what she calls naive realism, is elusive. The reader can only be puzzled as to what Nelson’s naive realism is.

All forms of realism agree that physical entities, such as tables, chairs, rocks, trees, stars, and clouds, exist independently of our minds. Disputes between representative realism (representationalism) and naive (direct) realism have focused on the question of whether, when we see an object such as a tree, we do so by means of seeing a mental representation of that object, or whether we don’t see any representation but only see the object itself. (I don’t approve of that way of framing it, but naive realists usually do frame it in some such way.)

A different distinction is that between common-sense realism and scientific realism. Some people think there’s a troubling conflict between these two. Common-sense realism is the view that the things we suppose we observe as we look around in everyday life exist, independently of our awareness of them. Scientific realism is the view that the entities described by physics and other natural sciences exist, independently of our awareness of them.

I don’t see common-sense realism and scientific realism as competing alternatives. My view is that where common sense and science clash, science is probably right and common sense even more probably wrong. So here my view is contrary to that of Nelson, who thinks that common sense trumps physics (pp. 7–8).

Common sense is not fixed. Today’s common sense among educated people is partly a product of science, or of the scientific modification of more old-fashioned common sense. It used to be common sense that iron boats could not float, and when gas lighting was first introduced, many people couldn’t believe that the pipes carrying the gas did not heat up. Common sense is an assemblage of theories and a product of culture; it consists of memes; it is inculcated into individuals largely by the use of language; it varies among cultures and among sub-cultures; it has evolved over the centuries; and it is always further revisable. Common sense often contains valuable suggestions and should not be ignored, but it carries no authority.

It would be nice to be able to state Nelson’s own characterization of naive realism and proceed from there, but unfortunately this is not straightforward. She tells us (pp. 2–3) that naive realism is the view that the things we perceive “comprise” an external universe which doesn’t depend on our perception. This implies that cosmic rays, magnetic fields, and dark matter are not part of this universe (they either don’t exist or belong to a different universe).

We can probably assume that this is not what Nelson intended; what she might have meant is that the things we perceive are parts of a universe which also contains many other entities. But this also is unsatisfactory, because this definition would apply to all forms of realism, representative as well as naive. So, this definition would not identify what’s peculiar to naive realism. We never do learn how Nelson wants to define her own naive realism, so as to distinguish it from common or garden representative realism.

Again and again, she seems as though she’s just about to define naive realism, or her version of it, but then she simply defines realism, in a way which would include representative realism. To take just one example, she says that naive realists like herself have an “unwavering faith in the actual existence and intractable mind-independence of locomotives” (p. 10). Yet, allowing for some uneasiness about the word “faith”—but let’s not quibble—this is just as true of representative realists as of naive realists.

The closest Nelson comes to criticizing representative realism is with the brief section headed “The Irrelevance of Representationalism” (pp. 12–15). Here she complains that many different philosophers have advocated many different conceptions of whatever it is in the mind of the perceiver that links the perceiver with the perceived object. She complains about the profusion of terminology as well as definitions. And she says this doesn’t really matter, it’s a “technical side show,” because all that “really matters” is realism versus anti-realism, the question of whether perceived objects exist independently of the perceiver’s mind. But if you’re claiming to advocate naive realism, and you disparage its major alternative, representative realism, and many of your opponents are representative realists, it’s incongruous to say that the issue of representative realism doesn’t matter.

In another brief discussion of representative realism (pp. 5–7), Nelson addresses only the question of color realism. In fact, someone reading this passage with no previous knowledge of these issues might easily conclude that the distinction between naive and representative realism lies entirely in the color question. Galileo, Locke, and many others, have held that whereas some aspects of objects like shape and size are really ‘in’ the perceived objects, aspects like color, smell, and sound are generated in the mind of the human observer.

Today almost the only philosophers who discuss color realism at any length are those specializing in this issue, and most of them take the view that color is a real property of objects (see for instance the article by Johnson and Wright). However, this, if correct, would not entirely dispose of the color question, for there are certain apparent facts about colors (such as ‘There can’t be a reddish shade of green’) which, taken naively, seem to be undeniable facts about objective colors, but are in fact (so science tells us, and here I believe science) entirely due to the ‘design’, or specific structure, of our body’s perceptual apparatus. As Günter Wächtershäuser said, there’s more to vision than meets the eye.

Rejecting Realism

The historically most influential form of non-realism (unless you count Plato) was idealism, classically exemplified in George Berkeley, and fashionable at the end of the nineteenth century in the version known as ‘absolute idealism’, as taught by F.H. Bradley, J.M.E. McTaggart, and Josiah Royce. Idealism claims that all our knowledge is essentially mental and subjective, and therefore the only things we can know about are ultimately products of the mind.

Idealism has little following today. Opposition to realism mostly comes from cultural relativism or social constructivism, sometimes lumped together as ‘postmodernism’. Postmodernism has very little following among philosophers, but it has a huge following—it is almost the reigning orthodoxy—among academics in literary and ‘cultural’ disciplines.

Nelson conveys the impression, once or twice by direct assertion but much more often by insinuation, that non-realism is the dominant position among anglophone philosophers. But this is mistaken; probably most philosophers (meaning faculty and grad students in university philosophy departments) are realists. I will mention some indications of this fact, and of how Nelson misrepresents the current state of academic philosophy, as this review proceeds.

To avoid possible misunderstanding, I should add that philosophy as an academic discipline has become so specialized that many philosophers never have occasion to address metaphysical issues like realism, and also that some people I would classify as representative realists may not call themselves by that term. Representative realism is such a wide and general category—the obvious default category—that some adherents may not see the need for any identifying label.

For the sake of brevity and simplicity, I’m going to cover just two forms of realism—representative realism (representationalism) and naive (or direct) realism. I’m not going to offer my own distinction between naive realism and direct realism, as some writers do, but will treat these as equivalent. Nor will I give separate attention to different aspects of realism, such as truth, objectivity, and mind-independence. Since I am reacting to what Quee Nelson says, I completely neglect a number of important arguments and distinctions which don’t arise in her discussion.

In this review I’m concerned only with realism about perceived physical entities. Realism about moral or aesthetic matters would introduce a lot of additional considerations. Many people are realists about physical objects and non-realists about morality or aesthetics. When I use the term ‘skepticism about perception’, this is short for ‘skepticism about perception as informing us of a world of things independent of our minds’.

Misrepresenting Today’s Academic Philosophy

‘Realism’ may be crudely stated as the view that the universe is largely composed of entities which exist independently of any human awareness of them. For instance, if all conscious minds were to be wiped out, the stars and planets would continue to exist. This is a view I hold, in common with most philosophers.

The mainstream view in English-language philosophy is that perceived objects do independently exist, and this has been the mainstream view since about 1910–1920, by which point the formerly dominant ‘absolute idealism’ had begun to be abandoned, mainly due to the work of Bertrand Russell and G.E. Moore.

To a very large extent, current controversies about realism versus non-realism take the form of a battle between philosophers and non-philosophers. Realism has its stronghold in university philosophy departments while anti-realism has its stronghold among non-philosophers in humanities disciplines such as literary theory, sociology, and culture studies.

Nelson conveys the impression that academic philosophy is a bastion of non-realism. This is not true. She states that “a skeptical anti-realism” is “still more or less in the driver’s seat” (p. xii). In the world of English-speaking academic philosophy, no form of skeptical anti-realism has been in the driver’s seat since the 1890s.

Nelson not only falsely identifies anti-realism with mainstream philosophy, but also falsely roots present-day anti-realism in skepticism about perception. Skepticism about perception was originally at the root of idealism, the philosophical movement which had its heyday in the nineteenth century. Today’s anti-realism is normally rooted in cultural relativism and social constructivism, tendencies extremely popular among people in non-philosophy humanities disciplines and decidedly unpopular with philosophers. Cultural relativists and social constructivists rarely (if ever) make arguments which appeal to skepticism about perception.

The Professor and the Student

After the first two chapters, Nelson develops her argument by means of a dialogue between a “Student” and a “Professor.” She identifies with the Student, while the views she opposes are identified with the Professor. Her Professor is testy and dogmatic, shifty and evasive, making feeble arguments with a display of arrogance, and frequently saying things that are blatantly ignorant or silly, while her Student embodies sweet reasonableness, judicious fair play, encyclopedic erudition, and wisdom beyond his tender years.

The views preached by Nelson’s Professor are, taken in their totality, views which no one holds. They are views made up by amalgamating different philosophical doctrines (or selected portions of these doctrines) which Nelson doesn’t like, and which are unlikely to be simultaneously held by the same person. You will never find anyone who is simultaneously a Berkeleyan idealist, a Kantian, a post-Kantian, a Hegelian, a phenomenalist, a postmodernist, a Kuhnian, a pragmaticist, and a pragmatist, but Nelson’s imaginary “Professor” is such a chimera. In fact you would be extremely unlikely to find anyone who combines even two of these, though I admit it could conceivably happen. On almost every page, the Professor says something that no professional philosopher would ever say.

The net effect of this portrayal of the Professor is to further emphasize Nelson’s misleading claim that anti-realism is the dominant, orthodox, or mainstream view. Since Nelson purports to be arguing for naive realism, it would be more appropriate to have the Professor as a representative realist, or perhaps to have five interlocutors: representative realist, naive realist, idealist, phenomenalist, and postmodernist—with the understanding that idealism was included as a historical curiosity.

The Specter of Postmodernism

Nelson begins her book by talking for some pages about postmodernism. Why does she do this?

There’s no agreement about the definition of ‘postmodernism’ and I won’t try to come up with an authoritative definition. I will say that postmodernism in philosophy, postmodernism in the arts, and postmodernism in discussions of popular culture are often very different, and attempts (including attempts by some of their followers) to represent them as aspects of a single movement don’t work. The word ‘postmodernism’ has different usages, and in some of the more popular areas, it may have little, if anything, to do with non-realism or non-objectivity.

In philosophy, postmodernism is usually taken to refer to several French philosophers (including Michel Foucault, Jacques Derrida, Jean-François Lyotard, and Jean Baudrillard), but these writers don’t follow a single party line, and it’s hard to arrive at a single doctrine which they all advocate. As an example of variation in the use of the term, the ‘postmodernist’ writings of the cultural-Marxist (or perhaps former cultural-Marxist) literary theorist Fredric Jameson, immensely influential among students of literature and popular culture, advance a concept of postmodernism which has little in common with that of the French writers labeled postmodernist, or with the concept of postmodernism as equivalent to social constructivism and cultural relativism, or with any kind of non-realism.

Philosophical postmodernism is often identified with the claims that ‘reality is socially constructed’, ‘truth is culturally relative’, and ‘there is no truth, only various interpretations’. Here I’m going to accept these positions as a rough working definition of philosophical postmodernism, which seems to broadly agree with the way Nelson uses the word. In this sense, postmodernism can be seen as a form of non-realism, since it denies that there is a single objectively true account of facts. But postmodernism is very different from old-fashioned idealism.

Use of the word ‘postmodernism’ to denote a style or movement of philosophy dates from the 1970s, but postmodernism’s influence on American literary and culture studies began earlier, when it, or something like it, was called post-structuralism, deconstruction, social constructionism, or constructivism.

There’s no dispute that some writers commonly labeled postmodernist have said things which cause our eyebrows to elevate. To take the most famous example, when archeological researchers, after examining the mummy of the Egyptian pharaoh Rameses II, concluded that he had probably died of tuberculosis, the French philosopher Bruno Latour, sometimes described as a postmodernist, objected that this finding must be wrong, because tuberculosis hadn’t yet been ‘constructed’ by medical science in ancient Egypt, and therefore couldn’t have existed at that time!

You misunderstand the situation in philosophy today if you don’t appreciate that the majority of philosophers, including French philosophers, view Latour’s assertion as totally hilarious, just as I do or just as Nelson (presumably) does. Notice that the way of thinking that leads Latour to suppose that tuberculosis didn’t exist until the medical concept of tuberculosis was ‘constructed’ is not like old-fashioned idealism and has nothing to do with skepticism about perception.

Scholars in literary and ‘cultural’ disciplines have lower IQs and less exacting standards than philosophers, and these disciplines are happy hunting grounds for Marxism, Freudianism, postmodernism, and other fanciful belief systems currently rejected by philosophers. (I wish I could add feminism and critical race theory, but I have to acknowledge that even professional philosophers are often susceptible to these unsightly conditions.) These disciplines taken together have a much higher head count than philosophy departments. (Last time I looked, some years ago but I doubt it has changed much, faculty and grad students in philosophy departments in the US amounted to about 7,000, whereas disciplines such as history, sociology, psychology, religion, and ‘culture studies’ each amounted to several times that number—I’m including women’s studies, African American studies, and so forth, among “culture studies.”)

Postmodernism is one of a succession of French philosophical tendencies, beginning with existentialism in the 1940s, which came into anglophone, mainly American, literary theory and from there into more popular discussion, largely bypassing anglophone philosophy. Generally speaking, these tendencies had proportionately far more support from American non-philosophers than from American philosophers or even from French philosophers. Most French philosophers were never existentialists and never postmodernists.

So, the recurring pattern is that a trendy but distinctly minority ‘coterie’ movement within French philosophy is transmitted into American literary and ‘culture’ disciplines, gets media attention, is swallowed by pundits, educational bureaucrats, and other ignoramuses, and is resoundingly rejected by American and British philosophers, who then occasionally offer criticisms of that tendency. (Perhaps even before World War II, Bergsonianism might have followed much the same pattern, but I don’t know enough to be sure of that.)

Nelson says: “Unfortunately postmodernists didn’t get that way on account of ignoring the teachings of the philosophy department, but on account of sincerely imbibing them. The terrible truth is that postmodernism is what happens when honest, intelligent people read the canonical philosophers and believe them” (p. x). This “terrible truth” is, at best, a wild and unsupported surmise, and Nelson offers no corroboration for it. But it does explain why Nelson begins her book with postmodernism even though the great majority of her book is not about postmodernism and the great majority of philosophers are not (in any sense) postmodernists.

Her claim is that postmodernism (though she acknowledges that most philosophers reject it) follows from what most philosophers believe, and that is, in her account, Hume’s and Kant’s views on perception. But it is not remotely accurate that postmodernists became postmodernists because of the Humean-Kantian views on perception they learned from “the teachings of the philosophy department.”

If postmodernism really were due to Hume and Kant, we might wonder why the great majority of philosophers, familiar with Hume and Kant, have no time for postmodernism, while people in literary and ‘culture’ fields, knowing nothing about Hume or Kant, subscribe to postmodernism. We might also wonder why postmodernism waited until the 1960s to put in an appearance, instead of beginning in the eighteenth century.

The Downfall of Idealism

Indeed, we might take our curiosity about history further and wonder why non-realism in the form of idealism dominated English-language philosophy until the 1890s, and was then rapidly dislodged from this dominant position by Russell and Moore, since when representative realism (in various forms) has had considerably more influence. (Probably the major rival of representative realism would be phenomenalism, which I will not pursue here, except to say that it owes something to Hume and nothing to Kant.)

The story of how the dominance of idealism in anglophone philosophy was (rather dramatically and suddenly) overturned is told in Peter Hylton, Russell, Idealism, and the Emergence of Analytic Philosophy. It’s true that Russell later moved to a position known as ‘neutral monism’, an attempt to avoid both idealism and realism, though Karl Popper claims it is fundamentally similar to idealism (Realism and the Aim of Science, pp. 90–91). However, most anglophone philosophers did not follow Russell into neutral monism; they probably continued to embrace some form of representative realism. Many of them became materialists. We can define materialism (or ‘physicalism’) as realism plus the view that reality consists entirely of what used to be described as ‘matter in motion’ but is now more fashionably rendered as ‘particles in fields of force’.

A good insight into what happened to philosophy in the English-speaking world is provided by A.C. Ewing’s fine book of 1934, Idealism: A Critical Survey. Ewing was a realist, at a time when avowed idealism had become a rapidly dwindling minority among philosophers. He wanted to explain just what the disappearing idealism had been and why it was demonstrably incorrect, while preserving certain valuable insights he believed some idealists had contributed. Ewing’s book is a respectful autopsy on idealism; it gives a meticulous account of the arguments which had led earlier philosophers to embrace idealism, and the more recent counter-arguments which had led them to abandon idealism.

Many of these counter-arguments were not so much demonstrations that idealism was wrong as demonstrations that arguments in favor of idealism were flawed. Remember, Aristotelian or syllogistic logic had recently been replaced by modern logic, and this was a big deal.

One thing we should be clear about is that, historically, idealists were no less respectful of the objectivity of truth than realists. They didn’t suppose that they could make up the truth about reality according to their taste, or that any theory was as good as any other. They thought that the only reality we could know was constructed by our minds; they did not think that we had any discretionary control over the way this happened. Confronted with the contention that the Copernican account of the solar system is no more or less objectively accurate than the traditional account of some hunter-gatherer tribe (a view now commonly held in literary and culture-studies circles), Bradley, McTaggart, or Royce would have had pretty much the same response as Russell, Quine, or Searle, including astonishment that any functioning biped could countenance anything so ridiculous.

Idealism held that we’re not free to choose for ourselves the way in which the mind shapes reality: this is something involuntary, determined independently of our will. Idealists and realists would agree completely on the facts of astronomy, mechanics, or medicine; it’s just that idealists considered these facts to be inescapably and irreducibly molded or ordered by our minds, while realists maintained they were descriptions of a reality which was independent of our minds—though of course the descriptions themselves were products of our minds.

In one sense, idealism is the diametric opposite of postmodernism, because idealism holds that the necessary structure of the mind—the same for all minds and indeed for all possible minds—determines how we must inescapably conceive of the world, while postmodernism holds that different and contradictory ways of conceiving the world can be equally valid (a view that would have been quite baffling to idealists). This diametric opposition was understood by at least some of the originators of postmodernism, who deliberately included Kant and Hegel among the ‘moderns’ they were repudiating.

A Misleading Depiction

One of the misleading things about Nelson’s account is that she supposes that because Hume and Kant are ranked by knowledgeable people as outstanding philosophers, therefore philosophy students are encouraged to read them uncritically. This is ludicrously far from the case. No one is taught Hume or Kant in a philosophy course without being given a barrage of standard objections to their arguments. The student will be told about naive realism, representative realism, and perhaps two or three forms of non-realism, the points in favor of each of these positions and the points against. There will be explanations of Descartes, Locke, Berkeley, Hume, and Kant, including criticisms which explain why much of these authors’ work is not accepted by philosophers today.

Kant’s Critique of Pure Reason is acknowledged to be difficult (though no more difficult than, say, first-year college physics minus the math; Nelson’s contention, p. 143, that no one can make any sense of it is mistaken), and the student will probably use a short published ‘commentary’ or introductory guide. A short commentary will draw attention to things in Kant which can’t be accepted, things which are problematic because of specified objections, things where there is dispute about exactly what Kant meant (with attention to the vulnerability of what he meant on each interpretation), and so forth.

Anyone who regularly talks to a lot of faculty and grad students in philosophy departments knows that postmodernists are very thin on the ground, whereas they’re very thick in literary and ‘culture’ disciplines. The year before Nelson’s book appeared, a little book by Paul Boghossian came out, Fear of Knowledge, straightforwardly debunking “relativism and constructivism” from a very conventional philosophical standpoint.

Here Boghossian points out that “anti-objectivist conceptions of truth and rationality” are generally rejected within academic philosophy, and as a result, there has been “a growing alienation of academic philosophy from the rest of the humanities and social sciences, leading to levels of acrimony and tension on American campuses that have prompted the label ‘Science Wars’” (Fear of Knowledge, p. 8).

Despite its simplicity and brevity, Boghossian’s book was favorably reviewed in prestigious philosophy journals. The review by Harvey Siegel concludes: “Boghossian has wise things to say concerning the contemporary split between ‘academic philosophy’, which by and large rejects the target views [relativism and constructivism], and the rest of the humanities and social sciences, which, unfortunately in Boghossian’s view as in my own, are far more welcoming of them.”

The truth is that contemporary philosophy tends to be realist and philosophers suffer because of their opposition to the fashionable anti-realism prevalent in other humanities disciplines. Meanwhile, Nelson spreads the story that mainstream academic philosophers are responsible for the non-realism of these non-philosophers, a story which is some considerable distance from the truth.

Nelson maintains that postmodernism derives its anti-realism from Hume and Kant and in general from skepticism about perception. But if you look at the arguments proponents of postmodernism offer for their anti-realism, you find that they appeal to cultural relativism and social constructivism, not to skepticism about perception. And if you look at current philosophical critiques of postmodernism, such as Boghossian’s Fear of Knowledge, you find that they barely mention skepticism about perception.

To take another example, consider Peter van Inwagen’s excellent little introductory book on metaphysics. Van Inwagen, a realist philosopher well acquainted with non-realist thinking, gives an account of idealism, the arguments for it and against it (pp. 58–67), and there he discusses skepticism about perception. Separately, he gives an account of modern non-realism, what I have been calling postmodernism, the arguments for it and against it (pp. 93–108), and there he doesn’t mention skepticism about perception.

I have said that Nelson gives the false impression that anglophone philosophers are predominantly non-realist. Mostly she does this by innuendo and rhetorical spin, but on page xi, she offers two pieces of direct evidence for her claim.

The first is a reference to John Heil as cited in Michael Devitt’s book Realism and Truth. Heil reported in 1989 that the number of current books advocating anti-realism exceeded the number of pro-realist books. This doesn’t tell you how many philosophers belong in one camp or the other, and the realist Devitt seems to acknowledge (p. xii) that his earlier impression that anti-realism was “rampant” was mistaken.

The second piece of evidence Nelson reports as follows: “One of the latest books from Oxford University Press still assures us of ‘our epistemological enlightenment, where we have corrected our ordinary, naive view, and accepted that external items are not accessible to sense-perception’.” Here Nelson conveys the impression that the stated view is alive and kicking among philosophers, and perhaps that being published by Oxford University Press is a seal of approval for a work’s conclusions.

The book referred to here is by John Foster (1941–2009), well known as that peculiar and possibly unique anomaly, a contemporary advocate of Berkeleyan idealism. Oxford University Press publishes dozens of philosophy books every year; few of them advocate anti-realism, and almost none advocates idealism. John Searle’s 2015 book advocating direct realism (naive realism) was also published by Oxford University Press.

If you do a quick online search for reviews by philosophers of Foster’s books, you’ll easily find half a dozen, and if you read them you’ll find that almost all the reviews mention, in passing, the odd, bizarre, or unfashionable nature of Foster’s idealist position.

So, Nelson’s reference to Foster is misleading, and it is part of a seriously misleading pattern.

Where Hume and Kant Were Coming From

David Hume (1711–1776) thought that “philosophy” showed that belief in a ‘real’ world, existing independently of our awareness of it, was unfounded and indefensible. Since Hume was by temperament a hard-headed Scot, he found himself unable to accept this conclusion. He never did accept it, and he spent much of his life discussing the world based on the assumption that realism is true.

As Hume himself puts it, though “profound and intense reflection” leads to skepticism anent the world of independently-existing physical entities, “carelessness and inattention” come to the rescue, and anyone briefly convinced by skeptical arguments will find himself returning to realism within an hour (A Treatise of Human Nature, p. 218).

This is often called “the Humean shrug.” Though he believes that “philosophy” demonstrates that realism is indefensible, he thinks we have no alternative to accepting realism, even though we can only do so on completely slapdash and illogical grounds, because our rational faculties are overpowered by habit and short attention span, which automatically cause us to accept realism as a practical matter. Hume found this conclusion unwelcome, but he couldn’t see any way out, and he went on to take realism for granted in all his writings on religion, history, society, and economics.

Hume’s starting point is empiricism in the strict sense. Empiricism in the strict sense is the view that all human knowledge comes from experience or observation, which involves relying on the evidence of our senses. We accumulate knowledge by observing the world around us and by performing logical operations on our observations. We have sensory experiences, and from these we deduce the existence of tables, chairs, mountains, stars, and the rest of it. We start with an empty mind, a ‘tabula rasa’ (blank slate), and anything that gets into our mind gets there from observation, and therefore comes through our sense organs, such as our eyes and ears.

‘Empiricism’ in the looser, everyday sense means that we should take empirical evidence very seriously. All schools of philosophy are empiricist in this platitudinous sense, and from now on I will use the word ‘empiricism’ to mean only strict or empty-mind empiricism.

The impulse behind empiricism is the conviction that our view of the world is, or ought to be, derived from evidence about the world, and should not be prejudiced by gratuitous preconceptions. Since our evidence about the world can (according to empiricism) only be the information we get through our senses, our view of the world has to be derived from what our senses tell us, and from that alone.

Empiricism thus gives rise to the empiricist project or challenge: show that our common-sense or scientific ideas about the world are or can be derived from our observations of the world, and from nothing else, by a process of pure deduction.

Hume concludes that this cannot be done, that we can’t get, by any rationally defensible method, from accumulated observations (or sensory experiences) alone to a common-sense theory of the world (involving material objects, space and time, arithmetic, cause and effect, and so forth). For example, no observation of the world can ever, by itself, give us good evidence for causation. Hume acknowledges that some truths, such as 2 + 2 = 4, can be established by logical analysis, without any appeal to experience. But these are what Hume calls “relations of ideas,” not specific claims about material reality.

Since empiricists normally start out by wanting to be able to accept realism, empiricists down the centuries have labored long and hard to come up with a defensible way to reason from the evidence of the senses to the existence of physical entities (from now on, except where otherwise stated, I’ll speak of physical objects, the easiest type of physical entity to talk about).

Unfortunately for the empiricist project, once we accept that everything has to be deduced from the evidence of our senses, what ultimately must follow is that all our knowledge about the world is inferred from or constructed from our sensory experience. But since all our experiences are necessarily subjective and mental, this seems to imply that our view of the world is composed of elements that are subjective and mental. Thus, empiricism has sometimes led to idealism, the view that the world (or any aspect of the world we can think about and talk about) is itself made up of subjective mental elements.

Some philosophers, to this day, agree with Hume throughout (only in broad outline, of course; all Hume scholars accept numerous detailed criticisms of Hume). They are empiricists who agree that empiricism shows realism to be rationally indefensible and they agree that we are in practice bound to accept realism, and thus they defend the “Humean shrug.” Hence the realist Willard Quine’s remark that “The Humean predicament is the human predicament” (quoted by Nelson, p. 38). Others are still trying to find a way to rehabilitate empiricism by reasoning from observations alone to an objective world of physical objects, or in other words, by demonstrating that induction can be valid. Some still hope to refute Hume’s demonstration of the impossibility of valid induction by deriving induction from Bayes’s theorem (a well-known theorem in probability theory). Good luck with that.

Nelson’s Student and Professor make a total hash of Hume’s problem of induction. The Student comes out with inane remarks, and the Professor, since he is just Nelson’s other glove-puppet, has no idea what to say and burbles irrelevantly. Nelson’s Student triumphantly asserts that something can be logically possible but physically impossible (p. 210), as though this were something Hume hadn’t thought of! Of course, Hume’s point here is precisely that the conclusion that something is physically impossible can never be deduced from observations alone.

Where do we get the notion that anything is physically impossible? It’s not a truth of logic, so according to empiricism, it must be derived from observation. What Hume has seen is that the claim that something is physically impossible is a conclusion supposedly derived from a finite number of instances, applied to an infinite number of instances. As we might say today, it is a conclusion derived from an infinitesimal bit of spacetime, applied to all of spacetime. Any such supposed inference is deductively invalid. (It is invalid according to modern logic and it is also invalid according to the more primitive and incomplete Aristotelian or syllogistic logic known to Hume.) So, Hume’s question is: What’s the basis for this conclusion, since it is not a truth of logic nor a logical inference from observations? A universal generalization, such as a scientific law or a piece of folk wisdom, can never be deduced from observations alone.
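Hume’s logical point can be stated schematically (this formalization is my illustration, not Hume’s own notation):

```latex
% From any finite set of observed instances of a property P,
% the universal generalization does not deductively follow:
\[
P(a_1),\; P(a_2),\; \ldots,\; P(a_n) \;\not\vdash\; \forall x\, P(x)
\]
% Countermodel: any domain in which P holds of the observed
% individuals a_1, ..., a_n but fails of some unobserved individual b
% makes all the premises true and the conclusion false.
```

Since a single countermodel suffices to show deductive invalidity, no enlargement of the finite list of observations can repair the inference; that is why the problem cannot be solved simply by gathering more data.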

Hume demonstrates the incompatibility of empiricism and realism, but he doesn’t address the fact that this conclusion leaves it open which of the two is to be discarded. He seemed to take for granted that empiricism is equivalent to “philosophy,” or at least, to high-quality philosophy. However, instead of rejecting realism, we can consider rejecting empiricism.

If empiricism is discarded, then we don’t have to start with an empty mind which is then filled with information from observations. We can start with a mind which, before it experiences anything, is already furnished with preconceptions or expectations. According to this approach, the human mind is not a blank slate at birth; it has plenty already written on it (by billions of years of natural selection, though neither Hume nor Kant knew that), and without that stuff that’s already written on it, the mind would be unable to form a picture of the world.

If this is right, then to defend realism requires abandoning empiricism. One way to abandon empiricism is to say that the mind is not tabula rasa but has built-in preconceptions. We can’t form a mental picture of nature without putting something of ourselves into our picture of nature, right from the get-go. This is what Kant thinks, and to this extent, Kant is right.

Kant maintains that the mind comes equipped with general faculties which impose ‘categories’ on our experience. These categories include time, space, causation, and number. Kant holds that, since these concepts cannot be logically derived from the data of experience, they must be already innate in the mind. He considers them indispensable preconditions to having meaningful experiences of the world, and not themselves logically derivable from experience of the world. This leads him to make a distinction between noumena (things in themselves) and phenomena (things as they appear to us in experience). We can’t experience things without ordering them according to the categories, and so we can never get at pure ‘things in themselves’ (things as they are before they are ordered by the categories).

Kant maintains that all observations are the combined product of external reality and mind-imposed ‘categories’, and that we cannot get anywhere by questioning these categories. For instance, we can’t conceive of objects existing without their existing in space. We cannot but conceive of objects as positioned in space, and we cannot question the fundamental nature of spatial relationships, because our minds are so constructed that we can only make sense of the world by thinking of it in terms of spatial relationships. Later neo-Kantians got rid of ‘things in themselves’, and thus became more unambiguous idealists, but this was a departure from Kant.

It is now generally accepted that original Kantianism cannot be defended, not only because of the anti-idealist arguments of Russell and Moore in the 1890s and 1900s, but also because of Einstein’s revolution in physics. One of Kant’s assumptions (a very widely held assumption in the eighteenth and nineteenth centuries) is that Newton’s theory of celestial mechanics simply has to be true. Newton’s theory includes the Euclidean conception of space. We now accept that Newton’s theory is false and that space is not Euclidean.

Euclid is correct in the sense that the theorems follow from the axioms, but the axioms do not correctly map reality. Euclid’s world is fictional and therefore Newton’s world is fictional. The shortest distance between two points is not a straight line and the square of the hypotenuse is not equal to the sum of the squares of the other two sides, because spacetime is curved, a possibility Kant assumed we could never seriously entertain. Because of this, present-day Kantian metaphysicians like Michael Friedman have to accept that the ‘bounds of sense’ are much less constraining than Kant allowed, though they persist in maintaining that there are some bounds there.
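A textbook illustration (not from the original post) of how Euclid’s axioms can fail to map a curved space is Girard’s theorem for spherical triangles:

```latex
% Girard's theorem: on a sphere of radius R, a triangle with angles
% alpha, beta, gamma and area A has an angle sum exceeding Euclid's
% 180 degrees by an amount proportional to its area:
\[
  \alpha + \beta + \gamma \;=\; \pi + \frac{A}{R^{2}}
\]
% Example: the triangle bounded by the equator and two meridians 90
% degrees of longitude apart has three right angles -- an angle sum
% of 270 degrees, which is impossible in Euclid's geometry.
```

The analogous point holds for the geometry of spacetime in general relativity: which geometry actually obtains is an empirical question, not something settled in advance by the structure of the mind.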

If empty-mind empiricism can’t give us realism, whereas we can hope to get to realism by accepting that the human mind has built-in preconceptions, one possible conclusion, at first glance a very natural one, would be that some kind of guarantee of the truth of some propositions, perhaps very general propositions, is innate in the mind. That’s the way Kant reasoned.

But this step of Kant’s is mistaken, according to critical rationalism. Critical rationalism combines the ‘Kantian’ view that the mind has to contribute something logically prior to all experience with the ‘empiricist’ view that there are no guarantees of the truth of any claims about the way the world is, even the most general claims. (For a clear exposition of the critical rationalist position, as it emerges from the criticism and refutation of Hume and Kant, see Realism and the Aim of Science, pp. 31–88.)

(There is much more to Kant than I am mentioning here. For example, I am saying nothing about his argument from the “antinomies” or his use of the transcendental type of argument, both of which give him additional reasons to reject pure empiricism and supplement empiricism with self-evident truths known independently of experience.)

Critical rationalists say that Kant is right to conclude that the mind cannot make sense of the world unless it has built-in preconceptions which it tries to impose upon the world, but Kant is wrong in thinking that these preconceptions have to take the form of unquestionable, immovable truths. From a critical rationalist point of view, humans have an innate drive to jump to conclusions about the world, a thirst to believe in theories, along with the capacity to abandon any individual theory and replace it with a new and different theory. These theories can’t be inferred (by any valid logical process) from experience but can be suggested by experience, and once formed, can sometimes conflict with experience, and therefore can be tested against experience, possibly leading to their abandonment and replacement with new theories.

The mind is not a passive recipient of observations, but is an active explorer. Any active explorer has to begin with expectations or preconceptions. Observation must always be preceded by theory (including the unconscious theories which we only become aware that we held when they are surprisingly contradicted by experience).

The Ascent from Naivety to Physics

A classic argument against naive realism is the illusion of the bent stick. A straight stick half immersed in water looks (to someone with no prior experience of half-immersed sticks and not having been told about optics) as if it were bent. Usually, most of us learn at some point in childhood that the stick is not really bent, even though it looks bent.

Nelson’s Student has a fine old time ridiculing this point by appealing to the fact that most of us have experience of half-immersed sticks or have been told about optics (pp. 56–64). But this entirely misses the point. (Indeed, it implicitly denies the point, which is absurd. A straight stick half-immersed in water does indeed look bent.) It is question-begging to appeal to the common-sense conclusions we have accepted, as data, when what we are evaluating is precisely the claim that those conclusions were arrived at invalidly—that there is no logically sound way to get from experience to those conclusions.

Learning that the bent-looking stick is straight is one of many corrections we make to the infant’s ‘naive realism’ as we go through life. Another well-known case is the understanding that objects in the distance look smaller because they are further away. This understanding is not automatic but has to be discovered.

Traditionally, the BaMbuti Pygmies lived in the forest and never left it, their visibility always restricted to about a hundred yards. Colin Turnbull went to live among the BaMbuti to study them, and he took one of them outside the forest to the plains:

“As we turned to get back in the car, Kenge looked over the plains and down to where a herd of about a hundred buffalo were grazing some miles away. He asked me what kind of insects they were, and I told him they were buffalo, twice as big as the forest buffalo known to him. He laughed loudly and told me not to tell such stupid stories, and asked me again what kind of insects they were. He then talked to himself, for want of more intelligent company, and tried to liken the buffalo to the various beetles and ants with which he was familiar.

“He was still doing this when we got into the car and drove down to where the animals were grazing. He watched them getting larger and larger, and though he was as courageous as any Pygmy, he moved over and sat close to me and muttered that it was witchcraft. . . . Finally when he realized that they were real buffalo he was no longer afraid, but what puzzled him still was why they had been so small, and whether they really had been small and had suddenly grown larger, or whether it had been some kind of trickery.”

I grew up in England, where ambulances have a clearly marked two-note siren. As a child, I was vaguely aware that when an ambulance went by me, the pitch of its siren would drop. (According to my recollection, the fall in pitch was well over a whole tone, in fact nearly a third, but this seems incredible now, and perhaps my memory has exaggerated it. Of course, the exact drop would depend upon how fast the ambulance was going.) I am sorry to say that I never figured out by myself that the drop in pitch was an illusion, caused by the fact that things emitting a sound and moving away from the hearer are perceived as having a lower pitch than things emitting the same sound and moving towards the hearer. It was only when I read about the Doppler Effect that it suddenly dawned on me that this was the explanation of something I had heard many times without paying attention (and, by the way, that I was an idiot). I surmise that there might even be some adults who never learn this, and continue to think that the pitch of a sound has dropped when really it has remained the same.
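The size of the drop can in fact be estimated from the standard Doppler formula. The sketch below (a simple moving-source model, assuming a speed of sound of about 343 m/s; the speeds chosen are illustrative, not from the original post) suggests that a recollection of nearly a third is plausible for a fast-moving ambulance:

```python
import math

def doppler_shift_semitones(v_source, c=343.0):
    """Pitch drop, in equal-tempered semitones, heard as a sound source
    passes a stationary listener: the approaching frequency f*c/(c - v)
    falls to the receding frequency f*c/(c + v), so the frequency ratio
    across the pass is (c + v)/(c - v)."""
    ratio = (c + v_source) / (c - v_source)
    return 12 * math.log2(ratio)

# A city-speed ambulance (~13 m/s, about 30 mph):
print(round(doppler_shift_semitones(13), 2))   # about 1.3 semitones
# A fast ambulance (~30 m/s, about 67 mph):
print(round(doppler_shift_semitones(30), 2))   # about 3 semitones, near a minor third
```

So the remembered drop of "nearly a third" would require motorway speeds; at ordinary urban speeds the drop is nearer a whole tone, which fits the author's suspicion that memory exaggerated it.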

When I was four or five, I spent the summer with my grandparents in Rothesay, Isle of Bute. Walking with my grandfather I pointed to the sky and said “What are those white things?” He looked at me intently and said “Clouds,” or since he was a Scot, “Cloods.” I surmise he had a puzzle. Was I seriously retarded, or could it be that children from English cities had never seen clouds?

I had a puzzle, which, being an introvert and half-Scots, I would never mention. Did he mistake my question as referring to those big fluffy white things which were, obviously, duh, clouds? Or was he correctly answering my question, and was it the case that those objects I was asking about really were a special, rare type of cloud? Soon after this, I came to understand that these small semi-translucent circular white objects were not in the sky at all, but in my visual apparatus, that they were ‘floaters’ (and, by the way, that I was an idiot). My sharp awareness of the floaters was temporarily enhanced by the unusual experience of a ‘big sky’, undistracted by buildings, tall trees, or other objects.

We repeatedly make corrections to our picture or theory of the world, as we learn by trial and error to interpret the evidence of our senses more accurately. The naive realism of the toddler gives way to the less naive realism of the adult, and then to the even less naive realism of the scientifically informed adult.

You might wonder what happens in the first few weeks, months, and years following birth. We have recently come to know a lot about this, thanks to Alison Gopnik and her colleagues. The answer is that even more elementary ways of interpreting the world have to be learned by conjecture and refutation, or trial and error, the only way they could be arrived at. The fact that objects can continue to exist when they disappear from view has to be discovered by trial and error. Some of these things we learn so fast that it seems likely we have an inborn proclivity to learn them. For example, contrary to what used to be supposed (on the basis of armchair speculation), the baby understands he is an individual separate from the rest of the world by no later than the first few weeks after birth.

We learn from science that the Sun does not move across the sky; the appearance of movement arises because the Earth is spinning. The stars do not twinkle; the appearance of twinkling is due to the interference of our planet’s atmosphere. We sometimes see a rainbow in the sky, but there is no object there corresponding to this rainbow; it’s a trick of the light. There is no pot of gold at the foot of the rainbow, not just because there are no leprechauns, but because there is no foot of the rainbow. When we witness a mirage, there is no water in the desert; that’s another trick of the light. A pain in my toe is not in my toe at all; my brain makes a map of my body and the pain is in the ‘toe’ part of that map—hence, I can experience a completely genuine pain in my toe even if my legs have been amputated.

As we learn more and more, our realism becomes less and less naive. Educated people take for granted that the vast majority of the volume of a solid object such as a granite rock is empty space, and that there is more heat in an iceberg than in a red-hot poker, though these assertions, now common sense among the scientifically literate, would have sounded like mystical riddles as recently as two hundred years ago.

Bertrand Russell famously made the remark, quoted by Nelson, that “Naive realism leads to physics, and physics, if true, shows that naive realism is false.” (The Slightest Philosophy, p. 68).

Russell was fond of paradoxical-sounding bons mots, but the truth stated here, slightly misleadingly for paradoxical effect, is not a genuine paradox. When science corrects naive realism, the scientist replaces naive realism with a less naive realism, and this is in principle no different from the corrections a normal adult non-scientist has been making all her life. When a correction is made, all the previous observational history of the person making the correction is re-interpreted. Some parts or aspects of the earlier naive realism are preserved in the not-so naive realism that supplants it. The person, so to speak, checks to make sure that nothing she did when getting to the new theories relies upon the now discarded parts of the old theories.

Science refutes many notions held by naive realists, but it does so in such a way that we can (and do) adapt our common-sense notions to certain corrections by science. In doing so, science explains the partial or approximate truths contained in our early version of naive realism. It is a key part of critical rationalism that false theories, including theories once held to be true but now acknowledged to be refuted, can continue to be very useful.

The air of paradox in the Russell quote arises from our tendency to read it as saying that science is somehow indebted to uninformed naive realism. Although this is true in terms of chronological progression, it is not true in terms of the logic of statements. By the time someone is undertaking scientific enquiry, they have corrected and replaced their earlier naive realism. Thus the statement “Naive realism leads to physics” is parallel to “The belief that the Earth is stationary leads to the belief that the Earth is moving,” or “Newton’s theory of gravity leads to Einstein’s.”

Common sense is a set of theories of how things are, and as we become more scientifically educated, we understand that common sense uninformed by science includes false theories which require correction, though often false theories that have some workable approximation to the truth within a limited range. But a new common sense emerges among scientifically educated people. The new common sense is less naive than the common sense of the scientifically illiterate. The old picture is abandoned and the new picture is closer to the truth.

The new-born baby’s naive realism is corrected and revised, eventually leading to the very different naive realism of the toddler, which is further corrected and revised leading to the very different naive realism of the adult. If the adult paid attention in high-school science (and if the teacher still teaches any science instead of leftist ideology), his realism will be even more different and his conception of the world even more accurate. But the picture always remains incomplete; it may be indefinitely modified and improved.

If we ever get an opportunity, it would be fruitful to analyze the ‘common sense’ of a feral human, such as one raised by wolves. (Thanks to the techniques developed by Gopnik and others, we can now analyze what babies, and presumably feral humans, think about the world, even though we can’t literally ask them.) Past incidents of this phenomenon seem to show that the feral human is, once past a certain age, permanently unable to learn some aspects of even the most naive form of normal adult ‘common sense’. I conjecture that such a research project would find that exposure to language is essential for arriving at the adult’s naive common-sense view of the world. If so, this would indicate that mere observations of physical objects would never be enough for the child to acquire the rudiments of common-sense understanding. Even the most naive common sense of which we are normally aware is a highly elaborate theoretical system that might require an input from culture, especially language. If this were true, it would predict that a child raised from birth by a single adult in an isolated cottage, where the adult didn’t talk much, would be mentally retarded.

The Semantic Argument for Naive Realism

Nelson has an argument which she apparently thinks supports naive realism, and which I can summarize as follows:

Skeptics have claimed that when we think we see external physical objects, what we really see are impressions, or appearances, or sensations, or sense-data from which we infer the existence of external physical objects. But this is wrong because we really do see external physical objects.

Her Chapter 3 is particularly concerned with this issue, though it makes numerous appearances elsewhere in her book.

The thing you have to be clear about when approaching any argument like this is that it is semantic. It’s not talking about the way things are in the world, independently of our discussion, but only about the rules governing the way we talk about things in the world. Here the assertion is that we’re not allowed to use the verbs ‘see’ or ‘perceive’ in a particular way, even though that way is part of actual English usage. If you fail to grasp this point, you will be bewitched by mere words and unable to talk any sense about actual perception.

The simple fact is that in English, we can use ‘see’ or ‘perceive’ either to refer to the experience of seeing or perceiving, or to refer to the experience of seeing or perceiving when that experience is appropriately linked to the existence of an independently-existing object seen or perceived.

Nelson supposes that the word ‘see’ must be used either for seeing a physical object (when it’s really there, and the appropriate causal connections between the object and our visual apparatus are in place), or for having the experience of seeing a physical object (when this might be a hallucination, and the object might not be really there, or the appropriate causal connections might not be in place).

In correct English, the word ‘see’ can be used for both, and it has several other meanings too (such as ‘understand’, ‘ensure’, or ‘match a bet’). Many words in natural languages have several meanings, sometimes related, though distinct, other times not obviously related. Take the word ‘table’ for instance. And I recently wrote an article in which I discussed two quite different though related meanings of the word ‘fact’.

Nelson’s “Student” and “Professor” go on for page after page arguing to no effect because they don’t acknowledge the simple truth that the word ‘see’ has both meanings in English. This is allowing yourself to be bewitched by words.

Nelson says that “if the relationship between your brain and your retina is called ‘seeing’, then we’re going to need a new word to refer to the relationship between your eyeball and a boat on the horizon” (p. 53). No, that is simply a false statement about English usage. We have to accept that, in the English language, the word ‘see’ has more than one meaning, just as many other words do. The word ‘see’ can be a phenomenological report, describing a subjective experience, or it can be a claim to have had that subjective experience in an appropriate causal relation to a real external object.

Both uses or senses of ‘seeing’ occur in English. It’s somewhat ironic that Nelson gives a chapter the title “Seeing Things.” ‘Seeing things’ is a standard and very common term in English for hallucinating—seeing things that aren’t there and don’t exist at all.

It’s a fact about the English language that ‘see’ has more than one meaning, and can be used to apply to subjective visual experiences with no external correlate. If you were instructing a foreigner learning English that the word ‘see’ could not be used in this way, you would be telling him an untruth and impairing his grasp of English.

You might wish that ‘see’ did not have this meaning in English (and the corresponding term in, at least, all closely related languages I’m aware of, such as French, German, and Spanish). You might propose that we adopt a different convention for epistemological discussions, and get rid of this meaning of ‘see’. This is what some philosophers, including some quoted by Nelson, have proposed. You might even make this proposal as a linguistic change in the English language, the way some folks advocate that we should load the language up with 272 pronouns to stand for 272 ‘genders’. But as things are, the use of ‘see’ to denote the experience of awareness of a mental image is normal, correct English.

The fact that there is a logical gap between our experience of seeming to perceive an external object and the actual existence of the external physical object is not a fact that can be made to go away by any mere analysis of terminology.

Since I was reviewing Nelson’s book, I skimmed through John Searle’s recent book in which he advocates naive realism, though he prefers to call it “direct realism.” Searle is an outstanding philosopher and I was much impressed by his work on intentionality and on consciousness (though I was already prepared for the possibility that he might screw up badly, because of his really terrible attempt to derive ‘ought’ from ‘is’.)

I found that he has an argument essentially the same as Nelson’s. Searle says that we “directly” perceive external physical objects, and that when we mistakenly think we’re perceiving an object (mistaken because there’s really no object there) we’re not actually perceiving anything.

This argument, in its Nelsonian or Searlian form, is a linguistic or semantic argument. It is pure talk about talk, and even as such it is false. It claims that what we mean by ‘seeing’ is seeing an external physical object, and that therefore it must be wrong to say that we ‘really see’ some intermediate mental entity between ourselves and the external physical object. This is simply a false allegation about correct English usage. Other writers, some of them quoted by Searle, who say that we only ‘see’ in the sense of having a visual experience, and that we don’t really see external objects, are of course equally wrong.

Thus Nelson’s and Searle’s arguments (really the same argument) fail. We can be said to see something intermediate between the object and ourselves. But also, it’s not essential to talk that way. In other words, it’s a fact about English that ‘seeing’ (in one sense) can be used as synonymous with ‘seeming to see’ (where ‘see’ has another sense). We can put the same point differently by saying, for example, that we can have the same visual experience whether or not there is really some external object (and precisely the appropriate external object) causing that visual experience. Whether or not we choose to say that the experience without the object is a case of ‘seeing’ is simply a question of freedom of choice within linguistic convention and gives us no information whatsoever about perception or epistemology.

The fact that the Nelson-Searle argument is purely semantic can be confirmed by translating the traditional discussion of perception, including the arguments for representative realism, into Nelsonian or Searlian terminology. Instead of saying that we ‘see’ an immediate ‘object’ of sensory experience, we can say that we ‘seem to see’ or that we have ‘a visual experience of’. The arguments are unchanged by this translation, and still make the same sense, confirming that the Nelson-Searle argument is exclusively about the meanings of words.

Other Arguments for Direct Realism

Searle, of course, gives other arguments for what he calls direct realism, and I won’t try to cover most of them here. I will mention one argument, because it’s very simple, and he finds it completely convincing while I find it totally unpersuasive. The same argument is given by Armstrong—both of them acknowledge getting it from Berkeley (who used it to argue against representative realism and therefore in favor of non-realism).

Searle’s argument is that representative realism claims that the sense data (or whatever we want to call the subjective experience of perceiving) resemble the object perceived. He says this can’t be right because the object perceived is invisible and undetectable except via the sense data, therefore the sense data and the object cannot be said to resemble each other. He claims this is like saying I have two cars in the garage which resemble each other even though one is completely invisible (Seeing Things as They Are, pp. 225–26).

But how can Searle avoid saying that the experience we have of seeing something has some resemblance to the object seen? If it doesn’t, it can’t allow us to conclude that we are ‘seeing things as they are’, can it?

Anyway, this argument proves too much, because it rules out all cases of becoming aware of B through awareness of A, where there is no other way to become aware of B. Searle’s argument would imply that it must always be wrong to say that a model we have mentally constructed of some phenomenon we have detected resembles the phenomenon. And this is trivially false.

For example, consider the technique of observing the structure of deep layers of the Earth by means of tomography. (Think of the guy looking at the computer screen in Dante’s Peak.) There is no other way to observe this structure except through tomography, but we don’t therefore conclude it’s meaningless to say that the graphic which appears on the screen resembles the structure of the deeper levels of the Earth.

You might object to this that it is not a pure case of perception, since we are using normal vision to look at the graph on the computer screen. This is actually irrelevant, but if it bothers you, imagine that, some time in the future, when for some reason it becomes vitally important to be aware of changes in the structure of deep levels of the Earth, people’s brains are wired to the tomography equipment, so that they don’t look at a screen but just see the graphic as a visual experience. People would be perceiving the structure of deep layers of the Earth, by dint of the fact that they were having a subjective experience of seeing something which resembled the deep layers of the Earth. Searle would have to say they are directly perceiving the deep layers of the Earth and Nelson would have to say they are naively seeing them.

After all, what does ‘resemblance’ mean, in the context of perception? It means that the mental entity gives us information about the external entity perceived. For example, a histogram showing the weight by age of the US population resembles the distribution of weight by age in the US population, and we can say this because the former gives us information about the latter. Again, if it bothers you, imagine that people’s brains are wired to the output of the research organization which collects and processes this statistical data, and they then have the experience of seeing a histogram. We would then be perceiving the distribution of weight by age in the US population, and we could say that our experience of perceiving the histogram (which would exist only inside our skulls) resembles the actual distribution of weight by age in the US population, something that we would have no other means of observing.

Someone might feebly object that providing information is not the same as resemblance, but in that case we can say that representative realists don’t need to employ the word ‘resemblance’. The representative realist can simply say that the experience of seeing, when it is veridical, gives us information about the object seen.

We come up with the hypothesis that physical objects exist, in order to make sense of our subjective experiences of perception. Generally, we come up with this hypothesis in our first couple of weeks outside the womb. How can that seriously be disputed?

If I have two cars in the garage, one of which is invisible, and we’re thinking about the hypothesis that properties of the visible car are caused by properties of the invisible car, then it makes perfect sense to say that the visible car resembles the invisible car. This assertion would be the statement of a hypothesis, a guess, a surmise, a conjecture, of course. But all our statements or beliefs about the world are hypotheses, guesses, surmises, or conjectures. They can never be anything else, or so I surmise.

The title of Searle’s book is Seeing Things as They Are. But this might be taken as hyperbole. Aside from optical illusions, which are everywhere, our perceptions are highly selective: we don’t see an object’s mass, chemical composition, electrical charge, radioactivity, ultra-violet ‘color’, or temperature. There could be an organism which saw all these, but did not see size, shape, or color in the human-visible range. Presumably such an organism would, according to Searle, also be seeing things directly (or, according to Nelson, naively) and ‘as they are’, even though its visual experience or sense-data would be very different from ours. Presumably a bat which perceives objects by echolocation also perceives things, naively or directly, as they are. And the same goes for a dragonfly or a paramecium. So there could be many quite distinct ways of ‘seeing things as they are’. I won’t say this is an incorrect use of the phrase ‘seeing things as they are’, but it’s a bit different from the most natural and immediate understanding of that phrase, according to which there would be just one form of visual appearance which could be called ‘seeing things as they are’. In that sense, there is no such thing as seeing things as they are, and never conceivably could be, because there are many different ways of seeing things, not one of them privileged over the others. In that sense, we can only see things as they are represented, though we can speculate about the accuracy of the representation, and even subject it to tests, perhaps improving it.

Is There Some Representational Entity between Perceiver and Perceived Object?

People who defend naive or direct realism sometimes frame it like this: when we perceive a physical object, there is nothing, such as a mental state or distinctive subjective experience, intermediate between us and the object.

This strikes me as so absurd that it is not even intelligible. Nonetheless, naive or direct realists do tend to use turns of phrase that evoke it. They seem uncomfortable with any admission that there is anything which might be called sensa or sense-data. The vague notion that it’s possible to deny the existence of any ‘intermediate entity’ may be what unconsciously lies behind the appeal to the purely semantic argument I refuted earlier.

Nelson agrees that “it’s hard to object to the claim that we can perceive an oncoming freight train only by means of data we have gathered by means of our senses” (p. 14). Indeed, very hard! This looks like a grudging admission that sense-data do exist, but one page later, Nelson refers to “representationalist reifications,” insinuating that the data we have gathered by means of our senses don’t really exist. (To reify is to culpably treat an abstraction as though it had concrete existence.) As we read on, later in her book, we continue to get the feeling that Nelson has a hard time letting go of the ‘absolutely unmediated’ theory.

Some naive realists apparently feel that if they grant the existence of something intermediate, such as a person’s subjective experience identical to that involved in perceiving an external object, they will have given the representative realist a foot in the door. That’s right!

Historically, some sense-data theorists got themselves into a pickle because, being mostly materialists, they felt they had to try and explain sense-data in terms of the physics of perceptual processes. If my experience of seeing a tree is not the tree (obviously correct), and not in the tree (also obviously correct), then perhaps it’s somewhere in my optical apparatus, such as in my retinas or in my brain. Hence the many different views and coined jargons in this area. But I would say that we might not yet know enough about subjective mental processes to explain them in worked-out physical specifics. (There can be little dispute that the subjective experience of perception occurs somewhere inside the perceiver’s skull.)

We can explain the essential point here and elsewhere in purely phenomenological terms (we can stick to the subjective experience without trying to translate it into physics or physiology). Whether we see a tree or have a hallucination of seeing a tree, we have a certain type of visual experience. The visual experience is common to seeing a tree and hallucinating seeing a tree. (It’s also common to a third type of possibility, for instance that we’re seeing a hologram of a tree.) That visual experience of seeing something is the kind of thing that used to be called a sense-datum. The term ‘sense-datum’ is currently still in some bad odor (which arose because of many philosophers’ involvement, in the 1950s and 1960s, with the vogue for ‘linguistic philosophy’ or ‘ordinary-language philosophy’, derived from J.L. Austin and the later Wittgenstein). I don’t care whether we rehabilitate it or drop it. We can call it a perceptual-seeming experience, or whatever. It is something that objectively exists, as any subjective experience objectively exists, and in the case where the perception is veridical, it is intermediate between the perceiver and the external object perceived.

I will briefly mention one elementary blunder often made by naive or direct realists I have talked with. They think that seeing something on television, or in a mirror, or through a telescope is indirect whereas seeing something with the naked eye is ‘direct’. This distinction is bogus. If seeing something with the naked eye is direct, then seeing something on television, or in a mirror, or through a telescope must be direct.

I see some things with the aid of spectacles. I could use contact lenses. Or I could have laser surgery on the lenses of my eyes. It should be obvious that there is no distinction in principle between these three. My body has equipment, and it can make no difference in principle if I artificially modify or augment my body’s equipment. When Armstrong or Searle call themselves ‘direct realists’, the directness does not lie in the causal process of perception, but in the alleged non-existence of the sensory experience as an object of perception.

Some deaf people can be cured of their deafness by installing a cochlear implant in their skull. This equipment works on completely different principles than the natural human apparatus of hearing. But, after a bit of practice by the patient, the result (when the operation is successful) is very similar (as described in Michael Chorost’s book Rebuilt). It is clear that we can’t reasonably say that the cochlear implant is any more or less direct than the natural system. Artificiality in itself does not make perception any less direct (and epistemology fails unless it easily encompasses cyborgs).

If any perception is direct, then all perception is direct. However, as a matter of fact, all conscious perception is indirect, and can only be indirect, in the sense that the experience of perception is not the external object perceived, and persons do conjecture the existence of the external object perceived to account for their experiences of perception.

Is This a Hallucination which I See Before Me?

In an attempt to head off the implications of the fact that people sometimes hallucinate—seeing things that aren’t there—Nelson (through her glove-puppet Student) maintains that hallucinations are always misinterpretations of something that’s really there (pp. 74–75). So there are no genuine hallucinations, only misinterpretations of things perceived. This bold claim heroically contradicts everything that psychologists know about hallucinations.

For example, there are about one million sufferers from Parkinson’s Disease in the US, and over a third of them experience hallucinations, most commonly seeing someone they know who isn’t really there, often someone who has died (Oliver Sacks thinks that these hallucinations are not due to the disease, but to the medications). These Parkinson’s patients see a real person, large as life, in complete detail, every hair in place, a few feet away from themselves.

Are these sick people misinterpreting a speck of dust or a ray of light as a human being? There’s no evidence for this, and if it were true, the gap between the objective stimulus and the hallucinated object would be enormous; for most purposes the situation would be the same as an apparent perception with no external stimulus at all.

In any case, arguing for skepticism about perception by appealing to hallucinations or illusions is ultimately merely illustrative and rhetorical. If, as far as we could tell, perception were always one hundred percent veridical, there would still be a logical gap between the subjective experience of perceiving an object and the independent existence of that object, though I admit it might then be tougher, as a practical matter, to get anyone interested in that fact.

Searle says that he prefers the term ‘direct realism’ to ‘naive realism’, because ‘naive realism’ has become associated with a group of philosophers known as disjunctivists (Seeing Things as They Are, p. 15). The fact that Nelson seems to deny that one can have the same subjective experience when hallucinating as when seeing a real object makes me surmise that possibly Nelson is a disjunctivist. But since I haven’t read much of the literature on disjunctivism and since Nelson’s definition of naive realism is so very obscure, I’m not sure of that.

Armstrong’s Three Arguments against Representationalism

I have said that Armstrong’s 1961 book is the best statement I have seen of a case for direct or naive realism. Armstrong starts by assuming that in the theory of perception there are three live alternatives, representative realism, direct realism, and phenomenalism. Armstrong has a chapter on the refutation of phenomenalism (much of which I agree with) and a chapter on the refutation of representative realism, in which he presents three arguments.

First, he says that according to the claim that sense-impressions are the only immediate objects of perception, we can have no reason to believe that there are physical objects. But, as I have pointed out, talk about what is or is not an immediate object of perception (or immediately perceived) is nothing more than talk about talk. If we recast the same point in different language, we remain with a logical gap between experience and external objects, and so the same objection applies to direct realism. The direct realist does not deny that a subjective experience of perception is essential to perception, nor that the subjective experience of perception is not the external object perceived, nor that the properties of the subjective experience of perception are not the properties of the external object perceived.

Armstrong states that if the representative theory is correct, “we have no evidence at all for passing from the immediate perception of sense-impressions to the mediate perception of physical objects” (p. 29). This is pure Hume, and as far as it goes, properly interpreted, it is perfectly correct.

Armstrong then considers the point that although we have no inductive evidence of the existence of the physical world, “we might form the hypothesis of the existence of the physical world; and, having formed it, we might find that it served to explain the regularities and irregularities in the flow of our sense-impressions” (p. 30). This is excellent, but Armstrong avoids the natural conclusion with the following interesting passage:

“The objection seems just, and blunts the edge of the argument. But it does not turn it altogether. For surely we are not prepared to degrade bodies into hypotheses? We want to say that our assurance of the existence of the physical world is far stronger than any assurance we could obtain by indirectly confirming a theory. If the Representative theory were true, it would be proper to have a lurking doubt about the existence of the physical world. Yet such a doubt does not seem to be proper” (p. 30).

This passage begins with a careless slip, since it is not bodies themselves which are being “degraded” into hypotheses, but our assertion or belief or theory that there are bodies. We may want to say that our assurance of this or that is stronger than any assurance we could obtain by indirectly confirming a theory, but if so, that want must be forever unrequited. The assurance we get from indirectly confirming a theory is the strongest possible assurance for any statement or belief. That’s as strong as it gets. Some might say that logical truths are stronger, and I don’t dismiss that out of hand; however, no one claims that the existence of physical objects is a logical truth. The whole passage tends to confound our subjective feelings of conviction with what we can logically demonstrate.

Armstrong’s second argument is the one about resemblance, which I have already refuted in reference to Searle. And again, the essentially semantic nature of the question as posed by Armstrong and Searle means that the objection simply reappears with direct realism, for, setting aside the language of direct and indirect perception, the fact remains that the subjective experience of perception has qualities quite different from the objective properties of physical objects. This logical gap cannot be made to go away, which is of course why we ought to acknowledge that the subjective experience represents the physical object.

Armstrong’s third argument I find difficult to understand. He claims that it makes no sense to think of a physical object that can in no way be “immediately perceived” (p. 33). Again, this is leaning heavily on the semantics. He says that we can’t say that all perception of external objects is mediate, because this implies that they could be immediate, and if they could be immediate they must sometimes be immediate (that doesn’t sound like a fair summary, as it is so obviously wrong, but it is the best I can make of what he says on p. 33).

This kind of reasoning is fallacious. Just as we can entertain the possibility that there are no disembodied minds, no living things that don’t respire, or no particles lacking relativistic mass, so we can entertain the possibility that there are no examples of conscious perception without a subjective representation which is distinct from the external object perceived (but which conveys information about the object perceived and in that sense resembles it).

Political Implications of Epistemology

Views like Nelson’s have often been prevalent among libertarians, usually stemming from Ayn Rand, though as far as I noticed (Nelson has no index) Rand is not mentioned by Nelson. And I don’t know whether Rand, though she was certainly extremely naive in some ways, would have admitted to being a naive realist.

Libertarians who embrace certain metaphysical or epistemological views often believe that these views are somehow congenial to libertarianism, whereas any views they find unconvincing are the first steps to the Gulag. I criticized some of these theories about the link between epistemology and politics in my 2002 article ‘Ayn Rand and the Curse of Kant’ and here I will just add a few observations.

As Nelson seems to uneasily acknowledge, historical claims about the causal relation between philosophy and politics have nothing to do with the merits of the philosophical theory in question. For instance (an example she mentions, p. 140), supposing it to be true that Darwinism caused the Holocaust, this would have exactly no bearing on the truth or value of Darwinism as a scientific theory.

So, even if it were true that skepticism about perception had to lead to the Gulag and the Final Solution, this would not affect the question of whether skepticism about perception is sound or reasonable. But it would be saddening, and to some discouraging, if it were the case that the dissemination of a sound (or even minimally defensible) philosophical analysis led by necessity to a horrendous political outcome. I don’t think we have to worry about that, because the kind of historical causation from philosophy to politics advanced by Rand, Peikoff, Kelley, and Nelson clashes with the historical evidence at so many points.

Nelson gives us no extended discussion of politics in this book, but she sprinkles in brief remarks about politics here and there. Here are some examples.

She repeatedly associates David Hume with Nazism (pp. 222, 231, 239). The only tangible link mentioned is that Johann Georg Hamann and Johann Gottlieb Fichte each admired Hume and each influenced German nationalism. So this, as Huck Finn might say, is a stretch.

She attributes Jonathan Edwards’s role in the Great Awakening, and the fact that some of Edwards’s followers burned books, to the fact that Edwards held to a quasi-Berkeleyan idealism (pp. 228–230). But many evangelical preachers with more impact than Edwards, such as the Wesleys and Whitefield, don’t seem to have been influenced by skepticism about perception. Evangelical ‘revivals’ may be explicable by common factors such as residual Christian beliefs among the population at large, plus people’s innate desire for an all-embracing theory that will help to make sense of their lives, plus the new theological idea (preached by the Moravians and from thence transmitted to John Wesley) that a kind of emotional born-again experience could provide the believer with assurance of salvation.

Book burning has been an occasional feature of Christianity for two thousand years. Edwards was a Calvinist, and the Calvinists sometimes burned books, two centuries before Edwards (or Berkeley). The Calvinists in Geneva not only burned Servetus’s writings, but, just to be on the safe side, burned Servetus too. It’s excessively creative to scan the history of Christian book-burning, find this one preacher who was a philosophical idealist, and attribute the book-burning by some of his followers (not by him) to his idealism (of which those followers were probably unaware). Nelson says Edwards was “inspired” by his idealism, which goes beyond the evidence. Robert Lowell wrote a poem perhaps implying that Edwards was inspired by his observations of spiders, equally a stretch, though more forgivable in a poem.

Out of all the dozens of evangelical preachers, who are realists, just one of them is (in his philosophical writings unknown to the wider public) an idealist. Therefore idealism causes evangelical revivals. The logic is certainly . . . impressive.

Since I have already warned that I might ramble, I will also draw attention to the fact, often briefly mentioned in the literature on Berkeley, that there is a tension between idealism and Christian doctrine. Berkeley, a bishop in the Church of Ireland, had to watch his step. The Bible is realist. The teaching that God became flesh in Christ does not easily harmonize with the notion that flesh is nothing more than an idea in the minds of various persons. Genesis 2:7 tells us that God made man out of the dust of the ground. So, the dust of the ground predated man. And so on.

The fact that proponents of traditional religion would sometimes point to the skeptical literature to support their resort to ‘faith’ doesn’t have the ominous ramifications that Nelson imputes to it (pp. 40–41). There are many such historical facts; people use whatever arguments they find to hand. For instance, defenders of traditional religion will sometimes say that science keeps changing its mind (which is true), while some sacred scripture stays the same (also true). This shouldn’t make us look askance at the fact that science is always open to revision.

Sixteenth- and seventeenth-century Christian teachers pointed to skeptical arguments as showing how we couldn’t rely on our own abilities to get at the truth, and should therefore accept what Christianity taught. Nelson apparently concludes that philosophical skepticism strengthened belief in Christianity. But does Nelson really suppose that if these skeptical arguments hadn’t been published, those Christian teachers would have volunteered that Christian doctrine was open to doubt? In any case, this was just one response. The more orthodox line was that basic elements of the religion, such as the existence of God, could be demonstrated by natural reason.

One of Nelson’s cute throwaway lines is: “A Cambodian guerrilla deep in a steaming jungle carries a paperback copy of Rousseau, and the next thing you know, a million people are dead” (p. 17). Did I somehow miss the memo that the Khmer Rouge renounced Marxism-Leninism and went back to Rousseau? This would imply, for example, that the Khmer Rouge must have repudiated collectivism in favor of private ownership.

Although historians disagree about the extent of Rousseau’s influence on the American Founding Fathers, no one would dispute that there was some appreciable influence. A thousand times more likely than a Khmer Rouge soldier carrying a copy of Rousseau is one of George Washington’s soldiers carrying a copy of Rousseau or Hume, and this would account for the fact that the United States immediately became a totalitarian dictatorship. Oh, wait . . .

How might metaphysical anti-realism lead to hideously repressive forms of government? Two stories are detectable in Nelson. The first is that being skeptical about perception, or doubting objective reality, directly has the effect of making you more prone to totalitarian views. The second is that skepticism about perception historically caused romanticism (which includes disbelief in the efficacy of reason, or valuing emotion above reason), and romanticism historically caused totalitarianism. That’s where Rousseau comes in, since he has been seen as the father of romanticism. But Rousseau influenced Kant, not Kant Rousseau, which looks to be the wrong way around.

A problem with historical cause-and-effect stories like this is that they depend on numerous thinking individuals reacting deterministically in a specific manner to a specific situation. So, people who are skeptics about perception must be bound to respond by valuing emotion above reason (and anyone who values emotion above reason must not be doing it for any other reason than skepticism about perception). But if they’re bound to, why does it take generations for them to do it? Then, someone who values emotion above reason (and we must assume that there’s nothing else to romanticism) must respond by becoming politically totalitarian in outlook (and there must be no other reason why anyone would become totalitarian in outlook). If any of these postulates doesn’t hold, the theory is in trouble.

Other questions follow thick and fast. For instance, German metaphysics, when the middle-aged Kant started his revolution, was dominated by the rationalism of Christian Wolff. Would something less or more anti-realist have come along if Kant had died at the age of fifty-six, or would Wolffian rationalism have continued? I suggest we just can’t say. Were Rousseau’s proto-romantic writings somehow connected with skepticism about perception? I doubt it. And is it self-evident that Rousseau’s influence, in its net effect, favored totalitarianism? Nelson alludes to something called “romantic totalitarianism” (p. 231). Really? Is that a thing? If the works of Byron, Berlioz, or Poe somehow advanced the cause of totalitarianism, must we accept the corollary that Balzac, Stravinsky, or Joyce fought back on liberty’s behalf?

What tends to happen with believers in such a wondrously far-fetched story is that they cite a few cases which comply with the story, or cases which their own limited knowledge enables them to falsely suppose comply with the story. However, a few compliant cases do not really corroborate such an ambitious theory of historical causation. All cases, or at least a big majority, must conform, or we ought to discard the story.

For example, what made a lot of people support totalitarianism in the twentieth century? What we find, if we look at the evidence (and I have looked), is that totalitarianism emerged out of economic collectivism. And people became economic collectivists for specific, identifiable reasons: popular theories about economics pointed to collectivism. These economic theories owed nothing to skepticism about perception or to romanticism.

So, to take one strand out of many (but the most prominent strand), if we examine the historical record of those who became socialists, and more particularly Marxists, we observe a number of things. They were epistemological realists, and they denounced skepticism about perception as the worst philosophical crime. They were not romantics by ideological background, but if anything, anti-romantics, adherents of what has sometimes been called scientism or ‘scientific ideology’. They had quite specific reasons for thinking that socialism was both preferable to capitalism and the inevitable successor to capitalism, and these reasons (the most important of which I enumerate in ‘Ayn Rand and the Curse of Kant’) had nothing to do with romanticism and everything to do with a self-consciously ‘rational’, scientific approach.

Nelson’s epistemological views are similar to those of dialectical materialism, the ruling ideology of the USSR. Lenin’s Materialism and Empirio-Criticism has essentially the same anti-skeptical metaphysics as The Slightest Philosophy, and The Slightest Philosophy (leaving out the few sentences directly about politics) would have been heartily applauded by the ideological commissars of Soviet Communism in its heyday. The Slightest Philosophy could, making allowance for references to more recent developments, have been written by a very devout Communist Party member circa 1930. (I know, you’re wondering whether this means that The Slightest Philosophy will cause a million or more deaths. I hope not, but we can’t be too careful.)

How does it come about that the first and greatest totalitarian regime of the twentieth century, and a major causal influence on all the others, had a strictly imposed official philosophy which made a huge point of insisting that skepticism about perception is philosophically the root of all evil? Why did Communist philosophers always talk about perception exactly like Quee Nelson?

Nelson alludes to Frederick Engels’s 1843 claim that German philosophy ought to culminate in communism (pp. 30–31). But if we look at this remark by Engels, it should be understood in exactly the opposite way to Nelson’s construal.

The young Marx and Engels, formerly Young Hegelians, were suddenly converted to the atheism, realism, and materialism of Ludwig Feuerbach and David Friedrich Strauss. This conversion signified a conscious and systematic rejection of idealism. Engels considered that Feuerbach’s materialism brought an end to German philosophy and directly pointed to communism (though Engels thought it was a failure on Feuerbach’s part that he did not perceive that materialism implies communism, just as presumably Nelson might think it a failure on Quine’s part that he didn’t see that realism implies laissez-faire capitalism). It’s no exaggeration to say that in Engels’s thinking as in Marx’s, it is the total repudiation of all idealism and the unconditional acceptance of realism which points to communism.

Engels’s slightly coy statement in his brief article of 1843, quoted by Nelson, was succeeded by The German Ideology (1846) and Ludwig Feuerbach and the End of Classical German Philosophy (1886), where Engels’s and Marx’s hostility to idealism and their view of a tight connection between materialism and communism are spelled out in great detail. Just to be clear, I don’t accept that any metaphysical or epistemological theory implies or causes any social or political theory, much less any political movement. Yet it would be easy to formulate a thesis that the rejection of skepticism about perception and the embrace of metaphysical realism lead to totalitarianism; after all, we observe that totalitarians are usually motivated by certainty, not doubt. I don’t think there’s anything in such a thesis, but at least it would not be as spectacularly contrary to the historical evidence as Quee’s thesis is.

In Nineteen Eighty-Four, Orwell has the Party boss O’Brien propound the theory of “collective solipsism,” according to which reality is whatever the Party says it is. Nelson says that Orwell is “caricaturing collectivist epistemology” (p. 31). But was there, as a matter of fact, any collectivist epistemology to caricature? I discuss this point in my book on Orwell (Orwell Your Orwell, pp. 307–311).

Although O’Brien’s argument is a remarkable anticipation of some aspects of social constructivism (these had been prefigured by Otto Neurath and by Mannheim’s ‘sociology of knowledge’ but it’s doubtful that Orwell knew of these), we have to be clear that no actual totalitarian regime has ever employed the “collective solipsism” type of reasoning, and it’s very unlikely that any actual totalitarian regime ever could, because totalitarian regimes, when they tell untruths (or even truths which they find it useful to draw attention to), always intend to communicate that what they say is true by the traditional absolute and objective standard of truth. Orwell is not caricaturing any existing “collectivist epistemology” but inventing a fictional epistemology in order to caricature totalitarian practice.

Notoriously, the Communist parties would sometimes suddenly switch their position on various issues, saying the opposite today of what they had said yesterday, and sometimes propagating falsehoods to help justify the current party line. Actually, this aspect of Stalinism was barely noticed by most non-leftists, while non-Communist socialists like Orwell were always acutely aware of it. The Wobblies even had a song about it (“Our Line’s Been Changed Again”).

Orwell applies to this phenomenon the type of satire he had learned from Jonathan Swift: he has the totalitarians preaching what he considers them to be practicing. The Communists never did preach anything like this; in fact, as fanatical adherents of materialism, a form of metaphysical realism, they always preached the opposite.

Orwell’s hero Winston Smith expresses the view, against the Ingsoc Party, that “reality is something external, objective, existing in its own right.” It doesn’t surprise anyone knowledgeable about Communist thinking to learn that the leading Communist ideological authority Rajani Palme Dutt, responding in 1955 to Nineteen Eighty-Four, commented that this remark by Winston Smith states what is in fact the Communist view (Meyers, George Orwell, pp. 287–88).

Finally, look at this sentence by Nelson: “In the same century that Heidegger, Habermas, and DeMan imbibed totalitarian collectivism as National Socialists, Althusser, Gramsci, Sartre, Camus, Putnam, and Rorty imbibed it as international socialists” (p. 30). You might think that Nelson is here giving us evidence—looks like quite an accumulation of evidence!—for her historical thesis. But exactly what does this all amount to?

What three writers “imbibed” (one of them as a child, one as a young adult, the other in middle age) hardly shows that their distinctive philosophical views resulted from the ideology they imbibed, or vice versa. After the war, none of these three showed any obvious political influence of National Socialism. It’s not playing fair to smear Habermas because he was a schoolboy under the Third Reich. Heidegger seems to have cheerfully embraced the National Socialist regime partly because of career opportunism. To what extent he was a realist or an idealist is sometimes debated (this is the ambiguity of Husserl’s legacy, though Heidegger’s ‘being in the world’ has been seen by some as an attempted answer to Humean skepticism), but at any rate he was no romantic and he didn’t endorse National Socialism before it came to power or after it had been overthrown. De Man (presumably Paul, not his uncle Henri/Hendrik) wrote some antisemitic stuff in Nazi-occupied Belgium, apparently for reasons of self-preservation and self-advancement (for he discreetly helped out individual Jews he knew). After the war he promoted deconstructionism in literary theory. He doesn’t appear to have been influenced by skepticism about perception. He wrote a book on romanticism, which might be considered an unmasking of romanticism’s pretensions.

Gramsci and Althusser were both materialists, therefore hardcore realists (Gramsci’s concept of revolutionary praxis is not, despite what you sometimes hear, a departure from philosophical materialism; there is a question mark over Althusser’s last writings, but they have had no influence). Neither Gramsci nor Althusser was a romanticist. Sartre and Camus were philosophical realists and in their literary output decidedly unromantic. Putnam and Rorty vacillated on the question of realism and they were not associated with romanticism. Of course, they had generally leftist worldviews, and that may be what bothers Nelson, but that’s the predominant fashion amongst all twentieth-century intellectuals, including the realist and anti-romantic ones. Some of the harshest attacks by realists on postmodernism have come from leftists (Chomsky, Detmer, Eagleton, Sokal).

So, all in all, if we deconstruct Nelson’s flurry of names, we just don’t find much corroboration for her historical thesis. Nelson’s method, as she scans a lot of historical data, is to pick out a handful of instances which seem to confirm her story, while ignoring the far greater number of instances which starkly contradict it.


Armstrong, D.M. 1963 [1961]. Perception and the Physical World. Routledge.

Boghossian, Paul. 2006. Fear of Knowledge: Against Relativism and Constructivism. Oxford University Press.

Chorost, Michael. 2005. Rebuilt: How Becoming Part Computer Made Me More Human. New York: Houghton Mifflin.

Devitt, Michael. 1991 [1984]. Realism and Truth. Blackwell.

Ewing, A.C. 1934. Idealism: A Critical Survey. Methuen.

———. 1996 [1938]. A Short Commentary on Kant’s Critique of Pure Reason. University of Chicago Press.

Feuerbach, Ludwig. 1986 [1843]. Principles of the Philosophy of the Future. Hackett.

———. 1989 [1841]. The Essence of Christianity. Prometheus.

Foster, John. 1982. The Case for Idealism. Routledge.

———. 2000. The Nature of Perception. Oxford University Press.

———. 2008. A World for Us: The Case for Phenomenalistic Idealism. Oxford University Press.

Friedman, Michael. 1992. Kant and the Exact Sciences. Harvard University Press.

Gopnik, Alison. 2009. The Philosophical Baby: What Children’s Minds Tell Us about Truth, Love, and the Meaning of Life. Farrar, Straus, and Giroux.

Gopnik, Alison, and Andrew N. Meltzoff. 1998. Words, Thoughts, and Theories. MIT Press.

Gopnik, Alison, Andrew N. Meltzoff, and Patricia K. Kuhl. 1999. The Scientist in the Crib: Minds, Brains, and How Children Learn. Morrow.

Hardin, C.L. 1988. Color for Philosophers: Unweaving the Rainbow. Hackett.

Hayek, F.A. 1963. The Legal and Political Philosophy of David Hume. Il Politico 28.

Hume, David. 1888 [1738–1740]. A Treatise of Human Nature: Being an Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects. Oxford University Press.

Hylton, Peter. 1990. Russell, Idealism, and the Emergence of Analytic Philosophy. Oxford University Press.

Jameson, Fredric. 1991 [1984]. Postmodernism: Or the Cultural Logic of Late Capitalism. Duke University Press.

Johnson, Kent, and Wayne Wright. 2006. Colors as Properties of the Special Sciences. Erkenntnis 64.

Latour, Bruno. 1998. Ramses II, Est-il Mort de la Tuberculose? La Recherche 307 (March).

Lenin, V.I. 1927 [1908]. Materialism and Empirio-Criticism: Critical Comments on a Reactionary Philosophy. In Lenin, Collected Works, Volume 13. International.

Lowell, Robert. 1946. Lord Weary’s Castle. Harcourt, Brace.

Meyers, Jeffrey, ed. 1975 [1965]. George Orwell: The Critical Heritage. Secker and Warburg.

Miller, David. 1994. Critical Rationalism: A Restatement and Defence. Open Court.

———. 2006. Out of Error: Further Essays on Critical Rationalism. Ashgate.

Nelson, Quee. 2007. The Slightest Philosophy. Dog Ear.

Popper, Karl R. 1968 [1962]. Conjectures and Refutations: The Growth of Scientific Knowledge. Harper.

———. 1979 [1972]. Objective Knowledge: An Evolutionary Approach. Oxford University Press.

———. 1983. Realism and the Aim of Science. Rowman and Littlefield.

———. 2009 [1932]. The Two Fundamental Problems of the Theory of Knowledge. Routledge.

Radnitzky, Gerard, and William Warren Bartley III, eds. 1987. Evolutionary Epistemology, Rationality, and the Sociology of Knowledge. Open Court.

Sacks, Oliver. 2012. Hallucinations. Knopf.

Searle, John R. 1964. How to Derive ‘Ought’ from ‘Is’. The Philosophical Review 73:1 (January).

———. 1983. Intentionality: An Essay in the Philosophy of Mind. Cambridge University Press.

———. 1992. The Rediscovery of the Mind. MIT Press.

———. 2015. Seeing Things as They Are: A Theory of Perception. Oxford University Press.

Siegel, Harvey. 2007. Review of Paul Boghossian, Fear of Knowledge: Against Relativism and Constructivism. Notre Dame Philosophical Reviews.

Steele, David Ramsay. 2002. Ayn Rand and the Curse of Kant. Liberty (August).

———. 2017. Orwell Your Orwell: A Worldview on the Slab. St. Augustine’s.

———. 2018. Is It a Fact that Facts Don’t Matter? In Daniel Yim, Galen Foresman, and Robert Arp, eds., Scott Adams and Philosophy: A Hole in the Fabric of Reality (Open Court).

Turnbull, Colin. 1961. Some Observations Regarding the Experiences and Behavior of the BaMbuti Pygmies. American Journal of Psychology 74.

Van Inwagen, Peter. 2009. Metaphysics. Third edition. Westview.

Watkins, J.W.N. 1984. Science and Scepticism. Hutchinson.

Wilkerson, T.E. 1976. Kant’s Critique of Pure Reason: A Commentary for Students. Oxford University Press.

The Libertarian Alliance

Liberty Posted on Wed, April 25, 2018 14:06:49

The Libertarian Alliance [LA].

This is an alliance between classical liberals and anarcho-liberals. It uses the longer word "libertarian" in its title because the word "liberal" has been largely taken over by statists, ironically the very opposite of the free traders they replaced, since the statists are, ipso facto, protectionists.

The new statist liberals arose within the UK Liberal Party from about 1870 onwards, and by the 1930s they were, by far, the great majority in the Liberal Party. The statists want more state activity, feeling that society has too few politicians and too little politics; but the traditional, or pristine, liberals always felt there were far too many politicians and far too much state activity, and that this was actually dysfunctional both for individual and for social welfare.

So today's Liberal Democrats, who are nearly all statists, seek yet more state control, whereas LA members seek far less, or even none at all if they happen to be anarcho-liberals.

This is because LA members find the state to be both uneconomic and anti-social: politics wastes the money it taxes from the public, and it also fosters a dependency culture that tends to sap individual responsibility. This personal responsibility, which arises from liberty, is held by LA members to be vital to the good society.

So the main aim of the LA is social liberty; that is, the full individual liberty that also respects, and fits in with, the liberty of one and all. The means to this end is to reduce both taxation and whatever the state provides, replacing them with free, or freer, institutions. This is to be achieved by persuading the general public of the value of social liberty through free discussion with anyone who wants to discuss these matters with LA members, though perhaps especially with keen intellectuals or outgoing extroverts who will be eager to do so. Thus the LA aims at repealing most of the current statutory law. It expects social liberty to allow most people, if not one and all, to flourish to the extent made possible by a freer society, if not immediately a completely free one. The more liberty we have, the better for all people.

To that end, the LA holds public meetings, which are recorded and then placed on YouTube, and LA members take part in the LA's own Internet discussion groups and in others.
