
The London Libertarian

About the blog

Commentary and debate on politics, economics and culture from a libertarian perspective.

Winning the Popular Vote

Politics Posted on Sun, June 30, 2019 15:56:40

Whoever Wins the Electoral College
Would Have Won the Popular Vote

Since
they lost the 2016 election to Donald Trump, many Democrats have become
preoccupied with abolishing the electoral college. They point out that Hillary Clinton “won the
popular vote”—a misleading turn of phrase, as we shall see. They conclude that there’s something both undemocratic and un-Democratic about the electoral college, that it’s unfair and, more importantly, biased against the left, and that it should therefore be abolished.

Sometimes they even assert that if
the election had been decided by the popular vote, Hillary Clinton would have
won. A different point of view comes
from President Donald J. Trump, who remarked just after his election victory
that he would have won more easily if the election had been decided by the popular
vote.

I think in this case President Trump
is right and most of the Democrats wrong.
In fact, I would like to propose the following fundamental rule:

Whoever wins a presidential
election under the electoral college system would also have won if the election
had been decided instead by popular vote.

Quick-witted readers will have
already figured out why this might well be true. But for the benefit of the
sleepy-heads, I will now unpack this proposition at more length.

The obvious rationale for the claim
that Hillary would have won if the election had been decided by the popular
vote is that she certainly got more votes than Trump, and getting more votes
wins you the popular vote.

The equally obvious rebuttal is that many people would
certainly not have voted the same way if the election had been decided by
popular vote. Knowing that the election
was being decided by popular vote, not by the electoral college system, many
people would have voted differently, including some people who would have voted
for Trump instead of not voting at all.

Abolishing the Electoral College
Would Probably Not Help the Democrats

There’s
no going back to the electoral college as conceived by the Framers. People who vote in presidential elections
think of themselves as voting for one or another candidate for president. They don’t think of themselves as voting for
wise persons who will later pick the president.

Although there is no popular demand for abolition of
the electoral college, it does appear that any future attempt by states to
depart radically from a popular-vote system within each state (something they would
be entitled to do under the Constitution) would be met by widespread outrage. And so, the electoral college is tolerated
because it produces a result which approximates quite closely to a nationwide popular
vote. And inasmuch as it departs from a
nationwide popular vote, it does so in a way which is easy to understand and
chimes with the idea that the United States is a federation of states. People easily comprehend that the president is
chosen state by state.

Arguments for the electoral college include
the claim that some autonomous role for the states is a good thing, and the
claim that the electoral college tends to favor political tendencies which are more
evenly spread across the country, as against tendencies concentrated in some
areas. I am not going into these
arguments here. On balance, I wouldn’t shed
any tears if the electoral college system were replaced by direct election of
the president, along the lines of the system in France.

What I do want to point out is that abolishing the
electoral college would not help the Democrats, or at least that it’s not clear
it would help the Democrats and might just as likely help the Republicans.

People who want to abolish the
electoral college, currently mostly Democrats, typically say things like this: “If
the 2016 election had been decided by popular vote, then it follows that Hillary
Clinton would now be president.” This
assertion is blatantly false.
The
people who make this claim appeal to the fact that Clinton got more votes—a bigger
popular vote—than Trump in 2016. But, of
course, the conclusion most certainly does not follow! There’s all the difference in the world between
“winning the popular vote” in a system where no one who matters gives a flying
freak about the popular vote, and winning the popular vote in a system where
the popular vote is the all-important decider.

If the 2016 election had been decided by popular vote,
then the popular vote would not have been the same as it was in the actual
election. Many people would have voted differently
than they did. Many people would have
been caught up in the campaign who in fact ignored it almost completely, while voters
in certain counties, who in the actual election became centers of attention,
would have gone unnoticed.

The campaign would have been, in some conspicuous
ways, unrecognizable compared with what actually occurred: absolutely no one,
for instance, would have cared who “won” Florida or Pennsylvania, a virtually
meaningless concept under a popular-vote-decided system. A few thousand more or less Republican or
Democratic votes in California, which would have counted for absolutely nothing
in the actual 2016 election, would have been exactly as important as a few
thousand more or less Republican or Democratic votes in Michigan.

The Rules of the Game

One
point to be clear about is that under the electoral college system, no candidate
ever tries to win the popular vote. The
popular vote is just an incidental outcome which no one is aiming for. It follows (though this is a bit harder to
see) that every candidate deliberately aims to reduce their popular vote
below what it might have been. If
you can’t see this immediately, I will get to it shortly. (When I say “every candidate,” I mean every
candidate who’s seriously trying to win the presidency; this doesn’t necessarily
apply to candidates who’re running merely in order to “send a message.”)

Hillary Clinton was not trying to win the popular vote
in 2016; she was trying to win the electoral college. She would never knowingly have sacrificed a
single vote for state electors for the sake of any number of popular votes. And she would have sacrificed any number of
popular votes to get one more vote for the electoral college. Of course, many things a candidate might do
to increase their electoral-college vote total would also incidentally increase
their popular vote, but where there is any conflict between the two objectives,
increasing the popular vote counts for absolutely nothing.

This is why it’s misleading to talk
about “winning” or “losing” the popular vote under a system of rules where everyone
trying to win views the popular vote as irrelevant to the capture of power. It’s like saying that someone who lost a game
of chess by being checkmated “won” the piece-taking score because he captured more
pieces than his opponent. This is just not
the way chess games are scored. And if
it were the way chess games were scored, then both players would have
played very differently, and very likely the same player would have won (because
skill in one game is transferable to skill in a somewhat similar game).

Most people with little interest in politics probably
suppose that a state is given electors in proportion to its population. In fact, states are allocated electors
according to their total number of House representatives plus senators. While this is roughly in line with population,
it does give a definite built-in advantage to voters in low-population states, each
of which has two senators just like the high-population states. While my impression is that currently this way
of determining electoral college votes probably favors Republicans slightly, it
does not favor them hugely—we can all think of some low-population states which
are solidly Democratic (Hawaii, Rhode Island, Delaware).
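To make the allocation rule above concrete, here is a minimal sketch in Python (a hedged illustration, not part of the original argument); the House-seat counts and populations are approximate figures for the 2016 election and should be treated as assumptions.

```python
# Illustrative sketch of the allocation rule described above:
# a state's electors = its House representatives + its 2 senators.
# Seat counts and populations are approximate 2016 figures (assumptions).
states = {
    # state: (House seats, approximate population in millions)
    "California": (53, 39.2),
    "Michigan": (14, 9.9),
    "Wyoming": (1, 0.6),
}

for name, (house_seats, pop_millions) in states.items():
    electors = house_seats + 2  # every state has exactly two senators
    per_million = electors / pop_millions
    print(f"{name}: {electors} electors (~{per_million:.1f} per million residents)")
```

On these rough numbers, Wyoming ends up with about five electors per million residents against California's roughly 1.4, which is the built-in advantage to voters in low-population states described above.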

Incidentally, I have not seen an
arithmetic breakdown of the causes of the phenomenon where a candidate “wins the
popular vote but loses the electoral college.”
In Hillary Clinton’s case, how much (if any) of this disparity was due
to low-population states having higher electoral college representation because
of their two senators, and how much was due simply to the heavy concentration
of Democratic voters in states like California and New York? No doubt someone has analyzed this, but I haven’t
come across their conclusions.

I’m not going to do the math here, but common-sense guesswork
suggests that the heavy concentration of Democratic support in some states is much
more consequential than the built-in advantage to low-population states (though
that would not necessarily remain true if all or nearly all low-population
states happened to be of the same party).

Thwarting the People’s Will

A
typical argument by Alex Cohen for the abolition of the electoral college (in
this case, back-door abolition by means of a compact between states) asserts
that when a candidate wins the presidency despite “losing” the popular vote, this
“thwarts the people’s will.” <www.brennancenter.org/blog/national-popular-vote-explained>.

Now, first of all, as a pedantic and strictly irrelevant
yet irresistible observation in passing, anyone who favors Roe v. Wade accepts
that the people’s will should and must often be thwarted. That is the entire point of Roe v. Wade
and other Supreme Court decisions beloved of Democrats. Thwart the people’s will, dammit! The Democrats are passionately devoted to
using the Constitution to thwart the people’s will, and in this I completely
agree with them.

But, more to the point, the way the popular vote turns
out, in a system where the popular vote is not the decider, will be very different
from the way it would turn out in a system where it is the decider. And therefore, if the latter would express the
people’s will, the former cannot express the people’s will. Going only on the facts and arguments assembled
by Alex Cohen, his conclusion that the electoral college thwarts the people’s
will simply does not follow.

Alex Cohen also throws in the remark
that the electoral college “potentially lowers voter turnout,” with a link that
strongly suggests the lowering is more than merely potential. Mr. Cohen apparently says this because a high
voter turnout is considered a good thing and so this is one more strike against
the electoral college. He doesn’t notice
that this explicitly recognizes that many people don’t vote the same way under
the electoral college as they would have done under decision by popular vote,
and that therefore it removes an essential premiss for his conclusion that the
electoral college thwarts the people’s will.

A Game of Skill

Republican
voters in California, New York, and Illinois would be more inclined to vote if
we switched to decision by popular vote.
Republicans know that under the electoral college system, as long as the
state is heavily Democratic, their votes count for nothing. Under a popular vote system, they would know
that their votes count the same as any other votes anywhere in the country.

It’s not quite as simple as that, because
most people, having made it to the polling place, vote for a number of
candidates, national, state, and local, as well as for referenda (in states
like California that allow them). Having
made the effort to get to the polling place, the additional cost of voting for
one more item, such as president, is very slight. On the other hand, many California voters will
see themselves as on the losing side of state and local contests as well as the
presidential choice, so there will be some disincentive to show up to vote at all.

You might think that the number of Republicans
who don’t bother to vote for president in California because they know they
have no chance of winning will be balanced by the number of California Democrats
who won’t bother to vote for president because they have no chance of losing. However, it’s a recognized fact of voter behavior
that voters do like to vote for the side which wins, and assuming that to be true,
it seems reasonable to infer that the disincentive effect of knowing that the
state’s vote is a foregone conclusion will more severely affect California
Republicans than California Democrats.

So, we see that people will vote
differently under the two systems, regardless of anything the candidates
do. But it’s additionally true that the
candidates will campaign differently, and this will affect how people vote. That’s the entire point of campaigning, after
all. So, not only would many voters with
the same attitudes and preferences vote differently under the two systems, but
also, many voters’ attitudes and preferences would be changed in different ways
by campaigning under the two systems.

Trump won the 2016 election largely
by superior strategy. He campaigned heavily
in rust-belt states like Wisconsin, Michigan, and Pennsylvania. (There were other components to his
strategy. He continually pounded away at
a dozen key policy issues, whereas Clinton avoided talk about policy in favor
of painting Trump as an evil monster.)

Great campaigns, like great battles
and great chess games, tend to look easy in retrospect. Some people have second-guessed Napoleon at
the Battle of Jena. If things had gone a bit differently, they conclude, Napoleon
would have lost Jena . . . No! If things had gone a bit differently, Napoleon
would, in all probability, have won Jena in a different way. We have to add “in all probability” because
there’s always an element of luck.

Before the election, many conventional
experts scoffed at Trump’s decision to campaign so heavily in the rust belt. Couldn’t this amateur, this dolt, see that he
had no chance in those states? But Trump
had superior intel (Cambridge Analytica) and superior strategic vision. He had been pondering, developing, and honing his
working-class, protectionist, America-first electoral strategy for over thirty
years. Trump did not win because Hillary
was “a bad candidate,” as so many people now like to intone. Her “badness” corresponds with the conventional
wisdom of all the accredited cognoscenti before the election, who all confidently
expected her to win. Trump won because
he was an extraordinarily capable candidate. He out-generaled the highly competent yet
conventionally-minded staff of Hillary Clinton.
Trump beat Clinton by better science and deeper thought.

Under a popular-vote system, the campaign
would have played out very differently. But
a brilliant strategist is a brilliant strategist. No doubt both Trump and Clinton would have
spent a lot of time in California, New York, and Illinois, places where, in the
actual campaign, they did next to nothing. Voter turnout in California, New York, and Illinois
would have been higher—and the increased turnout would have been higher among Trump
voters than Clinton voters. Trump would,
in any knowledgeable judgment, have won the popular vote, probably by a bigger
margin than he actually won the electoral college vote.

If you see that winning elections is a contest of abilities,
a game of skill, you will appreciate the point that a better campaigner under
one system will also be a better campaigner under a different system, just as
Napoleon was usually a better general than his opponents, whatever the terrain
or the weather. Add to that the fact
that under the present system, candidates routinely sacrifice the popular vote
to winning the electoral vote. There is
a definite trade-off between the two, and the only reason we don’t hear more
about this trade-off is that everyone is so thoroughly aware that the popular
vote just doesn’t count.

There are parts of California and other solidly
Democratic states which are “natural Trump country,” but where Trump did no
campaigning, because it would have been a complete waste. An hour’s campaigning, or a million dollars’
worth of campaign spending in California would have netted Trump more popular
votes than similar expenditures in Michigan, but this kind of move could have
lost Trump the election (while winning him the popular vote), and Trump understood
this perfectly.

Any presidential candidate deliberately makes
decisions which he or she believes will reduce his or her popular vote
(compared with what it would have been, given different decisions), in order to
maximize his or her electoral college vote.
This must always necessarily occur, whether or not the actual aggregate outcome
is to “lose” the popular vote, because a candidate will allocate each unit of
campaign resources where it will yield the biggest return in terms of electoral
college votes alone, and there will always exist many more alternative ways to
allocate each unit where it would tend to increase the popular vote by a
greater amount while being less effective at increasing the electoral college
vote. These alternative ways, as long as
they are accurately perceived as such, are just instantly dismissed from
consideration, so we tend to overlook the fact that serious candidates
always deliberately sacrifice their popular vote to their electoral college
vote.

The Complication of Runoffs

If
the United States were ever to be converted to the popular-vote system for the
presidency, it’s certain that provision would be made for a runoff second election. In France, for example, if the first
presidential election does not give more than fifty percent to one candidate, then
a second, runoff election is held, with only the two top-scoring candidates from
the first election competing.

A runoff system for US elections under
a new popular-vote system introduced by constitutional amendment would be certain
because the people designing the new system would want to rule out the possibility
that anyone could be elected with a minority of the votes. If there were only one round of voting, with
victory going to the candidate who got the most votes, it would be possible for
a candidate with a minority of the votes to gain the presidency. Indeed, it would be more than possible, it
would very likely happen in the great majority of presidential elections.

Now, it could also happen under the electoral college system
that a candidate could get more electoral votes than any other, yet still get a
minority of all the electoral votes—even, with several strong candidates, a
fairly small minority. The Constitution provides
that in that situation, the choice of president goes to the House of Representatives.

It doesn’t look very likely at first, as a matter of practical
politics, that the Constitution will be amended to replace the decision by the
House of Representatives with a runoff election, while otherwise preserving the
present electoral college system. Referring
the decision to the House strikes most people as strange and unsatisfactory, yet
it happens quite rarely—only twice so far in US history. If it were to happen again, the House might
feel obliged to give the presidency to the candidate who had won a plurality of
electoral college votes, or might even arrange a runoff election of the top two
candidates as its way of deciding who would be awarded the presidency.

If for some reason neither of these expedients worked,
and if there were a long period of three big parties (for example because the
Democratic Party went into chronic decline and the Republicans split between
traditional Republicans and Trumpists), then a runoff election for the electoral
college might become a real possibility.
Why didn’t the Framers hit upon this obvious solution? Calling a nationwide election was not such a
simple matter in the eighteenth century.
Many voters would require a journey of a day or two to get to the
nearest polling place, assuming they could get ahold of a horse. But more significantly, presidential
elections were not originally intended to be democratic. The members of the electoral college were not
expected to follow the wishes of the voters, as they are now.

The mechanics of the electoral college system tend to produce an outright majority for one candidate, whereas it’s quite common
for the candidate who “wins the popular vote” to get a minority of the popular
vote—this happens in about fifty percent of presidential elections. Hillary Clinton, for example, though she “won
the popular vote,” received a minority of the popular vote in 2016. More people voted against Hillary Clinton than
voted for her, and the same, of course, is true for Donald Trump.

Hillary Clinton got 48.18 percent, compared with Trump’s
46.09 percent. Just to keep the numbers
in perspective, and not because it has any profound significance, note that Donald
Trump plus Gary Johnson got slightly more votes than Hillary Clinton plus Jill
Stein (Johnson got more than three times as many votes as Stein). If Johnson’s votes had gone to Trump and Stein’s
votes to Hillary, Trump would have “won the popular vote.”
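As a quick arithmetic check on the preceding paragraph, here is a small sketch using the two shares quoted above together with approximate 2016 shares for Johnson and Stein; the Johnson and Stein figures are my assumptions for illustration, not taken from the text.

```python
# Shares of the nationwide popular vote in 2016 (percent).
clinton, trump = 48.18, 46.09  # figures quoted in the text
johnson, stein = 3.28, 1.07    # approximate figures, assumed for illustration

print(f"Trump + Johnson : {trump + johnson:.2f}%")  # about 49.4 percent
print(f"Clinton + Stein : {clinton + stein:.2f}%")  # about 49.3 percent
print(f"Johnson / Stein : {johnson / stein:.1f}x")  # a bit over three to one
```

On these figures the Trump-plus-Johnson total edges out the Clinton-plus-Stein total, which is all the paragraph above claims; as the text says, nothing of profound significance follows from it.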

You might point out that very likely,
in a runoff, a higher percentage of Stein’s votes would have gone to Clinton
than the percentage of Johnson’s votes which would have gone to Trump. This is probably true—if there were no
campaigning between the first and second elections. And yet, between the first and the second elections,
there would have been a second campaign, and it can’t be ruled out that Trump
would have won over more Johnson voters, and even captured some Stein voters
and disgruntled formerly Sanders voters in the second campaign, not to mention
possibly a few other Clinton voters!

This kind of exercise doesn’t prove
anything at all about the real world, because if the election had been run
under popular-vote rules, the actual vote totals, for the first round of voting,
would have been quite different from the actual vote totals in 2016. But it does serve to illustrate some of the
numerical issues.

A Test of Steele’s Rule

If
you’ve followed me this far, you’ll be able to see that the so-called “loser”
of the popular vote who wins the electoral college might easily have won the
popular vote under a system decided by the popular vote. This is entirely elementary and indisputable.

You’ll probably also agree that who wins the electoral
college is a much better indicator of who “would have” won the popular vote
under a popular-vote-decided system than is the popular vote under the
electoral college system.

These two propositions will be quickly accepted by
most people who give any serious thought to the matter. But I’m going further. I am saying that whoever wins the electoral
college would have won the popular vote (under a nationwide popular-vote
system).

My rule refers to an abstract and simplified model of
the world. (So does the contrary claim
that the winner of the popular vote under the electoral-college system would
have won the election under a hypothetical popular-vote system.) It compares an actual situation where someone
wins the electoral college with a hypothetical situation where the election would
be decided by nationwide popular vote and where the attitudes and preferences
of all the voters would (to start with, let’s say one year before the election)
be identical to what they are in the actual situation. It says that the gross outcome—who wins the
presidency—would be the same.

In practice we can never make this
comparison directly. There can’t be two
worlds where the attitudes and preferences of the voters are identical but the electoral
systems are different. So we’re dealing
with an abstract model which sheds light on reality, rather than a direct observation
of reality.

But there is an indirect empirical
test of the rule, which goes as follows.

There is a positive association between votes for the
presidential candidate and votes for House representatives of the presidential candidate’s
party (all House seats are up for election on the same day as the presidential
election).  If the party of the winning presidential candidate gets more House votes than the other party when the winning presidential
candidate “loses” the popular vote, this would tend to corroborate my rule and
to refute the rule tacitly appealed to by those Democrats who suppose that
Hillary would have won a hypothetical election decided by popular vote in 2016. If those Democrats are right and I am wrong, you
would expect the party of the presidential candidate who won the election but “lost”
the popular vote to get fewer House votes than the other party.

We have two recent cases where the presidential
candidate who won the election “lost the popular vote”: 2000 and 2016. In both these cases, the victorious
presidential candidate’s party “won the popular vote” for the House of Representatives. (We’re looking at the total votes cast in
elections for House candidates, not how many House seats were won or lost.)

In 2000 the Republicans got 47.6 percent of the popular vote for House seats, as against the Democrats’ 47.1 percent, and in 2016 they got 49.1 percent, as against the Democrats’ 48.9 percent. <https://en.wikipedia.org/wiki/2000_United_States_House_of_Representatives_elections> <https://en.wikipedia.org/wiki/2016_United_States_House_of_Representatives_elections>.

In elections for the House, there’s
a comparatively large number of minor-party candidates (twenty-five in 2016,
plus those classed as “Independent” and those classed as “Others”). If we count only the Republicans, Democrats,
Libertarians, and Greens, the Republican plurality in 2000 rises to 49.28 percent and
in 2016 to 49.70 percent.

So, the popular vote for House representatives of the
party which won the presidency but “lost” the popular presidential vote was in
each case the plurality vote. On both
occasions the party which won the electoral college while “losing” the popular
vote for president “won” the popular vote for House seats. This corroborates my rule and lends support
to the view that the electoral college vote has a fair claim to be taken as an
expression of the people’s will.

The rule I am proposing is not a theorem,
logically derived from axioms, but a generalization about two alternatives, one
of them purely hypothetical, a “counterfactual conditional.” It can therefore never be conclusively “proved”
nor even directly tested. But if we think
seriously about what’s going on in presidential elections, it seems to hold up
well, in terms of both its internal coherence and its agreement with the data.

We’re in a good position to conclude that Steele’s Rule
is worth accepting as at least highly probable, as the best practical guide to thinking
about presidential elections, and as maybe even true: Any candidate who wins
the presidency under the present electoral college system, even if he or she “loses”
the popular vote under that system, would have won the popular vote and
therefore the presidency, if the election had been conducted and decided according
to a popular-vote system.



Is Naive Realism the Cure for Postmodernism?

Book Reviews Posted on Tue, March 12, 2019 07:05:22

Is Naive Realism the Cure for Postmodernism?

Quee Nelson, The Slightest Philosophy. Indianapolis: Dog Ear Publishing, 2007.

Quee
Nelson is a classical liberal who has written a book on
metaphysics and epistemology, well received by many libertarians. Several libertarians told me it was a book I
ought to read. It has received numerous
compliments but, as far as I know, no serious discussion. I very much disagree with the main thrust of Quee’s
argument and in this review I will try to explain why I disagree.

Quee’s book is entertaining, humorous, unpretentious,
readable, and displays evidence of a lot of reading and thought. It’s a book about philosophy but one that
clearly aims to include among its readership people who have not read much philosophy. This present review is the same kind of
thing; it’s not the sort of review you would expect to find in a philosophy
journal, but rather one on about the same level as Nelson’s book, that is, one
which takes nothing for granted, which is inclined to ramble, and which does
not hesitate to ‘naively’ address some elementary points.

I see this review as a kind of cognitive therapy for libertarians
who think like Quee, but unlike most therapy, I am offering it for free. So, you have a real bargain here, and
libertarians are supposed to love a bargain.
I also see this review as an encouragement to readers to get involved in
the marvelous intellectual adventure of philosophy, which entails coming to
grips with real arguments at their strongest, and understanding why these arguments
do convince intelligent people, rather than pulling faces at conclusions you
don’t like.

Nelson claims to adhere to an unpopular, minority view
in philosophy—naive realism. I adhere to
a different unpopular, minority view in philosophy—critical rationalism. Critical rationalism may be better known as
Popperianism, though there is a well-known law of nature that all Popperians
have at least one major disagreement with Popper, so we should prefer the more
impersonal term, ‘critical rationalism’.

However, on most of the issues covered by Nelson’s
book, I adhere to the conventional, consensus, or default position among
present-day English-speaking philosophers—representative realism. So, most of the time, I will be defending a fairly
mainstream philosophical position against Quee Nelson, though occasionally I
will come up with a critical rationalist twist.

Here are the main points I will be
making in what follows:

1. Nelson calls herself a naive
realist but never lets us know what ‘naive realism’ is.

2. Nelson misrepresents mainstream
academic philosophy by claiming it is completely dominated by anti-realism.

3. Nelson mistakenly claims that postmodernism is
rooted in skepticism about perception (and that it is derived from Hume and
Kant).

4. Nelson doesn’t understand the force of the
arguments of Hume and Kant.

5. Nelson mistakenly claims that idealism is a
powerful movement in present-day English-language philosophy.

6. Nelson relies upon an argument against
representative realism which is purely semantic and therefore inconclusive.

7. Nelson advances a theory about the historical
causation of political outcomes by philosophical ideas, which is full of holes.

Naive and Representative Realism

Nelson
tells us that she is arguing for “naive” or “vulgar” realism (pp. 2–3). She says she prefers the term ‘naive realism’
to ‘direct realism’, because she thinks that the latter could be taken to deny
the complex causal chain involved in perception (pp. 10–12). But other philosophers who advocate what they
call ‘direct realism’ don’t deny this at all.

David M. Armstrong argues in favor of direct realism
in his 1961 book. As far as I know, this
is the best case ever made for direct or naive realism, but although Nelson
mentions this work (p. 9), she strangely does not say how much of it she agrees
with, or whether Armstrong’s direct realism is the same, or roughly the same,
as her naive realism. This is part of a
general problem, that Nelson’s actual position, the delineation of what she
calls naive realism, is elusive. The
reader can only be puzzled as to what Nelson’s naive realism is.

All forms of realism agree that physical entities,
such as tables, chairs, rocks, trees, stars, and clouds, exist independently of
our minds. Disputes between
representative realism (representationalism) and naive (direct) realism have
focused on the question of whether, when we see an object such as a tree, we do
so by means of seeing a mental representation of that object, or whether we
don’t see any representation but only see the object itself. (I don’t approve of that way of framing it,
but naive realists usually do frame it in some such way.)

A different distinction is that
between common-sense realism and scientific realism. Some people think there’s a troubling
conflict between these two. Common-sense
realism is the view that the things we suppose we observe as we look around in
everyday life exist, independently of our awareness of them. Scientific realism is the view that the
entities described by physics and other natural sciences exist, independently
of our awareness of them.

I don’t see common-sense realism and scientific
realism as competing alternatives. My
view is that where common sense and science clash, science is probably right
and common sense even more probably wrong.
So here my view is contrary to that of Nelson, who thinks that common
sense trumps physics (pp. 7–8).

Common sense is not fixed. Today’s common sense among educated people is
partly a product of science, or of the scientific modification of more
old-fashioned common sense. It used to
be common sense that iron boats could not float, and when gas lighting was
first introduced, many people couldn’t believe that the pipes carrying the gas
did not heat up.  Common sense is an assemblage of theories and a product of culture: it consists of memes, it is inculcated into individuals largely by the use of language, it varies among cultures and among sub-cultures, it has evolved over the centuries, and it is always further revisable.  Common sense often
contains valuable suggestions and it should not be ignored, but it carries no
authority.

It would be nice to be able to state
Nelson’s own characterization of naive realism and proceed from there, but
unfortunately this is not straightforward.
She tells us (pp. 2–3) that naive realism is the view that the things we
perceive “comprise” an external universe which doesn’t depend on our
perception. This implies that cosmic
rays, magnetic fields, and dark matter are not part of this universe (they
either don’t exist or belong to a different universe).

We can probably assume that this is not what Nelson
intended; what she might have meant is that the things we perceive are parts of a universe which also contains
many other entities. But this also is
unsatisfactory, because this definition would apply to all forms of realism,
representative as well as naive. So,
this definition would not identify what’s peculiar to naive realism. We never do learn how Nelson wants to define her
own naive realism, so as to distinguish it from common or garden representative
realism.

Again and again, she seems as though she’s just about
to define naive realism, or her version of it, but then she simply defines
realism, in a way which would include representative realism. To take just one
example, she says that naive realists like herself have an “unwavering faith in
the actual existence and intractable mind-independence of locomotives” (p. 10). Yet, allowing for some uneasiness about the
word “faith”—but let’s not quibble—this is just as true of representative
realists as of naive realists.

The closest Nelson comes to criticizing representative
realism is with the brief section headed “The Irrelevance of
Representationalism” (pp. 12–15). Here
she complains that many different philosophers have advocated many different
conceptions of whatever it is in the mind of the perceiver that links the
perceiver with the perceived object. She
complains about the profusion of terminology as well as definitions. And she says this doesn’t really matter, it’s
a “technical side show,” because all that “really matters” is realism versus
anti-realism, the question of whether perceived objects exist independently of
the perceiver’s mind. But if you’re
claiming to advocate naive realism, and you disparage its major alternative,
representative realism, and many of your opponents are representative realists,
it’s incongruous to say that the issue of representative realism doesn’t
matter.

In another brief discussion of representative
realism (pp. 5–7), Nelson addresses only the question of color realism. In fact, someone reading this passage with no
previous knowledge of these issues might easily conclude that the distinction
between naive and representative realism lies entirely in the color
question. Galileo, Locke, and many
others, have held that whereas some aspects of objects like shape and size are
really ‘in’ the perceived objects, aspects like color, smell, and sound are
generated in the mind of the human observer.

Today almost the only philosophers who discuss color
realism at any length are those specializing in this issue, and most of them
take the view that color is a real property of objects (see for instance the
article by Johnson and Wright). However,
this, if correct, would not entirely dispose of the color question, for there
are certain apparent facts about colors (such as ‘There can’t be a reddish
shade of green’) which, taken naively, seem to be undeniable facts about
objective colors, but are in fact (so science tells us, and here I believe
science) entirely due to the ‘design’, or specific structure, of our body’s perceptual
apparatus. As Günter Wächtershäuser
said, there’s more to vision than meets the eye.

Rejecting Realism

The
historically most influential form of non-realism (unless you count Plato) was
idealism, classically exemplified in George Berkeley, and fashionable at the
end of the nineteenth century in the version known as ‘absolute idealism’, as
taught by F.H. Bradley, J.M.E. McTaggart, and Josiah Royce. Idealism claims that all our knowledge is
essentially mental and subjective, and therefore the only things we can know
about are ultimately products of the mind.

Idealism has little following
today. Opposition to realism mostly comes
from cultural relativism or social constructivism, sometimes lumped together as
‘postmodernism’. Postmodernism has very
little following among philosophers,
but it has a huge following—it is almost the reigning orthodoxy—among academics
in literary and ‘cultural’ disciplines.

Nelson conveys the impression, once or twice by direct
assertion but much more often by insinuation, that non-realism is the dominant
position among anglophone philosophers. But
this is mistaken; probably most philosophers (meaning faculty and grad students
in university philosophy departments) are realists. I will mention some indications of this fact,
and of how Nelson misrepresents the current state of academic philosophy, as
this review proceeds.

To avoid possible misunderstanding, I should add that
philosophy as an academic discipline has become so specialized that many
philosophers never have occasion to address metaphysical issues like realism, and
also that some people I would classify as representative realists may not call
themselves by that term. Representative
realism is such a wide and general category—the obvious default category—that
some adherents may not see the need for any identifying label.

For the sake of brevity and
simplicity, I’m going to cover just two forms of realism—representative realism
(representationalism) and naive (or direct) realism. I’m not going to offer my own distinction
between naive realism and direct realism, as some writers do, but will treat
these as equivalent. Nor will I give separate
attention to different aspects of realism, such as truth, objectivity, and
mind-independence. Since I am reacting
to what Quee Nelson says, I completely neglect a number of important arguments and
distinctions which don’t arise in her discussion.

In this review I’m concerned only
with realism about perceived physical entities.
Realism about moral or aesthetic matters would introduce a lot of additional
considerations. Many people are realists
about physical objects and non-realists about morality or aesthetics. When I use the term ‘skepticism about
perception’, this is short for ‘skepticism about perception as informing us of
a world of things independent of our minds’.

Misrepresenting Today’s Academic Philosophy

‘Realism’
may be crudely stated as the view that the universe is largely composed of
entities which exist independently of any human awareness of them. For instance, if all conscious minds were to
be wiped out, the stars and planets would continue to exist. This is a view I hold, in common with most
philosophers.

The mainstream view in
English-language philosophy is that perceived objects do independently exist,
and this has been the mainstream view since about 1910–1920, by which point the
formerly dominant ‘absolute idealism’ had begun to be abandoned, mainly due to
the work of Bertrand Russell and G.E. Moore.

To a very large extent, current controversies about
realism versus non-realism take the form of a battle between philosophers and
non-philosophers. Realism has its
stronghold in university philosophy departments while anti-realism has its
stronghold among non-philosophers in humanities disciplines such as literary
theory, sociology, and culture studies.

Nelson conveys the impression that academic philosophy
is a bastion of non-realism. This is not
true. She states that “a skeptical
anti-realism” is “still more or less in the driver’s seat” (p. xii). In the world of English-speaking academic
philosophy, no form of skeptical anti-realism has been in the driver’s seat since
the 1890s.

Nelson not only falsely identifies anti-realism with
mainstream philosophy, but also falsely roots present-day anti-realism in
skepticism about perception. Skepticism
about perception was originally at the root of idealism, the philosophical movement
which had its heyday in the nineteenth century.
Today’s anti-realism is normally rooted in cultural relativism and
social constructivism, tendencies extremely popular among people in
non-philosophy humanities disciplines and decidedly unpopular with philosophers. Cultural relativists and social
constructivists rarely (if ever) make arguments which appeal to skepticism
about perception.

The Professor and the Student

After
the first two chapters, Nelson develops her argument by means of a dialogue between
a “Student” and a “Professor.” She
identifies with the Student, while the views she opposes are identified with
the Professor. Her Professor is testy and
dogmatic, shifty and evasive, making feeble arguments with a display of arrogance,
and frequently saying things that are blatantly ignorant or silly, while her
Student embodies sweet reasonableness, judicious fair play, encyclopedic
erudition, and wisdom beyond his tender years.

The views preached by Nelson’s Professor are, taken in
their totality, views which no one holds.
They are views made up by amalgamating different philosophical doctrines
(or selected portions of these doctrines) which Nelson doesn’t like, and which
are unlikely to be simultaneously held by the same person. You will never find anyone who is
simultaneously a Berkeleyan idealist, a Kantian, a post-Kantian, a Hegelian, a
phenomenalist, a postmodernist, a Kuhnian, a pragmaticist, and a pragmatist,
but Nelson’s imaginary “Professor” is such a chimera.  In fact you would be extremely unlikely to find anyone who combines even two of these, though I admit that could conceivably happen.  On almost every page, the
Professor says something that no professional philosopher would ever say.

The net effect of this portrayal of
the Professor is to further emphasize Nelson’s misleading claim that
anti-realism is the dominant, orthodox, or mainstream view. Since Nelson purports to be arguing for naive
realism, it would be more appropriate to have the Professor as a representative
realist, or perhaps to have five interlocutors, representative realist, naive
realist, idealist, phenomenalist, and postmodernist—with the understanding that
idealism was included as a historical curiosity.

The Specter of Postmodernism

Nelson
begins her book by talking for some pages about postmodernism. Why does she do this?

There’s no agreement about the definition of ‘postmodernism’, and I won’t try to come up with an authoritative definition.  I will say that postmodernism in philosophy,
postmodernism in the arts, and postmodernism in discussions of popular culture,
are often very different, and attempts (including attempts by some of their
followers) to represent them as being aspects of a single movement don’t work. The word ‘postmodernism’ has different usages,
and in some of the more popular areas, it may have little, if anything, to do
with non-realism or non-objectivity.

In philosophy, postmodernism is usually taken to refer
to several French philosophers (including Michel Foucault, Jacques Derrida, Jean-François
Lyotard, and Jean Baudrillard), but these writers don’t follow a single party
line, and it’s hard to arrive at a single doctrine which they all
advocate. As an example of variation in
the use of the term, the ‘postmodernist’ writings of the cultural-Marxist (or
perhaps former cultural-Marxist) literary theorist Fredric Jameson, immensely
influential among students of literature and popular culture, advance a concept
of postmodernism which has little in common with that of the French writers labeled
postmodernist, or with the concept of postmodernism as equivalent to social
constructivism and cultural relativism, or with any kind of non-realism.

Philosophical postmodernism is often identified with
the claims that ‘reality is socially constructed’, ‘truth is culturally
relative’, and ‘there is no truth, only various interpretations’. Here I’m going to accept these positions as a
rough working definition of philosophical postmodernism, which seems to broadly
agree with the way Nelson uses the word.
In this sense, postmodernism can be seen as a form of non-realism, since
it denies that there is a single objectively true account of facts. But postmodernism is very different from old-fashioned
idealism.

Use of the word ‘postmodernism’ to denote a style or
movement of philosophy dates from the 1970s, but postmodernism’s influence on
American literary and culture studies began earlier, when it, or something like
it, was called post-structuralism, deconstruction, social constructionism, or
constructivism.

There’s no dispute that some writers
commonly labeled postmodernist have said things which cause our eyebrows to
elevate. To take the most famous
example, when archeological researchers, after examining the mummy of the
Egyptian pharaoh Rameses II, concluded that he had probably died of
tuberculosis, the French philosopher Bruno Latour, sometimes described as a
postmodernist, objected that this finding must be wrong, because tuberculosis
hadn’t yet been ‘constructed’ by medical science in ancient Egypt, and therefore
couldn’t have existed at that time!

You misunderstand the situation in philosophy today if
you don’t appreciate that the majority of philosophers, including French
philosophers, view Latour’s assertion as totally hilarious, just as I do or just
as Nelson (presumably) does. Notice that
the way of thinking that leads Latour to suppose that tuberculosis didn’t exist
until the medical concept of tuberculosis was ‘constructed’ is not like
old-fashioned idealism and has nothing to do with skepticism about perception.

Scholars in literary and ‘cultural’ disciplines have
lower IQs and less exacting standards than philosophers, and these disciplines
are happy hunting grounds for Marxism, Freudianism, postmodernism, and other
fanciful belief systems currently rejected by philosophers. (I wish I could add feminism and critical
race theory, but I have to acknowledge that even professional philosophers are often
susceptible to these unsightly conditions.)
These disciplines taken together have a much higher head count than
philosophy departments. (Last time I
looked, some years ago but I doubt it has changed much, faculty and grad
students in philosophy departments in the US amounted to about 7,000, whereas
disciplines such as history, sociology, psychology, religion, and ‘culture
studies’ each amounted to several
times that number—I’m including women’s studies, African American studies, and
so forth, among “culture studies.”)

Postmodernism is one of a succession of French
philosophical tendencies, beginning with existentialism in the 1940s, which
came into anglophone, mainly American, literary theory and from there into more
popular discussion, largely bypassing anglophone philosophy. Generally speaking, these tendencies had
proportionately far more support from American non-philosophers than from
American philosophers or even from French philosophers. Most French philosophers were never
existentialists and never postmodernists.

So, the recurring pattern is that a trendy but distinctly
minority ‘coterie’ movement within French philosophy is transmitted into
American literary and ‘culture’ disciplines, gets media attention, is swallowed
by pundits, educational bureaucrats, and other ignoramuses, and is resoundingly
rejected by American and British philosophers, who then occasionally offer
criticisms of that tendency. (Perhaps
even before World War II, Bergsonianism might have followed much the same
pattern, but I don’t know enough to be sure of that.)

Nelson says: “Unfortunately postmodernists didn’t get
that way on account of ignoring the teachings of the philosophy department, but
on account of sincerely imbibing them. The
terrible truth is that postmodernism is what happens when honest, intelligent
people read the canonical philosophers and believe them” (p. x). This “terrible truth” is, at best, a wild and
unsupported surmise, and Nelson offers no corroboration for it. But it does explain why Nelson begins her
book with postmodernism even though the great majority of her book is not about
postmodernism and the great majority of philosophers are not (in any sense)
postmodernists.

Her claim is that postmodernism (though she
acknowledges that most philosophers reject it) follows from what most
philosophers believe, and that is, in her account, Hume’s and Kant’s views on
perception. But it is not remotely accurate
that postmodernists became postmodernists because of the Humean-Kantian views
on perception they learned from “the teachings of the philosophy department.”

If it really were the case that
postmodernism were due to Hume and Kant, we might wonder why the great majority
of philosophers, familiar with Hume and Kant, have no time for postmodernism,
while people in literary and ‘culture’ fields, knowing nothing about Hume or
Kant, subscribe to postmodernism. We might
also wonder why postmodernism waited till the 1960s to put in an appearance,
instead of beginning in the eighteenth century.

The Downfall of Idealism

Indeed, we might take our curiosity about history
further and wonder why non-realism in the form of idealism dominated
English-language philosophy until the 1890s, and was then rapidly dislodged
from this dominant position by Russell and Moore, since when representative
realism (in various forms) has had considerably more influence. (Probably the major rival of representative
realism would be phenomenalism, which I will not pursue here, except to say
that it owes something to Hume and nothing to Kant.)

The story of how the dominance of idealism in
anglophone philosophy was (rather dramatically and suddenly) overturned is told
in Peter Hylton, Russell, Idealism, and the Emergence of Analytic Philosophy.
It’s true that Russell later moved to a position known as ‘neutral
monism’, an attempt to avoid both idealism and realism but which Karl Popper
claims is fundamentally similar to idealism (Realism and the Aim of Science, pp. 90–91). However, most anglophone philosophers didn’t
necessarily accept neutral monism and probably continued to embrace some form
of representative realism. Many of them became
materialists. We can define materialism
(or ‘physicalism’) as realism plus the view that reality consists entirely of
what used to be described as ‘matter in motion’ but is now more fashionably
rendered as ‘particles in fields of force’.

A good insight into what happened to
philosophy in the English-speaking world is provided by A.C. Ewing’s fine book
of 1934, Idealism: A Critical Survey. Ewing was a realist, at a time when avowed
idealism had become a rapidly dwindling minority among philosophers. He wanted to explain just what the
disappearing idealism had been and why it was demonstrably incorrect, while
preserving certain valuable insights he believed some idealists had
contributed. Ewing’s book is a
respectful autopsy on idealism; it gives a meticulous account of the arguments
which had led earlier philosophers to embrace idealism, and the more recent
counter-arguments which had led them to abandon idealism.

Many of these counter-arguments were not so much demonstrations
that idealism was wrong as demonstrations that arguments in favor of idealism
were flawed. Remember, Aristotelian or
syllogistic logic had recently been replaced by modern logic, and this was a
big deal.

One thing we should be clear about is that,
historically, idealists were no less respectful of the objectivity of truth
than realists. They didn’t suppose that
they could make up the truth about reality according to their taste, or that
any theory was as good as any other. They
thought that the only reality we could know was constructed by our minds; they
did not think that we had any discretionary control over the way this
happened. Confronted with the contention
that the Copernican account of the solar system is no more or less objectively
accurate than the traditional account of some hunter-gatherer tribe (a view now
commonly held in literary and culture-studies circles), Bradley, McTaggart, or
Royce would have had pretty much the same response as Russell, Quine, or
Searle, including astonishment that any functioning biped could countenance
anything so ridiculous.

Idealism held that we’re not free to choose for
ourselves the way in which the mind shapes reality: this is something
involuntary, determined independently of our will. Idealists and realists would agree completely
on the facts of astronomy, mechanics, or medicine; it’s just that idealists
considered these facts to be inescapably and irreducibly molded or ordered by
our minds, while realists maintained they were descriptions of a reality which
was independent of our minds—though of course the descriptions themselves were
products of our minds.

In one sense, idealism is the diametric opposite of
postmodernism, because idealism holds that the necessary structure of the mind—the
same for all minds and indeed for all possible minds—determines how we must
inescapably conceive of the world, while postmodernism holds that different and
contradictory ways of conceiving the world can be equally valid (a view that
would have been quite baffling to idealists). This diametric opposition was understood by at
least some of the originators of postmodernism, who deliberately included Kant
and Hegel among the ‘moderns’ they were repudiating.

A Misleading Depiction

One of the misleading things about
Nelson’s account is that she supposes that, because Hume and Kant are ranked by knowledgeable people as outstanding philosophers, philosophy students are encouraged to read them uncritically.
This is ludicrously far from the case.
No one is taught Hume or Kant in a philosophy course without being given
a barrage of standard objections to their arguments. The student will be told about naive realism,
representative realism, and perhaps two or three forms of non-realism, the
points in favor of each of these positions and the points against. There will be explanations of Descartes, Locke,
Berkeley, Hume, and Kant, including criticisms which explain why much of these
authors’ work is not accepted by philosophers today.

Kant’s Critique of Pure Reason is acknowledged to be difficult (though no more difficult
than, say, first-year college physics minus the math; Nelson’s contention, p.
143, that no one can make any sense of it is mistaken), and the student will
probably use a short published ‘commentary’ or introductory guide. A short commentary will draw attention to
things in Kant which can’t be accepted, things which are problematic because of
specified objections, things where there is dispute about exactly what Kant
meant (with attention to the vulnerability of what he meant on each interpretation),
and so forth.

Anyone who regularly talks to a lot
of faculty and grad students in philosophy departments knows that
postmodernists are very thin on the ground, whereas they’re very thick in
literary and ‘culture’ disciplines. The
year before Quee’s book appeared, a little book by Paul Boghossian came out, Fear of Knowledge, straightforwardly debunking
“relativism and constructivism” from a very conventional philosophical
standpoint.

Here Boghossian points out that “anti-objectivist
conceptions of truth and rationality” are generally rejected within academic philosophy,
and as a result, there has been “a growing alienation of academic philosophy
from the rest of the humanities and social sciences, leading to levels of
acrimony and tension on American campuses that have prompted the label ‘Science
Wars’” (Fear of Knowledge, p. 8).

Despite its simplicity and brevity,
Boghossian’s book was favorably reviewed in prestigious philosophy journals. The review by Harvey Siegel concludes: “Boghossian has wise things to say
concerning the contemporary split between ‘academic philosophy’, which by and
large rejects the target views [relativism and constructivism], and the rest of
the humanities and social sciences, which, unfortunately in Boghossian’s view as
in my own, are far more welcoming of them.”

The truth
is that contemporary philosophy tends to be realist and philosophers suffer
because of their opposition to the fashionable anti-realism prevalent in other
humanities disciplines. Meanwhile,
Nelson spreads the story that mainstream academic philosophers are responsible
for the non-realism of these non-philosophers, a story which is some
considerable distance from the truth.

Nelson maintains that postmodernism
derives its anti-realism from Hume and Kant and in general from skepticism
about perception. But if you look at the
arguments proponents of postmodernism offer for their anti-realism, you find
that they appeal to cultural relativism and social constructivism, not to skepticism
about perception, and if you look at current philosophical critiques of
postmodernism, such as Boghossian’s Fear
of Knowledge, you find that they barely mention skepticism about
perception.

To take another example, in his excellent little introductory book on metaphysics, Peter van Inwagen, a realist philosopher well acquainted with non-realist thinking, gives an account of idealism, with the arguments for and against it (pp. 58–67), and there he discusses skepticism about perception. Separately, he gives an account of modern non-realism, what I have been calling postmodernism, with the arguments for and against it (pp. 93–108), and there he doesn’t mention skepticism about perception.

I have said that Nelson gives the
false impression that anglophone philosophers are predominantly non-realist. Mostly she does this by innuendo and
rhetorical spin, but on page xi, she offers two pieces of direct evidence for
her claim.

The first is a reference to John
Heil as cited in Michael Devitt’s book Realism and Truth. Heil reported in 1989 that
the number of current books advocating anti-realism exceeded the number of
pro-realist books. This doesn’t tell you
how many philosophers belong in one camp or the other, and the realist Devitt
seems to acknowledge (p. xii) that his earlier impression that anti-realism was
“rampant” was mistaken.

The second piece of evidence Nelson reports as follows: “One of the latest books from Oxford University Press still assures us of ‘our epistemological enlightenment, where we have corrected our ordinary, naive view, and accepted that external items are not accessible to sense-perception’.” Here Nelson conveys
the impression that the stated view is alive and kicking among philosophers,
and perhaps that being published by Oxford University Press is a seal of approval
for a work’s conclusions.

The book referred to here is by John
Foster (1941–2009), well known as that peculiar and possibly unique anomaly, a
contemporary advocate of Berkeleyan idealism.
Oxford University Press publishes dozens of philosophy books every year; few of them advocate anti-realism, and almost none of them advocates idealism. John Searle’s 2015 book
advocating direct realism (naive realism) was also published by Oxford
University Press.

If you do a quick online search for reviews by philosophers
of Foster’s books, you’ll easily find half a dozen, and if you read them you’ll
find that almost all the reviews mention, in passing, the odd, bizarre, or
unfashionable nature of Foster’s idealist position.

So, Nelson’s reference to Foster is misleading,
and it is part of a seriously misleading pattern.

Where
Hume and Kant Were Coming From

David
Hume (1711–1776) thought that “philosophy” showed that belief in a ‘real’
world, existing independently of our awareness of it, was unfounded and indefensible. Since Hume was by temperament a hard-headed
Scot, he found himself unable to accept this conclusion. He never did accept it, and he spent much of
his life discussing the world based on the assumption that realism is true.

As Hume himself puts it, though “profound
and intense reflection” leads to skepticism anent the world of
independently-existing physical entities, “carelessness and inattention” come
to the rescue, and anyone briefly convinced by skeptical arguments will find himself
returning to realism within an hour (A Treatise of Human Nature, p. 218).

This is often called “the Humean
shrug.” Though he believes that
“philosophy” demonstrates that realism is indefensible, he thinks we have no
alternative to accepting realism, even though we can only do so on completely
slapdash and illogical grounds, because our rational faculties are overpowered
by habit and short attention span, which automatically cause us to accept
realism as a practical matter. Hume found
this conclusion unwelcome, but he couldn’t see any way out, and he went on to take
realism for granted in all his writings on religion, history, society, and
economics.

Hume’s starting point is empiricism
in the strict sense. Empiricism in the
strict sense is the view that all human knowledge comes from experience or observation,
which involves relying on the evidence of our senses. We accumulate knowledge by observing the
world around us and by performing logical operations on our observations. We have sensory experiences, and from these we
deduce the existence of tables, chairs, mountains, stars, and the rest of
it. We start with an empty mind, a ‘tabula
rasa’ (blank slate), and anything that gets into our mind gets there from
observation, and therefore comes through our sense organs, such as our eyes and
ears.

‘Empiricism’ in the looser, everyday
sense means that we should take empirical evidence very seriously. All schools of philosophy are empiricist in
this platitudinous sense, and from now on I will use the word ‘empiricism’ to
mean only strict or empty-mind empiricism.

The impulse behind empiricism is the
conviction that our view of the world is, or ought to be, derived from evidence
about the world, and should not be prejudiced by gratuitous preconceptions. Since our evidence about the world can (according
to empiricism) only be the information we get through our senses, our view of
the world has to be derived from what our senses tell us, and from that alone.

Empiricism thus gives rise to the
empiricist project or challenge: show that our common-sense or scientific ideas
about the world are or can be derived from our observations of the world, and
from nothing else, by a process of pure deduction.

Hume concludes that this cannot be done, that we can’t get,
by any rationally defensible method, from accumulated observations (or sensory
experiences) alone to a common-sense
theory of the world (involving material objects, space and time, arithmetic, cause
and effect, and so forth). For example,
no observation of the world can ever, by itself, give us good evidence for
causation. Hume acknowledges that some
truths, such as 2 + 2 = 4, can be established by logical analysis, without any
appeal to experience. But these are what
Hume calls “relations of ideas,” not specific claims about material reality.

Since empiricists normally start out
by wanting to be able to accept realism, empiricists down the centuries have
labored long and hard to come up with a defensible way to reason from the
evidence of the senses to the existence of physical entities (from now on let’s
stipulate physical objects, except where otherwise stated, as the easiest type
of physical entities to talk about).

Unfortunately for the empiricist project, once we
accept that everything has to be deduced from the evidence of our senses, what
ultimately must follow is that all our knowledge about the world is inferred
from or constructed from our sensory experience. But since all our experiences are necessarily
subjective and mental, this seems to imply that our view of the world is
composed of elements that are subjective and mental. Thus, empiricism has sometimes led to
idealism, the view that the world (or any aspect of the world we can think about and talk about) is itself made up of subjective mental elements.

Some philosophers, to this day,
agree with Hume throughout (only in broad outline, of course; all Hume scholars
accept numerous detailed criticisms of Hume).
They are empiricists who agree that empiricism shows realism to be
rationally indefensible and they agree that we are in practice bound to accept
realism, and thus they defend the “Humean shrug.” Hence the realist Willard Quine’s remark that
“The Humean predicament is the human predicament” (quoted by Nelson, p. 38). Others are still trying to find a way to
rehabilitate empiricism by reasoning from observations alone to an objective
world of physical objects, or in other words, by demonstrating that induction
can be valid. Some still hope to refute
Hume’s demonstration of the impossibility of valid induction by deriving
induction from Bayes’s theorem (a well-known theorem in probability
theory). Good luck with that.
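
For readers who want it spelled out, Bayes’s theorem says (my gloss, not a quotation from any of the authors discussed):

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)},
\]

where H is a hypothesis, such as a universal generalization, and E is the accumulated evidence. The theorem tells you how to update a prior probability P(H) in the light of E; a standard rejoinder in the critical rationalist tradition is that it does not, by itself, supply a non-zero prior for a universal generalization, so Hume’s problem simply reappears one step back.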

Quee’s Student and Professor make a
total hash of Hume’s problem of induction.
The Student comes out with inane remarks, and the Professor, since he is
just Quee’s other glove-puppet, has no idea what to say and burbles
irrelevantly. Nelson’s Student triumphantly
asserts that something can be logically possible but physically impossible (p.
210), as though this were something Hume hadn’t thought of! Of course, Hume’s point here is precisely
that the conclusion that something is physically impossible can never be deduced
from observations alone.

Where do we get the notion that anything is physically
impossible? It’s not a truth of logic,
so according to empiricism, it must be derived from observation. What Hume has seen is that the claim that
something is physically impossible is a conclusion supposedly derived from a
finite number of instances, applied to an infinite number of instances. As we might say today, it is a conclusion
derived from an infinitesimal bit of spacetime, applied to all of
spacetime. Any such supposed inference
is deductively invalid. (It is invalid
according to modern logic and it is also invalid according to the more
primitive and incomplete Aristotelian or syllogistic logic known to Hume.) So, Hume’s question is: What’s the basis for
this conclusion, since it is not a truth of logic nor a logical inference from
observations? A universal generalization,
such as a scientific law or a piece of folk wisdom, can never be deduced from
observations alone.

Hume
demonstrates the incompatibility of empiricism and realism, but Hume doesn’t
address the fact that this conclusion leaves it open which of these is to be
discarded. He seemed to take for granted
that empiricism is equivalent to “philosophy,” or at least, to high-quality
philosophy. However, instead of
rejecting realism, we can consider rejecting empiricism.

If empiricism be discarded, then we don’t have to start
with an empty mind which is then filled with information from
observations. We can start with a mind
which, before it experiences anything, is already furnished with preconceptions
or expectations. According to this
approach, the human mind is not a blank slate at birth; it has plenty already
written on it (by billions of years of natural selection, but neither Hume nor
Kant knew that), and without that stuff that’s already written on it, the mind
would be unable to form a picture of the world.

If this is right, then to defend
realism requires abandoning empiricism. One
way to abandon empiricism is to say that the mind is not tabula rasa but has
built-in preconceptions. We can’t form a
mental picture of nature without putting something of ourselves into our
picture of nature, right from the get-go. This is what Kant thinks, and to this extent, Kant is right.

Kant maintains that the mind comes
equipped with general faculties which impose ‘categories’ on our
experience. These categories include
time, space, causation, and number. Kant
holds that, since these concepts cannot be logically derived from the data of
experience, they must be already innate in the mind. He considers them indispensable preconditions
to having meaningful experiences of the world, and not themselves logically derivable
from experience of the world. This leads
him to make a distinction between noumena
(things in themselves) and phenomena
(things as they appear to us in experience).
We can’t experience things without ordering them according to the
categories, and so we can never get at pure ‘things in themselves’ (things as
they are before they are ordered by the categories).

Kant maintains that all observations are the combined product
of external reality and mind-imposed ‘categories’, and that we cannot get
anywhere by questioning these categories.
For instance, we can’t conceive of objects existing without their
existing in space. We cannot but conceive
of objects as positioned in space, and we cannot question the fundamental
nature of spatial relationships, because our minds are so constructed that we
can only make sense of the world by thinking of it in terms of spatial relationships. Later neo-Kantians got rid of ‘things in
themselves’, and thus became more unambiguous idealists, but this was a
departure from Kant.

Everyone now accepts that no one can
defend original Kantianism, not only because of the anti-idealist arguments of
Russell and Moore in the 1890s and 1900s, but also because of Einstein’s
revolution in physics. One of Kant’s assumptions
(a very widely held assumption in the eighteenth and nineteenth centuries) is
that Newton’s theory of celestial mechanics simply has to be true. Newton’s theory includes the Euclidean
conception of space. We now accept that Newton’s
theory is false and that space is not Euclidean.

Euclid is correct in the sense that the theorems
follow from the axioms, but the axioms do not correctly map reality. Euclid’s world is fictional and therefore
Newton’s world is fictional. The
shortest distance between two points is not a straight line and the square of
the hypotenuse is not equal to the sum of the squares of the other two sides,
because spacetime is curved, a possibility which Kant assumed we could never seriously entertain.
Because of this, present-day Kantian metaphysicians like Michael
Friedman have to accept that the ‘bounds of sense’ are much less constraining
than Kant allowed, though they persist in maintaining that there are some
bounds there.
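
To make the point about the Pythagorean theorem concrete, here is a worked illustration (my addition, using the simplest curved space, the surface of a sphere of radius R, rather than relativistic spacetime itself). For a geodesic triangle on the sphere with a right angle, legs a and b, and hypotenuse c,

\[
\cos\!\frac{c}{R} \;=\; \cos\!\frac{a}{R}\,\cos\!\frac{b}{R},
\]

and only in the flat limit, as R grows without bound, does expanding the cosines recover the familiar relation c² = a² + b². In a curved space the Euclidean theorem holds only approximately, for triangles small relative to the curvature.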

If empty-mind empiricism can’t give us realism,
whereas we can hope to get to realism by accepting that the human mind has
built-in preconceptions, one possible conclusion, at first glance a very
natural one, would be that some kind of guarantee of the truth of some
propositions, perhaps very general propositions, is innate in the mind. That’s the way Kant reasoned.

But this step of Kant’s is mistaken, according to critical
rationalism. Critical rationalism
combines the ‘Kantian’ view that the mind has to contribute something logically
prior to all experience with the ‘empiricist’ view that there are no guarantees
of the truth of any claims about the way the world is, even the most general
claims. (For a clear exposition of the
critical rationalist position, as it emerges from the criticism and refutation
of Hume and Kant, see Realism and the Aim
of Science, pp. 31–88.)

(There is much more to Kant than I
am mentioning here. For example, I am saying
nothing about his argument from the “antinomies” or his use of the transcendental
type of argument, both of which give him additional reasons to reject pure empiricism
and supplement empiricism with self-evident truths known independently of
experience.)

Critical rationalists say that Kant is right to conclude
that the mind cannot make sense of the world unless it has built-in
preconceptions which it tries to impose upon the world, but Kant is wrong in
thinking that these preconceptions have to take the form of unquestionable, immovable
truths. From a critical rationalist
point of view, humans have an innate drive to jump to conclusions about the
world, a thirst to believe in theories, along with the capacity to abandon any
individual theory and replace it with a new and different theory. These theories can’t be inferred (by any
valid logical process) from experience but can be suggested by experience, and once
formed, can sometimes conflict with experience, and therefore can be tested
against experience, possibly leading to their abandonment and replacement with
new theories.

The mind is not a passive recipient of observations,
but is an active explorer. Any active
explorer has to begin with expectations or preconceptions. Observation must always be preceded by theory
(including the unconscious theories of which we become aware only when they are unexpectedly contradicted by experience).

The
Ascent from Naivety to Physics

A
classic argument against naive realism is the illusion of the bent stick. A straight stick half immersed in water looks
(to someone with no prior experience of half-immersed sticks and not having
been told about optics) as if it were bent.
Usually, most of us learn at some point in childhood that the stick is
not really bent, even though it looks bent.

Nelson’s Student has a fine old time ridiculing this
point by appealing to the fact that most of us have experience of half-immersed
sticks or have been told about optics (pp. 56–64). But this entirely misses the point. (Indeed, it implicitly denies the point,
which is absurd. A straight stick half-immersed in water does indeed look bent.) It
is question-begging to appeal to the common-sense conclusions we have accepted,
as data, when what we are evaluating is precisely the claim that those
conclusions were arrived at invalidly—that there is no logically sound way to
get from experience to those conclusions.

Learning that the bent-looking stick
is straight is one of many corrections we make to the infant’s ‘naive realism’
as we go through life. Another
well-known case is the understanding that objects in the distance look smaller
because they are further away. This
understanding is not automatic but has to be discovered.

Traditionally, the BaMbuti Pygmies lived in the forest
and never left it, their visibility always restricted to about a hundred yards. Colin Turnbull went to live among the BaMbuti
to study them, and he took one of them outside the forest to the plains:

“As we turned to get back in the car,
Kenge looked over the plains and down to where a herd of about a hundred buffalo
were grazing some miles away. He asked
me what kind of insects they were, and I told him they were buffalo, twice as
big as the forest buffalo known to him. He laughed loudly and told me not to tell such
stupid stories, and asked me again what kind of insects they were. He then talked to himself, for want of more
intelligent company, and tried to liken the buffalo to the various beetles and
ants with which he was familiar.

“He was still doing this when we got
into the car and drove down to where the animals were grazing. He watched them
getting larger and larger, and though he was as courageous as any Pygmy, he
moved over and sat close to me and muttered that it was witchcraft. . . .
Finally when he realized that they were real buffalo he was no longer afraid,
but what puzzled him still was why they had been so small, and whether they really had been small and had suddenly grown larger, or
whether it had been some kind of trickery.”

I
grew up in England, where ambulances have a clearly marked two-note siren. As a child, I was vaguely aware that when an
ambulance went by me, the pitch of its siren would drop. (According to my recollection, the fall in
pitch was well over a whole-tone, in fact nearly a third, but this seems
incredible now, and perhaps my memory has exaggerated it. Of course, the exact drop would depend upon
how fast the ambulance was going.) I am
sorry to say that I never figured out by myself that the drop in pitch was an
illusion, caused by the fact that things emitting a sound and moving away from
the hearer are perceived as having a lower pitch than things emitting the same
sound and moving towards the hearer. It
was only when I read about the Doppler Effect that it suddenly dawned on me
that this was the explanation of something I had heard many times without
paying attention (and, by the way, that I was an idiot). I surmise that there might even be some
adults who never learn this, and continue to think that the pitch of a sound
has dropped when really it has remained the same.
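
Since the exact drop does depend on the vehicle’s speed, here is a minimal sketch (my illustration, not the author’s; the speed of sound and the vehicle speeds are assumed round figures) of how the standard Doppler formula for a moving source turns speed into a change of pitch:

import math

SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 degrees C (assumed)

def pitch_drop_semitones(source_speed: float) -> float:
    """Drop in perceived pitch, in semitones, as a source passes a stationary listener."""
    # Frequency heard while the source approaches, relative to the emitted frequency.
    approaching = SPEED_OF_SOUND / (SPEED_OF_SOUND - source_speed)
    # Frequency heard while the source recedes, relative to the emitted frequency.
    receding = SPEED_OF_SOUND / (SPEED_OF_SOUND + source_speed)
    # Convert the ratio between the two into equal-tempered semitones.
    return 12 * math.log2(approaching / receding)

for mph in (30, 60, 90):
    metres_per_second = mph * 0.44704
    print(f"{mph} mph: drop of about {pitch_drop_semitones(metres_per_second):.1f} semitones")

On these assumptions, a siren passing at 30 mph drops by roughly 1.4 semitones, at 60 mph by roughly 2.7 (already close to a minor third), and only around 90 mph does the drop reach about four semitones, a major third. So a remembered drop of nearly a third is not wildly implausible for a fast-moving ambulance, though it would be an exaggeration for ordinary town traffic.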

When
I was four or five, I spent the summer with my grandparents in Rothesay, Isle
of Bute. Walking with my grandfather I
pointed to the sky and said “What are those white things?” He looked at me
intently and said “Clouds,” or since he was a Scot, “Cloods.” I surmise he had a puzzle. Was I seriously retarded, or could it be that
children from English cities had never seen clouds?

I had a puzzle, which, being an
introvert and half-Scots, I would never mention. Did he mistake my question as referring to
those big fluffy white things which were, obviously, duh, clouds? Or was he correctly answering my question,
and was it the case that those objects I was asking about really were a special,
rare type of cloud? Soon after this, I came
to understand that these small semi-translucent circular white objects were not
in the sky at all, but in my visual apparatus, that they were ‘floaters’ (and,
by the way, that I was an idiot). My sharp
awareness of the floaters was temporarily enhanced by the unusual experience of
a ‘big sky’, undistracted by buildings, tall trees, or other objects.

We
repeatedly make corrections to our picture or theory of the world, as we learn by
trial and error to interpret the evidence of our senses more accurately. The naive realism of the toddler gives way to
the less naive realism of the adult, and then to the even less naive realism of
the scientifically informed adult.

You might wonder what happens in
the first few weeks, months, and years following birth. We have recently come to know a lot about
this, thanks to Alison Gopnik and her colleagues. The answer is that even more elementary ways
of interpreting the world have to be learned by conjecture and refutation, or
trial and error, the only way they could be arrived at. The fact that objects can continue to exist
when they disappear from view has to be discovered by trial and error. Some of these things we learn so fast that it
seems likely we have an inborn proclivity to learn them. For example, contrary to what used to be supposed
(on the basis of armchair speculation), the baby understands that he is an individual separate from the rest of the world no later than the first few weeks after birth.

We
learn from science that the Sun does not move across the sky; instead, the appearance of the Sun’s movement arises because the Earth is spinning. The stars do not twinkle; the appearance of
twinkling is due to the interference of our planet’s atmosphere. We sometimes see a rainbow in the sky but
there is no object there, corresponding to this rainbow; it’s a trick of the
light. There is no pot of gold at the
foot of the rainbow, not just because there are no leprechauns, but because
there is no foot of the rainbow. There
is no water in the desert in the experience of witnessing a mirage; another
trick of the light. A pain in my toe is
not in my toe at all; my brain makes a map of my body and the pain is in the ‘toe’
part of that map—hence, I can experience a completely genuine pain in my toe
even if my legs have been amputated.

As we learn more and more, our
realism becomes less and less naive.
Educated people take for granted that the vast majority of the volume of
a solid object such as a granite rock is empty space, and that there is more
heat in an iceberg than in a red-hot poker, though these assertions, now common
sense among the scientifically literate, would have sounded like mystical
riddles as recently as two hundred years ago.
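
A rough back-of-the-envelope calculation shows why the iceberg wins (my illustration; the sizes, temperatures, and heat capacities are assumed round figures, and “heat” is taken crudely as thermal energy above absolute zero at constant specific heat): heat content scales with mass, while temperature does not.

# Crude comparison of total thermal energy (relative to absolute zero),
# treating specific heat capacities as constant -- a simplification.

ICE_SPECIFIC_HEAT = 2100.0    # J/(kg*K), approximate value for ice
IRON_SPECIFIC_HEAT = 450.0    # J/(kg*K), approximate value for iron

iceberg_mass = 100_000_000.0  # kg: a modest 100,000-tonne iceberg (assumed)
iceberg_temp = 263.0          # kelvin, about -10 degrees C

poker_mass = 2.0              # kg: a hefty fireside poker (assumed)
poker_temp = 1000.0           # kelvin, glowing red hot

iceberg_heat = iceberg_mass * ICE_SPECIFIC_HEAT * iceberg_temp
poker_heat = poker_mass * IRON_SPECIFIC_HEAT * poker_temp

print(f"Iceberg: about {iceberg_heat:.1e} joules")   # roughly 5.5e13 J
print(f"Poker:   about {poker_heat:.1e} joules")     # roughly 9.0e5 J

The poker is far hotter, but on these assumed figures the iceberg contains tens of millions of times more heat, which is the sense in which the scientifically literate remark is true.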

Bertrand
Russell famously made the remark, quoted by Nelson, that “Naive realism leads
to physics, and physics, if true, shows that naive realism is false.” (The Slightest Philosophy, p. 68).

Russell was fond of
paradoxical-sounding bons mots, but the truth stated here, slightly
misleadingly for paradoxical effect, is not a genuine paradox. When science corrects naive realism, the
scientist replaces naive realism with a less naive realism, and this is in
principle no different from the corrections a normal adult non-scientist has
been making all her life. When a
correction is made, all the previous observational history of the person making
the correction is re-interpreted. Some
parts or aspects of the earlier naive realism are preserved in the not-so naive
realism that supplants it. The person,
so to speak, checks to make sure that nothing she did when getting to the new
theories relies upon the now discarded parts of the old theories.

Science refutes many notions held by naive realists,
but it does so in such a way that we can (and do) adapt our common-sense
notions to certain corrections by science.
In doing so, science explains the partial or approximate truths
contained in our early version of naive realism. It is a key part of critical rationalism that
false theories, including theories once held to be true but now acknowledged to
be refuted, can continue to be very useful.

The air of paradox in the Russell
quote arises from our tendency to read it as saying that science is somehow
indebted to uninformed naive realism. Although
this is true in terms of chronological progression, it is not true in terms of
the logic of statements. By the time
someone is undertaking scientific enquiry, they have corrected and replaced
their earlier naive realism. Thus the
statement “Naive realism leads to physics” is parallel to “The belief that the
Earth is stationary leads to the belief that the Earth is moving,” or “Newton’s
theory of gravity leads to Einstein’s.”

Common sense is a set of theories of how things are,
and as we become more scientifically educated, we understand that common sense
uninformed by science includes false theories which require correction, though often
false theories that have some workable approximation to the truth within a
limited range. But a new common sense
emerges among scientifically educated people.
The new common sense is less naive than the common sense of the
scientifically illiterate. The old
picture is abandoned and the new picture is closer to the truth.

The new-born baby’s naive realism is
corrected and revised, eventually leading to the very different naive realism
of the toddler, which is further corrected and revised leading to the very
different naive realism of the adult. If
the adult paid attention in high-school science (and if the teacher still
teaches any science instead of leftist ideology), his realism will be even more
different and his conception of the world even more accurate. But the picture always remains incomplete; it
may be indefinitely modified and improved.

If we ever get an opportunity, it
would be fruitful to analyze the ‘common sense’ of a feral human, such as one
raised by wolves. (Thanks to the
techniques developed by Gopnik and others, we can now analyze what babies, and
presumably feral humans, think about the world, even though we can’t literally
ask them.) Past incidents of this phenomenon
seem to show that the feral human is, once past a certain age, permanently unable
to learn some aspects of even the most naive form of normal adult ‘common sense’. I conjecture that such a research project
would find that exposure to language is essential for arriving at the adult’s naive
common-sense view of the world. If so,
this would indicate that mere observations of physical objects would never be
enough for the child to acquire the rudiments of common-sense understanding. Even the most naive common sense of which we
are normally aware is a highly elaborate theoretical system that might require
an input from culture, especially language.
If this were true, it would predict that a child raised from birth by a
single adult in an isolated cottage, where the adult didn’t talk much, would be
mentally retarded.

The
Semantic Argument for Naive Realism

Nelson
has an argument, which she apparently thinks is an argument for naive realism,
which I can summarize as follows:

Skeptics have claimed
that when we think we see external physical objects, what we really see are
impressions, or appearances, or sensations, or sense-data from which we infer
the existence of external physical objects.
But this is wrong because we really do see external physical objects.

Her Chapter 3 is particularly concerned with this
issue, though it makes numerous appearances elsewhere in her book.

The thing you have to be clear about when approaching
any argument like this is that it is semantic.
It’s not talking about the way things are in the world, independently of
our discussion, but only about the rules governing the way we talk about things
in the world. Here the assertion is that
we’re not allowed to use the verbs ‘see’ or ‘perceive’ in a particular way,
even though that way is part of actual English usage. If you fail to grasp this point, you will be
bewitched by mere words and unable to talk any sense about actual perception.

The simple fact is that in English, we can use ‘see’
or ‘perceive’ either to refer to the experience of seeing or perceiving, or to
refer to the experience of seeing or perceiving when that experience is
appropriately linked to the existence of an independently-existing object seen
or perceived.

Nelson supposes that the word ‘see’ must be used either for seeing a physical object (when it’s really there, and the appropriate causal connections between the object and our visual apparatus are in place), or for having the experience of seeing a physical object (when this might be a hallucination, and the object might not really be there, or the appropriate causal connections might not be in place), but not for both.

In correct English, the word ‘see’
can be used for both, and it has several other meanings too (such as ‘understand’,
‘ensure’, or ‘match a bet’). Many words
in natural languages have several meanings, sometimes related, though distinct,
other times not obviously related. Take
the word ‘table’ for instance. And I
recently wrote an article in which I discussed two quite different though
related meanings of the word ‘fact’.

Nelson’s “Student” and “Professor” go
on for page after page arguing to no effect because they don’t acknowledge the
simple truth that the word ‘see’ has both meanings in English. This is allowing yourself to be bewitched by
words.

Nelson says that “if the
relationship between your brain and your retina is called ‘seeing’, then we’re
going to need a new word to refer to the relationship between your eyeball and
a boat on the horizon” (p. 53). No, that
is simply a false statement about English usage. We have to accept that, in the English
language, the word ‘see’ has more than one meaning, just as many other words
do. The word ‘see’ can be used to make a phenomenological report, describing a subjective experience, or to make a claim to have had that subjective experience in an appropriate causal relation to a real external object.

Both uses or senses of ‘seeing’ occur in English. It’s somewhat ironic that Nelson gives a chapter
the title “Seeing Things.” ‘Seeing
things’ is a standard and very common term in English for hallucinating—seeing
things that aren’t there and don’t exist at all.

It’s a fact about the English
language that ‘see’ has more than one meaning, and can be used to apply to
subjective visual experiences with no external correlate. If you were instructing a foreigner learning
English that the word ‘see’ could not be used in this way, you would be telling
him an untruth and impairing his grasp of English.

You might wish that ‘see’ did not
have this meaning in English (and the corresponding term in, at least, all
closely related languages I’m aware of, such as French, German, and
Spanish). You might propose that we
adopt a different convention for epistemological discussions, and get rid of
this meaning of ‘see’. This is what some
philosophers, including some quoted by Nelson, have proposed. You might even make this proposal as a
linguistic change in the English language, the way some folks advocate that we
should load the language up with 272 pronouns to stand for 272 ‘genders’. But as things are, the use of ‘see’ to denote
the experience of awareness of a mental image is normal, correct English.

The fact that there is a logical gap
between our experience of seeming to perceive an external object and the actual
existence of the external physical object is not a fact that can be made to go
away by any mere analysis of terminology.

Since I was reviewing Nelson’s book, I skimmed through
John Searle’s recent book in which he advocates naive realism, though he
prefers to call it “direct realism.”
Searle is an outstanding philosopher and I was much impressed by his
work on intentionality and on consciousness (though I was already prepared for
the possibility that he might screw up badly, because of his really terrible
attempt to derive ‘ought’ from ‘is’).

I found that he has an argument essentially the same
as Nelson’s. Searle says that we
“directly” perceive external physical objects, and that when we mistakenly
think we’re perceiving an object (mistaken because there’s really no object
there) we’re not actually perceiving anything.

This argument, in its Nelsonian or
Searlian form, is a linguistic or semantic argument. It is pure talk about talk, and even as such
it is false. It claims that what we mean
by ‘seeing’ is seeing an external physical object, and that therefore it must
be wrong to say that we ‘really see’ some intermediate mental entity between
ourselves and the external physical object.
This is simply a false allegation about correct English usage. Other writers, some of them quoted by Searle,
who say that we only ‘see’ in the sense of having a visual experience, and that
we don’t really see external objects, are of course equally wrong.

Thus Nelson’s and Searle’s arguments
(really the same argument) fail. We can
be said to see something intermediate between the object and ourselves. But also, it’s not essential to talk that
way. In other words, it’s a fact about
English that ‘seeing’ (in one sense) can be used as synonymous with ‘seeming to
see’ (where ‘see’ has another sense). We
can put the same point differently by saying, for example, that we can
have the same visual experience whether or not there is really some external
object (and precisely the appropriate external object) causing that visual experience. Whether or not we choose to say that the
experience without the object is a case of ‘seeing’ is simply a question of freedom
of choice within linguistic convention and gives us no information whatsoever
about perception or epistemology.

The fact that the Nelson-Searle argument is purely
semantic can be confirmed by translating the traditional discussion of
perception, including the arguments for representative realism, into Nelsonian
or Searlian terminology. Instead of
saying that we ‘see’ an immediate ‘object’ of sensory experience, we can say
that we ‘seem to see’ or that we have ‘a visual experience of’. The arguments are unchanged by this
translation, and still make the same sense, confirming that the Nelson-Searle
argument is exclusively about the meanings of words.

Other
Arguments for Direct Realism

Searle,
of course, gives other arguments for what he calls direct realism, and I won’t
try to cover most of them here. I will
mention one argument, because it’s very simple, and he finds it completely
convincing while I find it totally unpersuasive. The same argument is given by Armstrong—both
of them acknowledge getting it from Berkeley (who used it to argue against
representative realism and therefore in favor of non-realism).

Searle’s argument is that
representative realism claims that the sense data (or whatever we want to call
the subjective experience of perceiving) resemble
the object perceived. He says this can’t
be right because the object perceived is invisible and undetectable except via
the sense data, therefore the sense data and the object cannot be said to
resemble each other. He claims this is
like saying I have two cars in the garage which resemble each other even though
one is completely invisible (Seeing Things as They Are, pp. 225–26).

But how can Searle avoid saying that the experience we
have of seeing something has some resemblance to the object seen? If it doesn’t, it can’t allow us to conclude
that we are ‘seeing things as they are’, can it?

Anyway, this argument proves too much, because it
rules out all cases of becoming aware of B through awareness of A, where there
is no other way to become aware of B. Searle’s
argument would imply that it must always be wrong to say that a model we have
mentally constructed of some phenomenon we have detected resembles the
phenomenon. And this is trivially false.

For example, consider the technique of observing the
structure of deep layers of the Earth by means of tomography. (Think of the guy looking at the computer screen
in Dante’s Peak.) There is no other way to observe this
structure except through tomography, but we don’t therefore conclude it’s
meaningless to say that the graphic which appears on the screen resembles the structure of the deeper
levels of the Earth.

You might object to this that it is
not a pure case of perception, since we are using normal vision to look at the
graph on the computer screen. This is
actually irrelevant, but if it bothers you, imagine that, some time in the
future, when for some reason it becomes vitally important to be aware of
changes in the structure of deep levels of the Earth, people’s brains are wired
to the tomography equipment, so that they don’t look at a screen but just see
the graphic as a visual experience. People
would be perceiving the structure of deep layers of the Earth, by dint of the
fact that they were having a subjective experience of seeing something which
resembled the deep layers of the Earth.
Searle would have to say they are directly perceiving the deep layers of
the Earth and Nelson would have to say they are naively seeing them.

After all, what does ‘resemblance’
mean, in the context of perception? It
means that the mental entity gives us information about the external entity perceived. For example, a histogram showing the weight
by age of the US population resembles
the distribution of weight by age in the US population, and we can say this
because the former gives us information about the latter. Again, if it bothers you, imagine that
people’s brains are wired to the output of the research organization which
collects and processes this statistical data, and they then have the experience
of seeing a histogram. We would then be
perceiving the distribution of weight by age in the US population, and we could
say that our experience of perceiving the histogram (which would exist only inside
our skulls) resembles the actual distribution of weight by age in the US
population, something that we would have no other means of observing.

Someone might feebly object that
providing information is not the same as resemblance, but in that case we can
say that representative realists don’t need to employ the word ‘resemblance’. The representative realist can simply say
that the experience of seeing, when it is veridical, gives us information about
the object seen.

We come up with the hypothesis that
physical objects exist, in order to make sense of our subjective experiences of
perception. Generally, we come up with
this hypothesis in our first couple of weeks outside the womb. How can that seriously be disputed?

If I have two cars in the garage,
one of which is invisible, and we’re
thinking about the hypothesis that properties of the visible car are caused by
properties of the invisible car, then it makes perfect sense to say that
the visible car resembles the invisible car. This assertion would be the
statement of a hypothesis, a guess, a surmise, a conjecture, of course. But all our statements or beliefs about the
world are hypotheses, guesses, surmises, or conjectures. They can never be anything else, or so I surmise.

The title of Searle’s book is Seeing Things as They Are. But this might be taken as hyperbole. Aside from optical illusions, which are
everywhere, our perceptions are highly selective: we don’t see an object’s
mass, chemical composition, electrical charge, radioactivity, ultraviolet
‘color’, or temperature. There could be
an organism which saw all these, but did not see size, shape, or color in the
human-visible range. Presumably such an
organism would, according to Searle, also be seeing things directly (or,
according to Nelson, naively) and ‘as they are’, even though its visual
experience or sense-data would be very different from ours. Presumably a bat which perceives objects by
echolocation also perceives things, naively or directly, as they are. And the same goes for a dragonfly or a
paramecium. So, there could be many
quite distinct ways of ‘seeing things as they are’ which were vastly
different. I won’t say this is an incorrect
use of the phrase ‘seeing things as they are’, but it’s a bit different from
the most natural and immediate understanding of that phrase, according to which
there would be just one form of visual appearance which could be called ‘seeing
things as they are’. In that sense,
there is no such thing as seeing things as they are, and never conceivably
could be, because there are many different ways of seeing things, not one of
them privileged over the others. In that
sense, we can only see things as they are represented, though we can speculate
about the accuracy of the representation, and even subject it to tests, perhaps
improving it.

Is
There Some Representational Entity between Perceiver and Perceived Object?

People
who defend naive or direct realism sometimes frame it like this: when we
perceive a physical object, there is nothing, such as a mental state or
distinctive subjective experience, intermediate between us and the object.

This strikes me as so absurd that it is not even
intelligible. Nonetheless, naive or direct
realists do tend to use turns of phrase that evoke it. They seem uncomfortable with any admission
that there is anything which might be called sensa or sense-data. The vague notion that it’s possible to deny
the existence of any ‘intermediate entity’ may be what unconsciously lies
behind the appeal to the purely semantic argument I refuted earlier.

Nelson agrees that “it’s hard to object to the claim
that we can perceive an oncoming freight train only by means of data we have
gathered by means of our senses” (p. 14).
Indeed, very hard! This looks
like a grudging admission that sense-data do exist, but one page later, Nelson
refers to “representationalist reifications,” insinuating that the data we have
gathered by means of our senses don’t really exist. (To reify is to culpably treat an abstraction
as though it had concrete existence.) As
we read on, later in her book, we continue to get the feeling that Nelson has a
hard time letting go of the ‘absolutely unmediated’ theory.

Some naive realists apparently feel
that if they grant the existence of something intermediate, such as a person’s
subjective experience identical to that involved in perceiving an external
object, they will have given the representative realist a foot in the door. That’s right!

Historically, some sense-data
theorists got themselves into a pickle because, being mostly materialists, they
felt they had to try and explain sense-data in terms of the physics of
perceptual processes. If my experience
of seeing a tree is not the tree (obviously correct), and not in the tree (also
obviously correct), then perhaps it’s somewhere in my optical apparatus, such
as in my retinas or in my brain. Hence
the many different views and coined jargons in this area. But I would say that we might not yet know
enough about subjective mental processes to explain them in worked-out physical
specifics. (There can be little dispute
that the subjective experience of perception occurs somewhere inside the
perceiver’s skull.)

We can explain the essential point here and elsewhere
in purely phenomenological terms (we can stick to the subjective experience
without trying to translate it into physics or physiology). Whether we see a tree or have a hallucination
of seeing a tree, we have a certain type of visual experience. The visual experience is common to seeing a
tree and hallucinating seeing a tree.
(It’s also common to a third type of possibility, for instance that we’re
seeing a hologram of a tree.) That
visual experience of seeing something is the kind of thing that used to be
called a sense-datum. The term ‘sense-datum’
is currently still in some bad odor (which arose because of many philosophers’
involvement, in the 1950s and 1960s, with the vogue for ‘linguistic philosophy’
or ‘ordinary-language philosophy’, derived from J.L. Austin and the later
Wittgenstein). I don’t care whether we
rehabilitate it or drop it. We can call
it a perceptual-seeming experience, or whatever. It is something that objectively exists, as
any subjective experience objectively exists, and in the case where the
perception is veridical, it is intermediate between the perceiver and the external
object perceived.

I will briefly mention one
elementary blunder often made by naive or direct realists I have talked with. They think that seeing something on television,
or in a mirror, or through a telescope is indirect whereas seeing something
with the naked eye is ‘direct’. This
distinction is bogus. If seeing
something with the naked eye is direct, then seeing something on television, or
in a mirror, or through a telescope must be direct.

I see some things with the aid of
spectacles. I could use contact lenses. Or I could have laser surgery on the lenses
of my eyes. It should be obvious that
there is no distinction in principle between these three. My body has equipment, and it can make no
difference in principle if I artificially modify or augment my body’s
equipment. When Armstrong or Searle call
themselves ‘direct realists’, the directness does not lie in the causal process
of perception, but in the alleged non-existence of the sensory experience as an
object of perception.

Some deaf people can be cured of
their deafness by installing a cochlear implant in their skull. This equipment works on completely different
principles than the natural human apparatus of hearing. But, after a bit of practice by the patient,
the result (when the operation is successful) is very similar (as described in
Michael Chorost’s book Rebuilt). It is clear that we can’t reasonably say that
the cochlear implant is any more or less direct than the natural system. Artificiality in itself does not make
perception any less direct (and epistemology fails unless it easily encompasses
cyborgs).

If any perception is direct, then all perception is
direct. However, as a matter of fact, all
conscious perception is indirect, and can only be indirect, in the sense that
the experience of perception is not the external object perceived, and persons do
conjecture the existence of the external object perceived to account for their
experiences of perception.

Is
This a Hallucination which I See Before Me?

In
an attempt to head off the implications of the fact that people sometimes
hallucinate—seeing things that aren’t there—Nelson (through her glove-puppet
Student) maintains that hallucinations are always misinterpretations of
something that’s really there (pp. 74–75).
So there are no genuine hallucinations, only misinterpretations of
things perceived. This bold claim heroically
contradicts everything that psychologists know about hallucinations.

For example, there are about one million sufferers
from Parkinson’s Disease in the US, and over a third of them experience
hallucinations, most commonly seeing someone they know who isn’t really there,
often someone who has died (Oliver Sacks thinks that these hallucinations are
not due to the disease, but to the medications). These Parkinson’s patients see a real person,
large as life, in complete detail, every hair in place, a few feet away from
themselves.

Are these sick people misinterpreting a speck of dust
or a ray of light as a human being? There’s
no evidence for this, and if it were true, the gap between the objective
stimulus and the hallucinated object would be enormous; for most purposes the
situation would be the same as an apparent perception with no external stimulus
at all.

In any case, arguing for skepticism
about perception by appealing to hallucinations or illusions is ultimately
merely illustrative and rhetorical. If,
as far as we could tell, perception were always one hundred percent veridical, there
would still be a logical gap between the subjective experience of perceiving an
object and the independent existence of that object, though I admit it might
then be tougher, as a practical matter, to get anyone interested in that fact.

Searle says that he prefers the term
‘direct realism’ to ‘naive realism’, because ‘naive realism’ has become associated
with a group of philosophers known as disjunctivists (Seeing Things as They Are, p. 15).
The fact that Nelson seems to deny that one can have the same subjective
experience when hallucinating as when seeing a real object makes me surmise
that possibly Nelson is a disjunctivist.
But since I haven’t read much of the literature on disjunctivism and since
Nelson’s definition of naive realism is so very obscure, I’m not sure of that.

Armstrong’s
Three Arguments against Representationalism

I
have said that Armstrong’s 1961 book is the best statement I have seen of a
case for direct or naive realism. Armstrong
starts by assuming that in the theory of perception there are three live
alternatives: representative realism, direct realism, and phenomenalism. Armstrong has a chapter on the refutation of
phenomenalism (much of which I agree with) and a chapter on the refutation of
representative realism, in which he presents three arguments.

First, he says that according to the
claim that sense-impressions are the only immediate objects of perception, we
can have no reason to believe that there are physical objects. But, as I have pointed out, talk about what
is or is not an immediate object of perception (or immediately perceived) is nothing
more than talk about talk. If we recast
the same point in different language, we remain with a logical gap between experience
and external objects, and so the same objection applies to direct realism. The direct realist does not deny that a
subjective experience of perception is essential to perception, nor that the
subjective experience of perception is not the external object perceived, nor
that the properties of the subjective experience of perception are not the
properties of the external object perceived.

Armstrong states that if the
representative theory is correct, “we have no evidence at all for passing from
the immediate perception of sense-impressions to the mediate perception of
physical objects” (p. 29). This is pure
Hume, and as far as it goes, properly interpreted, it is perfectly correct.

Armstrong then considers the point that although we
have no inductive evidence of the
existence of the physical world, “we might form the hypothesis of the existence
of the physical world; and, having formed it, we might find that it served to
explain the regularities and irregularities in the flow of our sense-impressions”
(p. 30). This is excellent, but
Armstrong avoids the natural conclusion with the following interesting passage:

“The objection seems just, and
blunts the edge of the argument. But it
does not turn it altogether. For surely
we are not prepared to degrade bodies into hypotheses? We want to say that our assurance of the
existence of the physical world is far stronger than any assurance we could
obtain by indirectly confirming a theory.
If the Representative theory were true, it would be proper to have a lurking
doubt about the existence of the physical world. Yet such a doubt does not seem to be proper” (p.
30).

This passage begins with a careless
slip, since it is not bodies themselves which are being “degraded” into
hypotheses, but our assertion or belief or theory that there are bodies. We may want
to say that our assurance of this or that is stronger than any assurance we
could obtain by indirectly confirming a theory, but if so, that want must be
forever unrequited. The assurance we get
from indirectly confirming a theory is the strongest possible assurance for any
statement or belief. That’s as strong as
it gets. Some might say that logical
truths are stronger, and I don’t dismiss that out of hand; however, no one
claims that the existence of physical objects is a logical truth. The whole passage tends to confound our subjective
feelings of conviction with what we can logically demonstrate.

Armstrong’s second argument is the
one about resemblance, which I have already refuted in reference to
Searle. And again, the essentially
semantic nature of the question as posed by Armstrong and Searle means that the
objection simply reappears with direct realism, for, setting aside the language
of direct and indirect perception, the fact remains that the subjective
experience of perception has qualities quite different from the objective
properties of physical objects. This
logical gap cannot be made to go away, which is of course why we ought to acknowledge
that the subjective experience represents
the physical object.

Armstrong’s third argument I find
difficult to understand. He claims that
it makes no sense to think of a physical object that can in no way be “immediately
perceived” (p. 33). Again, this is
leaning heavily on the semantics. He
says that we can’t say that all perception of external objects is mediate,
because this implies that they could be immediate, and if they could be
immediate they must sometimes be immediate (that doesn’t sound like a fair
summary, as it is so obviously wrong, but it is the best I can make of what he
says on p. 33).

This kind of reasoning is fallacious. Just as we can entertain the possibility that
there are no disembodied minds, no living things that don’t respire, or no
particles lacking relativistic mass, so we can entertain the possibility that
there are no examples of conscious perception without a subjective
representation which is distinct from the external object perceived (but which
conveys information about the object perceived and in that sense resembles it).

Political
Implications of Epistemology

Views
like Nelson’s have often been prevalent among libertarians, usually stemming
from Ayn Rand, though as far as I noticed (Nelson has no index) Rand is not
mentioned by Nelson. And I don’t know
whether Rand, though she was certainly extremely naive in some ways, would have
admitted to being a naive realist.

Libertarians who embrace certain metaphysical or
epistemological views often believe that these views are somehow congenial to
libertarianism, whereas any views they find unconvincing are the first steps to
the Gulag. I criticized some of these
theories about the link between epistemology and politics in my 2002 article ‘Ayn
Rand and the Curse of Kant’ and here I will just add a few observations.

As Nelson seems to uneasily
acknowledge, historical claims about the causal relation between philosophy and
politics have nothing to do with the merits of the philosophical theory in
question. For instance (an example she
mentions, p. 140), supposing it to be true that Darwinism caused the Holocaust,
this would have exactly no bearing on the truth or value of Darwinism as a
scientific theory.

So, even if it were true that skepticism
about perception had to lead to the Gulag and the Final Solution, this would not
affect the question of whether skepticism about perception is sound or reasonable. But it would be saddening, and to some
discouraging, if it were the case that the dissemination of a sound (or even
minimally defensible) philosophical analysis led by necessity to a horrendous
political outcome. I don’t think we have
to worry about that, because the kind of historical causation from philosophy
to politics advanced by Rand, Peikoff, Kelley, and Nelson clashes with the
historical evidence at so many points.

Nelson gives us no extended discussion of politics in
this book, but she sprinkles in brief remarks about politics here and there. Here are some examples.

She repeatedly associates David Hume with Nazism (pp. 222,
231, 239). The only tangible link mentioned
is that Johann Georg Hamann and Johann Gottlieb Fichte each admired Hume and
each influenced German nationalism. So
this, as Huck Finn might say, is a stretch.

She attributes Jonathan Edwards’s role
in the Great Awakening, and the fact that some of Edwards’s followers burned
books, to the fact that Edwards held to a quasi-Berkeleyan idealism (pp. 228–230). But many evangelical preachers with more
impact than Edwards, such as the Wesleys and Whitefield, don’t seem to have
been influenced by skepticism about perception.
Evangelical ‘revivals’ may be explicable by common factors such as
residual Christian beliefs among the population at large, plus people’s innate desire
for an all-embracing theory that will help to make sense of their lives, plus the
new theological idea (preached by the Moravians and from thence transmitted to
John Wesley) that a kind of emotional born-again experience could provide the
believer with assurance of salvation.

Book burning has been an occasional feature
of Christianity for two thousand years.
Edwards was a Calvinist, and the Calvinists sometimes burned books, two
centuries before Edwards (or Berkeley).
The Calvinists in Geneva not only burned Servetus’s writings, but, just
to be on the safe side, burned Servetus too.
It’s excessively creative to scan the history of Christian book-burning,
find this one preacher who was a philosophical idealist, and attribute the book-burning
by some of his followers (not by him) to his idealism (of which those followers
were probably unaware). Nelson says
Edwards was “inspired” by his idealism, which goes beyond the evidence. Robert Lowell wrote a poem perhaps implying
that Edwards was inspired by his observations of spiders, equally a stretch,
though more forgivable in a poem.

Out of all the dozens of evangelical preachers, who were realists, just one of them was (in his philosophical writings unknown to the wider public) an idealist. Therefore idealism causes evangelical revivals. The logic is certainly . . . impressive.

Since I have already warned that I might
ramble, I will also draw attention to the fact, often briefly mentioned in the
literature on Berkeley, that there is a tension between idealism and Christian
doctrine. Berkeley, a bishop in the
Church of Ireland, had to watch his step.
The Bible is realist. The
teaching that God became flesh in Christ does not easily harmonize with the
notion that flesh is nothing more than an idea in the minds of various
persons. Genesis 2:7 tells us that God
made man out of the dust of the ground. So,
the dust of the ground predated man. And
so on.

The fact that proponents of
traditional religion would sometimes point to the skeptical literature to
support their resort to ‘faith’ doesn’t have the ominous ramifications that
Nelson imputes to it (pp. 40–41). There
are many such historical facts; people use whatever arguments they find to
hand. For instance, defenders of traditional
religion will sometimes say that science keeps changing its mind (which is
true), while some sacred scripture stays the same (also true). This shouldn’t make us look askance at the
fact that science is always open to revision.

Sixteenth- and seventeenth-century Christian teachers
pointed to skeptical arguments as showing how we couldn’t rely on our own abilities
to get at the truth, and should therefore accept what Christianity taught. Nelson apparently concludes that philosophical
skepticism strengthened belief in Christianity.
But does Nelson really suppose that if these skeptical arguments hadn’t
been published, those Christian teachers would have volunteered that Christian
doctrine was open to doubt? In any case,
this was just one response. The more
orthodox line was that basic elements of the religion, such as the existence of
God, could be demonstrated by natural reason.

One of Nelson’s cute throwaway lines
is: “A Cambodian guerrilla deep in a steaming jungle carries a paperback copy
of Rousseau, and the next thing you know, a million people are dead” (p. 17). Did I somehow miss the memo that the Khmer
Rouge renounced Marxism-Leninism and went back to Rousseau? This would imply, for example, that the Khmer
Rouge must have repudiated collectivism in favor of private ownership.

Although historians disagree about the extent of
Rousseau’s influence on the American Founding Fathers, no one would dispute
that there was some appreciable influence.
A thousand times more likely than a Khmer Rouge soldier carrying a copy
of Rousseau is one of George Washington’s soldiers carrying a copy of Rousseau or
Hume, and this would account for the fact that the United States immediately became
a totalitarian dictatorship. Oh, wait .
. .

How might metaphysical anti-realism lead
to hideously repressive forms of government?
Two stories are detectable in Nelson.
The first is that being skeptical about perception, or doubting
objective reality, directly has the effect of making you more prone to
totalitarian views. The second is that
skepticism about perception historically caused romanticism (which includes
disbelief in the efficacy of reason, or valuing emotion above reason), and
romanticism historically caused totalitarianism. That’s where Rousseau comes in, since he has
been seen as the father of romanticism. But Rousseau influenced Kant, not Kant
Rousseau, which looks to be the wrong way around.

A problem with historical cause-and-effect
stories like this is that they depend on numerous thinking individuals reacting
deterministically in a specific manner to a specific situation. So, people who are skeptics about perception
must be bound to respond by valuing emotion above reason (and anyone who values
emotion above reason must not be doing it for any other reason than skepticism
about perception). But if they’re bound to, why does it take generations
for them to do it? Then, someone who
values emotion above reason (and we must assume that there’s nothing else to
romanticism) must respond by becoming politically totalitarian in outlook (and
there must be no other reason why anyone would become totalitarian in outlook). If any of these postulates doesn’t hold, the
theory is in trouble.

Other questions follow thick and
fast. For instance, German metaphysics, when the middle-aged Kant started his revolution, was dominated by the
rationalism of Christian Wolff. Would
something less or more anti-realist have come along if Kant had died at the age
of fifty-six, or would Wolffian rationalism have continued? I suggest we just can’t say. Were Rousseau’s proto-romantic writings
somehow connected with skepticism about perception? I doubt it.
And is it self-evident that Rousseau’s influence, in its net effect,
favored totalitarianism? Nelson alludes
to something called “romantic totalitarianism” (p. 231). Really? Is that a thing? If the works of Byron, Berlioz, or Poe somehow
advanced the cause of totalitarianism, must we accept the corollary that
Balzac, Stravinsky, or Joyce fought back on liberty’s behalf?

What tends to happen with believers
in such a wondrously far-fetched story is that they cite a few cases which
comply with the story, or cases which their own limited knowledge enables them
to falsely suppose comply with the story.
However, a few compliant cases do not really corroborate such an
ambitious theory of historical causation.
All cases, or at least a big majority, must conform, or we ought to discard
the story.

For example, what made a lot of
people support totalitarianism in the twentieth century? What we find, if we look at the evidence (and
I have looked), is that totalitarianism emerged out of economic
collectivism. And people became economic
collectivists for specific, identifiable reasons: popular theories about
economics pointed to collectivism. These
economic theories owed nothing to skepticism about perception or to
romanticism.

So, to take one strand out of many
(but the most prominent strand), if we examine the historical record of those
who became socialists, and more particularly Marxists, we observe a number of
things. They were epistemological
realists, and they denounced skepticism about perception as the worst
philosophical crime. They were not
romantics by ideological background, but if anything, anti-romantics, adherents
of what has sometimes been called scientism or ‘scientific ideology’. They had quite specific reasons for thinking
that socialism was both preferable to capitalism and the inevitable successor
to capitalism, and these reasons (the most important of which I enumerate in ‘Ayn
Rand and the Curse of Kant’) had nothing to do with romanticism and everything
to do with a self-consciously ‘rational’, scientific approach.

Nelson’s epistemological views are
similar to those of dialectical materialism, the ruling ideology of the USSR. Lenin’s Materialism
and Empirio-Criticism
has essentially the same anti-skeptical metaphysics
as The Slightest Philosophy, and The Slightest Philosophy (leaving out
the few sentences directly about politics) would have been heartily applauded
by the ideological commissars of Soviet Communism in its heyday. The
Slightest Philosophy
could, making allowance for references to more recent
developments, have been written by a very devout Communist Party member circa
1930. (I know, you’re wondering whether
this means that The Slightest Philosophy
will cause a million or more deaths. I
hope not, but we can’t be too careful.)

How does it come about that the first and greatest
totalitarian regime of the twentieth century, and a major causal influence on
all the others, had a strictly imposed official philosophy which made a huge
point of insisting that skepticism about perception is philosophically the root
of all evil? Why did Communist
philosophers always talk about perception exactly like Quee Nelson?

Nelson alludes to Frederick Engels’s
1843 claim that German philosophy ought to culminate in communism (pp. 30–31). But
if we look at this remark by Engels, it should be understood in exactly the
opposite way to Nelson’s construal.

The young Marx and Engels, formerly Young Hegelians, were
suddenly converted to the atheism, realism, and materialism of Ludwig Feuerbach
and David Friedrich Strauss. This
conversion signified a conscious and systematic rejection of idealism. Engels considered that Feuerbach’s
materialism brought an end to German philosophy and directly pointed to
communism (though Engels thought it was a failure on Feuerbach’s part that he
did not perceive that materialism implies communism, just as presumably Nelson might
think it a failure on Quine’s part that he didn’t see that realism implies laissez-faire
capitalism). It’s no exaggeration to say
that in Engels’s thinking as in Marx’s, it is the total repudiation of all
idealism and the unconditional acceptance of realism which points to communism.

Engels’s slightly coy statement in his brief article
of 1843, quoted by Nelson, was succeeded by The
German Ideology
(1846) and Ludwig
Feuerbach and the End of Classical German Philosophy
(1886), where Engels’s
and Marx’s hostility to idealism and their view of a tight connection between
materialism and communism are spelled out in great detail. Just to be clear, I don’t accept that any
metaphysical or epistemological theory implies or causes any social or
political theory, much less any political movement. Yet it would be easy to formulate a thesis
that the rejection of skepticism about perception and the embrace of metaphysical
realism lead to totalitarianism; after all, we observe that totalitarians are usually
motivated by certainty, not doubt. I
don’t think there’s anything in such a thesis, but at least it would not be as spectacularly
contrary to the historical evidence as Quee’s thesis is.

In Nineteen Eighty-Four, Orwell has the Party boss O’Brien propound
the theory of “collective solipsism,” according to which reality is whatever
the Party says it is. Nelson says that
Orwell is “caricaturing collectivist epistemology” (p. 31). But was there, as a matter of fact, any
collectivist epistemology to caricature? I discuss this point in my book on Orwell (Orwell Your Orwell, pp. 307–311).

Although O’Brien’s argument is a remarkable anticipation
of some aspects of social constructivism (these had been prefigured by Otto Neurath
and by Mannheim’s ‘sociology of knowledge’, but it’s doubtful that Orwell knew of
these), we have to be clear that no actual totalitarian regime has ever
employed the “collective solipsism” type of reasoning, and it’s very unlikely
that any actual totalitarian regime ever could, because totalitarian regimes,
when they tell untruths (or even truths which they find it useful to draw
attention to), always intend to communicate that what they say is true by the
traditional absolute and objective standard of truth. Orwell is not caricaturing any existing “collectivist
epistemology” but inventing a fictional epistemology in order to caricature totalitarian
practice.

Notoriously, the Communist parties would sometimes
suddenly switch their position on various issues, saying the opposite today of
what they had said yesterday, and sometimes propagating falsehoods to help justify
the current party line. Actually, this
aspect of Stalinism was barely noticed by most non-leftists, while non-Communist
socialists like Orwell were always acutely aware of it. The Wobblies even had a song about it (“Our
Line’s Been Changed Again”).

Orwell applies to this phenomenon the type of satire
he had learned from Jonathan Swift: he has the totalitarians preaching what he considers
them to be practicing. The Communists
never did preach anything like this; in fact, as fanatical adherents of materialism,
a form of metaphysical realism, they always preached the opposite.

Orwell’s hero Winston Smith expresses the view,
against the Ingsoc Party, that “reality is something external, objective, existing
in its own right.” It doesn’t surprise
anyone knowledgeable about Communist thinking to learn that the leading
Communist ideological authority Rajani Palme Dutt, responding in 1955 to Nineteen Eighty-Four, commented that
this remark by Winston Smith states what is in fact the Communist view (Meyers,
George Orwell, pp. 287–88).

Finally look at this sentence by Nelson: “In the same
century that Heidegger, Habermas, and DeMan imbibed totalitarian collectivism
as National Socialists, Althusser, Gramsci, Sartre, Camus, Putnam, and Rorty
imbibed it as international socialists” (p. 30). You might think that Nelson is here giving us
evidence—looks like quite an accumulation of evidence!—for her historical
thesis. But exactly what does this all amount
to?

What three writers “imbibed” (one of them as a child, one
as a young adult, the other in middle age) hardly shows that their distinctive
philosophical views resulted from the ideology they imbibed, or vice versa. After the war, none of these three showed any
obvious political influence of National Socialism. It’s not playing fair to smear Habermas
because he was a schoolboy under the Third Reich. Heidegger seems to have cheerfully embraced
the National Socialist regime partly because of career opportunism. To what extent he was a realist or an
idealist is sometimes debated (this is the ambiguity of Husserl’s legacy,
though Heidegger’s ‘being in the world’ has been seen by some as an attempted answer
to Humean skepticism), but at any rate he was no romantic and he didn’t endorse
National Socialism before it came to power or after it had been overthrown. De Man (presumably Paul, not his uncle Henri/Hendrik)
wrote some antisemitic stuff in Nazi-occupied Belgium, apparently for reasons
of self-preservation and self-advancement (for he discreetly helped out individual
Jews he knew). After the war he promoted
deconstructionism in literary theory. He
doesn’t appear to have been influenced by skepticism about perception. He wrote a book on romanticism, which might
be considered an unmasking of romanticism’s pretensions.

Gramsci and Althusser were both materialists, therefore
hardcore realists (Gramsci’s concept of revolutionary praxis is not, despite
what you sometimes hear, a departure from philosophical materialism; there is a
question mark over Althusser’s last writings, but they have had no influence). Neither Gramsci nor Althusser was a romantic. Sartre and Camus were philosophical realists
and in their literary output decidedly unromantic. Putnam and Rorty vacillated on the question
of realism and they were not associated with romanticism. Of course, they had generally leftist
worldviews, and that may be what bothers Nelson, but that’s the predominant
fashion amongst all twentieth-century intellectuals, including the realist and
anti-romantic ones. Some of the harshest
attacks by realists on postmodernism have come from leftists (Chomsky, Detmer, Eagleton,
Sokal).

So, all in all, if we deconstruct Nelson’s
flurry of names, we just don’t find much corroboration for her historical
thesis. Nelson’s method, as she scans a
lot of historical data, is to pick out a handful of instances which seem to
confirm her story, while ignoring the far greater number of instances which
starkly contradict it.

Bibliography

Armstrong, D.M. 1963 [1961]. Perception and the Physical World. Routledge.

Boghossian, Paul. 2006. Fear of Knowledge: Against Relativism and Constructivism. Oxford University Press.

Chorost, Michael. 2005. Rebuilt: How Becoming Part Computer Made Me More Human. Houghton Mifflin.

Devitt, Michael. 1991 [1984]. Realism and Truth. Blackwell.

Ewing, A.C. 1934. Idealism: A Critical Survey. Methuen.

———. 1996 [1938]. A Short Commentary on Kant’s Critique of Pure Reason. University of Chicago Press.

Feuerbach, Ludwig. 1986 [1843]. Principles of the Philosophy of the Future. Hackett.

———. 1989 [1841]. The Essence of Christianity. Prometheus.

Foster, John. 1982. The Case for Idealism. Routledge.

———. 2000. The Nature of Perception. Oxford University Press.

———. 2008. A World for Us: The Case for Phenomenalistic Idealism. Oxford University Press.

Friedman, Michael. 1992. Kant and the Exact Sciences. Harvard University Press.

Gopnik, Alison. 2009. The Philosophical Baby: What Children’s Minds Tell Us about Truth, Love, and the Meaning of Life. Farrar, Straus and Giroux.

Gopnik, Alison, and Andrew N. Meltzoff. 1998. Words, Thoughts, and Theories. MIT Press.

Gopnik, Alison, Andrew N. Meltzoff, and Patricia K. Kuhl. 1999. The Scientist in the Crib: Minds, Brains, and How Children Learn. Morrow.

Hardin, C.L. 1988. Color for Philosophers: Unweaving the Rainbow. Hackett.

Hayek, F.A. 1963. The Legal and Political Philosophy of David Hume. Il Politico 28.

Hume, David. 1888 [1738–1740]. A Treatise of Human Nature: Being an Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects. Oxford University Press.

Hylton, Peter. 1990. Russell, Idealism, and the Emergence of Analytic Philosophy. Oxford University Press.

Jameson, Fredric. 1991 [1984]. Postmodernism: Or the Cultural Logic of Late Capitalism. Duke University Press.

Johnson, Kent, and Wayne Wright. 2006. Colors as Properties of the Special Sciences. Erkenntnis 64.

Latour, Bruno. 1998. Ramses II, Est-il Mort de la Tuberculose? La Recherche 307 (March).

Lenin, V.I. 1927 [1908]. Materialism and Empirio-Criticism: Critical Comments on a Reactionary Philosophy. In Lenin, Collected Works, Volume 13. International.

Lowell, Robert. 1946. Lord Weary’s Castle. Harcourt, Brace.

Meyers, Jeffrey, ed. 1975 [1965]. George Orwell: The Critical Heritage. Secker and Warburg.

Miller, David. 1994. Critical Rationalism: A Restatement and Defence. Open Court.

———. 2006. Out of Error: Further Essays on Critical Rationalism. Ashgate.

Nelson, Quee. 2007. The Slightest Philosophy. Dog Ear.

Popper, Karl R. 1968 [1962]. Conjectures and Refutations: The Growth of Scientific Knowledge. Harper.

———. 1979 [1972]. Objective Knowledge: An Evolutionary Approach. Oxford University Press.

———. 1983. Realism and the Aim of Science. Rowman and Littlefield.

———. 2009 [1932]. The Two Fundamental Problems of the Theory of Knowledge. Routledge.

Radnitzky, Gerard, and William Warren Bartley III, eds. 1987. Evolutionary Epistemology, Rationality, and the Sociology of Knowledge. Open Court.

Sacks, Oliver. 2012. Hallucinations. Knopf.

Searle, John R. 1964. How to Derive ‘Ought’ from ‘Is’. The Philosophical Review 73:1 (January).

———. 1983. Intentionality: An Essay in the Philosophy of Mind. Cambridge University Press.

———. 1992. The Rediscovery of the Mind. MIT Press.

———. 2015. Seeing Things as They Are: A Theory of Perception. Oxford University Press.

Siegel, Harvey. 2007. Review of Paul Boghossian, Fear of Knowledge: Against Relativism and Constructivism. Notre Dame Philosophical Reviews.

Steele, David Ramsay. 2002. Ayn Rand and the Curse of Kant. Liberty (August).

———. 2017. Orwell Your Orwell: A Worldview on the Slab. St. Augustine’s.

———. 2018. Is It a Fact that Facts Don’t Matter? In Daniel Yim, Galen Foresman, and Robert Arp, eds., Scott Adams and Philosophy: A Hole in the Fabric of Reality. Open Court.

Turnbull, Colin. 1961. Some Observations Regarding the Experiences and Behavior of the BaMbuti Pygmies. American Journal of Psychology 74.

Van Inwagen, Peter. 2009. Metaphysics. Third edition. Westview.

Watkins, J.W.N. 1984. Science and Scepticism. Hutchinson.

Wilkerson, T.E. 1976. Kant’s Critique of Pure Reason: A Commentary for Students. Oxford University Press.



How I Could Have Made Hillary President

Politics Posted on Thu, February 22, 2018 06:31:41

How I Could Have
Made Hillary President

In
his book Win Bigly: Persuasion in a World
Where Facts Don’t Matter
, Scott Adams analyzes the formidable persuasion
skills of Donald Trump and the comparatively feeble persuasion techniques of
the Hillary Clinton campaign of 2016.
The book is very funny, full of insights, and well worth reading. For those who haven’t read it, what I’m going
to talk about here is a tiny sliver of the richly entertaining material in the
book, but it does illustrate Adams’s approach.

Adams compares what he calls Trump’s
“linguistic kill shots” with the attempted kill shots of the Hillary campaign,
and he compares Trump’s slogan, “Make America Great Again” with the numerous
easily forgettable slogans considered or actually employed by the Hillary
campaign.

Here are the more powerful of
Trump’s linguistic kill shots:


● Low-energy Jeb

● Crooked Hillary

● Lyin’ Ted

● Lil’ Marco

● Pocahontas

Scott Adams
analyzes these in detail to show exactly why they’re so effective. They all appeal to the visual and they all
plan for “confirmation bias.” Probably
the best of them is “Low-energy Jeb.”
The very day this nickname came out of Trump’s mouth, Scott Adams
blogged that Jeb was finished, as indeed he was, though no other commentator
saw what had just happened. Recall that
Jeb Bush had a war chest of many millions and spent far more than Trump. He was a natural for traditional Republican
voters and for the fabled “Republican establishment,” as yet another dynastic
Bush but a more likeable personality than the preceding two Bushes.

Even after Trump had released his kill
shot into what we can call the rhetorosphere,
most seasoned pundits were still naming “Jeb!” as the most likely nominee. Yet, Trump had given Jeb Bush what Adams
calls his “forever name,” and it was henceforth to be altogether impossible for
anyone to see Jeb or think about him without instantly thinking Low-energy. His presidential ambition had been killed
stone dead, not just for that electoral cycle but for all time, in a fraction
of a second, by the Master Persuader, Donald Trump.

Adams offers similar analyses for
the other nicknames. “Pocahontas” was
the name given to Elizabeth Warren, one of the leading Democratic Party
politicians and a likely future Democratic presidential candidate. Warren, a blue-eyed blonde, had claimed to be
of Native American, specifically Cherokee, ancestry and had gotten an academic
job by impersonating a “minority.” The
Cherokee Nation, which has a database of everyone they have been able to find
with Cherokee ancestry, has repeatedly protested against Warren’s claim. Warren also once contributed a “Native
American” recipe to a book of supposedly Native American recipes called . . .
wait for it . . . Pow Wow Chow. It turns out that Warren is not Native
American, the recipe was not Native American but French, and the recipe itself was
plagiarized from another source.

A look at this book on Amazon shows
that Warren is in even deeper trouble.
The subtitle of Pow Wow Chow
is A Collection of Recipes from Families
of the Five Civilized Tribes
, and the book is published by the Five Civilized Tribes Museum. This blatantly insinuates
that the Apache didn’t routinely solve quadratics or use trig to calculate the
circumference of the Earth, and this is indisputably the filthiest kind of
racism.

I would be
irresponsible if I didn’t point out that this kill shot illustrates Donald
Trump’s disgraceful carelessness with facts.
The Cherokee belong to the Iroquoian group, whereas the historical Pocahontas
belonged to an Algonquian-speaking tribe.
How low have we sunk when our president tells such appalling lies?

Everyone
could see that Trump’s nicknames were effective, and so the Hillary campaign burned
the midnight oil to come up with an effective nickname for Trump himself. They tried three in succession:


● Donald Duck

● Dangerous Donald

● Drumpf

“Donald
Duck” is obviously the sort of thing a committee would come up with. “Duck” tries to make the point that Trump was
“ducking” various issues and criticisms, including the demand that he release his tax returns. But of course, associating Trump with a beloved if distinctly ridiculous cartoon character doesn’t mesh well with the idea that Trump is a fearsome Hitler-like menace.

“Dangerous
Donald” doesn’t really work, especially because a large portion of the
electorate positively wanted someone “dangerous,” someone who would go to
Washington and break things.

“Drumpf” is
the real surname of Trump’s German immigrant ancestor, a perfectly
respectable German name which isn’t so congenial to Americans, so it was
changed to “Trump.” This idea that
having a non-Anglo-Saxon name in your family tree is a dirty little secret is
not a winner, for several obvious reasons.

As everyone knows, Trump’s election
slogan was “Make America Great Again.”
This is a brilliant slogan which can hardly be faulted. Adams lists its strong points (Win Bigly, pp. 155–56).

As against this, the Hillary
campaign considered eighty-five slogans (yes, 85!, according to Scott Adams, p.
157, citing the New York Times) and
eventually ended up with “Stronger Together.” Here are the ones which were actually tried
out.


● Love Trumps Hate

● I’m with Her

● I’m Ready for Hillary

● Fighting for Us

● Breaking Down Barriers

● Stronger Together

These all have the flavor of
mediocrity and ineffectiveness that comes out of committees, and especially committees
of bigoted leftists. “Love Trumps Hate”
literally begins with “Love Trump,” and as Scott Adams points out, people’s attentiveness declines steeply as a sentence goes on, so they pay more attention to the beginning than to the end.

“I’m with Her” and “I’m Ready for
Hillary” both have a patronizing tone, as though you can prove yourself by
being open to a female candidate, just because she’s female; that kind of thing
is off-putting to some voters. And as
Bill Maher pointed out, “Ready for Hillary” evokes the resignation of being
“ready” for that uncomfortable tetanus shot from that possibly sadistic nurse.

“Fighting for Us” makes you wonder
who the “Us” really is. During World War
II, George Orwell pointed out how a British working man might interpret the
government poster that said: “Your
Courage, Your Cheerfulness, Your Resolution, will bring Us Victory” (the first three sets of
italics in the original, the fourth definitely not!).

“Breaking Down Barriers” has good
rhythm but an uncertain appeal because most people feel strongly that they
really want some barriers between them and some kinds of other people.

“Stronger Together” was the final
throw, and it came just as voters could hardly ignore the fact that violence
was coming from the left. Some of Hillary’s supporters were bullies, and bullies are always stronger together. The news was already out that the “violence
at Trump’s rallies” was deliberately engineered by paid agents of the DNC.

Scott Adams Doesn’t
Give His Alternatives!

Although
Scott Adams does an excellent job of identifying the strengths of Trump’s
slogan and nicknames for opponents, and the weaknesses of Hillary’s, he doesn’t
come up with his own, better proposals for Hillary.

This is a bit of a disappointment, and a surprise,
as he emphasizes that it’s all a matter of conscious technique, not instinct.

And so, I decided to cook up my own
suggestions. Here goes!

My proposal for the nickname Hillary
should have given Trump is:


● The Don

Here’s how this works. Before Trump announced for president, he was
often called “The Donald,” a phrase which usually went along with either patronizing
amusement or mild and grudging admiration.
Use of “The Donald” died out, presumably because the US population was
mobilizing into two great camps, one of which viewed Trump as a satanic
monster, the other of which saw him as the nation’s redeemer, and neither of
these would perceive “The Donald” as entirely apt.

My plan would be for Hillary supporters
to refer to him several times as “The Don,” and just occasionally, for those who
might be a bit slow on the uptake, “The Godfather” (or variations like “The
Godfather of Greed”). Hillary would then
take up “The Don,” as an already established nickname for Trump.

Trump has many of the popular
attributes of the Mafia boss: a commanding presence and a weakness for vulgar
display (his golden toilets). All the
points actually made against Trump’s character by Clinton could have been given
a slightly different coloration. Thus,
when making the allegation that Trump had stiffed some of his sub-contractors
(which the Hillary campaign did), this would be described as “making them an
offer they couldn’t refuse.” You could
throw in a reference to one of Trump’s business dealings with someone who has
since passed on, and add the jocular remark, “He now sleeps with the
fishes.” When complaining about the fact
that Trump wouldn’t release his tax returns, this could be framed as “the Trump
Family [Family, get it?] has sworn the oath of Omertà never to reveal their sources of income.”

But aren’t mafiosi supposed to be
Italian? Yes, but now they’re often
Russian too. Hillary’s campaign promoted
the story that Trump had “colluded with the Russians.” This appears to have been a pure fabrication,
simply made up (no one has ever faulted Hillary for being over-scrupulous or
excessively candid) but it would have been so much more believable if
associated with the Russian mafia.

It’s a self-evident truth that every
Russian has “ties to Vladimir Putin,” and this can always be asserted of any
Russian without fear of rebuttal. Similarly,
it’s a self-evident truism that every Russian businessman has “ties to the
Russian mob.” It would have been a
simple matter to dig up every occasion when Trump did business with a Russian,
call that Russian an “oligarch” (who could deny it?) and declare that this
Russian oligarch had ties to organized crime (who could deny that?). In this way, it would have become impossible
for voters not to think of Trump’s business activities as steeped in
criminality.

Now, what about a campaign slogan
for Hillary? This is quite difficult,
because of the fact that Hillary had spent the previous eight years as
Secretary of State within the Obama administration. She could not therefore put any emphasis on
“change,” and it would be hard to imply anything radically new. But anything that looked like a defense of
the last eight years could only run the risk of implying that “the status quo
is fine and we just want to keep things the way they are.” This is a disadvantageous position to be in.

A slogan that goes negative and tries to focus on
the evil of Trump is liable to boomerang—remember that meeting of Democrats, where
a speaker referred to Hillary using the word “honest,” and the entire room
spontaneously erupted into laughter?

As Scott Adams hilariously points
out (p. 159), a rather different kind of boomerang was a major feature of the
campaign. One of Trump’s problems, as a
former reality TV host, was to get voters to take him seriously as a real
president. Hillary continually urged voters
to “imagine” Trump as president, and thus provided Trump with exactly what he
needed. He needed people to imagine him
as president, and Hillary did an excellent job of helping voters to do just that.

The Hillary campaign slogan has to
have the following qualities:


● It mustn’t directly mention the rival product.

● It mustn’t be easily interpreted as merely a response to Trump’s slogan or campaign.

● It can’t, unfortunately, make a bold plea for change.

● It can’t, unfortunately, make a bold claim for Hillary’s trustworthiness or other personal virtues.

● It must have rhythm.

● It mustn’t allow the interpretation that some special interest will be benefited.

● It must take the high ground.

So here’s my proposal:

● A Win-Win for
America

This slogan would occasionally
follow the words “Hillary Rodham Clinton.”
(It’s bad luck that “HRC” doesn’t trip off the tongue like “LBJ” or even
“JFK.” There is no other memorable
version comparable with “Doubleya”.
“HRC” might evoke “hardcore,” but we probably don’t want to go there.)

The slogan is positive and inclusively
patriotic. It therefore crowds out the
undesirable thought that Hillary appeals chiefly to welfare recipients,
criminal aliens, and billionaire hedge-fund managers. “For America” takes the high ground and
crowds out the thought that Hillary’s election would be a win for Hillary, an
undesirable thought because Hillary might be considered a loser, and also because
we don’t want voters thinking about any personal advantage Hillary might reap.

The term “Win-Win” has several
functions. Literally it refers to a
situation where we win, whichever of two alternative possibilities occurs. There would have to be a story about this,
ready for those times when Hillary or her henchmen were directly asked about
the meaning. But that’s
unimportant. We could even come up with
a dozen different stories and get people arguing about which one was true. Really the term is simply a repetition of the
positive word “win,” and gives the slogan distinctiveness and rhythm.

It also has something which Scott
Adams has talked about on a number of occasions: he has pointed out how
President Trump utilizes the tried and tested marketing ploy of putting slightly
“wrong” formulations into his tweets to enhance their effectiveness. A slightly doubtful formulation or a feeling
that something is not quite conventionally correct helps a phrase to lodge in
the memory. “Win-Win” therefore gains
something from the fact that what it means is slightly obscure and off-key,
while its emotional associations are entirely positive.

So there we are, Trump is The Don and Hillary’s slogan is A Win-Win for America. This would have been enough to give her the
electoral college, though it wouldn’t have hurt to have also done a bit more
campaigning in Michigan and Wisconsin.

Hillary threw tens of millions of
dollars at various “consultants” who were out of their depth and out of touch
with public feeling. As I’ve just proved,
I could have gotten Hillary elected by a few commonsense marketing touches. Given my unpretentious proletarian origins
and unimpressive net worth, I would have done it for, say, half a million
dollars. That would have been a terrific
deal for Hillary, and would have enabled me to pay off a good chunk of my
debts.

But, I can already hear you saying,
you’d be enabling this disgusting warmonger, purveyor of PC bigotry, and
criminal sociopath to take power. Could
you really live with yourself?

Yes, I have to admit, I would feel
bad about that. So, make it a round
million.



Beverly Hills Tale

Arts Posted on Mon, May 04, 2015 22:15:30

I composed this little tale about fifteen years ago, and sent it to a
couple of magazines which promptly rejected it.
I then forgot about it until recently and thought of sending it to a few
more, but upon re-reading it I see it is out of date in many ways but not yet
ancient enough for retro appeal (“zero cool” still had a gleam at the time, but
is now covered in verdigris). My first
thought was that I might update it, but that would actually be a lot of work
and I might never get around to it. So
rather than just waste it entirely, I’m sticking it here.

I’ve never been to Beverly Hills, except in
the sense that we all have.

KIND OF A POWER

By

DAVID RAMSAY STEELE

Lucy moved all her stuff into Dan’s old office. The last item she carried in was the “Under
urgent consideration” pile of current scripts.
She placed this in the tray on the left of her desk, then gazed with
satisfaction at the Sony Pentium notebook, the phone, the bowl of polished stones,
the herbal bouquet, and the purplish black candle in the little silver
candlestick. This desk would never look
so tidy again, until maybe, a couple years down the road, she had her next
major promotion, probably to president of the agency. And then someone else would take over from
her the office she had now taken over from Dan.

The phone rang. It was Fiona in Human Resources. “Lucy, I have an Officer Martinez here, from
the police. Do you have a minute to talk with him? It’s about Dan.”

Ninety seconds later,
Fiona appeared at the office door, with a young man, single by the look of it,
who was wearing, not a police uniform, but a gray-green jacket and aqua shirt
which went quite well with his dark skin tone.
Not bad, thought Lucy.

This being her first
day, in fact her first five minutes, in her new office, she had a choice of
uncluttered chairs to offer Martinez. He was looking at her Welsh brooch.

“Is that . . .?”

“A spiritual
symbol. Would you be a spiritual
person?”

“No.” And then by way
of explanation: “Catholic.”

“Oh.”

“Just a few points we
need to clarify. This was Daniel
Zegarac’s office? Ms. McGregor informs
me you now have Mr. Zegarac’s job.”

Martinez’s black eyes darted about like a
snake’s tongue. Lucy had been looking
forward to a few minutes alone to gloat over her new corner office with the
zero cool view and enough bookshelf space to encompass a basic herb garden in
seven earthenware pots. It might have
been better to have seen the cop in a conference room.

Martinez asked a few general questions about
Dan and quite naturally slipped in a few about Lucy. She explained the nature of Dan’s job, now
her job. She gave Martinez
her routine little chat about the work of the agency, representing creative
talent in the Hollywood jungle, getting the
most promising scripts to the right people in the studios. Fiona
should have taken care of this
.

She
vaguely wondered why the police were interested at all. Maybe they needed to rule out suicide for
insurance purposes, though if she’d considered it seriously, she’d have known
that wouldn’t be police business.

Martinez paused and
Lucy picked up a hint of awkwardness.
She thought she knew what he would say next. She almost helped him along by inquiring
“Just what kind of a script is it?”

But she was
mistaken. Martinez was not about to
mention his screenplay, or his girlfriend’s or brother-in-law’s screenplay.

He said: “We now
believe that Daniel Zegarac’s death was not accidental. Mr. Zegarac was murdered.”

His eyes had stopped
their restless flickering. They were fixed
on Lucy’s face. What he saw there was an
instant of pure, unfeigned astonishment.

“We were told it was
an accident,” said Lucy. “Wasn’t he
working on a boat?”

“He was building a
yacht. He fell and broke his neck. Since the autopsy, we now believe someone
gave him a push.”

Deftly massaging the
truth was second nature to Detective Martinez.
He didn’t explain that a witness had seen someone leaving Dan Zegarac’s
place close to the time of his fatal fall, and only because of that had the
police and the coroner looked more closely for signs of foul play. There was nothing in the autopsy report to
definitely indicate homicide. Once they
looked, however, they found details of the fatal scene that were atypical in
this kind of accident.

She said: “Wow, that’s . . . Why would anyone
kill Dan? You have any idea who did it?”

Fortunately, Lucy had not yet put up her
movie festival posters. Martinez was
sitting in front of a plain peach wall, which made it child’s play to
scrutinize his aura. Applying her
well-honed technique, she could instantly make out that this aura had blue and
turquoise points. Not a man to be taken
lightly, but no signs of unusual potency, at least not of a spiritual kind. Years of experience had taught her that by a
little deeper concentration she could see beyond the immediate manifestation,
to a faint kind of secondary aura, invisible to all save the most spiritually
discerning. She perceived a thin brown
smoke, some underlying ominous quality, an emanation of violence. Not surprising in a homicide detective, given
the kinds of experiences he must be familiar with almost daily. This whole analysis took less than two
seconds. Lucy was very good at it.

“It’s early in the
investigation,” said Martinez. “How well did you know him? Would you know if he had any enemies?”

“Not really. Not that I can think of. A lot of people around here didn’t like him,
but not enough to . . . want to hurt him.”

“Not mixed up in
anything shady? Drugs or anything like
that?” The purpose of this question was to ascertain whether Lucy would snatch
the opportunity to send him off in an irrelevant direction; she merely shook
her head.

“Did you personally
like him? Did you get on well with him?”

He made my life a misery. When I
heard about the accident, I felt like celebrating.
“Dan could be very trying. We had our issues, work-related issues. But . . . I was just appalled that he died. I couldn’t believe it. I was really upset. We all were.”
If Martinez had not already been informed that she had loathed Dan and
fought him bitterly over the recent negotiations with Bernstein at Columbia
TriStar, someone was sure to tell him.

“Okay.” Martinez made a slight movement in his chair,
a hint that he was about to get up and leave.
“Uh, one last thing. This is a
routine question we have to ask everyone.
Where were you on the night of May 6th?”

She checked her palm
pilot. Nothing on that night.

“I must have been at
home watching TV. Yeah, I’m sure I
was. Then in bed. Asleep.”

“Alone all the time?”

“Totally.” She pulled a mock-dismayed face and added:
“No alibi.”

Martinez didn’t smile,
but his voice was soft enough to perhaps indicate sympathy. “It’s routine. We have to check on everyone.”

Not so routine was the
call she took from Detective Martinez a few days later. He asked her to stop by police
headquarters. He was ready to say that
she needed to be there to look at an artist’s sketch of the person seen leaving
the scene of the homicide. Surprisingly,
she agreed to be there, without any need to invoke this contrived rationale.

When she showed at
headquarters, Martinez had more questions of a general nature. Then: “Would you say you’ve been lucky in
your career?”

Uh oh. “Some good luck. Some bad luck. A whole lot of hard work.”

“As a matter of fact,
you’ve had some lucky breaks.”

Lucy guessed what was coming next.
So they’ve noticed. Well, they can’t
know anything. And even if they did,
what could they do about it?

“As I look at the trajectory of your career,
I see you’ve had three big breaks. And
each one of those lucky breaks has been precipitated by the death of a
colleague.” Trajectory. Precipitated. Definitely has a screenplay.

Lucy felt slightly dazed but not
anxious. If she’d been even moderately
perturbed she’d have gone straight into Great Pan breathing for serenity of
soul, but this hadn’t been necessary.

Martinez said: “Quite
a coincidence.” He paused and looked around the room in an oddly unfocussed
way. Lucy abruptly knew that colleagues of
Martinez were watching her reactions from an adjacent room and—of course—videotaping them. Weren’t they supposed to warn you in advance
when they did that? Or at least tell you
they considered you a suspect? This is so LAPD. Either they would swear under oath that they had warned her, or, if they thought they
had a case against her—not that this could ever happen—they’d swear that not
informing her was a careless and deeply regretted slip.

Martinez had Lucy’s
basic bio on a sheet of paper, and was checking off each item. Yes, in her previous job she had worked for
the Tom Davenport agency. Yes, twelve
years ago she had been assistant to Mary Nolan.
She got on badly with Nolan, who had recommended that Lucy be canned.

“Business was bad,”
Lucy recalled. “They were looking for
headcount reductions.”

“Nolan’s body turned
up in her swimming pool. So they got
themselves a headcount reduction.”
Martinez kept the irony out of his voice. He didn’t reveal that the drowning had been
viewed at the time as suspicious and a police report had been generated. An unidentified DNA sample was on file. The report was inconclusive and the
investigation had been shelved.

Lucy had taken over
Mary’s job, an arrangement that was eventually made permanent. When business improved she got her own
assistant.

Five years later,
Lucy’s boss at Davenport was Eddie McInerny.
Though she had been Eddie’s protégé—Martinez didn’t yet know she had
also been his mistress—their relationship soured and they held sharply opposed
views on the future direction of the agency.

McInerny’s house burned down. Something fatty
had been left simmering on a kitchen stove.
He slept through the thickening fumes and was dead of smoke inhalation
before his flesh began to char. The body contained traces of cocaine and three
other controlled substances.

“He could have been
zonked on drugs and forgotten to turn off the stove. Or he could have had help.” Help with the zonking or help with the fire,
or both. Still, there was no proof this
wasn’t a typical accidental blaze.

So here was a second
apparently accidental death of a colleague with whom Lucy had developed an
acrimonious relationship. A second
career boost, as it turned out, for Lucy had quickly concluded the deal that
Eddie had been working on, the deal that turned the Davenport Agency around.

Opinions might differ on whether two deaths
and two career boosts were or were not an extraordinary coincidence. Subsequently Lucy, with a number of lucrative
movie deals to her credit, had moved into a senior position at the Paulsen
Creative Talent Agency. And after five
years here, bingo, we have a third seemingly accidental death of a colleague
who clashed with Lucy, who quarreled with Lucy, and whose removal would likely
help Lucy. Surely this is beyond
ordinary coincidence.

Martinez had read
about individuals who’d been struck by lightning on three separate
occasions. Astounding coincidences could
happen, were bound to happen once in a while.
Or perhaps some persons had physical qualities or chosen habits that
made them unusually likely to be struck by lightning. Could there possibly be people whose personal
qualities made it fatal for others to get in their way? Detective Martinez didn’t think so. He was open-minded but not unduly credulous.

Martinez was an excellent listener.

“Yes.
I did a good job,” Lucy was saying.
“You can’t say I’ve coasted to the top by wasting the competition.” She didn’t intend to sound amused, but a
little of that came through.

Martinez was
thoughtful: “It could be fifty percent job performance and fifty percent
luck.” He might have been speaking of
his own career in the police department.
“A person could do okay on job performance and still decide on some
pro-active interventions to improve the odds.”

Even as he said this, Martinez
couldn’t make himself believe it. The
story just wouldn’t walk, it wouldn’t bark, it wouldn’t wag its tail. And whether he believed it or not, no one in
the DA’s office would want to parade it on a leash in front of a jury. Some link had to be found between Lucy
Armstrong and the death scenes. So far,
nothing.

Looking at her soft
countenance, long red hair, and nicely curved figure, Martinez briefly
considered the possibility he was mentally exonerating her because he liked the
look of her. He didn’t think so. Only six months before, he had not hesitated
to pursue and arrest the mouth-watering Mrs. Mulligan, who had quite
understandably, after more than ample provocation, hired a contract killer to
dispose of her exasperating but well-insured husband.

Martinez could have placed Lucy as a serial
murderer if she’d worn tight black pants, cropped hair, a leaner physique, a
bonier face. Or, given her actual
persona, if Nolan, McInerny, and Zegarac had been poisoned. Any type of person might commit murder, but
he couldn’t see Lucy sneaking out to Zegarac’s house at dead of night and
pushing him off a ladder. Also, the DNA
found at the Nolan drowning was male.
And the person seen leaving the Zegarac homicide scene was believed to
be male.

Suppose pure
coincidence is ruled out. Suppose also
Lucy Armstrong did not kill these three people.
What are we left with?

Lucy thought of mentioning that she’d been in Denver when Mary Nolan
died, and in Paris when Eddie McInerny died.
Did Martinez know that yet? He
would find out about Europe first, then about Denver. Let him
find out
.

Martinez was trying a
different approach. “You must have
thought about this yourself. What did
you think?”

A little shrug.
“Coincidence?” There are no coincidences.
Synchronicity is a law of nature, just like gravity.
That was Zuleika,
holding forth in her preachy way.
Zuleika had been right about synchronicity, of course.

“Anyone
else ever comment about this coincidence?”

One time, just before
she’d left Davenport, when someone had gone quiet, tailed off in mid-sentence,
and the other two people present had looked embarrassed. As though they had some knowledge in
common—probably knowledge of a conversation in which they had speculated avidly
about Lucy’s benefiting from two deaths in a row.

Then there was
Laura. Laura had said: “So the spells
worked, then.” Lightly enough to show
she wasn’t concerned. Warmly enough that
it could have been more than just a joke.
Lucy smiled at the recollection.

“What?”

He’s sharp. “Someone joked about
it. Said the spells work.”

“Spells?”

Why not? He can find out anyway. “I was into Wicca.” Blank stare from Martinez. “I belonged to a coven.”

“You’re a witch?”

“We called ourselves
students of Wicca. This was a few years
back. People around the office knew
something about it. There were
jokes.” Let the Cowans mock.

“You’re not into this
anymore?”

“The coven was
disbanded. I haven’t really kept it
up.” Like the South Beach diet, except
that she really hadn’t kept that up.

Disbanded. You could say the Spirits of Light Coven had
been disbanded. Zuleika sputtering with
rage, her lips positively frothing. “I offered you the world and you betrayed
me, you loathsome creature, you rotten hypocrite.”
Her Russian accent thickened as her
adrenaline level rose. “Your pathetic little act is finished. You’re under my curse . . . yes, my curse!”

“You worshipped the
Devil?” Martinez wanted to know.

A brief exhalation of
amusement. The usual misconceptions. “Wicca has nothing to do with His Satanic
Majesty. Really. It’s nature worship, not devil worship. Though I do know a couple devotees of Satan,
and they’re, like, totally nice people who wouldn’t hurt a fly. . . .”

Lucy found herself parroting one of Zuleika’s
set pieces: “Wicca sees the divine manifest in all creation. The cycles of nature are the holy days of
Wicca, the earth is the temple of Wicca, all life-forms are its prophets and
teachers. Wiccans respect life, cherish
the free will of sentient beings, and acknowledge the sanctity of the
environment.” That’s about all the Cowans need to know.

Martinez considered
this. “So. Did you nature worshippers
put spells on people you wanted out of the way?”

“We were so totally not about that. Wiccans believe for every action there’s a
reaction. If you send out evil energy,
it’ll return to you threefold.” Only if
your enemy is protected by a sufficiently strong charm.
“Using spells to coerce or injure is always
evil.” But possible. And evil’s kind of
a relative thing.

“Didn’t
stick pins in voodoo dolls?”

The absurd preoccupation with physical props. “Oh no.
I respect voodoo as an authentic grassroots religion of the Haitian
people and an expression of community solidarity in the face of neo-colonialist
exploitation.” Lucy had majored in
Sociology at Berkeley. “Voodoo is a
totally valid kind of folk tradition; Wicca is different.”

She didn’t mention
that the Wicca tradition does involve acquiring some of the enemy’s hair or
fingernails, and burning them over a black candle with appropriate
incantations, on four consecutive nights when the Moon is waning. Let him
do his own research. Why do I pay
property taxes?
But this was a
portion of the tradition that she had repudiated, and the online record of her
dispute with the Reverend Zuleika LeGrand would attest to that. In any case, it was all totally
academic. The DA’s office was not going
to indict anyone, in the twenty-first century, for casting spells.

The meeting had been a formality, the concluding handshake to months of
negotiations. In the Paulsen Agency’s
glass conference room, amid blue sky and green palms, business was over, the
chit-chat was winding down, lunch was in the offing, and there came an
unexpected voice.

“Ms. Armstrong, do you have a moment, please?” It was Martinez,
standing in the doorway, lithe and springy on his feet.

Around the table were
Paulsen’s president, Jay Maxwell, Bill Rescher from Public Relations, the
writer Joss Whedon, and, in a rare appearance, the legendary Clyde Paulsen
himself. Whedon would rewrite his story
along the lines agreed to, and the agency would pay him half a million for the
option.

“Just a couple of
questions.” The voice was as gravely
courteous as ever. Maxwell and Whedon
didn’t seem to notice: Martinez might just as well be the limo driver. Rescher looked distinctly annoyed at the
interruption. Paulsen appeared
fascinated, but then, he always did.

Lucy found Martinez’s
feeble ambush both mildly diverting and mildly irritating. For him to appear like this, without prior
warning, while she was with other people, was a planned attempt to disconcert
her. She was slightly embarrassed for
him because it wasn’t very well done.

“How can I help you
this time? Why don’t we go to my
office?” A warm smile and a cheerful
lilt, but in that moment Lucy decided this would be the last interview. Before saying goodbye to Martinez, she would
let him know that all future communications had to go through her lawyer. One-on-oneing with Martinez had been fine,
but she wouldn’t be Columboed, even ineffectually.

Over the next few weeks, Lucy’s attorney Steve Gordon heard from
Martinez a couple of times with what seemed like trivial inquiries. Martinez appeared at the Paulsen offices more
than once, and she heard of other people who had been questioned.

Laura was away in Cannes, lucky Laura. In the evenings, there seemed to be more
squad cars than usual, making more commotion than usual, near Lucy’s
condo. Before going to bed, she would
call Laura—it was early morning in Cannes—and exchange reports. They would enjoy a good chuckle about agency
office happenings, who was doing what to whom in Cannes, Laura’s own
hair-raising adventures, Hollywood scuttlebutt, the hunk Martinez, and the
strange investigation into Dan Zegarac’s demise.

The day before Laura was to get back to LA,
Gordon took a call from a female cop named Bennett, asking to set up a
meeting. Bennett said just enough to
convince Gordon that his client was not a suspect, and that she would
personally benefit from being present.
When Gordon called Lucy, she immediately insisted on co-operating.

So here they were. The two cops, Lucy, and her lawyer.

“We requested this meeting,” said Bennett,
“because of a few points we need to clarify, and because we have information
Ms. Armstrong needs to be aware of. Let
me say right away that Ms. Armstrong is not a suspect. We’re grateful to her for her co-operation. We do have a few questions for her. Then we’ll explain the latest developments in
this case.”

Gordon turned to Lucy and was about to
whisper something; she held up her hand and shook her head. “I’ll answer. Go ahead.” But
don’t trust them.

Martinez asked: “How well do you know William
Rescher?”

What the . . .? “I’ve known him for yea long. He was with Davenport.”

“You
both worked at Davenport, and now you both work at Paulsen. He followed you here.”

“Yes. He joined us a year ago.”

“Do you know him
well?”

“Don’t see much of
him. He’s not involved directly with the
talent side of the business.”

Bennett asked: “On May
4th, did you tell William Rescher you were leaving town for several days?”

Lucy remembered
sharing an elevator ride with Bill. He’d
asked her if she was leaving for the airport, “for that Nebraska thing.” She’d said yes. She had been selected as one of the judges at
the new Nebraska festival. But the
festival had been called off, some scandal about funds.

“I was in a
hurry. Conversations with Bill tend to
go on too long. I was taking a taxi to
LAX but that was to say goodbye to a friend who was leaving for Europe. I didn’t want to take the time to get into
the convoluted messy story of why the Nebraska festival was cancelled.”

There was actually a
little more to it. She had lied on
impulse, not just to save the time of explaining, but because she somehow
instinctively didn’t want creepy Bill to know where she was or what she was
doing.

It occurred to Lucy that if she’d gone to
Nebraska as planned, she would have had a way solid alibi for all three deaths,
not just the first two. Yet still the
significance of this fact didn’t dawn on her.

“Did you ever date
William Rescher?”

Where’s this going? “Yeah. A
long time ago. That was, let me see now,
twelve years ago.” The week after she
started at Davenport. Bill was already
there, and that’s where she had first met him.

“Who terminated the
relationship?”

“There was no
relationship.”

“Can you recall
whether one of you wanted to go on dating, the other didn’t?”

“Oh, that would be him
kind of wanting to go on, me wanting to stop.”

“He upset about that?”

She had given it
barely a thought for twelve years, but now it came back to her. Bill had been younger and cuter then. She’d felt a bit mushy about being so brutal. God, there were tears in his eyes. He was all, “I can’t go on without you. You mean everything to me.” Sweet, at the time, but what a loser. Surely he got over it quickly. Four months later he married that anemic
blonde at the front desk, was still married to her, as far as Lucy could
remember.

Lucy’s next date was Brad Pitt. It was only once and that was six months
before the release of Thelma and Louise,
so he wasn’t big BO. But there was a
picture in Hollywood Reporter. That kind of publicity never hurt an agency,
and Eddie McInerny had been impressed. So
much for Bill Rescher.

“William Rescher is in
custody,” said Bennett. “He has confessed
to the murder of Daniel Zegarac.”

“What? That’s way wacky.”

“He did it,” said
Martinez. “We have independent corroboration.”

“He’s
totally putting you on. Did he even know
Dan?”

“You’re
right. He barely knew him.” It was odd that Martinez said this as though
it clinched the case against Rescher.

Lucy sensed that they
were now getting to the whole point of the interview. What
kind of a trick is this?
Lucy was
absolutely sure that Bill could not
have killed Dan. This just had to be a
smokescreen. But why?

Bennett cleared her throat and said: “Ms.
Armstrong, we have to tell you . . . because this may cause you some
embarrassment when he goes to trial.
William Rescher committed murder because of you. He’s seriously . . . infatuated with
you. He has been obsessively in love
with you for fourteen years. His motive
for killing Daniel Zegarac was to help you out.”

Lucy’s head was spinning. She instantly went into Great Pan breathing.

A long pause.
Martinez asked, “Did you have any idea he felt this way about you?”

“But that’s just . . . He must be . . . totally out of his mind.”

Lucy was rapidly computing. Numerous little recollections of subtle
oddities in Bill’s speech and behavior, at rare intervals over the past twelve
years, suddenly made sense. What the
police were telling her struck her with stunning force as true, even obviously
true, a truth screaming for recognition.

Yet it had to be false, because it contradicted her certain knowledge that she could bring down her enemies by the sheer force of her mind.
She knew perfectly well that she had deliberately caused the deaths of
Mary Nolan, Eddie McInerny, and Dan Zegarac by her own unique magickal hexing
power. Therefore, Bill Rescher could not
have killed them.

“This must be a shock for you, and also disgusting, like a violation,” said Bennett, seeking to soften the blow by empathizing. “However, the evidence is that you’ve been the center of William Rescher’s thoughts for the past fifteen years.”

Don’t panic. Need to think. “You’re saying he did those other killings
too?”

“He may never be
charged with them, but . . . yes, we believe he did them.” Martinez chose not to reveal that a DNA trace
from the Nolan death matched Bill Rescher.
Rescher would be offered concessions for confessing to at least two of
the three homicides, preferably all three.
It didn’t matter anyway: Rescher would likely be acquitted by reason of
insanity and incarcerated for life in a mental institution.

In kind of a vertigo,
Lucy heard Bennett’s voice, as though from the other end of a long, winding
corridor: “Did Rescher ever tell you what he did before he got into the agency
business? He worked for an insurance company,
investigating claims. Before that he’d
been a small-town sheriff for a few years.
He knew something about domestic accidents and crime scene
investigations. So when he . . .
developed this psychotic obsession about you, he could easily see a practical
way to help you out.”

It took less than three minutes of turmoil
for the mist to be dispelled, for the simple truth to shine forth in all its
clarity, and for Lucy to feel once again completely in control. If Bill had directly engineered the deaths of
Mary Nolan, Eddie McInerny, and Dan Zegarac, this showed, not that Lucy didn’t
have the power to kill at a distance by the trained exercise of concentrated
thought, but that this formidable power of hers worked through a human
intermediary. Of course. She should have known it. Hadn’t she known it?

Zuleika had once said: “All occult powers
work through the human world; all human powers work through the natural
world.” Actually the gross old manatee
had said this more than once. It was one
of her irritating little sayings, as if she could have attained to some kind of
privileged wisdom. At the time, of
course, they had all hung on Zuleika’s every goddamn word.

All occult powers work through the human
world, the mental world. It was true
enough, and obvious enough, and therefore it was something Lucy must always
have known. Of course Bill was besotted with her and consumed with the mission of
serving her interests. Bill was an
instrumentality of the hex.

Within a few seconds of this surprising
thought, she began to feel that she had never been surprised at all. She conceived that she had been struck with
fresh force by a fact she had always taken for granted. The only surprise, it now seemed to her, was
the identity of the human agent. And she
would soon begin to recall that she had known all along, on some deep level of
her being, that it was Bill.

She could picture herself one day explaining
the principle of the thing to Laura and a select inner circle of devoted
followers: “What’s more in keeping with Wicca wisdom? That a witch might cause the death of an enemy
by using mental power to make a ladder collapse? Or that a witch might cause the death of an
enemy by influencing the mind of a third person who then kicks over the
ladder?” The answer could be no less
self-evident to Lucy and her followers than it had been to the sorry old fraud
Zuleika.

Now Lucy had Bennett
figured out. She was the kind of
sympathetic cop who would be first choice to talk to a rape victim or to a
witness who had seen a loved one blown away.
Probably had a degree in social work.

Okay now, what would they expect me to say in this situation? “This is just awful,” Lucy wailed. “I thought I’d made it this far by my own
efforts.”

“It would be
unproductive to let that distress you.”
Bennett’s tone was almost maternal.
“Chance enters into everyone’s life.
Many people fail to get the promotion they deserve because someone
doesn’t like them, for instance. You
didn’t ask Rescher to do any of the things he did. And as far as we can see, he was not mainly
concerned about helping your career. It
seems he was thinking that each of these victims, at the time, was getting you
down, causing you severe emotional pain.
The way he thought of it, he couldn’t bear to see you suffer.”

Detective Martinez felt good about the case. Everything, or almost everything, had clicked
into place quite smoothly. A week after
his first meeting with Lucy, he’d found she had a cast-iron alibi for the death
of Dan Zegarac, an alibi she evidently didn’t even know about.

On the afternoon of
May 6th, Jordan Pirelli, actuary, e-trader, body-builder, occasional model, and
currently unbooked actor, had gone out of town leaving a faucet trickling in
his jacuzzi. When wet stuff came through
the ceilings of the condos below, the janitor, Frank Vucovic, had to make sure
of the source of the flood. Since Pirelli was a security-minded person who had
installed additional anti-theft devices, janitor Frank needed to have the fire
department break into Pirelli’s condo through the window. Before going to such lengths, Frank wanted to
be very sure of the source of the flooding, so he had called the neighboring
apartment, Lucy Armstrong’s, at 12:30 in the morning, and when she answered and
said she was still up, he had personally gone into her apartment, talked with
her, and checked around for any signs of leakage. This had taken about a minute. It wasn’t remarkable that Lucy didn’t recall
it—some people do forget unimportant occurrences in the few minutes before they
fall asleep. Frank vividly remembered
the whole sequence of events, which he was obliged to report in tiresome detail
to the building management later that morning.

It was a perfect
alibi. Not only did it place Lucy two
hours away from the scene of the crime, it was also a purely chance event;
there was no way she could have engineered it, certainly not with the required
precise timing. This ruled out the
possibility that she had arranged to provide herself with an alibi, knowing in
advance that the murder would take place.
It tended to eliminate her as an accomplice or accessory.

When Bill Rescher
displayed an interest in the questioning of Lucy, Martinez, in a reflexive
impulse to sow misdirection, hinted that she was the hot suspect. The calamitous look on Rescher’s face
intrigued Martinez, who began to feed Rescher with suggestions that Lucy was
the investigative target, and to ostentatiously pull her in for
questioning. Then Martinez had staged
his arrival at Paulsen to confront Lucy in Rescher’s company.

Bill’s pathetic
eagerness that Lucy should come to no harm, skilfully manipulated by Martinez,
had soon prompted Bill to confess to the killing of Dan Zegarac. Bill knew many details of the fatal scene. It was a couple of days before Martinez
mentioned to him that he was also a suspect in the Nolan and McInerny
killings. Once this matter was raised,
Rescher became exceedingly cagey. He
knew that Lucy had excellent alibis for those killings, so he had no motive to
confess to them. Martinez had not yet
informed him that DNA placed him at the scene of Mary Nolan’s drowning.

Hours of questioning
of Rescher and of Lucy had convinced Martinez that they had not been working
together. Rescher had acted alone and
without anyone else’s knowledge.

There were no major
loose ends. Something felt not quite
right about Lucy’s failure to volunteer her alibis for the Nolan and McInerny
deaths. But she was, after all, a deeply
spiritual person, which Martinez quite benevolently took to mean: occasionally
out to lunch and in need of a little practical guidance. For all her quick shrewdness, her mind could
sometimes be way off someplace on a broomstick.

The Council of Thirteen, the trustees of the coven, were all
there. Zuleika screeched: “You’re under my curse . . . yes, my curse!”
and slapped Lucy’s face. The next few
seconds of intense silence made that slap seem like the snapping of a bone,
though some eye-witnesses later argued about whether any blow had actually
landed. Lucy didn’t flinch but just
glared. Most of the onlookers felt
awkward as well as awed. Wiccans don’t
talk much about curses, and when they cast them, the entire rigmarole is
decorous and painstakingly slow. Yet
Zuleika was Zuleika. The members were embarrassed but also filled
with foreboding. They fully expected
something bad to happen to the delinquent Lucy, though possibly not for years.

That night, Zuleika,
never at a loss for captivating words, was paralyzed and rendered permanently
speechless by a stroke. Within hours,
self-effacing Ben Goldberg, Zuleika’s reliable lieutenant—and heir apparent now
that Lucy had vacated this role—was hit by a truck and put out of action. From the following morning when she heard the
news, Lucy never doubted her own awesome gift.

No member of the Spirits of Light Coven had
any doubts about what these events signified.
Lucy didn’t have to say anything.
For a few days, she thought she might assume the throne vacated by
Zuleika, but most of the members melted away.
They were impressed, even intimidated, but having been Zuleika’s
apostles they were not ready to switch allegiance to this disconcerting young
witch. Only Laura remained. And then, over the years, contacts were made with
a few more interested seekers: a new coven was discreetly in the making.

Martinez turned the steering wheel.
Bulky shoulders and taut arms, an efficient instrument of justice. He spent some time in Dave’s Gym, no time in
Dunkin’ Donuts. He wore a demeanor of
solemn dignity like a ceremonial robe.
His ancestors, Lucy divined, had been priests of Quetzalcoatl. They could be relied upon to hack out the
hearts of an endless procession of sacrificial victims to gratify their
ineffably potent god. Lucy was enough of
a postmodernist to feel at home with her vision of this vanished mystical
empire, with its pitiless established church ever thirsty for more daily
gallons of fresh human blood. Our own
society is brutal enough in its way, just kind of a different way, what with
corporate greed, global warming, and all.

Martinez thought he
was beginning to know Lucy better, to pierce beneath her unruffled
surface. She had never acted as upset as
he’d expected. She was calm; most of the
time she radiated an awesome sense of calm; he couldn’t help admiring her
amazing calm. Inside of her, she
undoubtedly did experience turbulent emotions.
Learning of Rescher’s sick obsession had shaken her. The observable signs were subtle, subdued,
yet there was no mistaking the juddering impact of tremendous shock, an eight
on her personal Richter scale, at the moment when she had learned of Bill’s
confession. She definitely had been
shaken. She was still shaken. Better see her right to her door.

“For me this is another case to be filed
away. For you it must be a little bit
traumatic.”

He for sure has a script. Detective Martinez stopped the car right on the corner by Lucy’s condo building.

She said: “I guess you come across some,
like, really weird stuff in your job. As
weird as anything in the movies. Or even
weirder.”

“As a matter of fact . . .,” began Hugo
Martinez.

© 2001 David Ramsay
Steele



Misunderstanding Memeplexes: Where Andy West Goes Wrong

Sociology Posted on Sun, September 21, 2014 21:01:02

Andy West analyzes climate catastrophism
as a ‘memeplex’. The most complete
version of his argument is presented in a long essay titled ‘The Memeplex of
Catastrophic Anthropogenic Global Warming’, available at his blog <wearenarrative.wordpress.com/>. This essay, or fragments and summaries of it,
have been widely circulated and some climate skeptics have welcomed West’s
conclusions.

I’ll
begin with where I agree with West. Despite
the occasional description of climate catastrophism by Rush Limbaugh and a few others
as a ‘hoax’, the term ‘hoax’ implies that the perpetrators don’t themselves
believe in it, whereas it’s only too obvious that in this case they do believe
in it. Climate catastrophism is no more
a hoax than Marxism, Islam, psychoanalysis, or Seventh-Day Adventism. Or, if we want to take examples from
institutional science, cold fusion, Martian canals, or Lysenkoism.

Climate skeptics have often
likened catastrophism to a religion (and catastrophists sometimes liken climate
skepticism to a religion—when they’re not claiming that it’s all paid for by
the oil companies and is therefore a hoax).
West maintains that this likening of catastrophism to a religion is a
basically correct insight, but slightly misdefined, in that global warming catastrophism
and, say, Mormonism, are both instances of a category broader than that of
religion, a category West calls the “memeplex.”

Up to this point, I completely
agree with West, though I more often employ a different terminology. I would say that climate catastrophism and
Mormonism are both instances of an enthusiastic
belief system. (Come to think of it,
Mormonism began with a hoax, when the
con artist Joseph Smith dishonestly claimed he’d gotten The Book of Mormon from the angel Moroni, but it’s not a hoax today
and has millions of sincere adherents.)

Now
I’ll explain where I think Andy West goes wrong. According to Richard Dawkins, who coined the
term ‘meme’ in his 1976 book, The Selfish
Gene, a meme is a unit of cultural transmission, just as a gene is a unit
of biological transmission. Anything
culturally transmitted is a meme. All of
literature, science, religion, music, common-sense wisdom, and technology
consists of memes, and nothing but memes.
The first law of thermodynamics is just as much a meme as the story of
Eve and the Serpent (or we can view each as a set of memes; this makes no
difference). Andy West’s writing, like
mine and like Al Gore’s, consists of nothing but memes. Any idea, belief, or practice, capable of
being picked up by one human from another human and thus perpetuated culturally,
is a meme. No exceptions: this is the
definition of ‘meme’.

I’ll
be focusing here on beliefs, so I’ll equate a meme with a belief. Since it doesn’t affect any of the issues,
I’ll ignore here the fact that some memes are not beliefs—a meme may be a
practice or an idea that is not believed in, because it does not assert
anything about the way the world is.

If
every belief is a meme, it follows that every assemblage of beliefs is an
assemblage of memes. Andy West, however,
wants to exclude some assemblages of beliefs from his category of
‘memeplexes’. He doesn’t see climate
skepticism as a memeplex and he’s not going to agree that his own theory of
memeplexes is itself a memeplex.

It seems likely from his essay
that he even refuses to recognize some transmissible beliefs as memes, and
there are certainly numerous passing remarks indicating that West confines the
term ‘memeplex’ to a very restricted range of belief systems. Take West’s reference (p. 58) to “Both the
laudable and the lurking memetic content” (p. 2) in an essay by Pascal Bruckner
(a French philosopher, author of The
Fanaticism of the Apocalypse, critical of greenism or what he calls “ecologism”).
in Bruckner’s essay when every idea in that essay, and in every essay ever
penned, including every essay by Andy West, is a meme? And notice how “laudable’ is counterposed
with “memetic.” West tells us that
“Memeplexes wallow in uncertainty and confusion” (p. 3). I’m guessing he wouldn’t say that quantum
mechanics wallows in uncertainty and confusion.
He does tell us that “If done properly, science is anti-memetic” (p.
63).

A parallel would be if someone
wanted to say that not all bits of a chromosome carrying information about the
organism’s structure and behavior are to be called ‘genes’. Some are to be called ‘genes’ and others are
not to be called ‘genes’, and we are then going to discuss the baleful
influence of these ‘genes’ on the way the organism works, the implication being
that the heritable bits of information we’re not calling ‘genes’ (but leaving
unnamed and undescribed) are somehow healthy and unproblematic, while the
‘genes’ are a seriously disturbing influence.
(And this might even have a popular resonance. A survey of why some people are nervous about
genetically modified vegetables found that the main reason was that these
people had heard that the vegetables in question contain genes!)

Andy West is not alone. The term ‘meme’ has achieved a currency
beyond that of scholars interested in cultural transmission, and as so often
happens, the term has changed its meaning as it has passed from specialized to
more general usage. So today we often
come across the definition of a ‘meme’ as something mindless, something
non-reflective, something perhaps even irrational. West has simply adopted this popular
modification of the meaning of ‘meme’.

It’s one thing to say that
beliefs may sometimes survive for reasons other than their appeal to reason and
evidence. It’s quite another to say that
only beliefs which survive for such reasons are to be called ‘memes’. One thing that Dawkins’s concept of the meme
alerts us to is that an idea may spread for reasons other than the ostensible
ones. That is true and can be
illuminating, but it does not help to then confine the concept of ‘meme’ to
those cases where the actual reasons for its spread differ from the ostensible
ones. And, let’s never forget, an idea
may spread for reasons other than the ostensible ones and still be correct,
while an idea may spread for exactly the ostensible reasons and still be
incorrect.

I haven’t done a thorough check
on whether any other serious writers on memes have adopted, as West has, the more
popular meaning. But I do have Susan
Blackmore’s fine book, The Meme Machine (1999),
sitting on my shelf. This is the book
that popularized the term ‘memeplex’ (employed in 1995 by Hans-Cees Speel as a
contraction of ‘co-adapted meme complex’, though apparently Speel wasn’t the
first to use it). Blackmore makes it
clear in several passages in The Meme
Machine
that she sticks to the original Dawkins definition of ‘meme’, as
applying equally to all kinds of beliefs, including those comprising science
and technology. For example she writes
that “Science, like religion, is a mass of memeplexes,” and “Science is
fundamentally a process; a set of methods for trying to distinguish true memes
from false ones” (p. 202). So Blackmore
accepts that a meme, if it is a belief about a matter of fact, is either true
or false, that we can take steps to distinguish true memes from false memes,
and that science is composed of memeplexes and therefore of memes.

Now, someone might try to defend Andy
West as follows: If West wants to define ‘memes’ and ‘memeplexes’ in a way that
differs from Dawkins’s and Blackmore’s original definitions, who is Steele to
say that he shouldn’t? True, there may
be some verbal confusion caused by the fact that some kinds of cultural transmission
are excluded from the memes, and not given an alternative name. But that could be taken care of by clearly
distinguishing memes from non-memes.

Unfortunately, however, West
never gives a clear explanation of what separates memes from non-memes or
memeplexes from other assemblages of memes.
And no such distinction can seriously be made—not one that will
withstand a few seconds’ scrutiny.

The division of belief systems
into those which appeal to reason and evidence and those which do not is a
hopeless task. If there are two
incompatible points of view, x and y, then an adherent of x will always say that y does not appeal to reason and evidence,
or at least does so to a lesser extent than x. And an advocate of y will say the same, only reversing the terms.

Climate Catastrophism
and the Actual Climate

West intimates that climate
catastrophism has little or nothing to do with the facts of what’s going on in
the climate (pp. 1–5), and this is no doubt one reason he has for viewing it as
a memeplex in his derogatory sense. But
CAGW adherents would not agree with this judgment. They would say that climate skeptics,
including West, are the ones who disregard the facts. I disagree with CAGW and agree with West on
this issue; in fact, I go further, maintaining (as West explicitly does not)
that CAGW has been refuted. But the
point is that people like West, seeking to distinguish memeplexes from other
belief systems, are always going to classify as memeplexes those belief systems
they disavow or dislike, and refuse to classify as memeplexes those belief
systems they agree with. In other words,
once we set out to distinguish memeplexes along these lines, we can’t classify
CAGW as a memeplex without swallowing a whole chunk of the arguments of its
opponents. Discussing CAGW as West does
becomes a way of denigrating it without addressing its arguments. It can easily become an excuse for ad hominem
attacks, masquerading as study of memetic processes.

Where have we come across this
kind of thing before? Most conspicuously
in the case of psychoanalysis.
Psychoanalysts had the habit of diagnosing their opponents instead of
addressing their arguments. If you read Ernest
Jones’s life of Freud, you’ll notice how everyone who had a falling out with
Freud turned out to be seriously mentally ill, which explains why they adopted
the theoretical positions they did.
There was therefore no need for Jones to outline these positions or to
offer a refutation of these positions.

Perhaps
someone might think that the distinction between memeplexes and non-memeplexes
can be made by asserting that memeplexes are insulated from reality. Perhaps West is gesturing in this direction
with his claim that CAGW has little to do with what’s going on in the climate. I agree with the gist of West’s claim—that
catastrophists tend to give insufficient weight to the many observations which
appear to go against their theory. But
we need to be careful here.

It’s characteristic of scientific
theories—virtually all of them—that their adherents tend to brush aside
apparently contrary considerations in a way that seems arbitrary to dissenters
from the theory or to outsiders to the field.
There are many examples of this phenomenon in the history of those
scientific theories that we all consider acceptable. For instance, when Pasteur revived, reformulated,
and corroborated the theory of infection by germs, numerous experts said
something along the lines of: ‘We can see that this theory must obviously be
false, because we observe that when several people are exposed to the same
conditions, and therefore presumably the same germs, some become sick and some
don’t.’ Refutation by observations (or
more strictly, by reports of observations) is not necessarily simple or
straightforward. The theory tends to
dominate the observations, selecting and interpreting them in a distinctive way. When West says that memeplexes “manipulate
perceptions” (p. 1), he may not realize that this applies to all theories
without exception. This is why there can
be paradigm shifts. This is why someone
as clever and well-read as Paul Feyerabend can advocate ‘epistemological
anarchism’ or ‘anything goes’ as the rule for science.

Can
we really say that CAGW has nothing to do with what’s going on in the
climate? Surely there would have been no
CAGW if global mean surface temperature had not shown a net increase over the
past hundred years. If we look at the
reaction of CAGW proponents to the ‘Pause’—the fact that global mean surface
temperature has not risen in the past ten to thirty years (depending on your
favorite dataset)—we certainly do not observe that they are unconcerned about
it. They manifestly see it as something troublesome
that needs to be accounted for. When the
Pause started, many of them denied that it was happening. When this denial became impossible, many of
them said it could not possibly last much longer: just wait a year or two, and
then you’ll see a big spike in temperature!
Wrong again. As the Pause has
continued, they have responded in various ways, many of these mutually incompatible. They’re visibly troubled about it. And they will no doubt become increasingly
troubled with every year that the Pause continues, especially if we see
statistically significant cooling (as many skeptical climate scientists
predict). So I think it’s simplistic to
say that CAGW is sealed off from what’s actually happening in the climate.

To avoid misunderstanding, I should
point out that even without the Pause, the predictions of the CAGW crowd have
always been for more warming than has actually occurred. In every case, reality has turned out to be
cooler than what they confidently asserted would happen (or, in cases where
they gave a wide range of probabilities, the subsequent observations have been
at the extreme low end of the range, never close to the central values). Thus, even without the Pause, observations have
always told against their theory. And if
warming were to resume next year at the rate of the late twentieth century, the
new observations would continue to contradict the predictions of the IPCC
models. But the Pause is such a
contemptuous rebuff by Mother Nature, and something the broader public can so
easily grasp, that the CAGW zealots cannot escape the awareness that their
theory has landed in deep doo-doo.

West Forays into
Functionalism

West asks: “what are memeplexes for?”
(p. 18). He thinks it very likely that
they must be ‘for’ something. They can’t
just be a precipitate of human activity but have to possess a function or
telos, if not a purpose.

So
he tries to answer this question. His
answer is that memes are “for” benefiting society, and we can show this by
tracing some of the benefits which various memeplexes have conferred on society.

His chief example is pyramid
building in ancient Egypt. Pyramid
building used up a lot of resources and yet Egyptian society was successful by
various measures. Given that the burden
of pyramid building was so huge, West reasons, “it seems highly likely that the
overall social payback must be very positive indeed, in order to offset or
exceed the huge efforts involved” (p. 20).
He assumes that every major social phenomenon must pay. He then checks off some of the indirect
apparent benefits that resulted from the building of pyramids. Belief in retribution for bad deeds in the
afterlife encouraged altruistic behavior, which contributed to social cohesion
and therefore helped society (p. 19).

The logistics of pyramid building
“might well have been the catalyst that triggered the formation of the Egyptian
super-power civilization from pre-existing tribes, with all that a civilization
implies: central administration, writing, a managed food-supply and economy, a
large and formally organized professional army, . . .” And so on (pp. 20–21).

West goes on to offer a more
general social benefit, maintaining that it causes difficulties for society if
people’s beliefs are too dissimilar, and that therefore something that makes
beliefs more uniform will be helpful. So
societies with strong memeplexes will tend to outcompete societies without them
(p. 23).

Where
have we heard this before? In Durkheim,
of course, and in a whole brood of social anthropologists and sociologists,
most conspicuously people like Bronislaw Malinowski. This theory is called functionalism, and it
embodies certain misconceptions.
(Functionalist theories are not all the same; Durkheim’s functionalism
for instance holds that practices are functional inasmuch as they adjust to
equilibrium, which is not guaranteed to be nice. We don’t need to pursue these differences
here.)

West supposes that if a memeplex
exists, it must be because it confers some benefit. (In his case, the benefit seems to be
increasing the strength and power of the polity.) He then casts around for what this benefit
might possibly be, and hits upon one or two imaginable ways in which the
existence of this memeplex had helpful consequences for the population. But the initial question is misplaced. We will always be able to find good (or
functional) consequences of any belief system (it’s an ill wind that blows no
one any good), and there’s no reason to suppose that this explains the
prevalence of that belief system, especially as these consequences may arise
centuries after the beliefs have caught hold.

When the Egyptians were trying to
secure their prospects in the afterlife by protecting their corpses, what
selective mechanism could look ahead and foresee these remote consequences of
the prevalence of such beliefs, encouraging them to proliferate and
discouraging the alternatives (such as the belief that nothing of your
personality survives in your corpse, and when you die that’s the end of
you)? There’s no such mechanism. The prevalence of a belief cannot be properly
explained by remote and unknown (to the believer) consequences of many people
holding that belief.

West
looks for a functionalist explanation of the prevalence of certain systems of
belief, but such explanations are generally fallacious. This is not to deny the commonplaces of
historical enquiry. A group of people
may certainly become more or less successful because of some aspect of their
beliefs. National Socialist beliefs led
to the driving away from Germany of Jewish scientists and to such poor
decisions as the refusal to activate Ukrainian nationalism against Moscow. Thus, National Socialist beliefs helped
Germany to lose the war. In a famous
example, Bertrand Russell maintained that one of the reasons early Christianity
prevailed while Mithraism disappeared was that following Mithraism involved undue
expense: you frequently had to find a live bull to slaughter. There’s no mystery about these kinds of
explanations, and they do not imply functionalism.

Sometimes people may take deliberate
notice of the consequences of belief systems, and this may affect their
decisions. For example the patricians of
imperial Rome applied certain rules of thumb about religious movements. One was that old religions (like Judaism)
were to be warmly tolerated whereas new religions (like Christianity) were to
be less tolerated. Another rule of thumb
was that religious movements focused on following a particular person (again
Christianity) were likely to be dangerous, since any such person or his
designated successor would automatically become a rival to the emperor. Political leaders have always paid attention
to the purely factual consequences (in their judgment) of various belief
systems and have acted appropriately, to encourage or discourage those systems. This is not functionalism: in this case
someone consciously recognizes the consequences of the belief systems and acts
accordingly. The selective mechanism is
deliberate, conscious choice by specific individuals. Both social
institutions and belief systems evolve partly by cumulative rational selection,
as opposed to blind selective processes.

There
are also minor quibbles with West’s argument.
For example, he tacitly assumes that building pyramids is an outgrowth
of preoccupation with the afterlife. No
doubt this is true, but it goes against his argument, because if pyramid
building is explained by being a result of people’s preoccupation with the
afterlife, then there’s no need to explain it by its impact on military
organization and the like. We have an
explanation: pyramid building arose because of preoccupation with the
afterlife, end of story. And if, in the
functionalist perspective, pyramid building is a burden, while encouragement of
altruistic behavior is a benefit, then the most functional memeplex would be
something that encouraged altruistic behavior without building huge stone structures. There’s no logical necessity that a belief
system encouraging altruistic behavior must also encourage the building of huge
stone structures. Furthermore, the
building of huge stone structures clearly indicates that the pharaohs believed
that something other than altruistic behavior (to wit, the building of huge
stone structures) would benefit them in the afterlife. Therefore belief in the building of huge
stone structures represents a denial of the exclusive importance of altruistic
behavior: it’s an expression of people’s skepticism that altruistic behavior
could be enough, and so it undermines the altruistic ethics which West claims
pyramid building exists to promote.

What Are
Memeplexes For?

What are memeplexes for? Strictly, this question is absurd. It’s like asking what the aurora borealis is
for. The correct answer is that it is
not for anything and could not possibly be for anything. Systems of belief do not exist for any
purpose, except to assuage the believer’s thirst for truth. Nor do systems of belief exist because they
perform any social function.

To bring out the absurdity of
this kind of enquiry, consider the following example: Many story plots involve
the ‘eternal triangle’ theme of a man’s romantic involvement with two
women. What’s the social function of
this fictional theme? In other words,
what benefits does it confer on society, which benefits can account for the
fact that it exists? The answer is that
the prevalence of this literary theme, and of other common ‘dramatic situations’
arises automatically from certain basic, all-pervasive facts about human
life. It is therefore simply an
elementary misunderstanding to ask what it’s for. It’s just not ‘for’ anything and could not be
‘for’ anything.

Given a different interpretation,
however, the question “What are memeplexes for?” can be answered simply and
conclusively. Let’s restate the question. Why is it that humans have beliefs,
especially enthusiastic beliefs to which they become fiercely devoted? And why do groups of beliefs have a tendency
to clump together into systems of beliefs?

People
have beliefs because they have an appetite to believe. This appetite is stronger than hunger,
stronger than thirst, stronger than sex.
It’s innate in the human makeup, ineradicable, and dictated by
the genes. The human mind is so
constructed that it must believe. A belief
is taking something to be true. There is
no such thing as believing something you think is untrue—this is a
straightforward contradiction, because believing something is exactly
equivalent to thinking it true. So,
people’s appetite for belief always appears to them as (and is in fact) an
appetite for the truth.

What’s
the nature of this voracious, all-consuming appetite? It’s a demand to have the world make
sense. What you believe is always what
you think is true, and the demand that you come up with something you think is
true (the reason you’re interested at all, so to speak) arises from the
categorical imperative to be convinced of a theory about the world. This imperative is hardwired; it is observed in babies (recall Alison Gopnik et al., The Scientist in the Crib) and cannot be shut down except by unconsciousness.

To
take the question back a stage further, why are babies born with such a
fanatical, dogmatic, uncompromising conviction that the world absolutely must
make sense? The answer to this is not
obscure—because humans born with such a ferocious hunger for the truth do
better at passing on their genes than humans born without any such appetite. Surely this is more or less what we would
expect.

Why do beliefs clump
together? Anyone trying to make sense of
the world will come up with numerous beliefs, and these cannot always be
isolated from each other. One reason is
that we may have several beliefs about the same thing, and there is the
possibility that such beliefs might be inconsistent. We automatically strive to remove inconsistency
and harmonize our beliefs. If two of our
beliefs are incompatible, we recognize that something is wrong; we feel uneasy
and look for a way to make the incompatibility disappear. It’s impossible to believe anything without
tacitly acknowledging the rudiments of logic.
Just as the whole of arithmetic is implicit in the act of distinguishing
two from one, so the whole of logic is implicit in holding that one belief is
true and its denial is false.

Another reason is that beliefs
are often useful to us, and where they are useful, they are often more useful
if they are more general. It may be
useful to believe that this tree will bear sweet fruit every summer, but it
could be even more useful to believe that all trees with this shape of leaf
will bear sweet fruit every summer.

As a child grows up, it will
frequently have the experience of learning something that explains a lot, a
single insight that puts a myriad of things in a different light, making more
sense of them. Thus, the drive to
believe automatically tends to encourage the appetite for beliefs of wide application,
the limit being all-embracing beliefs which explain everything.

The
existence of belief systems (or memeplexes) can be seen to follow automatically
from innate factors in the human constitution.
With the development of language and other media of communication, most
of an individual’s beliefs come from the culture—from what other individuals
say. We all believe that there are
kangaroos in Australia and that it’s cold at the North Pole, even though most
of us have never visited either place (or spent any time or effort
investigating the plausibility of these tales we’ve been told). This doesn’t mean that we’re bound to believe
everything ‘the culture’ (other people) tells us, though we very often do so
until some acute problem makes us question a received belief.

I
have given a brief account here of why belief leads to belief systems, without
assuming that there is some genetic predisposition to embrace large systems of
interlocking beliefs. But, of course,
there certainly is some such predisposition, and more generally there is likely
to be, not merely a genetically programmed drive to beliefs of wide generality,
but a genetically programmed drive to hold certain kinds of general beliefs
rather than others. Still the most
brilliant stab at such a theory of how the mind strives to order its
understanding of the world in a particular way is the identity theory of Émile Meyerson.

The Myth of
Irrationality

I surmise that West subscribes to the
common view that there are rational and irrational reasons or motivations for
believing something. This misconception
is criticized at length in Ray Scott Percival’s book, The Myth of the Closed Mind (2012).
I believe that the main thrust of Percival’s argument is correct. West may think that adopting a memeplex is
irrational. But adopting any belief
system is never irrational—though it may sometimes be mistaken or even foolish. Humans just can’t help being rational; they
are forever condemned to be rational.

The
misconception that humans can believe things for irrational motives often arises
from the tacit definition of ‘rationality’ as absence of error. Certainly, humans often commit errors; we all
make mistakes. In fact, only a rational
being can commit an error; the existence of error (in the strict sense) is
proof of rationality. Errors can, as we
know, be corrected, and very frequently are.

Some
systems of belief are more passionate than others. You could put together all my beliefs about
transportation in and around Chicago and call it a belief system. For example, my belief that I can get from
the Loop to Logan Square in about half an hour by taking the Blue Line from any
of several stations along Dearborn is one belief among thousands. If I had to revise some of these beliefs, it
wouldn’t upset me very much.

Other
belief systems involve a more visceral attachment. My belief in neo-Darwinian evolution, or in
any major part of that theory such as genetics, could not be changed without an
intellectual upheaval accompanied by emotional turmoil. I call this kind of belief system an
enthusiastic belief system, and I maintain that enthusiastic belief systems, be
they religious, philosophical, or scientific, all have common characteristics.

To
mention a few: they all involve ‘confirmation bias’; once you accept the system
you tend to interpret evidence so that it fits the system. They all dismiss or explain away apparent
counter-instances with great facility.
They all involve privileged texts (scriptures) and accredited
spokespersons, which become imbued with authority. They all exhibit emotional attachment on the
part of their adherents and strong feelings of aversion toward people who
dispute the system. Attachment to a
belief system is very much like attachment to a person; just as love is blind,
so our attachment to the belief system makes us overlook its possible
faults. All these features are just as
much in evidence in belief systems we agree with as in belief systems we reject. In fact all these features are inevitable:
science, no less than religion, could never have developed without them (as
Percival makes clear).

Why We Should
Resist the Temptation to Diagnose

People often disagree with each other. There are many competing and incompatible
theories (I view any religious doctrine as a theory). This disagreement arises ineluctably (in
human groups of more than a few hundred) because the world is big, complex, and
messy and because individual humans are each equipped with nothing more than
strip-maps of the world. When an
adherent of one belief system encounters adherents of another belief system,
there is a feeling of incredulity: surely they can’t possibly think that?

When
we encounter a belief system we disagree with, we can criticize it. We can try to show that it is contrary to
observed facts, or that it involves inconsistency and is therefore
self-contradictory. We can also
criticize it simply by finding flaws in some of the arguments in its favor. But having stated our criticisms of the
belief system, we observe with amazement that its adherents do not instantly
accept our arguments and abandon their belief system. They persist in their erroneous ways, by
ignoring what we have said, or by misrepresenting what we have said, or by
replying to what we have said with blatantly unsound counter-arguments. This is the age-old pattern of differing
beliefs, in science just as much as in religion.

In
this situation, it’s tempting to conclude that we have not done enough. Instead of simply refuting these people’s
arguments, we may feel we need to try to show that their erroneous beliefs
arise from some deep-seated disorder in their thinking. We then try to show that they are guilty of
some kind of irrationality.

The
temptation should always be resisted.
Once we have stated the arguments against their position, and worked on
improving these arguments, we just have to keep on restating them and wait for
the penny to drop. The arguments against
their position (assuming these arguments can’t for the moment be improved) are
everything we could possibly have; there’s nothing more to be had.

Here
we should remind ourselves of a couple of elementary points:

1. One and the same belief may be
held by different people with different habits of thought, different
epistemologies, and different methodologies.
A true belief may be held for seriously defective reasons and a false
belief may be held for impeccable reasons.
Logically, there is a disjunction between the soundness of one’s
thinking and the truth of one’s beliefs.
We cannot validly reason from the unsoundness of someone’s thinking to
the untruth of their beliefs, nor from the untruth of their beliefs to the
unsoundness of their thinking, nor from the soundness of their thinking to the
truth of their beliefs, nor from the truth of their beliefs to the soundness of
their thinking.

2. What applies to individual
beliefs applies to systems of belief or memeplexes. One person may embrace a given memeplex
because of meticulous analysis while another may embrace the same memeplex
because of disturbed thinking, or seriously mistaken methodology (including uncritically
accepting what he has heard another person say). That this is routinely so is a familiar
fact. George Orwell famously pointed out
that he believed the world was round and orbited the sun, but would be unable
to mount a good defense of these beliefs against anyone (with a bit of
astronomical knowledge) who disputed them.

Even if every single person who
adheres to a particular memeplex does so for faulty reasons, it’s still
possible that at any moment, someone may arrive at a defense of that very
memeplex for different and sounder reasons.

If the adherents of a memeplex are
in fact prey to some thinking disorder, this is immaterial to the merits of
that memeplex, for one can arrive at a correct position by disordered thinking
or at an incorrect position by impeccable thinking. So the only relevance of their thinking
disorder would be that in this case it led them to espouse a faulty position,
and once again we find that all that matters is the faultiness of the position
and not in the slightest degree the type of thinking that happened to cause any
individual to accept it. The position
can only be shown to be faulty by direct criticism, never by diagnosing the way
its proponents think.

Science
has always involved passionate attachment to enthusiastic belief systems. As Blackmore says, “False theories thrive
within science as well as within religion, and for many of the same reasons” (The Meme Machine, p. 202). In itself, this is perfectly normal. Fiercely clinging to some theory after it has
been shown to contradict experience is a human trait (and according to Percival
a necessary and productive human trait) and it occurs in science just as much
as in other institutional areas.

Sometimes, under certain
conditions, the situation gets bad, as with Lysenkoism and CAGW. These monumental follies arose because the
zealots used political power to protect themselves from criticism by
stigmatizing their actual critics and intimidating potential critics. Just as competition, and nothing else, can keep business socially optimal, so debate, the competition of
ideas, keeps enquiry on the track of truth, and nothing else can. But monopoly enfeebles the monopolist—“power
stupefies”—and ensures that when the memeplex falls, it crashes and burns with
spectacular suddenness.

If
general cultural conditions favored it, episodes like Lysenkoism or CAGW could
actually destroy science. But conditions,
though worsening, are nowhere near that bad.
Science will survive, at least for the next century or so. CAGW will soon be a thing of the past, a
paroxysm of ideological craziness that we will look back upon with amused
fascination.

Naturally,
the bare bones of the environmentalist belief system will grow different
flesh. Just as global warming supplanted
acid rain, so some new scare will supplant global warming. (Always eager to help out, I have nominated
the use of electronic devices causing a switch in the Earth’s magnetic
field.) Environmentalism holds that
human activity is destroying the planet, and therefore economic growth must be
crippled and millions of Third World babies allowed to die. The specific evil activity which is destroying
the planet can be changed, although possibly the environmentalist loss of
credibility due to the thoroughness of the discrediting of CAGW will be so
immense that the entire environmentalist belief system will be weakened for a
while. If so, that will be good news for
humankind and especially its poorer half.



Remembering Tom Szasz

Psychology Posted on Thu, August 07, 2014 19:32:06

We’re probably in for a fresh spate of
critiques and reappraisals of the work of Thomas S. Szasz.

In 1961 Szasz published The Myth of Mental Illness (following an
article with the same title, two years earlier). His many subsequent books would preach the
same message, and most of these later volumes make much more entertaining
reading than The Myth of Mental Illness.

Szasz’s reputation as a writer
suffered on account of that early work.
Because of its title and its key role in psychiatric controversies, it
became the one work of Szasz to cite. People
curious about Szasz would usually seek out that particular book. It’s rather dull compared to such sparkling later
works as The Manufacture of Madness (1970),
The Therapeutic State (1975), or Liberation by Oppression (2002). His Karl
Kraus and the Soul Doctors
(1976, later reprinted as Anti-Freud) is also captivating, but in this case partly because of
the translated remarks of Kraus. Szasz’s
own witty, oracular debunking style evidently owed a lot to the author of The Last Days of Mankind, as well as to
Mark Twain, Ambrose Bierce, and H.L. Mencken.

Szasz argued that there is
literally no such thing as ‘mental illness’.
Mental illness is no more than a metaphor. If we speak of ‘a sick economy’, we know this
is a metaphor. We don’t try to pretend
that economics is a branch of medicine.
It’s just the same with human behavior, human feelings, and human
thoughts. These do not belong to the
domain of medicine. But in this case, we
may be tempted to think that there is a branch of medicine—psychiatry—which is
competent to deal with problems of behavior, feeling, and thinking. This Szasz denied outright. He did not rule out as meaningless or useless
everything that psychiatrists might do—he merely insisted that it was not
medicine. He undoubtedly did believe,
though, that psychiatry had done a lot more harm than good.

Szasz himself had a private
practice as a psychotherapist, as well as being a professor of psychiatry. He defended being a professor of psychiatry
by pointing out that few would object if an atheist were a professor of
religion. He talked about his own practice
of psychotherapy rarely and vaguely: he characterized it as having
conversations with people in order to help them with their problems in
living. As for helping them by giving
them drugs, Szasz held that this should be permitted as long as it was entirely
voluntary, but he himself was not a big enthusiast for the practice (and, for
all I know, believed it was always wrong).
He would say, for instance, that you don’t call in a TV repairman when
you’re disgusted with the quality of the programs. This is an entirely typical Szasz bon mot.
On the one hand, it strikingly clarifies one facet of the issue. On the other hand, there is a lingering
doubt, is there not? For after all, if
the entire scriptwriting and production process occurred inside the TV set, it
wouldn’t be so obviously silly to get
the repairman to fix up the script for It’s
Always Sunny.

Szasz—an MD who knew quite a bit
about medicine and the history of medicine—didn’t dispute that the realm of
behavior often interacts with the domain of medicine. By drinking too heavily, a person may give
himself cirrhosis of the liver, which is a medical problem. By bungee jumping a person may give himself a
broken neck. What makes him take to drink
or go in for bungee jumping is not, in Szasz’s view, a matter in which medical
doctors have any special competence. What
are commonly regarded as ‘mental illnesses’ are simply ‘problems in living’.

His books are eloquent in
exposing and criticizing the absurdities which result when any and all human
behavior is viewed in terms of health and disease. Even before such diseases as sex addiction,
shopping addiction, and internet addiction had been invented, Szasz had
accounted for them, and had pointed out the affinity of such afflictions with drapetomania
(the disease diagnosed in some black slaves by a nineteenth-century doctor, the
symptom of this malady being the slaves’ desire to run away from their owners)
and the mental diseases identified by Soviet psychiatrists in people who
criticized the socialist regime.

I
first became aware of someone called ‘Szasz’ when I read R.D. Laing in the
1960s; at that time Laing was all the rage in England. At first the ‘anti-psychiatrists’ eagerly
quoted their predecessor Szasz, but it soon became apparent that Szasz had
nothing but contempt for the anti-psychiatrists. He didn’t like them because they were
socialists and because he believed that they sought to glorify the mental
states of designated mental patients. Szasz
had no patience with those who imputed to mental patients wondrous insights
denied to the rest of us. He tended to
think of mental patients as, for the most part, a rather pathetic bunch who
were often complicit in their own oppression.

Jonathan Engel (in his American Therapy, 2008) gets the
chronology wrong and thinks that Szasz was a follower of the
anti-psychiatrists. I have occasionally
encountered people who suppose that since Szasz was a ‘radical’ in the 1960s
and later said things that sounded ‘conservative’, he must have undergone a
political conversion. But the truth is that
Szasz’s fundamental outlook was pretty much fixed by the 1940s and never
changed. He was always a classical
liberal, an anti-communist, and a ‘cultural conservative’ in lifestyle matters,
though of course favoring the repeal of all prohibitions on drugs and
victimless crimes. The biggest change he
did undergo was from being a psychoanalyst (some said the crown prince of
psychoanalysis) to being a hostile critic of psychoanalysis.

The
volume Szasz Under Fire: The Psychiatric
Abolitionist Faces His Critics
(edited by Jeffrey Schaler, 2004), which
includes a brief autobiography, also contains an exchange of letters between
Szasz and Karl Popper (this is given by Szasz in his reply to the article by
Ray Percival). Here, Popper says he
thinks that Szasz is ninety-five percent right about the nonexistence of mental
illnesses. What Popper meant was that
while he agreed with Szasz that the extension of the medical metaphor to every
type of human ethical or lifestyle decision is preposterous, we can still reasonably
conjecture that there are some few cases where a typical brain malfunction is
the cause of some typical cluster of emotional and behavioral problems (even
though we can’t yet identify the brain malfunction in question).

Not
that Szasz would have disputed the truism that Alzheimer’s and syphilis can
cause mental deterioration, and that there are sure to be many other as yet
undiscovered diseases of the nervous system that have mental and behavioral
symptoms. But he took the position that
we can’t describe these as diseases until we have ascertained their physical
cause.

In a typically Szaszian crisp
summary (and possibly oversimplification), he asserted that we’re not entitled to
talk about a disease until a pathologist can identify its presence in a
corpse. A corpse can have cancer,
bunions, or atherosclerosis. A corpse can’t
have schizophrenia, bipolar disorder, or paranoia, let alone shopping addiction
or obsessive-compulsive disorder. No
pathologist can detect the presence of these supposed illnesses by examining a
cadaver. To Szasz, this meant that they
could not be called literal diseases, even though he allowed that at some
future date we might find that they corresponded, more or less, with some
presently unknown literal diseases.

Szasz observed that once a
genuine physical disease is identified, it tends to be taken away from psychiatry
and given to general medicine, as occurred with syphilis of the brain and with
strokes, and more recently with Alzheimer’s.
Once these are classified as literal diseases with known physical
causes, psychiatry can claim no special expertise in these areas. Szasz also pointed out the influence of
ethical and religious fashion on psychiatric diagnoses: when Szasz started
writing, nearly all psychiatrists held that homosexuality was a disease (this
was the official position of the American Psychiatric Association until 1973
and the World Health Organization until 1990).
Now most of them don’t. The
switch is not in the least due to any new medical evidence, but purely to a
re-adjustment of mores and ethical attitudes.

Although on occasion Szasz fully
acknowledged that some human problems would eventually be attributed to
presently undiscovered brain diseases, the general sweep of his rhetoric tends
to give the opposite impression: “. . . we will discover the chemical cause of
schizophrenia when we discover the chemical cause of Christianity and Communism. No sooner and no later” (The Untamed Tongue, pp. 215–16).

I
agree with Szasz in opposing involuntary commitment of the mentally ill and I
admire his exposure of much psychiatric silliness. But the route to those conclusions is not as
simple as he believed. Szasz holds that
there can be no literal disease of the mind, only a literal disease of the body
or a metaphorical disease of the mind.
This is strictly correct, but it does not have the sweeping implications
he supposes. Szasz attacks people who
employ the term ‘mental illness’, but his attacks fail if people are using the
term to mean ‘a brain disease with mental symptoms’.

Various drugs can cause you to have
hallucinations and infection by rabies will make you terrified of water. So we know that purely bodily changes can change
your conscious states and your deliberate behavior in predictable ways, and we
can’t rule out the possibility that some such bodily changes may happen without
the intervention of drugs or of rabid beasts.

Szasz
would say that until we have identified the physical cause (the lesion), we
can’t assert the existence of an illness.
But, as far as I can see, nothing prevents us from conjecturing that
certain symptoms are accounted for by an illness whose existence we can’t yet observe
directly. I know a lot less than Szasz did
about the history of medicine, but I would even surmise that there have been
such cases—consumption, epilepsy, and asthma spring to mind. But even if I’m wrong in thinking that there
have been actual cases, it still wouldn’t follow that such conjectures are inadmissible. And if we can do this with physical symptoms,
we can do it with mental symptoms: I can see nothing wrong in principle with
hypothesizing that a certain cluster of emotions, thoughts, and behaviors is
accounted for by a brain malfunction.
It’s literally, pedantically wrong to call this a ‘mental disease’ just
as it’s literally, pedantically wrong to say that the sun rises, but such
casual expressions are inevitably rife throughout language.

Involuntary
commitment and other pretexts for imprisonment and torture are very common in
our culture, and so is the endless re-iteration of the claim that victims of
state coercion are ‘ill’. Yet these two
facts are not as tightly connected as Szasz supposed. I can easily imagine a change in semantic
fashion, so that state paternalists would say: ‘Granted, these people are not
ill, but they are still a threat to themselves and others and therefore need
treatment whether they consent or not’.
And I can also easily imagine some people coming around to the view:
‘These people are indeed ill, but even sick people shouldn’t be forcibly
incarcerated or given drugs or electric shocks against their wishes’.

Szasz
wrote about forty books, even one (Faith in Freedom, 2004) devoted to a critique of the views of libertarians on
mental illness. The one I found most
disappointing is The Meaning of Mind
(1996). As you read most of Szasz’s
work, you become conscious of an odd lacuna: he repeatedly draws a bright line
between consciousness and physiology, as though these are independent
realms. This is the more remarkable
because he is an atheist with no theological commitments. So, you wonder what he thinks about the
relation of mind and brain. With The Meaning of Mind, we find out that he
has no coherent view of the relation between mind and brain and (while the book
does have a sprinkling of his usual piercing insights) his uninformed comments
on those who have carefully elaborated various theories often miss the point
and are at times painful to peruse.

Following
protracted illness, and a few days after a severe spinal injury due to a fall, Tom
Szasz exercised his right to suicide. I
never met him but had various phone and email exchanges with him over a number
of years. If I had met him in the flesh,
I might have mentioned some of my criticisms of his views, though his always thick
Hungarian accent might have been a conversational impediment, and I have heard
from a reliable source that in his last years he became testier and testier,
disposed to see any disagreement as betrayal.

Szász Tamás István (the surname comes
first in Hungarian). Born Budapest, 15th
April 1920. Died Manlius, New York, 8th
September 2012.



A Moral Defense of Meat-Eating

Philosophy Posted on Tue, May 06, 2014 20:25:26

A
moral case for vegetarianism has been made by some philosophers and has become
popular among a small group of people not noted for their reticence. The most influential of these philosophers is
Peter Singer. Singer’s argument is that
it’s immoral to cause suffering, that the suffering of non-human animals has
equal weight with the suffering of humans, that you can’t eat meat without
patronizing and encouraging the inflicting of suffering on animals, and that
therefore it must be immoral to eat meat, except in cases of dire necessity.

I think this argument is mistaken,
and I will now give you my chief counter-argument. My counter-argument contains a lemma—an
intermediate conclusion that I can then use as a premiss for my final
argument. To keep things short and
simple, I’m not going to argue here for the lemma (though I am going to briefly
explain the point of it), since I believe that most people, if they think about
it even briefly, will agree with it. I’m
just going to state the lemma and then move on from there. (Although I say “my” counter-argument, I
don’t mean to imply that there’s anything original about this. I’ve heard something similar to this before,
though I have no idea who first came up with it. After all, it’s pretty obvious.)

Lemma:
We’re not under any moral obligation to act so as to reduce the total amount of
animal suffering below what it is in the wild, or below what it would be if
humans didn’t exist. In other words, if
the immorality of eating meat is dependent on humans causing animals to suffer,
then it can’t be immoral to eat meat if the production of meat for human
consumption does not increase the suffering of animals above what it would have
been in the absence of any human intervention.

Explanation
of the lemma:
In the absence of human intervention, animals like deer and
oxen would be eaten by non-human predators.
When humans eat meat, they’re competing with other meat-eating animals,
such as lions and wolves. If the
predators disappear, this may lead to overpopulation of the former prey animals
and consequent unwelcome environmental effects such as deforestation followed
by soil erosion. The situation is not
changed in principle if we move from hunting to the raising of livestock: the
morally relevant issue is whether the cows or sheep we’re raising would suffer
more, or less, or the same, if they were in the wild and being eaten by lions
or wolves.

The lemma allows the possibility
that some ways of treating animals may be immoral, but the lemma rules out the presumptive immorality of all cases of treating animals in such a way that their situation
is no worse than what they would face in the wild.
In the case of hunting, this is clear enough. Anyone who knows cats knows that they love to
keep their prey alive and toy with it before finally killing it, and this
causes more suffering than would be caused by a quick kill with an arrow or a
bullet. So human hunting causes less
suffering than hunting by at least some other predators.

Could it be argued that by hunting
deer, humans are causing suffering to lions and wolves by taking away their
prey? This doesn’t look like a promising
line of argument. Humans are hunters by
nature, and it’s not clear why we would feel obliged to let other species of
hunters have prey that we could have. A
lion whose potential prey is killed by a human is no worse off than a lion
whose potential prey is killed by another lion, and in either case the total
lion population adjusts to the availability of prey for lions, with marginal
lions always dying or otherwise failing to reproduce because of competition.

As we move from hunting to raising livestock,
no important new issues of principle arise.
Do farm animals suffer more or less than animals in the wild? It’s not clear that they suffer any more, and
it seems likely that they suffer a lot less.
The day-to-day life of a cow munching the grass and chewing the cud has
less excitement than that of the wild ox, continually fearful of sudden attack
by a predator, but I doubt that the cow would get a thrill from dangerous
adventures the way some humans do. When
death comes to the cow, it does not seem to cause any more suffering than death
in the wild—and if we ever found out that it did, we could adjust our techniques
of slaughter, without abandoning the practice of killing animals for food. My argument is not that all and any ways of raising
and killing animals for food are morally acceptable, but merely that some
feasible ways are morally acceptable, and therefore morality does not require
vegetarianism.

Some people may feel that the life
of an animal in the wild is in some way better than that of a farm animal, even
though the farm animal experiences less actual pain and fear. Well, we observe, as real incomes rise,
a growing interest both in recreational hunting and in game animals (animals
killed in the wild) in preference to farm-raised animals.
The meat of game animals is leaner and
tastes better. This trend is merely the
tip of a broader movement towards free-range raising of animals. Suppliers of meat can charge more for meat
that has been produced in a ‘more natural’ way, partly because of superior
taste and partly because consumers feel better knowing that what they are
eating was produced in a more natural way.
As our incomes rise, we spontaneously move away from factory farming
toward free-range farming, and then ultimately to preferring meat from animals
that have been hunted in the wild.

If we accept the lemma, then the
mere fact that some suffering occurs to animals when they’re raised for meat
production is not enough to show that this is immoral. Instead, we have to show that they
necessarily suffer more than they (or corresponding animals, which might be a
bit different in a hypothetical alternative world) would suffer, if the human
population were much smaller and the populations of lions and wolves much
bigger.

Although I’m not offering arguments
for the lemma, I do want to look at three possible ways of rejecting it. Someone could maintain that our obligation is
simply to stop suffering wherever we can.
One way to stop the suffering that comes from animals being harvested as
prey would be to wipe out those animals.
Thus, we could kill all oxen (including beef cows). At the same time, we would wipe out all the
predators, the animals that would have eaten the oxen. This would mean wiping out virtually all
animal species, including insects, birds, and fish, for all these animals are
either predators or likely prey. Some
folks would feel sad that all these species had disappeared, but they could
console themselves with the thought that being extinct means you never have to
suffer, whereas being extant means you do have to suffer.

Consistently,
we should extend this to humans: they should be killed off, and then no human would
ever suffer again. (Just to keep an eye
on things and make sure everyone follows the rules, I’ll be the last one to
go.) If allowing suffering is decisively
immoral then every sentient living thing, including humans, should be made
extinct, because this and only this guarantees no more suffering.

Another person might, however,
approach the issue a bit differently.
Instead of killing all animals, we could take over and manage the entire
animal kingdom, transforming it into something very different from the way it
has evolved, intervening with birth control drugs, factory-produced food,
analgesics, and anesthetics. The former
predators could be fed substitute foods made in factories from soybeans, or
even directly from industrial chemicals.
Since they would suffer somewhat from not being able to hunt, we would
have to provide them with robotic imitation-prey, so that they could continue
to experience the activity of hunting. Herbivores
could be left to graze the wilderness, but fed fertility-reducing drugs to keep
their populations stable. There would
still be some suffering: accidents do happen, and every animal has to die,
though we could try to limit this suffering by infiltrating the natural world
with robots using analgesic and anesthetic dart guns, watching all the while
for any impending pain or anxiety.

There are various aspects of this
scenario which may not be very appealing.
Be that as it may, it is not feasible right now, and won’t be feasible
without a huge investment over many decades, if not centuries (think about the
difficulty in ensuring that every fish in the oceans is guaranteed never to be
eaten). So, even assuming that this
ambitious intervention is morally required, we’re stuck for a while with the
choice between a certain amount of suffering in the wild and a certain amount
of suffering (probably the same or a bit less) down on the farm. And therefore, if we accept the lemma, we
must reject the case for vegetarianism on grounds of the suffering caused by
meat-eating.

Of course, most vegetarians will reject those two approaches and go for a third approach: simply have humans abstain from meat-eating. But what the lemma helps to bring out is that this option has an arbitrary quality. Turning humans into herbivores means excluding other herbivores from a large area of land, reducing the world’s populations of non-human herbivores. So the third approach is a kind of partial and inconsistent version of the first approach. Either we have an obligation to reduce animal suffering every chance we get, or we don’t have such an obligation. Eschewing the first two approaches means admitting that we have no such obligation.

We
can kill animals for food without adding to the total net suffering in the
animal kingdom, and this is morally okay.



The Bigotry of the New Atheism (by an Old Atheist)

Current Affairs Posted on Sat, April 12, 2014 22:05:39

(I wrote this a few years ago. A magazine said they would probably print it
but then held on to it for over two years before deciding not to use it. I’ve just now gone quickly through it and
changed it slightly in several places.)

If there’s anything new about the New Atheism which erupted
in 2004, it’s the strident proclamation that belief in God is a powerful force
for evil. All kinds of atrocities are
laid at the door of “religion,” equated with belief in God.

The central
message of the New Atheism is that 9/11 and similar outrages have occurred
because their perpetrators believed in God.
This is explicitly stated and reiterated many times by Sam Harris, but
the same tune has been hummed and whistled in several keys by Richard Dawkins
and the late Christopher Hitchens.

If you
believe in God, then you have been infected and (twenty-eight days or years
later) this belief is going to prompt you to kill yourself and your
fellow-humans. So the New Atheists tell
us. I view this as a fairytale, just as
far-fetched as anything in the Bible or the Quran.

Atheists Do It Better
(Mass Murder, That Is)

There’s an obvious problem with the New Atheist claim that
theistic religion is peculiarly conducive to atrocities. The last hundred years have seen the rise to
power of secular, in some cases overtly atheistic, ideological movements, and
these movements have been responsible for the killing, torture, enslavement,
and terrorizing of many millions of people.

By any measure, the evil deeds done
by these secular regimes within a few decades have vastly outweighed the evil
deeds done by Christianity and Islam combined, throughout their entire
history—not by a factor of just two or three, but by a factor of hundreds, if
not thousands. Institutions claiming to
embody Christianity or Islam have murdered thousands. Institutions claiming to embody Marxism,
National Socialism, or other types of socialism, have murdered tens of
millions.

Since this factual
point is so conspicuous, the New Atheists have naturally attempted to account
for it. Their most common response is
that whereas theists (like Torquemada) committed atrocities because they
believed in God, atheists (like Stalin or Mao) did not commit their atrocities because they
disbelieved in God. This strikes me as a
very strange claim.

Even if
this strange claim were true, it would not address the difficult point. The New Atheists maintain that “religious,”
meaning theistic, ideologies generate atrocities. History shows that non-theistic or secular
ideologies have generated atrocities on a vastly greater scale than theistic
ideologies. Now, even if the religious
atrocities were committed because the perpetrators believed in God while the
secular atrocities were not committed because the perpetrators disbelieved in
God, this does nothing to get around the stark fact that ideologies without
belief in God have motivated more and bigger atrocities than ideologies
incorporating belief in God, and that therefore it looks dubious to single out
belief in God as an especially virulent intellectual source of atrocities.

However,
the strange claim, if we can make any sense of it at all, can only be
false. Belief in God is an integral part
of Christianity and disbelief in God is an integral part of Marxism. Torquemada committed his atrocities because
of a belief system which included belief in God. Stalin and Mao committed their immensely more
ambitious atrocities because of a belief system which included disbelief in
God. I can’t imagine how you extract
from these facts the conclusion that theists committed their atrocities
“because” they believed in God while atheists did not commit their atrocities “because” they
disbelieved in God.

Another
argument offered by the New Atheists is to cite ways in which the churches were
complicit in the crimes against humanity committed by Fascist and National
Socialist regimes. The New Atheists
don’t seem equally concerned about the complicity of atheist intellectuals in
the greater crimes against humanity committed by Communist regimes.

But, in any case, what do such
examples really show? Fascism and
National Socialism were not Christian movements. The distinctive elements in their ideologies
and policies were not derived from what the churches were teaching. When the Fascists and the Nazis were new,
small parties with little following, they did not seek, nor did they get, the
slightest bit of support from the churches.
Until 1933, for instance, Catholics were forbidden by the German bishops
to join the Nazi Party.

By the time
Fascism and National Socialism became contenders for power, and then achieved
power, many people compromised with them, including most of the churches. So did other groups, for example, the
majority of scientists, scholars, and journalists in the affected
countries. Both totalitarian movements,
Fascism in Italy and
National Socialism in Germany,
gained electoral support at the expense of specifically Christian political
parties, which were closed down when the Fascist and National Socialist parties
came to power.

It’s also true that some
Christians, motivated at least in part by their Christianity, resisted these
regimes and paid for it. The truly
heroic Claus von Stauffenberg, leader of Operation Valkyrie, the plot to
assassinate Hitler, was a devout Catholic.

As well as the Soviet repression of
theists, both Christian and Muslim, and such well-known instances as the mass
killings directed by the atheist Khmer Rouge in Cambodia, it’s worth mentioning
a couple of other, lesser-known cases where specifically atheist persons or
groups were responsible for horrible acts of violence.

In 1924,
the Mexican government ramped up its already severe restrictions on the
activities of the Catholic church.
Hundreds of priests and other Catholics were imprisoned or executed
because they refused to comply with new regulations (requiring, for example,
that priests not criticize government officials and not wear clerical garb
outside a church). The brutal repression
of Catholics led to the “Cristero war” between Catholic rebels and the
government, followed by further government assaults on Catholics. The government hunted down and killed
priests, just because they would not give up being priests. Graham Greene wrote about this in a
documentary work, The Lawless Roads
(1939), and then in a novel, The Power
and the Glory
(1940). The former
president and de facto ruler of Mexico
at this time, Plutarco Elias Calles, was a highly enthusiastic atheist.

The traditional anticlericalism,
often atheism, of Mexico’s
ruling elite stems mainly from Positivism, the atheist belief system
promulgated by Auguste Comte, a form of pre-Marxist socialism which took root
among the Mexican intelligentsia in the nineteenth century. Vicente Fox Quesada, elected in 2000, was the
first Mexican president for ninety years who could openly admit to being a
Catholic, and even today, a few remnants of the old restrictions remain, for
example, ministers of religion are banned from holding political office in
Mexico.

In another
example, the Spanish anarchists, atheistic followers of Mikhail Bakunin (“If
God existed, it would be necessary to abolish him”), had come to control some
regions of rural Spain by the 1930s.
They committed numerous outrages against Catholics, not just the
desecration of churches, but also occasionally the killing and mutilation of
priests and nuns. These atheist-inspired
attacks alarmed many Spaniards, and stimulated support for rightwing enemies of
the Republic, helping prepare the way for extraordinary brutality by both sides
in the Spanish Civil War. Numerous
leftist supporters of the Spanish Republic, like George Orwell, were fully
aware of these anti-Catholic crimes and never uttered one word of criticism. Yes, it’s true that these atrocities were
“exaggerated by the right for their own purposes.” But the right had something to exaggerate.

Atheist Terrorism

Harris’s explanation for the current spate of suicide
terrorism is that the terrorists believe they will be rewarded as martyrs in
Heaven. The religious zeal of
fundamentalist Muslims is the explanation for suicide attacks. This entertaining story has been continually
reiterated by journalists, but it will not withstand scrutiny.

Harris, and
following him Dawkins, have asked, rhetorically, whether we can imagine any
atheist group conducting suicide terrorism.
In actuality, a rather high proportion of suicide terrorists have been
atheists. In the years up to 2009, the
pre-eminent perpetrator of suicide bombings in the world was the group known as
the Tamil Tigers, in Sri Lanka. They were
of Hindu background but led by atheists.
Opinions differ on whether the Tamil Tigers could accurately be
described as “Marxist-Leninist,” but it is not disputed that they were belligerently
anti-religion.

Another atheist
group responsible for suicide terrorism was the Kurdistan Workers’ Party (PKK),
a Kurdish nationalist and Marxist-Leninist group active in Turkey. These suicide bombers were atheists and their
victims were mostly Muslims. Around 1999
the PKK leadership abandoned its Marxism-Leninism and its practice of suicide
bombings, and later changed its name.

Suicide
terrorism is primarily political in its aims and rationale. Suicide bombers have political objectives
which provide the reason for their actions.
Suicide terrorism is the recourse of members of ethnic populations who
find themselves completely outmatched by vastly stronger military might. It’s their way of hitting back at the occupying
troops, whom they are too feeble to confront directly. It is particularly effective if the occupying
power is a democracy. Robert Pape’s
study of the backgrounds of Muslim suicide terrorists (Dying to Win: The Strategic Logic of Suicide Terrorism, 2005) shows
that many of them are not especially religious.

If suicide
bombers knew of a way to kill an equal number of the enemy without also killing
themselves, they would act upon it. The
reason that suicide bombing has become much more frequent since 1983 is that it
works. The Israeli government, for
example, while usually unmoved by peaceful overtures or by (comparatively
ineffective) non-suicide attacks, has made concessions to the Palestinians
following suicide bombings. Reagan
pulled the troops out of Lebanon
because of suicide attacks, intended precisely to get US troops pulled out of Lebanon. Pape, who made a thorough study of all cases
of suicide terrorism (up to 2003), calculated that about fifty percent of
suicide attacks had some demonstrable success in achieving their political
objectives—an amazingly high success rate for terrorism, or indeed for any form
of political operation by small groups not in control of a government.

This is not
to say that suicide terrorism has any moral justification. It is merely to say that it works extremely
well. Suicide terrorism is far more
effective than any of the alternatives open to militant political groups
acting, as they see it, on behalf of comparatively powerless ethnic communities
under foreign military occupation. It’s
a highly rational, expertly calibrated activity which delivers the political
goods.

Some
readers will no doubt protest that some of the Muslim suicide bombers really do
believe they will enjoy the attentions of seventy-two virgins in paradise. (Some Muslims have told me this is a
mistranslation and it should read “seventy-two raisins,” which confirms my view
that Islam isn’t much fun.) It wouldn’t
astound me to learn that one or two members of IRA-Sinn Fein did believe they
would have a friendly chat with St. Peter at the Pearly Gates before being
issued with harps. But Al-Qaeda, like
the IRA, is an organization all of whose activities are strictly determined by
its assessment of how these activities will serve its political
objectives. Being prepared to give up
one’s life for a great cause is a commonplace of all national cultures, and always
positively valued when done for the side we favor.

It’s
understandable that someone who picks up his knowledge of Christianity and
Islam from the TV news would be innocent of the above facts. (In the wake of 9/11, an operation carried
out by Saudis, I kept hearing about seventy-two virgins, but not once did I
hear a single murmur on the major TV networks about US troops stationed in
Saudi Arabia. These troops were pulled
out eighteen months after 9/11, rendering that operation a brilliant
success.) Still, anyone of a curious
disposition might pause to wonder why, if belief in God explains 9/11, the
first fifteen centuries of Islam passed by without a single suicide bombing or
anything comparable, whereas suicide bombings (usually assassinations of public
figures) were well-known in nineteenth-century Europe. We see this awareness reflected in such
stories as The Secret Agent by Conrad
and ‘The Stolen Bacillus’ by Wells.
Again, we can generally assume that the “anarchists” who committed
suicide bombings in nineteenth-century Europe were atheists.

What Makes Religion
Dangerous?

Confronted by the fact that atheists have been implicated in
both state repression and terrorism to an extent hugely disproportionate to
their numbers, the New Atheists offer the rejoinder that these dictators and
terrorists, though they may not believe in God, still think in ways that are unreasonable. In one formulation of this rejoinder, Harris
says that “although these tyrants [Stalin and Mao] paid lip service to
rationality, communism was little more than a political religion” (End of Faith, p. 79).

The first
thing to note about this is that in making such a move, the New Atheists
casually abandon what had been their central claim—and continues to be their
central claim, because they don’t acknowledge that they have abandoned it, but
go right back to repeating it. They keep
drumming into their readers that religion must be defined as belief in God (or
occasionally, the supernatural), and that specifically belief in God is the
pathological meme which causes terrorism and mass murder.

If
“religion” is to be used to characterize all belief systems which have ever led
to terrorism and mass murder, then in shifting from religion-defined-as-theism
to religion-which-may-just-as-well-be-atheistic, the New Atheists have tacitly
accepted that their original claim is false.

The second
thing to note is that while Harris will not apply the term “religion” to his
own beliefs, he does not give us a litmus test to distinguish “religion” from
non-religious beliefs. But a favorite
rhetorical trope of his is to assert that people he disagrees with accept
things without evidence, and so I think we can assume that Harris defines
“religion” as accepting things without evidence, or, as he sometimes says,
without justification.

However,
virtually all spokespersons for Christianity, Islam, Communism, or even National
Socialism, would hasten to insist that they do not, repeat not, accept anything
without evidence. They would go on to
assert that Harris ignores the relevant evidence for their doctrines. Harris would naturally reply that he’s not
very impressed with their evidence, and interprets it differently. On this point I agree with Harris (as I have
unpacked at length in my Atheism
Explained: From Folly to Philosophy
).

But the crucial thing to remember
here is that anyone who takes up any point of view on any subject whatsoever
will always claim that the evidence supports this point of view and that the
evidence goes against people who espouse a different point of view. So what Harris is saying is that he is right
and the theists are wrong. But we are
all right about some things and wrong about others, and, while we ought to
strive to increase the ratio of our true beliefs to our false beliefs, this in
itself says nothing about which false beliefs have the effect of increasing the
predisposition to kill people.

And so we
find that, in practice, what Harris is saying amounts to the claim that
“religion” means belief systems he disagrees with, and people who think
precisely the way he does would never commit atrocities. Any Marxist around the year 1900 would have
said the same thing.

Why Atheists Have
More Blood on Their Hands

While I point out that atheists have perpetrated more and
bigger atrocities than theists, I do not attribute this to an inherently
greater tendency on the part of atheists to commit atrocities. If the historical facts were the other way
round, with theists having committed more and bigger atrocities than atheists,
I would then be pointing out that it is a logical error to conclude that theism
is inherently more inclined than atheism to perpetrate atrocities.

As I see
it, there’s no direct causal link between atheism and atrocities or between
theism and atrocities. Neither theism
nor atheism is significantly conducive or unconducive to atrocities (or to
happiness or health, as I argued in Atheism
Explained
). But I do have a
historical theory explaining why atrocities by atheists in the twentieth
century vastly exceeded the far smaller-scale atrocities perpetrated by
Christians and Muslims in all centuries up to and including the twentieth.

Enthusiastic
ideologies or belief systems, especially when they are able to capture a
monopoly of governmental authority, are liable to give rise to atrocities. It doesn’t make any difference to the body
count whether such a belief system encompasses theism or atheism. The rise of secular belief systems such as
Positivism, Marxism, Fascism, and National Socialism coincided historically
with the greatly enhanced technology for committing atrocities. If Torquemada had possessed the administrative
and personnel resources of Stalin, he might have more nearly approached Stalin
as a superstar of mass murder.

Modern
capitalism produces improved techniques and it also produces
secularization. But secularization does
not mean the disappearance of belief systems with fanatical adherents. Spiritual religions are replaced by purportedly
scientific religions, from Mesmerism to Global Warming. Socialism has come and gone, and has now been
replaced by Environmentalism. When
Environmentalism passes away, it will be replaced by some new enthusiastic
belief system, perhaps one associated with Mental Health or the need for
contact with space aliens.

In the “third
world,” the poorer half of the world, which is now the stronghold of both
Christianity and Islam, there remains some danger of atrocities perpetrated in
the name of Christianity or Islam, but in the advanced industrial countries,
most of the danger of future holocausts arises from secular-minded and pseudoscientific
belief systems.

The New Illiberalism

Do we have anything to fear from the New Atheists
themselves? Some of the things they say
aren’t very reassuring.

Harris
informs us that “belief is not a private matter” (p. 44). The phrase “a private matter” has a specific
meaning in the history of liberal thought.
It means an area which is none of the business of the authorities, an
area where whatever you may choose to do will not cause you to fall into the
hands of the police. Hence the chilling
quality, to any liberal, of the phrase, “Thought Police.”

Maybe this
was just a slip by Harris? Not a bit of
it. “Some propositions are so
dangerous,” he explains, “that it may even be ethical to kill people for
believing them” (pp. 52–53). The whole
thrust of his book conveys the message that belief in God is the most dangerous
of the dangerous ideas for which it is ethically permissible to kill people who
have done absolutely nothing wrong.
Harris reasons that since thoughts give rise to actions, it’s okay to
coerce people on account of their dangerous thoughts alone. The rhetorical tone of The End of Faith suggests that Christian fundamentalists have the
moral standing of insect pests. Just
imagine the fuss the New Atheists would be making if Jerry Falwell or Pat
Robertson had so much as hinted that it might be ethically permissible to kill
people simply for believing there is no God.
But the late Reverend Falwell said: “We [meaning traditional-minded
Americans] honor the unbeliever.” You
can’t imagine Harris saying anything this nice about Christians.

Commenting
on the fact that most Muslims living in the West are tolerant of the non-Muslim
beliefs of their neighbors, Harris points out that Muslims in the West are in a
small minority, so their seeming tolerance may be just a sham (p. 115).

Quite
possibly. And if the New Atheists today,
when atheists constitute about two percent of the US population, can cheerfully
entertain the ethically permissible liquidation of some unspecified segment of
the dangerous eighty-plus percent who believe in God, what should we expect
from the New Atheists when atheists have increased their following to forty,
fifty, or sixty percent of the population?


