Andy West analyzes climate catastrophism
as a ‘memeplex’. The most complete
version of his argument is presented in a long essay titled ‘The Memeplex of
Catastrophic Anthropogenic Global Warming’, available at his blog <wearenarrative.wordpress.com/>. This essay, or fragments and summaries of it,
have been widely circulated, and some climate skeptics have welcomed West’s
conclusions.

I’ll
begin with where I agree with West. Rush Limbaugh and a few others have occasionally described climate catastrophism as a ‘hoax’, but the term ‘hoax’ implies that the perpetrators don’t themselves believe in it, whereas it’s only too obvious that in this case they do. Climate catastrophism is no more
a hoax than Marxism, Islam, psychoanalysis, or Seventh-Day Adventism. Or, if we want to take examples from
institutional science, cold fusion, Martian canals, or Lysenkoism.

Climate skeptics have often
likened catastrophism to a religion (and catastrophists sometimes liken climate
skepticism to a religion—when they’re not claiming that it’s all paid for by
the oil companies and is therefore a hoax).
West maintains that this likening of catastrophism to a religion is a basically correct insight, though it misidentifies the category: global warming catastrophism and, say, Mormonism are both instances of a category broader than that of religion, a category West calls the “memeplex.”

Up to this point, I completely
agree with West, though I more often employ a different terminology. I would say that climate catastrophism and
Mormonism are both instances of an enthusiastic belief system. (Come to think of it,
Mormonism began with a hoax, when the
con artist Joseph Smith dishonestly claimed he’d gotten The Book of Mormon from the angel Moroni, but it’s not a hoax today
and has millions of sincere adherents.)

Now
I’ll explain where I think Andy West goes wrong. According to Richard Dawkins, who coined the
term ‘meme’ in his 1976 book The Selfish Gene, a meme is a unit of cultural transmission, just as a gene is a unit
of biological transmission. Anything
culturally transmitted is a meme. All of
literature, science, religion, music, common-sense wisdom, and technology
consists of memes, and nothing but memes.
The first law of thermodynamics is just as much a meme as the story of
Eve and the Serpent (or we can view each as a set of memes; this makes no
difference). Andy West’s writing, like
mine and like Al Gore’s, consists of nothing but memes. Any idea, belief, or practice capable of
being picked up by one human from another human and thus perpetuated culturally,
is a meme. No exceptions: this is the
definition of ‘meme’.

I’ll
be focusing here on beliefs, so I’ll equate a meme with a belief. Since it doesn’t affect any of the issues,
I’ll ignore here the fact that some memes are not beliefs—a meme may be a
practice or an idea that is not believed in, because it does not assert
anything about the way the world is.

If
every belief is a meme, it follows that every assemblage of beliefs is an
assemblage of memes. Andy West, however,
wants to exclude some assemblages of beliefs from his category of
‘memeplexes’. He doesn’t see climate
skepticism as a memeplex and he’s not going to agree that his own theory of
memeplexes is itself a memeplex.

It seems likely from his essay
that he even refuses to recognize some transmissible beliefs as memes, and
there are certainly numerous passing remarks indicating that West confines the
term ‘memeplex’ to a very restricted range of belief systems. Take West’s reference to “Both the laudable and the lurking memetic content” (p. 2) in an essay by Pascal Bruckner
(a French philosopher, author of The Fanaticism of the Apocalypse, critical of greenism or what he calls “ecologism”). How can there be a “lurking” memetic content
in Bruckner’s essay when every idea in that essay, and in every essay ever
penned, including every essay by Andy West, is a meme? And notice how “laudable” is counterposed
with “memetic.” West tells us that
“Memeplexes wallow in uncertainty and confusion” (p. 3). I’m guessing he wouldn’t say that quantum
mechanics wallows in uncertainty and confusion.
He does tell us that “If done properly, science is anti-memetic” (p.
63).

A parallel would be if someone
wanted to say that not all bits of a chromosome carrying information about the
organism’s structure and behavior are to be called ‘genes’. Some are to be called ‘genes’ and others are
not to be called ‘genes’, and we are then going to discuss the baleful
influence of these ‘genes’ on the way the organism works, the implication being
that the heritable bits of information we’re not calling ‘genes’ (but leaving
unnamed and undescribed) are somehow healthy and unproblematic, while the
‘genes’ are a seriously disturbing influence.
(And this might even have a popular resonance. A survey of why some people are nervous about
genetically modified vegetables found that the main reason was that these
people had heard that the vegetables in question contain genes!)

Andy West is not alone. The term ‘meme’ has achieved a currency extending well beyond the circle of scholars interested in cultural transmission, and as so often
happens, the term has changed its meaning as it has passed from specialized to
more general usage. So today we often
come across the definition of a ‘meme’ as something mindless, something
non-reflective, something perhaps even irrational. West has simply adopted this popular
modification of the meaning of ‘meme’.

It’s one thing to say that
beliefs may sometimes survive for reasons other than their appeal to reason and
evidence. It’s quite another to say that
only beliefs which survive for such reasons are to be called ‘memes’. One thing that Dawkins’s concept of the meme
alerts us to is that an idea may spread for reasons other than the ostensible
ones. That is true and can be
illuminating, but it does not help to then confine the concept of ‘meme’ to those cases where the actual reasons for its spread differ from the ostensible
ones. And, let’s never forget, an idea
may spread for reasons other than the ostensible ones and still be correct,
while an idea may spread for exactly the ostensible reasons and still be
incorrect.

I haven’t done a thorough check
on whether any other serious writers on memes have adopted, as West has, the more
popular meaning. But I do have Susan
Blackmore’s fine book, The Meme Machine (1999),
sitting on my shelf. This is the book
that popularized the term ‘memeplex’ (employed in 1995 by Hans-Cees Speel as a
contraction of ‘co-adapted meme complex’, though apparently Speel wasn’t the
first to use it). Blackmore makes it
clear in several passages in The Meme Machine that she sticks to the original Dawkins definition of ‘meme’, as
applying equally to all kinds of beliefs, including those comprising science
and technology. For example, she writes
that “Science, like religion, is a mass of memeplexes,” and “Science is
fundamentally a process; a set of methods for trying to distinguish true memes
from false ones” (p. 202). So Blackmore
accepts that a meme, if it is a belief about a matter of fact, is either true
or false, that we can take steps to distinguish true memes from false memes,
and that science is composed of memeplexes and therefore of memes.

Now, someone might try to defend Andy
West as follows: If West wants to define ‘memes’ and ‘memeplexes’ in a way that
differs from Dawkins’s and Blackmore’s original definitions, who is Steele to
say that he shouldn’t? True, there may
be some verbal confusion caused by the fact that some kinds of cultural transmission are excluded from the category of memes and not given an alternative name. But that could be taken care of by clearly
distinguishing memes from non-memes.

Unfortunately, however, West
never gives a clear explanation of what separates memes from non-memes or
memeplexes from other assemblages of memes.
And no such distinction can seriously be made—not one that will
withstand a few seconds’ scrutiny.

Dividing belief systems into those which appeal to reason and evidence and those which do not is a hopeless task. If there are two
incompatible points of view, x and y, then an adherent of x will always say that y does not appeal to reason and evidence,
or at least does so to a lesser extent than x. And an advocate of y will say the same, only reversing the terms.

Climate Catastrophism and the Actual Climate

West intimates that climate
catastrophism has little or nothing to do with the facts of what’s going on in
the climate (pp. 1–5), and this is no doubt one reason he has for viewing it as
a memeplex in his derogatory sense. But
CAGW adherents would not agree with this judgment. They would say that climate skeptics,
including West, are the ones who disregard the facts. I disagree with CAGW and agree with West on this issue; in fact, I go further, maintaining (as West explicitly does not)
that CAGW has been refuted. But the
point is that people like West, seeking to distinguish memeplexes from other
belief systems, are always going to classify as memeplexes those belief systems
they disavow or dislike, and refuse to classify as memeplexes those belief
systems they agree with. In other words,
once we set out to distinguish memeplexes along these lines, we can’t classify
CAGW as a memeplex without swallowing a whole chunk of the arguments of its
opponents. Discussing CAGW as West does
becomes a way of denigrating it without addressing its arguments. It can easily become an excuse for ad hominem
attacks masquerading as the study of memetic processes.

Where have we come across this
kind of thing before? Most conspicuously
in the case of psychoanalysis.
Psychoanalysts had the habit of diagnosing their opponents instead of
addressing their arguments. If you read Ernest
Jones’s life of Freud, you’ll notice how everyone who had a falling out with
Freud turned out to be seriously mentally ill, which explains why they adopted
the theoretical positions they did.
There was therefore no need for Jones to outline these positions or to
offer a refutation of these positions.

Perhaps
someone might think that the distinction between memeplexes and non-memeplexes
can be made by asserting that memeplexes are insulated from reality. Perhaps West is gesturing in this direction
with his claim that CAGW has little to do with what’s going on in the climate. I agree with the gist of West’s claim—that
catastrophists tend to give insufficient weight to the many observations which
appear to go against their theory. But
we need to be careful here.

It’s characteristic of scientific
theories—virtually all of them—that their adherents tend to brush aside
apparently contrary considerations in a way that seems arbitrary to dissenters
from the theory or to outsiders to the field.
There are many examples of this phenomenon in the history of those
scientific theories that we all consider acceptable. For instance, when Pasteur revived, reformulated,
and corroborated the theory of infection by germs, numerous experts said
something along the lines of: ‘We can see that this theory must obviously be
false, because we observe that when several people are exposed to the same
conditions, and therefore presumably the same germs, some become sick and some
don’t.’ Refutation by observations (or
more strictly, by reports of observations) is not necessarily simple or
straightforward. The theory tends to
dominate the observations, selecting and interpreting them in a distinctive way. When West says that memeplexes “manipulate
perceptions” (p. 1), he may not realize that this applies to all theories
without exception. This is why there can
be paradigm shifts. This is why someone
as clever and well-read as Paul Feyerabend can advocate ‘epistemological
anarchism’ or ‘anything goes’ as the rule for science.

Can
we really say that CAGW has nothing to do with what’s going on in the
climate? Surely there would have been no
CAGW if global mean surface temperature had not shown a net increase over the
past hundred years. If we look at the
reaction of CAGW proponents to the ‘Pause’—the fact that global mean surface
temperature has not risen in the past ten to thirty years (depending on your
favorite dataset)—we certainly do not observe that they are unconcerned about
it. They manifestly see it as something troublesome
that needs to be accounted for. When the
Pause started, many of them denied that it was happening. When this denial became impossible, many of
them said it could not possibly last much longer: just wait a year or two, and
then you’ll see a big spike in temperature!
Wrong again. As the Pause has
continued, they have responded in various ways, many of these mutually incompatible. They’re visibly troubled about it. And they will no doubt become increasingly
troubled with every year that the Pause continues, especially if we see
statistically significant cooling (as many skeptical climate scientists
predict). So I think it’s simplistic to
say that CAGW is sealed off from what’s actually happening in the climate.

To avoid misunderstanding, I should
point out that even without the Pause, the predictions of the CAGW crowd have
always been for more warming than has actually occurred. In every case, reality has turned out to be
cooler than what they confidently asserted would happen (or, in cases where
they gave a wide range of probabilities, the subsequent observations have been
at the extreme low end of the range, never close to the central values). Thus, even without the Pause, observations have
always told against their theory. And if
warming were to resume next year at the rate of the late twentieth century, the
new observations would continue to contradict the predictions of the IPCC
models. But the Pause is such a
contemptuous rebuff by Mother Nature, and something the broader public can so
easily grasp, that the CAGW zealots cannot escape the awareness that their
theory has landed in deep doo-doo.

West Forays into Functionalism

West asks: “what are memeplexes for?”
(p. 18). He thinks it very likely that
they must be ‘for’ something. They can’t
just be a precipitate of human activity but have to possess a function or
telos, if not a purpose.

So
he tries to answer this question. His answer is that memeplexes are “for” benefiting society, and we can show this by tracing some of the benefits which various memeplexes have conferred on society.

His chief example is pyramid
building in ancient Egypt. Pyramid
building used up a lot of resources and yet Egyptian society was successful by
various measures. Given that the burden
of pyramid building was so huge, West reasons, “it seems highly likely that the
overall social payback must be very positive indeed, in order to offset or
exceed the huge efforts involved” (p. 20).
He assumes that every major social phenomenon must pay. He then checks off some of the indirect
apparent benefits that resulted from the building of pyramids. Belief in retribution for bad deeds in the
afterlife encouraged altruistic behavior, which contributed to social cohesion
and therefore helped society (p. 19).

The logistics of pyramid building
“might well have been the catalyst that triggered the formation of the Egyptian
super-power civilization from pre-existing tribes, with all that a civilization
implies: central administration, writing, a managed food-supply and economy, a
large and formally organized professional army, . . .” And so on (pp. 20–21).

West goes on to offer a more
general social benefit, maintaining that it causes difficulties for society if
people’s beliefs are too dissimilar, and that therefore something that makes
beliefs more uniform will be helpful. So
societies with strong memeplexes will tend to outcompete societies without them
(p. 23).

Where
have we heard this before? In Durkheim,
of course, and in a whole brood of social anthropologists and sociologists,
most conspicuously people like Bronislaw Malinowski. This theory is called functionalism, and it
embodies certain misconceptions.
(Functionalist theories are not all the same; Durkheim’s functionalism
for instance holds that practices are functional inasmuch as they adjust to
equilibrium, which is not guaranteed to be nice. We don’t need to pursue these differences
here.)

West supposes that if a memeplex
exists, it must be because it confers some benefit. (In his case, the benefit seems to be
increasing the strength and power of the polity.) He then casts around for what this benefit
might possibly be, and hits upon one or two imaginable ways in which the
existence of this memeplex had helpful consequences for the population. But the initial question is misplaced. We will always be able to find good (or
functional) consequences of any belief system (it’s an ill wind that blows no
one any good), and there’s no reason to suppose that this explains the
prevalence of that belief system, especially as these consequences may arise
centuries after the beliefs have caught hold.

When the Egyptians were trying to
secure their prospects in the afterlife by protecting their corpses, what
selective mechanism could look ahead and foresee these remote consequences of
the prevalence of such beliefs, encouraging them to proliferate and
discouraging the alternatives (such as the belief that nothing of your
personality survives in your corpse, and when you die that’s the end of
you)? There’s no such mechanism. The prevalence of a belief cannot be properly
explained by remote and unknown (to the believer) consequences of many people
holding that belief.

West
looks for a functionalist explanation of the prevalence of certain systems of
belief, but such explanations are generally fallacious. This is not to deny the commonplaces of
historical enquiry. A group of people
may certainly become more or less successful because of some aspect of their
beliefs. National Socialist beliefs drove Jewish scientists out of Germany and led to such poor decisions as the refusal to mobilize Ukrainian nationalism against Moscow. Thus, National Socialist beliefs helped
Germany to lose the war. In a famous
example, Bertrand Russell maintained that one of the reasons early Christianity
prevailed while Mithraism disappeared was that following Mithraism involved undue
expense: you frequently had to find a live bull to slaughter. There’s no mystery about these kinds of
explanations, and they do not imply functionalism.

Sometimes people may take deliberate
notice of the consequences of belief systems, and this may affect their
decisions. For example, the patricians of
imperial Rome applied certain rules of thumb about religious movements. One was that old religions (like Judaism)
were to be warmly tolerated whereas new religions (like Christianity) were to
be less tolerated. Another rule of thumb
was that religious movements focused on following a particular person (again
Christianity) were likely to be dangerous, since any such person or his
designated successor would automatically become a rival to the emperor. Political leaders have always paid attention
to the purely factual consequences (in their judgment) of various belief
systems and have acted appropriately, to encourage or discourage those systems. This is not functionalism: in this case
someone consciously recognizes the consequences of the belief systems and acts
accordingly. The selective mechanism is
deliberate, conscious choice by specific individuals. Both social
institutions and belief systems evolve partly by cumulative rational selection,
as opposed to blind selective processes.

There
are also minor quibbles with West’s argument.
For example, he tacitly assumes that building pyramids is an outgrowth
of preoccupation with the afterlife. No
doubt this is true, but it goes against his argument, because if pyramid
building is explained by being a result of people’s preoccupation with the
afterlife, then there’s no need to explain it by its impact on military
organization and the like. We have an
explanation: pyramid building arose because of preoccupation with the
afterlife, end of story. And if, in the
functionalist perspective, pyramid building is a burden, while encouragement of
altruistic behavior is a benefit, then the most functional memeplex would be
something that encouraged altruistic behavior without building huge stone structures. There’s no logical necessity that a belief
system encouraging altruistic behavior must also encourage the building of huge
stone structures. Furthermore, the
building of huge stone structures clearly indicates that the pharaohs believed
that something other than altruistic behavior (to wit, the building of huge
stone structures) would benefit them in the afterlife. Therefore belief in the building of huge
stone structures represents a denial of the exclusive importance of altruistic
behavior: it’s an expression of people’s skepticism that altruistic behavior
could be enough, and so it undermines the altruistic ethics which West claims
pyramid building exists to promote.

What Are Memeplexes For?

What are memeplexes for? Strictly, this question is absurd. It’s like asking what the aurora borealis is
for. The correct answer is that it is
not for anything and could not possibly be for anything. Systems of belief do not exist for any
purpose, except to assuage the believer’s thirst for truth. Nor do systems of belief exist because they
perform any social function.

To bring out the absurdity of
this kind of enquiry, consider the following example: Many story plots involve
the ‘eternal triangle’ theme of a man’s romantic involvement with two
women. What’s the social function of
this fictional theme? In other words,
what benefits does it confer on society, which benefits can account for the
fact that it exists? The answer is that
the prevalence of this literary theme, and of other common ‘dramatic situations’, arises automatically from certain basic, all-pervasive facts about human
life. It is therefore simply an
elementary misunderstanding to ask what it’s for. It’s just not ‘for’ anything and could not be
‘for’ anything.

Given a different interpretation,
however, the question “What are memeplexes for?” can be answered simply and
conclusively. Let’s restate the question. Why is it that humans have beliefs,
especially enthusiastic beliefs to which they become fiercely devoted? And why do groups of beliefs have a tendency
to clump together into systems of beliefs?

People
have beliefs because they have an appetite to believe. This appetite is stronger than hunger,
stronger than thirst, stronger than sex.
It’s innate in the human makeup, ineradicable, and dictated by
the genes. The human mind is so
constructed that it must believe. A belief
is taking something to be true. There is
no such thing as believing something you think is untrue—this is a
straightforward contradiction, because believing something is exactly
equivalent to thinking it true. So,
people’s appetite for belief always appears to them as (and is in fact) an
appetite for the truth.

What’s
the nature of this voracious, all-consuming appetite? It’s a demand to have the world make
sense. What you believe is always what
you think is true, and the demand that you come up with something you think is
true (the reason you’re interested at all, so to speak) arises from the
categorical imperative to be convinced of a theory about the world. This imperative is hardwired; it is observed in babies (recall Alison Gopnik et al., The Scientist in the Crib) and cannot be shut down except by unconsciousness.

To
take the question back a stage further, why are babies born with such a
fanatical, dogmatic, uncompromising conviction that the world absolutely must
make sense? The answer to this is not
obscure—because humans born with such a ferocious hunger for the truth do
better at passing on their genes than humans born without any such appetite. Surely this is more or less what we would
expect.

Why do beliefs clump
together? Anyone trying to make sense of
the world will come up with numerous beliefs, and these cannot always be
isolated from each other. One reason is
that we may have several beliefs about the same thing, and there is the
possibility that such beliefs might be inconsistent. We automatically strive to remove inconsistency
and harmonize our beliefs. If two of our
beliefs are incompatible, we recognize that something is wrong; we feel uneasy
and look for a way to make the incompatibility disappear. It’s impossible to believe anything without
tacitly acknowledging the rudiments of logic.
Just as the whole of arithmetic is implicit in the act of distinguishing
two from one, so the whole of logic is implicit in holding that one belief is
true and its denial is false.

Another reason is that beliefs
are often useful to us, and where they are useful, they are often more useful
if they are more general. It may be
useful to believe that this tree will bear sweet fruit every summer, but it
could be even more useful to believe that all trees with this shape of leaf
will bear sweet fruit every summer.

As a child grows up, it will
frequently have the experience of learning something that explains a lot, a
single insight that puts a myriad of things in a different light, making more
sense of them. Thus, the drive to
believe automatically tends to encourage the appetite for beliefs of wide application,
the limit being all-embracing beliefs which explain everything.

The
existence of belief systems (or memeplexes) can be seen to follow automatically
from innate factors in the human constitution.
With the development of language and other media of communication, most
of an individual’s beliefs come from the culture—from what other individuals
say. We all believe that there are
kangaroos in Australia and that it’s cold at the North Pole, even though most
of us have never visited either place (or spent any time or effort
investigating the plausibility of these tales we’ve been told). This doesn’t mean that we’re bound to believe
everything ‘the culture’ (other people) tells us, though we very often do so
until some acute problem makes us question a received belief.

I
have given a brief account here of why belief leads to belief systems, without
assuming that there is some genetic predisposition to embrace large systems of
interlocking beliefs. But, of course,
there certainly is some such predisposition, and more generally there is likely
to be, not merely a genetically programmed drive to beliefs of wide generality,
but a genetically programmed drive to hold certain kinds of general beliefs
rather than others. To this day, the most
brilliant stab at such a theory of how the mind strives to order its
understanding of the world in a particular way is the identity theory of Émile Meyerson.

The Myth of Irrationality

I surmise that West subscribes to the
common view that there are rational and irrational reasons or motivations for
believing something. This misconception
is criticized at length in Ray Scott Percival’s book, The Myth of the Closed Mind (2012).
I believe that the main thrust of Percival’s argument is correct. West may think that adopting a memeplex is
irrational. But adopting any belief
system is never irrational—though it may sometimes be mistaken or even foolish. Humans just can’t help being rational; they
are forever condemned to be rational.

The
misconception that humans can believe things for irrational motives often arises
from the tacit definition of ‘rationality’ as absence of error. Certainly, humans often commit errors; we all
make mistakes. In fact, only a rational
being can commit an error; the existence of error (in the strict sense) is
proof of rationality. Errors can, as we
know, be corrected, and very frequently are.

Some
systems of belief are more passionate than others. You could put together all my beliefs about
transportation in and around Chicago and call it a belief system. For example, my belief that I can get from
the Loop to Logan Square in about half an hour by taking the Blue Line from any
of several stations along Dearborn is one belief among thousands. If I had to revise some of these beliefs, it
wouldn’t upset me very much.

Other
belief systems involve a more visceral attachment. My belief in neo-Darwinian evolution, or in
any major part of that theory such as genetics, could not be changed without an
intellectual upheaval accompanied by emotional turmoil. I call this kind of belief system an
enthusiastic belief system, and I maintain that enthusiastic belief systems, be
they religious, philosophical, or scientific, all have common characteristics.

To
mention a few: they all involve ‘confirmation bias’; once you accept the system,
you tend to interpret evidence so that it fits the system. They all dismiss or explain away apparent
counter-instances with great facility.
They all involve privileged texts (scriptures) and accredited
spokespersons, which become imbued with authority. They all exhibit emotional attachment on the
part of their adherents and strong feelings of aversion toward people who
dispute the system. Attachment to a
belief system is very much like attachment to a person; just as love is blind,
so our attachment to the belief system makes us overlook its possible
faults. All these features are just as
much in evidence in belief systems we agree with as in belief systems we reject. In fact all these features are inevitable:
science, no less than religion, could never have developed without them (as
Percival makes clear).

Why We Should Resist the Temptation to Diagnose

People often disagree with each other. There are many competing and incompatible
theories (I view any religious doctrine as a theory). This disagreement arises ineluctably (in
human groups of more than a few hundred) because the world is big, complex, and
messy and because individual humans are each equipped with nothing more than
strip-maps of the world. When an
adherent of one belief system encounters adherents of another belief system,
there is a feeling of incredulity: surely they can’t possibly think that?

When
we encounter a belief system we disagree with, we can criticize it. We can try to show that it is contrary to
observed facts, or that it involves inconsistency and is therefore
self-contradictory. We can also
criticize it simply by finding flaws in some of the arguments in its favor. But having stated our criticisms of the
belief system, we observe with amazement that its adherents do not instantly
accept our arguments and abandon their belief system. They persist in their erroneous ways, by
ignoring what we have said, or by misrepresenting what we have said, or by
replying to what we have said with blatantly unsound counter-arguments. This is the age-old pattern of differing
beliefs, in science just as much as in religion.

In
this situation, it’s tempting to conclude that we have not done enough. Instead of simply refuting these people’s
arguments, we may feel we need to try to show that their erroneous beliefs
arise from some deep-seated disorder in their thinking. We then try to show that they are guilty of
some kind of irrationality.

The
temptation should always be resisted.
Once we have stated the arguments against their position, and worked on
improving these arguments, we just have to keep on restating them and wait for
the penny to drop. The arguments against
their position (assuming these arguments can’t for the moment be improved) are
everything we could possibly have; there’s nothing more to be had.

Here
we should remind ourselves of a couple of elementary points:

1. One and the same belief may be
held by different people with different habits of thought, different
epistemologies, and different methodologies.
A true belief may be held for seriously defective reasons and a false
belief may be held for impeccable reasons.
Logically, there is a disjunction between the soundness of one’s
thinking and the truth of one’s beliefs.
We cannot validly reason from the unsoundness of someone’s thinking to
the untruth of their beliefs, nor from the untruth of their beliefs to the
unsoundness of their thinking, nor from the soundness of their thinking to the
truth of their beliefs, nor from the truth of their beliefs to the soundness of
their thinking.

2. What applies to individual
beliefs applies to systems of belief or memeplexes. One person may embrace a given memeplex
because of meticulous analysis while another may embrace the same memeplex
because of disturbed thinking, or seriously mistaken methodology (including uncritically
accepting what he has heard another person say). That this is routinely so is a familiar
fact. George Orwell famously pointed out
that he believed the world was round and orbited the sun, but would be unable
to mount a good defense of these beliefs against anyone (with a bit of
astronomical knowledge) who disputed them.

Even if every single person who
adheres to a particular memeplex does so for faulty reasons, it’s still
possible that at any moment, someone may arrive at a defense of that very
memeplex for different and sounder reasons.

If the adherents of a memeplex are
in fact prey to some thinking disorder, this is immaterial to the merits of
that memeplex, for one can arrive at a correct position by disordered thinking
or at an incorrect position by impeccable thinking. So the only relevance of their thinking
disorder would be that in this case it led them to espouse a faulty position,
and once again we find that all that matters is the faultiness of the position
and not in the slightest degree the type of thinking that happened to cause any
individual to accept it. The position
can only be shown to be faulty by direct criticism, never by diagnosing the way
its proponents think.

Science
has always involved passionate attachment to enthusiastic belief systems. As Blackmore says, “False theories thrive
within science as well as within religion, and for many of the same reasons” (The Meme Machine, p. 202). In itself, this is perfectly normal. Fiercely clinging to some theory after it has
been shown to contradict experience is a human trait (and according to Percival
a necessary and productive human trait) and it occurs in science just as much
as in other institutional areas.

Sometimes, under certain
conditions, the situation gets bad, as with Lysenkoism and CAGW. These monumental follies arose because the
zealots used political power to protect themselves from criticism by
stigmatizing their actual critics and intimidating potential critics. Just as competition, and nothing else, can keep business socially optimal, so debate, the competition of ideas, and nothing else, can keep enquiry on the track of truth. But monopoly enfeebles the monopolist—“power
stupefies”—and ensures that when the memeplex falls, it crashes and burns with
spectacular suddenness.

If
general cultural conditions favored it, episodes like Lysenkoism or CAGW could
actually destroy science. But conditions,
though worsening, are nowhere near that bad.
Science will survive, at least for the next century or so. CAGW will soon be a thing of the past, a
paroxysm of ideological craziness that we will look back upon with amused
fascination.

Naturally,
the bare bones of the environmentalist belief system will grow different
flesh. Just as global warming supplanted
acid rain, so some new scare will supplant global warming. (Always eager to help out, I have nominated the theory that the use of electronic devices is causing a switch in the Earth’s magnetic field.) Environmentalism holds that
human activity is destroying the planet, and therefore economic growth must be
crippled and millions of Third World babies allowed to die. The specific evil activity which is destroying
the planet can be changed, although possibly the loss of credibility environmentalism suffers from the thorough discrediting of CAGW will be so immense that the entire environmentalist belief system will be weakened for a while. If so, that will be good news for
humankind and especially its poorer half.