
The London Libertarian

About the blog

Commentary and debate on politics, economics and culture from a libertarian perspective.



Remembering Tom Szasz

Psychology Posted on Thu, August 07, 2014 19:32:06

We’re probably in for a fresh spate of critiques and reappraisals of the work of Thomas S. Szasz.

In 1961 Szasz published The Myth of Mental Illness (following an article with the same title, two years earlier). His many subsequent books would preach the same message, and most of these later volumes make much more entertaining reading than The Myth of Mental Illness.

Szasz’s reputation as a writer suffered on account of that early work. Because of its title and its key role in psychiatric controversies, it became the one work of Szasz to cite. People curious about Szasz would usually seek out that particular book. It’s rather dull compared to such sparkling later works as The Manufacture of Madness (1970), The Therapeutic State (1975), or Liberation by Oppression (2002). His Karl Kraus and the Soul Doctors (1976, later reprinted as Anti-Freud) is also captivating, but in this case partly because of the translated remarks of Kraus. Szasz’s own witty, oracular debunking style evidently owed a lot to the author of The Last Days of Mankind, as well as to Mark Twain, Ambrose Bierce, and H.L. Mencken.

Szasz argued that there is literally no such thing as ‘mental illness’. Mental illness is no more than a metaphor. If we speak of ‘a sick economy’, we know this is a metaphor. We don’t try to pretend that economics is a branch of medicine. It’s just the same with human behavior, human feelings, and human thoughts. These do not belong to the domain of medicine. But in this case, we may be tempted to think that there is a branch of medicine—psychiatry—which is competent to deal with problems of behavior, feeling, and thinking. This Szasz denied outright. He did not rule out as meaningless or useless everything that psychiatrists might do—he merely insisted that it was not medicine. He undoubtedly did believe, though, that psychiatry had done a lot more harm than good.

Szasz himself had a private practice as a psychotherapist, as well as being a professor of psychiatry. He defended being a professor of psychiatry by pointing out that few would object if an atheist were a professor of religion. He talked about his own practice of psychotherapy rarely and vaguely: he characterized it as having conversations with people in order to help them with their problems in living. As for helping them by giving them drugs, Szasz held that this should be permitted as long as it was entirely voluntary, but he himself was not a big enthusiast for the practice (and, for all I know, believed it was always wrong). He would say, for instance, that you don’t call in a TV repairman when you’re disgusted with the quality of the programs. This is an entirely typical Szasz bon mot. On the one hand, it strikingly clarifies one facet of the issue. On the other hand, there is a lingering doubt, is there not? For after all, if the entire scriptwriting and production process occurred inside the TV set, it wouldn’t be so obviously silly to get the repairman to fix up the script for It’s Always Sunny.

Szasz—an MD who knew quite a bit about medicine and the history of medicine—didn’t dispute that the realm of behavior often interacts with the domain of medicine. By drinking too heavily, a person may give himself cirrhosis of the liver, which is a medical problem. By bungee jumping a person may give himself a broken neck. What makes him take to drink or go in for bungee jumping is not, in Szasz’s view, a matter in which medical doctors have any special competence. What are commonly regarded as ‘mental illnesses’ are simply ‘problems in living’.

His books are eloquent in exposing and criticizing the absurdities which result when any and all human behavior is viewed in terms of health and disease. Even before such diseases as sex addiction, shopping addiction, and internet addiction had been invented, Szasz had accounted for them, and had pointed out the affinity of such afflictions with drapetomania (the disease diagnosed in some black slaves by a nineteenth-century doctor, the symptom of this malady being the slaves’ desire to run away from their owners) and the mental diseases identified by Soviet psychiatrists in people who criticized the socialist regime.

I first became aware of someone called ‘Szasz’ when I read R.D. Laing in the 1960s; at that time Laing was all the rage in England. At first the ‘anti-psychiatrists’ eagerly quoted their predecessor Szasz, but it soon became apparent that Szasz had nothing but contempt for the anti-psychiatrists. He didn’t like them because they were socialists and because he believed that they sought to glorify the mental states of designated mental patients. Szasz had no patience with those who imputed to mental patients wondrous insights denied to the rest of us. He tended to think of mental patients as, for the most part, a rather pathetic bunch who were often complicit in their own oppression.

Jonathan Engel (in his American Therapy, 2008) gets the chronology wrong and thinks that Szasz was a follower of the anti-psychiatrists. I have occasionally encountered people who suppose that since Szasz was a ‘radical’ in the 1960s and later said things that sound ‘conservative’, he must have undergone a political conversion. But the truth is that Szasz’s fundamental outlook was pretty much fixed by the 1940s and never changed. He was always a classical liberal, an anti-communist, and a ‘cultural conservative’ in lifestyle matters, though of course favoring the repeal of drug prohibition and of all laws against victimless crimes. The biggest change he did undergo was from being a psychoanalyst (some said the crown prince of psychoanalysis) to being a hostile critic of psychoanalysis.

The volume Szasz Under Fire: The Psychiatric Abolitionist Faces His Critics (edited by Jeffrey Schaler, 2004), which includes a brief autobiography, also contains an exchange of letters between Szasz and Karl Popper (this is given by Szasz in his reply to the article by Ray Percival). Here, Popper says he thinks that Szasz is ninety-five percent right about the nonexistence of mental illnesses. What Popper meant was that while he agreed with Szasz that the extension of the medical metaphor to every type of human ethical or lifestyle decision is preposterous, we can still reasonably conjecture that there are some few cases where a typical brain malfunction is the cause of some typical cluster of emotional and behavioral problems (even though we can’t yet identify the brain malfunction in question).

Not that Szasz would have disputed the truism that Alzheimer’s and syphilis can cause mental deterioration, and that there are sure to be many other as yet undiscovered diseases of the nervous system that have mental and behavioral symptoms. But he took the position that we can’t describe these as diseases until we have ascertained their physical cause.

In a typically Szaszian crisp summary (and possibly oversimplification), he asserted that we’re not entitled to talk about a disease until a pathologist can identify its presence in a corpse. A corpse can have cancer, bunions, or atherosclerosis. A corpse can’t have schizophrenia, bipolar disorder, or paranoia, let alone shopping addiction or obsessive-compulsive disorder. No pathologist can detect the presence of these supposed illnesses by examining a cadaver. To Szasz, this meant that they could not be called literal diseases, even though he allowed that at some future date we might find that they corresponded, more or less, with some presently unknown literal diseases.

Szasz observed that once a genuine physical disease is identified, it tends to be taken away from psychiatry and given to general medicine, as occurred with syphilis of the brain and with strokes, and more recently with Alzheimer’s. Once these are classified as literal diseases with known physical causes, psychiatry can claim no special expertise in these areas. Szasz also pointed out the influence of ethical and religious fashion on psychiatric diagnoses: when Szasz started writing, nearly all psychiatrists held that homosexuality was a disease (this was the official position of the American Psychiatric Association until 1973 and the World Health Organization until 1990). Now most of them don’t. The switch is not in the least due to any new medical evidence, but purely to a re-adjustment of mores and ethical attitudes.

Although on occasion Szasz fully acknowledged that some human problems would eventually be attributed to presently undiscovered brain diseases, the general sweep of his rhetoric tends to give the opposite impression: “. . . we will discover the chemical cause of schizophrenia when we discover the chemical cause of Christianity and Communism. No sooner and no later” (The Untamed Tongue, pp. 215–16).

I agree with Szasz in opposing involuntary commitment of the mentally ill and I admire his exposure of much psychiatric silliness. But the route to those conclusions is not as simple as he believed. Szasz holds that there can be no literal disease of the mind, only a literal disease of the body or a metaphorical disease of the mind. This is strictly correct, but it does not have the sweeping implications he supposes. Szasz attacks people who employ the term ‘mental illness’, but his attacks fail if people are using the term to mean ‘a brain disease with mental symptoms’.

Various drugs can cause you to have hallucinations, and infection by rabies will make you terrified of water. So we know that purely bodily changes can change your conscious states and your deliberate behavior in predictable ways, and we can’t rule out the possibility that some such bodily changes may happen without the intervention of drugs or of rabid beasts.

Szasz would say that until we have identified the physical cause (the lesion), we can’t assert the existence of an illness. But, as far as I can see, nothing prevents us from conjecturing that certain symptoms are accounted for by an illness whose existence we can’t yet observe directly. I know a lot less than Szasz did about the history of medicine, but I would even surmise that there have been such cases—consumption, epilepsy, and asthma spring to mind. But even if I’m wrong in thinking that there have been actual cases, it still wouldn’t follow that such conjectures are inadmissible. And if we can do this with physical symptoms, we can do it with mental symptoms: I can see nothing wrong in principle with hypothesizing that a certain cluster of emotions, thoughts, and behaviors is accounted for by a brain malfunction. It’s literally, pedantically wrong to call this a ‘mental disease’ just as it’s literally, pedantically wrong to say that the sun rises, but such casual expressions are inevitably rife throughout language.

Involuntary commitment and other pretexts for imprisonment and torture are very common in our culture, and so is the endless reiteration of the claim that victims of state coercion are ‘ill’. Yet these two facts are not as tightly connected as Szasz supposed. I can easily imagine a change in semantic fashion, so that state paternalists would say: ‘Granted, these people are not ill, but they are still a threat to themselves and others and therefore need treatment whether they consent or not’. And I can also easily imagine some people coming around to the view: ‘These people are indeed ill, but even sick people shouldn’t be forcibly incarcerated or given drugs or electric shocks against their wishes’.

Szasz wrote about forty books, even one (Faith in Freedom, 2004) devoted to a critique of the views of libertarians on mental illness. The one I found most disappointing is The Meaning of Mind (1996). As you read most of Szasz’s work, you become conscious of an odd lacuna: he repeatedly draws a bright line between consciousness and physiology, as though these are independent realms. This is all the more remarkable because he is an atheist with no theological commitments. So, you wonder what he thinks about the relation of mind and brain. With The Meaning of Mind, we find out that he has no coherent view of the relation between mind and brain and (while the book does have a sprinkling of his usual piercing insights) his uninformed comments on those who have carefully elaborated various theories often miss the point and are at times painful to peruse.

Following protracted illness, and a few days after a severe spinal injury due to a fall, Tom Szasz exercised his right to suicide. I never met him but had various phone and email exchanges with him over a number of years. If I had met him in the flesh, I might have mentioned some of my criticisms of his views, though his still-thick Hungarian accent might have been a conversational impediment, and I have heard from a reliable source that in his last years he became testier and testier, disposed to see any disagreement as betrayal.

Szász Tamás István (the surname comes first in Hungarian). Born Budapest, 15th April 1920. Died Manlius, New York, 8th September 2012.



A Moral Defense of Meat-Eating

Philosophy Posted on Tue, May 06, 2014 20:25:26

A moral case for vegetarianism has been made by some philosophers and has become popular among a small group of people not noted for their reticence. The most influential of these philosophers is Peter Singer. Singer’s argument is that it’s immoral to cause suffering, that the suffering of non-human animals has equal weight with the suffering of humans, that you can’t eat meat without patronizing and encouraging the inflicting of suffering on animals, and that therefore it must be immoral to eat meat, except in cases of dire necessity.

I think this argument is mistaken, and I will now give you my chief counter-argument. My counter-argument contains a lemma: an intermediate conclusion that I can then use as a premiss for my final argument. To keep things short and simple, I’m not going to argue here for the lemma (though I am going to briefly explain the point of it), since I believe that most people, if they think about it even briefly, will agree with it. I’m just going to state the lemma and then move on from there. (Although I say “my” counter-argument, I don’t mean to imply that there’s anything original about this. I’ve heard something similar to this before, though I have no idea who first came up with it. After all, it’s pretty obvious.)

Lemma: We’re not under any moral obligation to act so as to reduce the total amount of animal suffering below what it is in the wild, or below what it would be if humans didn’t exist. In other words, if the immorality of eating meat is dependent on humans causing animals to suffer, then it can’t be immoral to eat meat if the production of meat for human consumption does not increase the suffering of animals above what it would have been in the absence of any human intervention.

Explanation of the lemma: In the absence of human intervention, animals like deer and oxen would be eaten by non-human predators. When humans eat meat, they’re competing with other meat-eating animals, such as lions and wolves. If the predators disappear, this may lead to overpopulation of the former prey animals and consequent unwelcome environmental effects such as deforestation followed by soil erosion. The situation is not changed in principle if we move from hunting to the raising of livestock: the morally relevant issue is whether the cows or sheep we’re raising would suffer more, or less, or the same, if they were in the wild and being eaten by lions or wolves.

The lemma allows the possibility that some ways of treating animals may be immoral, but the lemma rules out the presumptive immorality of all cases of treating animals in such a way that their situation is no worse than they would face in the wild. In the case of hunting, this is clear enough. Anyone who knows cats knows that they love to keep their prey alive and toy with it before finally killing it, and this causes more suffering than would be caused by a quick kill with an arrow or a bullet. So human hunting causes less suffering than hunting by at least some other predators.

Could it be argued that by hunting deer, humans are causing suffering to lions and wolves by taking away their prey? This doesn’t look like a promising line of argument. Humans are hunters by nature, and it’s not clear why we would feel obliged to let other species of hunters have prey that we could have. A lion whose potential prey is killed by a human is no worse off than a lion whose potential prey is killed by another lion, and in either case the total lion population adjusts to the availability of prey for lions, with marginal lions always dying or otherwise failing to reproduce because of competition.

As we move from hunting to raising livestock, no important new issues of principle arise. Do farm animals suffer more or less than animals in the wild? It’s not clear that they suffer any more, and it seems likely that they suffer a lot less. The day-to-day life of a cow munching the grass and chewing the cud has less excitement than that of the wild ox, continually fearful of sudden attack by a predator, but I doubt that the cow would get a thrill from dangerous adventures the way some humans do. When death comes to the cow, it does not seem to cause any more suffering than death in the wild—and if we ever found out that it did, we could adjust our techniques of slaughter, without abandoning the practice of killing animals for food. My argument is not that all and any ways of raising and killing animals for food are morally acceptable, but merely that some feasible ways are morally acceptable, and therefore morality does not require vegetarianism.

Some people may feel that the life of an animal in the wild is in some way better than that of a farm animal, even though the farm animal experiences less actual pain and fear. Well, we observe, as real incomes rise, a growing interest both in recreational hunting and in the meat of game animals (animals killed in the wild) in preference to farm-raised animals. The meat of game animals is leaner and tastes better. This trend is merely the tip of a broader movement towards free-range raising of animals. Suppliers of meat can charge more for meat that has been produced in a ‘more natural’ way, partly because of superior taste and partly because consumers feel better knowing that what they are eating was produced in a more natural way. As our incomes rise, we spontaneously move away from factory farming toward free-range farming, and then ultimately to preferring meat from animals that have been hunted in the wild.

If we accept the lemma, then the mere fact that some suffering occurs to animals when they’re raised for meat production is not enough to show that this is immoral. Instead, we have to show that they necessarily suffer more than they (or corresponding animals, which might be a bit different in a hypothetical alternative world) would suffer, if the human population were much smaller and the populations of lions and wolves much bigger.

Although I’m not offering arguments for the lemma, I do want to look at three possible ways of rejecting it. Someone could maintain that our obligation is simply to stop suffering wherever we can. One way to stop the suffering that comes from animals being harvested as prey would be to wipe out those animals. Thus, we could kill all oxen (including beef cows). At the same time, we would wipe out all the predators, the animals that would have eaten the oxen. This would mean wiping out virtually all animal species, including insects, birds, and fish, for all these animals are either predators or likely prey. Some folks would feel sad that all these species had disappeared, but they could console themselves with the thought that being extinct means you never have to suffer, whereas being extant means you do have to suffer.

Consistently, we should extend this to humans: they should be killed off, and then no human would ever suffer again. (Just to keep an eye on things and make sure everyone follows the rules, I’ll be the last one to go.) If allowing suffering is decisively immoral then every sentient living thing, including humans, should be made extinct, because this and only this guarantees no more suffering.

Another person might, however, approach the issue a bit differently. Instead of killing all animals, we could take over and manage the entire animal kingdom, transforming it into something very different from the way it has evolved, intervening with birth control drugs, factory-produced food, analgesics, and anesthetics. The former predators could be fed substitute foods made in factories from soybeans, or even directly from industrial chemicals. Since they would suffer somewhat from not being able to hunt, we would have to provide them with robotic imitation-prey, so that they could continue to experience the activity of hunting. Herbivores could be left to graze the wilderness, but fed fertility-reducing drugs to keep their populations stable. There would still be some suffering: accidents do happen, and every animal has to die, though we could try to limit this suffering by infiltrating the natural world with robots using analgesic and anesthetic dart guns, watching all the while for any impending pain or anxiety.

There are various aspects of this scenario which may not be very appealing. Be that as it may, it is not feasible right now, and won’t be feasible without a huge investment over many decades, if not centuries (think about the difficulty in ensuring that every fish in the oceans is guaranteed never to be eaten). So, even assuming that this ambitious intervention is morally required, we’re stuck for a while with the choice between a certain amount of suffering in the wild and a certain amount of suffering (probably the same or a bit less) down on the farm. And therefore, if we accept the lemma, we must reject the case for vegetarianism on grounds of the suffering caused by meat-eating.

Of course, most vegetarians will reject those two approaches and go for a third approach: simply have humans abstain from meat-eating. But what the lemma helps to bring out is that this option has an arbitrary quality. Turning humans into herbivores means excluding other herbivores from a large area of land, reducing the world’s populations of non-human herbivores. So the third approach is a kind of partial and inconsistent version of the first approach. Either we have an obligation to reduce animal suffering every chance we get, or we don’t have such an obligation. Eschewing the first two approaches means admitting that we have no such obligation.

We can kill animals for food without adding to the total net suffering in the animal kingdom, and this is morally okay.



The Bigotry of the New Atheism (by an Old Atheist)

Current Affairs Posted on Sat, April 12, 2014 22:05:39

(I wrote this a few years ago. A magazine said they would probably print it but then held on to it for over two years before deciding not to use it. I’ve just now gone quickly through it and changed it slightly in several places.)

If there’s anything new about the New Atheism which erupted in 2004, it’s the strident proclamation that belief in God is a powerful force for evil. All kinds of atrocities are laid at the door of “religion,” equated with belief in God.

The central message of the New Atheism is that 9/11 and similar outrages have occurred because their perpetrators believed in God. This is explicitly stated and reiterated many times by Sam Harris, but the same tune has been hummed and whistled in several keys by Richard Dawkins and the late Christopher Hitchens.

If you believe in God, then you have been infected and (twenty-eight days or years later) this belief is going to prompt you to kill yourself and your fellow-humans. So the New Atheists tell us. I view this as a fairytale, just as far-fetched as anything in the Bible or the Quran.

Atheists Do It Better (Mass Murder, That Is)

There’s an obvious problem with the New Atheist claim that theistic religion is peculiarly conducive to atrocities. The last hundred years have seen the rise to power of secular, in some cases overtly atheistic, ideological movements, and these movements have been responsible for the killing, torture, enslavement, and terrorizing of many millions of people.

By any measure, the evil deeds done by these secular regimes within a few decades have vastly outweighed the evil deeds done by Christianity and Islam combined, throughout their entire history—not by a factor of just two or three, but by a factor of hundreds, if not thousands. Institutions claiming to embody Christianity or Islam have murdered thousands. Institutions claiming to embody Marxism, National Socialism, or other types of socialism, have murdered tens of millions.

Since this factual point is so conspicuous, the New Atheists have naturally attempted to account for it. Their most common response is that whereas theists (like Torquemada) committed atrocities because they believed in God, atheists (like Stalin or Mao) did not commit their atrocities because they disbelieved in God. This strikes me as a very strange claim.

Even if this strange claim were true, it would not address the difficult point. The New Atheists maintain that “religious,” meaning theistic, ideologies generate atrocities. History shows that non-theistic or secular ideologies have generated atrocities on a vastly greater scale than theistic ideologies. Now, even if the religious atrocities were committed because the perpetrators believed in God while the secular atrocities were not committed because the perpetrators disbelieved in God, this does nothing to get around the stark fact that ideologies without belief in God have motivated more and bigger atrocities than ideologies incorporating belief in God. It therefore looks dubious to single out belief in God as an especially virulent intellectual source of atrocities.

However, the strange claim, if we can make any sense of it at all, can only be false. Belief in God is an integral part of Christianity and disbelief in God is an integral part of Marxism. Torquemada committed his atrocities because of a belief system which included belief in God. Stalin and Mao committed their immensely more ambitious atrocities because of a belief system which included disbelief in God. I can’t imagine how you extract from these facts the conclusion that theists committed their atrocities “because” they believed in God while atheists did not commit their atrocities “because” they disbelieved in God.

Another argument offered by the New Atheists is to cite ways in which the churches were complicit in the crimes against humanity committed by Fascist and National Socialist regimes. The New Atheists don’t seem equally concerned about the complicity of atheist intellectuals in the greater crimes against humanity committed by Communist regimes.

But, in any case, what do such examples really show? Fascism and National Socialism were not Christian movements. The distinctive elements in their ideologies and policies were not derived from what the churches were teaching. When the Fascists and the Nazis were new, small parties with little following, they did not seek, nor did they get, the slightest bit of support from the churches. Until 1933, for instance, Catholics were forbidden by the German bishops to join the Nazi Party.

By the time Fascism and National Socialism became contenders for power, and then achieved power, many people compromised with them, including most of the churches. So did other groups, for example, the majority of scientists, scholars, and journalists in the affected countries. Both totalitarian movements, Fascism in Italy and National Socialism in Germany, gained electoral support at the expense of specifically Christian political parties, which were closed down when the Fascist and National Socialist parties came to power.

It’s also true that some Christians, motivated at least in part by their Christianity, resisted these regimes and paid for it. The truly heroic Claus von Stauffenberg, leader of Operation Valkyrie, the plot to assassinate Hitler, was a devout Catholic.

As well as the Soviet repression of theists, both Christian and Muslim, and such well-known instances as the mass killings directed by the atheist Khmer Rouge in Cambodia, it’s worth mentioning a couple of other, lesser-known cases where specifically atheist persons or groups were responsible for horrible acts of violence.

In 1924, the Mexican government ramped up its already severe restrictions on the activities of the Catholic church. Hundreds of priests and other Catholics were imprisoned or executed because they refused to comply with new regulations (requiring, for example, that priests not criticize government officials and not wear clerical garb outside a church). The brutal repression of Catholics led to the “Cristero war” between Catholic rebels and the government, followed by further government assaults on Catholics. The government hunted down and killed priests, just because they would not give up being priests. Graham Greene wrote about this in a documentary work, The Lawless Roads (1939), and then in a novel, The Power and the Glory (1940). The former president and de facto ruler of Mexico at this time, Plutarco Elias Calles, was a highly enthusiastic atheist.

The traditional anticlericalism, often atheism, of Mexico’s ruling elite stems mainly from Positivism, the atheist belief system promulgated by Auguste Comte, a form of pre-Marxist socialism which took root among the Mexican intelligentsia in the nineteenth century. Vicente Fox Quesada, elected in 2000, was the first Mexican president for ninety years who could openly admit to being a Catholic, and even today a few remnants of the old restrictions remain; for example, ministers of religion are banned from holding political office in Mexico.

In another example, the Spanish anarchists, atheistic followers of Mikhail Bakunin (“If God existed, it would be necessary to abolish him”), had come to control some regions of rural Spain by the 1930s. They committed numerous outrages against Catholics, not just the desecration of churches, but also occasionally the killing and mutilation of priests and nuns. These atheist-inspired attacks alarmed many Spaniards, and stimulated support for rightwing enemies of the Republic, helping prepare the way for extraordinary brutality by both sides in the Spanish Civil War. Numerous leftist supporters of the Spanish Republic, like George Orwell, were fully aware of these anti-Catholic crimes and never uttered one word of criticism. Yes, it’s true that these atrocities were “exaggerated by the right for their own purposes.” But the right had something to exaggerate.

Atheist Terrorism

Harris’s explanation for the current spate of suicide terrorism is that the terrorists believe they will be rewarded as martyrs in Heaven. The religious zeal of fundamentalist Muslims is the explanation for suicide attacks. This entertaining story has been continually reiterated by journalists, but it will not withstand scrutiny.

Harris, and following him Dawkins, have asked, rhetorically, whether we can imagine any atheist group conducting suicide terrorism. In actuality, a rather high proportion of suicide terrorists have been atheists. In the years up to 2009, the pre-eminent perpetrator of suicide bombings in the world was the group known as the Tamil Tigers, in Sri Lanka. They were of Hindu background but led by atheists. Opinions differ on whether the Tamil Tigers could accurately be described as “Marxist-Leninist,” but it is not disputed that they were belligerently anti-religion.

Another atheist group responsible for suicide terrorism was the Kurdistan Workers’ Party (PKK), a Kurdish nationalist and Marxist-Leninist group active in Turkey. These suicide bombers were atheists and their victims were mostly Muslims. Around 1999 the PKK leadership abandoned its Marxism-Leninism and its practice of suicide bombings, and later changed its name.

Suicide terrorism is primarily political in its aims and rationale. Suicide bombers have political objectives which provide the reason for their actions. Suicide terrorism is the recourse of members of ethnic populations who find themselves completely outmatched by vastly stronger military might. It’s their way of hitting back at the occupying troops, whom they are too feeble to confront directly. It is particularly effective if the occupying power is a democracy. Robert Pape’s study of the backgrounds of Muslim suicide terrorists (Dying to Win: The Strategic Logic of Suicide Terrorism, 2005) shows that many of them are not especially religious.

If suicide bombers knew of a way to kill an equal number of the enemy without also killing themselves, they would act upon it. The reason that suicide bombing has become much more frequent since 1983 is that it works. The Israeli government, for example, while usually unmoved by peaceful overtures or by (comparatively ineffective) non-suicide attacks, has made concessions to the Palestinians following suicide bombings. Reagan pulled the troops out of Lebanon because of suicide attacks, intended precisely to get US troops pulled out of Lebanon. Pape, who made a thorough study of all cases of suicide terrorism (up to 2003), calculated that about fifty percent of suicide attacks had some demonstrable success in achieving their political objectives—an amazingly high success rate for terrorism, or indeed for any form of political operation by small groups not in control of a government.

This is not to say that suicide terrorism has any moral justification. It is merely to say that it works extremely well. Suicide terrorism is far more effective than any of the alternatives open to militant political groups acting, as they see it, on behalf of comparatively powerless ethnic communities under foreign military occupation. It’s a highly rational, expertly calibrated activity which delivers the political goods.

Some readers will no doubt protest that some of the Muslim suicide bombers really do believe they will enjoy the attentions of seventy-two virgins in paradise. (Some Muslims have told me this is a mistranslation and it should read “seventy-two raisins,” which confirms my view that Islam isn’t much fun.) It wouldn’t astound me to learn that one or two members of IRA-Sinn Fein did believe they would have a friendly chat with St. Peter at the Pearly Gates before being issued with harps. But Al-Qaeda, like the IRA, is an organization all of whose activities are strictly determined by its assessment of how these activities will serve its political objectives. Being prepared to give up one’s life for a great cause is a commonplace of all national cultures, and always positively valued when done for the side we favor.

It’s understandable that someone who picks up his knowledge of Christianity and Islam from the TV news would be innocent of the above facts. (In the wake of 9/11, an operation carried out by Saudis, I kept hearing about seventy-two virgins, but not once did I hear a single murmur on the major TV networks about US troops stationed in Saudi Arabia. These troops were pulled out eighteen months after 9/11, rendering that operation a brilliant success.) Still, anyone of a curious disposition might pause to wonder why, if belief in God explains 9/11, the first fifteen centuries of Islam passed by without a single suicide bombing or anything comparable, whereas suicide bombings (usually assassinations of public figures) were well-known in nineteenth-century Europe. We see this awareness reflected in such stories as The Secret Agent by Conrad and ‘The Stolen Bacillus’ by Wells. Again, we can generally assume that the “anarchists” who committed suicide bombings in nineteenth-century Europe were atheists.

What Makes Religion Dangerous?

Confronted by the fact that atheists have been implicated in both state repression and terrorism to an extent hugely disproportionate to their numbers, the New Atheists offer the rejoinder that these dictators and terrorists, though they may not believe in God, still think in ways that are unreasonable. In one formulation of this rejoinder, Harris says that “although these tyrants [Stalin and Mao] paid lip service to rationality, communism was little more than a political religion” (End of Faith, p. 79).

The first thing to note about this is that in making such a move, the New Atheists casually abandon what had been their central claim—and continues to be their central claim, because they don’t acknowledge that they have abandoned it, but go right back to repeating it. They keep drumming into their readers that religion must be defined as belief in God (or occasionally, the supernatural), and that specifically belief in God is the pathological meme which causes terrorism and mass murder.

If “religion” is to be used to characterize all belief systems which have ever led to terrorism and mass murder, then in shifting from religion-defined-as-theism to religion-which-may-just-as-well-be-atheistic, the New Atheists have tacitly accepted that their original claim is false.

The second thing to note is that while Harris will not apply the term “religion” to his own beliefs, he does not give us a litmus test to distinguish “religion” from non-religious beliefs. But a favorite rhetorical trope of his is to assert that people he disagrees with accept things without evidence, and so I think we can assume that Harris defines “religion” as accepting things without evidence, or, as he sometimes says, without justification.

However, virtually all spokespersons for Christianity, Islam, Communism, or even National Socialism, would hasten to insist that they do not, repeat not, accept anything without evidence. They would go on to assert that Harris ignores the relevant evidence for their doctrines. Harris would naturally reply that he’s not very impressed with their evidence, and interprets it differently. On this point I agree with Harris (as I have unpacked at length in my Atheism Explained: From Folly to Philosophy).

But the crucial thing to remember here is that anyone who takes up any point of view on any subject whatsoever will always claim that the evidence supports this point of view and that the evidence goes against people who espouse a different point of view. So what Harris is saying is that he is right and the theists are wrong. But we are all right about some things and wrong about others, and, while we ought to strive to increase the ratio of our true beliefs to our false beliefs, this in itself says nothing about which false beliefs have the effect of increasing the predisposition to kill people.

And so we find that, in practice, what Harris is saying amounts to the claim that “religion” means belief systems he disagrees with, and people who think precisely the way he does would never commit atrocities. Any Marxist around the year 1900 would have said the same thing.

Why Atheists Have More Blood on Their Hands

While I point out that atheists have perpetrated more and bigger atrocities than theists, I do not attribute this to an inherently greater tendency on the part of atheists to commit atrocities. If the historical facts were the other way round, with theists having committed more and bigger atrocities than atheists, I would then be pointing out that it is a logical error to conclude that theism is inherently more inclined than atheism to perpetrate atrocities.

As I see it, there’s no direct causal link between atheism and atrocities or between theism and atrocities. Neither theism nor atheism is significantly conducive or unconducive to atrocities (or to happiness or health, as I argued in Atheism Explained). But I do have a historical theory explaining why atrocities by atheists in the twentieth century vastly exceeded the far smaller-scale atrocities perpetrated by Christians and Muslims in all centuries up to and including the twentieth.

Enthusiastic ideologies or belief systems, especially when they are able to capture a monopoly of governmental authority, are liable to give rise to atrocities. It doesn’t make any difference to the body count whether such a belief system encompasses theism or atheism. The rise of secular belief systems such as Positivism, Marxism, Fascism, and National Socialism coincided historically with the greatly enhanced technology for committing atrocities. If Torquemada had possessed the administrative and personnel resources of Stalin, he might have more nearly approached Stalin as a superstar of mass murder.

Modern capitalism produces improved techniques and it also produces secularization. But secularization does not mean the disappearance of belief systems with fanatical adherents. Spiritual religions are replaced by purportedly scientific religions, from Mesmerism to Global Warming. Socialism has come and gone, and has now been replaced by Environmentalism. When Environmentalism passes away, it will be replaced by some new enthusiastic belief system, perhaps one associated with Mental Health or the need for contact with space aliens.

In the “third world,” the poorer half of the world, which is now the stronghold of both Christianity and Islam, there remains some danger of atrocities perpetrated in the name of Christianity or Islam, but in the advanced industrial countries, most of the danger of future holocausts arises from secular-minded and pseudoscientific belief systems.

The New Illiberalism

Do we have anything to fear from the New Atheists themselves? Some of the things they say aren’t very reassuring.

Harris informs us that “belief is not a private matter” (p. 44). The phrase “a private matter” has a specific meaning in the history of liberal thought. It means an area which is none of the business of the authorities, an area where whatever you may choose to do will not cause you to fall into the hands of the police. Hence the chilling quality, to any liberal, of the phrase “Thought Police.”

Maybe this was just a slip by Harris? Not a bit of it. “Some propositions are so dangerous,” he explains, “that it may even be ethical to kill people for believing them” (pp. 52–53). The whole thrust of his book conveys the message that belief in God is the most dangerous of the dangerous ideas for which it is ethically permissible to kill people who have done absolutely nothing wrong. Harris reasons that since thoughts give rise to actions, it’s okay to coerce people on account of their dangerous thoughts alone. The rhetorical tone of The End of Faith suggests that Christian fundamentalists have the moral standing of insect pests. Just imagine the fuss the New Atheists would be making if Jerry Falwell or Pat Robertson had so much as hinted that it might be ethically permissible to kill people simply for believing there is no God. But the late Reverend Falwell said: “We [meaning traditional-minded Americans] honor the unbeliever.” You can’t imagine Harris saying anything this nice about Christians.

Commenting on the fact that most Muslims living in the West are tolerant of the non-Muslim beliefs of their neighbors, Harris points out that Muslims in the West are in a small minority, so their seeming tolerance may be just a sham (p. 115).

Quite possibly. And if the New Atheists today, when atheists constitute about two percent of the US population, can cheerfully entertain the ethically permissible liquidation of some unspecified segment of the dangerous eighty-plus percent who believe in God, what should we expect from the New Atheists when atheists have increased their following to forty, fifty, or sixty percent of the population?



Response to Lee Waaks on Praxeology

Economics Posted on Mon, March 10, 2014 02:53:13

[I tried to post this as a comment, but the blog wouldn’t let me (too long?) so I’m submitting it as a new post.]

Thanks, Lee, for your kind remarks. My debate with Robert Taylor was on New Year’s Day, so naturally I was in recovery mode, and even slower-witted than usual. I failed to make some key points.

I don’t dispute that it’s possible to formulate a useful deductive system in which conclusions can be spun out from axioms. Euclidean geometry is an obvious example. The theorems follow from the axioms. Furthermore, we feel intuitively that the theorems are true, that is, that they tell us something about the physical structure of objects in space. Yet, as we now know, Euclidean geometry is not literally true of the physical world. In other words, if we take Taylor’s example of the Pythagorean theorem, the conclusion of this theorem is not true of real triangles in space (it is slightly off, because of the curvature of space).

Yet the Pythagorean theorem is true in two senses: 1. The theorem does follow from the axioms, and is therefore true in Euclid’s world, much as ‘elves have a characteristic smell, different from that of hobbits’ is true in Tolkien’s world; 2. The theorem applies to real-world triangles of manageable size with very close approximation. It’s therefore fine for building bridges.
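To make sense 2 concrete, here is the theorem with one worked instance (the numbers are my own illustration, not Taylor’s):

```latex
\[
  c^{2} = a^{2} + b^{2}, \qquad
  a = 3,\; b = 4 \;\Longrightarrow\; c = \sqrt{9 + 16} = 5 .
\]
% In physical space the equality holds only to extremely close approximation;
% the curvature corrections are immeasurably small at bridge-building scales.
```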

Now, we can assume that ‘Man acts’ is true, and derive from this certain propositions (which then help to define what we mean by that assertion). We can logically derive from these propositions some conclusions about the way humans will act. I don’t deny that we can do this, and that the results will sometimes be illuminating. For example, if we assume that business firms always do what they perceive as maximizing their financial returns, we can make predictions about the behavior of firms which will very often be true (and where not true may draw our attention to some special circumstance, which may be helpful to our analysis of what is going on in the behavior of firms). Furthermore, the assumption that business firms always do what they perceive as maximizing their financial returns is not arbitrary, since we know that competition favors the survival of firms which do better at maximizing their returns. So, I agree that there’s a lot of mileage in the view that important steps in economic reasoning may be made by applying a Pure Logic of Choice. And, of course, virtually all economists would agree with this, and many of them, never having met a Misesian, would wonder why I am saying anything so trite.
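Here is a minimal sketch of the kind of prediction this Pure Logic of Choice licenses; the demand and cost functions are invented for the example, and nothing hangs on their particular values:

```python
# Toy model of the assumption above: a firm always does what it perceives
# as maximizing its financial returns. Demand and cost are invented numbers.

def perceived_profit(q):
    price = 100 - 2 * q   # invented demand curve: price falls as output rises
    cost = 20 * q         # invented constant marginal cost
    return price * q - cost

# The prediction: the firm picks the output with the highest perceived profit.
best_q = max(range(51), key=perceived_profit)
print(best_q, perceived_profit(best_q))  # -> 20 800
```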

However, Mises and Rothbard maintain that all of economic theory is derived by deduction from axioms. In other words, all of economic theory is derived from self-evident axioms about human action, and not in the least from empirical observation!

Here we should note the historical fact that Rothbard was a huge influence on the libertarian movement. To my mind, as well as being an entirely lovable person, for I met him several times and had a number of long conversations with him, he was a wonderful pamphleteer and propagandist. He was no great shakes as an economist, and an abysmal thinker about economic epistemology. Despite his shortcomings, he was a huge improvement over the very poor philosophizing of Ayn Rand, which threatened to engulf libertarianism in the 1960s and 1970s. A lot of what Rand said was entirely correct but totally unoriginal with her, and this part, perhaps ninety-five percent of her output, was simply the common heritage of classical liberalism since Locke’s Second Treatise. Whenever Rand came up with an original idea, she was not only wrong but most of the time almost unbelievably sloppy in her reasoning. Rothbard was a scholar who knew quite a bit about the history of liberal thought. He had read and understood, not only Locke’s Second Treatise, the fountainhead of libertarianism, not only Mill and Spencer, but numerous other thinkers, some of them still under-examined, like Bastiat, Molinari, La Boétie, Lysander Spooner, and Franz Oppenheimer. Rothbard had his limitations but he would never blunder as badly as Rand, who was essentially a creature of Hollywood. Rothbard helped a lot of people transition out of Randism into something more defensible, something with more potential for serious elaboration.

Today this ideological background is less important. Rothbard’s following has largely evaporated. The fact that he quit the libertarian movement in the last few years of his life has something to do with it. Today the issue of ‘praxeology’ is hardly ever raised. Economists arguing for the free market simply don’t address it. It still attracts a few enthusiasts like Robert Taylor and animates some of the output of the Mises Institute. But, for example, if you take the work of a capable and effective popularizer of the economic case for the free market like John Lott, you don’t find any appeals to aprioristic reasoning. It’s all empirically based, this is what you expect, and it would be a distraction for anyone to suggest that it all follows from ‘Man acts’.

Mises was a Kantian. He believed that there are synthetic a priori propositions, and that the essential truths about purposive action were among these propositions. I think there are no synthetic a priori propositions. Like Hume and the logical empiricists, I think that what is analytic is a priori and what is a priori is analytic. But more narrowly, though I’m always prepared to be surprised, I just haven’t seen a convincing candidate for a synthetic a priori proposition.

If a statement tells you about the world, it’s an empirical claim and can’t be deduced from self-evident axioms. If something’s deduced from self-evident axioms, it doesn’t tell you anything about the world (except that part of the world which is the theory in question), though it may be useful if you can make the assumption that the world complies with the axioms. I am entirely prepared to accept that we can construct a body of praxeological theory from axioms and that this might include a great deal of economic theory. But in applying this theory to the world in order to draw conclusions about the world, we would be taking a step beyond the pure theory, and this would make our conclusions fallible. The empirical is always fallible in a way that deduction isn’t.

A point I didn’t bring out in my conversation with Robert Taylor is that even if we had a complete body of praxeological theory which we then set about applying to economic life, there would be the question of whether we had applied it accurately. This is more serious than you might think. Take the example of identifying whether something is money. (Jeff Hummel made this point and others like it back in the 1970s, at a time when we were all much more preoccupied with Mises-Rothbardism than anyone is today.)

Although there are strong tendencies for one good to emerge as the single monetary unit, these tendencies do not fully carry through. Today we know that bank deposits, coins and bills, gold and silver, and bitcoin, all function as money. Other entities, like commercial bills, could also be included. The central notion of money is that it is a good which you acquire because it will enable you to exchange for other goods. It follows that goods can have more or less moneyness. Diamonds have more moneyness than machine tools. Once a good has a lot of moneyness, it automatically tends to accumulate more moneyness, but in practice this is not going to culminate in just one good having all the moneyness.

The point of this for praxeological reasoning is that identifying and measuring what is actually functioning as money can be a tricky empirical exercise. So even with our complete praxeological theory, we would still have to do a lot of empirical work to explain, say, what happened to the US money supply in 2008. There would still be empirical issues about whether this or that asset was functioning as money.

However, I do not think we have, or are going to have that complete praxeological theory to start with. For example, praxeology holds that if someone prefers a to b and b to c, they will prefer a to c. This certainly looks convincing as a likely empirical generalization, but is it apodictically true? Obviously not. We might say that someone who prefers a to b and b to c but chooses c when confronted by a choice between a and c is somehow muddled. Yes, but people can be muddled. Even if we never observed a case of someone being muddled, we would still have to say that the very idea of someone being muddled is conceivable, and so the claim that no one is ever muddled would be an empirical claim. But it is false. People actually are sometimes muddled. Psychology experiments have documented that some kinds of muddle are actually quite prevalent. (I have elsewhere argued against the usual conclusion from this fact, the conclusion that people are often irrational. I maintain that people are never irrational—here Mises was right—though they often make mistakes.)
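The point that transitivity is checkable, and hence empirical, can be made concrete with a sketch; the observed choices here are hypothetical:

```python
from itertools import permutations

# Hypothetical observed choices: (x, y) means x was picked over y.
observed = {("a", "b"), ("b", "c"), ("c", "a")}  # the 'muddled' pattern

def is_transitive(choices, items=("a", "b", "c")):
    """Return False if some cycle x over y, y over z, z over x appears."""
    for x, y, z in permutations(items, 3):
        if (x, y) in choices and (y, z) in choices and (z, x) in choices:
            return False
    return True

print(is_transitive(observed))  # -> False: these choices are 'muddled'
```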

Exhibiting non-transitive preferences (preferring c to a) is just one example; there are many others, such as counting sunk costs. How does the praxeologist handle cases like these? He can accept them as allowed within his theory of human action, or he can outlaw them as contrary to the basic conception of action. One way to do the latter is to say that if someone preferred a to b and b to c, and was then observed to pick c over a, he must have changed his preferences. But then, if economics is to make empirical predictions about the world, it becomes useless, since any false prediction we make can be accommodated by saying that people’s preferences unaccountably changed during the real-life process we were analyzing. In order to keep economics capable of making falsifiable predictions, we therefore have to add the premiss (not a fundamental component of praxeology) that people’s preferences are stable. Now we have something empirically falsifiable, and therefore empirically useful, but we have introduced something not derivable from ‘Man acts’.

So what if we take the other route and say that we allow people to have non-transitive preferences, and generally to be muddled? Then we can’t get the most elementary market mechanisms started. We will be unable even to get to the downward-sloping demand curve. Not much, if anything at all, can be spun out from the action axiom if we allow that at any point people can be muddled.

If we turn some of the aprioristic claims into empirical generalizations, then we can make progress. We can say, for instance, as a first stab at it, that people are not muddled all the time, and when dealing with a group of people some of the muddles will cancel out, and if the consequences of the muddles become too painful people will pay more attention and reduce their muddles. We can also observe, as Becker did, that since people’s incomes are limited, the downward-sloping demand curve is automatically generated as highly probable for any given good (for example, it’s just not possible for someone to adopt the rule to buy more at higher prices for all the goods they buy; their limited income won’t let them do it). So even if individuals’ buying behavior were totally random (I admit that’s impossible for other reasons) we would still observe downward-sloping demand curves in the vast majority of cases, probably all. The downward-sloping demand curve, which we couldn’t derive by deduction from actions, starts to look pretty good as an empirical generalization. Is it a law? Yes, or at least it’s a putative law, just like a putative law in physics or chemistry, provided we specify the circumstances of its application stringently enough.
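Becker’s observation is easy to illustrate with a crude simulation (my sketch, with invented numbers): give every buyer a fixed income, let each one spend a random amount on the good, and the demand curve still slopes downward:

```python
import random

random.seed(0)  # reproducible run

def quantity_demanded(price, n_buyers=10_000, income=100.0):
    """Total units bought when each buyer spends a uniformly random
    share of a fixed income on the good; nobody maximizes anything."""
    total = 0
    for _ in range(n_buyers):
        spend = random.random() * income  # random choice of outlay
        total += int(spend / price)       # units affordable at this price
    return total

for price in (1, 2, 4, 8):
    print(price, quantity_demanded(price))
# Quantity falls as price rises: the budget constraint alone yields a
# downward-sloping demand curve.
```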

Much more could be said at this point, but to me one of the most interesting aspects of market behavior is that markets behave much more in accordance with textbook models than they ‘ought’ to. We know that real human beings have a tendency to count sunk costs, yet industrial investment goes on as if they didn’t. A study was done of drivers’ responses to different and changing gas (for Brits: petrol) prices at different stations. The drivers’ behavior was observed, and they were questioned as to what they knew about the prices at the various stations. It was found that they knew very little about these prices, yet their actual behavior was quite close to what we would expect if they had had perfect knowledge of the prices at all the local stations. Of course, we can come up with explanations for these phenomena, and these explanations tell us a lot about the way markets work. But we have come a long way from the Misesian attempt to move from ‘Man acts’ by strict deduction to ‘Consumers buy from the cheapest seller’.

Here are my answers to Lee’s five questions:

1. I don’t believe any significant number of economists will dispute that the minimum wage will increase unemployment, but they may still favor a minimum wage. Remember that a minimum wage will both increase unemployment and increase the wages of those workers who keep their jobs or get new jobs. Some workers do benefit directly from a minimum wage. I have actually seen discussions where economists argue that if the increase in the wages of workers who keep their jobs exceeds the total wages lost by those who are made unemployed, this makes the case for the minimum wage! An economist (a leftist ideologue who happens to be an economist) may also take the view that it’s better for a worker to be on welfare than to be hired in a very low-wage job.

Common political views of the minimum wage are illustrated in the recent report of the non-partisan Congressional Budget Office on Obama’s proposed minimum wage increase. The CBO said the proposed increase would rescue 900,000 people from poverty, and would also cause an increase in unemployment somewhere between “slight” and one million, settling for 500,000 as a probable figure. This CBO analysis was taken by the vast majority of supporters of the increase as complete vindication of the measure: obviously 900,000 is bigger than 500,000! (The White House economic advisor objected and said there would be no increase in unemployment! But that’s the kind of nonsense you expect from someone in that position. White House strategists know that if there is ever an official endorsement of the claim that any of the administration’s measures will result in any unemployment whatsoever, this meme could mutate into something fearsome that would come and eat them all up.)

To me the biggest tragedy of the minimum wage is that it takes the least productive and therefore most vulnerable workers and ensures that they will never get on-the-job training which would be one of their likely roads to improvement. Along with welfare and the public schools, it systematically creates a vast class of unproductive criminals. It also ensures that the snow doesn’t get cleared in the Loop every winter, that no one buses tables in diners any more, and that no one cleans your windshield or your tires when you stop for gas.

2. They didn’t necessarily react this way because they had been convinced by praxeological reasoning. It’s characteristic of all sciences that most scientists’ initial response to a challenge is to assume that the currently prevailing view is correct and a new challenge is pretty sure to be flawed (and a lot of the time they’re right; that’s why it’s the currently prevailing view). And, after all, you can give a loose, intuitive explanation of why a minimum wage will increase unemployment, and anyone can see the point of this, without accepting that it’s a matter of apodictic certainty. Remember, it’s not just the minimum wage. This is just one example of a price floor. Price floors have been imposed on many different goods. I doubt that anyone would object to the statement that if you announce a minimum price for cement, above the market price, and enforce it, less cement will be purchased.
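The cement case is easy to put into numbers. Here is a minimal sketch with made-up linear supply and demand curves (illustration only, not data from anywhere): a binding floor above the market-clearing price cuts the quantity actually traded down to what buyers will take.

```python
# Price floor illustration with invented linear curves (not real data).
# The free market clears where demand equals supply; with a binding floor,
# the quantity traded is the short side of the market (here, demand).

def demand(p): return 100 - 2 * p   # quantity buyers take at price p
def supply(p): return 10 + p        # quantity sellers offer at price p

p_star = 30.0                        # solves 100 - 2p = 10 + p
q_star = demand(p_star)              # 40 units trade freely

floor = 40.0                         # enforced minimum price, above p_star
q_traded = min(demand(floor), supply(floor))  # only 20 units actually trade

print(f"free market: p = {p_star:.0f}, q = {q_star:.0f}")
print(f"with floor:  p = {floor:.0f}, q = {q_traded:.0f}, "
      f"unsold surplus = {supply(floor) - q_traded:.0f}")
```

Substitute labor for cement and the unsold surplus is unemployment.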

Some further comments on empirical challenges to the theory that minimum wage increases raise unemployment.

(1) From what I’ve seen, the economists who challenge it usually say something like ‘Modest minimum wage increases will rescue people from poverty and will cause negligible increases in unemployment’. The implication is that negligible increases don’t matter. But the consequences for those who are hit are unusually severe and long lasting. A couple of thousand additional teenagers thrust into a life of drug dealing and gang banging is not negligible from where I’m sitting. For a certain percentage of them it will actually be a death sentence. We’re talking about marginal-quality workers, people whom it’s barely worth hiring.

(2) Elasticity is always greater given more time, so you would need to look at a long enough time period to see the full effects.

(3) The rise in unemployment might not show up in the statistics for total employment. Suppose McDonald’s hires fewer blacks from low-income backgrounds and replaces them with more middle-class students. Why would this happen? Because the middle-class students, many of whom can take the job or leave it, are induced to apply by the higher wage, and they on average possess positive attributes valued by the franchisee, who is, after all, paying more; the franchisee will be able to reject blacks from devastated neighborhoods he might have formerly taken a chance on. The middle-class students will on average be distinctly better at the job; they will find it less of a struggle to show up punctually and to be gracious to customers. If you’re paying $10 an hour, and you can get someone worth $10 an hour, you’ll go for it, even if you would have preferred to do what you’re no longer permitted to do: pay $8 an hour to someone worth $8 an hour.

3. Yes, just as physics is an empirical science and it’s reasonable to speak of laws of physics.

4. I don’t suppose they could ever be sure, but it doesn’t really arise since they usually do find clear enough results in this area. Most studies of the minimum wage look at things like differences between neighboring states with different state minimum wages. Actually, it’s remarkable to me that they can find anything at all, given how small these differences are. You might expect the outcomes to be swamped by market noise, but apparently they aren’t.
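For what it’s worth, the logic of those neighboring-state comparisons (difference-in-differences, as the trade calls it) fits in a few lines. The numbers below are entirely invented for illustration, not drawn from any real study: the neighboring state absorbs whatever market noise is common to both, and what is left over is attributed to the wage difference.

```python
# Sketch of a difference-in-differences comparison with invented numbers.
# State A raises its minimum wage between the two periods; neighboring
# state B does not, and serves as the control for common background noise.

employment = {  # teen employment rate (%), made up for illustration
    ("A", "before"): 40.0, ("A", "after"): 38.5,
    ("B", "before"): 41.0, ("B", "after"): 40.8,
}

change_in_A = employment[("A", "after")] - employment[("A", "before")]  # -1.5
change_in_B = employment[("B", "after")] - employment[("B", "before")]  # -0.2

estimated_effect = change_in_A - change_in_B  # -1.3 points attributed to the raise
print(f"estimated effect of the increase: {estimated_effect:+.1f} points")
```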

In cases where it’s not feasible to observe the effects, you would exercise your ingenuity to find more roundabout ways of testing the theory.

5. No, I don’t think so. We must distinguish between gut instincts about how the economy works and apodictic reasoning. People working in all the sciences are guided in what to look for by intuitive models which lack any formal standing. These intuitive models suggest; they don’t validate.



If the Dull Substance of My Flesh Were Thought

Philosophy Posted on Thu, March 06, 2014 22:33:58

When I saw the Clint Eastwood movie Firefox in 1982, the thing that most stuck in my mind was that it depicted Soviet society in an unflattering light. I had never seen this in a Hollywood movie before. What had gone wrong? Hadn’t the director got the memo from the studio bosses? Shouldn’t he be blacklisted or something?

Just as we knew back then that in Hollywood’s world every Catholic priest would be a kindly figure with absolutely no eye for a nicely rounded bottom, who never voiced any of the controversial bits of Catholic dogma, but could be counted on to say to a wayward soul, about two-thirds of the way through the story, “I know there is some good in you,” so we knew that the Soviet Union always had to be shown as industrious, peace-loving, and quaintly mysterious, or at worst suspicious and irritable, like a temperamental bear which had been grievously mistreated but would now surely respond gratefully to patience and kindness.

Any trouble with the Russians was due to sinister forces dedicated to provoking misunderstanding—usually right-wing Americans, whose minds were too warped to understand that the USSR was essentially all about peace and love, but occasionally the culprits were international criminals—think of all the times in the Bond movies when SPECTRE was revealed to be so much more evil than SMERSH, and never more evil than when it was sowing mistrust between East and West.

Truth be told, my attention had been caught by a very minor detail of the movie itself. The glimpse into Soviet society was a small part of the film and only occupied a few minutes of screen time. Firefox is an adventure story in which Clint’s character (a traumatized former fighter pilot who’s fluent in Russian) steals the only existing model of an advanced jet fighter from its Russian base and flies it out of the Soviet Union! But, oh no, scratch that, it’s not the only model after all! There turns out to be another prototype, so the top Russian military test pilot uses this second aircraft to come after Clint, and following a supersonic chase there’s a supersonic dogfight, in which Clint is blown to pieces. . . . No, of course not, just making sure you’re awake.

To me at the time, the second most striking thing about the movie was that the jet fighters’ weapon systems were thought-controlled. And there’s that crucial point in the dogfight where Clint has to remind himself to think in Russian before he can properly fire a missile at his antagonist.

Thought-controlled weapons? Was this yet another of Hollywood’s tall tales? Or was it something that could ever become reality? And how would that work, exactly? I thought about this at the time, without knowing much about the brain. After all, you move your arm by thinking, don’t you? (Ha! We can imagine that muddlehead Wittgenstein going on for several pages about this. “I do not move my arm by thinking, . . . I just move my arm.” Well, danke schön, Lu, that’s very deep.)

Thirty-two years on, I know a little bit more about the brain, and a little bit more about brain-related technology, and I know that it’s now safe to assume the US military does have thought-controlled weapons if it wants them. We do have brain-computer interfacing (BCI), so that, for instance, someone with a wired-up brain can move a cursor on a computer screen (just by thinking, so to speak) or steer his wheelchair, just by thinking, and we have thought-controlled prosthetics, movable just by thought alone, and artificial thought control of real limbs, so that people with completely paralyzed bodies can move their limbs, just by thinking. (Clear out, Ludwig. You’re too much underfoot. You too, Gilbert. Out.)

So, thought-controlled appliances, including weapons, are now just a yawn. More exciting is the prospect of transmitting thoughts from one brain to another by linking up the brains, without any external messaging.

In August 2013 it was announced that, in an experiment conducted at the University of Washington by Rajesh Rao, one person, his brain linked to another person’s, was able, by thinking about moving his finger (but not actually moving it), to make the other person’s finger move. Essentially the same thing had been done earlier with the brains of two rats, and then with the brain of a human and the brain of a rat. In Rao’s experiment, the sender was playing a videogame; the receiver’s moving finger fired a rocket in the game, though only the sender and not the receiver could see the game on a screen.

I don’t know if Rao’s experiment was reported in the UK press, and if so, how. But over here, and online, all the reports I saw described it as a wholly unprecedented breakthrough and failed to mention that four years earlier something apparently more radical had been achieved at the University of Southampton. The Southampton team led by Christopher James transmitted a thought from one person’s brain to another. Or at least, that’s one way of describing it. Though the recipient was not aware of the ‘thought’, the information could be retrieved electronically from his brain.

To get one brain, by having a thought about something, to cause a motor response in another brain, ought to be a lot easier than getting one brain, by having a thought about something, to cause a thought about the same thing to arise in another brain. However, the Southampton brain-to-brain transmittal was done by using a binary code: the sender imagined moving his left arm for zero and his right arm for one. It was simply a matter of transmitting which of two known options had been chosen. As a case of thought transmission this is kind of like cheating. A more advanced variant is having someone look at some alphabetical letters in succession, and simply sending a signal when the ‘right’ one has been hit upon, the accumulation of such letters adding up to the desired message. This is a neat trick, but something far more momentous is now being developed and will soon be marketed. Specific thoughts, sensations, and emotions will be conveyed from one brain to another by electronic transmission.
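To see how little bandwidth such a scheme carries, here is a sketch of the binary encoding as just described. The coding details are my own illustration, not the Southampton team’s actual protocol: a left-arm imagining stands for 0, a right-arm imagining for 1, and each character of text costs eight imagined movements.

```python
# Illustrative encoding for the left-arm-0 / right-arm-1 scheme described
# above (my own sketch, not the actual experimental protocol).

def to_imagined_movements(message):
    """The sequence of imagined movements the sender must produce."""
    bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))
    return ["left" if b == "0" else "right" for b in bits]

def from_imagined_movements(movements):
    """What the receiving end reconstructs from the detected bits."""
    bits = "".join("0" if m == "left" else "1" for m in movements)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

signals = to_imagined_movements("hi")
print(len(signals), "imagined movements")   # 16 -- eight per character
print(from_imagined_movements(signals))     # hi
```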

Imagine a football team or a military patrol in which each member of the team is simultaneously aware of the experiences of each of the other members, just as instantly as he’s aware of events affecting the various parts of his own body. How awesome is that? Even more impressively, imagine that all the world’s theoretical physicists working on superstring theory are connected by equipment attached to their skulls, so that whenever one of them has a promising idea, this idea is immediately picked up by all the others. This triggers a new thought in the mind of one of the others, which is also picked up by all of them, and so on.

Dr James’s and Dr Rao’s experiments were done without penetrating the skull. Electrodes were attached to the outside of the skulls of sender and receiver. The sender’s brain activity was picked up by EEG (electro-encephalography) and what was picked up was then transmitted to the receiver’s brain by TMS (transcranial magnetic stimulation). So these are all things attached to the outside of the skull which receive and send impulses from and into the skull. To achieve more precise and detailed transmission of thoughts and feelings, we have to physically get inside the skull, though we’re helped by the fact that thoughts occur mostly at or near the brain’s surface.

Every time I think of the currently favored way to accomplish this, it strikes me as so clunky it’s bound to be superseded before the whole show gets on the road, but so far it hasn’t been. The plan is, first, to genetically modify all of a person’s brain cells (neurons) to make them sensitive to light. This is feasible and has been done with animals—the research area is called optogenetics. Then, we insert a whole lot of nanowires—really very thin tubes—into the person’s body, with many different wires going through blood vessels into many different regions (groups of neurons) in the brain. There’s plenty of room; the wires are very thin. In this way, it’s possible to send light signals to quite specific areas in the brain, associated with particular kinds of thoughts. With this set-up, it will be possible for A to think of something, and as a direct result, almost immediately, B will think of that same thing. The brains of both A and B are connected to computers, and these two computers will communicate by telephone. This is called brain-brain interfacing (BBI). BBI is an application and extension of BCI and will soon be as much talked about as smart phones were ten years ago.

You may think that people will be reluctant to have every cell of their brains genetically modified, and then have thousands of yards of nanowires introduced into their bodies, reaching into every part of their brains. However, the initial impetus for widespread application of BCI is to improve people’s lives by correcting severe disorders, especially paralysis. The nanowires will also be useful for medical diagnosis. Once the basic apparatus is in place, it will be a platform for further enhancements, such as direct brain connection to the Internet.

Furthermore, other types of inside-the-skull apparatus for different enhancements are available, one of the simplest being cochlear implants. Cochlear implants give incurably deaf people the ability to hear again. It takes practice and hard work for patients to hear with the implants, which operate according to a computerized system entirely different from natural hearing, but they do usually learn to hear quite well, and these implants are already routine. They have nothing to do with thinking as a brain process, but they do help to prepare people for the assumption that it’s okay to have your head wired.

Many other enhancements are feasible; it would be child’s play to give someone x-ray vision (though most likely by microwaves rather than x-rays), with all the necessary equipment inside their skull. As people get more used to having their skulls wired in various ways, one more way will not seem too much of a leap. Pretty soon, this kind of wiring will make possible brain connectivity to videogames, and many of the millions of gaming fanatics will not want to be left out. I give it eighteen years before the first court case (probably rape) in which a defendant is charged with coercively influencing an alleged victim by using BBI, and twenty-five years before the first case involving stealing information by hacking into someone’s brain.

Since the wave of BBI is about to hit us, we can anticipate some of the misconceptions which will abound. Already, the two participants in Rao’s experiment (Rao himself and a guy called Andrea Stocco) have been quoted as referring to it as a “Vulcan Mind Meld.” All joking aside, let’s try to keep misunderstandings to a minimum by remembering the following essential points:

1. Inevitably, BBI is described as ‘direct brain-to-brain communication’. (In the trade it’s called B2B and unfortunately this label looks as if it will stick, despite the fact that B2B already has other established usages, most notably, in the world of marketing, ‘business to business’.) But this description can easily be misleading.

All communication between humans is brain-to-brain. When you read the words “If the dull substance of my flesh were thought,” William Shakespeare’s brain (or the brain of whoever really wrote it) has communicated with your brain. And as for communication being direct, with BBI there’s a lot of apparatus inside and outside the skulls of the communicating parties, and this apparatus communicates through computers, connected by a telephone system (usually the Internet). This is much less direct than just talking and listening! But it seems more direct because the communicator and the receiver only have to think. So at most we may say that it feels direct—it’s subjectively direct.

2. You can be certain that the media will love to call BBI ‘telepathy’, but there is not and never can be any such thing as telepathy. In BBI, a thought in one person’s mind causes (by a complicated technological pathway) a thought with similar content in another person’s mind. This has many of the properties of communication by speech or writing, and in fact to achieve the communication in any precise way, words will normally have to be among the material communicated.

3. The communication will not be guaranteed accurate. I think something and this causes you to think something similar, and the subjective immediacy of the process may encourage the mistake of supposing that we can trust this communication to be more honest or accurate than, say, reading a letter. But that’s all wrong. Misinterpretation (seeming to ‘receive’ a thought which is not what the sender was thinking at all) will be commonplace. It may even turn out that the conscious act of formulating something in a letter causes us to take certain pains to make the message more accurate than it could ever be in a more spontaneous thought-to-thought process. Honesty does not come naturally; it takes a special effort, as I think David McDonagh was the first to point out. Tricks of deception will no doubt require different skills in BBI than in ordinary speech, but deception, intentional and unintentional, will keep cropping up.

The third thing that struck me about Firefox was the irritating martial music of the soundtrack. Was the composer hired by the studio to punish the director and scriptwriter for their thoughtcrime of anti-communism? Hmm, I wonder . . .



The Fallacy of Sheer Bulk

Economics Posted on Mon, February 24, 2014 19:03:31

I’ve been having a lot of fun reading the works of the big-government anarchist David Graeber. Superficially he embodies everything I most loathe, yet actually I like his writing. One reason is that he’s completely unpretentious. He lays out what he believes with perfect clarity and no mystifying tricks. This is very unusual for someone on today’s extreme left, especially when that someone is a professor of social science with a considerable academic reputation. So, if for some reason you’ve been swallowing some toxic swill like Deleuze and Guattari, to cleanse the palate, try a swig of Graeber.

One thing that makes Graeber somewhat unlike most of today’s left and very much like the left of the 1930s is his powerful conviction that capitalism cannot possibly last much longer. He suggests different reasons for this belief on different occasions, but here I just want to look at a single sentence:

“There is good reason to believe that, in a generation or so, capitalism will no longer exist: for the simple reason that (as many have pointed out) it’s impossible to maintain an engine of perpetual growth forever on a finite planet” (page 31 in Revolutions in Reverse: Essays on Politics, Violence, Art, and Imagination, not dated but probably 2011).

Here I’m not going to look at the intended meaning of this sentence in the context of Graeber’s overall political outlook (I’ll do that sometime) but I’m just going to comment on its figurative and rhetorical aspect as a typical cliché of today’s left.

Walking in Chicago’s Loop a couple of weeks ago, I was stopped by a pleasant young woman who asked me to sign something for Greenpeace. I said, apologetically, that she was wasting her time on me because I am opposed to Greenpeace. She donned a maternal frown of concern and asked why, and I responded: “Because I’m evil” (though with the twinkling smile of a rough diamond, and if I’d had a mustache I would have tweaked it roguishly; the English accent always counts for something too; in England my voice betrays me as a prole, but over here the natives can’t tell me from Hugh Grant).

So, I had to explain about how we need economic growth to industrialize the poorer half of the world so that millions of third-world babies can be rescued from heartrending deaths and how Greenpeace is among the enemies of economic growth and therefore of human happiness. And I threw in a few of my best lines, like the way wind farms mangle and rip apart the poor little bodies of millions of innocent birds and bats, and I can’t understand how anyone can defend such a horrible practice. (Nor could she, for a few seconds.) I was heavily bundled up against Chicago’s global warming (10 degrees below zero Fahrenheit), and she probably couldn’t see that I was nearly four times her age. So this was working, trust me, it really was.

Then she asked, no doubt rhetorically, whether we could go on indefinitely expanding aggregate real incomes, whether we could do this, as she put it, infinitely. I was trapped by my damnable candor, and in all honesty I had to say: “Yes indefinitely, yes infinitely.” So now I was the escapee from the psych ward and that was that. Another lamb lost to the wilderness.

Back to Graeber’s sentence. If we look at this sentence, we see that it relies on the metaphor of physical quantities. Perpetual growth forever (strictly redundant, but I think the redundancy works well as a bit of prose) on a finite planet. We immediately think of something physically expanding forever within a fixed perimeter. Obviously, this is impossible, at least in Euclid’s world, and Euclid has to be good enough for this, I think.

The metaphor hides a simple mistake. Economic growth—growth of incomes—need not be growth of physical mass. For example, my iPhone is much smaller than an old-fashioned telephone, and even smaller than a telephone-plus-camera-plus-music-player-plus-computer. It is less massive yet more valuable.

A Ferrari 458 Italia sports car weighs 3,274 lbs. A Ford F-150 pickup truck weighs 4,013 lbs. So the mass of the Ford is more than a fifth greater than the mass of the Ferrari. But the Ferrari will cost you around $250,000 while you can pick up the pickup for around $25,000. The Ferrari has ten times the value of the more massive Ford.
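Put as value per unit of mass (simple arithmetic on the figures just quoted), the point is even starker:

$$\frac{\$250{,}000}{3{,}274\ \text{lb}} \approx \$76\ \text{per lb} \qquad \text{versus} \qquad \frac{\$25{,}000}{4{,}013\ \text{lb}} \approx \$6.20\ \text{per lb}$$

Pound for pound, the Ferrari carries about twelve times the value of the Ford.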

Since something smaller can have more value than something bigger, economic growth does not always mean accumulating more and more physical stuff. Growth may sometimes coincide with reducing the amount of physical stuff. A CD is less massive than a vinyl LP, and any number of downloadable tracks account for no mass at all in the hands of the listener. Yet the progression from LP to CD to online song is an example of economic growth. As people move from books made of paper to e-books, there is a reduction in the assembly, storage, and transportation of physical stuff. Yet this is an example of economic growth.

It’s not a novel phenomenon. Observers of the early factory system noted that, compared with domestic production, it uses less of certain kinds of equipment and supplies. Take a hundred weavers out of their homes and put them side by side in a factory, and they don’t need as much backup equipment or stocks of yarn or even auxiliary labor as they all did in aggregate when they were working separately. This isn’t the whole story, of course, but it’s one reason why the factories could be more economical than the scattered domestic system. In this case, economic growth means getting the same output with less equipment.

Economic growth is not accumulation of more and more physical stuff. It primarily consists of the re-arrangement of physical stuff in new ways. Matter-energy can’t be created or destroyed, but only re-arranged. Human activity can convert matter into energy, but the amount of matter lost in this way is minute. What producers are doing under capitalism is not to add bulk but to re-arrange the existing bulk in ways more suited to human gratification. And if we’re worried about sheer bulk, several tons of cosmic dust land on the Earth every day (or even every hour, as most of the relevant scientists say), so the sheer bulk of the planet is always growing, do what we will.

At times, a reduction in the amount of physical stuff being used in production can be a form of economic growth. And it seems reasonable that these times would get more frequent once a lot of basic infrastructure has been built.

Economic growth also means more free time, more leisure. People have far more free time than they did a hundred years ago. People choose to take some of the gains of economic growth in shorter working hours and some in more stuff (where ‘more stuff’ means stuff affording greater satisfaction of wants). The proportions are determined by people themselves, not by something else such as ‘the system’. In my judgment, income statistics would be more accurate if they included the amounts of free time, but in practice this doesn’t matter much as people always choose to take most of the gains of economic growth in goods other than leisure, rather than sharply reducing their working hours. We just have to bear in mind that the gain in human well-being from economic growth measured by national income statistics is usually understated: we have to add something for the bonus of shorter working hours.

Is it possible to maintain perpetual growth forever on a finite planet? Well, it can’t go on forever, because as the sun very slowly gets hotter, 3.75 billion years from now the oceans will boil away and we’ll be dead or gone. Even before that, since our galaxy, the Milky Way, is colliding with another galaxy, Andromeda, death by cosmic accident seems likely, not to mention more ordinary catastrophes like an asteroid hit, which can easily happen at any time. (Our only hope there would be SDI; thank you, Ronald Reagan.) And the next glaciation presents a problem, though per capita growth might go on, close to the equator, if we can reduce population by 96 percent.

But with those qualifications, yes, it’s perfectly possible to maintain perpetual growth forever on a finite planet. Just as there will always be new songs to be sung, new stories to be told, new discoveries, new achievements, new debates, new adventures, new worlds of imagination and experience, so there will always be new ways to re-arrange the material structure of capital to make it ever more adaptable to human gratification and therefore human happiness. And that is all that we mean by economic growth. Capitalism could last forever and economic growth could go on, somewhere, until the heat death of the universe, with everyone getting richer indefinitely. There is no contradiction, and what “many have pointed out” is, as so very often, quite wrong.

Actually, I think that we’ll probably evolve beyond capitalism within the free market economy. But that’s a story for another day.



Understanding Belief Change

Environment Posted on Sat, February 22, 2014 00:34:00

Understanding Belief Change (Thoughts on Reading AR5)

The global warming ideological movement is now in its final throes, and people like me, who love to observe and analyze belief systems, and especially their life cycles from birth to death, have a ringside seat to witness its final disappearance.

It will take five to ten years for this to be generally admitted, but the global warming ideology is now hemorrhaging beyond any hope of recovery and we can watch its entertaining collapse with eager anticipation and high amusement. Gloating would be childish but some degree of serene gratification is entirely appropriate.

Given that the fate of global warming is now sealed, the interesting question is how its obituary will be written. Ten years from now, the present adherents of catastrophic global warming will all be voicing what is now called the skeptical or realist position. How will they get there?

It seems unlikely that they will say: ‘We were seriously wrong. Sorry!’ A few of them, no doubt, but not the majority.

It also seems unlikely that they will do what they did with acid rain: just suddenly stop blathering about it. For years they were screaming at us that acid rain was destroying the world’s forests, which would bring about ecological catastrophe. Scarcely a day went by without some new media reference to the horrors of acid rain. Then it all stopped, and we haven’t heard a peep about acid rain for many a long year.

The difference is that global warming is, culturally speaking, a much bigger deal than acid rain. More noise has been made about it, more politicians have staked their reputations on it, numerous best-selling books have been written about it and countless international conferences have been generously tax-funded (to the tune of trillions) to talk about it.

If they suddenly stopped mentioning global warming without explanation, people would notice. But aside from that, the folks who promote the global warming belief-system are not conscious hoaxers. They really do believe in it, so they’re going to have to find a way to explain to themselves how and why they have abandoned that belief and replaced it with the beliefs they now abhor: global warming is nothing to worry about, and more carbon dioxide in the atmosphere is not a bad thing at all. The fact that they really do believe in catastrophism explains why they cannot take the tent down in a disciplined and co-ordinated manner. Since their disillusionment will be uneven, they will start turning on each other, snarling viciously, the way they now turn on their skeptical scientific critics.

Of course, memories are highly fallible, and people are often capable of forgetting what they used to believe in. So, I confidently expect some people to deny that they ever believed in catastrophic global warming. Just think of it! Ten years from now, some people are going to be denying that anyone ever believed what they now loudly proclaim they believe!

Naturally, there will be some definite recanters, people with unusual moral courage or those approaching retirement who calculate they can no longer be injured by the militant global warming ideologues. These will probably be few, but definitely expect to see some. Fritz Vahrenholt is one of these. There will soon be several more, and there will be no converts in the opposite direction.

The really fascinating thing is how the adjustment process will take place. What, specifically, will these global warming believers say from day to day and week to week, to account for the uncomfortable changes they are undergoing?

There are essentially five likely techniques:

1. Abandoning their specific claims while ever more loudly declaiming the general summary of these claims. (Yes, polar bear populations are growing, Arctic and Antarctic ice are both increasing, glaciers are in great shape, extreme weather events are at an all-time low, it keeps getting colder, and so on and on and on, but still, global warming is building up and catastrophe is just around the corner.) This can’t go on forever. Even the journalists (always the last people to notice anything) will perceive that it is all in the believers’ heads; nothing remarkable is actually happening out there in the climate.

We can see this process in the IPCC’s Fifth Assessment Report (AR5, Part 1), where the body of the report retreats in several key places from the Fourth Assessment (AR4), while the concluding summary for policymakers is even more apocalyptic in its wording than the conclusion of AR4, Part 1. Six or seven years from now we’ll be due for a Sixth Assessment, and that will present the IPCC with a painful dilemma (the crisis will come sooner, because it takes several years to prepare these reports, and there are always rumblings a couple of years before the report is finalized).

If the body of the report again records retreat after retreat while the conclusions for policymakers become yet more extreme and strident, this will be just too embarrassing a joke. They can’t do that again, so what will they do? I imagine they will most likely postpone the report for a while, and then come out and actually admit they have had to reduce the estimated risk of catastrophe. Perhaps they will produce two reports, allowing a minority report for the first time. Perhaps AR5 is already the last, and AR6 will just never get off the ground. Whatever they do, those of us who fondly appreciate la comédie humaine are in for a real treat.

2. Surreptitious retreat, admitting it’s not as bad as they once said, but insisting it is still very, very bad. This can later be stealthily modulated into ‘not so very bad after all’.

3. You can’t be too careful. Yes, the whole thing was overblown, but it’s better to err on the side of caution. And who knows what obscure undetectable demons might still be lurking somewhere in the climate?

4. Making little adjustments here and there, without affecting the overall results. This is a bit like someone who has embezzled a lot of money from a company and still has a highly-placed job in the Accounts department. He tries to move a few thousand dollars around every month, and keeps on improvising new stories to conceal the fact that a big sum has gone missing. One day, independent auditors walk through the door, and then the game’s up!

5. Finally, this is the one that I find most fascinating and most amusing. I think they will adopt the practice of announcing amazing new discoveries, which are in fact exactly what the skeptics have been saying for several decades. We have already seen some hints that this ploy is to become standard.

For reasons familiar to those who follow climate science, these amazing new discoveries will mostly be about the effects of aerosols and clouds.

Aerosols are fine particles suspended in the atmosphere; they exist naturally and are also generated by industrial activity. To explain why their climate predictions always turn out to be false, and always erroneous in the same direction, the IPCC folks have been assuming that newly generated aerosols exert a cooling influence, difficult to quantify but always just enough to explain away the difference between the catastrophic warming effects of carbon dioxide (which never actually happen, remember) and the actual observations. In simulating climate with models, they can add in an aerosol cooling, so that the current lack of warming can be rendered compatible with a hypothetical warming that is never actually observed.

The idea is that Chinese industrialization is increasing the number of aerosols or changing their character so that there is greater cooling from aerosols than in the past. And so, industrialization, which brings catastrophe through carbon dioxide warming, just coincidentally rescues us from catastrophic warming because of the cooling effect of aerosols. This sounds like a stroke of luck for all of us, but wait, the IPCC’s models show that the cooling effect of aerosols eventually won’t be able to counteract the warming, so in this way we can reconcile the two propositions: a. We’re doomed (unless we dismantle modern industry), and b. The data keep loudly insisting that nothing remarkable is happening.

Unlike similar ad hoc expedients in the realm of theology, aerosols can be observed and analyzed. So we can expect to see the results of research into the effects of aerosols, and these results will of course amount to the conclusion that the cooling effect has been over-estimated in the models. If that’s so, then the counterfactual warming, or what warming would have been without the added aerosols, would be less, and thus the failure of nature to comply with predictions of increased warming would be even bigger, casting greater doubt on the assumptions in the models which yielded those predictions.
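The accounting behind this argument can be made explicit with a toy example (the numbers are invented for illustration and come from no report or study). Write the observed trend as greenhouse warming offset by aerosol cooling:

$$\Delta T_{\text{obs}} = \Delta T_{\text{GHG}} - \Delta T_{\text{aer}}$$

If observation gives $\Delta T_{\text{obs}} = 0.1$ °C per decade and the models assume aerosol cooling of $0.3$, they can claim an underlying greenhouse warming of $0.4$. Revise the aerosol cooling down to $0.1$, and the underlying warming consistent with observation is only $0.2$; a model built to deliver $0.4$ now overshoots by twice as much.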

Climate realists have always argued that the net effect of clouds is probably to act as a negative feedback on warming: as the troposphere (the part of the atmosphere nearest the surface of the Earth) warms, this increases humidity, which increases clouds, which reflect more heat back into space. Thus, the warming is limited by a natural thermostat. (Clouds are very complicated, and do all sorts of things, and there are other feedbacks which no one pretends to have yet understood, but the skeptical argument is that their net effect is probably to moderate, not amplify, changes in global temperature.)

The catastrophist doctrine absolutely relies on the net effect of such feedbacks being to amplify the natural greenhouse effect by a factor of at least 3; that’s how they get those scary warming scenarios. Where did the idea of this huge amplification come from? Certainly not from any observations. Satellite measurements confirm the obvious: that when the troposphere warms, more heat is emitted into space.
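The arithmetic of that amplification is usually written with the standard textbook feedback relation (my gloss, not anything stated in the assessment reports):

$$\Delta T = \frac{\Delta T_0}{1 - f}$$

where $\Delta T_0$ is the no-feedback warming from a doubling of carbon dioxide (commonly put at roughly 1.1 to 1.2 °C) and $f$ is the net feedback factor. An amplification of 3 requires $f \approx 2/3$; the realist position of net negative feedback means $f < 0$, so that $\Delta T < \Delta T_0$.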

Now again, clouds and other manifestations of water vapor are not like the Holy Spirit. They can actually be investigated and measured. Over the next few years we can expect to see studies of clouds and water vapor which keep finding more and more corroboration for the skeptical arguments for negative net feedback. In other words, the climate has low sensitivity to minute increments of carbon dioxide, just as common sense suggests.

We will see the results of one study after another, of aerosols, clouds, and other matters, which conform to the realist arguments and contradict the assumptions underlying catastrophic global warming. Naturally, these will often be published along with hot-tempered protestations that they do not in any way, shape, or form challenge the unassailable truth of catastrophic global warming. But that doesn’t matter, or rather it mainly matters as part of an ephemeral process of adjustment in beliefs. Ignore these obeisances to the sacred tenets of the belief system, look at the actual conclusions of the studies, and hold your breath for the entire delusional structure to come crashing down.

What these future studies will do is to legitimate a narrative less upsetting than ‘We have made a horrible and costly mistake’. That narrative is ‘We didn’t know about these surprising facts which go against the global warming belief-system, but now, thanks to astounding new findings by intrepid researchers, we do know about them. The fact that we now have to admit that the denialist psychotics were right all along is sad and regrettable, but no more than an ironic coincidence.’

Who will really be taken in by this kind of drivel? I can’t say. People’s credulity is more baffling than climate feedbacks.

Belief change is real. It affects all of us. It will govern our children’s future. We should all be concerned about the fateful impact of belief change. But let not your heart be troubled! Belief systems have always changed and always will. Keep watching. Even amateur observers can monitor and document the signs of belief change everywhere.

And believe it or not, the results of belief change are not catastrophic; they are mostly benign, and (like the results of natural climate change) absolutely nothing to worry about.


