Crimes Against Logic Page 12
Experimenting can be dangerous. You might guess wrong and end up losing all your customers or giving away unit profits without gaining volume, which is why companies often conduct market research before making any price changes. Alas, such research often gives misleading results, for a simple reason: people lie. Specifically, they claim to be more price sensitive than they really are.
I recently commissioned a survey of the managers of small businesses in Holland regarding the size of discount required to make them change banks. “How likely would you be to switch to a bank offering a rate of interest on your overdraft 0.25 percent lower than your current bank? Certain, very likely, maybe, very unlikely, certainly not? What about 0.5 percent . . .”
If you took the results of the survey at face value, even the slightest discount would have most Dutch small business managers switching banks in an instant. But small discounts are available from some Dutch banks, who do not in fact experience long lines of small businesspeople wanting to open accounts.
The good reason managers don’t switch banks for small discounts is that switching banks costs more in time and bother than the discount is worth. On a $20,000 overdraft, 0.25 percent is only $50 a year, and changing banks is a big hassle.
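The arithmetic behind that figure is simple enough to lay out (the $20,000 overdraft and 0.25 percent discount are the numbers quoted above):

```python
# Annual value of a 0.25 percentage-point discount on a $20,000 overdraft.
overdraft = 20_000   # dollars
discount = 0.0025    # 0.25 percentage points, as a fraction

annual_saving = overdraft * discount
print(annual_saving)  # 50.0 -- about $50 a year, less than the bother of switching
```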
So, why do they say they would switch? I can’t be sure. But my guess is that they like to think of themselves as astute businesspeople who would not pass up opportunities for a better deal. And saying you would switch banks involves none of the time or hassle of actually doing it.
It is generally best to be skeptical about the results of surveys that get their data merely by asking people about their inclinations or habits. People have all sorts of reasons to misrepresent themselves. They usually don’t mean to deceive, but even if they are only lying to themselves the results will still be unreliable. If you want to know about the sexual prowess of men, for example, I wouldn’t advise gathering your information just by asking them.
It is difficult to know in advance what people will misrepresent. For example, you would think that voting intentions are something on which you could take anyone’s word. But they aren’t. The U.K. Conservative Party’s 1992 general election victory came as a surprise to opinion polling organizations, most of which had forecast a comfortable victory for Labour. Their post-election analysis of how they got it so wrong revealed that many people who vote Conservative are reluctant to admit it, even in an anonymous poll. So, be warned. If even Tories can’t be relied upon secretly to admit it, there is little you can take at face value.
Dope with Dad?
It is always refreshing to discover a good news story in the paper. I thought I had encountered one in the London Times (Feb. 24, 2003, p. 2) under the headline “Drug Parents.” It announced that “nearly a quarter of young drug users have smoked cannabis with a parent.” Family life is not dead in Britain after all.
Alas, I read on and discovered that the statistic couldn’t be trusted. It was the outcome of “a survey completed by 493 readers of rave magazine Mixmag.” You will see the problem. Even if those who complete surveys in Mixmag can be relied upon to tell the truth about their drug-taking habits, they are hardly a representative sample of young drug users. They are, for a start, people who want to share information about their drug-taking habits, which makes them more than usually likely to take drugs with their parents. Then, there is the simple fact that they read a magazine about the rave scene, which is notoriously drug-riddled. These aren’t typical young drug users; they are enthusiasts, the train-spotters of the drug world.
This statistic is a result of what is known as sample bias. The sample was not characteristic of young drug users more generally, and was uncharacteristic in a way that made it more likely to give the result in question.
The need to avoid sample bias when collecting statistics is well-known. The mistake is widespread nevertheless. Newspapers such as the London Times should certainly know better, because they frequently publish the results of political polls and sometimes even conduct them. Yet, if it gives a good headline, they are happy to publish the results of a badly biased survey, as the example illustrates.
Our drugs statistic is an example of a common way of ending up with a biased sample, namely, letting the sample choose itself. Those who volunteer to participate in surveys about something are not normal citizens with respect to that something. They are more passionate than most. So, what is true of them is not likely to be true of the wider population.
About ten years ago, the radio and newspapers were thrilled to announce that 40 percent of British women who go on holiday in Spain have sex with someone they had not previously met within five hours of arriving in the country. This statistic was gathered from a survey conducted by a women’s magazine. They had invited readers with interesting holiday sex experiences to participate.
More broadly, self-selection bias explains why politicians cheerfully ignore the views of protest marchers, letter-to-the-editor writers, and even party members attending the annual conferences. Only fanatics take part in such political activities, and most voters aren’t fanatical.
Most cases of sample bias are quite obvious, but some are difficult to detect. For example, you might think it reasonable to take a “snapshot” approach to discovering the average duration of periods of unemployment. Contact some portion of the unemployed population on one day of the year and ask them how long they have been unemployed. Provided the sample is large enough, its average is the average for all those who experience unemployment.
In fact, this sample would dramatically bias the result upward. People who are unemployed for long periods are much more likely to be unemployed on any given day than people who are unemployed for only short periods. Lots of people who have been unemployed for a week are back at work on the day of the poll. So they don’t get counted. But everyone who has been unemployed for years is unemployed that day and so they all get counted. To avoid this bias, you need a sample of people, not who are unemployed today, but who have been unemployed at some time in the last, say, ten years. The average term of unemployment in this sample gives a better answer.[11.2]
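The snapshot bias can be seen in a toy simulation. All the numbers here are invented for illustration: suppose half of all unemployment spells last one week and half last two years. A snapshot survey catches a spell with probability proportional to its length, so long spells are heavily over-represented:

```python
import random

random.seed(0)

# Hypothetical population of completed unemployment spells, in weeks:
# half last 1 week, half last 104 weeks (two years).
spells = [1] * 5000 + [104] * 5000
true_average = sum(spells) / len(spells)

# A snapshot survey catches each spell with probability proportional
# to its duration, so weight the draw by spell length.
snapshot = random.choices(spells, weights=spells, k=10_000)
snapshot_average = sum(snapshot) / len(snapshot)

print(true_average)               # 52.5 weeks -- the honest average
print(round(snapshot_average))    # about 103 weeks -- nearly double
```

The snapshot average lands near 103 weeks because the two-year spells carry over a hundred times the weight of the one-week spells, even though both are equally common.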
Before moving on, I cannot resist mentioning a really egregious case of sample bias. For many years now, it has been taken as a well-established fact that 10 percent of Western men are homosexual. Most believe this statistic but do not know its source. It is Kinsey’s Sexual Behavior in the Human Male, published in 1948. Alas, 25 percent of the sample used for Kinsey’s survey were prison inmates, despite the fact that prison inmates were only 1 percent of the American male population. Since they live in an all-male environment, prison inmates are more likely to have homosexual sex than other men.
It does not follow that less than 10 percent of men are homosexual after all. There were competing forces at work in Kinsey’s survey, especially the tendency of people to lie about what was then a taboo activity. So, for all Kinsey’s research tells us, we have no idea what percentage of men are homosexual.[11.3] Looking out the window of my office, I would be inclined to think it is more than 10 percent. Then again, my office is in Covent Garden.
Anorexia and Other Big Small Numbers
The BMA called for the fashion industry and television to stop focusing on “abnormally thin” celebrities, such as Kate Moss, Calista Flockhart, and Victoria Beckham of the Spice Girls, and for the Government to set targets on reducing the disease. Anorexia nervosa affects about 2 percent of young women and kills a fifth of sufferers.
—The London Times (May 31, 2000)
The British Medical Association (BMA) is always calling on people to stop doing this or that on account of its dreadful effects on the health. Normally, their mistake is in thinking that health is all people care about. I may know that smoking is bad for me but persist in any case, because I prefer a short and smoky life to a long fresh one. On this occasion, however, they went wrong on what should be their home ground, namely, on the medical facts and figures. The idea that anorexia affects 2 percent of young women and kills a fifth of sufferers is ridiculous.
There are 3.5 million British women between the ages of fifteen and twenty-five. If 2 percent of them suffer from anorexia nervosa, that is 70,000. And if a fifth die from it, we should expect 14,000 young women to die from anorexia each year.[11.4] You will begin to suspect that something has gone wrong when I tell you that in 1999 the total number of deaths in women from this age group, from all causes, including anorexia, was 855. Can anorexia really kill sixteen times more young women than even die?
We need not flounder around in the dark. Causes of death are recorded and the figures are available from the National Statistics Office. We can check the number of anorexia-caused deaths in young women. The BMA’s figure must be wrong, since no disease can kill more people than die. But how wrong is it?
The figure of 14,000 is more than a thousand times greater than the truth. The number of young women who died from anorexia nervosa in 1999 was 13. Not 13,000. 13.
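The multiplication the BMA’s figures imply is worth laying out explicitly (every number here is one quoted in the text above):

```python
# What the BMA's claim implies, taken at face value.
young_women = 3_500_000                 # British women aged 15-25
sufferers = young_women * 0.02          # "affects about 2 percent"
implied_deaths = sufferers * (1 / 5)    # "kills a fifth of sufferers"

print(int(sufferers))       # 70000
print(int(implied_deaths))  # 14000 -- versus 855 deaths from ALL causes,
                            # and 13 actual anorexia deaths, in 1999
```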
If I were Calista Flockhart, I’d have sued the BMA and the Times. By encouraging the media to stop focusing on her, they attempted to ruin her career, on the bogus allegation that looking at her makes people die of anorexia. Millions of young British women watch Calista Flockhart and at most thirteen die from it each year. That makes Ms. Flockhart safer than crossing the road.
I’m not Calista Flockhart of course, so I did not sue the BMA or the Times. But I did write to the editor of the Times pointing out their error. Neither my letter nor a correction was published, and I received no explanation of how they could have published such a crazy number. So, I am left to guess at where things went wrong.
My suspicion is that Helen Rumbelow, who wrote the article, suffers from an ailment that afflicts 25 percent of journalists and makes a fifth of them talk nonsense.[11.5] She has no sense of scale. When numbers get very small or very big, those afflicted lose all sense of whether or not they are reasonable.
We all suffer when the subject matter is unfamiliar. Is forty billion dollars a good price for a space shuttle, or is it a bit over the top? Most of us wouldn’t have a clue. Is 0.01 of a second a reasonable period of time for an electrical impulse to cross a synapse in your brain? Again, unless you are a neuroscientist, you’ll have no idea. And anorexia deaths in young women? Well, 2 percent isn’t very many. And if only a fifth of them die, that’s a very small number: only 0.4 percent. Seems reasonable, doesn’t it?
Usually, 0.4 percent is quite a small number. When it comes to deaths in young women, however, it’s enormous. Young women hardly ever die. Young men die a bit more. But, more or less, dying is the exclusive preserve of the old. That is something you might have expected the BMA and a medical correspondent from the Times to know, and they probably do know it in some general sense. But a very small number like 0.4 percent just didn’t ring the alarm bells.
Just as small numbers can be bigger than they look, so big numbers can be smaller. Barclays Bank’s profit announcement prompts an outraged newspaper editorial every year. “Three billion pounds profit! And still they shut branches and sack staff. Greedy bastards!” This misses the fact that Barclays is a very big business with many thousands of shareholders. No single greedy bastard gets that £3 billion. In 2002, £3 billion represented a return of only 15 percent on shareholders’ investment in the business. That’s a reasonable return in these hard times, but hardly scandalous.
The same mistake is at work when you hear all those amazing facts about the cost of repairing the damage done by a hurricane, the economic value of joining the Euro, and so on. A cost or benefit that is spread across many individuals is summed up and presented as a single, shockingly large number. Repairing hurricane damage may cost an amazing $150 million, but it will be borne by ten million Florida taxpayers, costing each of them a much less amazing $15. Joining the Euro really might increase the United Kingdom’s GDP by £3 billion per year, as the treasury’s recent report claims. But sixty million participate in the UK economy, so each benefits by only £50 per year, or £1 per week.[11.6]
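The same deflating division works on any headline figure of this sort (the numbers below are the ones quoted above):

```python
# Spreading a headline cost or benefit over the people who bear it.
hurricane_repair = 150_000_000     # dollars
florida_taxpayers = 10_000_000
print(hurricane_repair / florida_taxpayers)  # 15.0 dollars each

euro_gdp_gain = 3_000_000_000      # pounds per year
uk_participants = 60_000_000
per_head = euro_gdp_gain / uk_participants
print(per_head)                    # 50.0 pounds a year
print(round(per_head / 52, 2))     # just under a pound a week
```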
Everyone enjoys being shocked by amazing statistics. But you have to be able to believe them; the fun is wrecked by discovering the statistics are bogus. The brief moment of elation I experienced upon hearing about the promiscuity of English women in Spain was spoiled by discovering the shoddy sample selection that lay behind it. If I hadn’t noticed the sample bias, I could have enjoyed the alleged fact for longer. Ignorance is bliss, as they say. But I console myself with the fact that I did not waste the price of an airfare to Spain. Ignorance can also be expensive.
That is the real value of learning to see through bogus statistical claims. You don’t make the mistake of acting upon them, by flying to Spain for unlikely sex or pointlessly lowering your prices or supporting silly policies.
12 – Morality Fever
As a boy, I occasionally told my parents how awful I found some classmate or neighbor. I would list his most appalling characteristics and wait for the parental groans of agreement. But they were never forthcoming. Instead, they always offered some hypothesis as to why the little creep had turned out so (not me, the other kid). His parents had divorced and he felt insecure, his father beat him mercilessly, or something of the sort.
“Maybe,” I would protest, “but explaining why he is awful doesn’t show that he isn’t awful. On the contrary, it assumes he is. So why do you make these remarks as if they count against my point—which was only that he is, in point of fact, awful?” Or words to that effect.
It is bizarre to think that you have refuted a claim by explaining why it is true. How could anyone get so confused as to think this?
Morality fever did it. My parents assumed that I was morally condemning the boy in question. “It isn’t his fault” is what they were saying. But I wasn’t morally condemning him any more than I would be morally condemning a desert by saying that I find it objectionably dry. The desert can’t help it. It is dry nevertheless, and I don’t care for it.
Had I told my parents that there is a mountain range in Switzerland, they would not have corrected me by explaining how that mountain range came to be formed. Only in a haze of moral anxiety are people capable of mistaking an explanation for a refutation.
My parents were not alone in suffering from morality fever. It is a widespread malady of the mind, and I suspect it is spreading. An increasing number of opinions and topics seem to raise the moral temperature to a point where the brain overheats.
This chapter is devoted to three more mental malfunctions that commonly occur when morality fever sets in. Being alert to them is important because, where the issues are morally weighty, proper reasoning is required more than ever. Or so I shall argue in the last subsection. Just as all self-help books should begin with a confession, so they should end with preaching.
What’s Wicked Is False
During New Zealand’s 1985 public debate on legalizing homosexuality, one of the more peculiar but nonetheless popular arguments was that homosexuality should be illegal because it is unnatural. The argument is peculiar because, whatever is meant by “unnatural,” it is silly to think that what is unnatural should be illegal. Miniature golf is an unnatural activity, yet it would be outrageous to criminalize it on that account alone. The same goes for little boys kissing their octogenarian grandmothers, wearing socks with sandals, and open-heart surgery.
Yet, few on the pro-legalization side of the debate pointed this out. Instead they replied that, in fact, homosexuality is natural. This struck me as tactically disastrous, since it tacitly accepted the idea that what is unnatural should be illegal. It is an example of something I have since noted often, namely, a strong bias in favor of arguing about the facts rather than about what follows from them. It is a foolish bias, because it gives the irrational a strong advantage in debates. They need only invalidly draw their favored conclusion from a true premise and an opponent with this bias will be in a hopeless position.
Nevertheless, this is what happened. Most on the pro-legalization side claimed that homosexuality is natural because it has a genetic basis. In 1985, this was a controversial claim and certainly the science involved was beyond the understanding of most in the debate. So it was fortunate perhaps that some had a much simpler approach to establishing that homosexuality is natural. It must be, they argued, since those who wish to keep homosexuality illegal claim it isn’t, and keeping homosexuality illegal is obviously wrong.
These thinkers accepted the structure of the anti-legalization argument, but reversed its direction. They agreed that if homosexuality is unnatural it should be illegal. But homosexuality should not be illegal. So homosexuality must be natural.
This approach allows those with moral certainty to discover all sorts of interesting facts about the world without going through the normal rigors of scientific research. By accepting some alleged link between facts about how the world is (e.g., that homosexuality is natural) and facts about how it ought to be (e.g., that homosexuality ought to be legal), those with certainty about the latter are blessed with instant knowledge of the former. Those poor fools struggling in the laboratory to discover a genetic basis for homosexuality; if only they had clear moral vision they could rest easy.
Despite its absurdity, this “moral method” is common where touchy subjects are concerned. The debate about systematic differences in the IQs of different races is the most obvious example. Scientists have published results showing that Asians’ average IQ is higher than whites’ and whites’ higher than blacks’.[12.1] Most critics of the view reject the finding without any discussion of the research methods or data used to arrive at it. The fact that the finding is agreeable to racists is taken to be a sufficient ground for its rejection.