
The following article appeared in Left Business Observer #73, August 1996. It retains its copyright and may not be reprinted or redistributed in any form - print, electronic, facsimile, anything - without the permission of LBO.

The costs of crime

by J.W. Mason


J.W. Mason was a reporter for LBO when he wrote this piece; he's now working somewhere in DC.


Suppose you could calculate the dollar value of the costs of crime - lost property, medical bills, missed work, pain and suffering - and figure out its total yearly cost to society. While "putting a dollar value on the suffering resulting from crime might seem cold and impersonal, such information is useful in the policy arena." For example, "is a patrol pattern that prevents a rape better than one that prevents three burglaries?" The answer, it turns out, is yes: 20.7 times better, in fact. Who would have guessed it?

The quotes and figure come from "Victim Costs and Consequences: A New Look," a study by Mark Cohen, Ted Miller, and Brian Wiersema (CM&W) published with some fanfare by the National Institute of Justice (NIJ) this spring. This study, building on earlier work by Miller and Cohen, adds up seven types of cost (lost productivity, medical care, mental health care, police and fire services, social services, property loss and damage, and lost quality of life) for a dozen types of crime. The bill, by their reckoning, comes to $447 billion a year.


What's wrong

It's worth reviewing why this is such a bad idea. Take CM&W's discussion of lost productivity. For nonfatal crimes, they not unreasonably estimate lost productivity from workers' compensation awards for similar types of injuries to workers of the same age and earnings as crime victims. But most lost productivity is due to fatal crime, and here, rather than base their estimate on the actual earnings of murder victims, they simply use average earnings for individuals of the same age and sex. This introduces a major upward bias in their calculations, since murder victims are generally poor.

In the absence of data on the earnings of murderees, one could correct for race, since nonwhites are both disproportionately poor and likely to be murdered. CM&W declined to do that math, because "society...might decide for equity reasons that differences in value of life estimates across individuals should not be used for policy analysis." Say what? If CM&W are prepared to fudge some numbers on the basis of some social preference, they might as well have saved everyone a lot of trouble and just made them up. Left unexplained, too, is why they do incorporate age and sex into their analysis; that is, why it's OK to value the life of a young person more than that of an old person and that of a man more than that of a woman. Murder victims tend to be young, male, poor, and nonwhite; it's interesting that CM&W incorporate the two factors that increase their estimate of lost earnings and ignore the two that would diminish it. If "lost productivity" means lost wages, then an affluent person's death is more costly than a poor person's. How does "society" feel about that?


Getting spongier

All these problems aside, at least there are some dollar values to base the estimates on. But what about the "lost quality of life" of crime victims? Suffering has no market value, concede CM&W, but "[n]evertheless, these losses are real. Victims would pay dearly to avoid them." How can you measure the dollar value of something with no market price?

Here you have to give CM&W some credit for ingenuity. They based their calculations on 1,000 jury awards to victims of violent crimes, extracting a "functional relationship" between the size of the pain and suffering award and the characteristics of the crime victim (like age, sex, and work status) and of the crime (severity of injury, relationship of victim to offender, etc.). From this, they calculate what the total awards would be if every crime resulted in a jury award: $344 billion. "It's the sort of thing," says Berkeley law professor Franklin Zimring, "that if a first-year law student did it, you would give him an A for originality and then you would explain why it was completely wrong."

Indeed. The pain and suffering costs computed by CM&W are a case of what Zimring has called "catastrophic compound error" and a warning to anyone inclined to play games with statistics. Because CM&W assume that jury awards to crime victims represent measurements of the "true" value of their suffering, their equations include factors they assume are relevant to victim pain and suffering, like whether the crime involved facial scarring, but not ones they assume are irrelevant, like the state where the trial took place. But the size of awards depends heavily on jurisdiction: compensatory awards are 25 times higher in New York than in Tennessee. Far from being neutral measurements of lost quality of life, jury awards are highly sensitive to the details of the legal process.
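The move CM&W make - fit a regression to the cases that happened to reach a jury, then apply the fitted equation to every crime and sum the predictions - can be sketched roughly as follows. This is purely illustrative: the variables, coefficients, and counts are made up, not CM&W's actual data or model.

```python
# Illustrative sketch of the CM&W-style extrapolation, with invented data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample of 1,000 litigated cases: victim age, injury
# severity (0-10), facial scarring (0/1), plus the jury's award.
n = 1_000
X = np.column_stack([
    np.ones(n),                    # intercept
    rng.integers(18, 80, n),       # victim age
    rng.uniform(0, 10, n),         # injury severity
    rng.integers(0, 2, n),         # facial scarring
])
true_beta = np.array([5_000.0, -100.0, 20_000.0, 50_000.0])
awards = X @ true_beta + rng.normal(0, 30_000, n)

# Ordinary least squares fit of the "functional relationship"
beta, *_ = np.linalg.lstsq(X, awards, rcond=None)

# The extrapolation step the article criticizes: predict an "award" for
# every crime, litigated or not, and add them all up.
m = 10_000
all_crimes = np.column_stack([
    np.ones(m),
    rng.integers(18, 80, m),
    rng.uniform(0, 10, m),
    rng.integers(0, 2, m),
])
total = (all_crimes @ beta).sum()
print(f"implied total pain-and-suffering 'cost': ${total:,.0f}")
```

The sketch also shows where the error compounds: any factor that drives real jury awards but is left out of `X` (like jurisdiction) gets silently baked into the predictions for all 10,000 crimes.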


Confidence game

But you can't tell from the NIJ study alone just how vacuous the pain and suffering (P&S) numbers really are. Like any figure arrived at by extrapolating from a sample, these numbers mean nothing without their confidence intervals, typically given as the range within which one can be 95% certain that the true numbers lie. So, for example, when a poll says that 50% of the population believes there's life on Mars, with a 3-point margin of error, the chances are 95 out of 100 that the real number lies between 47% and 53%.

In the NIJ study, CM&W don't give the confidence intervals, merely noting that they are "extremely large." Helpfully, Miller provided LBO with those numbers. So how large are these "extremely large" confidence intervals? Soooo large...that they leave the figures completely meaningless. For quality of life lost by assault victims, the 95% confidence interval runs from $1,700 to $1.3 million; for rape, $7,500 to $9.8 million. For society as a whole, the total P&S costs of crime are between $175 billion and $27 trillion. No wonder they published only the $344 billion number; even a journalist might think there's something fishy about an estimate with a range of almost $27 trillion (greater than 1994's gross global product).

Now there's nothing obscure about confidence intervals; it's first-year statistics. But, as Miller confessed, "None of us are big econometricians here."



The confidence problem points up the fundamental flaw in CM&W's study: they are trying to measure a nonexistent quantity. What does the "cost of crime" mean? Certainly not the blow to GDP; from that perspective crime-incurred expenditures on doctors and car alarms appear as gains, and stolen property, a transfer payment. Maybe they mean what people would theoretically be willing to pay to eliminate crime. But who's selling that, and at what price?

The real political question is how much of our resources to put into police, courts, and prisons, as opposed to, say, education and job creation. Here knowing "the cost of crime" is no help. The implication of CM&W's $450 billion cost of crime figure is that it might be worthwhile to spend that much on crime control. But this is a non sequitur: using CM&W's figures for the value of an average life, one can arrive at an annual "cost of death" well in excess of GDP. Does this mean we should spend every dollar we have on health care?

CM&W's work may be bad scholarship, but it can be useful for propaganda. They received over half a million dollars from the NIJ for their work; someone must have expected a return on that money. The study has already reaped a considerable harvest of publicity, of course, and there's reason to think it might have a more lasting impact as well, since Cohen and Miller's earlier work on crime costs helped end early prisoner release programs in several states. So maybe the study's funders were right not to care how accurate or meaningful its results actually were.

Certainly its authors don't seem to. "It's just a back of the envelope sort of thing," says Wiersema, brushing off criticisms of his assumptions and methodology. Miller, explaining why he sees his work as useful, is a bit more explicit about why he's not really concerned if its conclusions are justified: "Politicians often know what they want to do, but they need the numbers to justify doing what they want to do. The influence of numbers varies, sometimes it's that extra little push off the edge, sometimes it's the way the media present it, sometimes it's just another number to put in the speech. Sometimes the speech and the influence it has is key, and then it doesn't matter so much what the number was but that there was a number and that it sounded reasonable."

Unfortunately, that's just exactly right.
