Buyer Beware

Posted on May 23, 2007


Three researchers from the Harvard School of Public Health--David Hemenway, Matthew Miller and Deborah Azrael--get a fair amount of attention for their research. Their studies consistently claim that guns cause crime and suicide.

The New York Times, which never appears to have covered a study finding that guns are beneficial, has recently given their research significant news coverage, while their work claiming that carrying a gun is directly related to road rage was covered by England's prominent New Scientist magazine. Their more recent work claiming that more gun ownership leads to higher homicide rates was covered in everything from The Times of London to U.S. wire services such as United Press International.

Despite all the publicity, their work has been shrouded in secrecy; they have made it particularly difficult for other researchers to carefully scrutinize their work. Other researchers have not been allowed to look at their data. Since they won’t release survey data even eight years after the survey has been completed, and five years after their last research paper using the data was published, it is hardly surprising that they refuse to release their data for other studies, such as the one on road rage, when they start getting news coverage.

It is understandable when they say they are unable to release parts of their data because it has been lost, yet they will not release even the data they still possess. If the data were finally released 10 years later and mistakes were found, no one--including the media--would care. By then it would be far beyond even "old news."

Fortunately, though, when the Harvard three rely on publicly available data, their studies are so small that it is not hard to replicate their work. For example, their study on gun ownership and homicide had a trivial 50 observations (50 states with survey data on gun ownership from just one year). By today's standards, that is ridiculously low. Compare that to a study such as John Lott's More Guns, Less Crime. His study (the second edition) examined crime rates for all 3,140 U.S. counties per year for 20 years (from 1977 to 1996), thus including over 62,000 observations.

While Hemenway, Miller and Azrael’s research accounted for a dozen factors that could help explain different crime rates, Lott’s study accounted for thousands.

As an aside, note that sometimes in their other studies enough information is provided that even the media can understand that there is a problem. Take their claim that carrying a gun in one’s car causes road rage.

Hemenway, Miller and Azrael believed that their study proved that Right-to-Carry laws caused people to behave dangerously and thus directly led to people shooting or threatening others with a gun. The obvious way to examine such a theory would have been to see whether permit holders lose their permits for this type of behavior. But everyone already knows the answer to that question: Permit holders are extraordinarily law-abiding and lose their permits for gun violations at rates measured in hundredths or thousandths of a percentage point.

So if they didn’t look at the most obvious evidence, how did Hemenway, Miller and Azrael decide to examine their theory? They conducted a survey that asked people whether they had had a gun in their car at least once over the last year, and then asked whether they had made an obscene gesture or driven aggressively over that same yearlong period. There were no questions asking whether the people had a permit to carry a gun legally, and despite the claimed connection between gun possession and road rage, there were no questions asking whether there actually was a gun in the car at the time of the road rage incident.

What might have gotten the most attention, though--something that the authors apparently didn’t notice themselves--was that their results actually showed that “liberals” were much more likely to engage in road rage than “conservatives” (both in terms of making obscene gestures and driving aggressively), and that factor mattered more than whether one had a gun in his or her car in the past year. If the researchers had noticed this result, would they have opposed liberals driving cars as well as guns in cars?

The road rage study was simple to rebut, but the claims these authors make relating gun ownership to homicides require a little more knowledge. To learn the truth, we replicated their study using the exact publicly available data they used, and also researched what other data were available that might have been appropriate to include.

After studying the research, two major problems with Hemenway, Miller and Azrael’s empirical work stand out:

  1. How do they choose what data to look at?
  2. Do they use the appropriate tests?

Let’s look at these one at a time.

A red flag should always go up when researchers use only a fraction of the data available. Researchers should always use all data available, or they had better have a darned good reason not to do so.

Take a simple example: Suppose you flipped a coin 20 times and got 10 heads and 10 tails. Obviously, it seems a “fair” coin: there is no bias toward heads or tails when you flip it. But if you get to pick, say, only 10 of the coin flips, you could pick just the 10 cases where you got a “head” and falsely claim that the coin was biased towards “heads,” even when it wasn’t.
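The cherry-picking problem above can be sketched with a short simulation (a toy illustration with simulated flips, not data from any of the studies discussed):

```python
import random

random.seed(0)
flips = [random.choice(["H", "T"]) for _ in range(20)]

# Using all the data: the share of heads reflects the fair coin.
all_heads_share = flips.count("H") / len(flips)

# "Selecting" only 10 flips -- and conveniently keeping the heads first --
# manufactures apparent bias from the very same fair coin.
picked = sorted(flips, key=lambda f: f != "H")[:10]
picked_heads_share = picked.count("H") / len(picked)

print(f"All 20 flips:    {all_heads_share:.0%} heads")
print(f"Picked 10 flips: {picked_heads_share:.0%} heads")
```

However the flips come out, the selected subset can never look *less* biased toward heads than the full sample, which is exactly why using only a fraction of the available data is a red flag.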

Hemenway, Miller and Azrael use estimates of gun ownership by state from a 2001 survey by the Centers for Disease Control and Prevention, and relate those estimates to homicide rates (ironically, they use both homicides in which firearms were used and those in which they were not). There are a number of points one immediately notices about this study. First, 2001 wasn’t the only year that the CDC did its survey--there were identical surveys for all of the United States in 2002 and 2004, though that data wasn’t used.

The “researchers” also didn’t include, or even discuss, the District of Columbia--a place where legal handgun ownership is virtually non-existent and rifles are rare, yet a place where crime rates are extremely high. In addition, besides homicides, no attempt was made in the study to compare any other crime rate to gun ownership.

Well, guess what? These things matter--and they matter a lot.

As it turns out, the estimates of the level of gun ownership from this survey bounce around a lot from one year to another. If you look at homicide, murder or overall violent crime and also account for something as obvious as the corresponding arrest rates for those crimes, there is always a negative relationship between those crimes and gun ownership. The effect is much smaller and much less statistically significant for 2001 than it is for 2002 and 2004.

Did Hemenway, Miller and Azrael just happen to randomly pick the survey data from the year that best worked to “prove” their case? Who knows, but it sure looks suspicious. It is very hard to think of a valid reason for only looking at survey data for 2001 and not the other years, which were more recent.

Interestingly, not all the results were affected by the years chosen. In looking at robbery and aggravated assaults, results showed that more guns meant less crime for all those years. Rapes were the only crime that provided a perverse result, but then only for one year--2004.

Given how high crime is in the District of Columbia and how low its gun ownership rate is, it is hardly surprising that including D.C. makes a big difference. In fact, including D.C. actually makes the results show a strong relationship in which more guns mean less crime, not the opposite, as the researchers allegedly “proved.”
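How much a single extreme jurisdiction can move a 50-observation cross-state fit is easy to demonstrate. The numbers below are hypothetical, not the actual CDC or FBI figures, but they show how one D.C.-like point (very low ownership, very high homicide) can flip the sign of a simple least-squares slope:

```python
# Synthetic illustration: sign-flip from a single extreme observation.
def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical gun-ownership rates (%) and homicide rates per 100,000
# for a handful of "states" -- a mildly positive relationship on their own.
ownership = [30, 35, 40, 45, 50]
homicide = [4.0, 4.5, 5.0, 5.5, 6.0]
without_dc = slope(ownership, homicide)  # positive

# Add a D.C.-like point: very low legal ownership, very high homicide.
with_dc = slope(ownership + [5], homicide + [30.0])  # turns negative

print(f"Slope without D.C.: {without_dc:+.2f}")
print(f"Slope with D.C.:    {with_dc:+.2f}")
```

With samples this small, whether one outlying jurisdiction is included or excluded can determine the headline result, which is why leaving D.C. out without explanation matters so much.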

It is much more difficult to believe that Hemenway, Miller and Azrael simply forgot the District of Columbia existed than to believe that they intentionally ignored data from D.C. that failed to support their hypothesis. Why was this data, which was readily available, not used in the research? Only the trio of researchers knows.

There is a major problem with finding meaningful results when using what is called “cross-sectional” data--that is, looking across different places at one point in time. We see this frequently when people make comparisons with murder rates and gun ownership.

For instance, you commonly hear that Britain has fewer murders and fewer guns than the United States, so it is argued that the smaller number of murders must be because the British have fewer guns. Of course, that assumption ignores the fact that about 100 years ago Britain had very few gun crimes or gun murders, yet no gun control laws.

Take another simple example, this one regarding the death penalty. Some years ago, The New York Times conducted a cross-sectional study of murder rates in states with and without the death penalty. They found: “Indeed, 10 of the 12 states without capital punishment have homicide rates below the national average, Federal Bureau of Investigation data shows, while half the states with the death penalty have homicide rates above the national average.”

As a result, they erroneously concluded from that study that the death penalty did not deter murder. The problem with this conclusion, though, was that the states without the death penalty (Alaska, Hawaii, Iowa, Maine, Massachusetts, Michigan, Minnesota, North Dakota, Rhode Island, West Virginia, Wisconsin and Vermont) have long enjoyed relatively low murder rates, something that might well have more to do with factors other than the death penalty.

In reality, what the research actually showed was that states with already-low murder rates simply felt less need to adopt the death penalty. To determine whether there was a relationship between murders and the death penalty, the researchers should have studied whether the gap in relative murder rates fell in states that introduced the death penalty. Indeed, it did.
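The difference between the snapshot comparison and the gap-over-time comparison can be made concrete with toy numbers (hypothetical rates, not the Times' data):

```python
# Murder rates per 100,000 as (before, after) the adopting state's policy change.
no_penalty_state = (2.0, 2.0)   # low baseline, no policy change
penalty_state = (10.0, 8.0)     # high baseline, adopts the death penalty

# Cross-sectional snapshot after adoption: the death-penalty state still
# looks worse, tempting the wrong conclusion that the policy fails.
snapshot_gap = penalty_state[1] - no_penalty_state[1]

# Comparison over time: the gap between the two states *fell* after
# adoption, which is the evidence actually relevant to deterrence.
gap_before = penalty_state[0] - no_penalty_state[0]
gap_after = penalty_state[1] - no_penalty_state[1]

print(f"Snapshot gap after adoption: {snapshot_gap}")
print(f"Change in gap: {gap_before} -> {gap_after}")
```

The snapshot still shows the death-penalty state with more murders, yet the narrowing gap points the other way; this is the sense in which cross-sectional data alone can support exactly the wrong conclusion.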

Unfortunately, whether the topic is road rage or gun ownership and homicides, Hemenway, Miller and Azrael’s research relies on inappropriate tests that produce invalid conclusions. Economists may not be perfect, but the need to use appropriate tests to establish an actual cause-and-effect relationship is something most researchers outside of public health have understood for many years.

Hemenway, Miller and Azrael’s small, primitive studies, their consistent unwillingness to share their data, their selective discarding of data without any explanation and their ignoring of results that don’t fit their desired conclusions all fail to meet what we would like to think of as science. Unfortunately, the media have been complicit in all of this.

Whether it is The New York Times or The Times of London, media outlets reporting on this research have failed to ask other academics for critical comments when writing their news stories.

Perhaps the next time the media intends to publish a story about the Harvard trio’s research, they can ask Hemenway, Miller and Azrael if they plan to share their data with other researchers. If the answer is “no,” it would be wise to take a closer look at the research before simply parroting the results to unsuspecting readers.
