Let’s start with a statistical question about safety: Who is lying to us?
Vice President Harris assures us that we are safer now than when she took office. National Public Radio tells us that “Violent crime is dropping fast in the U.S. — even if Americans don’t believe it.”
But the director of the Crime Prevention Research Center says not to believe media reports that crime is declining, and statistics from the Department of Justice’s Bureau of Justice Statistics appear to support him. From 2019 to 2023, violent crime victimization increased by more than 20 percent, excluding simple assault.
At our cultural moment of distrust, it sounds as if the phrase popularized by Mark Twain more than a century ago remains as apt today as ever: “Lies, damned lies and statistics.” Reflexively, we interpret conflicting statistics as clear evidence of a statistical lie. But is that go-to interpretation always or even usually warranted?
As with language, people can use statistics to mislead. However, statistical controversy does not imply a statistical crime. Collecting data and summarizing them with statistics is less straightforward than many believe. It involves overcoming inherent measurement challenges and myriad practical and conceptual judgments that people can and should debate.
Choosing which statistic to emphasize in reporting requires further judgment. I should know: I’ve seen the statistical sausage from pig to table, having collected data from farmers in the field, cleaned it in a federal statistical agency, and interpreted it in White House policy debates.
Imagine trying to measure the prevalence of crime. One approach is to measure crimes reported to law enforcement, the basis of FBI crime statistics. This tacitly overlooks the large and significant percentage of sexual or domestic assaults, for instance, that go unreported.
Another approach is to survey people and ask if they have been the victim of a crime, which is what the Bureau of Justice Statistics does. If we resolve all the practical data collection quandaries, we still face hard conceptual and often moral questions. What constitutes a crime? What crimes matter most? And which should we include in the statistic to answer questions such as whether we are safer now than before?
Similar practical and conceptual challenges arise when measuring other issues of public concern such as poverty and unemployment. For example, when determining whether someone is in poverty, should we treat the value of housing vouchers and Medicaid benefits as income? The answer matters, since the inclusion of such benefits yields much lower poverty rates.
For unemployment, should we count as unemployed people who recently gave up looking for work? How do we treat those working minimal hours, who would like to work more but have not found the opportunity? People can provide different answers with good reasons behind them, all of which are healthy to debate publicly.
Lively public debate can reflect a healthy desire to verify that the statistics support the stories told through them. FBI crime data capture a minority share of the crime that occurs, so it is good to ask whether crime is really falling or whether we truly have a crisis.
Woe to us if we collectively stop asking what’s really happening and how we know. If we do stop asking, we will ignore serious problems and instead waste attention and money on phantoms.
We should expect politicians and advocacy groups to report statistics with partiality, selecting and amplifying only the ones that tell the good story and advance their agenda. We should see these people more as trial lawyers, who we know will selectively present evidence supporting their clients’ interests. Journalists and academics play different roles, and we should expect and demand more evenhandedness and explanation from them.
The closer we get to statistics and their construction, the messier they look, but keep in mind TINAS: There Is No Alternative to Statistics. Yes, journalists can drop them from their articles, and politicians can drop them from their speeches. But don’t expect them to drop their statistics-like claims.
They will still claim that something is rising or falling, common or rare, a crisis or not. They will still claim that illegal immigration is surging or that we’re safer than before. The claims are statistical in nature and are better anchored to actual statistics: summaries of systematic measurements of people crossing the border or being robbed.
Lively discussions about the accuracy of our statistics are better than discussions about anecdotes alone, and far better than no discussion at all.
Jeremy G. Weber is a professor at the University of Pittsburgh Graduate School of Public and International Affairs and author of Statistics for Public Policy: A Practical Guide to Being Mostly Right (or at Least Respectably Wrong), University of Chicago Press.