Watch out ‘CSI’: New report finds bloodstain pattern analysis isn’t reliable
In study of expert opinions, discrepancies, errors found
Anyone who’s a fan of TV police procedurals and true crime knows that many cases hinge on blood pattern evidence. That’s nothing new.
The first documented use of bloodstain pattern analysis, or BPA, in a trial was in Maine in 1857, when George Knight was convicted of stabbing his wife, Mary, to death. In the 166 years since, it’s been one of the most common types of evidence used to both convict and exonerate people charged with murder.
But in the past couple of decades, the accuracy of blood pattern analysis has increasingly been questioned. A new report from the first in-depth study on the topic, made public this week by the National Institute of Justice – the research, development and evaluation agency of the U.S. Department of Justice – found that blood pattern analysis is tainted by erroneous conclusions, lack of consistency, and more. That finding raises serious questions about the many criminal trials that have relied on such evidence.
The study by Noblis, a Virginia nonprofit science research firm, recommended changes, including universal standards that would make such analysis more consistent.
Noblis was commissioned in 2021 by the NIJ to conduct a “black box study” of blood pattern analysis. Such studies in forensic science measure the reliability of methods that rely mainly on human judgment, as opposed to methods that rely on laboratory instruments.
In the study, 75 practicing bloodstain pattern analysts reviewed and analyzed photos of bloodstain patterns and crime scenes that represented a broad spectrum of cases. Participants were prompted to categorize what they were looking at, asked to provide a written analysis of up to 75 words, and asked follow-up questions.
A major finding was that, with photos where the cause of the blood was known to those conducting the study, 11.6 percent of the conclusions by participants were wrong.
“Both semantic differences and contradictory interpretations contributed to errors and disagreements, which could have serious implications if they occurred in casework,” the study said.
Noblis’ authors stressed that the participants were volunteers and that the error rates and other figures found shouldn’t be taken as an exact measure of what is happening in actual casework. Still, the “magnitude [of error rates] and the fact that they corroborate the rates measured in previous studies should raise concerns in the BPA community.”
The study came after a 2009 National Research Council report found “the uncertainties associated with bloodstain pattern analysis are enormous” and “the opinions of bloodstain pattern analysts are more subjective than scientific.”
That was backed up by a 2016 report from the President’s Council of Advisors on Science and Technology.
One major issue the new study uncovered is that disagreements among analysts were often the result of semantics – differences in how words are defined or used.
Another issue, the study found, was that many conclusions by participants expressed an “excessive level of certainty” despite sparse available data.
Participants also often came to contradictory conclusions about the same blood pattern.
There were instances, too, where they got things just plain wrong – for instance, identifying a splash as a drip, or vice versa. That may not seem like a big deal, but when it’s used to reconstruct what happened at a crime scene, it could be the difference between a guilty and a not-guilty verdict.
NIJ agreed that the Noblis findings mean there is “a critical gap in the court testimony of the data presented in legal proceedings.”
NIJ added: “Because the conclusions reached by participants were sometimes erroneous or contradicted other analysts, this suggests potentially serious consequences for casework — especially regarding the possibility of conflicting testimony in court.”
To illustrate how critical, yet problematic, blood pattern analysis is at trial, the study cited the case of former Indiana State Trooper David Camm, who was convicted twice of the 2000 murder of his wife and two children before being exonerated at a third trial in 2013 by DNA evidence.
In Camm’s first two trials, experts for each side came to different conclusions about what the “key” blood evidence showed.
Analysts for the prosecution said the blood on Camm’s clothes was back spatter from a gunshot. Analysts for the defense said it was a transfer stain from Camm trying to help his mortally wounded children. Both experts were looking at the exact same thing.
The Noblis study points out that bloodstain pattern analysis is different from other kinds of forensic evidence. DNA and fingerprints put a person at the scene of a crime. Blood pattern analysis attempts to determine what happened at the scene, including “information used in determining whether an incident was suicide or homicide, or whether a claim of self-defense is supported (or negated) by the evidence,” according to the study.
To illustrate how inconsistent analyses can be, the study pointed out that one participant was wrong 36 percent of the time on “highly consequential” factors in photos where the causes were known by the study facilitators, but not the participants. Another participant was wrong only 1 percent of the time on the same type of questions.
Both participants do bloodstain pattern analysis as part of their jobs, work in a laboratory, have testified in court as BPA experts, hold at least a master’s degree, and neither completed formal, supervised BPA instruction.
Their similar backgrounds underline that factors like level of formal education or working in a lab aren’t the issue; rather, analysts are operating with inconsistent standards and information.
The authors recommend uniform standards for methodology and training. They also recommend that the conclusion of a single analyst not be taken as definitive.
While individual analysts in the study often came to the wrong conclusion, it was rare for multiple analysts to be wrong about the same pattern.
“Majority conclusions were almost always correct,” the study says. “Therefore, multiple independent verifications by different BPA analysts may be a reasonable path forward.”
Other recommendations are that standards for technical review and blind verification be developed, as well as criteria for supporting conclusions and making classification decisions.
The study only makes recommendations, so there is still a long way to go before there will be changes in the real world. Or on “CSI.”
This article is being shared by partners in the Granite State News Collaborative. For more information visit collaborativenh.org.