Understanding Forensic Comparisons

April 03, 2019

On April 1, Statistics and Human Rights Program Manager Robin Mejia, together with Maria Cuellar, Dana Delger, and Bill Eddy, published an article on how to evaluate forensic comparisons and on what can go wrong when examiners overstate their conclusions. The piece is part of a special issue of the magazine Significance, guest-edited by the Center for Statistics and Applications in Forensic Evidence, on how to improve forensic science.

---

What does a match mean?

On 11 March 2004, terrorists in Madrid, Spain, detonated bombs on several commuter trains. In total, 191 people were killed and 1,400 were injured. After the bombing, examiners from the Federal Bureau of Investigation (FBI) identified a latent fingerprint found on a bag containing detonators and explosives as coming from an Oregon lawyer named Brandon Mayfield. Mayfield was arrested and held as a material witness for two weeks, until the Spanish National Police determined that the print did not, in fact, come from Mayfield, but from another man living in Spain.

How did this happen? A “senior fingerprint examiner” at the FBI, who made the original identification, “‘consider[ed] it to be a 100% identification’ of Mayfield”. The match was verified by the unit chief of the FBI's Latent Print Unit, “a retired FBI fingerprint examiner with over thirty years of experience”, and an independent fingerprint examiner “widely considered a leader in the profession”.1 After the error was uncovered, the Office of the Inspector General for the United States Department of Justice investigated Mayfield's case. Among other findings, it concluded that “the unusual similarity of details on the fingers of Mayfield and the true source of the print… confused the FBI Laboratory examiners, and was an important factor contributing to the erroneous identification” (bit.ly/2Ezvbwr).

Mayfield is far from the only person to suffer from a miscarriage of justice. Since 1989, more than 2,000 individuals have been exonerated after having been wrongfully convicted, according to the National Registry of Exonerations (bit.ly/2EzTlXz). Disturbingly, around a quarter of those cases included “false or misleading forensic evidence”.

One of the issues that can lead to errors in forensic analysis (as is apparent in Mayfield's case) is the way in which examiners deal with uncertainty. In 2016, a report from the President's Council of Advisors on Science and Technology (PCAST) noted that forensic examiners frequently state that their conclusions about forensic evaluations are “100 percent certain”; have error rates that are “essentially zero”, “vanishingly small”, or “microscopic”; or have a chance of error so remote as to be a “practical impossibility” (bit.ly/2EFU89o).

To a statistician, these characterisations of error in a process of human matching sound vague and implausible, but to a juror or a judge they can sound very convincing, especially when they come from an expert witness. What such confidence statements obscure is that they often reflect only the forensic analyst's opinion about whether two items match, and fail to take into account the evidential value of that match. As Mayfield's case (and many others) demonstrates, similarity alone is not sufficient to understand the value of an item of evidence. The lack of a proper foundation for discussing uncertainty about that value in forensic conclusions has likely contributed to wrongful convictions.
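
To see in rough terms why similarity alone is not enough, the sketch below (not taken from the article, and with purely hypothetical numbers) uses Bayes' rule and a likelihood ratio, standard statistical tools for weighing a match, to show how even a very discriminating match can leave real doubt about the source when the pool of alternative sources is large.

```python
# Illustrative sketch only: all numbers are hypothetical assumptions,
# chosen to make the arithmetic visible, not taken from any real case.

def posterior_source_probability(prior_odds, likelihood_ratio):
    """Combine prior odds with the likelihood ratio of the match,
    then convert the posterior odds into a probability."""
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Likelihood ratio: how much more probable the observed similarity is
# if the print came from the suspect than if it came from someone else
# (assumed here to be a one-in-a-million coincidence).
likelihood_ratio = 1_000_000

# Prior odds: suppose the print was found by searching a database of
# 20 million candidates, with no other evidence pointing to any of them.
prior_odds = 1 / 20_000_000

p = posterior_source_probability(prior_odds, likelihood_ratio)
print(f"Posterior probability the suspect is the source: {p:.2%}")
# ~4.76%: a striking "match" can still leave substantial doubt when
# the pool of alternative sources is large.
```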

View the full article

(The full article discusses how to assess uncertainty and the evidential value of a match between forensic samples.)

Read the rest of the issue 
