On TV dramas like CSI: Crime Scene Investigation, the tiniest shreds of DNA are like magic keys, unlocking the identities of criminals with the speed of a supercomputer and the authority of science. In reality, DNA forensics isn’t nearly so exact, especially when the genetic material at a crime scene comes from more than one person.
Analyzing these DNA mixtures isn’t about achieving certainty. It’s about partial matches, probabilities, big-time math, and a healthy dose of judgment calls by forensic scientists.
“There are no national guidelines or standards saying that labs have to meet some critical threshold of a match statistic,” to conclude that a suspect might have been at a crime scene, says Catherine Grgicak, assistant professor of biomedical forensic sciences at Boston University.
Neither are there guidelines about when a DNA mixture is simply too complicated to analyze in the first place. Often, labs aren’t even certain how many people contributed to the jumble of DNA detected on a weapon or the victim’s clothing. Plus, the evidence may contain very little genetic material from some or all the contributors, and may include DNA degraded by heat and light.
Given the weight of DNA evidence in court, this uncertainty concerns many trial attorneys, forensic scientists, and federal authorities, who hope that additional training in handling DNA mixtures, along with number-crunching software, will bring more reliability to the interpretation of complex DNA evidence.
“It’s a problem,” says Sheree Hughes-Stamm, a forensic science professor at Sam Houston State University’s College of Criminal Justice. “It’s a problem of reliability with the interpretation of the results, rather than the science” that yields those results, she adds. “Human interpretation is going to differ, and you risk misinterpreting the profile.”
Grgicak is among the forensic researchers trying to reduce this risk. She and her team want to help crime labs unwind this genetic evidence to help identify the guilty without entangling the innocent.
Her team has developed two pieces of software. The first, called NOCIt (short for “number of contributors”), uses statistical analysis to estimate how many people’s DNA is part of the evidence, assigning a probability to each possibility from one to five contributors. The second, called MATCHit, compares the DNA mixture to a suspect’s DNA and computes a match statistic, known as a “likelihood ratio,” gauging how strongly the evidence supports the conclusion that this person contributed to the genetic mixture from the crime scene. Grgicak’s team aims to combine NOCIt and MATCHit into a single tool for forensic labs by 2017.
“Mixture analysis is a murky part of DNA forensics,” says Greg Hampikian, a forensic biologist at Boise State University in Idaho.
“Errors in DNA forensics can be multiplied in the justice system,” says Hampikian. Often, DNA is used to corroborate otherwise flimsy evidence. But because of DNA’s vaunted reputation, Hampikian says, “suddenly, all this weak evidence gets propped up by science.”
How could this gold standard of forensic evidence become so tarnished? Basically, our ability to detect DNA from a crime scene has outstripped our ability to make sense of it. When DNA forensic science began in the 1980s, the tests didn’t work well unless investigators were able to gather a lot of DNA from one person, and so they were rarely used in court.
Since then, Grgicak says, the tests have become more than 100 times more sensitive, prompting investigators to swab more of the crime scene for genetic material—well beyond the bloody knife, to things like skin cells left on a computer keyboard or a doorknob.
“We have very sensitive techniques that give us these more complicated mixtures,” explains Robin Cotton, associate professor and director of biomedical forensic sciences at Boston University. “We need to be able to analyze this evidence. Otherwise, you just throw your hands up in the air and give up, which doesn’t do anybody any good.”
The first step to making sense of a DNA mixture, Grgicak explains, is to figure out how many people contributed to it. That number is the basis for nearly every other conclusion about the evidence.
So, Grgicak and collaborators at Rutgers University and the Massachusetts Institute of Technology spent years developing NOCIt—computational algorithms that could sort through all the possible combinations of DNA hits in a piece of evidence, taking into account their prevalence in the general population, to determine the likelihood that the genetic material came from one, two, three, four, or five people. A paper describing the software appeared in the journal Forensic Science International: Genetics in May.
In tests using mock evidence, NOCIt might conclude that one mixture is 99.9 percent likely to have two contributors, for instance. Or it might estimate a 35 percent likelihood of three contributors and a 65 percent likelihood of four contributors. In these studies, Grgicak’s team treats any count with a probability over one percent as a possible answer to the number of DNA contributors.
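The arithmetic behind that kind of readout can be sketched in a few lines. The snippet below is only a toy illustration of the normalize-and-threshold step; the scoring function and the likelihood values are invented for demonstration and are not NOCIt’s actual statistical model, which evaluates allele combinations against population frequency data.

```python
def contributor_probabilities(likelihoods, threshold=0.01):
    """Normalize per-count likelihood scores into probabilities and
    keep only the contributor counts above the reporting threshold."""
    total = sum(likelihoods.values())
    probs = {n: score / total for n, score in likelihoods.items()}
    return {n: p for n, p in probs.items() if p > threshold}

# Hypothetical (made-up) likelihood scores for one to five contributors
scores = {1: 1e-9, 2: 1e-8, 3: 0.35, 4: 0.65, 5: 1e-6}

# With a 1% threshold, only "three contributors" (~35%) and
# "four contributors" (~65%) survive as possible answers.
print(contributor_probabilities(scores))
```

The one-percent threshold is what turns a smear of probabilities into a short list of plausible contributor counts that an analyst can act on.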
In September 2014, the Department of Defense awarded Grgicak’s lab a contract to turn their NOCIt prototype (free to download online) into something ready to be adopted by forensic labs nationwide.
The ultimate goal, of course, is to increase the certainty that a suspect’s DNA is or isn’t part of the crime scene evidence. The MATCHit prototype is a bare-bones program that asks for the numbers the algorithm will crunch: the number of contributors, and how common each DNA variation is in the general population, according to a database such as the one compiled by the National Institute of Standards and Technology.
In addition to generating a match statistic between the suspect and the crime scene evidence, the program also yields a common statistical measure called a “p value,” which indicates how likely it is that a random person’s DNA would produce a match statistic as strong as (or stronger than) the suspect’s.
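For the simplest single-source case, both quantities can be illustrated in miniature. Everything below is a hypothetical sketch, not MATCHit’s actual model (which handles multi-person mixtures): the two loci, the allele frequencies, and the evidence profile are all invented, and the likelihood ratio here reduces to one over the random-match probability under Hardy-Weinberg assumptions.

```python
import random

# Hypothetical population allele frequencies at two invented loci
FREQS = {
    "locus_A": {"a1": 0.1, "a2": 0.9},
    "locus_B": {"b1": 0.3, "b2": 0.7},
}

def genotype_freq(locus, alleles):
    """Hardy-Weinberg frequency of an unordered two-allele genotype."""
    p, q = (FREQS[locus][a] for a in alleles)
    return p * q * (2 if alleles[0] != alleles[1] else 1)

def likelihood_ratio(profile):
    """Single-source LR: evidence is certain if the matching suspect left
    it, versus the chance a random person carries the same genotype."""
    lr = 1.0
    for locus, alleles in profile.items():
        lr /= genotype_freq(locus, alleles)
    return lr

def random_profile():
    """Draw a random person's genotypes from the population frequencies."""
    return {
        locus: tuple(random.choices(list(f), weights=list(f.values()), k=2))
        for locus, f in FREQS.items()
    }

evidence = {"locus_A": ("a1", "a1"), "locus_B": ("b1", "b2")}
lr = likelihood_ratio(evidence)

# Empirical p-value: how often a random person's DNA matches the
# evidence, and so scores a match statistic at least this strong
trials = 100_000
hits = sum(
    all(sorted(random_profile()[loc]) == sorted(evidence[loc])
        for loc in evidence)
    for _ in range(trials)
)
print(f"LR = {lr:.1f}, empirical p-value = {hits / trials:.4f}")
```

A large likelihood ratio paired with a small p-value is what lets an analyst say the match is unlikely to be coincidence; real casework multiplies evidence across many more loci than the two shown here.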
As with NOCIt, the question with MATCHit is: where does a forensic lab draw the line in interpreting these probabilities? So far, Grgicak’s lab has tested MATCHit on DNA mixtures of one, two, and three people (the goal is five), and it has performed well.
“We know, at least from our own early tests of MATCHit, that we have not falsely included individuals using that threshold,” says Grgicak, “and that’s the most important thing.”
The US Department of Justice and Department of Defense have funded the work.
Source: Condensed from the original by Boston University