Overcoming DNA Stochastic Effects
M.W. Perlin, "Overcoming DNA stochastic effects", Northeastern Association of Forensic Scientists 2010 Annual Meeting, Manchester, VT, 10-Nov-2010.
Talk
PowerPoint presentation with live audio recording of the Northeastern Association of Forensic Scientists 2010 talk.
Download Handout
Download PowerPoint
Abstract
All natural phenomena exhibit random variation, often termed "stochastic" effects. With DNA evidence, most of the peak height variation arises from the polymerase chain reaction (PCR) used to amplify short tandem repeat (STR) genetic loci. At each cycle of the random PCR branching process, a DNA fragment is re-amplified (or not) with some probability. Low amounts of DNA template, or additional PCR cycles, increase this inherent amplification peak variability.
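As an informal illustration (not part of the talk), the branching behavior described above can be sketched in a few lines of Python. The cycle count, amplification efficiency, and copy numbers below are arbitrary placeholders; the point is only that lower starting template yields larger relative variation in the final product.

```python
import numpy as np

def simulate_pcr(start_copies, cycles=28, efficiency=0.85, rng=None):
    """Branching-process sketch of PCR: at each cycle, every DNA fragment
    is re-amplified (or not) with probability `efficiency`."""
    rng = rng if rng is not None else np.random.default_rng()
    copies = start_copies
    for _ in range(cycles):
        copies += rng.binomial(copies, efficiency)  # number of fragments that duplicate this cycle
    return copies

rng = np.random.default_rng(0)
for start in (10, 100, 1000):  # low vs. higher amounts of DNA template
    finals = np.array([simulate_pcr(start, rng=rng) for _ in range(200)], dtype=float)
    # Coefficient of variation: relative variability shrinks as starting template grows.
    print(start, round(finals.std() / finals.mean(), 3))
```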
DNA analysts follow interpretation guidelines that use peak height thresholds. These laboratory-calibrated thresholds attempt to tame uncertainty by declaring that peaks over threshold are true alleles, whereas those under threshold are not. This all-or-none threshold process can discard considerable identification information, since (a) DNA peaks below threshold often represent actual amplified genetic material and (b) quantitative STR peak heights can tell us how much of each individual contributed to the sample. Moreover, while threshold methods may reduce the possibility of interpretation error, they cannot eliminate it entirely.
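A minimal sketch of the all-or-none threshold step (the 150 RFU threshold and the peak heights are hypothetical): everything below threshold is simply dropped, along with the quantitative height information.

```python
def threshold_call(peaks, threshold=150):
    """All-or-none interpretation: peaks at or above the threshold (in RFU)
    are declared true alleles; everything below is discarded."""
    return sorted(allele for allele, height in peaks.items() if height >= threshold)

# Hypothetical single-locus data: allele -> peak height in RFU.
locus = {11: 620, 12: 95, 14: 310}   # the 95 RFU peak may still be real amplified DNA
print(threshold_call(locus))         # [11, 14] -- sub-threshold allele and all heights are lost
```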
The 2010 SWGDAM DNA interpretation guidelines have the effect of raising STR review thresholds, in an attempt to reduce uncertainty by discarding more DNA data. Unfortunately, this approach can make highly informative DNA evidence less useful by either artificially lowering a match score or rendering it entirely "inconclusive". However, the same guidelines offer two mechanisms for scientifically preserving DNA match information.
The new SWGDAM guidelines permit a validated "probabilistic genotype" method to be used in place of peak thresholds (paragraph 3.2.2). Modern statistics enables computers to capture data uncertainty in probability models. Peak height uncertainty can then help infer genotype possibilities and their associated probabilities. By concentrating genotype probability more heavily on those allele pairs that are better supported by the data, this approach preserves DNA match information to the actual culprit.
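The idea can be sketched for a single contributor as follows (this is not the validated method referenced in the guidelines; the Gaussian peak model, mean height, and data are all assumptions): each candidate allele pair is scored by how well it explains the observed peak heights, and the scores are normalized into genotype probabilities.

```python
import itertools, math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def genotype_probabilities(observed, alleles, mean_height=400.0, sd=120.0):
    """Score every candidate allele pair against the observed peak heights
    (simple Gaussian peak model), then normalize to probabilities."""
    scores = {}
    for pair in itertools.combinations_with_replacement(sorted(alleles), 2):
        likelihood = 1.0
        for a in alleles:
            expected = mean_height * pair.count(a)          # 0x, 1x, or 2x contribution
            likelihood *= normal_pdf(observed.get(a, 0.0), expected, sd)
        scores[pair] = likelihood
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}

# Hypothetical low-level data: probability concentrates on the allele pairs
# best supported by the quantitative peak heights, instead of a yes/no call.
observed = {11: 620, 12: 95, 14: 310}
probs = genotype_probabilities(observed, [11, 12, 13, 14])
for genotype, p in sorted(probs.items(), key=lambda kv: -kv[1])[:3]:
    print(genotype, round(p, 4))
```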
The new guidelines also allow combining DNA data (paragraph 3.4.3.1). Such combination is a well-established method in statistical science, implemented using a "joint likelihood function" that coherently explains the separate data components. These mathematical solutions further concentrate genotype probability on allele pairs supported by the data, and can thereby preserve more DNA identification information (i.e., yield higher match scores). Unlike "consensus" or "composite" human review procedures, the joint likelihood is solidly based on mathematical probability, and is widely used in many areas of science.
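Under the same simplification, a joint likelihood over replicate amplifications is just the product of each genotype's per-replicate likelihoods, renormalized. The replicate numbers below are hypothetical and only illustrate how probability concentrates further when the data are combined.

```python
import math

def combine_replicates(replicate_likelihoods):
    """Joint likelihood over independent amplifications of the same locus:
    multiply each genotype's per-replicate likelihoods, then normalize."""
    genotypes = replicate_likelihoods[0].keys()
    joint = {g: math.prod(rep[g] for rep in replicate_likelihoods) for g in genotypes}
    total = sum(joint.values())
    return {g: v / total for g, v in joint.items()}

# Two hypothetical replicate runs, each ambiguous on its own.
rep1 = {(11, 14): 0.6, (11, 12): 0.4}
rep2 = {(11, 14): 0.7, (11, 12): 0.3}
print(combine_replicates([rep1, rep2]))
# {(11, 14): ~0.78, (11, 12): ~0.22} -- more concentrated than either run alone.
```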
This talk will describe DNA interpretation approaches for addressing stochastic effects. We will show how thresholds discard match information, whereas probabilistic genotypes preserve it. We will also show how joint interpretation of multiple locus experiments can further preserve information. The presentation will include DNA results from cases and studies that illustrate these ideas. Modern computer interpretation of all the DNA evidence can overcome stochastic effects and preserve identification information.