77% of those proven to have been wrongly convicted were convicted because of a bad eyewitness identification. On average, witness I.D. is no better than a coin toss. But jurors believe it as if it were irrefutable. Apart from an outright confession, nothing is more persuasive.

There are other factors, of course, that have gone into false convictions. And because the categories overlap a little — a case can involve both a false confession and a bad ID — the percentages don't have to sum neatly; it seems to be just coincidence that these two factors add up to 100%.

Be sure to share your comments in the Class Participation section below -- that's often the best part! The comments are never closed; you're always welcome to add to the discussion. Also, if you get tired of clicking on the buttons, you can always use the arrows on your keyboard ← → to move around.

Join the conversation! There are now 22 comments on “It Was You! pg 08”
  1. Tualha says

    So, are defense attorneys not allowed to tell juries how horribly unreliable eyewitness testimony is? Are there not expert witnesses who can testify to that? Or is this one of those things that’s only available if the defendant has money?

    • We’ll get to this later, but the short answer is: It depends on your jurisdiction.

      In the overwhelming majority of jurisdictions, the defense may not introduce expert testimony on how unreliable eyewitness identifications are. There are two reasons: (1) How inaccurate witnesses may be in general has nothing to do with whether this witness is inaccurate; and (2) It’s already common knowledge (ha!) and so we don’t need an expert to explain it.

      I’ll be covering all this and more.

  2. Andrew M Farrell says

    Is this just for eyewitness identifications of strangers?
    How much of misidentification is due to stress and how much is due to looking through a dirty screen and a bunch of leaves when getting a brief glance at defendants walking out of the sack of spuds?

  3. Avonidas says

    I think there’s a potential problem here: Eyewitness testimony is much less reliable than DNA evidence, when said evidence is handled PROPERLY. However, isn’t DNA evidence much, much easier to tamper with than eyewitness memory, provided someone has a motive and access to it?

    Of course, an eyewitness can be intimidated or bribed in order to give false testimony, but I think most jury members are confident that such deception will be revealed. Perhaps their intuition is wrong; perhaps familiarity with the old methods breeds false confidence. But the question remains: is DNA evidence safe enough from tampering?

    • DNA analysis is a great tool, when done correctly. It is true, however, that mistakes can be made. The police who collect DNA evidence, the lab techs who analyze it, and the experts who testify about it are all perfectly able to screw it up. They’re all just people, after all. There are so many people involved at every stage of the process that it’s hard for mistakes NOT to happen. The trick is to minimize the mistakes, and whatever effect they might have on the outcome.

      The science itself is solid and reliable. Done right, it is fantastic at excluding people — showing that the DNA in evidence did not come from them. It is also very good at matching a person to a DNA sample closely enough that we can be super confident that it came from the same person. The odds against a random match are astronomical.

      DNA doesn’t tell you what happened, however. Nor does it tell you who did it, how it happened, or why. It doesn’t even tell you whether the DNA evidence itself is even connected to the crime. All it can tell you is whether this bit of hair, semen, saliva, blood, or what have you came from this person.

      DNA analysis doesn’t try to match the entire genome. That’s impractical and unnecessary. Instead, it looks at a handful of locations where a group of 4 letters repeats itself over and over (like GATT-GATT-GATT). These are called “Short Tandem Repeats” (STR). There are places where everyone has an STR, and the number of times it repeats is different from person to person. Each of these places is called a “locus.” You might have a locus that repeats GATT 8 times, while I might have one that repeats it 11 times. At that locus, you’d be Allele 8 and I’d be Allele 11.

      Actually, we each got one set of chromosomes from our mothers, and one from our fathers. So at that locus we’d each have two alleles. You might have 8 and 17, and I might have 11 and 5.

      If we look at 13 loci, the odds of anyone else having the exact same alleles as you at all 13 loci is astoundingly small. Close relatives will have more similarities, though, and a lot of DNA cases involve close relatives, so this is something to keep in mind.
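To get a feel for why the odds are so small, here's a minimal sketch of the "product rule" used to combine loci. The genotype frequencies below are made-up illustrative numbers, not real population data — the point is just that multiplying 13 modest fractions together gives an astronomically small result:

```python
# Hypothetical per-locus genotype frequencies (illustrative only, NOT
# real population data). Under the product rule, the random-match
# probability is the product of the frequencies across independent loci.
freqs = [0.08, 0.10, 0.05, 0.12, 0.07, 0.09, 0.06,
         0.11, 0.08, 0.10, 0.05, 0.07, 0.09]

rmp = 1.0
for f in freqs:
    rmp *= f

print(f"Random-match probability across {len(freqs)} loci: {rmp:.2e}")
print(f"That's roughly 1 in {1/rmp:.1e}")
```

Even with each individual locus matching about 1 person in 10 or 20, the combined probability lands in the one-in-hundreds-of-trillions range.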

      The problem is, it’s not always clear whether the DNA sample’s alleles are the same as yours. Instead of an objectively straightforward report, the data often require a lot of subjective interpretation. Which means human foibles are now affecting our conclusions. There is room for human error.

      DNA samples are often very small — a droplet of blood, a hair follicle, a cigarette butt with some saliva on it. Because they’re small, there is room for contamination with someone else’s DNA. A cop might sneeze, or open his mouth to talk and some saliva mists out, and he gets his DNA everywhere. A cop might not see the droplets of blood at first, and touch them with his bare hands. Or maybe he rubbed his eyes with his gloves, or touched other evidence with them and transferred DNA from thing 1 to thing 2. Even if the police are as careful as possible, they can’t control for contamination from other sources before they got there. Any contamination gets multiplied, because we make lots and lots of copies of the DNA sample in order to get enough to test.

      Contaminated DNA samples can be very hard to interpret, and require much more subjective interpretation.

      Lots of police use plastic bags to voucher evidence. This is a bad idea. Plastic bags retain moisture, and moisture degrades DNA. Plastic bags also let in light, which breaks down DNA. Single-use sanitary paper bags or envelopes are the way to go. Sometimes the police re-use a bag, or use a bag that’s been lying around collecting dust. Sometimes they staple it shut, poking holes in the bag. Very often, they put multiple items in a single bag. I’ve had cases where the police took the evidence, and then took a saliva sample from their suspect, and then put BOTH in the same bag to send to the lab!

      When the lab gets the evidence, they almost never start analyzing it immediately. Labs are so backed up, it’s not even funny. The DNA evidence goes in a queue to wait its turn. Over time, that DNA degrades. Things get lost, confused, contaminated. Previous contamination can get worse. (One nice thing that labs do is they take notes. They take notes when they do something. They take notes when something goes wrong, and when they take corrective action.)

      The same concerns we have with collection and storage by the police are concerns with labs. Was the storage place warm from all the equipment being operated nearby? That degrades DNA. Did they properly seal the evidence when they received it? Were their work stations sanitized after each analysis? Really? Were the techs interrupted while working? Did they do one thing while another thing was still in progress? Were the evidence and the suspect’s samples tested at different times and in different places, or together?

When the lab actually gets to the analysis part, there’s not much to do. First, they amplify the DNA with PCR: a machine cycles the vial from 95C to 60C to 72C about 30 times over several hours, roughly doubling the DNA each cycle — a billionfold increase — and copying just the STR regions you’re looking for. Then another machine separates out the alleles, counts them, and prints out a pretty graph.
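A quick arithmetic sketch of where the "billionfold" figure comes from, assuming each cycle doubles the DNA (real reactions fall a bit short of perfect doubling):

```python
# ~30 PCR cycles, each roughly doubling the DNA present.
cycles = 30
amplification = 2 ** cycles
print(f"{cycles} doubling cycles -> {amplification:,}x amplification")
# 2**30 is 1,073,741,824 — just over a billion.
```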

      It’s not rocket science. It’s not impressive. It’s repetitive and dull and routine. And routine leads to inattention and error.

      They do this for the evidence, and for the sample taken from the suspect. Then they compare the two. If all the alleles match, then they say they “cannot exclude” the suspect. If they’re not the same at every locus, then the suspect is “excluded” — this doesn’t mean he’s innocent, only that this wasn’t his DNA. If the graph can’t be interpreted well enough, then it’s called “inconclusive.” What counts as “well enough” can be a sticky issue, however.

      It really does happen that you can have two sequences that are not identical, and yet the expert will testify that there’s a match.

      The lab techs are not doing science. The science was done by the people who designed the test, not by the people who run it. The lab isn’t conducting controlled double-blind experiments. They don’t have time. But that’s precisely what you’d need, to ensure that the results are reliable. Because people are very susceptible to what’s called “confirmation bias.” We see what we’re looking for. So when an analyst knows what he’s looking for, he’s more likely to see it, even if it doesn’t exist. We interpret the world to fit our expectations.

      Cops (and impatient prosecutors) tell the lab all the time about the case. Why they want the evidence. What they want the evidence to prove. How badly they want the evidence to prove it. This does affect the analyst’s subjective interpretations, no matter how loudly he protests otherwise.

There are several things that are subject to subjective interpretation. The most common glitch is called “stutter,” which is usually a very small peak just to the left of the main peak for an allele. It happens all the time, an artifact of the PCR process. Once in a while, the enzyme that copies the DNA goofs, and skips over one of the repeats. Not often, but just enough that you see a little bump that’s one repeat smaller. Usually these are teeny tiny little bumps like background noise. But analysts can call a peak “stutter” even when it’s fairly high, as much as a quarter or a third of the height of the peak next to it.

      But you also get peaks of that size when the DNA is contaminated with DNA from another person. If the contamination is smaller than the main sample, the peak for that other person’s alleles won’t be as high, because there won’t have been as many copies. Some of that person’s alleles might not even get noticed. So you could be looking at a chart for two or more people’s DNA, but the analyst is interpreting it as belonging to a single person. (Stutter is usually obvious, but not always.)

      Sometimes you see a twin peak where the PCR enzyme added an A to the end of each STR at some point in the process. Now you’ve got twin peaks over 11 and 12. The machine calls it an 11. The analyst knows the suspect has 12, believes the machine was incorrect, and calls the 11 allele an artifact. So he goes in and changes the computer data. Lab techs can do that. (That allele looks more like stutter to this analyst? Deleted. You’ll never see it. All in the interests of clarity of course.)

      Mixtures of more than one person’s DNA are especially susceptible to misinterpretation. For any given locus, if you’ve got two people in the mix, there are four alleles to look at, with six different ways to group them (A+B A+C A+D B+C B+D C+D). If there are three people in the mix, there are 15 possible pairs. Sometimes they might overlap. Any time you have a mixture, it’s a judgment call for the analyst to say which alleles belong to whom.
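The pair counts above are just "n choose 2," which is easy to verify. A minimal check using Python's standard library:

```python
from itertools import combinations

# Two-person mixture: up to 4 distinct alleles at a locus,
# so C(4,2) = 6 candidate pairings.
alleles_two = ["A", "B", "C", "D"]
pairs_two = list(combinations(alleles_two, 2))
print(len(pairs_two), pairs_two)   # 6 pairs: AB AC AD BC BD CD

# Three-person mixture: up to 6 distinct alleles, C(6,2) = 15 pairs.
alleles_three = list("ABCDEF")
print(len(list(combinations(alleles_three, 2))))   # 15
```

And that count grows fast: each extra contributor adds two more alleles, multiplying the ways the analyst can carve up the chart.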

      Sometimes an analyst will look at a mixture and interpret it as a degraded sample, when the peaks get smaller from left to right. It’s not a uniform diminution, however, so it’s a matter of interpretation sometimes to say it was one thing and not another.

      Does the chart contain a dye blob? That’s just a smear when the dye in the machine globs together. You can never tell if it’s obscuring an actual peak, though. Ideally, the lab would just run the test again to get a chart without the blob, but if you’re seeing a blob that probably means they didn’t. But don’t be surprised if the expert testifies that the allele(s) they were looking for is(are) in that smear.

      You get spikes when the machine’s voltage was uneven for a moment. These are usually narrower than true peaks, but true peaks are already pretty narrow to begin with, so unless you’ve actually got a printout of voltage data, it’s a judgment call.

      Bleed-through and pull-up are what you get when a peak is so strong it affects a reading in a different locus. The more STRs, the stronger the effect. These phantom alleles can be interpreted to say it’s person X’s DNA when it really wasn’t.

      Little peaks can be called full alleles, excused by saying there wasn’t enough DNA in the sample, or it was too degraded. Missing alleles can be explained away the same way. But that’s interpretation.

      I’m rambling. The point is, it’s not a matter of looking at objective data. The data very often require some sort of judgment call to interpret what they mean.

      A third area that can be problematic is — believe it or not — the statistical analysis. This is basic math, but you’d be amazed how many people, experts included, don’t really understand how statistics work. Just like the “birthday paradox,” where you only need a group of 23 people for the odds of two of them sharing the same birthday to be 50-50, when the odds of a random DNA match are a trillion to one, you only need a bit more than a million people for a 50-50 chance that two unrelated people will have the same profile. That’s a medium city, not “more than the number of people who have ever lived.”
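The birthday-paradox numbers check out with the standard approximation: among N equally likely profiles, you need roughly sqrt(2 · ln 2 · N) people for a 50-50 chance of some collision. A quick sketch:

```python
import math

def people_for_even_odds(num_profiles):
    """Approximate group size for a 50% chance that two people in
    the group share a profile, given equally likely profiles."""
    return math.sqrt(2 * math.log(2) * num_profiles)

# Classic birthday case: 365 possible birthdays.
print(people_for_even_odds(365))        # ~22.5, close to the exact answer of 23

# 1-in-a-trillion DNA profiles.
print(people_for_even_odds(10 ** 12))   # ~1.18 million people
```

So "a bit more than a million" is right: the population of one medium-sized city, not of all human history.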

      Ditto for false positives and false negatives. They happen, but try and find someone in a courtroom who understands that a test with a 99% accuracy rate can mean that one in eleven positive matches is wrong.

      Anyway, this got really long really fast, and I don’t have time to shorten it. Suffice it to say that DNA evidence is not perfect, it can’t tell you everything, and it’s a lot more subjective than you might think.

      That said, it’s a fucking awesome tool. Used properly.

  4. Alec Neal says

    >test with a 99% accuracy rate can give you one in eleven wrong positive matches
    The rest of the maths in here makes sense to me, but I’m missing it on this one.

    • That’s because my saying “99% accuracy” didn’t give you enough information.

Let’s say you have 1000 tests. 990 will be correct. 10 will be wrong. Let’s say there are 5 false positives and 5 false negatives.

      Even that’s not enough information. We also need to know how many correct matches there were, and how many correct exclusions there were.

A not-unrealistic breakdown might be: out of a thousand tests, 50 are correct positive matches, and 940 are correct exclusions (people we know didn’t do it — uninvolved people, police officers at the scene, etc.).

That gives us 55 positives overall. 5 of those 55 are false positives. The odds that any given positive is false? 5 in 55, or 1 in 11.
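That arithmetic can be checked in a few lines, using the same illustrative numbers as the example:

```python
# Same hypothetical breakdown as above: 1000 tests, 99% accurate.
tests = 1000
true_positives = 50
false_positives = 5
false_negatives = 5
true_negatives = tests - true_positives - false_positives - false_negatives  # 940

positives_reported = true_positives + false_positives  # 55 reported "matches"
share_wrong = false_positives / positives_reported     # 5/55

print(f"{positives_reported} positives, of which {false_positives} are false")
print(f"1 in {positives_reported // false_positives} reported matches is wrong")
```

The key is the base rate: because true matches are rare relative to exclusions, even a small error rate produces false positives that are a sizable fraction of all the positives you see.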

      • Wow, that was some analysis there! :-) I wanted to ask about that “99% accuracy” too. So, to sum it up, you mean that the odds of a false positive *among positives* can be very high, just because a positive is so rare to begin with?

        “This is basic math, but you’d be amazed how many people, experts included, don’t really understand how statistics work.”

        *Sigh* no, I really wouldn’t :-( Statistics is one of those areas of expertise everyone believes they have intuitively mastered — contrary to rocket science, which they know they haven’t.

        I remember reading Richard Dawkins’ “Unweaving the Rainbow”; In chapter 5, he rambles on about statistics in general and DNA evidence in particular (and he’s not very fond of lawyers, apparently…) So, I had some idea of the methods and pitfalls of DNA evidence, though your “ramble” was much more thorough and informative. Thanks! :-)

        • In your first paragraph, you have the right of it. If the number of true positives in the population is small, then it’s easy for their numbers to be comparable to (or even dwarfed by) the number of false positives the test will find.

  5. Tualha says

    Assuming each kind of jurisdiction has had that rule for a while (say, at least 20 years), it would be interesting to compare their rates of false convictions, controlling for other factors. I suspect the ones that do allow such testimony would have a statistically significant reduction in false convictions.

    Not that I have the faintest idea how to do such an analysis. Just spinning ideas.

  6. I’m curious, what’s the normal proportion of evidence that results in a conviction? For example, if 77% of all convictions were the result of eyewitness testimony, then the fact that 77% of all false convictions also resulted from it wouldn’t be surprising. But if eyewitness convictions lead to false convictions more often than true ones, then there’s a problem.
