Today’s LA Times story about the Los Angeles Police Department’s use of facial-recognition software to spot violators of gang injunctions is mostly routine, but that makes two aspects of it even drearier than they would otherwise have been.
First is the rote, mechanical objection by Ramona Ripston, the head of the Southern California ACLU. She doesn’t even bother to come up with an actual scenario under which the use of facial-recognition software would produce problems different in kind or in important degree from the general problem of mistakenly grabbing the wrong suspect.
Ripston cites, rather nonsensically, the problem of operator bias; that’s a legitimate objection to the Breathalyzer as used in practice, but I can’t imagine how it would be a problem when a computer runs a face-matching program. What’s the cop going to do, hit the computer until it admits there’s a match? Hack the program so it falsely records matches as having been made?
Of course, I’m relying on the reporter here; perhaps Ms. Ripston had some real objections. (For example, the story doesn’t make it clear whether the matching program gets run again once the subject is stopped — allowing a better picture — and before he’s arrested, or whether the fingerprint check is a field test or a station-house test. There’s a big difference between a mistaken stop and a mistaken arrest.)
But the giveaway, it seems to me, comes when Ripston worries out loud about the false negatives: cases where the machine fails to record a match when the suspect’s picture was actually in the book. How did that get to be a civil-liberties problem? In any case, if the ACLU fails to strangle the technology in its crib, no doubt it will progress according to Moore’s Law, reducing cost, false negatives, and false positives.
I’ve had some dealings with the SoCal ACLU, as well as with the national group, around the drug issue, and Ripston’s comments in the story match my strong impression that the ACLU feels about crime the way the smelting industry feels about air pollution or the Bush Administration feels about torture: against it in principle, but in practice vehemently against anything that might actually control it.
That’s really too bad: in the era of Abu Ghraib, Guantanamo, and supermax prisons, we need a strong organization to challenge the excesses of law enforcement and national security agencies. But the ACLU’s imitation of Groucho Marx as a college president in Horse Feathers (I don’t care who proposed it or commenced it: I’m against it!) gets old fast, both when it comes to terrorism and when it comes to ordinary crime. If they’re automatically against everything, then why bother to listen to them?
[None of this is new, alas. Having edited the newsletter of the Maryland CLU while still in high school, I quit the organization a few years later when it announced that opposition to nuclear power was a civil liberties issue, because workers at nuclear plants would have to pass background checks. No, I’m not making this up.]
Does anyone know of a genuinely non-ideological civil liberties group that acknowledges tradeoffs in practice as well as in principle? I feel I owe them a check, but I don’t know where to send it.
The other bit of dreariness in the story was the quality of the reporter’s technical prose, reflecting the quality of his technical knowledge. Here’s the full text of the relevant sentence:
“Hartmut Neven, developer of the software the LAPD is trying out, says his system uses an algorithm to translate various parts of the face into complex mathematical patterns employed to develop unique numerical templates.”
Before reading that sentence, I could have guessed that the system used an algorithm. After reading it, I know precisely nothing more, except that the reporter probably couldn’t define “algorithm” on a bet. What on Earth is a “unique numerical template”?
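For what it’s worth, here’s roughly what that sentence was probably gesturing at, sketched in a few lines of Python. This is my own illustration, not Neven’s actual system: the usual idea is that the software reduces a face image to a fixed-length vector of numbers (the “template”), and two faces count as a match when their templates are close enough under some distance measure. The features, numbers, and threshold below are all invented for illustration.

```python
import math

def template_distance(a, b):
    """Euclidean distance between two face templates (lists of floats).

    In a real system the templates would be high-dimensional vectors
    computed from measurements of the face, not hand-written numbers.
    """
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(probe, gallery_template, threshold=0.6):
    """Declare a match when the two templates are within the threshold.

    The threshold is where the false-positive/false-negative tradeoff
    lives: lower it and you flag fewer innocent people but miss more
    real matches; raise it and the reverse.
    """
    return template_distance(probe, gallery_template) < threshold

# Toy example: a template from the gang-injunction photo book, plus two
# hypothetical field photos run against it.
book_template = [0.12, 0.80, 0.33, 0.55]
same_person   = [0.15, 0.78, 0.30, 0.57]  # same face, slightly different photo
other_person  = [0.90, 0.10, 0.75, 0.20]  # a different face entirely

print(is_match(same_person, book_template))   # → True
print(is_match(other_person, book_template))  # → False
```

Note where the threshold sits also explains why the field check and the station-house check needn’t behave the same: a better photo yields a template closer to the truth, so the same threshold produces fewer errors.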
If reporters aren’t technically trained in the stuff they cover, and if they aren’t willing to do the work to understand it well enough to explain it, then why don’t they just let it be, rather than printing such nonsensical pseudo-explanations? And did an actual live editor look at that story, or is the editing done by algorithm too?