17 August 2006

Thought Surveillance Agency

That's what the TSA wishes their abbreviation stood for apparently. Both fortunately and unfortunately, their goals are (at least currently) pseudoscientific pipe dreams. Via Scaramouche, we find that the TSA wants to start polygraphing everyone before letting them through security. To shamelessly reappropriate Wolfgang Pauli's infamous remark, it's so bad it's not even wrong. I don't even know where to begin with why such a system is a bad idea. So let's just start at the top of this disorganized mess of a WSJ article:
With one hand inserted into a sensor that monitors physical responses, the travelers used the other hand to answer questions on a touch screen about their plans. A machine measured biometric responses -- blood pressure, pulse and sweat levels -- that then were analyzed by software. The idea was to ferret out U.S. officials who were carrying out carefully constructed but make-believe terrorist missions.
Oh, yeah, that's going to work great for testing and calibrating a system designed to catch actual terrorists. 'Cause everyone knows a TSA employee running a security test will have the same kind of psychological response to the prospect of being caught as will a terrorist. I'm sorry, but "stupid" doesn't even begin to describe that assumption. (In all seriousness, sometimes what seems silly at first can be true, so I'm willing to be empirically convinced that this is a valid inference. But just assuming it is downright foolish.) Continuing:
The trial of the Israeli-developed system represents an effort by the U.S. Transportation Security Administration to determine whether technology can spot passengers who have "hostile intent."
So basically, they want to know if you have "hate in [your] heart" when you get on a plane to, say, visit family that you hate or go on a business trip to meet with that sombitch David Nelson. Now on to the buzzwords:
In effect, the screening system attempts to mechanize Israel's vaunted airport-security process by using algorithms, artificial-intelligence software and polygraph principles.
Here's a handy-dandy translation guide. "Algorithms" = "computer program". "Artificial-intelligence software" = "marketing bullshit that means no such thing (yet)". And "polygraph principles" = "pseudoscience, intimidation, and wishful thinking". We're not even halfway done yet:
The test alone signals a push for new ways to combat terrorists using technology. Authorities are convinced that beyond hunting for weapons and dangerous liquids brought on board airliners, the battle for security lies in identifying dangerous passengers.
Wow, what brilliant insight by the authorities. Who would have ever thought that stopping terrorists might involve trying to, you know, identify terrorists? As to the bit of techno-worship, technology is a tool like any other. Or rather, a "tool" is just "technology" that's been around a long time.
The method isn't intended to catch specific lies, says Shabtai Shoval, chief executive of Suspect Detection Systems, the start-up business behind the technology dubbed Cogito. "What we are looking for are patterns of behavior that indicate something all terrorists have: the fear of being caught," he says.
Using polygraphs for something other than catching specific lies is when they're at their absolute worst, at least according to reputable sources like the National Academy of Sciences (the full executive summary is available free here). The short version is, the less precise the questions, the less clear it is what exactly you are measuring a response to. I'll come back to this in a bit. Just keep in mind Mr. Shoval's statement that they are looking for "fear of being caught," i.e., nervousness.
Security specialists say such technology can enhance, but not replace, existing detection machines and procedures.
Finally, some sanity! But it doesn't last long:
Some independent experts who are familiar with Mr. Shoval's product say that while his technology isn't yet mature, it has potential. "You can't replicate the Israeli system exactly, but if you can incorporate its philosophy, this technology can be one element of a better solution," says Doron Bergerbest-Eilon, chief executive of Asero Worldwide consulting firm and a former senior official in Israel's security service.
Let's skip over whether "the Israeli system" is desirable, since any mention of Israel is bound to piss someone off. Instead, let's focus on the fact that for an "independent expert" they talk to the CEO of a company that is in part a security technology marketing firm. From ASERO Worldwide's mission statement:
Helping market emerging companies producing innovative technologies whose potential has not yet been realized in the homeland security market and providing a consultancy service to the venture capital market oriented specifically to the homeland security sector.
Yeah, real impartial. ASERO also refers to APCO Worldwide, the propaganda firm that brought us TASSC (and probably Steve Milloy, based on his astroturf organization's co-location with TASSC before they collapsed), as a "partner." Just sayin'. Back to the article:
To date, the TSA has more confidence in people than machines to detect suspicious behavior. A small program now is using screening officers to watch travelers for suspicious behavior. "It may be the only thing I know of that favors the human solution instead of technology," says TSA chief Kip Hawley.
Kip must have a really depressing sex life.
Here is the Cogito concept: A passenger enters the booth, swipes his passport and responds in his choice of language to 15 to 20 questions generated by factors such as the location, and personal attributes like nationality, gender and age. The process takes as much as five minutes, after which the passenger is either cleared or interviewed further by a security officer.

At the heart of the system is proprietary software that draws on Israel's extensive field experience with suicide bombers and security-related interrogations. The system aims to test the responses to words, in many languages, that trigger psycho-physiological responses among people with terrorist intent.
Don't you just love how "proprietary" is so often said like it's a good thing? "Proprietary" means you don't know shit about what it really does, like with Diebold and ES&S voting machines. As to what it's looking for, just how the hell do you determine what words evoke those responses in terrorists but not in other people?
The technology isn't geared toward detecting general nervousness: Mr. Shoval says terrorists often are trained to be cool and to conceal stress.
Remember how I told you to keep in mind that this is supposed to detect "fear of getting caught"? I suppose you could parse that as being distinct from general nervousness, but I'll just let you ponder how Mr. Shoval's explanation isn't supposed to apply to his earlier statement.
Unlike a standard lie detector, the technology analyzes a person's answers not only in relation to his other responses but also those of a broader peer group determined by a range of security considerations. "We can recognize patterns for people with hostile agendas based on research with Palestinians, Israelis, Americans and other nationalities in Israel," Mr. Shoval says. "We haven't tried it with Chinese or Iraqis yet." In theory, the Cogito machine could be customized for specific cultures, and questions could be tailored to intelligence about a specific threat.
Call me cynical, but I just know the customization would be shamelessly misused, especially behind the cloak of "proprietary software."
The biggest challenge in commercializing Cogito is reducing false results that either implicate innocent travelers or let bad guys slip through.
Right, because what's important isn't whether it works but whether it's commercially successful.
Mr. Shoval's company has conducted about 10 trials in Israel, including tests in which control groups were given terrorist missions and tried to beat the system. In the latest Israeli trial, the system caught 85% of the role-acting terrorists, meaning that 15% got through, and incorrectly identified 8% of innocent travelers as potential threats, according to corporate marketing materials.
And this has what to do with its reliability in catching actual terrorists or distinguishing them from the general population?
The company's goal is to prove it can catch at least 90% of potential saboteurs -- a 10% false-negative rate -- while inconveniencing just 4% of innocent travelers.
Okay, math time. What percentage of people "caught" by the system would actually be terrorists, even generously granting these numbers? Well, let's make the very generous (to Mr. Shoval) assumption that 1 in 100,000 people boarding planes in the US are terrorists (that's a ridiculously high 600+ a month based on these BTS statistics). Crunching the numbers, that's roughly 540 terrorists flagged each month, compared to 2,400,000 innocent people. So about 1 in 4500 people flagged by the system (or 0.02%) would actually be terrorists -- if it works as well as the company would like. A lower base rate makes these numbers even worse. Oh, yes, and 60 terrorists would still get through.
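If you want to check the arithmetic yourself, here's a quick sketch. The 60-million-passengers-per-month figure is my rounding of the BTS numbers; the 90% detection and 4% false-positive rates are the company's own stated goals.

```python
# Base-rate arithmetic for Cogito's stated goals.
# Assumptions: ~60 million monthly US enplanements (rounded from BTS),
# and a very generous 1-in-100,000 terrorist rate among flyers.
passengers_per_month = 60_000_000
terrorist_rate = 1 / 100_000      # 1 in 100,000 flyers
detection_rate = 0.90             # company goal: catch 90% of saboteurs
false_positive_rate = 0.04        # company goal: inconvenience only 4%

terrorists = passengers_per_month * terrorist_rate        # 600
innocents = passengers_per_month - terrorists

flagged_terrorists = detection_rate * terrorists          # 540
flagged_innocents = false_positive_rate * innocents       # ~2,400,000
missed_terrorists = terrorists - flagged_terrorists       # 60 get through

# Of everyone the machine flags, what fraction are actually terrorists?
ppv = flagged_terrorists / (flagged_terrorists + flagged_innocents)
print(f"Flagged: {flagged_terrorists:.0f} terrorists, "
      f"{flagged_innocents:,.0f} innocents; {missed_terrorists:.0f} missed")
print(f"Chance a flagged person is a terrorist: {ppv:.4%} "
      f"(about 1 in {1 / ppv:.0f})")
```

That last number is the classic base-rate problem: even with the company's own optimistic accuracy figures, over 4,400 innocent travelers get pulled aside for every actual terrorist.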
Even though his expertise is in human observation, U.S. behavior-recognition expert Dr. Ekman says projects like Cogito deserve a shot. He expects technology to advance even further, to devices like lasers that measure people's vital signs from a distance. Within a year, he predicts, such technology will be able to tell whether someone's "blood pressure or heart rate is significantly higher than the last 10 people" who entered an airport.
That's it, we are now officially living in a Philip K. Dick novel.