The United States Government Accountability Office (GAO) recently issued a report focused on the Revised Behavioral Indicators used by the U.S. Transportation Security Administration (TSA) in its Behavior Detection Activities. Drawing on findings from a performance audit conducted from November 2016 to July 2017, the GAO concluded that the TSA does not have valid evidence to support its behavioral detection activities. More specifically, the GAO noted that, of the 36 revised behavioral indicators, TSA lacked valid supporting evidence for 28.

The GAO ultimately recommended that the TSA “limit funding for the agency’s behavior detection activities until TSA can provide valid evidence that demonstrates that behavioral indicators can be used to identify passengers who may pose a threat to aviation security.” [1]

Performance reviews help strengthen established programs. However, continued funding of behavior detection programs is the only way to collect more operational data and validate that these programs work. Hopefully the GAO Report will not discourage internal TSA stakeholders from charging ahead.

We have outlined below a few considerations to keep in mind when reading the GAO Report.


Behavioral Indicators Not the Only Approach

Governments that run behavior detection programs based on a predetermined list of indicators for selecting individuals for enhanced engagement are in the minority. The bulk of countries with active programs rely on their officers’ “gut,” articulating the behaviors that drew their attention only after selecting the passenger for questioning – and not based on a prescribed list.

Actors using an indicator-based approach tend to be non-law enforcement organizations – entities that are extremely cautious to minimize the risk of accusations of profiling based on racial, religious, and/or ethnic background. A prescribed list of indicators very firmly puts in place the swim lanes for selection, and the thought is that these swim lanes then reduce occurrences of profiling.

Rather than questioning the foundation of TSA’s behavior detection program based on the behavioral indicators it uses, it would be equally interesting to ask whether a prescribed list of behavioral indicators is the best way to conduct a behavior detection operation at all.

Suppose the indicator list were ditched and TSA’s Behavior Detection Officers (BDOs) were allowed to identify passengers for enhanced screening based on an instinctual feeling that a passenger is behaving differently from the baseline, articulating their initial concerns only after engaging with the passenger. Would BDOs be more successful in identifying passengers with hostile intent? Would they be more likely to profile?


Linking Operational Programs and Science a Must

The GAO report highlights that very few of the indicators that TSA uses are based on sound science. This is not surprising, as operational activities have moved at a faster pace than the research. TSA and other governments around the world sped up the deployment of behavior detection programs, basing them largely on a belief that the human element is key to stopping the next 9/11 (admirable!). Meanwhile, research in this area is relatively new and continually evolving.

Behavioral indicators in a counter-terrorism context are difficult, if not impossible, to validate. It’s quite rare that a terrorist is caught in action.  Based on the limited number of behavior detection officers deployed globally, it’s even rarer that a terrorist is caught by a behavior detection officer.  Therefore, it’s unlikely that we’ll be able to prove through real-world events that certain behavioral indicators are tied to hostile intent.  Researchers are stuck with the lab and simulated events, and with post-event video footage if they are fortunate enough to be granted access.

TSA and others have accepted the potential risk of criticism in order to stand up their programs quickly, in the hope that doing so might reduce, even slightly, the risk of another major terrorist attack. Had they waited on robust scientific evidence that behavior detection “works,” there would be no behavioral element in the US Government’s layered aviation security approach today. That said, the continued evolution of behavior detection programs cannot be divorced from science and ongoing research in the field.


More Operational Data Leads to Better Indicators

One of the most effective ways to validate behavioral indicators is to gather more operational data related to behavior detection screening and related findings. Cutting TSA’s funding won’t help with this.

Given TSA’s “catches” through its behavior detection program are, statistically speaking, limited (see the point above re few terrorists being caught in action), what better way to enhance TSA’s operational data set than by sharing with close partners? Two data sets are better than one.

TSA and other U.S. homeland security government entities have a unique opportunity to open their books and compare operational data on behavioral indicators to cross-validate. Similarly, TSA and its international partners should deepen their efforts to do the same. Rather than pull back funding, another opportunity would be to use the funds to strengthen inter-DHS/intel/law enforcement-community and international cooperation activities to share data on behavioral indicators and their effectiveness, and to invest in additional, deeper joint research in this area.


[1] Aviation Security: TSA Does Not Have Valid Evidence Supporting Most of the Revised Behavioral Indicators Used in Its Behavior Detection Activities, Government Accountability Office (GAO), 20 July 2017.
