A comment on CNET regarding Top Secret recently caught my attention. It’s worth quoting in full:
“This sounds like an interesting concept, but I’d be afraid to play a game like this using real email. How many gamers playing this are going to be flagged as terrorists or spies by the NSA? How many are going to be placed on the no fly list, and how many will be investigated/arrested as a result of this game? Something like this has the potential to screw someone’s life up for real.
It’s all fun and games until someone gets placed on the terrorist watch list…”
Sensational hyperbole, or valid concern? Can you really be added to a terrorist watchlist for playing a video game?
Last year The Intercept released The March 2013 Watchlisting Guidance. A ponderous 166-page document full of legalese and securocrat speak, it does nevertheless shine some light on the watchlisting process. In particular it attempts to define what ‘terrorism’ is, and the criteria required for a suspect to be added to the various watchlists currently maintained by US government departments. The examples given for ‘terrorist activities’ are sensible, including use of chemical or biological weapons, destruction of aircraft, and kidnapping the president. ‘Terrorism’ itself is described as involving ‘violent acts or acts dangerous to human life, property, or infrastructure’ which ‘appear to be intended to intimidate or coerce a civilian population’ or ‘to influence the policy of a government by intimidation or coercion’. Terms such as ‘coerce’, ‘intimidate’ and ‘dangerous to’ can be misinterpreted or stretched, but overall the definition seems quite reasonable and well thought out. Certainly playing a video game would not qualify. Case closed? Not quite.
The Intercept also revealed that the number of watchlisted individuals, as of 2014, was over 680,000. This seems high given the rather narrow definition above. How do we explain this discrepancy?
Firstly, the level of evidence required to be watchlisted is quite low. Whereas US criminal courts require proof ‘beyond a reasonable doubt’, and warrants require ‘probable cause’, watchlisting requires only ‘reasonable suspicion’. This is the same standard that allows a police officer to briefly stop you on the street, but not enough for a full search. It has been suggested that even a single uncorroborated social media post may be enough to watchlist an individual.
Secondly, you need not be actively planning anything yourself to be added; it is enough to be ‘associated’ with an individual already on the watchlist. What does ‘associated’ mean? If you are a relative or friend of a suspected terrorist, have communicated with them, or have offered them financial or material support in any way, you can be added to the list without any suspicion attaching to you personally. This rule is behind the belief that just by being in someone’s phone contacts, or in the same internet chat room as them, you too could be watchlisted.
Ultimately, how the guidance is interpreted comes down to the political climate of the time, the oversight mechanisms in place, and the resources available. The hundredfold increase in the number of watchlisted individuals post-9/11 is as much due to expanded intelligence budgets as to policy change.
Despite all this, you can rest easy. You won’t be added to a watchlist for playing Top Secret any more than for watching Citizenfour, or reading about the Snowden leaks in The Guardian or The New York Times.
The more interesting point is the fact that people are worried about this in the first place.
Imagine a cylindrical prison where every inmate is within line of sight of a central pillar. Guards can see out from the pillar, but the prisoners cannot see in. Prisoners never know when they are being watched, so must constantly act as if they are under surveillance.
This concept, known as the Panopticon, was devised by Jeremy Bentham in the 18th century as a means of controlling many prisoners with few guards. The key insight is that the mere possibility of surveillance is enough to change behaviour.
Most surveillance is covert and clandestine, but a panopticon derives its power from the awareness of its subjects. In fact, belief in its existence is the sole requirement: neither intent nor reality is necessary for control. This contrasts with more overtly intentioned mechanisms such as China’s recent citizen credit score system (https://www.aclu.org/blog/free-future/chinas-nightmarish-citizen-scores-are-warning-americans).
Today, technology allows us to construct panopticons with ease. A single CCTV camera in each cell would suffice, or interception of satellite communications, or wiretaps on global fibre optic cables…
Post-Snowden we are all Bentham’s prisoners.
For those in the creative or research industries this can be keenly felt. I’ve spent a lot of time researching the NSA and their surveillance programs whilst developing Top Secret. Perversely, the more you know, the stronger the controlling effect. With every new fact and detail, the panopticon gains strength.
People I meet jokingly suggest I must be on a watchlist. Humour tinged with the tiniest sliver of doubt. Others joke that they won’t play the game because they’ll be added to one. We like to believe that our behaviour isn’t affected, the panopticon has no hold over us, but there remains an insidious fear that our self-censorship is inescapable and manifest.
Should I google that word, or visit that site? Or use that encryption method? Or play that game?
Maybe not.