The 80:20 rule in electronic communication surveillance

Last week, LIRNEasia taught a course on broadband policy and regulation in Sohna. One of the modules was on privacy and surveillance. One of the instructors was Sunil Abraham, acknowledged for his thoughtful and creative approach to sticky ICT policy questions. Drawing a diagram, he pointed out that if surveillance was exclusively focused on the small percentage, perhaps five percent, of people who were engaged in terrorism or other bad acts, law enforcement would be more efficient and the liberties enjoyed by the non-terroristically inclined majority would be that much safer.

On the face of it, a beguiling proposition.

But there is one BIG problem. The solution depends on ALL the potential terrorists being included in the defined subset and NONE being included in the excluded-from-surveillance subset. How can this be ensured, when the terroristically inclined have every incentive in the world to get themselves included in the excluded subset?

This is the classic 80:20 problem, experienced by all sorts of entities, including bookshops. Eighty percent of the customers tend to buy 20 percent of the titles, more or less. The most efficient bookshop would carry only that 20 percent. The problem is that it is very difficult to know beforehand what constitutes the 20 percent. This problem explains the sterility of most airport bookshops.

That is for something innocuous like books. The challenge with regard to surveillance is much harder. We are dealing with people like David Headley, whose modus operandi is to look nothing like a terrorist. Current surveillance failed to catch Headley, so I am not praising it while criticizing Sunil’s solution. But I am saying that we need to think through this problem keeping people like Headley in mind.

2 Comments on The 80:20 rule in electronic communication surveillance

  1. March 24, 2014 at 10:13 pm | Permalink

    If the objective of surveillance is to have 100% of terrorists and criminals under surveillance, then we are talking about a very small percentage of people. This will come close to 20% of the population only in very unusual and extreme situations, e.g., an ongoing attack on an airport. So 80:20 is not the right analogy; a better analogy is that surveillance is like salt in cooking: essential in tiny amounts but counterproductive if even slightly in excess. 20% salt means you are in a bit of a pickle, which cannot be mistaken for a staple diet. On a graph with “percentage of population under surveillance” on the x-axis and “security” on the y-axis, this will look like an inverted hockey stick with a small head. The worse the security situation in a country, the bigger the head.

    Always remember the more surveillance you do, the more you will be the target of attacks and leaks. Nobody wants an Evil Snowden situation on their hands. There are perhaps two primary types of surveillance – preventive and post-facto.

    In post-facto surveillance, those people who were or are in some way connected to an event are targeted. Here there is a strong temptation to adopt the “hops” or “degrees of separation” model from the NSA; however, this leads to exponential growth in the number of targets without sufficient return per surveillance dollar. Therefore any increase in the number of targets should be made more expensive through technology and oversight.

    Preventive surveillance should be based on old-fashioned police work and, perhaps more controversially, on fishing expeditions based on a “person”, a “class of persons” and a “class of messages”. Fishing expeditions should not be used to build cumulative databases. A “person” could be targeted randomly. A “class of persons” is the most problematic, because this is where profiling will be used. Content analysis of a “class of messages” could be easily defeated using cultural cryptography, and therefore it is timestamps, geographic information and other metadata that ideally should be the basis of fishing expeditions. Again, given that fishing expeditions are like searching for a needle in a haystack, they should be used very carefully, and only when all other leads are dead.

    Global privacy principles could be applied to keep percentages healthy, e.g., notice to those unnecessarily placed under surveillance [this will grow privacy awareness] and transparency regarding the total number of persons placed under surveillance under various schemes. To some extent this can be technologically enforced through a key escrow dependency on privacy regulators. In an ideal world, the powerful will be placed under equal if not more surveillance than the poor, because they have greater harm potential.

  2. March 26, 2014 at 7:23 am | Permalink

    Hi Sunil, Rohan,

    I am not convinced the 80/20 rule applies here. What we can say is that individuals are entitled to a reasonable level of privacy. Mass surveillance violates our expectation of privacy, which has consequences we have not yet fully appreciated, most of them negative, ranging from generally inhibiting free expression to the use of data as a weapon for power.

    Mass surveillance is also bad because, as Cardinal Richelieu apocryphally said, “give me six lines written by the hand of the most honest man, and I will find something in them to have him hanged.” Trawl for enough data and eventually we will all be criminals.

    So if everyone has a reasonable expectation of privacy, how can we find terrorists? As Sunil suggests, the way it has always been done: by first establishing probable cause or reasonable suspicion and seeking permission to violate individual privacy for a good reason. Obviously, what constitutes reasonable suspicion is going to be a subject of some debate in the digital world, and we may need new standards of behaviour.

    Perhaps some of the ideas evolving around private-sector big data ethics, such as the paper by Kate Crawford and Jason Schultz on Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms, could be applied to the security and intelligence services as well.
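The exponential blow-up the first comment attributes to the “hops” model can be made concrete with a back-of-the-envelope calculation. This is a sketch under an illustrative assumption: each target is taken to have 100 distinct contacts on average, a figure chosen for round numbers rather than taken from the comment or any agency.

```python
# Back-of-the-envelope sketch of the "hops" (degrees-of-separation) model.
# Assumption: each target has, on average, 100 distinct contacts; the
# function name and figure are illustrative, not from the post.

def targets_within_hops(avg_contacts: int, hops: int) -> int:
    """Upper bound on the number of people swept in when surveillance
    extends `hops` degrees of separation from one initial target."""
    return sum(avg_contacts ** h for h in range(hops + 1))

for hops in range(4):
    print(hops, targets_within_hops(100, hops))
# 0 hops: 1 person; 1 hop: 101; 2 hops: 10,101; 3 hops: 1,010,101
```

Even this crude upper bound shows why each additional hop multiplies the target pool roughly a hundredfold, which is the comment's point about diminishing return per surveillance dollar.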
