Thierry Spanjaard

Facial recognition on the way to the Panopticon


Facial recognition is not only one of the fastest growing industries of our time but also one of the most pervasive. With it arise all the usual concerns that accompany an innovative technology: Is it legal? Is it ethical? Is it actually good for us? Is it a creation of the devil? These concerns have come from different parts of society: not only individuals and privacy defense organizations, but also big tech companies and even governments themselves.

In the Panopticon, a building concept developed by the English philosopher and social theorist Jeremy Bentham in the late 18th century, a single watchman was able to keep a permanent view of all residents (or inmates). More importantly, the residents never knew whether they were under surveillance, leading the concept’s originator to argue that people would follow the rules precisely because they don’t know whether they are being watched; thus, they are effectively compelled to regulate their own behavior.

To tackle facial recognition concerns, the US Senate is considering a “Commercial Facial Recognition Privacy Act” that aims to protect individuals’ privacy. According to the Senators proposing the bill, it “increases transparency and consumer choice by requiring individuals to give informed consent before commercial entities can collect and share data gathered through facial recognition.” Interestingly, the proposed bill only covers companies collecting facial recognition data, not public entities such as governments, law enforcement agencies or even mass-transit operators.

As is often the case, regulation comes after the fact. For instance, Facebook has access to 300 million new pictures uploaded every day by its 2.3 billion users, which allows the company to set up an efficient facial recognition system. In addition, users help Facebook’s Artificial Intelligence system by “tagging” their friends with their names. This represents an enormous potential for misuse of facial recognition information, and Facebook has a long record of misusing all sorts of data, says CIO.com.

Privacy rights organizations, such as the Electronic Frontier Foundation (EFF), argue that facial recognition by public entities is a bigger threat than that posed by private organizations. In most cases, facial recognition systems have been deployed without proper accuracy testing, an appropriate cybersecurity policy, an assessment of the real impact on civil liberties, or a legal framework guaranteeing protections against internal and external misuse.

The line between government and private use of technologies is not always easy to draw. For instance, the FBI and other law enforcement agencies are among the most important users of Rekognition, Amazon's facial recognition tool. Rekognition was originally designed for companies in the marketing and advertising space wishing to make use of their photo catalogs, or for financial services companies in countries where bank customers don’t have regular access to physical bank locations and rely on mobile banking.

The EU brings facial recognition regulation under the GDPR umbrella: biometric data (when used for the purpose of uniquely identifying a natural person) is among the “special categories” of personal data that are prohibited from being processed at all unless certain exceptional circumstances apply, and the definition of biometric data specifically refers to “facial images,” says IAPP, a global information privacy community. GDPR also explicitly allows facial recognition data to be processed if “the data subject has given explicit consent,” if “biometric information is necessary for carrying out obligations of the controller or the data subject in the field of employment, social security and social protection law,” or if “it's vital for any legal claims.” However, EU member states may introduce additional regulations. While the principles are set, implementation details are still to be worked out in order to protect privacy rights while allowing law-enforcement agencies to use the technology. We all know the devil is in the details.

Already, half of US citizens are in a law-enforcement face-recognition database of some description, according to a new report entitled “The Perpetual Line-up: Unregulated Police Face Recognition in America,” by Georgetown University's Law Center on Privacy & Technology.

In addition, the Chinese government has set up an infrastructure that uses at least 200 million cameras and Artificial Intelligence to identify and track its entire population of 1.4 billion with over 90% accuracy. Already, in some cities, cameras scan train stations for China’s most wanted. Billboard-size displays show the faces of jaywalkers and list the names of people who don’t pay their debts. Facial recognition scanners guard the entrances to housing complexes, according to The New York Times. A consequence of this huge program is the development of a new industry: the Chinese public security market was valued at more than US$ 80 billion (EUR 71 billion) last year, triggering the emergence of a new ecosystem ranging from startups to large companies.

“No system of mass surveillance has existed in any society that we know of to this point that has not been abused,” said Edward Snowden.
