Research Briefing: Facial recognition software

Privacy rights and technological progress facing off?

Background

Humanity’s fascination with the possibilities, real and imagined, of automated facial recognition goes back almost as far as the computer itself, to the work of pioneers such as Woody Bledsoe in the 1960s.[1] The source of our interest in facial recognition is perhaps twofold. One is the non-invasive nature of the technology: someone’s identity can be ascertained without them being aware of it, let alone consenting to it. The other, philosophically more intriguing, reason is that a human being’s ability to recognise other humans solely by their facial features is one of the brain’s finest achievements. That this capability could be matched or surpassed by artificial intelligence is therefore highly significant.

When the field was in its infancy in the late 1980s and early 1990s, pioneers such as Teuvo Kohonen developed systems that could recognise aligned and normalised face images.[2] While Kohonen’s approach was significant in theoretical terms, the need to align and normalise faces before identifying them made the system difficult to use in commercial applications.

A major breakthrough for the commercialisation of facial recognition software came on the back of the great strides made in the field of artificial intelligence. Major leaps in processing power and data storage capacity have led to the emergence of ever more powerful machine learning algorithms, capable of detecting faces, comparing the images against enormous databases and returning a result within seconds.
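As a rough illustration of the matching step such pipelines perform, the sketch below compares a face embedding against a small database using cosine similarity. It assumes a separate neural network has already converted each face image into a fixed-length vector; the names, dimensions and threshold are hypothetical stand-ins, not any vendor’s actual implementation.

```python
# Minimal sketch of the database-matching step described above.
# Assumes a neural network has already turned each face image into a
# fixed-length embedding vector (random stand-in data is used here).
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, database, threshold=0.6):
    """Return the identity whose stored embedding best matches the probe,
    or None if no stored embedding clears the similarity threshold."""
    best_id, best_score = None, threshold
    for identity, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Example with random stand-in embeddings (real systems use ~128-512 dims).
rng = np.random.default_rng(0)
db = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = db["person_a"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(identify(probe, db))  # -> "person_a"
```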

In an interview we conducted with Mark Patrick, CTO of Digital Barriers, he demonstrated what facial recognition software at the cutting edge of the technology is capable of. In the demo, the software could not only identify random, previously unknown individuals as male or female and compare their features to a database of dangerous individuals; it could do so through glass and even where faces were partially concealed. The algorithm developed by Digital Barriers uses a combination of neural network technology and machine learning to achieve this.

These impressive technological advances carry numerous implications for users of the technology and policy makers alike. The economic implications of facial recognition having matured in recent years are perhaps the most easily quantifiable. By the Economist’s estimate, revenue generated by facial recognition biometrics in 2015 was just above 150 million dollars.[3] By the same estimate, this figure will increase to just under 500 million dollars by 2020, driven largely by the Asia-Pacific region. This projected boom in sector revenue has led some commentators to speak jokingly of a ‘facial industrial complex’.

State of the Art

These projections do not seem excessively optimistic if one considers the numerous applications of such software. As Mark Patrick points out in our interview, the technology for fixed facial recognition in one-to-one settings is already relatively mature and operates with accuracy approaching 100%. Accuracy is helped by the fact that such systems have the luxury of fixing the target in place and are designed to verify a single face. This technology is currently used on a variety of devices, enabling facial recognition software to lock and unlock smartphones or to verify payments, an example of which is the ‘Smile to Pay’ app developed by Ant Financial. There are numerous examples of these so-called one-to-one solutions being employed successfully, for instance by the passenger hailing platform Careem, which uses it to identify its drivers to passengers.
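The decision logic behind such a one-to-one check can be stated very compactly: the live capture is compared against the single template enrolled on the device and accepted only if the two are close enough. The sketch below is a hypothetical illustration, assuming embeddings come from the device’s own face-recognition model; the distance threshold is a stand-in value.

```python
# Minimal sketch of a one-to-one ("verify this is the enrolled user") check,
# as used for phone unlocking or payment confirmation. Embedding values here
# are random stand-ins for the output of a real face-recognition model.
import numpy as np

def verify(live_capture, enrolled_template, max_distance=0.8):
    """Accept only if the live embedding is close enough to the template
    enrolled on the device; the threshold trades security for convenience."""
    return float(np.linalg.norm(live_capture - enrolled_template)) <= max_distance

rng = np.random.default_rng(1)
enrolled = rng.normal(size=128)
print(verify(enrolled + rng.normal(scale=0.05, size=128), enrolled))  # True
print(verify(rng.normal(size=128), enrolled))                         # False
```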

Other companies are currently working on facial recognition software built into TV sets that will enable the device not only to analyse how many individuals are watching a programme at any given time, but also to positively identify the members of the household and analyse their emotional response to a given programme or advertisement. Such data would be invaluable for advertisers.

Given the impressive and expanding capabilities of the software, it appears inevitable that society in general, and policy makers in particular, give thought to whether they wish to regulate this new technology and, if so, how. A crucial distinction will have to be made between the examples used above, which are one-to-one solutions, and one-to-N solutions designed to identify individuals within large groups of people. In the current political landscape, the option of not regulating at all can be safely dismissed as a theoretical thought exercise. With the introduction of the GDPR in Britain in May 2018, biometric data will be classified as personal data and be subject to the same rules.

Rules on Facial Recognition Software

This is a game changer for facial recognition applications targeted at larger groups of individuals, because informed consent, a legal prerequisite to obtaining someone’s personal data according to the GDPR,[4] is almost impossible to obtain from a large group of people. However, Mark Patrick emphasises that there are technological solutions to address privacy concerns. In particular, he stresses the need to re-think the way companies and public organisations store and process data. They currently hold vast sets of data they do not, or no longer, need and are not processing efficiently. Therefore, to him, exploring pathways to ‘smarter data’ seems more promising than a blanket ban on using biometric data without consent.

The eventual departure of Britain from the European Union offers a choice: the government can either retain the General Data Protection Regulation (or a functional equivalent) or choose to jettison it. Both options carry potentially significant costs and benefits, and both will be shaped by the overall trajectory of the Brexit negotiations and by the value attributed to uninterrupted data flows between the EU and the UK.

The Face of Progress

Simply abolishing the use of all facial recognition software in the UK would not only be difficult, as the technology is already in rather widespread use; it would also be a poor decision. Facial recognition software will be one of the fastest growing industries of the coming decade, as the projections above illustrate. Within this growing industry, Britain is currently very well positioned. In the words of Mark Patrick: ‘It is developed somewhere in the world anyway, why not the UK?’

It is worth distinguishing here between one-to-one and one-to-N solutions. Into the former category fall innovations such as unlocking a phone using facial recognition and TVs that can determine the viewer’s mood and identity. These applications are in a sense easier to regulate, as giving informed and explicit consent is possible. A customer may choose between two smartphone options, one with facial recognition capabilities and one without, and the individual whose face unlocks the phone can quite easily consent to their biometric data being used in this way. Arguably, given the right safeguards for consumers, these one-to-one solutions can be retained while still complying with the provisions of the GDPR.

This is excellent news, as these technologies have numerous future applications which have not yet fully materialised, such as facial recognition built into the bodycams worn by police officers. This could, at the push of a button, enable the police to identify a suspect, removing the need for time-intensive identification at the police station, which is the current practice. It would also make police officers safer while on duty, as the software could immediately flag potentially violent offenders or individuals on terror databases. The associated potential savings to the public purse from a large-scale roll-out are substantial.

Concerns about invasion of privacy should not be lightly dismissed, as public support for the use of facial recognition is crucial. There are a number of technological solutions which enable companies to ensure user anonymity. Personal data can be anonymised before it is sent to a central database for comparison. Another approach is decentralised data storage, whereby all relevant biometric data is stored on the device (say, a smartphone) which also runs the facial recognition software. Mark Patrick of Digital Barriers says that such approaches are near impossible to hack.
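The decentralised pattern can be sketched as follows, with hypothetical names throughout: the enrolled template is held only on the device, and the only thing that ever leaves it is a non-biometric result. This is an illustration of the general idea, not Digital Barriers’ actual architecture.

```python
# Hypothetical sketch of the decentralised approach described above: the
# biometric template stays on the device, and only a non-biometric result
# (a boolean plus an opaque session token) is ever transmitted.
import secrets
import numpy as np

class OnDeviceVerifier:
    """Holds the enrolled embedding locally; nothing biometric is exported."""

    def __init__(self, enrolled_template, max_distance=0.8):
        self._template = enrolled_template      # never leaves the device
        self._max_distance = max_distance

    def verify_and_report(self, live_capture):
        ok = float(np.linalg.norm(live_capture - self._template)) <= self._max_distance
        # Only this dictionary would be sent to a server: no embeddings,
        # no images, just the outcome and a random, unlinkable token.
        return {"verified": ok, "session_token": secrets.token_hex(16)}

rng = np.random.default_rng(2)
enrolled = rng.normal(size=128)
device = OnDeviceVerifier(enrolled)
print(device.verify_and_report(enrolled + rng.normal(scale=0.05, size=128)))
```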

The situation is significantly more difficult for one-to-N solutions. It is already common practice to delete unidentified individuals’ biometric data from databases once it has been verified that they do not correspond to any terror or wanted-persons lists. Even so, this practice is incompatible with the provisions of the GDPR. This discrepancy suggests that a legal distinction between one-to-one and one-to-N solutions is advisable. It would permit the retention of the former within the framework of the GDPR. Facial recognition software applied to large groups of people, such as in public spaces, will be feasible only if the GDPR is revoked. That would open up a host of possible applications, particularly for security purposes.
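A rough sketch of the screening-and-discard practice described above might look like the following; the watch-list names, thresholds and embedding dimensions are hypothetical, and real deployments involve far larger galleries and stricter matching logic.

```python
# Minimal sketch of the one-to-N watch-list practice described above:
# each detected face is compared against a watch list, and embeddings
# that match nothing are discarded immediately rather than retained.
import numpy as np

def screen_crowd(detections, watchlist, threshold=0.6):
    """Return watch-list identities found among the detections; embeddings
    of unmatched individuals are simply dropped (not stored) once checked."""
    hits = []
    for embedding in detections:
        for identity, template in watchlist.items():
            sim = float(np.dot(embedding, template) /
                        (np.linalg.norm(embedding) * np.linalg.norm(template)))
            if sim >= threshold:
                hits.append(identity)
                break
        # The embedding goes out of scope here: nothing about unmatched
        # individuals is written to any database.
    return hits

rng = np.random.default_rng(3)
watchlist = {"suspect_x": rng.normal(size=128)}
crowd = [rng.normal(size=128) for _ in range(5)] + \
        [watchlist["suspect_x"] + rng.normal(scale=0.1, size=128)]
print(screen_crowd(crowd, watchlist))  # -> ["suspect_x"]
```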

Painted faces on the wall

The unencumbered and unrestricted use of facial recognition software without the informed consent of data subjects would shift the balance of power between the state and its citizens like few other inventions. History suggests that whenever a capability exists, it will eventually be used; nuclear weapons are an unsettling case in point. Once biometric data on citizens is stored in a centralised database, it is only a question of time before it is (mis)appropriated for a purpose that was never initially intended. Currently, facial recognition can identify a person’s gender and rough age. These data points are relatively uncontroversial, but they could soon be complemented by significantly more private information, such as sexual orientation. Pilot studies are already being conducted in which sexual orientation can be identified with a reasonable degree of accuracy.[5] Governments holding this type of information should be cause for greater concern.

Additionally, there is always the risk of involuntary loss of data. Recent incidents of large-scale hacking should have brought this possibility, however remote, to everyone’s mind. To the severe curtailment of civil liberties must be added the economic fallout of losing uninterrupted flows of personal data between the UK and the EU. With some economists declaring that data is the new oil, this seems a very high price to pay for infringing citizens’ right to privacy.

The algorithms underlying facial recognition software have been found to be vulnerable to several forms of bias, racial or otherwise. It would seem prudent to find ways of ensuring that these biases, sometimes introduced by the code writer but far more often by biased data fed into the AI system, are corrected before a potentially flawed piece of technology is released. This opens up a host of policymaking and technological challenges. Any genuine artificial intelligence system will eventually begin to write its own rules based on the data it receives. At present, there is no unified and certified method of ensuring that biases do not creep in at this stage. Until such a method has been found, tested and agreed upon across the industry, deploying the technology at scale seems reckless.
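One simple form such a check could take, offered purely as an illustration rather than an established audit standard, is to measure error rates separately for each demographic group in a labelled evaluation set and compare them before release.

```python
# Illustrative sketch of a pre-release bias check: compute the false match
# rate of a verification system separately for each demographic group in a
# labelled evaluation set, then compare groups. Data below is hypothetical.
from collections import defaultdict

def false_match_rate_by_group(trials):
    """trials: iterable of (group, same_person: bool, system_said_match: bool)."""
    false_matches = defaultdict(int)
    impostor_pairs = defaultdict(int)
    for group, same_person, said_match in trials:
        if not same_person:               # impostor comparison
            impostor_pairs[group] += 1
            if said_match:                # system wrongly accepted
                false_matches[group] += 1
    return {g: false_matches[g] / impostor_pairs[g] for g in impostor_pairs}

# Hypothetical evaluation results: group B is falsely matched far more often.
trials = ([("group_a", False, False)] * 98 + [("group_a", False, True)] * 2 +
          [("group_b", False, False)] * 90 + [("group_b", False, True)] * 10)
print(false_match_rate_by_group(trials))  # {'group_a': 0.02, 'group_b': 0.1}
```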


[1] AI Magazine:

https://www.aaai.org/ojs/index.php/aimagazine/article/view/1207/1108

[2] History of Facial Recognition, MIT:

http://vismod.media.mit.edu/tech-reports/TR-516/node7.html

[3] The Economist, 9 September 2017:

https://www.economist.com/news/business/21728654-chinas-megvii-has-used-government-collected-data-lead-sector-ever-better-and-cheaper

[4] GDPR:

http://ec.europa.eu/justice/data-protection/reform/files/regulation_oj_en.pdf

[5] The Guardian, 7 September 2017:

https://www.theguardian.com/technology/2017/sep/07/new-artificial-intelligence-can-tell-whether-youre-gay-or-straight-from-a-photograph