Facial Recognition, Data Ethics & Intelligent Risk

Another week, another raft of privacy, security and data protection stories - with the WhatsApp hack reminding us all to be vigilant, and that nothing is 100% secure when it comes to the online services we use. Digital risks are real, increasing and complex; building trust and putting people at the centre of the story has never been more important.

So what does that mean for businesses?

Organisations need to survive and grow against this backdrop of uncertainty and constant change - but making decisions from fear-based emotion is not leadership. Instead, it’s about understanding the risks and the macro environment, doing the work to decode ever-evolving complexity, and using critical thinking to step back, assess and act. Build capability for resilience where it’s needed and address vulnerabilities, based on priorities.

And when/if the worst does happen - like a major data breach or cyber incident - it undoubtedly won’t unfold as predicted or to plan, but at least there’s been an exploration of the scenario, and even muscle memory to draw on (if you’ve run an incident response tabletop exercise as a leadership team, for example).

At Trace we work with our clients to develop an intelligent approach to privacy risk - understand the threats and the issues, make informed, coherent and balanced decisions, then move forward (with a plan) or act in a crisis. This ethos informs our professional services (e.g. Data Protection Impact Assessments), leadership training, and business software.

Risk comes from not knowing what you’re doing (Warren Buffett)

But back to the edit and the week’s privacy coverage in the media. For us, the most interesting stories dominating the headlines have been those on facial recognition - and more specifically the public, global debate they expose on data ethics and human rights. And while facial recognition is not new, recent use cases relating to state and police applications of the tech, and the related debate on bias in AI algorithms, have put facial recognition stories into sharp focus this week.

Definition of facial recognition

Facial recognition is a biometric software application capable of uniquely identifying or verifying a person by comparing and analyzing patterns based on the person's facial contours. (source: Techopedia)
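To make that definition concrete: most modern systems reduce a face image to a numeric “embedding” and decide a match by measuring how close two embeddings are. The sketch below illustrates only that comparison step - the embedding values and the 0.6 threshold are made-up numbers for demonstration, not a reference to any particular vendor’s system.

```python
# Illustrative sketch of a facial recognition match decision.
# Real systems produce embeddings with a trained neural network; here the
# vectors and the 0.6 threshold are invented values for demonstration only.
import numpy as np

def embedding_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two face embeddings; smaller = more similar."""
    return float(np.linalg.norm(a - b))

def is_match(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """Verification: accept the claimed identity if the embeddings are close enough."""
    return embedding_distance(probe, enrolled) <= threshold

# Toy embeddings standing in for the output of a face-encoding model.
enrolled_face = np.array([0.12, 0.87, 0.45, 0.33])
probe_face = np.array([0.10, 0.85, 0.47, 0.30])

print(is_match(probe_face, enrolled_face))  # True for these toy values
```

Identification (a one-to-many search against a database) works the same way, just repeated across every enrolled face - which is exactly where the questions of scale and error below come in.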

Here’s Trace’s round-up of some of the most relevant stories on this hot topic:

San Francisco

San Francisco, techiest of cities, became the first city to ban government use of facial recognition, as covered in The Verge. Trace’s take on that? Where California leads, the world follows when it comes to technology - so we view this not as fear-based, anti-tech sentiment, but rather as a rounded assessment that the application of this technology is currently immature in terms of planned usage, and that the risks and stakes are huge.

The Washington Post covered a review by Georgetown Law’s Center on Privacy and Technology which found that:

…police have used the systems in a number of questionable ways that could muddy the search results and potentially lead to misidentification or false arrest

Law enforcement had fed celebrity lookalikes and distorted images into these systems, which highlights the need for testing, data quality, and refinement of the tech’s application. There are ongoing and important concerns about the potential for bias in AI trained on poor-quality or unbalanced data - and that bias is only exacerbated when the technology is applied at scale. So, as any data governance pro knows, data quality matters (rubbish in, rubbish out, right?).
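One practical way teams probe this is to compare error rates across groups in a labelled evaluation set. The sketch below is a minimal, hypothetical version of that check - the records are invented, and a real audit would use properly sourced test data with known ground truth.

```python
# Minimal sketch of a bias check: false-match rate per group on labelled data.
# The evaluation records below are invented for illustration.
from collections import defaultdict

# Each record: (group, system_said_match, actually_same_person)
results = [
    ("group_a", True, True),
    ("group_a", True, False),   # false match
    ("group_a", False, False),
    ("group_b", True, False),   # false match
    ("group_b", True, False),   # false match
    ("group_b", False, False),
]

false_matches = defaultdict(int)
non_mated_trials = defaultdict(int)

for group, predicted_match, same_person in results:
    if not same_person:             # only different-person pairs can false-match
        non_mated_trials[group] += 1
        if predicted_match:
            false_matches[group] += 1

for group in sorted(non_mated_trials):
    rate = false_matches[group] / non_mated_trials[group]
    print(f"{group}: false-match rate {rate:.0%}")
```

A large gap between groups on a check like this is exactly the kind of signal that should pause a deployment, not just earn a footnote.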

Balancing discovery with individual human rights

If you didn’t catch it, a video circulated this week of a man in London who was fined by police after covering his face from street surveillance cameras. This example has brought the interplay of criminal justice, police tactics and human rights into question with respect to facial recognition technology, and how we move forward to balance the many rewards (efficiency, discovery, utility, public safety, locating criminals) against the risk of error and human rights infringements. How do we help police sift through data to find the needle (the criminal) in the haystack of the population? And given their data is likely flawed (e.g. databases out of date, or not updated for those found innocent), there is clearly the potential for abuse and error - and the stakes, and the impact on individuals, are huge.
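To make the needle-in-a-haystack point concrete, here’s a back-of-the-envelope calculation. Every number in it is an assumption chosen purely for illustration, not a claim about any deployed system - but it shows why, when genuine matches are rare, even a small false-match rate means most alerts point at innocent people.

```python
# Back-of-the-envelope arithmetic; all numbers are illustrative assumptions.
crowd_size = 100_000          # faces scanned in a day (assumption)
people_of_interest = 10       # genuine watchlist hits in that crowd (assumption)
true_match_rate = 0.90        # chance a real hit is flagged (assumption)
false_match_rate = 0.001      # chance an innocent face is flagged (assumption)

true_alerts = people_of_interest * true_match_rate
false_alerts = (crowd_size - people_of_interest) * false_match_rate

total_alerts = true_alerts + false_alerts
precision = true_alerts / total_alerts

print(f"Alerts raised: {total_alerts:.0f}")
print(f"Share that are genuine: {precision:.0%}")  # roughly 8% on these assumptions
```

On these assumptions, around 109 alerts are raised in a day and only about 9 of them are the people actually being looked for - every other alert is a member of the public.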

A UK legal vacuum?

So should the current versions of this technology continue to be used by justice departments, in civil life, or indeed by private companies in current applications, when the risks are not yet clarified and the tech not sufficiently tested? This all certainly puts the need for meaningful and effective regulation into focus - but we should also ask how legislators can keep up with, never mind ahead of, the fast-moving innovations in this space. And who is responsible, given it touches on a number of bodies’ potential remits? It’s Trace’s view that tech companies, relevant bodies, lawyers and experts need to work with government to find strategic yet practical solutions and address the legislative gap. Self-regulation has been tried and proven not to work, but equally regulation can be slow, bureaucratic and quickly (or even immediately) out of date - so there are no easy answers. Taking a principle- and ethics-based approach is important when thinking of a framework, and delivering it with agility - the field simply moves too fast to get caught up in detail.


Surveillance capitalism, surveillance state

The stories point to a wider and critical debate on surveillance that we need to have as a society (and as leaders of responsible organisations), and one we need to engage in and contribute to as privacy and tech professionals, human rights lawyers, and experts in related fields. And with the potential risks so huge, there’s an urgency to this (e.g. private companies accumulating public databases for profiling, or tech being abused by authoritarian states to track minorities).

People should remain at the heart of data, tech & civil life

It all starts with understanding the risks and the benefits intelligently, and remembering that people are at the heart of this. An individual’s personal data needs protection (a principle at the very heart of the GDPR); people need to be kept safe by our justice system; and the police will need to rely on tested, useful tech to do their job efficiently against the backdrop of stretched resources. So how can we enable reliable, trusted tech to help humans do their jobs whilst protecting people’s rights? The answers are not yet clear, but in the meantime… all things in balance.

Sorcha Lorimer