
[Image: facial recognition used in Hong Kong protests]

"Facial recognition is a fundamental threat to society"

The use of facial recognition in Hong Kong and London's King's Cross demonstrates the need to control access to the technology in the same way other dangerous items are regulated, says Owen Hopkins.


For the past few months I've been transfixed by the protests taking place in Hong Kong, and by the way, as so often happens, a protest sparked by one particular issue – a law allowing extradition to mainland China – has turned into a broader demand for freedom and human rights.

It's an age-old struggle, but at the same time entirely bound to the political and – critically – technological circumstances of the present. This has been symbolised by the images of protestors pulling down lampposts – on the surface an act of petty vandalism until one learns that these lampposts carry facial recognition cameras, which are an increasingly important weapon in the state's armoury to track and ultimately control its citizens.

Coincidentally, facial recognition technology and its deployment in urban spaces have also been in the news in London. There, the private owner of the recently redeveloped 67-acre, 50-building site just north of King's Cross Station was found to be using the technology to track pedestrians. For many observers, this was yet another indictment of the ongoing privatisation of formerly public spaces across the city.

In whoever's hands it ends up, it seems it can only be used in a way that invades our privacy – and ultimately our freedom – without our consent

The common denominator of these two stories, emanating from two quite different political and urban contexts and involving – on the one hand, an authoritarian state, and on the other, an oppressive private company – is facial recognition technology itself. In whoever's hands it ends up, it seems it can only be used in a way that invades our privacy, and ultimately our freedom, without our consent. These nefarious uses are inherent in the technology itself.

Facial recognition works through something called an artificial neural network. This is not like a traditional computer program that executes instructions contained in lines of code. Instead, a massive amount of data – in this case millions of photos of people's faces – is used to "train" the computer, via examples and counter-examples, to recognise the shapes and contours of faces. The results are checked by humans, who then tweak the algorithms to correct the mistakes, with accuracy increasing incrementally over time.
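To make that concrete, here is a loose, minimal sketch of such a training loop, written in PyTorch. Everything in it – the random stand-in "images", the tiny network, the handful of steps – is invented for illustration and bears no relation to the systems deployed in Hong Kong or at King's Cross; it only shows the examples-and-corrections idea described above.

    import torch
    import torch.nn as nn

    # Stand-in data: random tensors in place of real labelled face photos.
    images = torch.randn(64, 3, 32, 32)   # 64 tiny RGB crops
    labels = torch.randint(0, 2, (64,))   # 1 = "face", 0 = "not a face"

    # A deliberately small convolutional network.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(8 * 16 * 16, 2),
    )
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # "Training": show examples, measure the mistakes, nudge the weights.
    for step in range(5):
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
        print(step, round(loss.item(), 3))

Real systems differ in scale rather than in kind: millions of photos, far larger networks and many more rounds of correction.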

We already have the technology on our smartphones, where photo apps automatically recognise the faces of our friends and family. This is relatively easy, since most of the photos we take are staged: front-on and well-lit. The harder challenge has been recognising faces in wide-angle CCTV footage, but this is now possible too – though not necessarily evenly or consistently.
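That matching step typically works by reducing each face to a numerical "embedding" and labelling a new photo with whichever known face it sits closest to. The sketch below uses made-up random vectors in place of real embeddings (which a trained network would produce), purely to illustrate the nearest-match idea.

    import numpy as np

    # Made-up 128-dimensional "face embeddings"; real photo apps derive
    # these vectors from a trained neural network, not from random numbers.
    known = {
        "alice": np.random.rand(128),
        "bob": np.random.rand(128),
    }
    query = known["alice"] + 0.05 * np.random.rand(128)  # a new photo of "alice"

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Label the new photo with the closest known face.
    best = max(known, key=lambda name: cosine(known[name], query))
    print(best)  # almost certainly -> "alice"

The unevenness mentioned above enters here: if the embeddings are poor for certain kinds of faces, the nearest match will more often be the wrong person.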

The biases of facial recognition technology are already well documented. For example, you are more likely to be misidentified if you have darker skin. And if you are a woman with darker skin, you are also more likely to be misidentified as a man. So far, so predictable. Although these types of technologies purport – and are frequently claimed by their advocates – to be objective, they are anything but.

Who needs censorship when facial recognition ensures we censor ourselves?

In the field of consumer technology we've already become used to this, given that it's almost all made by – and therefore, usually for – affluent white male tech workers and the very particular social, economic and political spheres in which they operate. When it comes to facial recognition, it's certainly possible to iron out these biases from a technological point of view, but it's rather harder, maybe impossible, to do so from a social or cultural one.

Even if we can, that doesn't magically make this technology any less deeply problematic and troubling. Tracking where we go, who we meet and what we do poses a fundamental threat to the basic freedoms we take for granted: freedom of speech, freedom of expression and freedom of association.

It's not just individual liberties that are under threat, but political freedoms too. If our employers, for instance, know we are involved in a protest – as is the case in Hong Kong – then fear of losing out at work, or even losing our job, may cause us to decide not to go. Who needs censorship when facial recognition ensures we censor ourselves?

Inevitably, those in favour will say that "the innocent have nothing to fear", as they already do with online tracking. But the biggest threats are never felt by those commentators; they are reserved for those whose lives are already economically precarious, who can't afford to lose their jobs, or whose behaviour is somehow different to "normal": subcultures, migrants, those who identify as LGBTQ and other minority groups.

Rather than make us safer, facial recognition only amplifies existing prejudices and further entrenches existing power structures. It doesn't matter who wields it – whether state, private company or some seemingly benevolent entity – the technology itself is a fundamental threat to society.

Rather than keep us safer, facial recognition only amplifies existing prejudices and further entrenches existing power structures

So, what can we do about it? We already tightly control access to things that are harmful to us as individuals or as a collective, such as guns, chemicals and radioactive substances. We have legally enforceable regulations that ensure buildings are built properly and that manufactured products comply with safety, health and environmental standards. It seems only logical that facial recognition should be tightly regulated too.

But when it comes to regulating technology, governments have been lamentably slow to react, so much so that big tech operates in a kind of wild west. The tracking of where we go and what we see online is all but impossible to avoid, and aggressive user profiling and targeting are already being used to sway public opinion, most infamously in the UK's 2016 vote to leave the European Union.

There have been some moves to try to "get tough" with big tech, including anti-trust investigations into the activities of Google and Facebook, while the EU's GDPR has attempted to give users back control of their personal information and data online. But the advent of facial recognition, and of AI more broadly, renders these moves both tokenistic and woefully old-fashioned. The Hong Kong protestors' use of umbrellas – initially as a shield against police pepper spray, and latterly to obscure their identities from cameras – is surely more effective.

If we are to properly regulate facial recognition technology, as surely and urgently we must, then we not only require the political will, but we need to design new ways to conceive of and manage regulation itself. This means squaring the circle: avoiding counterproductive limits on technological endeavour while at the same time ensuring new technology works for the benefit of everyone.

Main image is by Studio Incendo.