
A Lawless Zone: Surveillance Technologies and the Police

From body-worn cameras to facial recognition, as technology and artificial intelligence race ahead and transform modern-day policing, regulation is left in the dust.

We sat down with Big Brother Watch’s Jennifer Krueckeberg to discuss how the police are utilising new technologies, the implications of this for civil liberties, and to what extent we can justify mass surveillance in the name of security.

Green European Journal: The United Kingdom deploys more advanced surveillance technologies than many other European countries as part of its policing and security. What kinds of technologies are used by the government and the police services and for what purposes?

Jennifer Krueckeberg: For a couple of years now, the police have been experimenting with facial recognition technology, deployed through surveillance cameras. The cameras scan a crowd and try to match individuals against images on a database and track them. Then, if the facial recognition software finds a match and the police think that a person in the crowd is wanted for arrest or is a potential criminal, they go out and stop them.

Another very prevalent technology is the body-worn camera. Pretty much every police force in the country now uses them. They were first introduced in the United States to make the police more accountable in cases of wrongdoing. However, the use of body-worn cameras in sensitive areas or situations, for example in cases of domestic violence, can be difficult or dangerous for the subjects being filmed. Of course, it’s good to have more transparency and accountability from the police, but there is currently not enough guidance on how and in which situations these cameras should be used.

Do we know how footage from body-worn cameras is used?

There are no clear retention periods for footage held as evidence. In our report on body-worn cameras, we found that neither the police nor the Crown Prosecution Service could say how often body-worn camera footage has been used in court, so it is not clear how valuable it actually is as evidence. The technology itself also comes with many risks. Some cameras transmit footage over insecure, easily hackable wireless or Bluetooth connections. Research also shows that the shape of the camera lens can make people appear bigger and more aggressive. When you watch the footage, you identify with the person behind the camera because you see the scene from their point of view, producing a potential prejudice against the accused that could have implications in court.

You spoke initially about facial recognition, which Big Brother Watch has been very active on. Why is this technology of particular concern?

The whole premise that the police only use facial recognition to find criminals and terrorists is highly questionable, because it is already used for other purposes. The police use it at football games and at protests. The police even used it at London’s Cenotaph war memorial on Remembrance Day to identify what they call ‘fixated individuals’. These are people who have mental health issues and a fixation on certain public figures such as the royal family. They are not criminals and are not violent, but the technology is still used to identify them and keep them away from events.

Everybody is under suspicion, even if they have done nothing wrong.

Because of the way facial recognition works, it cannot just look at one specific person. Anybody in the crowd will be scanned and a biometric measure of their face will be taken. The artificial intelligence algorithm will then try to match that image against other biometric images on a database. So the question is also: whose images are kept on that database? Quite often it includes custody images, but it is easy to obtain biometric images from elsewhere nowadays. They can be derived from CCTV footage, pictures from protests, or even an image from the internet.
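To make the matching step concrete, here is a minimal sketch in Python; the embeddings, watchlist names, and threshold are hypothetical stand-ins, not any real system’s values:

```python
import numpy as np

# Hypothetical watchlist: name -> 128-dimensional face embedding.
# Real systems derive such templates from custody images, CCTV stills, etc.
rng = np.random.default_rng(0)
watchlist = {
    "wanted_person_a": rng.normal(size=128),
    "wanted_person_b": rng.normal(size=128),
}

MATCH_THRESHOLD = 0.6  # hypothetical; tuning it trades false matches against misses


def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def scan_crowd(face_embeddings):
    """Compare every face in the crowd against every watchlist template.

    Every face is measured and compared, not just suspects' faces --
    the 'suspicionless' aspect discussed in the interview.
    """
    alerts = []
    for i, face in enumerate(face_embeddings):
        for name, template in watchlist.items():
            score = cosine_similarity(face, template)
            if score >= MATCH_THRESHOLD:
                alerts.append((i, name, score))
    return alerts


# A crowd of 1,000 passers-by, none of whom chose to be scanned.
crowd = [rng.normal(size=128) for _ in range(1000)]
print(scan_crowd(crowd))
```

The point of the sketch is structural: before the system can flag anyone, it must first take a biometric measurement of every single face in view.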

Studies have shown serious flaws in the accuracy of facial recognition. It performs badly with people who have darker skin and when it comes to identifying women. I tried it at an exhibition once and, as a dark-skinned woman, it did not identify me at all. My image was matched with that of another dark-skinned woman, but one who looked nothing like me. This weakness has been demonstrated in studies too. Here in the United Kingdom, facial recognition has been used twice at Notting Hill Carnival, an Afro-Caribbean festival. Using a technology that does not work effectively for dark-skinned people at one of their community events, an event which is already over-policed, has to be strongly scrutinised. The risk of misidentifying an innocent person there is much higher.
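A few lines of arithmetic show how a higher false match rate compounds at a large event; the rates and attendance figures below are invented purely to illustrate the disparity:

```python
# Hypothetical per-face false match rates for two demographic groups --
# invented numbers, chosen only to illustrate the disparity described above.
false_match_rate = {"group_a": 0.001, "group_b": 0.010}
attendees = {"group_a": 50_000, "group_b": 50_000}

for group, rate in false_match_rate.items():
    flagged = attendees[group] * rate
    print(f"{group}: ~{flagged:.0f} innocent people wrongly flagged")

# With equal attendance, the group the system handles worst absorbs
# ten times as many wrongful stops.
```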

Banksy: One nation under CCTV, Creative Commons

There seems to be a major question regarding the oversight and management of facial recognition…

It’s a completely lawless space. The police are using the fact that there are no laws regulating this area to rush this technology out and make all the decisions about its use.

Recently the European Union passed the General Data Protection Regulation (GDPR), which is in force in the UK – at least for the time being. Are the police bound to comply with it or does it apply just to businesses and private individuals?

The GDPR is a really good law. It responds to the technological developments that have changed everyday life and makes people aware that they have certain rights over their data. But there are exemptions for law enforcement and intelligence purposes, so the police can say that they need certain information to keep everyone safe and then they do not have to comply with the GDPR or its UK equivalent, the Data Protection Act.

Beyond its use in facial recognition, what does artificial intelligence mean for surveillance? 

Artificial intelligence is definitely becoming more prevalent. Durham Police have developed a tool called HART (Harm Assessment Risk Tool). HART uses machine-learning algorithms to assess people’s risk of re-offending and categorises them as low, medium, or high risk. The technology has to be fed policing data, and policing data tends to be very biased. If we took policing data from the US, where black and Latino communities are already over-policed, and fed it to an algorithm, the machine would simply recreate this bias. HART used to include a postcode variable that the algorithm would take into account. An assessment of someone’s likelihood to re-offend, and the decisions made on that assessment, were therefore also based on what area the person was from. Decisions made by the police based on algorithms could make or break a life. The risk of profiling, and the question of whether we really want machines to make decisions about humans, is something that society needs to deal with as a whole.
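As a hedged illustration of how a postcode variable lets historical over-policing leak into a risk score, here is a toy Python sketch; the postcodes, numbers, and thresholds are invented and bear no relation to the real HART model:

```python
from collections import defaultdict

# Toy 'historical' records of (postcode, re_arrested). Postcode "AA1" is
# heavily policed, so arrests there are recorded far more often -- the data
# reflects police attention, not underlying behaviour. All numbers invented.
history = (
    [("AA1", True)] * 60 + [("AA1", False)] * 40
    + [("BB2", True)] * 6 + [("BB2", False)] * 94
)

# 'Training' amounts to memorising the recorded re-arrest rate per postcode.
counts = defaultdict(lambda: [0, 0])  # postcode -> [re_arrests, total]
for postcode, re_arrested in history:
    counts[postcode][0] += int(re_arrested)
    counts[postcode][1] += 1


def risk_band(postcode):
    re_arrests, total = counts[postcode]
    rate = re_arrests / total
    return "high" if rate > 0.5 else "medium" if rate > 0.2 else "low"


# Two otherwise identical people, differing only in where they live:
print(risk_band("AA1"))  # high -- inherits the area's policing history
print(risk_band("BB2"))  # low
```

The model never sees the individual at all; it simply hands back the policing history of their neighbourhood dressed up as a personal risk score.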

The police introduced facial recognition to protect us from criminals but now it is used against protesters and people who have mental health issues.

With the adoption of more and more surveillance technologies, how has the way the police operate changed?

What is quite significant is how the police have shifted from focusing on certain suspects to looking at everyone in the hope of finding individuals who have done something wrong. This is suspicionless surveillance. Take the use of facial recognition at a public event, for example. Most people around will be innocent, while the police just hope that someone they are looking for will fall into their net. But if someone is already flagged for arrest on their database, say for a sexual offence, why wait for them to turn up at a concert? This is a really ineffective way of pursuing a criminal. To find specific people, you cannot use a technology that has to look at everybody before it can make a decision.


People live more and more of their lives online or through their phones. How is this online space policed?

In terms of the general legal framework in the United Kingdom, things changed towards the end of 2016 when the Investigatory Powers Act was passed. This law was a real first in its intrusiveness, even compared to the United States. It allows for the collection of everybody’s communications data, whether or not they are suspects. Your internet provider and your mobile and landline phone providers are all required by law to collect data on every phone call you make and every site you visit online. They do not collect what you said or the content of the sites you visit, but they do store which sites you visit, how long you stayed on them, whom you have been speaking to, and for how long.

Service providers are required to store this information for 12 months, and government agencies or departments, such as GCHQ [Government Communications Headquarters, the UK telecommunications intelligence agency] or the police, can apply for a warrant to access it. The problem is that the whole population is targeted, not just the communications of known terrorists, suspects, or criminals. Everybody is under suspicion, even if they have done nothing wrong. There are not enough safeguards: there is a commissioner and a process for applying for a warrant, but the oversight is very murky.
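To make concrete what ‘communications data’ covers, here is a sketch of the kind of record a provider might retain under such a regime; the field names and values are illustrative guesses, not the statutory schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class ConnectionRecord:
    """Metadata only: who, where, when, and for how long -- never the content."""
    subscriber_id: str   # hypothetical customer identifier
    destination: str     # a domain visited or a number called, not the pages read
    started_at: datetime
    duration: timedelta


record = ConnectionRecord(
    subscriber_id="customer-0042",
    destination="example.org",
    started_at=datetime(2018, 6, 1, 21, 15),
    duration=timedelta(minutes=34),
)

# Providers must retain such records for 12 months; agencies can seek
# warrants to query them.
RETENTION = timedelta(days=365)
print(f"{record.destination}: retained until {record.started_at + RETENTION:%Y-%m-%d}")
```

Even without content, a year of such records reconstructs a detailed picture of someone’s relationships, habits, and interests.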

What role does the tech industry have in the adoption and use of surveillance technologies by government and the police?

Business and government often feed off one another. Surveillance technology has now developed very far and businesses are keen to get the police to use it. The technology is expensive and very good business, so companies have a responsibility too. Technology sold by private companies drives the privatisation of policing and of other functions of government. Governments nowadays rarely develop their own systems or tools, and the logic of privately owned algorithms can fall under trade secrets or copyright. So, for accountability and transparency, it is very hard to get inside these technologies and track how they make their decisions. The decisions made by police or intelligence services can affect people’s lives drastically and, if you are accused, proving your innocence can be very difficult when charges or evidence are based on proprietary technology.

One of the key defences of surveillance is that sacrificing a bit of privacy is necessary to protect security. What do you make of this way of looking at surveillance?

People have an understandable desire to be safe, and the police and other institutions are there to protect them. But taking an approach where everybody is under general suspicion is neither helpful nor effective. Going through all this data generates a huge amount of work and does not deliver what it promises. We have reached a point where we need to decide how much technology we want in our lives and how to regulate it to ensure transparency and oversight.

We need to finally listen to people’s voices and stop blindly rolling out these technologies.

Maybe people are happy with the government they have right now and trust it to not abuse this information, but governments can change. Information could be used to prosecute people who don’t have the right political ideas or to target marginalised groups. The potential uses against migrants are easily conceivable under a kind of government that does not even seem that far away. The more this goes into different areas, the more worrying it is. The police introduced facial recognition to protect us from criminals but now it is used against protesters and people who have mental health issues. This is already an overstep. It sounds very dystopian when someone says “What happens under a different government?”, but we can already observe this function creep into different areas of our lives.

In your opinion, has the expanding use of surveillance technologies changed people’s behaviour, whether it’s their willingness to participate in protests or in other areas of social life?

If we look at Twitter, Facebook, or other platforms, there is already self-censorship out there. People are more aware that they can be seen online by their employer or other people, and they are more careful about what they say. When we speak to activists who participate in protests, they are always quite worried about these technologies and their uses. It does have a repressive effect: if you see the police with a van capturing your face, you might be more wary of saying something out loud.

“CCTV in Operation”: Big Brother is watching, Creative Commons

The way people are categorised also affects their readiness to speak out. In the United Kingdom, some people are classified as “domestic extremists”. The Green Baroness Jenny Jones has been classified as a domestic extremist because she is an environmentalist. Classifying people in this way, together with the deployment of surveillance technology, definitely has a chilling effect on people’s willingness to protest and speak out against things they are not OK with.

Do you think, in terms of public debate and awareness, especially after the SpyCops scandal, which saw undercover police agents maintain long-term relationships and even have children with peaceful activists, we’re seeing a questioning of surveillance-based policing?

At the beginning, some people thought that technology would make them safer, and this is what many politicians also hoped to achieve. But with more and more revelations coming out, people are becoming more aware of what data actually means and what their online life means for their real life. Then you have the extreme example of SpyCops, who destroyed people’s lives just because of their political activism. Environmentalists and other activists were spied on simply because of their campaigning work. People are starting to recognise that not only criminals or terrorists are being targeted, but also people who have done nothing wrong. Everyone’s favourite sentence is “nothing to hide, nothing to fear”, but it’s not about having something to hide; it’s a question of protecting your own security and rights.

To what extent are surveillance technologies being transferred to other European countries and are there any lessons from the UK?

With increasing security measures all across Europe, many other countries are starting to gear up. Germany, for example, started to trial facial recognition cameras at Bahnhof Südkreuz station in Berlin last August. And the federal state of Bavaria has changed its law to vastly expand the police’s powers, making it possible for the Bavarian police to increase suspicionless surveillance, especially online, and potentially to introduce new technologies like facial recognition, body-worn cameras, and drones.

Stop Watching Us, Berlin, 2013. Flickr, some rights reserved.

We can learn from the UK that these technologies are no longer a dystopian sci-fi fantasy; they have very much arrived. Instead of keeping us safe, they often feed inequalities and threaten civil liberties, the very fabric of European democracies. Both the facial recognition trial in Berlin and the new law in Bavaria have been met with huge resistance. In Munich, 30,000 people took to the streets. This shows that we need to finally listen to people’s voices and stop blindly rolling out these technologies.

***

This article was first published in the Green European Journal. It has been published here with permission.