Poland

Technology is no guarantee of equality

Jędrzej Niklas of the London School of Economics talks to Bartłomiej Kozek about how algorithms can perpetuate discrimination and argues that they should not be left in the hands of IT people.

This interview was originally published in the Green European Journal.

***

Bartłomiej Kozek: What are algorithms and how can they influence our behaviour as consumers and citizens?

Jędrzej Niklas: Algorithms are mathematical models – sets of behavioural rules that can steer a person towards certain decisions. You can think of them as the engines of IT systems, which determine your creditworthiness or the content of your Facebook feed.

They are a bit like a meat grinder: you put data in and, depending on what you use them for, the final product comes out at the other end. Their applications are increasingly advanced – they are used to screen job applicants in the recruitment processes of large corporations offering attractive jobs, or to identify passengers who may pose a terror threat.
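To make the ‘meat grinder’ image concrete, here is a minimal, purely hypothetical sketch of such a decision rule in Python: applicant data goes in one end, a yes/no decision comes out the other. The feature names, weights, and threshold are invented for illustration and do not correspond to any real scoring model.

```python
# A minimal, hypothetical sketch of an algorithmic decision rule:
# applicant data goes in, a decision comes out. The feature names and
# weights below are invented for illustration only.

def credit_decision(applicant: dict) -> bool:
    """Return True if the (fictional) model approves a credit application."""
    weights = {"income": 0.5, "years_employed": 0.3, "existing_debt": -0.4}
    score = sum(weights[k] * applicant.get(k, 0) for k in weights)
    return score > 10.0  # arbitrary threshold chosen by the model's designers

print(credit_decision({"income": 30, "years_employed": 5, "existing_debt": 8}))
```

Every choice in such a model – which features count, how much weight they get, where the threshold sits – is made by people, which is exactly why the output is never a neutral fact about the applicant.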


Why should we be interested in them?

Algorithms are increasingly omnipresent – both in the public and the private sector. Arguments for their use include efficiency, efficacy, and even the aura of modernity that surrounds them (and appeals to decision-makers). A machine is considered to be more objective in making decisions than a human, which is important in the case of risky decisions or where there’s a need to neutralise our conscious or unconscious biases.

A machine is supposed to be free of biases, but that’s not the case.


Why?

Algorithms are usually used to manage large groups of people or large sums of money. They categorise people and sort them into groups, which may lead to discrimination. Those mathematical models are created on the basis of certain assumptions, defined by humans. Therefore, algorithms can never be fully free of human presumptions or biases.

Automated IT systems operate within a wider system of institutions, the people who use them, and the people on whom they are used. At this stage, we don’t know enough about the interactions between the technology and those groups of people. However, publicly known cases suggest that harm is often done – and those harmed tend to be poorer people, ethnic minorities, and women.

In one well-known case, the British St. George’s Medical School created a recruitment algorithm on the basis of historical data. The problem was that the algorithm disproportionately screened out women and people with non-European names. The authors hoped to create an objective measure, but unwittingly incorporated the historical assumption – that women are not fit for medical practice – which had informed student recruitment in the past.
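As a toy illustration (not the actual St. George’s system, and with entirely invented data), the sketch below shows how a model that simply ‘learns’ from past committee decisions ends up reproducing their pattern: it admits the groups history favoured and screens out everyone else.

```python
# A toy illustration of how a model trained to imitate historical decisions
# inherits the bias in those decisions. All data below is invented.

historical_applicants = [
    # (has_european_name, is_male, was_admitted_by_past_committees)
    (True,  True,  True),
    (True,  True,  True),
    (True,  False, False),   # equally qualified, but rejected in the past
    (False, True,  False),
    (False, False, False),
    (True,  True,  True),
]

def learn_rule(records):
    """'Learn' the admission rate for each (name, gender) group from history."""
    stats = {}
    for name_flag, male_flag, admitted in records:
        key = (name_flag, male_flag)
        admitted_count, total = stats.get(key, (0, 0))
        stats[key] = (admitted_count + int(admitted), total + 1)
    return {key: admitted / total for key, (admitted, total) in stats.items()}

rule = learn_rule(historical_applicants)
# The model simply reproduces the old committees' pattern:
# European-named men are admitted, everyone else is screened out.
print(rule)
```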

Another glaring case was described by the ProPublica portal. It concerned a questionnaire presented to criminal suspects in certain US states, used to assess the risk of reoffending. When the system was scrutinised, it turned out that it discriminated against African Americans and Hispanic people.

In what way?

It included questions about criminal offences committed by family members, drug use, and place of residence (city district) – and this in a country like the United States, where cities are heavily segregated and the risk of ending up in jail is far higher among large sections of ethnic minorities than among white people.


The answers were fed into a computer, which calculated scores, weights and exceptions. In some cases, when two people – one black, one white – had committed the same crime together, the system would wrongly predict which of them was more likely to break the law again.
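A hypothetical sketch of how such a questionnaire score might work is given below; the questions and weights are invented, not those of the system ProPublica examined. The point is that seemingly neutral inputs such as neighbourhood or family history can act as proxies for race in a segregated society.

```python
# A hypothetical questionnaire-based risk score. Weights and questions are
# invented; 'neutral' inputs like neighbourhood or family history can act
# as proxies for race where cities are heavily segregated.

QUESTION_WEIGHTS = {
    "family_member_convicted": 2.0,
    "drug_use": 1.5,
    "high_incarceration_district": 2.5,  # proxy for segregated neighbourhoods
    "prior_offences": 3.0,
}

def risk_score(answers: dict) -> float:
    """Sum the weighted answers; higher means 'more likely to reoffend'."""
    return sum(QUESTION_WEIGHTS[q] * float(answers.get(q, 0)) for q in QUESTION_WEIGHTS)

# Two people who committed the same crime, each with one prior offence,
# can get very different scores purely because of where they live
# and their family background.
person_a = {"prior_offences": 1, "high_incarceration_district": 1, "family_member_convicted": 1}
person_b = {"prior_offences": 1}
print(risk_score(person_a), risk_score(person_b))  # 7.5 vs 3.0
```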

Systems that search for and profile potential buyers of specific products are another example. They, too, can replicate inequalities. In Poland, this problem concerns companies which actively seek potential customers for shady financial services by targeting poorer people and pensioners on social media. In the US, institutions such as Trump University have targeted poor people and veterans. Facebook, on the other hand, allows advertisers to prevent property rental ads from being displayed to specific ethnic groups.

Advertising is increasingly targeted and increasingly difficult to resist – if only because the boundaries between advertising and information are getting blurred. Commercial IT models are out of control, protected by trade secret rules.


The problem seems to concern not only commercial algorithms, but also those used in public administration…

You can see that when you look at the profiling of the unemployed in Poland. It took an intervention by a civil society organisation to find out how the unemployed get categorised. The rules for assigning them to one of the three profiles were non-transparent (even though the kind of job offers you get depends on which profile you are in), and the Ministry of Labour would change the assessment criteria every now and then.

Australia had an equally opaque, and strongly criticised, system for granting or denying unemployment benefits. And the automatic system for unemployment benefit management in the state of Michigan ‘incorrectly’ accused 20,000 people of welfare fraud and cancelled their benefits. The state authorities had to pay over 5 million dollars in damages.

In the context of the algorithms debate, researchers are starting to raise issues such as social justice. Looking at the cases you mentioned, this seems to be an increasingly urgent problem…

Cathy O’Neil, the author of the excellent book Weapons of Math Destruction, demonstrates this with the example of the USA. In the US system, public schools are poorly financed and their students perform worse. Someone decided that the root of the problem was not the inequalities but bad teachers. A system was then created to assess teachers’ performance based on their students’ results.
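A crude sketch of a ‘value-added’ score of this kind is given below; the prediction formula and numbers are invented. The teacher is judged on the gap between students’ actual results and what a statistical model predicted for them, so a noisy year can look like a bad teacher.

```python
# A crude, hypothetical sketch of a 'value-added' teacher score of the kind
# Cathy O'Neil describes. The prediction model and numbers are invented.

def predicted_score(prior_score: float) -> float:
    """A toy prediction of this year's test score from last year's."""
    return 0.9 * prior_score + 5.0

def value_added(students: list[tuple[float, float]]) -> float:
    """Average of (actual - predicted) over a teacher's students.

    Each student is a (prior_score, actual_score) pair. A negative number
    gets the teacher labelled 'ineffective', even though the gap may reflect
    noise or circumstances outside the classroom.
    """
    gaps = [actual - predicted_score(prior) for prior, actual in students]
    return sum(gaps) / len(gaps)

print(value_added([(70, 68), (55, 60), (80, 74)]))
```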

There was a teacher in Washington DC who was held in excellent regard by parents and the head teacher, but the algorithm rated her very poorly. When she tried to find out what was wrong, it turned out that even obtaining the detailed data was extremely difficult. Ultimately, even though a ‘system error’ was detected, she lost her job anyway.

Thus, the algorithm acted as a judge sustaining an unjust order. The teacher found a new job in a private school and the students at risk of social exclusion lost a good pedagogue.


We already know why we should take interest in algorithms, but how can we get them under democratic control?

One idea is to establish separate offices to authorise algorithms before they can be marketed and to examine the practical consequences of their implementation – much like in the pharmaceutical industry, or like an agency that investigates traffic accidents, to use the comparison made by Ben Shneiderman, who studies the subject. A similar idea is advocated by the German justice minister Heiko Maas.

Not everyone will be delighted to see the state create a new office…

The problem here is not more bureaucracy. A ‘one size fits all’ approach could turn out to be a much more dangerous trap. Technologies implemented by corporations and public institutions function in specific contexts. The practical consequences of using algorithms are different in the financial sector, in the administration or in public services. The fundamental question is: what type of algorithms to regulate?

Regulating each of those sectors requires specialist knowledge. Google modifies its algorithms several times a year. Who would be competent to assess each of those modifications is a legitimate question.


So what is the alternative to creating a single institution to regulate the digital market?

We should choose a different starting point. In the system that already exists, we need to identify the elements of concern which require intervention, with regard to issues such as transparency or accountability of market players and public institutions. The key issues are the processes whereby algorithms are created, and the practical consequences of their application.

What we need today most of all is a paradigm shift in institutions such as courts of justice, courts of auditors, human rights bodies, or market regulators. They need to quickly learn to work together to solve those techno-political problems.

Mechanisms to network different offices and enable them to use their powers better will be crucial. If a new office is created, it should be tasked with coordinating such efforts.

It is also important to adjust current rule of law principles to the fundamental technological change taking place. It seems that due process and the principles of public participation and consultation can be effectively applied to the problems posed by digital technologies. It is also worth revisiting the 1980s and 1990s debates on legal IT.


Let’s come back to algorithms. Are there countries that have made more progress than others in terms of regulating them?

In France, the code of various calculators used by public institutions has been opened up in order to allow public scrutiny of their social justice implications. In the Netherlands, the academic community has come up with the idea of a special parliamentary committee in which MPs would discuss the issue with IT people.

All those solutions touch upon the fundamental problem: the need for transparency. Still, just opening up the code may not be enough – people today do not have the tools to assess the impacts of algorithms.

This is where civil society or investigative journalists could have a role. Such actors – e.g. the American ProPublica portal or Poland’s Panoptykon Foundation – have been effectively campaigning for the transparency of algorithms and are competent to assess their impacts. Attempts have been made at using data protection laws to that end.

It has been argued that data protection is not enough…

It is clear that the discrimination issue does not fit into the data protection discourse. In many contexts, IT systems may solidify pre-existing social structures and inequalities.

This is a challenge that requires a broader approach. We need to realise that algorithms are a social issue, and not merely a technological problem. Tackling it requires legal and political tools, and the problem should not be confined to technocratic debates. Politicians, bureaucrats and citizens should not give all power to the IT people.


Looking at the European Union’s institutions, one could think they are aware of the problems you named. But are they offering adequate solutions?

Next year, a large privacy law reform will enter into force, introducing new data protection rules. It marks an attempt at regulating corporate practices and gives data protection authorities the power to impose steep fines on corporations. It also introduces the obligation to carry out impact assessments of data processing. There is a good chance that those instruments can be effectively used not only for privacy protection, but also against unequal treatment.

The impact assessment obligation may, for instance, include the question of discriminatory data processing. Provisions on automated decisions and sensitive data may also be applied for that purpose. Data protection rules may be helpful in obtaining evidence of unequal treatment. Another issue concerns assessing IT systems from the point of view of human rights threats, e.g. threats to social rights.


So what needs to be done in order for the data protection reform to achieve those aims?

The algorithms and technology debate today is dominated by the data protection language. Yet that language does not cover all the issues posed by algorithms, such as discrimination in public services (e.g. profiling).

The European Data Protection Board that is being established right now could cooperate with Equinet, the network of anti-discrimination institutions dealing with human rights protection, to establish a common language or maybe even standards in this field. National data protection authorities could incorporate the inequality issue into their thinking about IT systems. This obviously goes beyond the current legal framework, but the General Data Protection Regulation offers some basis. Finally, businesses should include discrimination issues in the personal data impact assessments that form part of their certification and self-regulation tools.