Bias in AI – There Is No Quick Technical Solution

October 2018

Professor Kate Crawford is a leading academic on the social implications of artificial intelligence and related technologies. She is the co-founder and co-director of the AI Now Institute at NYU, the first university institute in the US dedicated to understanding the role of advanced technologies in social systems. She is a Distinguished Research Professor at New York University, co-appointed in Law and Engineering, and a Principal Researcher at Microsoft Research New York.


Dr. Crawford, what is artificial intelligence?

The definition has changed over the years, but the view I like these days sees AI broadly. There are three aspects: the technical aspect, the social aspect, and the infrastructural aspect that learning machines bring with them. And all three are important.

The social debate ranges from the great opportunities AI brings for all of us to concerns about surveillance and loss of privacy. Where do you see the debate?

In my view, it is not a single debate, but many debates on various aspects. These new technologies affect our daily lives in many different places. They affect everything from how we work to our health systems, to education, and even to criminal justice. Artificial intelligence already has an impact on our work, for example through surveillance at the workplace or so-called nudging, which influences the motivation or attitudes of employees. All of these phenomena are exerting an influence before many people even notice, and long before robots replace humans at work.

The ownership of data and algorithms results in a new distribution of power. To what extent does the technology of AI reinforce asymmetrical power structures?

In our daily working life, technologies are increasingly introduced to track what employees are doing: how often they send e-mails, how many people they have meetings with, how many external communications they have, how often they get up from their desks each day, and so on. In some workplaces, people are even monitored with cameras on top of their laptops that track everything they are doing. They count productive hours and breaks. This is incredibly invasive. And it creates an asymmetry of power between employers and employees. So we need to have a debate on how we structure power and how we can ensure that artificial intelligence does not strengthen the power of the already powerful.

When you say that the ownership of data and algorithms creates power, what does that mean for the global distribution of power, and which countries in your view have it?

The geopolitical scenario of AI is very important right now. There is currently a strong narrative according to which a battle is under way between two superpowers: the U.S. on the one hand and China on the other. But I am concerned about the war rhetoric emerging between these very different cultures. China has a very different AI culture than the U.S., and Europe has yet another, quite different culture.

What could be the role of Germany and Europe?

During my time here in Germany as a fellow of the Robert Bosch Academy, I have followed the discussions about AI closely. Here in Germany there is a great opportunity to conduct the debate in a way that produces technology that is accountable, fair, and transparent, and in which responsibility is clearly assigned. People should know when they are being judged by an AI system, and also how that decision came about. In the U.S. we are not yet holding this debate to the same extent, and that will become a problem for civil society. I think Europe has a moment to decide whether it is going to become a player in AI and on what terms it will play.

A major debate at present concerns the distorted decisions made by machine learning systems, the so-called bias. This leads, for example, to racial discrimination. Where does this debate originate?

There are many examples of discriminatory decisions made by algorithmic systems. Women in the U.S. were not shown highly paid jobs in Google advertising; software used in the U.S. criminal justice system rated the recidivism risk of black prisoners higher. Predictive policing software, which has been shown to be ineffective at reducing crime, leads to over-policing of low-income communities. In health care, people suddenly no longer received their treatment because an algorithmic system decided that they no longer needed it – without a human being taking part in the decision. These problems have been with us for some years, even with simple AI systems. And when it comes to very advanced AI systems, we have problems with the traceability of decisions.

The debate finally seems to have reached the big companies. Why only now?

I have been working on the topic of bias in machine learning and inequality for several years now. And indeed, something is changing. In the last two years, we have seen the industry take this issue very seriously. And why is that? Because these systems now shape the everyday lives of many people. And if they then discriminate against or disadvantage individuals, the companies behind these technologies face real risks: harm to their customers and damage to their public reputations. Moreover, the technology sector bears a primary responsibility to ensure that its tools are not causing harm.

Are companies and science responding appropriately?

I see a worrying pattern emerging: the quick fix. The idea is that we can simply build a mathematical idea, a formula for fairness, into technical systems. That will fail, because when we talk about these biases, we are talking about very deep, structural inequality that emerges from a long history. Our AI systems are trained on data from the past, a past in which bias and discrimination are deeply embedded. We need to think broadly, socio-technically, because this is a much larger debate that must not be held only in computer science laboratories. We need a much more interdisciplinary approach. We need to involve politicians as well as sociologists, political scientists, philosophers, and historians. The question right now is: how do we want to live, and how should our technical systems support this? This is the biggest challenge of the coming years.

Will there ever be a world without these biases, without discrimination?

It is certainly fair to say that in the history of mankind there has always been discrimination and injustice. But there is a difference: in the past, people produced social change by rising up against a system they thought was unjust. You can do that when you can see a system and demand a different way of living. With complex AI systems, however, we are often not even aware of which systems are at work, and even when we are, their decisions are usually hard to understand. This makes it extremely difficult for those affected to defend themselves. We must therefore insist that these systems be accountable and transparent. But we also know that there is no quick technical solution. We must accept that these systems will always produce forms of discrimination, and we will have to decide in which areas that is acceptable. If we cannot guarantee that these tools will not generate discrimination, then we should not use them.

How can we know whether there is bias before implementing AI systems?

At the AI Now Institute, the first university research institute dedicated to understanding the social implications of AI, we are researching how a system can be tested early and systematically, so that the extent to which it discriminates against different groups can be understood from the outset and over time. We have also developed a framework for Algorithmic Impact Assessment that can help public agencies and the people they serve monitor and understand AI and algorithmic decision-making systems and decide whether a system should be used at all. We also need real accountability inside tech companies, which can include pre-release trials and evaluation of AI systems, so that we know how they work before they are released on live populations. And we need regulation of the most invasive and error-prone tools – such as facial recognition. That may also mean there are areas in which we shouldn’t use AI tools at all until they are shown to work better – for example, in criminal justice.

Can AI then reverse inequalities in the real world or at least draw our attention to them?

Whether AI can be used to uncover discrimination in everyday life is a very exciting question. We need a new socio-technical field of research that deals with these societal consequences instead of a purely technical approach that looks only at how an algorithm can become even faster or more efficient. This debate brings us back to the question: What do we want for our society? And this is a complex problem. Image search is an example. If we do not want mainly men to appear in the search for a “doctor” and mainly women for “nurse,” then we must decide how many men and women should appear in each case. How do we represent the world fairly? These questions are currently being discussed within tech companies, but this debate must be open to the public. These are political discussions that have a great deal of influence but are currently taking place behind closed doors.

What does the future of AI hold, let’s say ten years from now?

(She laughs.) I am not making any predictions in this area. AI changes too quickly and too much. But more importantly, I am convinced that we spend too little time learning from history. This is much more important than asking what AI will bring in the next few years. The way to achieve a better future with these technologies is to learn from the past.
