“It doesn't have fundamental rights front and center”: Sarah Chander on the EU’s new AI draft regulation

Interview

The wording in the EU's draft AI legislation is strong, but Chander from European Digital Rights (EDRi) says the proposal ultimately centers the needs of businesses instead of people.

The European Commission has revealed a draft of its highly anticipated artificial intelligence regulation. The legislation bans facial recognition in public spaces (with exceptions) and identifies “high risk” areas – including educational training, law enforcement, and migrant and border control management – that will be subject to further regulatory scrutiny. The proposal cements the EU’s bid to be the world leader in technology regulation and is an important part of Europe’s “third way” approach to digital policy.

But civil rights groups say that the proposal is too permissive and does not go far enough in protecting vulnerable communities. I spoke with European Digital Rights Senior Policy Advisor Sarah Chander about the loopholes in the new regulation, how conversations about algorithmic bias can obscure the greater context, and the key points of the battle ahead.

Angela Chen: The initial reaction to the draft regulation from civil rights groups seems to be that it has too many loopholes. Would you agree with that assessment, and could you clarify what those loopholes are, especially regarding the ban on facial recognition?

Sarah Chander: In wording, it is a ban. However, there are exceptions – which for many people invested in a ban on the use of facial recognition in public spaces means it is not good enough, particularly considering how broad the exemptions are at the moment.

Generally, the exemptions are around anti-terrorism and counter-terror measures at the EU level. Many racial justice organizations have pointed to the fact that when we look at counter-terror measures across Europe, we see a potentially very discriminatory application of all types of counter-terror policies, particularly the over-profiling of Muslim communities.

And there’s another exemption that is even wider: it covers “serious crime.” That list of crimes is huge, all the way from rape and murder, which are concrete, to things like “swindling,” where I don’t even know what they mean.

Angela Chen: Are there other areas of disparate impact that civil rights experts worry about when it comes to these exceptions?

Sarah Chander: This concern about disparate impact goes across the whole range of different types of “high-risk” AI. It’s not just for Muslim communities but for Black communities, Roma communities, people of color. There’s a context that we need to view these things in when we’re talking about deploying various different surveillance technologies – not just facial recognition but also predictive policing systems and even individual risk assessments in the criminal justice system more broadly.

We take a lot of arguments from the U.S. as foresight. We see how predictive policing systems have impacted Black and brown communities, we see how [Immigration and Customs Enforcement] have collaborated with various data-driven organizations to step up their deportation rampage of undocumented communities. Many of these concerns are valid here too if these systems are rolled out more.

There are other technologies that we need to be concerned about too, particularly technologies that purport to automate the recognition of very sensitive identity traits, like race, disability, and gender identity. These are not all banned in the draft regulation and are not even classified as “high risk.” Not only are they hugely invasive of privacy, but you could also imagine, depending on the political regime of the day, various problematic uses of automated race recognition, for example.

Sarah Chander

Sarah Chander leads EDRi's policy work on AI and non-discrimination with respect to digital rights. She is interested in building thoughtful, resilient movements and looks to make links between digital and other social justice movements. Sarah has experience in racial and social justice; she previously worked in advocacy at the European Network Against Racism (ENAR) on a wide range of topics, including anti-discrimination law and policy, intersectional justice, state racism, racial profiling, and police brutality. Before that, she worked on youth employment policy for the UK civil service. She was actively involved in movements against immigration detention. She holds a master's in Migration, Mobility and Development from SOAS, University of London, and a law degree from the University of Warwick. Twitter: @sarahchander

Angela Chen: Another criticism I’ve seen is that the draft allows developers of high-risk technologies to self-assess.

Sarah Chander: While this “high-risk” language sounds really strong, it requires only an internal self-assessment by the developers themselves. So the people who profit get to determine whether they’ve sufficiently complied with the transparency or data governance requirements, which are already not very specific.

It points to the general ideological underpinning of this regulation. It's a very commercially minded regulation. It doesn't have fundamental rights front and center, even though this is the claim.

It very much governs the relationship between providers (AI developers and companies) and users (in this case being companies, public authorities, governments) as opposed to governing the relationship between the people or institutions deploying the AI and the people affected by them – which is what I think many people invested in human rights and social justice would have wanted to see. We want to know, what rights do I have when I’m interacting with an AI system? What rights do I have if I’m wrongfully profiled or overly monitored by an AI system?

Angela Chen: Is the goal then to have better safeguards and more specific requirements or is the goal a ban?

Sarah Chander: I can’t profess to have researched every particular high-risk use in detail, but many of them should be banned. Predictive policing is a really good example. Many people believe that if you de-bias predictive policing systems, they will no longer profile and lead to the over-policing of racialized and poor communities. I disagree. Because such systems are steeped in a broader context of racial inequality and class inequality, there is no way you can make a technical tweak or slightly improve the dataset such that discriminatory results will not ensue from the use of the system. And this leads me to believe that it should be banned. This is one of the areas where the bias debate can be a little bit obscuring.

There may be other uses on that list that could potentially be fixed via safeguards. But I do think the very nature of classifying something as “high risk” should mean it is subject to external conformity checks, not internal ones. Otherwise, why bother framing it as “high risk” if you have complete trust in the entities developing and profiting from these systems to assess their own compliance?

Angela Chen: What interaction does this draft regulation have with the Digital Services Act?

Sarah Chander: There were multiple points in the regulation where they said they’re specifically not governing what comes under the Digital Services Act. But one potential area where you see overlap is in the prohibitions and this reference to systems that exploit people. In the final version, it’s prohibiting exploiting people specifically on the basis of their age or disability, but you have to show there’s physical or psychological harm that ensues from their exploitation – which is weird wording because it's basically saying you can exploit people on the basis of these things as long as it doesn't cause physical or psychological harm.

However, this previously was broader. It wasn’t limited to those things. It was [about] exploiting people’s vulnerabilities, which made me think about targeted advertising. Does this mean a ban on targeted advertising? But we’re just not sure exactly how this intersects and what the consequences of that are.

Angela Chen: This draft regulation will be debated for years to come. What do you think will be the biggest battles?

Sarah Chander: The prohibitions and the list of high-risk technologies were determined by the European Commission, which is an unelected bureaucratic body, highly unrepresentative in terms of our continent’s demographics, highly elite in many ways. They have determined in this proposal not only what constitutes prohibitions and what should constitute “high risk,” but also that they alone will be the overseers of what is categorized as “high risk” in the future. There’s no democratic way in which civil society or people affected could contribute to the categorization of something as “high risk.” I think that will be a key battleground for us. Particularly considering how AI will change, how do we make that process – of arguing for what should be banned and what should be “high risk” – inclusive and democratic? How do people affected have a say in that process?

Angela Chen: It seems like the EU wants to promote the wider adoption of AI generally. Is this too broad a goal?

Sarah Chander: The generalized uptake of AI in any sector should not be a policy goal in and of itself. The only thing it promotes is the extension of the private sector into the public sector, which is something I think we should contest. There should at least be a caveat saying, “adopt these so far as they benefit people and comply with human rights” – and those caveats we don’t see.