A Bill of Rights for the AI-enabled world, regulatory challenges, and socio-technical risks: Jessica Newman, who leads the AI Security Initiative at UC Berkeley’s Center for Long-Term Cybersecurity, discusses recent AI developments in the United States and Europe with our Transatlantic Media Fellow Ekaterina Venkina in an interview for BigData-Insider.
Ms. Newman, you wrote that talk about a regulatory “gulf” between the EU and the US in developing AI standards is both “excessive and counterproductive.” Why do you think these differences are blown out of proportion?
Newman: There is this widespread public perception that the EU is the world's technology watchdog, and the US is the digital Wild West. But when it comes to AI, the reality seems to be more nuanced than that. The EU Artificial Intelligence Act (AIA) bans a small number of AI uses that pose unacceptable risks and introduces obligations for high-risk systems. But it does leave the majority of AI uses unregulated, only proposing voluntary guidelines to encourage responsible use. So, I think this is not an extreme regulatory proposal.
At the same time, the United States already has laws that do some of the work the AIA is trying to enable. Numerous cities here have already banned real-time biometric surveillance by law enforcement. The Federal Trade Commission has also indicated that AI products will be subject to its consumer protection laws. While the EU has more stringent privacy laws, the differences between Washington and Brussels on AI technologies are less fundamental than they appear.
Also, we are seeing numerous signals of shared interest in transatlantic cooperation and coordination on AI. The OECD principles on AI were endorsed by the US, Germany, and dozens of other like-minded countries. Both are members of the Global Partnership on Artificial Intelligence (GPAI). The recent launch of the EU-US Trade and Technology Council (TTC) is a really powerful opportunity for allies to work together on critical technologies. Both the US and the EU genuinely acknowledge that AI technologies can violate human safety and fundamental rights, and they share a sense of urgency about working together to prevent those dangerous and high-risk uses, as well as to proactively establish shared AI standards.
Eric Lander, who heads the White House Office of Science and Technology Policy (OSTP), and Alondra Nelson, Deputy Director for Science and Society in the OSTP, recently published an op-ed in WIRED. In it, they point out that the White House is preparing a “Bill of Rights for an AI-powered world.” What implications might this have for the U.S. strategy on AI?
Newman: I think it’s a good and important sign. It signifies a change in US AI policy. It’s not just saying: “We’re going to mitigate these AI risks,” but also saying: “We’re going to put Americans’ rights and values front and center to ensure that our emerging technologies support the kind of society we want to be part of.” It’s saying that it is unacceptable to have AI systems that will harm many people. We need to codify the concept that powerful technologies have to respect our democratic values. Right now, the “Bill of Rights” is in the public consultation phase, but they are indicating that it could include government procurement requirements, or new laws and regulations.
Do you see any willingness on the part of Silicon Valley to support Washington’s proposed “Bill”?
Newman: In general, there have been calls from the companies themselves saying, we need more guidance, we want regulation. The Facebook Papers make it pretty clear that the current accountability mechanisms for the big technology companies are not sufficient.
The AI “Bill of Rights” only applies to the U.S., but these companies operate across national borders ...
Newman: To some extent, even regional laws can shape how the companies act on the global stage. This is the case, for example, with the EU General Data Protection Regulation and the California Consumer Privacy Act. But this is not just about individual harm; it is also about harms to communities and to societies. There are societal-level risks that could potentially be at stake, such as the impact of social credit scoring on societies and on the state of democracies. I think the work that the National Institute of Standards and Technology is currently leading to develop an AI risk management framework will be impactful. It will provide a common language and set of practices that companies can work with to develop more robust and consistent AI practices. And I know that the EU is also watching how this work goes.
Olaf Groth, a professor of global strategy, talks about a Magna Carta for the global AI economy, a collectively developed AI charter of rights. Do you think something like that is possible?
Newman: I love this idea. My understanding of it is that it’s about a global document that asserts human freedom and control over opaque machine decisions. I think it certainly could be picked up by one of the different international forums. I would love to see that gain more traction; we need meaningful global frameworks of that kind.
Which forums do you think are currently the most successful platforms for exchange on AI between the EU and the US?
Newman: GPAI is doing great work on the research side. The standards that are being developed at the moment, and the way they are crafted, will change the course of technological development as well as its impacts around the world. So, having EU and US alignment on those AI standards could establish a powerful platform to help prevent the erosion of human rights through the spread of AI technologies.
At the first TTC meeting in Pittsburgh, a number of commitments were made that I think are exactly where the US and EU should be starting. They named mechanisms to share information and to coordinate on international standards. They talked about how to counter surveillance, disinformation, and social manipulation using AI technologies, and how to develop convergent control approaches for sensitive dual-use technologies. That is an area that can be very important. I think what is critical to the long-term success of the TTC in general is to have broad, multi-stakeholder engagement.
What are the biggest threats related to developments in AI? Are they threats to critical infrastructure, to the social order, or to democratic institutions because of the distortions algorithms can cause?
Newman: I think the most useful framework is to think about AI-related dangers as sociotechnical risks. One aspect is how the systems are trained and how biases emerge. But at the same time, it is humans who curate those datasets and decide what to build. If we only look at the technical dimension, we are already missing all of the social influences. And then, of course, there is the impact. We have to think about which countries and communities are being affected.
Do we currently have sufficient monitoring mechanisms in place to exercise control?
Newman: The OECD AI Policy Observatory is tracking developments in AI technologies and how different actors are responding to them. The Partnership on AI developed the Artificial Intelligence Incident Database, an effort to track when AI systems have accidents, make mistakes, or inadvertently harm somebody. My understanding is that independent researchers are responsible for this database at this point. So, for that to expand, we will need incentives for companies to be more transparent.
The U.S. Department of Defense (DoD), as some experts point out, is a strong proponent of the “centaur model.” How great is the risk of rapid militarization of AI in the U.S.?
Newman: The Joint Artificial Intelligence Center has a robust program to implement responsible AI practices throughout the Department of Defense. Right now, there is an ongoing pilot program to institute responsible AI procurement, to make sure there are frameworks to evaluate what data is used, and how, in the training and development of the AI technologies the DoD will purchase. The DoD also adopted a series of ethical principles for AI, following over a year of public consultations. Furthermore, I was interested to see that NATO recently released an AI strategy. I think this is another indicator that the US, as well as other powerful militaries, understands that we must not move so quickly that we risk deploying unsafe technology.
What do you consider the biggest weaknesses in the development of ethical standards for AI, and how can these be overcome?
Newman: I can highlight two key challenges at this stage. The first is around holding AI developers accountable for following the standards. The second challenge I would highlight is around representation. Decisions about AI ethics really need to be made by diverse stakeholders with broad expertise, and they need to be representative of all of the people they are going to impact. In particular, we need more meaningful engagement with the Global South than what we have seen so far.
Where could German-American AI cooperation leverage synergies? The Biden administration talked this past year about the concept of uniting “techno-democracies.” Is that possible? Or is this a race of lone wolves competing against one another?
Newman: My sense is that Germany has excellent AI researchers, institutes, and really impactful and exciting policy platforms. The Data Ethics Commission report, which established the pyramid risk model for AI, has been really influential.
There is a shared understanding that the values we embed into AI systems are then magnified around the world. So, I think there is a real interest in working with like-minded democracies to ensure that AI systems will respect universal human rights and democratic principles. Of course, there are also national interests at stake. And of course, the current trajectory does not ensure that all countries will be lifted up equally. There will need to be efforts to work with countries and regions outside of the EU and the US to make sure that entire parts of the world are not left behind.
Do you think that the developments in artificial intelligence that we are currently experiencing will have impacts far beyond mere technological aspects? Are we in the midst of a global transformation?
Newman: I think that we are in the midst of a cognitive revolution. In certain industries, we are already losing some amount of control over how algorithms affect people and societies. We’re seeing this in how they’re being deployed in social media. We’re seeing this in how the financial world is grappling with maintaining meaningful human control and ensuring human values as the interaction between humans and AI becomes deeper. The technology should work for us and promote the interests and values of humans.
In his book “Rule of the Robots: How Artificial Intelligence Will Transform Everything,” Martin Ford describes two possible future AI scenarios. He calls one of them the “Star Trek” scenario: people are highly educated, pursue challenges they find rewarding, and are valued for their intrinsic humanity. The second one is much more sinister and refers to “The Matrix”: inequalities in the real world become so skewed that the population decides to escape into alternative realities. What’s your prognosis?
Newman: I think we are currently heading closer to the darker scenario, with significant increases in inequities and inequalities within countries as well as across the world. This is really where we need to intervene, to push that future towards the first scenario, where people are able to pursue rewarding activities and lead meaningful lives.
A German version of this interview was first published by BigData-Insider on November 29, 2021. Research for this article was made possible with the support of the Heinrich Boell Foundation Washington, DC.