President Biden issued an Executive Order on artificial intelligence last week that, while different in form and enforcement authority from the EU’s AI Act, targets many of the technology’s current risks. With policies soon to be in effect on both sides of the Atlantic, the EU and US have the chance to work together on implementation, enforcement, and the issues AI policy has yet to resolve.
In just a few years, AI has skyrocketed to the top of policymakers’ priorities. The industry is growing quickly, and AI is having a widespread effect on how we interact with technology, as well as on human rights, democracy, and the climate. The EU is taking a major legislative step with the AI Act, for which the EU institutions are currently negotiating a final text. For the US, the new EO focused on “safe, secure, and trustworthy” AI will likely be the most impactful AI policy this year. It addresses a long list of AI risks, though without the AI Act’s enforcement mechanisms.
An Issue of Global Importance
On the international level, both the US and EU have aligned against the use of AI to suppress human rights and threaten safety. Both have also sought, through their own subsidies, to increase domestic production of the microchips used in AI, among other technologies. The EO will not resolve any trade tensions, but it does call for cooperation with allies in managing AI’s risks and increasing community preparedness for the impacts of climate change. Ultimately, the US and EU have more in common than not on AI.
The EU and US have already integrated their AI priorities into negotiations within international organizations, like the UN, where a number of efforts are underway to shape AI policy. The UK hosted a multilateral AI meeting last week, at which Germany, the US, the EU, China, and a number of other countries agreed to work together on challenges including human rights, explainability, and fairness. But AI is everywhere: Global South countries are implementing AI systems to address local challenges, yet have less opportunity to shape AI policy. As an executive action, the EO could not commit new money toward AI priorities in the US global development agenda, but it did commit the US to incorporating AI principles into global development work. While a helpful step, the US and EU should work more broadly with global actors on AI risks.
Human Rights and Democracy
AI can be used to conduct surveillance through facial recognition, reinforce discrimination and bias, and shape decisions and opportunities in housing, healthcare, and employment. The EU’s AI Act will likely ban some of the riskiest AI uses, such as manipulative AI that distorts behavior and social scoring that leads to unfavorable treatment. Rather than banning certain AI technologies, the EO focuses on increasing transparency by requiring the organizations behind some of the most powerful AI systems to share their safety testing results with the government. It also encourages the Federal Trade Commission, the primary US consumer protection agency, to use its existing authority to protect people from the harms of AI. The EO will also require a number of studies and the development of non-binding standards. Overall, the EO spreads responsibility for addressing the risks of AI across a number of agencies, where implementation can vary.
AI models are built on personal information, and, in turn, AI can be used to target ads based on that information with greater precision. The US has not yet passed a general privacy law akin to the EU’s GDPR. Nonetheless, the EO expands the federal government’s internal efforts to apply privacy principles and privacy-enhancing technologies. The EO may also serve as a reminder to Congress of the importance of privacy legislation; the EU’s GDPR, by comparison, has been in force for over five years.
Generative AI - the AI systems that can make images, videos, and audio - is at the heart of the current boom in AI policymaking. Generative AI tools have been used for cruel purposes, such as students generating fake nude photos of their female classmates. The technology also has broader social implications. Synthetic disinformation and misinformation have been widely shared around elections to mislead voters. Similarly, synthetic content has created distrust in reporting on the conflicts in Ukraine and the Middle East. While disinformation existed long before AI, AI has made deceptive content easier to create and spread.
The EU is in the early stages of using the Digital Services Act (DSA) to address the challenges disinformation poses to public discourse. The European Commission warned X (formerly Twitter) that it was allowing manipulated images to circulate during the early days of the conflict in the Middle East. The US has no such national law, but the EO takes a step toward addressing the influence of synthetic content: it promotes the labeling of synthetic content via watermarking and detection. However, even if some synthetic content is watermarked, much of it will continue to go unlabeled, so more solutions will be needed.
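To make the watermarking point concrete, the sketch below illustrates, in rough form, one statistical approach discussed in the research literature on AI text watermarking (a keyed “green list” scheme). It is an illustrative assumption, not anything the EO itself specifies: the key, function names, and 0.60 threshold are all invented for the example. A watermarking generator would nudge its word choices toward a keyed list; a detector then checks whether a text contains far more “green” words than chance would produce.

    import hashlib

    def is_green(prev_token: str, token: str, key: str = "demo-key") -> bool:
        # A keyed hash splits token pairs roughly in half; a watermarking
        # generator would preferentially pick "green" tokens while sampling.
        digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
        return digest[0] % 2 == 0

    def green_fraction(tokens: list[str]) -> float:
        # Fraction of tokens that land on the green list, given their predecessor.
        pairs = list(zip(tokens, tokens[1:]))
        if not pairs:
            return 0.0
        return sum(is_green(p, t) for p, t in pairs) / len(pairs)

    def looks_watermarked(text: str, threshold: float = 0.60) -> bool:
        # Ordinary text lands near 0.5 by chance; only text produced by a
        # generator biased toward green tokens will sit well above that.
        tokens = text.split()
        return len(tokens) > 20 and green_fraction(tokens) >= threshold

The policy limitation noted above falls directly out of this design: detection only works when the generator embedded the bias in the first place. Text from an unwatermarked model scores near chance, so the detector has nothing to find.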
The Future of AI Policy
As AI is more widely adopted, its environmental impacts—both positive and negative—will grow. The EO supports the use of AI for climate resilience, mitigation, and preparedness. Those are valuable goals, but operating AI models is itself energy intensive and releases significant amounts of carbon, not to mention the impacts of mining the resources behind AI hardware. The AI Act creates certain transparency requirements for AI operators, but policymakers in both the US and EU will need a stronger focus on tackling the adverse environmental and climate impacts of AI.
While the Biden EO is broad and generally aligns with the EU’s AI efforts, there are still areas for policymakers on both sides of the Atlantic to address. There remains a need to include the expertise of more women and Global South participants in AI development, so that AI models are built more inclusively and with less bias. US and EU policymakers have an opportunity to help these actors participate more effectively in the growing AI market, while developing the enforcement tools to tackle the human rights, democracy, and climate challenges of AI.