Europe’s Approach to Regulating AI Seen as Too Hasty for US


The EU’s draft law on AI regulation leaves key questions unanswered, US policymakers say.

Former California Democratic Rep. Jerry McNerney, who co-chaired the Congressional Artificial Intelligence Caucus, at a campaign event.

Dragos Tudorache, a key member of the European Parliament who shepherded the European Union’s legislation on artificial intelligence technologies to a vote, recalls reaching out in 2021 to then-Rep. Jerry McNerney, who was co-chair of the Congressional Artificial Intelligence Caucus. 

Tudorache, a lawyer and judge who previously served as Romania’s minister for communications and digital society, wanted to tap U.S. policymakers’ thinking on AI’s impact on workers, industry and society at large. McNerney, who holds a Ph.D. in mathematics, had proposed legislation in 2018 that would have paved the way for AI expertise in government agencies. A 2020 version of the bill, called the AI in Government Act, passed the House but not the Senate.

Similarly, as early as 2018, the European Commission, the executive arm of the 27-member bloc, had begun laying out plans to harness the power of AI while protecting people’s rights. At the time of Tudorache’s call, only a small group of lawmakers on both sides of the Atlantic seemed focused on the benefits and risks of what was then considered early-stage technology.

McNerney left Congress in 2023 after 16 years as a lawmaker, citing “grotesque and ugly” partisanship. When asked last September what his colleagues got wrong about AI, he told Roll Call, “They basically don’t understand what it means. Is this some big mind that’s going to take over the world? No. It’s going to enhance people’s productivity. But there is a chance that it could cause displacements in the workforce.”

McNerney, who chaired the caucus for five years, in an interview Tuesday recalled the conversation with Tudorache, saying U.S. lawmakers and their staff showed a “tremendous amount of interest in learning about AI.” But McNerney, now a senior policy adviser at the law firm of Pillsbury Winthrop Shaw Pittman LLP, noted that attendance at caucus meetings dropped off “pretty significantly” after the COVID-19 pandemic and the Jan. 6, 2021, attack on the Capitol.

Since then, Europe has marched forward. The EU began considering AI regulations because “self-regulation, self-compliance and discipline by companies were no longer sufficient,” Tudorache said in an interview in his Brussels office. “I think that’s where we have served our purposes better by anticipating some of the impacts,” as the rest of the world scrambles to catch up.  

As the European Parliament took up the proposals to turn them into legislation, Tudorache was leading an effort to help educate fellow lawmakers by arranging briefings with experts and stakeholders to understand how AI would affect “everything from agriculture to space exploration,” he said.

“We started asking questions about the technology, and the base of people who understood concerns about AI grew to the point” that the European Parliament in June adopted draft legislation, the EU AI Act. “Whereas in the U.S., I think it remained that handful of lawmakers, until ChatGPT came out,” Tudorache said, referring to the generative AI chatbot released by OpenAI that jolted U.S. lawmakers into action.

Since April, Senate Majority Leader Charles E. Schumer, D-N.Y., has increasingly focused on the issue.

In May, a group of more than 350 researchers, executives and engineers working on AI systems said that “mitigating the risk of extinction from AI should be a global priority.” The warning followed the rapid development and deployment of so-called large language models, AI systems fed vast quantities of text and images to teach them how humans think and express themselves.

In June, Schumer unveiled a plan that would “protect, expand, and harness AI’s potential.” He has tasked a small group of lawmakers — including Sens. Martin Heinrich, D-N.M.; Todd Young, R-Ind.; and Mike Rounds, R-S.D. — with drawing up proposals. The majority leader already has held three closed-door briefings for lawmakers on the technology’s potential and dangers.

Starting Sept. 13, Schumer plans to host as many as 10 forums featuring experts and civil society groups in which senators can participate. In the House, Speaker Kevin McCarthy, R-Calif., has tapped an informal group of lawmakers led by Rep. Jay Obernolte, R-Calif., a computer scientist by training, to brainstorm ideas.

The European approach

As the U.S. gears up to tackle the fast-growing technology, lawmakers, key congressional aides and U.S. experts see the European approach to regulating AI as too hasty and too narrow in its scope. 

Tudorache said he hopes the EU AI Act, now being discussed in the Council of the European Union, which is made up of ministers from the member states’ governments, will be passed into law by the end of this year.

It takes a so-called risk-based approach to regulation. Applications posing unacceptable risks would be banned, including systems that engage in cognitive behavioral manipulation of people, social-scoring systems based on people’s behavior, and real-time biometric identification such as facial recognition systems.

High-risk applications are defined as those used to recruit and train employees, manage critical infrastructure, oversee migration and border control, and support law enforcement, as well as AI technologies used in toys, aviation, cars and medical devices. Those applications would be regulated and monitored at launch and throughout their life cycles, with national regulatory bodies responsible for supervision.

Applications that don’t fall into these categories and are considered limited-risk would be required to comply with minimal transparency requirements.

The EU proposals had been crafted long before generative AI systems began to appear in late 2022. As the European Parliament debated the legislation, one set of proposals called for leaving generative AI out of the law’s purview, while others pushed for placing the systems in the high-risk category and subjecting them to stringent oversight.

Top AI companies including OpenAI, which developed ChatGPT, and Google, which developed Bard, lobbied European governments to keep their systems out of the high-risk category, arguing that while the technology itself wasn’t dangerous, it could be misused. The companies promised to police such misuse, flag content generated by AI and take other steps to address risks.

The EU settled the issue by creating an annex for generative AI systems, leaving them out of the risk categories. But the law would impose transparency requirements: content generated by AI would have to be labeled, models would have to be designed to prevent the generation of illegal content, and summaries of the copyrighted data used to train the models would have to be published.

In July, the seven top AI companies — Google, Amazon, Inflection, Meta, Microsoft, Anthropic and OpenAI — made similar voluntary commitments to the White House. Schumer has said that Congress would have to enshrine such commitments into law.

A ‘permission slip’

The EU approach to “risk-categorize everything that’s AI related” strikes Obernolte as too restrictive. 

“If you fall into anything but a low-risk category you need essentially a license, a permission slip from the government to use AI,” he said at a recent event hosted by TechNet, a trade group that represents tech companies.

The EU regulations would have to contend with questions such as, “What happens if you retrain an AI algorithm they have already issued a license for?” Obernolte said. “Does that require a new license? Or is it a condition of the existing license?” 

The EU’s risk-based approach is also aimed at regulating specific applications of AI technologies, said Greg Allen, director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies.

“AI technology for the better part of the last decade has been application specific,” Allen said. A system trained to identify cats or cars was good at recognizing cats and cars, but “not useful for anything else,” he said. “But the future of AI is not going to be application-specific” because generative AI systems are becoming adept at learning and doing multiple things.

ChatGPT is trained on such a large volume of data that “you could have it dispense medical advice or write a screenplay,” Allen said. 

The EU’s risk-based approach “won’t even touch a whole lot of questions,” McNerney said. “Things like who are we going to sue when there’s a problem, or who are we going to attribute information to” in the era of generative AI, were left out of the EU rules, he said.

Despite the EU law’s potential limitations, its early start has meant several other countries are using its approach as a starting point for drawing up their own regulations. 

“Already for some time we have been interacting with Japan, Australia, Canada, India, South Korea and some Latin American countries,” Tudorache said. “With us being more advanced, and a bit more anticipatory,” other countries are trying to figure out if the EU model is the one to emulate, and while some differences may emerge, other countries are concluding “that we are very aligned in terms of how we look at things.”

This article originally appeared in Roll Call on September 6, 2023.