EU Poised to Enact Sweeping AI Rules With US, Global Impact

European Parliament scheduled to vote next week
Law would apply to US companies doing business in EU
The European Union is on the verge of passing sweeping artificial intelligence regulation—with teeth.
The European Parliament is scheduled to vote on the legislation March 13, and one of the lawmakers leading the effort said he expects it to pass easily. Under the current timeline, the law would largely take effect in 2026, although some provisions would kick in this year.
The Act would regulate different uses of AI based on risk. It includes prohibitions on certain use cases—such as “emotion recognition” systems in the workplace—which carry the highest fines: 7% of global revenue or 35 million euros (about $38 million), whichever is higher.
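For a back-of-the-envelope sense of how that top fine tier scales, here is a minimal Python sketch. The revenue figures are hypothetical, and how regulators would measure global revenue in practice is not addressed here.

```python
def max_penalty_eur(global_revenue_eur: float) -> float:
    """Top-tier AI Act fine for prohibited uses: 7% of global
    revenue or 35 million euros, whichever is higher."""
    return max(0.07 * global_revenue_eur, 35_000_000.0)

# Hypothetical revenues: the 35 million euro floor binds for smaller
# companies, while the 7% share dominates for the largest ones.
print(max_penalty_eur(100_000_000))     # 35,000,000.0 (floor applies)
print(max_penalty_eur(10_000_000_000))  # 700,000,000.0 (7% applies)
```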
The EU wouldn’t be the first jurisdiction to regulate AI, but the size of the European market means the EU AI Act would be felt globally, shaping how multinational companies manage data and navigate AI. The law would apply to businesses operating in the 27-nation market and to builders of AI systems used in the EU, wherever those builders are based, including the US.
If a company’s product is put on the market in the EU, it would be in scope of the AI Act and the company would have to comply, said Evi Fuelle, global policy director at the AI governance platform company Credo AI.
“A lot of companies question whether or not they’re in-scope, and if they’re American companies they may be thinking, ‘This is something happening in Europe, it has nothing to do with me,’” Fuelle said. “That’s simply not true.”
In the US, lawmakers have also discussed AI regulation, held industry listening sessions, and introduced bills. However, Congress has not passed legislation that would broadly regulate use of the technology.

‘Organizations Need to Get Started’

With the pending new law, the EU wants to regulate AI’s riskiest uses:
  • It would prohibit what it deems to be unacceptable uses, such as AI that subliminally manipulates a person’s behavior or is used for social scoring.
  • It would put stricter controls around high-risk functions, like using AI to screen job applicants, while applying a softer touch to limited-risk uses.
  • It would include transparency requirements for companies and standards for their internal procedures.
Ultimate approval looks likely after the Act cleared major hurdles this winter: approval by member states and by two key committees of the European Parliament, Dragoș Tudorache, a member of the European Parliament with the Renew Europe political group and co-rapporteur of the AI Act, said in an email.
A crucial remaining step is a plenary vote by the full Parliament, now scheduled for March 13. The text will then be legally vetted and translated to produce a final version, Tudorache said.
“The vote in committee is quite indicative of what we expect in the plenary: overwhelming support,” he said. “There are no remaining issues being discussed.”
After next week’s Parliament vote, the Council—comprising EU member state governments—will formally endorse the Act, a process not expected to be controversial.
The AI Act would join previous legislation aimed at protecting the digital rights of EU citizens, including 2018’s General Data Protection Regulation, which carries potential fines up to 20 million euros or 4% of a company’s global turnover—whichever is higher. Meta Platforms Inc. was fined a record 1.2 billion euros last year for violating GDPR rules, eclipsing a prior 746 million euro penalty levied on Amazon.com Inc.
As lawmakers work to finalize the AI Act, lawyers warn that companies need to prepare early—ideally, starting now.
“I probably can’t stress this enough: Organizations need to get started as soon as they possibly can,” said Ryan Donnelly, co-founder of the Belfast-based AI compliance company Enzai. “It’s not going to be very pretty if they wait until right before enforcement to start trying to implement all of the requirements.”
One of the most important things companies should do now is take an inventory of all the ways they’re using AI within their organizations, multiple advisers said.
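What such an inventory might capture is sketched below; the record fields, risk labels, and example systems are illustrative assumptions, not a format prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Illustrative tiers loosely mirroring the Act's risk-based approach
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    vendor: str      # "internal" for home-grown systems
    role: str        # "provider" (developer) or "deployer" (user)
    purpose: str
    used_in_eu: bool
    risk_tier: RiskTier

# Hypothetical entries for a company's AI inventory.
inventory = [
    AISystemRecord("resume-screener", "Acme HR", "deployer",
                   "screen job applicants", True, RiskTier.HIGH),
    AISystemRecord("ticket-summarizer", "internal", "provider",
                   "summarize support tickets", True, RiskTier.MINIMAL),
]

# Surface the systems most likely to need AI Act compliance work first.
for record in inventory:
    if record.used_in_eu and record.risk_tier in (RiskTier.PROHIBITED, RiskTier.HIGH):
        print(f"Review: {record.name} ({record.risk_tier.value})")
```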

What’s Covered?

The rules would apply to providers, or developers, whose products are put on the EU market; deployers of AI systems located in the EU, which could include any employer using off-the-shelf AI software; and developers or deployers of systems whose output is used inside the EU.
The law has carve-outs, including for military and national security. The rules would also include specific provisions for general-purpose AI models, and obligations to identify AI-generated content.
Some US tech companies have been watching the discussions about rules for open-source developers, and have welcomed the latest developments, said Shelley McKinley, chief legal officer at San Francisco-based GitHub.
“We’re pleased to see an express risk-based open source exemption in the final text,” she said. “Unless they’re building general-purpose AI models that may pose systemic risks, developers will be able to continue collaborating responsibly and openly in the EU.”

Prohibited Uses

Like the EU’s 2018 GDPR, the AI Act would force companies within and outside of Europe to reshape how they manage information.
“It’s the Brussels effect writ large,” said Barry Scannell, a consultant at the law firm William Fry in Dublin and a member of the Irish government’s AI Advisory Council.
Prohibitions on unacceptable uses would apply first, six months after the law enters into force. Provisions for general-purpose AI models would go into effect 12 months after entry into force, and 24 months after entry into force—likely mid-2026, on the current timeline—the majority of the provisions would come into effect.
“We have prohibited these practices because they are not in accordance with our European values and we don’t want AI to be used this way in Europe,” Tudorache said.
“This is not a fine ‘against companies’; it is a dissuasive fine against prohibited uses of AI,” he added. “The other fines for breaches of the Regulation are also meant to be dissuasive, but they are aligned with EU practices of enforcing noncompliance.”
Smaller fines apply to noncompliance with other aspects of the law.
Companies need to figure out whether they’re deploying prohibited systems, which isn’t always straightforward, Scannell said. For example, biometric categorization systems are prohibited when they infer sensitive characteristics like race, but they otherwise fall in the high-risk category, he said.
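As a sketch of the kind of triage Scannell describes, the rule below encodes that one distinction. It is a simplified assumption for illustration, not the Act's full legal test, and the list of sensitive characteristics is indicative only.

```python
# Simplified triage for biometric categorization systems: inferring
# sensitive characteristics is prohibited; otherwise the system is
# treated as high-risk.
SENSITIVE_CHARACTERISTICS = {
    "race", "political opinions", "religious beliefs",
    "trade union membership", "sex life", "sexual orientation",
}

def classify_biometric_categorization(inferred_attributes: set[str]) -> str:
    if inferred_attributes & SENSITIVE_CHARACTERISTICS:
        return "prohibited"
    return "high-risk"

print(classify_biometric_categorization({"race"}))         # prohibited
print(classify_biometric_categorization({"age bracket"}))  # high-risk
```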
“If developers don’t pay attention to these requirements now, the systems they are developing may not be allowed into the EU market and they may have to redevelop the systems in compliance, potentially even from scratch,” said Peter Schildkraut, senior counsel at Arnold & Porter in Washington, D.C.

High-Risk AI Systems

The rules would also set out obligations, such as registration requirements, for companies developing and using high-risk AI systems.
Registration obligations generally fall on providers—or developers—because authorities want to track the use of high-risk AI systems on the EU market, said Ashley Casovan, managing director of the AI Governance Center at the International Association of Privacy Professionals.
But companies deploying—or using—those systems may also have to register them, for example if they significantly modify a system, she said, adding that the lines between the two roles will be clarified over time and in practice.