Meta Refuses to Sign EU’s AI Code of Practice, Sparks Tension Over Future of AI Regulation



In a bold move just weeks before the European Union’s landmark AI rules are set to take effect, Meta has officially declined to sign the EU’s newly introduced Code of Practice for general-purpose AI (GPAI) models. The announcement came directly from Meta’s Chief Global Affairs Officer, Joel Kaplan, who shared the company’s stance in a detailed post on LinkedIn.

“Europe is heading down the wrong path on AI,” Kaplan wrote, voicing serious concerns over the EU’s approach. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI models, and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

What Is the EU’s AI Code of Practice?

The European Commission unveiled the voluntary Code of Practice for GPAI earlier this month as part of its wider framework for regulating artificial intelligence under the EU AI Act. While the AI Act itself is legally binding, the Code of Practice is meant to be an early compliance tool — a sort of soft launch that helps companies align their processes and infrastructure with the law’s future enforcement.

The Code outlines best practices for how AI companies should responsibly train and deploy their models. It requires that organizations maintain and regularly update documentation about their AI systems, respond to content owners who wish to exclude their data from training sets, and avoid using pirated or unauthorized content when building datasets. Essentially, it sets a standard for transparency, accountability, and ethical practices.

But Meta, among other major AI developers, sees it differently.

Why Meta Is Pushing Back

According to Kaplan, the Code demands far more than what the actual AI Act legislates — creating what he calls “legal uncertainties” and overreaching expectations. From Meta’s perspective, the European Union is stepping beyond its regulatory role and imposing burdens that could “throttle the development and deployment of frontier AI models in Europe” and hinder local innovation.

Kaplan argued that the new framework not only slows progress for large AI developers like Meta but also creates an uneven playing field for European startups and businesses looking to build products atop these large models. By enforcing such strict conditions early on, the EU might unintentionally stifle competition and innovation within its own borders.

The Bigger Picture: EU’s AI Act and Its Global Impact

The AI Act itself is a historic and far-reaching piece of legislation. It’s the first comprehensive legal framework in the world aimed at regulating artificial intelligence based on the level of risk it poses to individuals and society. The law bans certain high-risk AI practices outright — including cognitive behavioral manipulation, social scoring, and other applications deemed unethical or harmful.

Additionally, the Act categorizes AI tools used in sensitive areas — such as biometrics, facial recognition, employment, and education — as “high risk.” Developers of such systems must meet strict obligations covering risk assessment, data governance, and quality control, and must register their AI models in an official EU database.

Companies have known the AI Act was coming, but now the timeline is real. As of August 2, 2025, the core rules will go into effect. And even though companies with existing models on the market — like OpenAI, Google, Meta, and Anthropic — have a grace period until August 2, 2027, the expectations for long-term compliance are clear.

Mounting Industry Resistance

Meta’s public refusal to sign the voluntary code is just the latest chapter in a broader industry pushback. Other AI giants including Alphabet, Microsoft, and the French startup Mistral AI have also expressed deep concerns. Together, they have lobbied the European Commission to delay or soften the rollout of these rules, claiming that the framework is premature and could have negative consequences for AI innovation and competitiveness in Europe.

However, the EU Commission has made it clear: there will be no delay.

On the same day Meta announced its refusal, the EU released final implementation guidelines for GPAI providers. These documents are intended to help companies understand their upcoming obligations and prepare ahead of time. The rules will apply to any provider of “general-purpose AI models with systemic risk,” a category that clearly includes Meta’s Llama model, OpenAI’s GPT-4, and other large-scale models on the market.

What Happens Next?

Meta’s rejection of the Code of Practice signals a growing divide between regulatory expectations in Europe and the strategic goals of AI companies operating globally. While the EU sees its regulation as a necessary guardrail to ensure AI is used safely and ethically, companies like Meta view it as a roadblock to progress — one that could even cause them to reconsider how and where they deploy their most advanced AI tools.

For now, the EU stands firm, and the world is watching closely. Will other companies follow Meta’s lead and refuse to sign the Code of Practice? Or will the pressure to access the lucrative European market force their hand?

One thing is clear: the next few months are going to be crucial in shaping the future of AI governance, not just in Europe, but globally. As powerful AI systems become increasingly embedded in our lives, the tension between innovation and regulation is only going to intensify.

