Meta Refuses to Sign EU’s AI Code of Practice, Sparks Tension Over Future of AI Regulation

In a bold move just weeks before the European Union's landmark AI rules are set to take effect, Meta has officially declined to sign the EU's newly introduced Code of Practice for general-purpose AI (GPAI) models. The announcement came directly from Meta's Chief Global Affairs Officer, Joel Kaplan, who shared the company's stance in a detailed post on LinkedIn.

“Europe is heading down the wrong path on AI,” Kaplan wrote, voicing serious concerns over the EU’s approach. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI models, and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

What Is the EU’s AI Code of Practice?

The European Commission unveiled the voluntary Code of Practice for GPAI earlier this month as part of its wider framework to regulate artificial intelligence under the EU AI Act. While the AI Act itself is legally binding, the Code of Practice is meant to be an early compliance tool — a soft launch of sorts that helps companies align their processes and infrastructure with the law's future enforcement.

The Code outlines best practices for how AI companies should responsibly train and deploy their models. It requires that organizations maintain and regularly update documentation about their AI systems, respond to content owners who wish to exclude their data from training sets, and avoid using pirated or unauthorized content when building datasets. Essentially, it sets a standard for transparency, accountability, and ethical practices.

But Meta, among other major AI developers, sees it differently.

Why Is Meta Refusing to Sign?

According to Kaplan, the Code demands far more than what the actual AI Act legislates — creating what he calls “legal uncertainties” and overreaching expectations. From Meta’s perspective, the European Union is stepping beyond its regulatory role and imposing burdens that could “throttle the development and deployment of frontier AI models in Europe” and hinder local innovation.

Kaplan argued that the new framework not only slows progress for large AI developers like Meta but also creates an uneven playing field for European startups and businesses looking to build products on top of these large models. By enforcing such strict conditions early on, the EU might unintentionally stifle competition and innovation within its own borders.

The Bigger Picture: EU’s AI Act and Its Global Impact

The AI Act itself is a historic and far-reaching piece of legislation. It’s the first comprehensive legal framework in the world aimed at regulating artificial intelligence based on the level of risk it poses to individuals and society. The law bans certain high-risk AI practices outright — including cognitive behavioral manipulation, social scoring, and other applications deemed unethical or harmful.

Additionally, the Act categorizes AI tools used in sensitive areas — such as biometrics, facial recognition, employment, and education — as “high risk.” Developers of such systems must meet strict obligations related to risk assessment, data governance, quality control, and must register their AI models within an official EU database.

Companies have known the AI Act was coming, but now the timeline is real. As of August 2, 2025, the core rules will go into effect. And even though companies with existing models on the market — like OpenAI, Google, Meta, and Anthropic — have a grace period until August 2, 2027, the expectations for long-term compliance are clear.

Mounting Industry Resistance

Meta’s public refusal to sign the voluntary code is just the latest chapter in a broader industry pushback. Other AI giants including Alphabet, Microsoft, and the French startup Mistral AI have also expressed deep concerns. Together, they have lobbied the European Commission to delay or soften the rollout of these rules, claiming that the framework is premature and could have negative consequences for AI innovation and competitiveness in Europe.

However, the EU Commission has made it clear: there will be no delay.

On the same day Meta announced its refusal, the EU released final implementation guidelines for GPAI providers. These documents are intended to help companies understand their upcoming obligations and prepare ahead of time. The rules will apply to any provider of “general-purpose AI models with systemic risk,” a category that clearly includes Meta’s Llama model, OpenAI’s GPT-4, and other large-scale models on the market.

What Happens Next?

Meta’s rejection of the Code of Practice signals a growing divide between regulatory expectations in Europe and the strategic goals of AI companies operating globally. While the EU sees its regulation as a necessary guardrail to ensure AI is used safely and ethically, companies like Meta view it as a roadblock to progress — one that could even cause them to reconsider how and where they deploy their most advanced AI tools.

For now, the EU stands firm, and the world is watching closely. Will other companies follow Meta's lead and refuse to sign the Code of Practice? Or will the pressure of accessing the lucrative European market force their hand?

One thing is clear: the next few months are going to be crucial in shaping the future of AI governance, not just in Europe, but globally. As powerful AI systems become increasingly embedded in our lives, the tension between innovation and regulation is only going to intensify.
