AI Act: What Compromise Will Enable the Rise of European Artificial Intelligence?

In December 2023, the European Union reached a political agreement to regulate the development of artificial intelligence (AI). What does this unprecedented global framework entail, and what are its implications for EU member states and tech industry lobbies? What were the main sticking points in negotiations between EU institutions and certain countries? What do these disputes reveal about the solidity of the agreement? Why is AI a critical issue for Europe, and what would be the economic consequences for the continent? Where does France stand in this debate?

What Does This Unprecedented Global Agreement Entail, and What Are Its Implications?
The AI Act, in development since 2021, has faced numerous challenges, particularly due to the explosion of generative AI, which disrupted its original risk-based approach. Initially designed to classify applications—from harmless spam filters to unacceptable uses of facial recognition in daily life—the regulation had to be hastily revised to address generative AI’s unexpected rise. While the need for regulation is undeniable, the last-minute additions risk stifling European startups just as they begin to close the gap, burdening them with complex rules that ironically favor more advanced U.S. giants.
The rapid progress of large language models, built on neural networks with billions of parameters trained on opaque datasets, raises concerns about privacy, copyright, and security risks tied to their unpredictable behavior. Given AI’s breakneck evolution, a flexible, adaptive regulatory approach is essential. Yet the current framework—hundreds of pages of self-referential legal considerations—risks quick obsolescence.
Beyond mere exemptions, flexibility is crucial, especially given the growing role of open-source AI, which European startups are leveraging. Many repurpose existing models from tech giants, while some are now developing their own foundational models. An open regulatory approach is needed to address emerging risks while fostering innovative, homegrown European AI capable of competing with U.S. and Chinese dominance.

Key Disputes in Negotiations: What Do They Reveal About the Agreement’s Strength?
EU lawmakers initially sought to replicate the GDPR’s success—a global gold standard for data regulation—but AI presents a different challenge. Europe already lags behind U.S. tech giants, and the AI Act introduces uncertainty just as European startups like Mistral (France) and Aleph Alpha (Germany) begin gaining traction.
In recent weeks, France, Germany, and Italy pushed back, clashing with EU institutions over two issues:

1. State use of facial recognition (some member states refuse to fully abandon it).
2. Preserving the potential of startups working on foundation models (the backbone of generative AI).
These governments proposed self-regulation and codes of conduct for such models, putting them at odds with EU institutions rushing to finalize the agreement under pressure from NGOs advocating strict adoption. The resulting compromise includes broad exemptions for open-source developers, who are central to Europe's work on foundation models.
French Digital Minister Jean-Noël Barrot claimed the deal would preserve Europe’s ability to develop its own AI technologies and strategic autonomy. But why is AI such a critical issue for Europe, and what are the economic stakes? Where does France fit in?

Why AI Is a Major Stake for Europe—and What’s at Risk for Its Economy?
Europe is falling behind not only the U.S. but also China—a situation that was not inevitable. Neural networks owe much to European pioneers, whether they stayed on the continent or moved abroad. Geoffrey Hinton, the “godfather of AI,” left the United States for Toronto to distance himself from military funding of AI research, while Yann LeCun (Meta’s Chief AI Scientist) and Sepp Hochreiter (whose 1991 work laid the foundations of long short-term memory networks) paved the way for today’s transformer-based language models, the core of generative AI since 2017.
Despite educational crises and declining math proficiency, Europe—particularly France—still hosts pockets of excellence that could drive a distinctive AI approach. The idea that Europe should settle for a purely regulatory role, dependent on U.S. and Chinese technology, is economically and strategically suicidal, given AI’s pivotal place in technological development. Historically, mastering cutting-edge technology has been the key to catching up, growing, and projecting power. Yet for decades, the EU prioritized competition policy over industrial strategy, treating its citizens more as consumers than as producers—a trend the AI Act risks perpetuating.
While Thierry Breton’s leadership marks a shift toward industrial sovereignty, the task remains monumental. The AI Act must balance regulation with innovation, ensuring Europe doesn’t just consume AI but develops it. The alternative—a future where Europe remains a rule-maker but not a tech-maker—would be a strategic failure.

This piece was originally published by the French Institute for International and Strategic Affairs – IRIS.