
  • After the Bubble: AI Can Serve Industrial Power Instead of Draining It


    This op-ed was originally published in French by Les Echos.

    The generative AI bubble is built on circular funding between sector players, valuations disconnected from economic realities, and an extreme concentration of resources on large language models (LLMs). What should be alarming is not so much the scale of these investments as their stark contrast with the disintegration of Western industrial capacities. The war in Ukraine exposed this structural flaw, revealing the inability to produce sufficient quantities of essential military equipment—the result of decades of deindustrialization and skewed capital allocation. Beyond its strategic dimension, this paradox calls into question how we measure economic power.

    On the AI front itself, the success of more frugal players like Mistral or DeepSeek demonstrates that innovation does not depend solely on a relentless race to build ever-larger models. Billions continue to pour into colossal physical infrastructures—energy-hungry data centers, specialized chips, computing networks—without questioning the fundamental limits of LLMs. These massive investments stand in sharp contrast to the chronic underfunding of industry, and paradoxically, of automation.

    Beyond the fantasy of a dematerialized digital world, data centers are infrastructures that consume vast material resources: energy, rare metals, electronic components. Their proliferation highlights the current paradox: we are exponentially increasing computing power, while the productive sectors that could benefit from these technologies lack funding and orders. Many of these sectors launch AI projects merely to tick a box and make announcements to attract investors. In the military domain, autonomous drones, intelligent combat systems, and predictive maintenance represent concrete applications where AI will make a difference—but only if integrated into a solid industrial base, rather than betting everything on unreliable models.

    The production chains for ammunition, armored vehicles, and electronic components, weakened by years of underinvestment, struggle to meet demand. Factories have closed, skills have dwindled, and revival attempts are hampered by the absence of long-term strategic planning. The United States, despite its own contradictions, is trying to correct this imbalance by relocating some strategic production. Europe, however, remains on the sidelines, locked in extreme technological dependence that undermines its sovereignty.

    The core issue lies in this skewed allocation of resources. Capital and talent are concentrated on speculative technologies, while industrial applications of AI—advanced robotics, autonomous systems, production process optimization—remain underfunded. Above all, they lack commercial guarantees in the form of orders. This creates a vicious cycle: the more investments flow into LLMs and their infrastructure, the fewer resources remain to modernize the real productive apparatus.

    Yet AI could be a major lever for reindustrialization if approached differently. A more balanced strategy would involve redirecting some investments toward industrial automation, developing practical applications embedded in production processes, and fostering hybrid skills that combine digital expertise with industrial know-how, rather than chasing publicity stunts.

    Without this strategic shift, the gap will widen between an oversized digital sector and an industrial base unable to meet material challenges. The war in Ukraine served as a wake-up call. Power is not measured solely by the ability to develop sophisticated algorithms but also by the capacity to produce essential equipment. The challenge is not to reject AI but to reintegrate it into an industrial logic, where digital innovation finally serves material production rather than replacing it. Without this rebalancing, the West risks ending up with an economy where computing power soars, but factories continue to close.

  • AI Bubble and Military Bottleneck: A Systemic Crisis


    The financial bets on the revolutionary promises of generative AI have soared to dizzying heights. Circular funding among industry giants is proliferating, while structural limitations are emerging regarding the reliability and economic value of large language models (LLMs). From one bubble to another, this new frenzy points to the deeper disorganisation affecting Western economies in the deployment of capital and skills. In this respect, the simultaneous weakness in industrial capacity among Ukraine’s backers reflects a systemic crisis.

    An opinion piece by Rémi Bourgeot, economist and engineer, Associate Fellow at IRIS.

    While the world was waking up to the concrete potential of artificial intelligence with ChatGPT, the collapse of Silicon Valley Bank in early 2023 raised fears of a broader financial crisis. Technology stocks were hit hard, and venture capital funds were blamed for their risky financial schemes, particularly in the cryptocurrency space, which had been rocked by a series of scandals.

    These reservations were soon swept aside by a new wave of financial euphoria, this time centred on AI, but following similar patterns. Nvidia emerged as the big winner, with its graphics cards tailored to the requirements of giant neural-network calculations. It effectively locked up the market with its proprietary platform, CUDA. The very notion of valuation ratios was overshadowed by the prospect of a radical transformation of human activity.

    It comes as no surprise that the intrinsic limitations of LLMs were overlooked during the initial phase of euphoria. Beneath the sweeping reactions of both AI apologists and staunch detractors, a more nuanced perspective emerged from discreet commentators, combining a technical grasp of neural networks with a philological intuition about the strengths and the limits of the syntactic logic captured by LLMs.

    OpenAI began by developing open, non-profit models, and its status remained hybrid for years. The prevailing idea was that LLMs would reach a qualitative tipping point, thanks to an explosion in size and compute resources. The confusing notion of AGI (artificial general intelligence) then served as a horizon for the most extravagant funding schemes.

    However, by 2024, the technical achievements of companies like Mistral in France and DeepSeek in China, with incomparably more limited resources, began to cast doubt on the idea that model deployment required the trillions of dollars mentioned by Sam Altman at OpenAI.

    The companies developing core AI models do not currently exhibit a real business model, beyond using investor funds to cover their expenses, particularly for the purchase of chips. On top of the issue of financial stability, the allocation of such resources to a particular technology must also be questioned. AI pioneer Yann LeCun has repeatedly emphasised the limitations of LLMs and called for efforts to be made on other types of models, which have been ignored by the bulk of investors. Instead, the bubble took on a new dimension, with massive funding from semiconductor companies like Nvidia to their own customers, like OpenAI.

    This latest bubble raises questions not only about this very industry, but more generally about the way the economy is funded. It seems increasingly difficult for developed countries to sustain industrial momentum beyond waves of financial and institutional frenzy that suggest magical thinking, or sometimes even mass hysteria.

    Meanwhile, the Ukraine war highlights the limitations facing Western industry in producing equipment. Production capacities for ammunition, armoured vehicles and electronic components have proven chronically inadequate to meet sustained and prolonged demand. Many factories capable of manufacturing critical components have been closed in recent decades. Supply chains are limited, often dependent on rare or offshore suppliers.

    This situation reveals a systemic failure centred on insufficient production, which goes beyond the defence industry. It results from a lack of strategic planning, particularly in terms of financing, energy supply and skills deployment. Reviving production requires restoring complex industrial chains and long-term profitability models. Otherwise, even massive investments will have no effect.

    Industrial strength does not come from stock market bubbles fuelled by the ecstasy of a post-physical digital nirvana. It requires careful interaction between businesses, research institutions and government agencies, based on long-term strategies and human skills. Behind the cutting-edge intellectual resources poured into LLMs, the bubble lays bare the erosion of industrial development strategies, exacerbated by failing educational systems and the relegation of scientific skills.

    Nevertheless, in light of the manufacturing rout epitomised by Boeing, the US policy focused on redeploying manufacturing and controlling energy costs is showing tentative signs of improvement. This is the case even in semiconductors, with TSMC establishing fabs on American soil. Although financial shocks hamper in-depth reindustrialisation, the country is ultimately managing to assert its dominance in the digital field.

    The European Union, meanwhile, finds itself in a more precarious situation due to its technological retreat and the energy chaos stemming from Germany’s phase-out of nuclear power. By positioning itself as a faithful user of US technologies, it is undermining its industrial potential. In the dot-com bubble of the late 1990s, Europe typically lagged behind during the upswing, endured the full brunt of the market crash, and ultimately failed to catch up on the technical front. In this respect, Ursula von der Leyen’s determination to cement the EU’s role as a digital and military vassal of the US for decades to come foreshadows a decline in living standards and political dislocation.

  • Behind DeepSeek: France’s Path to AI Excellence


    By leveraging its mathematical expertise and open-source innovation, Europe can compete with the United States and China—not just through massive investments, but above all by keeping scientific culture at the heart of its strategic vision.

    As China’s DeepSeek reshuffles the global AI competition, France is also seeking to highlight its cutting-edge capabilities, announcing major investment projects in digital infrastructure at the Paris Global Summit. The rapid success of Mistral AI has demonstrated France’s potential, with its researchers and engineers defying the educational crisis through their mathematical talent. Yet a gap persists between this scientific excellence and public action, as seen in recent missteps—most notably the premature launch of the open-source AI model Lucie. The state must redeploy its scientific expertise to ensure strategic cohesion in these investments and prevent Europe’s digital ecosystem from being systematically overshadowed by Silicon Valley.

    This moment is all the more critical as the notion that cutting-edge AI is an exclusively American domain fades, given the proven capabilities of countries like China—and France, with its strong mathematical tradition perfectly aligned with the challenges of neural networks. DeepSeek has shown the world that, with just a few million dollars and limited graphics cards, it’s possible to achieve results that rival those of American giants. Barely a year ago, Mistral also unveiled a model that competed with OpenAI’s, developed in a matter of months by a team of just a few dozen people. France’s AI expertise is undeniable. This talent is also evident within U.S. tech giants: Yann LeCun, Meta’s chief AI scientist, has inspired an entire generation. His company’s open-source model, LLaMA, was initially developed by a Paris-based team.

    Many of us already recognized in 2023 the rise of a more efficient and refined AI than that of California’s giants. French minds often find opportunities in Big Tech to apply their mathematical brilliance. Several of Mistral’s founders, in fact, honed their skills in these companies. However, if every European success is ultimately absorbed by American giants—as Mistral nearly was—the benefits for Europe will remain minimal. Given the economic upheaval AI brings, such a trend would lock us into dangerous dependency. Transhumanist visionaries have no real plan for Europe beyond its picturesque landscapes.

    The development of infrastructure and data centers, backed by massive investments, is essential for our autonomy. While France’s efforts in this direction are commendable—assuming they materialize fully—they must avoid hiding behind convoluted consortia reminiscent of Airbus-era strategies. Nor can we overlook the need for deeper reflection on funding sources, decision-making balance with international partners, and the long-term viability of these projects.

    This also requires addressing the persistent technological deficit in public administration, despite the renewed focus on industrial policy. Scattered funding, insufficient analysis, and the excessive event-driven communication of “France 2030,” along with the overhyped “hydrogen revolution” and reindustrialization statistics skewed by self-employment, demand a more fundamental effort from the state. This is especially urgent as global political shifts threaten to disrupt the open-source ecosystem, which is central to Europe’s AI catch-up strategy.

    Open source represents a remarkable opportunity for technological knowledge sharing. Yann LeCun is a vocal advocate, and he seems receptive to the idea of his home country reclaiming its rightful place in scientific tradition. However, given U.S. officials’ outcry against DeepSeek and calls for stricter restrictions, there is a risk that Big Tech’s dominance could tighten further, leaving only China as a credible counterbalance. Governments will now have to address the circulation of AI models and open-source frameworks as a key issue in trade negotiations.

    Europe will not match the scale of American investments. Yet DeepSeek, Mistral, and others worldwide have proven that we can reposition ourselves in the digital landscape—by relying on open source for now, but above all by placing engineering culture, with all its versatility, back at the core of our strategic decisions. This path, neglected by Europe over the past three decades, is the one being followed by BRICS nations that are effectively positioning themselves in the tech race. We will not succeed by focusing solely on regulatory questions, but by restoring scientific culture to the heart of our choices.

    This text was originally published on the website of Les Echos.

  • Semiconductors Are the Achilles’ Heel of the AI Giants


    The ultra-concentration in the design and production of semiconductors for AI, centred around Nvidia and TSMC, is fuelling the interest of digital giants, which are highly dependent in this regard. However, catching up looks to be a difficult task, despite the mobilisation of state actors.

    The explosion of artificial intelligence rests on two pillars of a different nature: on the one hand, the development of large language models such as GPT, and on the other, spectacular computing power with dedicated processors. These are designed in particular by the omnipresent giant Nvidia, and manufactured by a tight handful of actors, especially the Taiwanese company TSMC. AI models and semiconductors both require gigantic investments and cutting-edge expertise. However, these are two worlds that, although they cooperate closely, respond to very different requirements.

    In terms of model development, American digital giants such as Microsoft, Meta and Google have all the technological, economic and political resources to dominate the sector, both internally and through acquisitions and partnerships. This latter aspect even enables them to domesticate the diversification seen with the explosion of open source, i.e., models that are freely distributed and reusable by anyone. Although open source allows an entire AI ecosystem to exist, it cannot exactly be seen as David’s weapon against Goliath, as the giants themselves are deeply invested in it. Meta’s LLaMA language models are, for example, open source. Moreover, the financial weight and grip of Big Tech are such that we are seeing independent actors being drawn into their orbit one after the other. The French gem Mistral recently announced it was joining Microsoft’s fold, entrusting it with the distribution of its most advanced model, which will therefore be closed. The giants thus have ample means to maintain control over model development.

    Nevertheless, behind the domination of these behemoths, the importance of the processors that enable the training of these AI models should not be underestimated. It is in fact the crux of today’s technological warfare and lies in the hands of industrial giants of a different kind. The entire AI scene remains highly dependent on a semiconductor design and production chain that is incredibly concentrated, revolving around Nvidia and TSMC.

    A boom in demand for semiconductors dedicated to AI, and very few suppliers

    For digital giants, autonomy in terms of semiconductors remains a challenge in which it is difficult to position oneself. After years of investment, Nvidia holds a near-monopoly on the design of semiconductors dedicated to AI: last year, the American company designed 80% of such chips worldwide.

    Once a design is completed, Nvidia outsources manufacturing to Taiwan’s TSMC, one of only ten companies in the world capable of producing such chips; Nvidia is what the industry calls “fabless”. Manufacturing a semiconductor requires a production line with specific characteristics (manufacturing equipment, testing and packaging), and these production lines are extremely costly. A brand-new factory (a “foundry”, or “fab”, in the sector’s terminology) requires between 15 and 20 billion dollars and a minimum of two years of construction. Very few economic players can invest such colossal sums and overcome the entry barriers to the foundry market.

    States are seizing the issue in the name of technological sovereignty

    Despite the enormity of the investments, some actors are entering or returning to this market, such as the American Intel or the Japanese Rapidus. Manufacturers already in the race, like TSMC or South Korea’s Samsung, are continuing to invest in an attempt to maintain their market shares. After the Covid-19 crisis and the subsequent semiconductor shortage, several states decided to relaunch their financial support for the sector. “Chips Acts” have multiplied to increase national semiconductor manufacturing capacity, bolster economic security and guarantee supplies for military use even in times of crisis. Among these countries are the United States in 2022 with the CHIPS and Science Act ($39 billion), the European Union with the Chips Act (€43 billion in 2023), Japan with the creation of the Rapidus conglomerate and a support plan ($100 billion for Rapidus and new TSMC factories over 2023–2027), China with the launch in 2023 of phase 3 of the Chinese government’s semiconductor fund ($46 billion for 2023–2027), and South Korea with a government plan of $7.3 billion. In the United States, the leverage effect of public subsidies in the sector is noteworthy. The $39 billion of the CHIPS and Science Act encouraged a wave of private investments amounting to $200 billion, spent by American and foreign companies on American soil.

    New entrants and a new scale of financing

    Until new factories produce more chips, supply will not be able to meet global demand for AI-dedicated chips. Hence a significant rise in prices: an Nvidia GPU (the H100) can cost up to $40,000 per unit, and its availability is limited, because even with increased production volumes, the company still cannot meet market demand.

    Some users and buyers of Nvidia chips are concerned about being dependent on a single supplier. This is the case for Sam Altman, CEO of OpenAI, because the lack of AI chips risks hindering the development of his own company. Why not create one’s own industrial tool to redress this supply-demand imbalance? This is the logic of every new entrant in a booming sector. Sam Altman has been holding numerous meetings with manufacturers and investment funds over the past few months. In his initial estimates, he mentioned a (staggering) investment goal of $7 trillion to build a new segment of the semiconductor industry. The project is still ongoing.

    And Altman is not alone; initiatives are springing up. Apple is working with TSMC to manufacture AI chips. The head of the Japanese group SoftBank, Masayoshi Son, wants to turn his group into an AI powerhouse. His latest project is to enable its subsidiary ARM to create a new AI chip division. A prototype will be tested in spring 2025, and mass production should begin in autumn 2025.

    For its part, Nvidia is maintaining its technological lead in a rapidly growing market. According to the Canadian research centre Precedence Research, the global market is expected to reach $100 billion by 2029 and $200 billion by 2032.

    This new type of shortage is prompting digital giants to position themselves in the segment, each in their own way. Faced with these ambitions, Nvidia tirelessly positions itself to stay ahead, notably ahead of what the giants will likely be able to achieve in designing AI-dedicated processors. The digital giants find themselves caught in an industrial vice that will be difficult to overcome. The prospect of balanced global competition, in which all major regions manage to position themselves, remains distant and uncertain. Beyond the giants’ own interests, the ultra-concentration of AI-dedicated semiconductors highlights a very real risk to industrial resilience across the entire chain, down to end users. In this regard, diversification is a major political issue.

    This piece was originally published by the French Institute for International and Strategic Affairs – IRIS.

  • Mistral Under Microsoft: Europe’s AI Catch-Up Challenge Remains Unresolved


    Mistral AI’s move into Microsoft’s sphere has sparked political criticism in Europe. As a champion of open source, the company had recently advocated for a more flexible AI Act before announcing its shift to a closed model. Nevertheless, its technical success in developing foundational models with limited resources demonstrates Europe’s—and other global players’—potential to catch up. However, achieving true autonomy would still require overcoming a difficult economic equation that pushes the most promising startups into the arms of Big Tech.

    Mistral’s Success Highlights Europe’s Technical Potential in the AI Race

    Many observers had assumed Europe was destined to remain merely a user of American AI models for developing various applications. Technically, Mistral’s success confirms the opportunity for a relatively resource-efficient AI compared to Big Tech’s massive data usage and financial and human resources.

    In just a few months, Mistral managed to develop AI models that rival those of OpenAI, Google, and Meta in performance, with resources that, while significant, are far more limited than those of the American giants. This is particularly striking in terms of workforce, with its team of around thirty employees. This achievement not only showcases the team’s prowess but also sheds light on the nature of the technology driving the generative AI boom.

    Beyond new neural network architectures (like transformers), the spectacular progress in AI over the past decade has largely been due to the use of enormous amounts of data and computing power. While riding this wave of quantitative explosion, Mistral has also carved out a path for more refined AI engineering, allowing it to establish itself on the global stage in record time.

    Even amid an educational crisis and severe deindustrialization, it remains possible to mobilize skills from top-tier training programs to compete with global tech giants. Beyond the issue of European autonomy, this technical reality offers valuable lessons about the global AI race. Catching up and competing in AI is possible, provided there is sustained funding and market opportunities.

    Mistral’s Move into Microsoft’s Sphere Illustrates the Economic Challenge of Independent and Open AI

    After positioning themselves as champions of open, reusable models, Mistral’s leaders decided that their new, most advanced model would be closed—distributed through an agreement with Microsoft, which is also taking a stake in the company. The open-source approach had boosted Mistral’s appeal among developers, alongside other open models like Meta’s LLaMA, in contrast to the now radically closed model of the misnamed OpenAI.

    In fact, it was precisely this shift that led Elon Musk, who had been involved in OpenAI’s launch, to recently announce legal action against Sam Altman’s company. Beyond the irony of the billionaire’s outbursts, it is true that OpenAI, with its labyrinthine structure, reflects a gap between its original open-source and research-focused mission and its current purely commercial purpose. The issue of Big Tech’s grip on AI is particularly sensitive for Europe but is also relevant in the United States.

    As with OpenAI, Mistral’s agreement with Microsoft confirms the company’s technical success and popularity. The French company is also launching a chatbot called “Le Chat,” modeled after ChatGPT. However, this partnership, for now, buries the dream of an independent, open-source European AI.

    Beyond the recent virulent attacks on the company’s leadership, we must question the European economic environment. The core issue remains the prospects for development, funding, and commercial opportunities needed to maintain a leading position in the digital sector. These challenges and the financial power of tech giants inevitably draw successful startups into their orbit. It is this economic aspect that has turned Mistral’s technical feat, which could have marked a turning point toward autonomy, into a strategic setback for Europe.

    Beyond Distrust of Lobbying, a Flexible Approach to AI Regulation Remains Essential

    The AI Act addresses an obvious need for regulation and risk management in AI. However, its complicated development has resulted in particularly convoluted agreement terms. Its creators had missed the generative AI revolution and embarked on a titanic adaptation effort last year.

    The idea of positioning Europe as the world’s digital regulator, with too little concern for the continent’s technological offerings, poses an existential risk to the European economy and its competitive autonomy. Moreover, with its difficult application to future technical developments, the AI Act risks serving the interests of Big Tech, which has the means to navigate these regulatory labyrinths. Mistral’s move into Microsoft’s orbit seems to confirm this.

    Mistral had strongly advocated at the end of last year for a loosening of the AI Act, particularly regarding open-source foundational models of generative AI. It is natural to think that the company had already considered its shift to a closed model in partnership with Microsoft. Nevertheless, the concessions made in response to objections from the French and German governments, defending their national companies like Mistral and Aleph Alpha, mainly concerned open source, which will thus benefit from greater flexibility. While Mistral’s reversal may be regrettable, its lobbying primarily resulted in a loosening of the AI Act that could, under certain economic conditions, encourage the emergence of future open-source competitors.

    This piece was originally published by the French Institute for International and Strategic Affairs – IRIS.

  • AI Act: What Compromise Will Enable the Rise of European Artificial Intelligence?


    The European Union has reached a political agreement to regulate the development of artificial intelligence (AI). What does this unprecedented global framework entail, and what are its implications for EU member states and tech industry lobbies? What were the main sticking points in negotiations between EU institutions and certain countries? What do these disputes reveal about the solidity of the agreement? Why is AI a critical issue for Europe, and what would be the economic consequences for the continent? Where does France stand in this debate?

    What Does This Unprecedented Global Agreement Entail, and What Are Its Implications?

    The AI Act, in development since 2021, has faced numerous challenges, particularly due to the explosion of generative AI, which disrupted its original risk-based approach. Initially designed to classify applications—from harmless spam filters to unacceptable uses of facial recognition in daily life—the regulation had to be hastily revised to address generative AI’s unexpected rise. While the need for regulation is undeniable, the last-minute additions risk stifling European startups just as they begin to close the gap, burdening them with complex rules that ironically favor more advanced U.S. giants.

    The rapid progress of large language models, built on neural networks with billions of parameters trained on opaque datasets, raises concerns about privacy, copyright, and security risks tied to their unpredictable behavior. Given AI’s breakneck evolution, a flexible, adaptive regulatory approach is essential. Yet the current framework—hundreds of pages of self-referential legal considerations—risks quick obsolescence.

    Beyond mere exemptions, flexibility is crucial, especially given the growing role of open-source AI, which European startups are leveraging. Many repurpose existing models from tech giants, while some are now developing their own foundational models. An open regulatory approach is needed to address emerging risks while fostering innovative, homegrown European AI capable of competing with U.S. and Chinese dominance.

    Key Disputes in Negotiations: What Do They Reveal About the Agreement’s Strength?

    EU lawmakers initially sought to replicate the GDPR’s success—a global gold standard for data regulation—but AI presents a different challenge. Europe already lags behind U.S. tech giants, and the AI Act introduces uncertainty just as European startups like Mistral (France) and Aleph Alpha (Germany) begin gaining traction.

    In recent weeks, France, Germany, and Italy pushed back, creating a cacophony over two issues:

    • State use of facial recognition (some member states refuse to fully abandon it).
    • Preserving the potential of startups working on foundational models (the backbone of generative AI).

    These governments proposed self-regulation and codes of conduct for such models, clashing with EU institutions rushing to finalize the agreement amid pressure from NGOs advocating for strict adoption. The compromise includes broad exemptions for open-source developers, central to Europe’s foundational AI models.

    French Digital Minister Jean-Noël Barrot claimed the deal would preserve Europe’s ability to develop its own AI technologies and strategic autonomy. But why is AI such a critical issue for Europe, and what are the economic stakes? Where does France fit in?

    Why AI Is a Major Stake for Europe—and What’s at Risk for Its Economy?
    Europe is falling behind not only the U.S. but also China—a situation that was not inevitable. Neural networks owe much to European pioneers, whether they stayed on the continent or moved abroad. Geoffrey Hinton, the “godfather of AI,” left the United States for Toronto partly to distance his research from military funding, while Yann LeCun (Meta’s Chief AI Scientist) and Sepp Hochreiter (whose 1991 work on long-term memory in neural networks paved the way for LSTM architectures) laid the groundwork for today’s transformer-based language models, the core of generative AI since 2017.
    Despite educational crises and declining math proficiency, Europe—particularly France—still hosts pockets of excellence that could drive a distinctive AI approach. The idea that Europe should settle for a regulatory role, dependent on U.S. and Chinese tech, is economically and strategically suicidal, given AI’s pivotal role in technological development. Historically, mastering cutting-edge technology has been key to catching up, growing, and projecting power. Yet for decades, the EU focused on competition policy over industrial strategy, treating citizens more as consumers than producers—a trend the AI Act risks perpetuating.
    While Thierry Breton’s leadership marks a shift toward industrial sovereignty, the task remains monumental. The AI Act must balance regulation with innovation, ensuring Europe doesn’t just consume AI but develops it. The alternative—a future where Europe remains a rule-maker but not a tech-maker—would be a strategic failure.

    This piece was originally published by the French Institute for International and Strategic Affairs – IRIS.

  • Crypto-Bubbles and the Decentralized Eldorado

    The crypto rollercoaster has consequences beyond the realm of mass speculation. It shapes key discussions on the future of money and the Internet, which revolve around notions of decentralization and economic power.

    Web3: The Quest for Decentralization, and the Market Hype

    The idea of Web3, with blockchain at its core, is meant as a promise of decentralization, a return to the spirit of Web1 (whose early protocols still underpin the Internet). It aims to supersede Web2, the era marked by the rise of social media giants, which filled the void left by the original Internet’s lack of an identification protocol in order to expand their control over personal data for advertising purposes. Giving users back control of their data through the blockchain, and ensuring interoperability across services, is the key rationale behind Web3. The idea that artists could use NFTs – usually defined as digital property certificates – to market their creations directly and cut out the middleman is undeniably appealing. Similarly, programmable blockchains like Ethereum, with their decentralized apps (dApps), could offer a way past the exorbitant privilege wielded by app stores.

    However, the main promise of web3 clashes with the reality of blockchains, caught up in the centralization of large exchanges and key venture capital firms. Besides, the massive crypto bubbles – fueled by herd behavior, shaky digital constructs and (central) monetary policy – do not quite fit with the common vision of financial and digital decentralization… A bubble is generally defined as a mismatch between the trend of an asset price and some underlying value. In the crypto bubble, the very idea of an underlying asset – or reality – has been derided. Some NFTs have pushed that logic with undeniable humor, like those based on drawings of adorable monkeys and their ApeCoin…

    The global financial landscape – with inflation-driven monetary tightening – is throwing many asset classes into trouble, drying up the liquidity flows that fueled the rally. Extreme volatility has been a hallmark of cryptos since their inception, but recent years have seen a considerable drift toward outright Ponzi schemes, built on concepts as far-fetched as virtual land. The most recent projects rarely show the kind of monetary thinking that underpinned the (very experimental) creation of bitcoin in 2008, which drew on the Merkle tree, a cryptographic construct developed as early as the 1970s.
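    The Merkle tree mentioned above is what lets a blockchain commit to a whole batch of transactions with a single hash. The sketch below is a minimal, illustrative construction in Python (bitcoin itself uses double SHA-256 and binary serialization; this simplified version keeps only the core folding idea):

    ```python
    import hashlib

    def sha256(data: bytes) -> bytes:
        """Single SHA-256 (bitcoin actually applies SHA-256 twice)."""
        return hashlib.sha256(data).digest()

    def merkle_root(leaves: list[bytes]) -> bytes:
        """Fold a list of transaction payloads into one root hash.

        Each level pairs adjacent hashes and hashes their concatenation;
        an odd node at any level is duplicated, as in bitcoin's scheme.
        Changing any single leaf changes the root, which is what makes
        the root a tamper-evident commitment to the whole set.
        """
        if not leaves:
            raise ValueError("empty leaf list")
        level = [sha256(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2 == 1:
                level.append(level[-1])  # duplicate the odd node
            level = [sha256(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    root = merkle_root([b"tx1", b"tx2", b"tx3"])
    print(root.hex())
    ```

    A node can thus prove a transaction belongs to a block by exhibiting only the sibling hashes along one path, rather than the full transaction list.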

    Much of the recent confusion came from stablecoins, which aspired to be the poster child of cryptos by offering a fixed exchange rate with a currency such as the dollar. Some, however, operate without collateral… This was the case with TerraUSD, which relied on a highly vulnerable rebalancing mechanism built around a floating crypto named Luna; its peg to the dollar collapsed as a result of massive outflows. Collateral-based, centralized stablecoins like Tether have so far shown more resilience. Beyond reports of destabilizing moves by large investment funds betting on the downside, the rout occurred, in any case, against a background of severe fragility.
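    The fragility of an uncollateralized peg can be made concrete with a toy model. The sketch below is loosely inspired by the TerraUSD/Luna design but is purely illustrative: it assumes, crudely, that the floating token’s total market value stays fixed while redemptions mint new supply, so each wave of redemptions dilutes the price and the next wave mints even more tokens – the so-called death spiral. None of the numbers or mechanics reflect the actual protocol.

    ```python
    class AlgorithmicPeg:
        """Toy model of an uncollateralized stablecoin peg.

        Simplifying assumption: the floating token's total market value
        is constant, so price = market_value / supply. Redeeming one
        stablecoin mints $1 worth of the floating token at the current
        price, expanding supply and pushing the price down.
        """

        def __init__(self, float_market_value: float, float_supply: float):
            self.float_market_value = float_market_value
            self.float_supply = float_supply

        @property
        def float_price(self) -> float:
            return self.float_market_value / self.float_supply

        def redeem_stablecoins(self, amount: float) -> float:
            """Burn `amount` of stablecoin; mint the matching float tokens."""
            minted = amount / self.float_price
            self.float_supply += minted
            return minted

    # hypothetical numbers: $1bn floating market value, 100m tokens ($10 each)
    peg = AlgorithmicPeg(float_market_value=1e9, float_supply=1e8)
    for day in range(5):
        peg.redeem_stablecoins(2e8)  # $200m of redemptions per day
        print(f"day {day}: float token price ${peg.float_price:.2f}")
    ```

    Under this (deliberately crude) constant-market-value assumption, the floating token’s price falls geometrically with each redemption wave, which is why massive outflows can unravel such a peg so quickly.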

    Blockchain Is Still an Experiment, However Fascinating

    The concept of monetary decentralization, using cryptography, remains exciting. It is a substantial contribution to the discussion on the nature of our monetary and banking system, and its reform. This system is said to be centralized in the sense that it relies on central banks, but also on the privilege of massive money creation by commercial banks (through loan issuance out of thin air) – centralized institutions indeed. On the other hand, the concept of decentralization is also relevant in the face of Big Tech’s concentration in the digital sector and its control over user data. This control is likely to increase exponentially with the level of immersion, as will be the case with the metaverse.

    Overall, the crypto world needs to further question the purpose, stability and legal status of its constructs. The crypto-currencies and assets that have merely capitalized on the bubble of the past few years are unlikely to thrive. The (few) true pioneers of blockchain keep insisting on its experimental nature. For example, a crucial discussion centers on moving beyond proof of work (a mining mechanism based on a cryptographic contest between blockchain nodes), which comes at an exorbitant energy cost. Considerable effort is being made in this direction in the case of Ethereum, toward the more reasonable concept of proof of stake – which accredits validating nodes on the basis of their proven commitment, such as a substantial holding of the cryptocurrency. It is hard to see how bitcoin could reform in this direction. If Web3 is to bear fruit for any kind of decentralization, the crypto ecosystem will first have to refocus.
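    The “cryptographic contest” behind proof of work, and its energy cost, can be seen in a few lines. The sketch below is a simplified illustration (real bitcoin mining hashes a binary block header twice and compares against a numeric target; here we just look for leading zero hex digits): each additional digit of difficulty multiplies the expected number of hash attempts by 16, which is where the energy bill comes from.

    ```python
    import hashlib

    def mine(block_data: bytes, difficulty: int) -> int:
        """Brute-force a nonce so that SHA-256(block_data || nonce)
        starts with `difficulty` zero hex digits.

        This is the proof-of-work contest in miniature: there is no
        shortcut, only trial and error, and the winning nonce is cheap
        for everyone else to verify.
        """
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(
                block_data + nonce.to_bytes(8, "big")
            ).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    nonce = mine(b"block 42", difficulty=4)
    print(f"found nonce {nonce}")
    ```

    Proof of stake discards this brute-force search entirely: instead of burning electricity on hash attempts, validators are selected in proportion to the stake they lock up, which is why the switch cuts energy use by orders of magnitude.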

    Regulation and Central Bank Digital Currencies Will Redefine the Landscape

    Emerging and updated regulations – like MiCA and the TFR in the European Union – focus mainly on the issues of anonymity and trafficking. Such rules may well disrupt the model of crypto platforms and can be expected to spread worldwide. At the same time, other important pieces of regulation target Big Tech, like the twin Digital Services and Digital Markets Acts, which the EU is in the process of ratifying. Competition policy is waking up to the challenges of the digital age. However, governments will have to strike a balance between tackling Big Tech monopolies and regulating decentralized players, which present major risks but also opportunities to restore a healthier level of competition.

    Public digital projects, especially on the monetary stage, are also crucial to seizing the opportunity for reform. Central bank digital currencies are not crypto-currencies as such but official currencies in their own right: they will be backed by their respective central banks (rather than by a cryptographic creation mechanism) and enjoy full equivalence with other forms – digital or physical – of the currency. The development of CBDCs should be pursued more ambitiously, to give money more meaning and stability through a more direct link between monetary authorities and economic players. This brings us back to discussions that have endured underground since the Great Depression, on reforming the fractional-reserve system. Admittedly, the emergence of crypto-currencies helped revive interest in these ideas after the Great Recession. The crypto rout could now undermine interest in digital currencies as a whole; instead, we should engage in a broad political reflection on using digital innovation to stabilize our monetary system.