This month’s AI Action Summit in Paris comes at a critical juncture in the development of artificial intelligence. At issue is not whether Europe can compete with China and the United States in an AI arms race; it is whether Europeans can pioneer a different approach that puts public value at the centre of technological development and governance. The task is to move away from digital feudalism, the term I coined back in 2019 to describe the dominant digital platforms’ model of rent extraction.
AI is not just another sector. It is a general-purpose technology that will shape all sectors of the economy. That means it could either create tremendous value or cause serious harm. Though many commentators talk about AI as if it were a neutral technology, this framing understates its fundamental economic power. Even if AI were free to build, it would still need to be powered and deployed, which requires access to the gatekeepers' cloud-computing platforms, such as Amazon Web Services, Microsoft Azure, and Google Cloud.
This dependency makes steering the technology’s development towards the common good more urgent than ever. The real question isn’t whether to regulate AI but how to shape markets for AI innovation. Rather than regulating or taxing the sector only after the fact, we must create a decentralised innovation ecosystem that serves the public good.
The history of technological innovation shows what is at stake. As I argued in my book The Entrepreneurial State, many of the technologies that we use every day came about as a result of collective public investment. What would Google be without the Defense Advanced Research Projects Agency (Darpa)-funded internet? What would Uber be without US Navy-funded GPS? What would Apple be without CIA-funded touchscreen technology and Darpa-funded Siri?
The companies that have profited from these public investments, while often dodging their taxes, are now using their excessive rents to drain talent from the very public institutions that made their success possible. This parasitism is best epitomised by Elon Musk's "Department of Government Efficiency" (Doge), which advocates cutting the very funding programmes that allowed Tesla to collect US$4.9 billion (167 billion baht) in government subsidies.

A lack of state capacity will make regulating new technologies in the public interest increasingly difficult. The state has already been depleted of expertise, owing to higher private-sector wages and decades of outsourcing to private consultants (what Rosie Collington and I call The Big Con). What happens when most technical knowledge becomes concentrated in just five private companies? Instead of waiting to find out, we must step in now to regulate AI in a dynamic, adaptable way, while the AI technology stack and its various monetisation mechanisms are still evolving.
In a recent research project at the UCL Institute for Innovation and Public Purpose, my colleagues and I took another look at digital feudalism and the need to differentiate value creation from value extraction in AI, the latter being what we call "algorithmic rents". We show that platforms like Facebook and Google have evolved in ways that focus on "attention rents": users' experiences are manipulated to maximise profit, with feeds crammed with ads and "recommended" addictive content, a process the Canadian journalist Cory Doctorow colourfully described as "enshittification". Infinite scroll, nonstop notifications, and algorithms designed to maximise "engagement" by surfacing harmful content and borderline-illegal activity have all become the norm.
AI systems could follow the same extractive path and supercharge this rent-seeking behaviour, such as by requiring payment for essential information, data privacy, online safety, freedom from advertising, or basic listings for one's small business in global searches. Because platforms currently hide their algorithms and attention-allocation mechanisms (the sources of their "algorithmic attention rents"), the key to regulating the sector, as with climate change, is to force digital gatekeepers to disclose how their algorithms are being used. This information should then be integrated into reporting standards for all digital platforms.
Similarly, AI developers like OpenAI and Anthropic hide, among other things, the sources of their training data; what guardrails they have placed on their models; how they enforce their terms of service; their products’ downstream harms (such as addictive use and underage access); and the extent to which their platforms are being used to monetise eyeballs around the world through targeted advertising. Moreover, AI’s large and growing environmental impact adds yet another layer of urgency to the challenge. Major AI companies’ emissions have surged, leading the International Energy Agency to warn that global “electricity consumption from data centres, AI, and the cryptocurrency sector could double by 2026”.
Fortunately, recent developments suggest that alternative pathways are possible. DeepSeek, the Chinese AI company that sent many US tech stocks into a brief tailspin in late January, appears to have demonstrated that comparable performance can be achieved with significantly less computing power and energy consumption. Could more efficient approaches to AI development help break the stranglehold that major cloud computing companies have established through their control of vast computing resources?
While it is too early to tell whether DeepSeek’s breakthrough will lead to a market restructuring, it does remind us that innovation at the software level remains both feasible and necessary for addressing AI’s environmental impact.
As Unesco's Gabriela Ramos and I have argued, AI can enhance our lives in many ways, from improving food production to bolstering resilience against natural disasters. European leaders from Mario Draghi to Ursula von der Leyen and Christine Lagarde regard AI as crucial for reviving European productivity. But unless they address the nature of digital feudalism, the extractive behaviour that underpins AI model development, and the public sector's current lack of regulatory capacity, any attempt to unleash more robust, sustainable growth will crash on the rocks of new and deeper inequalities.

One potential path forward is "EuroStack", an independent digital-infrastructure initiative encompassing cloud computing, advanced chips, AI, and data, all governed as public goods rather than controlled by monopolistic enterprises.
This isn't about choosing between innovation and regulation, nor is it about managing technological development from the top down. It is about creating incentives and conditions to steer markets towards delivering the outcomes we want as a society. We must reclaim AI so that it provides public value, rather than becoming another rent-extracting machine. The Paris summit offers an opportunity to showcase this alternative vision.

©2025 Project Syndicate