Jonah Schwope, Lena Schröder – Beyond AI Futurism: A Socio-Ecological Vision for AI

The AI tech race is well and truly on, and the EU wants to be part of it, but if it only serves the imperatives of capitalism it is going to be worse than useless in meeting the ecological and social challenges of our age.

Jonah Schwope works at the Berlin-based think tank Das Progressive Zentrum, where he examines the conditions under which democratic governance and value-driven innovation can be realised within society. His academic background is in Political Science and International Public Governance. His work has been published in Tech Policy Press, Springer Gabler, and the Foundation for European Progressive Studies (FEPS).
Lena Schröder is pursuing a master’s degree in Science, Media, and Communication at the Karlsruhe Institute of Technology (KIT) while working at the Institute for Technology Assessment and Systems Analysis (ITAS). Through her studies, she has gained interdisciplinary experience at the interface of cognitive science, machine learning, AI ethics, and sustainability research. She is particularly interested in the social and ecological implications of AI systems in the context of sustainable transformations, as well as the ideologies behind AI.

Cross-posted from Green European Journal

Picture by GDJ

For centuries, Europe’s political imagination has been powered by a belief in progress – that tomorrow would be better than today. But this promise seems to be faltering. The geopolitical order is shifting, social inequality is deepening, ecological limits are being breached, and democratic rule is under strain. Against this backdrop, a question resurfaces: can liberal democracy still deliver on its promise of progress?

While some social thinkers warn that the possibility of progress is slipping away, or declare that such loss is an unavoidable byproduct of modernity, others appear convinced that AI holds the key to reviving stagnant economies, taming the climate crisis, and even reconciling divided societies. It serves as the ultimate promise of collective salvation.

Governments buy into the hype, with China and the US contesting a new “Tech Cold War”, and Britain’s prime minister Keir Starmer promising nothing short of “incredible change” as he prepares to “turbocharge AI”. Through the AI Continent Action Plan, EU Digital Commissioner Henna Virkkunen meanwhile aims to position Europe as a “global leader in Artificial Intelligence”.

These trends are hardly surprising, as governments worldwide derive a considerable share of their political legitimacy from their ability to deliver on the promise of progress – and AI offers them a vehicle to do so. Yet behind this narrative of salvation lies a simple truth: the fantasy fuelling today’s AI hype is written, packaged, and sold by a handful of big tech corporations.

AI futurism

In the current reality, according to Astrid Mager and Christian Katzenbach, technology companies “not only take over the imaginative power of shaping future society, but also partly absorb public institutions’ ability to govern these very futures with their rhetoric, technologies, and business models.” Or as Karen Hao puts it, AI companies colonise our future, claiming that “they alone are the ones with the scientific and moral clarity to bring people to heaven or else risk sending everyone to hell.” This discursive dynamic can be described as AI futurism. Coined by AI ethicist Paul Schütze, AI futurism describes the “hegemonic and (materially) institutionalised system of ideas and imaginaries [which] sustains the societal and individual perception of AI technologies.”

Central to AI futurism are three interrelated myths: that technology alone can solve our deepest problems (solutionism); that its advance is inevitable (determinism); and that it will ultimately surpass human intelligence (singularity). Influenced by libertarian and neo-reactionary ideologies, the AI futurist narrative reflects an inherently undemocratic belief system.1 As these ideologies rebrand crises as engineering problems, they become tools of power: justifying monopolies, deregulation, and exploitative practices in the present, all in the name of an envisioned future.

Squeezed between empires

Despite their differences, the United States and China – the two global leaders in AI – both pursue development trajectories that exhibit AI futurist traits. Their trajectories, framed as technological progress in the service of human advancement, de facto enable the concentration of power in the hands of a few economic actors and their allied political coalitions. As it scrambles to catch up, Europe is increasingly in danger of blindly following this same path.

In the US, the dominant AI vision has, at least since the beginning of Trump’s second term, fused techno-futurist beliefs with right-authoritarian agendas. AI serves as a tool for dogmatic deregulation, dismantling the modern administrative state, and eroding democratic rights. At the same time, online platforms reshape online spaces according to a techno-libertarian worldview – one that treats technology as a force that should operate free from government oversight or social responsibility. This logic has proven highly compatible with authoritarian-populist communication, amplifying polarising content and weakening public accountability. A prime example of this dynamic is Trump’s international trade strategy, which seeks to protect the intellectual property rights of Big Tech nationally and abroad – for instance, through tariff threats – while platforms roll back content moderation and fact-checking.

China’s vision likewise echoes techno-futurist imaginaries. Here, innovation is centrally promoted as both an engine of growth and an instrument of social control. A key aim of the Chinese strategy is to demonstrate the ideological superiority of authoritarian rule and communist values through AI leadership.2 Tech giants like Huawei, Alibaba, and Tencent function as extensions of state power – domestically, within surveillance architectures, and internationally, as vehicles of geopolitical influence. China’s long-term strategy of technological autonomy seeks full independence from Western technology and the export of its model through initiatives such as the “Digital Silk Road”.

And Europe? What role does it claim for itself in the age of AI? The AI Continent Action Plan aims to position Europe as a leader in AI as well. Safeguards and regulation, as previously addressed by the 2024 EU AI Act, are no longer mentioned. Instead, the plan prioritises competitiveness, technological sovereignty, and European leadership. In other words, it seems to increasingly deprioritise core values such as data protection, democratic accountability, and social inclusion in favour of geopolitical ambition.

Pushed by industrial actors, such as the EU AI Champions Initiative, this shift rests on the assumption that geo-economic competition is the defining logic of the digital age. While that may hold for profit-driven companies, thoughtlessly adopting the industry narrative is a political mistake. Reducing innovation strategy to competitiveness and growth imperatives neglects the political, social, and ecological dimensions essential to shaping AI in the public interest. Equally questionable is the oft-accompanying assumption that any kind of regulation hinders innovation. While there is no doubt that Europe needs strategic investment in digital sovereignty and competitive industry, blind deregulation is not only hostile to our democratic goals, it is also not up to the task of making Europe resilient.

AI capitalism vs. proportionality

The discrepancy between the increasing adoption of AI futurism in Europe and real progress is most evident when viewed through the lens of socio-ecological crises.

The solutionist nature of the AI futurist approach depicts technological advancement as the central tool in a seamless transition to climate neutrality. AI can indeed contribute to optimising transport, monitoring emissions, and improving resource efficiency, through tools like smart grids that help bridge the gap between energy demand and renewable supply, or geodata-driven forecasting and risk assessments to support climate adaptation.

Yet AI technologies are currently failing to deliver on their promise of ecological progress. As Kate Crawford compellingly shows, AI is not simply lines of code or server clusters: it is much better conceptualised as a socio-material system, sustained by vast infrastructures of data, labour, and energy. It is algorithms encoding assumptions; it is platforms concentrating power; it is infrastructures demanding resources, extracted unevenly from the Global South. To call AI a “neutral tool” is to miss its essence.

In this context, “AI capitalism” refers to the structural entanglement of AI technologies with data-extractive business models, publicly subsidised infrastructure development, and financial market-driven logics of valorisation. AI innovations are not created in a vacuum, but emerge within extractive economic and institutional contexts that are primarily geared toward private sector interests and profit for the few. Their deployment is profoundly resource-hungry. Training large-scale models requires immense energy and rare materials, while generating growing streams of electronic waste. Such dynamics risk creating a “CO2 lock-in”, where AI infrastructures lock societies into unsustainable trajectories.3

Moreover, sustainability discourses often reflect priorities of the Global North, neglecting conceptual dimensions such as sufficiency or broader decolonial perspectives. And increased consumption can erode the efficiency gains that form a central pillar of the European climate strategy.4 In short, AI futurism dismisses today’s ecological costs by invoking a promised future where a superintelligent Artificial General Intelligence (AGI) will supposedly solve the very crises its development exacerbates in the present.

As Aimee van Wynsberghe proposes, a path towards more sustainable AI development and deployment would be based on a principle of proportionality, requiring that AI’s ecological costs be justified in line with the public value it generates. Accordingly, developing robust models for ecological accounting, along with frameworks that integrate both numerical and normative insights to establish what counts as “proportional”, must be a future priority.

If AI continues to drive energy-hungry extractivism, it risks undermining the European Green Deal. If its development is instead aligned with principles of sufficiency, care, and resilience, AI could contribute to socio-ecological transformation. But what are the political conditions that would make a socio-ecological vision for AI possible?

From concentration to collective governance

The European Commission already posits that to help build a resilient Europe, people and businesses should be able to enjoy the benefits of AI while feeling safe and protected. However, this idea only holds true if AI undermines neither the conditions of a liveable future within planetary boundaries nor our ability to democratically shape that future. Instead of uncritically joining the global race for deregulation, Europe should move from the concentration of compute and AI power towards fostering collective governance over AI’s path.

First, it must tackle power concentration by regulating markets and investing in commons. Effective antitrust enforcement5 combined with interoperability standards and open-source infrastructures is vital for digital sovereignty. Commons-based models such as Data Solidarity or the DECODE project in Barcelona demonstrate how data can be reclaimed as a public good with ecological and social benefits.

Second, Europe has to rethink its innovation policy and move from market incentives to mission-oriented governance. Economist Mariana Mazzucato reminds us that public investment has historically driven transformative breakthroughs, and that innovation policy should focus on broader societal goals, rather than short-term profits. Yet even the much-discussed Draghi Report fails to embed European investment policy in such a strategic vision. Rather than funnelling resources into AGI, public funds should support context-sensitive AI for decarbonisation, circular economies, and democratic resilience.

Governments can redirect AI innovation by tying public funding to socially and ecologically aligned missions – a strategy Carsten Jung calls AI directionism (as opposed to accelerationism). Using the UK as an example, Jung demonstrates that only one in seven AI companies focuses on solving a specific problem, yet approximately one in five receives public funding. This offers the government an opportunity to strategically steer the sector through targeted allocation of contracts and funding, and to give it a clear orientation, for example towards socio-ecological standards.

Third, public alternatives need to be built. One model is provided by the Mozilla Foundation, which calls for the creation of a non-commercial AI ecosystem that promotes public goods, ensures democratic access, and provides a counterweight to the dominance of private AI companies. Others suggest a democratically governed AI stack of public compute, open data, and open models, which, rather than suppressing private innovation, aims to rebalance ecosystems toward the common good. A thorough evaluation is needed to identify where strategic vulnerabilities exist and where political priorities must be set: in sectors handling highly sensitive data (for example, healthcare), or in areas where decisions shape individuals’ ability to exercise their civic rights, experience political efficacy, and realise their broader life opportunities (for instance, the public sector).

Fourth, democratisation needs to be understood and embedded as a cross-cutting task. AI risks reshaping representation, communication, and participation in exclusionary ways if left unchecked. Yet projects like Amsterdam’s Responsible AI approach illustrate how openness, transparency, and citizen involvement can foster experiments in both building state capacity and gaining public trust. Models such as Helsinki’s AI-based public service delivery and Taiwan’s digital democracy approaches already showcase how AI can be used for the common good.

For AI to serve the public good, it must be governed through genuinely democratic processes, not treated as a purely technical domain. Yet current debates often neglect this fact, approaching technology governance as an afterthought rather than as the foundation of social benefit. The democratisation of AI means going beyond imparting merely instrumental knowledge (know-how or technical competence; digital literacy). It requires reflexive knowledge, or know-why – in other words, an understanding of the underlying implications and logic of AI. But above all, it requires transformative knowledge, or know-what, which relates to design options, desirable futures for society, and the inherently political dimensions of AI.

In practice, the democratisation of AI further requires the state to create the institutional conditions that allow democratic intermediaries to meaningfully influence how AI systems are designed, procured, and governed. This entails building transparency over funding priorities, procurement processes, and data-sharing agreements. Yet transparency is meaningless without the structures and resources that allow civil society to participate and exercise democratic oversight before decisions are finalised.

Lastly, democratic progress builds on collective action. The dominance of AI futurism and deregulatory agendas, mirrored in the EU Simplification Agenda, erodes the democratic capacity to steer technology. AI must be understood and treated as “normal technology” in the sense that its diffusion and impacts are shaped more by the social, political, and institutional contexts in which it is embedded than by the pace of technological innovation itself.

As Mazzucato insists, innovation is political and regulation must be reclaimed as a tool of collective shaping. AI is not an autonomous, world-altering force, but a bundle of technologies whose very real effects are determined by how societies choose to develop and embed it. In the current geopolitical climate, new democratic alliances must be forged to institutionalise co-creative, commons-oriented AI governance – both among the trifecta of state, civil society, and transformative businesses, and across borders with like-minded actors such as Canada, Japan, and South Korea.

Keeping futures plural

In the 1990s, French philosopher Paul Virilio coined the term “racing standstill” to describe the dizzying speed of “progress” that goes nowhere. Today, AI futurism manifests this idea. 

AI risks becoming a progress trap. While it concentrates power, it doesn’t build homes, draw down carbon emissions, or generate the genuinely novel ideas needed for transformative change. The danger lies less in AI’s utility, which is real, than in the misguided visions projected onto it. When these inflated expectations collapse, the resulting disillusionment profits only the authoritarian-populist narratives that exploit feelings of loss and nostalgia.

The challenge, therefore, is not to abandon progress but to reclaim and redefine it. As sociologist Ralf Dahrendorf contended, the goal of a progressive society is to expand people’s life chances – their opportunities to shape their own lives and fulfil their potential. Philosopher Rainer Forst further argues that only those processes can be considered progress that empower those affected by them to “autonomously determine in which direction their society should develop”. Translated into political governance, these ideas converge on a fundamental task: preserving the capacity for self-determined change. We should avoid dependencies, dead ends, or situations portrayed as having no alternatives, as these conditions undermine the possibility of democratic stewardship.

The fundamental characteristic of progressive governance, therefore, lies in consistently strengthening a society’s capacity to change. The development and governance of AI can only be called progressive if they empower people themselves to shape and justify how technology transforms their society. This also entails safeguarding against dynamics – such as polarisation and inequality – that undermine social cohesion and, consequently, people’s ability to organise collectively. In Europe’s current political climate, this implies resisting deregulatory agendas and investing in political institutions capable of translating abstract democratic values into actionable concepts. It means regulating the algorithms that turn social platforms into divisive arenas rather than spaces for deliberation. It means building environments where new ideas can emerge and democratically controlled infrastructures are in place to support their realisation, while clear benchmarks for purpose and proportionality steer technological innovation towards the fulfilment of broader socio-ecological goals, safeguarding the openness of the future.

We must engage in a deliberate act of political imagination: to recognise that AI’s path is not predetermined by Big Tech, but is contested terrain – a technology that can either concentrate power or strengthen democracy, accelerate ecological collapse or support sustainability. Which path prevails will be determined by the collective choices we make about governance, ownership, and purpose.


  1. Timnit Gebru & Émile P. Torres (2024). “The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence.” First Monday, 29(4). https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599
  2. Jinghan Zeng (2022). Artificial Intelligence with Chinese Characteristics: National Strategy, Security and Authoritarian Governance. Singapore: Palgrave Macmillan.
  3. Scott Robbins & Aimee van Wynsberghe (2022). “Our New Artificial Intelligence Infrastructure: Becoming Locked into an Unsustainable Future.” Sustainability, 14(8), Article 4829.
  4. Anne-Laure Ligozat, Julien Lefèvre, Aurélie Bugeau & Jacques Combaz (2022). “Unraveling the Hidden Environmental Impacts of AI Solutions for Environment: Life Cycle Assessment of AI Solutions.” Sustainability, 14(9), Article 5172.
  5. Friederike Rohde, Maike Gossen, Josephin Wagner & Tilman Santarius (2021). “Sustainability challenges of Artificial Intelligence and policy implications.” Ökologisches Wirtschaften, 36(O1), pp. 36–40.

