Florent Marcellesi – Can AI be sustainable and just?

Artificial Intelligence incurs great dangers to people and planet, but like any technology it could be used for good in the right hands.

Florent Marcellesi is co-spokesperson and a former MEP for the Spanish Green party EQUO. He is a political ecology researcher and activist, a member of the editorial council of the magazine Ecología política, and a member of the think tank Ecopolítica.

Cross-posted from Green European Journal

Picture by Tuesday Digital

Today, the favourite metaphor of the Western reactionary and denialist international elite is the chainsaw. But what if this mechanical tool took a technological leap and was backed or controlled by Artificial Intelligence (AI)? In an uncertain and unstable world where the illiberal and climate sceptical chainsaws of Donald Trump, Javier Milei or Viktor Orbán are practically allied with the “tech bro” oligarchs of Silicon Valley, this question is central to the future of our democracies and the planet.

And it is not a novel question either. Throughout history, new information technologies have posed complex challenges for Homo sapiens. For example, did the invention of radio and television bring about a democratic and social improvement? As it turns out, they have been used for both democratic and totalitarian purposes. Take, for instance, Radio Londres, which broadcast to the French resistance during the German occupation in the Second World War, or, inversely, Leni Riefenstahl’s Nazi propaganda films. As Yuval Noah Harari rightly reminds us, “Information isn’t truth. Its main task is to connect rather than represent, and information networks throughout history have often privileged order over truth. (…) More information, ironically, can sometimes result in more witch hunts.”1 Be warned, there are precedents.

Of course, since the Second World War, new – and soon not-so-new – technologies (computers, internet, smartphones, social networks, etc.) have been developed at an increasingly vertiginous and exponential speed. Whether by chance or not, they have also coincided with another human-induced catastrophe: the ecological crisis. These technologies have brought the planet both benefits and drawbacks – although arguably much more of the latter. Now, as the most hyped technological innovation of recent years, will AI be the holy grail for sustainability, or will it be the artificial saw that cuts the last branch on which the human species is sitting? Even AI itself is incapable of answering this question. Try it and see. Spoiler: cognitive dissonance guaranteed.

At this point in the movie, technological gold rush after technological gold rush, we know all too well that modern information technologies are both a source of new solutions and new problems. AI adds another critical factor to this equation since, in its own way, it is capable of learning and thinking – and at an astronomical speed. Faced with these challenges, with a sense of déjà vu yet beset by new unknowns, how does AI intersect with human rights and ecology? And, above all, what should be the public and collective response to make the best of AI’s potential so that it is neither a modern witch-hunting weapon nor an ecological bomb?

Big Brother strikes again

Artificial intelligence reopens the debate about the relationship between technology and fundamental rights. AI’s potential makes it a double-edged sword, just as television and cinema became weapons of mass propaganda or as social networks can be sources of significant disinformation in the 21st century.2 In an increasingly turbulent world riddled with authoritarian and reactionary tendencies, what role can AI play and whom will it serve?

Many public authorities insist that, well employed, AI can serve as a system to reinforce citizen security. However, while this may be true in some cases, facial recognition and biometric surveillance in public and private spaces3 also open the door to severely curtailing fundamental rights, such as freedom of expression and information, the right to demonstrate, or the right to privacy. And this is not only about the future; it is happening right now. In Argentina, two days before a major demonstration in 2023, President Milei’s chainsaw government, with its anti-picketer slogan “he who cuts doesn’t get paid”, threatened to use facial recognition to identify protesters and cut their social benefits. The threat was effective: as a result of the warning, few people were present on the streets, and the right to protest was de facto repressed. In Russia, people attending the funeral of opposition leader Alexei Navalny, who died in a prison of Vladimir Putin’s regime, were arrested after being identified by facial recognition software that analysed images from security cameras and social networks. Big Brother strikes again.

But that’s not all. As if we were already immersed in an episode of Black Mirror, AI can reinforce the so-called social scoring in a very tangible way. Through such tools, it is possible to assess people’s behaviours by collecting personal data and then link individuals’ rights, social benefits, or access to public services to their scores. The consequences for everyday life are potentially enormous: less privacy, more social discrimination, control in the public and private spheres, an adverse impact on mental health, etc. Impossible? China is already testing it with its social credit system. So, let us not swoon over the bombshell arrival of DeepSeek. Beyond the more visible censorship in this Chinese AI (ask the model, for example, about Tiananmen or Taiwan), Beijing has already proven that it knows how to use its technological power as a weapon for surveillance, control and coercion both at home and abroad. Be careful: all that glitters is not gold.

Still, let us not be hypocritical. In the United States, the country of the “Broligarchy”,4 while the initial objective of some AI applications is to improve public efficiency (which may even be achieved), studies have shown that algorithms can also harbour discriminatory biases against poor and working-class groups. Similarly, in Amsterdam, an algorithm was used by the authorities to classify young people from disadvantaged neighbourhoods based on their likelihood of becoming delinquents. As 100 European NGOs have highlighted, AI, as it is developed today, amplifies class-based, sexual and racial discrimination. If artificial intelligence is not properly regulated or controlled, we are not and will not all be equal in the face of “Big Bro Tech”.

And we must not forget: rights are not only civil but also social and economic. In this sense, how does AI affect a basic constitutional right such as work?5 While AI has so far burst into the world of work (or at least that is how it is sold) like a Formula 1 car in a 30 km/h zone, the reality is perhaps less disruptive. For example, in Spain, a report on this issue does not point to a great labour revolution but rather to differentiated and nuanced changes depending on the sector in which AI is employed.6 At the global level, the International Labour Organization predicts a complementarity between existing jobs and AI, indicating a change in work intensity and autonomy, with a greater risk for women.7 In any case, the conversation about AI and work should, first and foremost, be qualitative and focused on ends. Will AI strengthen working people or erode their rights? Will it bring more equality between men and women in the workplace, or will it widen the gender gap? Will artificial intelligence help us get rid of bullshit jobs, or will it make them even more bullshit? And lastly, will it encourage green sectors or reinforce unsustainable ones?

The ecological cost of AI

It is no secret that AI consumes astronomical amounts of natural resources. It isn’t just Luddites or lunatic environmentalists who say this, but Sam Altman, the CEO of OpenAI and perhaps the most recognisable figurehead of artificial intelligence. Warning that AI could face an energy crisis, he told the 2024 Davos forum that “there’s no way to get there without a breakthrough.” But while the AI guru himself acknowledges the obvious, the industry releases almost no data. Nevertheless, whether for its data centres, its learning processes or its daily use, AI needs massive amounts of electricity, water, and minerals, and it also emits a lot of CO2.

First, it is estimated that Big Tech’s AI-related electricity demand could double by 2030. By 2027, generative AI could consume as much power as Spain did in 2022!8 The problem is that this demand is growing faster than renewable energy capacity, and it presents us with two major problems. On the one hand, how do we generate this electricity? While Trump 2.0 bets on dirty energies such as coal, oil or gas, and thus leads us towards the worst climate scenarios, Altman’s OpenAI and other large AI multinationals, such as Microsoft or AWS, bet on nuclear energy to complement renewables. And not even nuclear fission, but fusion, which remains a distant dream today.

On the other hand, if one bets on AI’s potential to protect the climate, another problem arises. If the supply of renewable electricity is insufficient for global demand, who gets the green electricity that is produced? In a world where it is not possible to have everything at once (infinite energy and a climate stabilised within the limits of the Paris Agreement), one would have to choose between chatting with a language model, driving an electric car, or turning on the lights at home. Alternatively, only part of the population would have access to services that the rest may one day regard as luxuries. Again, ecology and justice are two sides of the same coin.

Additionally, AI needs lots of water. Even conservative models project that the freshwater consumption of US data centres in 2028 could be four times the 2023 level. For every ten queries you give ChatGPT-3, 500 millilitres of water are consumed. Multiply this by billions of queries per day, and you get an idea of the total volume required – small individual footprints balloon into a huge aggregate, the so-called rebound effect. On top of that, AI, like many other recent technologies, also requires a lot of minerals (copper, lithium, cobalt, etc.). This could intensify competition for these materials between AI, electric cars and renewable energies – all of which are in great need of raw materials and rare earths – and increase the risk of socio-ecological conflicts, such as the tensions around the opening of new mines in Europe. Technology is not innocuous; its development has an ecological and democratic price.
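To get a feel for the scale, the per-query figure above can be projected onto daily traffic. A minimal sketch, assuming a hypothetical round volume of one billion queries a day (the article only says “billions”):

```python
# Back-of-envelope water estimate using the article's figure of
# 500 ml per ten ChatGPT-3 queries. The daily query volume is an
# assumed round number, purely for illustration.
LITRES_PER_QUERY = 0.5 / 10          # 500 ml spread over 10 queries
QUERIES_PER_DAY = 1_000_000_000      # hypothetical: one billion queries a day

litres_per_day = LITRES_PER_QUERY * QUERIES_PER_DAY
print(f"{litres_per_day / 1e6:.0f} million litres of fresh water per day")
```

At that assumed volume, the per-query sip adds up to roughly 50 million litres a day, which is precisely the aggregate effect the paragraph points to.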

Last but not least, AI is by no means carbon-neutral. If you have a short conversation with the latest ChatGPT model, you will have emitted around 0.27 kilograms of CO2. And if you have an average of 10 such exchanges a day for a year, we are looking at almost a tonne of CO2 emissions. This is equivalent to half of what a person could emit annually in 2050 while respecting the Paris Agreement. However, in light of developments coming out of China, this pharaonic energy consumption does not seem entirely inevitable. Whereas it seemed that only the very resource-hungry US Big Tech could perform well in AI, the Chinese DeepSeek reaches similar results using far fewer chips. According to early data from its developers, this language model consumes 10 to 40 times less energy than its larger competitors. Although these reports should be taken with a pinch of salt until there is more transparent and definitive data about Chinese AI, this relatively Little Tech is already exposing the excesses of Silicon Valley AI and fuelling calls to put it on a diet.
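The carbon arithmetic above can be checked in a few lines, using only the figures the article gives (0.27 kg per conversation, ten exchanges a day):

```python
# Verify the article's claim: 10 daily ChatGPT conversations at
# 0.27 kg of CO2 each add up to almost a tonne per year.
KG_PER_CONVERSATION = 0.27
CONVERSATIONS_PER_DAY = 10
DAYS_PER_YEAR = 365

annual_kg = KG_PER_CONVERSATION * CONVERSATIONS_PER_DAY * DAYS_PER_YEAR
annual_tonnes = annual_kg / 1000
print(f"{annual_tonnes:.2f} tonnes of CO2 per year")
```

That works out to about 0.99 tonnes, matching “almost a tonne” and half of the roughly two-tonne annual per-person budget implied by the comparison.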

But beyond this technological and geopolitical rivalry between the US and China, the $64,000 question arises for both: can AI offset the ecological footprint it itself causes and even tip the balance towards sustainability? That is what Bill Gates9 thinks. It is true that there are promising applications for the ecological transition, such as more efficient energy grids, more powerful climate modelling, or faster scientific research.10 But beware: the same was said of the High Tech digital revolution 30 years ago. Today, despite its initial promises back in the 1990s and notable uses in improving efficiency, the internet sector emits as much CO2 as air traffic11 and the global ecological footprint has grown steadily since then. And remember: in the field of AI, it is not just the Green Deal team that is playing. Just as they did with the previous digital revolution, fossil energy tycoons are also using AI to accelerate hydrocarbon exploration and exploitation.12 And evidently, as urgent as the climate crisis may be, they are not playing to lose. So, in Gates we trust?

Big High Tech vs. Little Low Tech

Now, between risks and potentialities, and in the face of its inevitable development, what do we do with AI? First, let’s remember the basics once again. As Climate Change AI rightly states, AI is not a silver bullet. Rather, it is an additional tool, to be used sparingly when it brings added environmental and social value. Furthermore, AI will only be useful if it is employed to enhance the ecological and just transition.

For this, it is necessary to regulate how AI can be used.13 This is what the European Union, for example, has done in a pioneering way, mainly around fundamental rights. With a laudable eagerness to provide both security and ethics, the EU passed a law in 2024 to ban social scoring, emotion recognition in the workplace and in schools, predictive policing, manipulation of human behaviour, exploitation of people’s vulnerabilities, and some biometric categorisation systems (although the legislation does authorise the use of biometric recognition for police purposes).14 In addition, for any application deemed high-risk,15 responsible agencies and companies will be required to assess and mitigate risks, keep records of use, be transparent, and ensure human oversight. If found in breach, these actors could face fines running to millions of euros. The EU has read Hobbes and Orwell, and is protecting itself against the modern Leviathan.

Now, although the European Union is often a pioneer in legislation, is it prepared to fight in the field of technological innovation with the US and China? In the current context where the US creates AI, China copies it, and the EU regulates it, Europe must urgently put in place all possible tools to move beyond its role as merely a global legislator and ensure its technological sovereignty. Between the Tech Bros captained by Elon Musk and the Chinese AI sector, Brussels has an opportunity to do things differently, combining public momentum and public-private cooperation with the protection of fundamental rights, less need for monetary and natural resources, and its own technology at the service of people and the planet.

To do so, the bloc must meet several clear criteria. First, in addition to maintaining high standards of citizens’ rights, upholding AI transparency is critical. This also means disclosing information about the training data, i.e. understanding how a given model thinks and the values behind it. An AI tool trained on Buddha’s thinking is not the same as one developed on Ku Klux Klan doctrines, nor will its responses be the same. The ideological and cultural bias of the initial training matters. In this context, critical education for young people from an early age and continuous training for workers are necessary for an informed, responsible and ethical use of technology in general and of AI in particular.

Second, given its disruptive potential, it is necessary to propose an open-source AI as a base model for its future development instead of a closed and opaque tool in the hands of an out-of-control Silicon Valley. Just as the World Wide Web saw the light of day thanks to public funding from many European countries and then entered the public domain in 1993, a universal, public, and open AI would be a clear commitment to a democratic, innovative and collaborative technology.16 In essence, it would champion a model wherein both the global community and Little Tech – and not only Big Tech – can contribute and benefit socially and economically from the advances in this area.

Thirdly, the ecological impact of AI has to be a central variable in the equation. As AI expert Sasha Luccioni reminds us, certain applications of AI make no sense in an ecologically sustainable and environmentally just world: for a simple calculation, do everyone the favour of picking up a pen or a calculator, not AI. Put another way, making your shopping list with AI or asking it about the weather or how to dress the next day is like taking a Ferrari to go shopping for bread. Owning a Ferrari is already questionable from a social and environmental justice point of view; using its power to travel the 500 metres from home to the bakery is even worse.

In this sense, and following the model proposed by the French Standardisation Association, the aim is to develop so-called “frugal AI”. We are talking about an austere and light AI that is only used when it can demonstrably bring real ecological and social benefit compared to other, less energy-intensive means, and only if its uses remain compatible with the limits of the planet. In fact, just as AI has been regulated to respect fundamental rights, it would be perfectly feasible to assess its applications in terms of their ecological footprint. Where that footprint is deemed “unacceptable”, AI should be replaced by a low-tech tool that provides a similar service at a much lower ecological price. Where the footprint is “high” but the service covers an important social need (health, education, etc.), AI should be managed with public criteria and fair sharing; and where the impacts are “limited”, such technology should be freely available. The ecological and just transition is not solved with Ferraris, but with bicycles and high-quality public transport.

In the face of challenges to fundamental rights and sustainability, AI is, like any past or present technology, ambiguous and double-edged. Like a two-faced Janus, it can support social and environmental justice or the opposite. So far, Big High Tech has dominated AI theory, narrative and practice. So, let’s work for Low Little Tech to turn the tables and, thus, put AI on the green and just side of History.

BRAVE NEW EUROPE is one of the very few Resistance Media in Europe. We publish expert analyses and reports by some of the leading thinkers from across the world who you will not find in state and corporate mainstream media. Support us in our work.

