Why AI won’t save us from climate disaster
Wolfgang Knorr is a climate scientist, consultant for the European Space Agency and guest researcher at the Department of Geography and Ecosystem Science, Lund University
Cross-posted from Wolfgang’s Substack Climate Uncensored
My attempt at having AI generate a new blog logo on AI Logo Maker, after setting the option ‘LogoCraft’ to ‘moderate’ (as opposed to ‘minimalistic’ or ‘complex’). The fact that AI doesn’t understand or reason can be seen in how it has incorporated the word ‘moderate’ into the logo.
Recently I noticed a persistent pattern of particularly subpar weather forecasts in the area where we live. Across all weather apps and sites, either cloudiness was under-predicted, rainfall was over-predicted, or freezing or near-freezing temperatures always turned out to be well above zero Celsius. And that was not three, not two days in advance, but on the same day, just hours into the forecasting future. We live in a mountainous area close to the sea, which is particularly difficult for weather forecasting, but I had never noticed anything like this before.
My first thought was that this might have to do with a shifting physical state of the climate. After all, January had been a record-setting warm month, and the Mediterranean has been warming rapidly. When this record-warm state clashes with cold air from Eastern Europe, it can create physical states never seen before.
To understand how this could in principle derail weather forecasting and climate prediction, we need to take a quick look at the three basic components of weather prediction systems. The first uses basic equations from physics, for example how air flows, how much moisture it can store at what temperature, and how fast warm pockets of air rise when surrounded by a cooler atmosphere. The second component also consists of physical models, but simplified or ‘semi-empirical’ ones, for example of how clouds form and how they generate rain or snow. These simplified models need to be tuned in such a way that they reproduce observed patterns. And finally, there are statistical models which have nothing to do with the physical world, but are simply convenient mathematical equations with sometimes large numbers of ‘knobs’. Since these latter models know nothing about reality, for them, tuning is everything. All three together produce a weather forecast.
If I want to, I can easily add a statistical model of my own. For example, if I notice that whenever the wind blows from the north-east, the minimum temperature is underestimated by about two degrees, I simply build a model that says Tmin = Tmin(forecast) + X, where X = 2. This is a statistical model with one free ‘knob’, or free parameter, X. Real weather prediction, of course, has many more of those knobs, but the principle is the same. We can even have forecasts that use only statistics. These ‘analog forecasts’ compare the current weather situation with past events and simply assume that the weather will evolve the same way. At least since Google DeepMind set its ‘mind’ to the problem, this technique has come to be known as ‘AI weather forecasting’.
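To make this concrete, here is a toy version of that one-knob model in Python. The numbers are invented for illustration, and the ‘tuning’ amounts to nothing more than averaging past forecast errors:

```python
import numpy as np

# Hypothetical past errors on north-easterly days: observed vs forecast minimum
# temperature in degrees Celsius (numbers invented for illustration).
tmin_forecast = np.array([-1.0, 0.5, -3.2, 2.1, -0.8])
tmin_observed = np.array([1.1, 2.4, -1.0, 4.0, 1.3])

# 'Tuning the knob': the single free parameter X is just the average bias.
X = np.mean(tmin_observed - tmin_forecast)

def corrected_tmin(forecast_value):
    """The one-knob statistical model: Tmin = Tmin(forecast) + X."""
    return forecast_value + X

print(f"fitted knob X = {X:.1f} degrees")
print(f"corrected forecast for a -2.0 C prediction: {corrected_tmin(-2.0):.1f} C")
```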
The problem with statistical methods, of which AI and ‘machine learning’ are just examples, is that no prior understanding is built into the model itself – everything comes from tuning the ‘knobs’. For example, when the weather people had tuned their model to give them the best predictions, logically they would have set my value of X to zero, assuming they had the best forecast, based on past observations. Now I come along, have a fresh look, and find, based on my own recent experience, that X should be 2. In the future, we might find other values. And so on. Unlike the physics part of numerical weather prediction, the AI or statistics part has no understanding of reality, and the ‘knobs’ have been turned only to reproduce events that have happened in the past. Whether we are dealing with one or billions of knobs does not matter.
So when the fundamental physical state of the atmosphere changes, the ‘tuned’ parts of a climate model – the tuned empirical models or the statistics – may fail. They don’t have to, as long as the atmosphere reproduces patterns similar to those seen before, only at different locations or more or less frequently. But the more we move out into climate terra incognita, the less likely that is. And this is the problem with AI in climate prediction: we can use AI to build statistical models that can tell us the probability of flooding given a certain atmospheric situation now or in the recent past. But when we go from weather to climate forecasting, predicting how likely floods, droughts or heat waves will be under global warming, we have absolutely no way of finding out if these AI-driven models will still work [1].
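A crude sketch of what that means, again with invented numbers: tune a purely statistical model on data from one climate regime, then ask it about a warmer regime it has never seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up 'past climate': temperatures cluster around 10 C, and some impact
# variable follows a curved relationship we pretend not to know.
temp_past = rng.normal(10.0, 3.0, 500)

def true_impact(t):
    return 0.05 * t**2

impact_past = true_impact(temp_past) + rng.normal(0.0, 0.5, 500)

# 'Tuning the knobs': a purely statistical straight-line fit to the past data.
slope, intercept = np.polyfit(temp_past, impact_past, deg=1)

# Inside the range the model was tuned on, it looks fine ...
print("error at 10 C:", abs(slope * 10 + intercept - true_impact(10)))

# ... but in a warmer regime it has never seen, it quietly breaks down.
print("error at 25 C:", abs(slope * 25 + intercept - true_impact(25)))
```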
Of course, very few people care about how climate models work, let alone whether they are powered by AI. But there is currently a pattern of cheerful and often fake optimism surrounding anything AI-related in the mainstream and corporate [2] media. And this is the part where things do get dangerous: not from false predictions, but from a general sense that technological progress is going to save us. For the problem with climate models is not that they are ‘wrong’ (they are), but that they are being used naively, in particular by politicians. So when the IPCC publishes model results that show how we can use lots of fossil fuels by 2050 and still stay within the 1.5-degree temperature goal of the Paris Agreement, policy makers and the public just hear ‘models’ and ‘all good’. That those models are not climate but economic models usually gets lost. And when this big promise called ‘AI’ is also thrown into the ring, there is a story that very obviously stokes a very real craving for hopeful messages, given how much the mainstream media like techno-optimism. But unfortunately, in the current media environment, stoking either fears or false hopes is what counts. The victims are realism and a thoroughly disinformed public.
[1] There are two more big problems with AI: 1) the statistical models used are black boxes, and even in simplified cases it is extremely difficult to find out what happens inside them; 2) there is a tendency to ‘over-fit’ the data and look for causes where there can’t be any. As an illustration, suppose you throw a die a few hundred times at different hours of the day and feed an AI with the generated data, i.e. with hours/minutes and the value thrown. The AI, because it has many ‘knobs’, will surely find a solution that fits your data perfectly and will predict your next throw based on previous throws and the time of day you throw the die.
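If you want to see this in action, here is a rough sketch; I use an off-the-shelf decision tree as the ‘many-knob’ model purely for illustration, not because it is what real forecasting systems use:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)

# A few hundred die throws, each labelled only with the (irrelevant) time of day.
time_of_day = rng.uniform(0, 24, 300).reshape(-1, 1)   # hours
throws = rng.integers(1, 7, 300).astype(float)          # purely random outcomes

# A model with many 'knobs' (an unrestricted decision tree) memorises the data ...
model = DecisionTreeRegressor().fit(time_of_day, throws)
print("fit on the training throws:", model.score(time_of_day, throws))   # close to 1.0

# ... but has no skill whatsoever on new throws.
new_times = rng.uniform(0, 24, 300).reshape(-1, 1)
new_throws = rng.integers(1, 7, 300).astype(float)
print("skill on new throws:", model.score(new_times, new_throws))        # around zero or below
```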
[2] There is an interesting blog post by the PR and consulting firm EMB Global called ‘The Role of AI in Predicting Climate Change Patterns’. The hype culminates in the sentence “AI is a powerful ally in the fight against climate change, offering hope for a more sustainable future.” Almost everything in this post is hyperbole, misleading or simply fantasy. Let’s take one statement as an example: “AI-driven climate models can simulate various scenarios and assess the potential impacts of climate change. These models provide valuable insights into rising sea levels, extreme weather events, and shifts in ecosystems.” First of all, for the reasons explained in the main text, AI-driven models can’t look into the future – all that AI can do is help interpret future simulations of traditional physics-based climate models. Second, in order to simulate impacts, AI would need huge amounts of impact data, still on the assumption that in the future the climate-to-impact link will be the same as in the past. As for ecosystem shifts, it would need detailed data on past shifts – but the amount of such data available is small. The same applies to sea level rise and extreme events – which are, by definition, rare. That means those data have long been exploited without the need for the massive data processing offered by ‘AI’. And third, insight is exactly what AI does not offer, given that machine learning algorithms are essentially black boxes.