Do Climate Models Predict Extreme Weather?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


The politics of climate change receives a lot of coverage. The science, not so much. That’s unfortunate, because it makes it very difficult to tell the facts apart from opinions about what to do in light of those facts. But that’s what you have me for: to tell the facts from the opinions.

What we’ll look at in this video is a peculiar shift in the climate change headlines that you may have noticed yourself. After preaching for two decades that weather isn’t climate, climate scientists now claim they can attribute extreme weather events to climate change. At the same time, they say that their climate models can’t predict the extreme events that actually happen. How does that make sense? That’s what we’ll talk about today.

In the last year a lot of extreme weather events made headlines, like the heat dome over northwestern North America in summer 2021. It lasted for about two weeks and reached a record temperature of almost 50 degrees Celsius in British Columbia. More than a thousand people died from the heat. Later in the summer, we had severe floods in central Europe. Water levels rose so quickly that entire parts of cities were swept away. More than 200 people died.


Let’s look at what the headlines said about this. The BBC wrote “Europe’s extreme rains made more likely by humans… Downpours in the region are 3-19% more intense because of human induced warming.” Or, according to Nature, “Climate change made North America’s deadly heatwave 150 times more likely”. Where do those numbers come from?

These probabilities come out of a research area called “event attribution”. The idea was first put forward by Myles Allen in a Nature commentary in 2003. Allen, who is a professor at the University of Oxford, was trying to figure out whether he might one day be able to sue the fossil fuel industry because his street was flooded, much to the delight of ducks, and he couldn’t find sufficiently many sandbags.

But extreme event attribution only attracted public attention in recent years. Indeed, last year two of the key scientists in this area, Friederike Otto and Geert Jan van Oldenborgh, made it onto the list of Time Magazine’s most influential people of 2021. Sadly, van Oldenborgh died two months ago.

The idea of event attribution is fairly simple. Climate models are computer simulations of the earth with many parameters that you can twiddle. One of those parameters is the carbon dioxide level in the atmosphere. Another one is methane, and then there’s aerosols and the amount of energy that we get from the sun, and so on. 

Now, if carbon dioxide levels increase then the global mean surface air temperature goes up. So far, so clear. But this doesn’t really tell you much because a global mean temperature isn’t something anyone ever experiences. We live locally from day to day and not globally averaged over a year.

For this reason, global average values work poorly in science communication. They just don’t mean anything to people. So the global average surface temperature has increased by one degree Celsius in 50 years? Who cares.

Enter extreme event attribution. It works like this. In your climate model, you leave the knob for greenhouse gases at pre-industrial levels. Then you ask how often a certain extreme weather event, like a heat wave or flood, would occur in some period of time. Then you do the same again with today’s level of greenhouse gases. And finally you compare the two cases.

So, say, with pre-industrial greenhouse gas levels you find one extreme flood per century, but with current greenhouse gas levels you find ten; then you can say these events have become ten times more likely. I think such studies have become popular in the media because the numbers are fairly easy to digest. But what do the numbers actually mean?
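This counting exercise can be mimicked with a trivial statistical stand-in for a climate model. The sketch below is purely illustrative, not a real attribution study: it draws yearly peak-temperature anomalies from a Gaussian, once with a pre-industrial mean and once with a warming-shifted mean, and counts how often a fixed “extreme” threshold is exceeded in each case. All numbers are made up.

```python
import random

random.seed(0)

def count_extremes(mean_anomaly, threshold, n_years, sd=1.0):
    """Count simulated years whose peak anomaly exceeds the threshold.
    A Gaussian toy model, standing in for a climate-model ensemble run."""
    return sum(1 for _ in range(n_years)
               if random.gauss(mean_anomaly, sd) > threshold)

n_years = 100_000      # a long simulated run
threshold = 3.0        # what counts as "extreme", in standard deviations

pre_industrial = count_extremes(0.0, threshold, n_years)  # CO2 knob at pre-industrial
present_day    = count_extremes(1.0, threshold, n_years)  # CO2 knob at today's level

risk_ratio = present_day / pre_industrial
print(f"pre-industrial: {pre_industrial} events, present day: {present_day} events")
print(f"such events have become roughly {risk_ratio:.0f} times more likely")
```

The ratio of the two counts is the “X times more likely” number that ends up in the headlines.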

Well, first of all there’s the issue of interpreting probabilities in general. An increase in the probability of a certain event in a changed climate doesn’t mean the event wouldn’t have happened without climate change. That’s because it’s always possible that a series of extreme events was just coincidence. Bad luck. But coincidence becomes less and less likely the more of those extreme events you see. Instead, you’re probably witnessing a trend.

That one can, strictly speaking, never rule out coincidence but only say it’s unlikely to be coincidence is the case everywhere in science; nothing new about this. But for this reason I personally don’t like the word “attribution”. It just seems too strong. Maybe speaking of shifting trends would be better. But this is just my personal opinion about the nomenclature. There is, however, another issue with extreme event attribution. It’s that the probability of an event depends on how you describe the event.

Think of the floods in central Europe as an example. If you take all the data which we have for the event, the probability that you see this exact event in any computer simulation is zero. To begin with, that’s because the models aren’t detailed enough. But also, the event is so specific, with some particular distribution of clouds and winds and precipitation and what have you, that you’d have to run your simulation forever to see it even once.

What climate scientists therefore do is describe the event as one in a more general class. Events in that class might have, say, more than some amount of rainfall during a few days in some region of Europe during some period of the year. The problem is that such a generalization is arbitrary. And the more specific you make the class for the event, the less likely it is that you’ll see it. So the numbers you get in the end strongly depend on how someone chose to generalize the event.
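How strongly the answer depends on that choice can be seen in a hedged toy calculation: model the quantity of interest (say, a few days of rainfall) as a Gaussian, let warming shift its mean by one standard deviation, and compute the “times more likely” factor for three different, equally arbitrary definitions of “extreme”. The specific values are illustrative only.

```python
from math import erf, sqrt

def tail_prob(threshold, mean, sd=1.0):
    """P(X > threshold) for a Gaussian — a stand-in for the model's
    distribution of, say, multi-day rainfall totals."""
    z = (threshold - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))

# The same one-standard-deviation warming shift, but three different
# (arbitrary) thresholds for what counts as "extreme":
for threshold in (2.0, 3.0, 4.0):
    ratio = tail_prob(threshold, mean=1.0) / tail_prob(threshold, mean=0.0)
    print(f"threshold {threshold}: about {ratio:.0f} times more likely")
```

The further out you push the definition, the larger the quoted factor becomes, and the less likely it is that the model produces the event at all. Same warming, very different headlines.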

Here is how this was explained in a recent review paper: “Once an extreme event has been selected for analysis, our next step is to define it quantitatively. The results of the attribution study can depend strongly on this definition, so the choice needs to be communicated clearly.”

But that the probabilities ultimately depend on arbitrary choices isn’t the biggest problem. You could say as long as those choices are clearly stated, that’s fine, even if it’s usually not mentioned in media reports, which is not so fine. The much bigger problem is that even if you make the event more general, you may still not see it in the climate models.

This is because most of the current climate models have problems correctly predicting the severity and frequency of extreme events. Weather situations like dry spells or heat domes tend to come out less long-lasting or less intense than they are in reality. Climate models are imperfect simulations of reality, particularly when it comes to extreme events.

Therefore, if you look for the observed events in the model, the probability may just be zero, both with current greenhouse gas levels and with pre-industrial levels. And dividing zero by zero doesn’t tell you anything about the increase of probability.

Here is another quote from that recent review on extreme event attribution: “climate models have not been designed to represent extremes well. Even when the extreme should be resolved numerically, the sub-grid scale parameterizations are often seen to fail under these extreme conditions. Trends can also differ widely between different climate models”. Here is how the climate scientist Michael Mann put this in an interview with CNN: “The models are underestimating the magnitude of the impact of climate change on extreme weather events.”

But they do estimate the impact, right? So how do they do it? Well, keep in mind that you have some flexibility in how you define your extreme event class. If you set the criteria for what counts as “extreme” low enough, eventually the events will begin to show up in some models. And then you just discard the other models.

This is actually what they do. In the review that I mentioned, they write: “The result of the model evaluation is a set of models that seem to represent the extreme event under study well enough. We discard the others and proceed with this set, assuming they all represent the real climate equally well.”

Ok. Well. But this raises the question: if you had to discard half of the models because they didn’t work at all, what reason do you have to think that the other half will give you an accurate estimate? In all likelihood, the increase in probability which you get this way will be an underestimate.

Here’s a sketch to see why.

Suppose you have a distribution of extreme events that looks like this red curve. The vertical axis is the probability of an event in a certain period of time, and the horizontal axis is some measure of how extreme the events are. The duration of a heat wave or amount of rainfall or whatever it is that you are interested in. The distributions of those events will be different for pre-industrial levels of greenhouse gases than for the present levels. The increase in greenhouse gas concentrations changes the distribution so that extreme events become more likely. For the extreme event attribution, you want to estimate how much more likely they become. So you want to compare the areas under the curve for extreme events.

However, the event that we actually observe is so extreme it doesn’t show up in the models at all. This means the model underestimates what’s called the “tail” of this probability distribution. The actual distribution may instead look more like the green curve. We don’t know exactly what it looks like, but we know from observations that the tail has to be fatter than that of the distribution we get from the models.

Now, what you can do to get the event attribution done is look at less extreme events, because these have a higher probability of occurring in the models. You count how many events occurred for both cases of greenhouse gas levels and compare the numbers. Okay. But, look, we already know that the models underestimate the tail of the probability distribution. Therefore, the increase in probability which we get this way is just a lower bound on the actual increase in probability.
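The same toy Gaussian picture shows why the number obtained this way is only a lower bound. For a simple shift of the distribution, the exceedance-probability ratio keeps growing the further you move into the tail. So the ratio at the moderate threshold a model can still resolve sits below the ratio at the actually observed extreme. Again, every value here is invented for illustration.

```python
from math import erf, sqrt

def tail_prob(x, mean, sd=1.0):
    """P(X > x) for a Gaussian — a stand-in for the model's event distribution."""
    return 0.5 * (1 - erf((x - mean) / (sd * sqrt(2))))

shift = 1.0  # warming-induced shift of the mean (toy value)

# Ratio at a moderate threshold, where the model still produces events:
resolved_ratio = tail_prob(2.5, shift) / tail_prob(2.5, 0.0)

# Ratio at the far more extreme, actually observed event:
observed_ratio = tail_prob(5.0, shift) / tail_prob(5.0, 0.0)

print(f"'at least' factor from resolvable events: ~{resolved_ratio:.0f}")
print(f"factor at the observed extreme:           ~{observed_ratio:.0f}")
```

In this toy picture, a study that can only resolve the moderate threshold would honestly report the smaller factor as a lower bound, while the factor at the observed extreme is an order of magnitude larger.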

Again, I am not saying anything new; this is usually clearly acknowledged in the publications on the topic. Take for example a 2020 study of the 2019/20 wildfires in Australia that were driven by a prolonged drought. The authors who did the study write:

“While all eight climate models that were investigated simulate increasing temperature trends, they all have some limitations for simulating heat extremes… the trend in these heat extremes is only 1 ºC, substantially lower than observed. We can therefore only conclude that anthropogenic climate change has made a hot week like the one in December 2019 more likely by at least a factor of two.”

At least a factor of two. But it could have been a factor of 200, or of 2 million, for all we know.

And that is what I think is the biggest problem. The way that extreme event attributions are presented in the media conveys a false sense of accuracy. The probabilities that they quote could be orders of magnitude too small. The current climate models just aren’t good enough to give accurate estimates. This matters a lot because nations will have to make investments to prepare for the disasters that they expect to come and figure out how important that is, compared to other investments. What we should do, that’s opinion, but getting the facts right is a good place to start.