
NW Pacific Heatwave Attribution – Multiple Climate Model Failure

The authors describe the models used thus:

Model simulations from the 6th Coupled Model Intercomparison Project (CMIP6; Eyring et al., 2016) are assessed. We combine the historical simulations (1850 to 2015) with the Shared Socioeconomic Pathway (SSP) projections (O’Neill et al., 2016) for the years 2016 to 2100. Here, we only use data from SSP5-8.5, although the pathways are very similar to each other over the period 2015–2021. Models are excluded if they do not provide the relevant variables, do not run from 1850 to 2100, or include duplicate time steps or missing time steps. All available ensemble members are used. A total of 18 models (88 ensemble members), which fulfill these criteria and passed the validation tests (Section 4), are used.

SSP5-8.5 means Shared Socioeconomic Pathway 5 combined with RCP8.5, i.e. an additional radiative forcing of 8.5 W/m2 by 2100. It is a very extreme worst-case emissions/atmospheric GHG concentration scenario, not at all realistic, but for the five years from 2016 to 2021 over which it is used in the models it makes little difference compared with other, more realistic scenarios. Where it does make a great deal of difference is in the assessment of how much more frequent such extreme heatwaves will become over the coming century, which the authors rely on to make the alarming claim that such events will happen every 5-10 years by 2100.

The authors used other models as well for simulating the historical period:

In addition to the CMIP6 simulations, the ensemble of extended historical simulations from the IPSL-CM6A-LR model is used (see Boucher et al., 2020 for a description of the model). It is composed of 32 members, following the CMIP6 protocol (Eyring et al., 2016) over the historical period (1850-2014) and extended until 2029 using all forcings from the SSP2-4.5 scenario, except for the ozone concentration which has been kept constant at its 2014 climatology (as it was not available at the time of performing the extensions). This ensemble is used to explore the influence of internal variability.

We also examine five ensemble members of the AMIP experiment (1871-2019) from the GFDL-AM2.5C360 (Yang et al. 2021, Chan et al. 2021), which consists of the atmosphere and land components of the FLOR model but with horizontal resolution doubled to 25 km for a potentially better representation of extreme events.

They describe the basic attribution procedure as follows:

As discussed in section 1.2, we analyse the annual maximum of daily maximum temperatures (TXx) averaged over 45°N-52°N, 119°W-123°W. Initially, we analyse reanalysis data and station data from sites with long records. Next, we analyse climate model output for the same metric. We follow the steps outlined in the WWA protocol for event attribution. The analysis steps include: (i) trend calculation from observations; (ii) model validation; (iii) multi-method multi-model attribution and (iv) synthesis of the attribution statement.

The first stage of the process above is known as ‘detection’, i.e. the detection of the event from observations. Observations are then compared to models to arrive at an attribution. Here is what the authors say about the detection:

The detection results, i.e., the comparison of the fit for 2021 and for a pre-industrial climate, show an increase in intensity of TXx of ΔT = 3.1 ºC (95% CI: 1.1 to 4.7 ºC) and a probability ratio PR of 350 (3.2 to ∞).
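For readers who want to see mechanically what a ‘probability ratio’ (PR) and a ‘change in intensity’ (ΔT) are, here is a toy sketch in Python using synthetic data and a plain GEV fit. It is emphatically not the WWA code (which fits a GMST-dependent GEV to observations and models); every number below is invented, and the sketch only illustrates the two quantities being reported.

# Toy illustration of "probability ratio" and "change in intensity"
# (NOT the WWA method; synthetic data, plain GEV fits).
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual maximum temperatures (TXx) for a "past" and a "present" climate.
txx_past = genextreme.rvs(c=0.1, loc=36.0, scale=1.5, size=70, random_state=0)
txx_now = txx_past + 1.2          # pretend the whole distribution shifted by 1.2 C

event = 39.5                      # the observed extreme we want to attribute

fit_past = genextreme.fit(txx_past)
fit_now = genextreme.fit(txx_now)

p_past = genextreme.sf(event, *fit_past)   # P(TXx >= event), past climate
p_now = genextreme.sf(event, *fit_now)     # P(TXx >= event), current climate

pr = p_now / p_past                        # probability ratio
# Change in intensity: how much warmer an event of fixed rarity has become,
# here the 1-in-50-year return level in each climate.
dT = genextreme.isf(1 / 50, *fit_now) - genextreme.isf(1 / 50, *fit_past)

print(f"PR ~ {pr:.1f}, delta-T ~ {dT:.2f} C")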

They then introduce the section on the multi-model attribution:

5 Multi-method multi-model attribution

This section shows probability ratios and change in intensity ΔT for models that pass the validation tests and also includes the values calculated from the fits to observations (Table 2). Results are given both for changes in current climate (1.2°C) compared to the past (pre-industrial conditions) and, when available, for a climate at +2˚C of global warming above pre-industrial climate compared with current climate. The results are visualized in Section 6.

Here are the results:

Note that the observed change in intensity of the heatwave in the study area is 3.1C, according to observations (ERA5). The best-estimate modelled changes in intensity lie anywhere between 0.22C and 2.6C, i.e. none of the models captures the observed change in intensity. The mean of the models’ best estimates is 1.77C, just 57% of the observed change. Thus the models don’t come close to simulating actual reality. But again, this does not deter the authors from going ahead with an attribution anyway. They call it a hazard synthesis. I call it a hazardous synthesis!
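For what it’s worth, the 57% figure is just the ratio of the multi-model mean best estimate to the observed best estimate:

$$\frac{1.77\,^{\circ}\mathrm{C}}{3.1\,^{\circ}\mathrm{C}} \approx 0.57 = 57\%.$$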

6 Hazard synthesis


We calculate the probability ratio as well as the change in magnitude of the event in the observations and the models. We synthesise the models with the observations to give an overarching attribution statement (please see e.g. Kew et al. (2021) for details on the synthesis technique including how weighting is calculated for observations and for models).

Results for current vs past climate, i.e. for 1.2°C of global warming vs pre-industrial conditions (1850-1900), indicate an increase in intensity of about 2.0 ˚C (1.2 ˚C to 2.8 ˚C) and a PR of at least 150. Model results for additional future changes if global warming reaches 2°C indicate another increase in intensity of about 1.3 ˚C (0.8 ˚C to 1.7 ˚C) and a PR of at least 3, with a best estimate of 175. This means that an event like the current one, that is currently estimated to occur only once every 1000 years, would occur roughly every 5 to 10 years in that future world with 2°C of global warming.
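A quick sanity check on that last sentence, assuming the best-estimate future PR of 175 simply scales the stated 1-in-1000-year return period:

$$\frac{1000\ \text{years}}{175} \approx 5.7\ \text{years},$$

which is presumably where the ‘roughly every 5 to 10 years’ figure comes from once the uncertainty range on the PR is folded in.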

So there you are. A highly dubious statistical analysis combined with an observation/model synthesis using models which all fail to capture the observed intensity of the actual event, which mysteriously translates into the statement that the NW Pacific heatwave would be ‘virtually impossible without climate change’, and furthermore that we can expect such intense heatwaves every 5 to 10 years by the end of the century if we don’t urgently reduce emissions. What a farce and an insult to proper science, but it did its job: it generated alarming, but highly misleading, headlines around the world about the supposedly irrefutable connection between this extreme weather event and man-made climate change.

Snow Models, Ice Models and Climate Models Generate ‘Data’ According to Scientists

Arctic Sea-Ice

This is what Professor Johan Rockstrom posted on Twitter 2 days ago:

Here is Rockstrom’s profile. As you can see he’s an earth science bigwig on ‘global sustainability’ and ‘planetary boundaries’ and he’s also Director of the Potsdam Institute, so he’s definitely an ‘expert’ who we should take very seriously. When he says that the Arctic sea ice ‘tipping element’ is fast approaching a ‘tipping point’ of no return, we should put our fingers to our lips and tremble with trepidation whilst whispering ‘Oh my God’, over and over, in barely audible, abject, stupefied terror.

Here’s what that Graun article says:

Arctic sea ice thinning twice as fast as thought, study finds

Less ice means more global heating, a vicious cycle that also leaves the region open to new oil extraction

Sea ice across much of the Arctic is thinning twice as fast as previously thought, researchers have found.

Arctic ice is melting as the climate crisis drives up temperatures, resulting in a vicious circle in which more dark water is exposed to the sun’s heat, leading to even more heating of the planet.

OMG, ‘climate crisis, vicious circle, even more heating’. We’re all going to DIE!

So what’s the evidence, where’s the data for this imminent irreversible planetary catastrophe? Well, it’s models, innit:

Calculating the thickness of sea ice from satellite radar data is difficult because the amount of snow cover on top varies significantly. Until now, the snow data used came from measurements by Soviet expeditions on ice floes between 1954 and 1991. But the climate crisis has drastically changed the Arctic, meaning this information is out of date.

The new research used novel computer models to produce detailed snow cover estimates from 2002 to 2018. The models tracked temperature, snowfall and ice floe movement to assess the accumulation of snow. Using this data to calculate sea ice thickness showed it is thinning twice as fast as previously estimated in the seas around the central Arctic, which make up the bulk of the polar region.
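For context (my gloss, not the article’s): the reason the snow estimate matters so much is the hydrostatic conversion from the altimeter’s freeboard measurement to ice thickness. A standard textbook form of the relation is

$$h_i = \frac{\rho_w\,h_{fb} + \rho_s\,h_s}{\rho_w - \rho_i},$$

where $h_{fb}$ is the ice freeboard, $h_s$ the snow depth, and $\rho_w$, $\rho_i$, $\rho_s$ the densities of sea water, sea ice and snow. The assumed snow depth (and density) feeds directly into the retrieved thickness, which is why swapping the Soviet-era snow climatology for a model-generated one changes the thickness trends.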

Robbie Mallett of University College London (it’s gone right downhill since I left, I can tell you), who led the study, says:

The Soviet-era data was hard won, Mallett said. “They sent these brave guys out and they sat on these drifting stations and floated around the Arctic, sometimes for years at a time, measuring the snow depth.” But the Intergovernmental Panel on Climate Change identified the lack of more recent data as a key knowledge gap in 2019.

Yep, those hardy Russians actually went out and collected real data from the real world. They got off their arses and endured arduous conditions for long periods in order to physically measure sea ice thickness. This is what used to exclusively be called ‘data’. But now ‘data’ can be obtained by sitting on your lazy backside in a nice warm room in front of a computer screen, using ‘models’. Weather models, climate models, snow models, ice models, you name it, they’ve got models for everything these days and they generate ‘data’. You can probably even download them as an app on your iPhone, so you can now do what those brave, intrepid Russians did even whilst sipping your soy latte in some cafe in Islington. It’s great. Way back in 2019, even the IPCC admitted that there was a lack of real data on sea ice thickness. Now, 2 years into the post normal, post empiricist, post colonial, post Enlightenment, computer generated era of ‘Science’ (which governments religiously ‘follow’ to produce allegedly ‘evidence-based policy’ on stuff as diverse as public health in a pandemic, bad weather and sea level rise), we have new data which ‘evidences’ an imminent tipping point in Arctic sea-ice decline due to the fast approaching anthropogenic fossil fuel carbon-based Thermageddon.

Here are a few quotes from the actual UCL paper:

To investigate the impact of variability and trends in snow cover on regional sea ice thickness we use the results of SnowModel-LG (Liston et al., 2020a; Stroeve et al., 2020). SnowModel-LG is a Lagrangian model for snow accumulation over sea ice; the model is capable of assimilating meteorological data from different atmospheric reanalyses (see below) and combines them with sea ice motion vectors to generate pan-Arctic snow-depth and density distributions.

SnowModel-LG exhibits more significant interannual variability than mW99 in its output because it reflects year-to-year variations in weather and sea ice dynamics.

SnowModel-LG creates a snow distribution based on reanalysis data, and the accuracy of these snow data is unlikely to exceed the accuracy of the input. There is significant spread in the representation of the actual distribution of relevant meteorological parameters by atmospheric reanalyses (Boisvert et al., 2018; Barrett et al., 2020). The results of SnowModel-LG therefore depend on the reanalysis data set used.

So basically, their new model, which relies upon meteorological reanalysis data (more models), shows that interannual variability in weather conditions in the Arctic is much greater than previously thought, and this, curiously, also results in the regional trends in sea-ice thickness decline being larger than previously estimated in some areas.
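To make the mechanism concrete, here is a toy sketch in Python of the Lagrangian idea: follow an ice parcel along prescribed drift vectors and accumulate reanalysis snowfall along its track. This is my own illustration, not SnowModel-LG, and every input below is invented.

# Toy Lagrangian snow-on-sea-ice accumulation (illustration only, NOT SnowModel-LG).
import numpy as np

def snow_depth_along_track(drift_uv, snowfall, melt, rho_snow=300.0):
    # drift_uv : (n, 2) daily ice-drift vectors [km/day]       (hypothetical input)
    # snowfall : (n,) daily snowfall water equivalent [kg/m^2] along the track
    # melt     : (n,) daily loss of snow water equivalent [kg/m^2]
    track = np.cumsum(drift_uv, axis=0)                   # parcel position [km]
    swe = np.clip(np.cumsum(snowfall - melt), 0, None)    # snow water equivalent [kg/m^2]
    depth = swe / rho_snow                                # snow depth [m], fixed density assumed
    return track, depth

# Invented 30-day forcing for a single parcel during freeze-up (no melt).
rng = np.random.default_rng(1)
drift = rng.normal(0.0, 5.0, size=(30, 2))   # km/day
snow = rng.exponential(1.0, size=30)         # kg/m^2 per day
melt = np.zeros(30)
track, h_s = snow_depth_along_track(drift, snow, melt)
print(f"snow depth after 30 days: {h_s[-1]:.2f} m")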

4.3 New and faster thickness declines in the marginal seas

As well as exhibiting higher interannual variability than mW99, SnowModel-LG values decline over time in most regions due to decreasing SWE values year over year. Here we examine the aggregate contribution of a more variable but declining time series in determining the magnitude and significance of trends in SIT.

We first assess regions where SIT was already in statistically significant decline when calculated with mW99. This is the case for all months in the Laptev and Kara seas and 4 of 7 months in the Chukchi and Barents seas. The rate of decline in these regions grew significantly when calculated with SnowModel-LG data (Fig. 10; green panels). Relative to the decline rate calculated with mW99, this represents average increases of 62 % in the Laptev Sea, 81 % in the Kara Sea and 102 % in the Barents Sea. The largest increase in an already statistically significant decline was in the Chukchi Sea in April, where the decline rate increased by a factor of 2.1. When analysed as an aggregated area and with mW99, the total marginal seas area exhibits a statistically significant negative trend in November, December, January and April. The East Siberian Sea is the only region to have a month of decline when calculated with mW99 but not with SnowModel-LG.

We also analyse these regional declines as a percentage of the regional mean sea ice thickness in the observational period (2002–2018; Fig. 11). We observe the average growth-season thinning to increase from 21 % per decade to 42 % per decade in the Barents Sea, 39 % to 56 % per decade in the Kara Sea, and 24 % to 40 % per decade in the Laptev Sea when using SnowModel-LG instead of mW99. Five of the 7 growth-season months in the Chukchi Sea exhibit a decline with SnowModel-LG of (on average) 44 % per decade. This is much more than that of the 4 significant months observable with mW99 (25 % per decade). We find the marginal seas (when considered as a contiguous, aggregated group) to be losing 30 % of its mean thickness per decade in the 6 statistically significant months when SIT is calculated using SnowModel-LG (as opposed to mW99).
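For readers wondering how a ‘% per decade’ thinning rate is typically arrived at, here is a generic sketch (my own, not the paper’s code): fit a linear trend to the thickness series for a region and month, then normalise it by the mean thickness over the observational period. Significance testing is omitted for brevity.

# Generic "% per decade" trend calculation (the thickness series is invented).
import numpy as np

years = np.arange(2002, 2019)                 # 2002-2018, as in the study
rng = np.random.default_rng(2)
sit = 1.8 - 0.07 * (years - 2002) + rng.normal(0.0, 0.1, years.size)
# sit: hypothetical April sea-ice thickness [m] for one marginal sea

slope = np.polyfit(years, sit, 1)[0]                 # metres per year
pct_per_decade = 100.0 * slope * 10.0 / sit.mean()   # normalised by the period mean
print(f"trend: {pct_per_decade:.0f} % per decade")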

So it is the marginal seas, more than the central Arctic region, which, according to this study, are declining in sea-ice thickness even faster than previously estimated. So let’s take a look at the map of sea-ice thickness for this year, May 2021, and compare it with ten years ago, May 2011.

Can you spot the significant decline in sea-ice thickness? Here is what marine biologist Susan Crockford says about this year’s sea-ice thickness:

Surprising sea ice thickness across the Arctic is good news for polar bears

This year near the end of May the distribution of thickest sea ice (3.5-5m/11.5-16.4 ft – or more) is a bit surprising, given that the WMO has suggested we may be only five years away from a “dangerous tipping point” in global temperatures. There is the usual and expected band of thick ice in the Arctic Ocean across northern Greenland and Canada’s most northern islands but there are also some patches in the peripheral seas (especially north of Svalbard, southeast Greenland, Foxe Basin, Hudson Strait, Chukchi Sea, Laptev Sea). This is plenty of sea ice for polar bear hunting at this time of year (mating season is pretty much over) and that thick ice will provide summer habitat for bears that choose to stay on the ice during the low-ice season: not even close to an emergency for polar bears.

Thick ice along the coasts of the Chukchi and Laptev Seas in Russia seems to be reasonably common, see closeup of the 2021 chart below:

Note that the Chukchi Sea and Laptev Sea both have thick ice this year. These two were singled out by the study above as showing the fastest declines in sea-ice thickness; indeed the Chukchi provides the Graun headline ‘Arctic ice thinning twice as fast as thought’. Perhaps it is just interannual variability and these regions will show a marked decline next year, placing polar bears once again at risk of extinction. Alarmists can but hope.

Matt Ridley in the Telegraph

In the Telegraph, Matt also takes aim at the epidemiological and climate modellers, who are so fond of their worst-case scenarios. He says:

The Government’s reliance on Sage experts’ computer modelling to predict what would happen with or without various interventions has proved about as useful as the ancient Roman habit of consulting trained experts in “haruspicy” – interpreting the entrails of chickens.

Again and again, worst-case scenarios are presented with absurd precision, sometimes deliberately to frighten us into compliance. The notorious press conference last October that told us 4,000 people a day might die was based on a model that was already well out of date.

Pessimism bias in modelling has two roots. The first is that worst-case scenarios are more likely to catch the attention of ministers and broadcasters: academics are as competitive as anybody in seeking such attention. The second is that modellers have little to lose by being pessimistic, but being too optimistic can ruin their reputations. Ask Michael Fish, the weather forecaster who in 1987 reassured viewers that hurricanes hardly ever happen.

Then he identifies the tendency I have criticised here, namely the false assumption that the output of models can be treated as ‘data’:

As Steve Baker MP has been arguing for months, the modellers must face formal challenge. It is not just in the case of Covid that haruspicy is determining policy. There is a growing tendency to speak about the outcomes of models in language that implies they generate evidence, rather than forecasts. This is especially a problem in the field of climate science. As the novelist Michael Crichton put it in 2003: “No longer are models judged by how well they reproduce data from the real world: increasingly, models provide the data. As if they were themselves a reality.”

Examine the forecasts underpinning government agencies’ plans for climate change and you will find they often rely on a notorious model called RCP8.5, which was always intended as extreme and unrealistic. Among a stack of bonkers assumptions, it projects that the world will get half its energy from coal in 2100, burning 10 times as much as today, even using it to make fuel for aircraft and vehicles. In this and every other respect, RCP8.5 is already badly wrong, but it has infected policy-makers like a virus, a fact you generally have to dig out of the footnotes of government documents.

I was pointing out the parallels between climate and Covid modelling in April last year:

They got it wrong the second time because they relied upon an epidemiological model (adapted from an old ‘flu model) which predicted 510,000 deaths from a virus which we knew virtually nothing about.

Climate change modellers never get it wrong, simply because even when their models don’t agree with reality, this is either because the observations are wrong, or because they still ‘do a reasonable job’ of modelling past and present climate change (especially when inconvenient ‘blips’ are ironed out by retrospective adjustments to the data), but principally because the subject of their claimed modelling expertise lies many years off in the future – climate change to be expected in 2050 or 2100, when the real impacts will begin to be felt. Imperial’s and IHME’s worst case scenarios look way off, just weeks after they were proposed and after governments acted on the modellers’ advice. Their assumptions are being rapidly challenged by new data and research. Nothing similar happens in climate change land. Their worst case scenario (RCP8.5), though comprehensively debunked, still lives on and is still being defended by Met Office scientists on the basis that ‘carbon feedbacks (however unlikely) cannot be ruled out’.

Ice models and climate models combined are data points

At least, they are according to Dr Tamsin Edwards of King’s College London, writing in the Graun:

Sea levels are going to rise, no matter what. This is certain. But new research I helped produce shows how much we could limit the damage: sea level rise from the melting of ice could be halved this century if we meet the Paris agreement target of keeping global warming to 1.5C.

The aim of our research was to provide a coherent picture of the future of the world’s land ice using hundreds of simulations. 

Connecting parts of the world: the world’s land ice is made up of global glaciers in 19 regions, and the Greenland and Antarctic ice sheets at each pole. Our methods allow us to use exactly the same predictions of global warming for each. This may sound obvious, but is actually unusual, perhaps unique at this scale. Each part of the world is simulated separately, by different groups of people, using different climate models to provide the warming levels. We realigned all these predictions to make them consistent.

Connecting the data: at its heart, this study is a join-the-dots picture. Our 38 groups of modellers created nearly 900 simulations of glaciers and ice sheets. Each one is a data point about its contribution to future sea level rise. Here, we connected the points with lines, using a statistical method called “emulation”. Imagine clusters of stars in the sky: drawing the constellations allow us to visualise the full picture more easily – not just a few points of light, but each detail of Orion’s torso, limbs, belt and bow.
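‘Emulation’ here usually means fitting a fast statistical surrogate, often a Gaussian process, to a modest number of expensive simulator runs so that you can interpolate between them. A toy example in Python (my own illustration with invented numbers, not the paper’s actual emulator):

# Toy "emulator": a Gaussian process fitted to a few invented simulator runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical simulator output: warming level (C) -> land-ice contribution to
# sea level rise by 2100 (cm). All numbers are made up for illustration.
warming = np.array([[1.5], [2.0], [2.5], [3.0], [4.0]])
slr_cm = np.array([13.0, 17.0, 20.0, 25.0, 32.0])

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(warming, slr_cm)

grid = np.linspace(1.5, 4.0, 26).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)   # the "joined-up" curve, with uncertainty
print(f"emulated SLR at 1.8 C: {gp.predict([[1.8]])[0]:.1f} cm")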

Not only are model outputs ‘data’; they are also stars in the firmament! Tamsin and the other eighty-four authors of this study are also very fond of focusing on worst case scenarios:

So, for those most at risk, we made a second set of predictions in a pessimistic storyline where Antarctica is particularly sensitive to climate change. We found the losses from the ice sheet could be five times larger than the main predictions, which would imply a 5% chance of the land ice contribution to sea level exceeding 56cm in 2100 – even if we limit warming to 1.5C. Such a storyline would mean far more severe increases in flooding.

How did they generate this particular set of ‘data points’? This is explained in the actual paper:

Given the wide range and cancellations of responses across models and parameters, we present alternative ‘pessimistic but physically plausible’ Antarctica projections for risk-averse stakeholders, by combining a set of assumptions that lead to high sea level contributions. These are: the four ice sheet models most sensitive to basal melting; the four climate models that lead to highest Antarctic sea level contributions, and the one used to drive most of the ice shelf collapse simulations; the high basal melt (Pine Island Glacier) distribution; and with ice shelf collapse ‘on’ (i.e. combining robustness tests 6 and 7 and sensitivity tests 6 and 10). This storyline would come about if the high basal melt sensitivities currently observed at Pine Island Glacier soon become widespread around the continent; the ice sheet responds to these with extensive retreat and rapid ice flow; and atmospheric warming is sufficient to disintegrate ice shelves, but does not substantially increase snowfall. The risk-averse projections are more than five times the main estimates: median 21 cm (95th percentile range 7 to 43 cm) under the NDCs (Fig. 3j), and essentially the same under SSP5-85 (Table 1; regions shown in Extended Data Figure 4: test 11), with the 95th percentiles emerging above the main projections after 2040 (Fig. 3d). This is very similar to projections under an extreme scenario of widespread ice shelf collapses for RCP8.5 (median 21 cm; 95th percentile range 9 to 39 cm).

I’m sorry Tamsin, but model output is not data, and your worst case scenario of glacier melt and resultant sea level rise is not physically or socio-economically ‘plausible’. Climate scientists and epidemiological modellers do not live in the same world as the rest of us, but they insist that we make plans and real sacrifices to prepare for the nightmarish world which they do inhabit, if only on a part-time basis.

CMIP6: In a Sea of Junk Models, The Met Office’s UKESM1.0 Model Stands Out as Even More Junk

There’s a post published at Watts Up With That which provides a sneak preview of some CMIP6 model runs ahead of the upcoming release of the IPCC’s AR6 (Part 1: Physical Science Basis, due in April 2021). As the author, Andy May, says:

The new IPCC report, abbreviated “AR6,” is due to come out between April 2021 (the Physical Science Basis) and June of 2022 (the Synthesis Report). I’ve purchased some very strong hip waders to prepare for the events. For those who don’t already know, sturdy hip waders are required when wading into sewage. 

Andy has looked at some of the CMIP6 climate model runs posted on the KNMI Climate Explorer and this is what he found:

The base period is 1981-2010 and the emissions pathway is ssp245, which is similar to the old RCP4.5 concentration pathway. Most, as you can see, project global warming in 2100 to be somewhere between just over 1.0C and 2.5C, which is in itself quite a spread. But then you look at UKESM1.0 (light blue) and CanESM5 (yellow – partly obscured) and they are projecting warming anywhere between about 2.5C and 3.8C. They stand out like sore thumbs in 2100, as does the UKESM1.0 hindcast of the 1960s using historical forcings. As you can see, UKESM1.0 puts the mid-20th-century cooling period at about -1.5C relative to the 1981-2010 baseline! That is huge and is not borne out by actual observations. I went into the reasons for this discrepancy here.

To get a clearer picture of how UKESM1.0 deviates from actual measurements, here are the graphs of HadCRUT4 against the model runs:

Quite obviously, UKESM1.0 vastly overstates mid 20th century cooling in the northern hemisphere. Why? Because it greatly overestimates the impact of anthropogenic aerosol cooling. Here is what the Met Office say about UKESM1.0 and the physical general circulation model on which it is based:

The Earth System Model UKESM1, and the physical model (or General Circulation Model) it is based on, HadGEM3-GC3.1 are the result of years of work by scientists and software engineers from the Met Office and wider UK science community.

Analysis shows the climate sensitivity of the models is high. For both models the Transient Climate Response (TCR) is about 2.7 °C, while the Equilibrium Climate Sensitivity (ECS) is about 5.4°C for UKESM1 and about 5.5°C for GC3.1. Future projections using the new models are in progress. When these have been analysed, we will have a better understanding of how the climate sensitivity affects future warming and associated impacts.

Very high sensitivity means that the historical aerosol forcing must be correspondingly large (i.e. strongly negative) in order for the model to align with current (presumed highly accurate) global mean surface temperature data. But the aerosol forcing is so strong that it ends up unrealistically cooling the 1960s. As I pointed out:

UKESM1 massively overstates mid 20th century cooling but it has to if it is to get the rest of the historical record more or less correct with such a ridiculously high sensitivity built in. Note that it is indeed overestimated aerosol cooling which is responsible for this 20th century mismatch because it is much more pronounced in the Northern Hemisphere where most of the heavy industry was and still is.
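The arithmetic behind that claim is just zero-dimensional energy balance: transient warming scales roughly as TCR/F2xCO2 times the net forcing, so with a TCR of 2.7C you need a small net forcing, and hence a strongly negative aerosol term, to end up with only about a degree of observed warming. A back-of-envelope sketch (the GHG forcing and observed warming figures are round numbers I have assumed, not values taken from the Met Office documents):

# Back-of-envelope: why a high-sensitivity model needs strong aerosol cooling
# to match the observed warming (rough, assumed numbers).
F_2XCO2 = 3.7           # W/m^2, canonical forcing for a doubling of CO2
TCR = 2.7               # C, the UKESM1/GC3.1 transient climate response quoted above
F_GHG = 3.0             # W/m^2, rough present-day well-mixed GHG forcing (assumed)
OBSERVED_WARMING = 1.1  # C, rough warming since pre-industrial (assumed)

net_forcing_needed = OBSERVED_WARMING * F_2XCO2 / TCR     # W/m^2
implied_aerosol_forcing = net_forcing_needed - F_GHG      # W/m^2
print(f"net forcing needed: {net_forcing_needed:.1f} W/m^2")
print(f"implied aerosol forcing: {implied_aerosol_forcing:.1f} W/m^2")
# -> roughly -1.5 W/m^2 of aerosol cooling; the higher the sensitivity,
#    the more negative this term has to be.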

The Met Office confirms that large anthropogenic aerosol forcings were incorporated into the development of UKESM1.0:

UKESM1 is developed on top of the coupled physical model, HadGEM3-GC3 (hereafter GC3). GC3 consists of the Unified Model (UM) atmosphere, JULES land surface scheme, NEMO ocean model and the CICE sea ice model. The UM atmosphere in GC3 is Global Atmosphere version 7 (GA7). Inclusion in GA7 of both a new cloud microphysics parameterization and the new GLOMAP aerosol scheme led to a concern the model might exhibit a strong negative historical aerosol radiative forcing (i.e. a strong aerosol-induced cooling due to increasing anthropogenic emission of aerosol and aerosol precursors over the past ~150 years) with potentially detrimental impacts on the overall historical simulation of both GC3 and UKESM1.

A protocol was therefore developed to assess the Effective Radiative Forcing (ERF) of the main climate forcing agents over the historical period (~1850 to 2000), namely: well mixed greenhouse gases (GHGs), aerosols and aerosol precursors, tropospheric ozone and land use change. This protocol follows that of the CMIP6 RFMIP project (Andrews 2014, Pincus et al. 2016). The aim was to assess the change in the mean top-of-atmosphere (TOA) ERF between average pre-industrial (~1850 in our experiments) and present-day (~2000) conditions. In particular to assess the aerosol ERF, with a requirement that the total (all forcing agents) historical ERF be positive. Initial tests revealed an aerosol ERF of -2.2 Wm-2, significantly stronger than the -1.4 Wm-2 simulated by HadGEM2-A (Andrews 2014) and also outside the IPCC AR5 5-95% range of -1.9 to -0.1 Wm-2. As a result of the large (negative) aerosol ERF, the total ERF diagnosed over the historical period was approximately 0 Wm-2.

They were so large initially that they had to find a method of actually reducing them:

We therefore investigated aspects of GA7 that could be causing this strong aerosol forcing and, where possible, introduced new processes and/or improved existing process descriptions to address these. The goal of this effort was to develop an atmosphere model configuration solidly based on GA7.0 that: 1. Had a less negative aerosol ERF and thereby a total historical ERF of > +0.5 Wm-2.

The above is bad enough news for the historical authenticity of UKESM1.0, and hence its reliability for future projections, but it gets worse. A recently published paper argues that anthropogenic aerosols cool the climate even less than originally thought, meaning that UKESM1.0 is even more out of sync with reality than described above:

“Our conclusion is that the cooling effect of aerosols on clouds is overestimated when we rely on ship-track data,” says Glassmeier. “Ship tracks are simply too short-lived to provide the correct estimate of cloud brightening.” The reason for this is that ship-track data don’t account for the reduced cloud thickness that occurs in widespread pollution. “To properly quantify these effects and get better climate projections, we need to improve the way clouds are represented in climate models,” Glassmeier explains further.

Oh dear, it’s not looking good for the Met Office’s ‘flagship’ CMIP6 climate model. Maybe they need to raise the white flag of surrender. It’s not much better for the Canadian model either, or in fact for any of the 13-model CMIP6 ensemble, according to Andy May.

Historical forcings are used prior to 2014 and projected values after. The blue and orange curves are from two runs from a single Canadian model. The two runs are over 0.2°C different in 2010 and 2011, some months they are over 0.5°C different. There are multiple periods where the model runs are clearly out-of-phase for several years, examples are 2001-2003 and 2014 to 2017. The period from 2015 to 2019 is a mess.

I’m unimpressed with the CMIP6 models. The total warming since 1900 is less than one degree, but the spread of model results in Figure 1 is never less than one degree. It is often more than that, especially in the 1960s. The models are obviously not reproducing the natural climate cycles or oscillations, like the AMO, PDO and ENSO. As can be seen in Figure 2 they often are completely out-of-phase for years, even when they are just two runs from the same model. I used the Canadian model as an example, but the two NCAR model runs (CESM2) are no better. In fact, in the 2010-2011 period and the 2015-2019 period they are worse as you can see in Figure 4.