Category Archives: Temperature History

Update On My Climate Model (Spoiler: It’s Doing a Lot Better than the Pros)

Cross posted from Coyoteblog

In this post, I want to discuss my just-for-fun model of global temperatures I developed 6 years ago.  But more importantly, I am going to come back to some lessons about natural climate drivers and historic temperature trends that should have great relevance to the upcoming IPCC report.

In 2007, for my first climate video, I created an admittedly simplistic model of global temperatures.  I did not try to model any details within the climate system.  Instead, I attempted to tease out a very few (it ended up being three) trends from the historic temperature data and simply projected them forward.  Each of these trends has a logic grounded in physical processes, but the values I used were pure regression rather than any bottom up calculation from physics.  Here they are:

  • A long term trend of 0.4C warming per century.  This can be thought of as a sort of base natural rate for the post-little ice age era.
  • An additional linear trend beginning in 1945 of 0.35C per century.  This represents the combined effects of CO2 (whose effects should largely appear after mid-century) and higher solar activity in the second half of the 20th century.  (Note that this is way, way below the mainstream IPCC estimates of the historic contribution of CO2, as it implies the maximum historic contribution is less than 0.2C.)
  • A cyclic trend that looks like a sine wave centered on zero (such that over time it adds nothing to the long term trend) with a period of about 63 years.  Think of this as representing the net effect of cyclical climate processes such as the PDO and AMO.
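For concreteness, the three components above can be sketched in a few lines of code.  The 0.4 and 0.35 C/century slopes and the ~63-year period come from the bullets; the sine amplitude and zero-crossing year are placeholders, since the post gives those only graphically:

```python
import math

def coyote_model(year, amp=0.2, zero_year=1990):
    """Sum of the three trends above, in degrees C (anomaly vs. 1900).

    The slopes and the 63-year period are from the post; `amp` and
    `zero_year` are illustrative placeholders, not stated values.
    """
    base = 0.004 * (year - 1900)             # long-term post-LIA trend
    extra = 0.0035 * max(0.0, year - 1945)   # CO2 + solar, post-1945
    cycle = amp * math.sin(2 * math.pi * (year - zero_year) / 63.0)
    return base + extra + cycle
```

With `amp=0` the underlying trend is 0.4C/century before 1945 and 0.75C/century after, matching the figures in the text.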

Put in graphical form, here are these three drivers (the left axis in both is degrees C, re-centered to match the centering of Hadley CRUT4 temperature anomalies).  The two linear trends:

click to enlarge

 

And the cyclic trend:

click to enlarge

These two charts are simply added and then can be compared to actual temperatures.  This is the way the comparison looked in 2007 when I first created this “model”:

click to enlarge

The historic match is no great feat.  The model was admittedly tuned to match history (the pros all tune their models too; unlike them, I admit it).  The linear trends as well as the sine wave period and amplitude were adjusted to make the fit work.

However, it is instructive to note that a simple model of a linear trend plus a sine wave matches history so well, particularly since it assumes such a small contribution from CO2, and since in prior IPCC reports the IPCC and most modelers simply refused to include cyclic functions like the AMO and PDO in their models.  You will note that the Coyote Climate Model was projecting a flattening, even a decrease, in temperatures when everyone else in the climate community was projecting that blue temperature line heading up and to the right.

So, how are we doing?  I never really meant the model to have predictive power.  I built it just to make some points about the potential role of cyclic functions in the historic temperature trend.  But based on updated Hadley CRUT4 data through July 2013, this is how we are doing:

click to enlarge

 

Not too shabby.  Anyway, I do not insist on the model, but I do want to come back to a few points about temperature modeling and cyclic climate processes in light of the new IPCC report coming soon.

The decisions of climate modelers do not always make sense or seem consistent.  The best framework I can find for explaining their choices is to hypothesize that every choice is driven by trying to make the forecast future temperature increase as large as possible.  In past IPCC reports, modelers refused to acknowledge any natural or cyclic effects on global temperatures, and actually made statements that a) variations in the sun’s output were too small to change temperatures in any measurable way and b) it was not necessary to include cyclic processes like the PDO and AMO in their climate models.

I do not know why these decisions were made, but they had the effect of maximizing the amount of past warming that could be attributed to CO2, thus maximizing potential climate sensitivity numbers and future warming forecasts.  The reason for this was that the IPCC based nearly the totality of its conclusions about past warming rates and CO2 on the period 1978-1998.  They may talk about “since 1950,” but you can see from the chart above that all of the warming since 1950 actually happened in that narrow 20-year window.  During that 20-year window, though, solar activity, the PDO and the AMO were all peaking or in their warm phases.  So if the IPCC were to acknowledge that any of those natural effects had any influence on temperatures, it would have to reduce the amount of warming attributed to CO2 between 1978 and 1998, and thus its large future warming forecasts would become even harder to justify.

Now, fast forward to today.  Global temperatures have been flat since about 1998, or for about 15 years.  This is difficult for the IPCC to explain, since almost none of the 60+ models in their ensembles predicted this kind of pause in warming.  In fact, temperature trends over the last 15 years have fallen below the 95% confidence level of nearly every climate model used by the IPCC.  So scientists must either change their models (eek!) or else explain why the models are still correct despite missing the last 15 years of flat temperatures.

The IPCC is likely to take the latter course.  Rumor has it that they will attribute the warming pause to… ocean cycles and the sun (the very things the IPCC said last time were irrelevant).  As you can see from my model above, this is entirely plausible.  My model has an underlying 0.75C per century trend after 1945, but even with this trend, the modeled temperatures hit a 30-year flat spot after the year 2000.   So it is entirely possible for an underlying trend to be temporarily masked by cyclical factors.
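As a sanity check on that claim, it is easy to compute how large the cyclic component must be to mask the underlying trend.  The 0.75C/century figure is from the post; the minimum amplitude derived below is my own arithmetic, not a number stated in the text:

```python
import math

# The post-1945 underlying trend is 0.75C/century = 0.0075 C/year.
trend_per_year = 0.0075
period = 63.0  # years, the period of the cyclic component

# The steepest downslope of A*sin(2*pi*t/period) is A*2*pi/period per
# year, so the smallest cycle amplitude that can fully cancel the
# underlying trend (producing a temporary "pause") is:
min_amplitude = trend_per_year * period / (2 * math.pi)
print(round(min_amplitude, 3))  # 0.075
```

In other words, a cycle with an amplitude under a tenth of a degree is enough, in this framework, to flatten a 0.75C/century trend for a stretch of years.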

BUT.  And this is a big but.  You can also see from my model that you can’t assume these factors caused the current “pause” in warming without also acknowledging that they contributed to the warming from 1978-1998, something the IPCC seems loath to do.  I do not know how the IPCC is going to deal with this.  I hate to think the worst of people, but I do not think it is beyond them to say that these factors offset greenhouse warming for the last 15 years but did not add to warming in the 20 years before that.

We shall see.  To be continued….

Computer Generated Global Warming

Way back, I had a number of posts on surface temperature adjustments that seemed to artificially add warming to the historical record, here for example.  Looking at the adjustments, it seemed odd that they implied improving station location quality and reduced warming bias in the measurements, despite Anthony Watts’s work calling both assumptions into question.

More recently, Steve Goddard has been on a roll, looking at GISS adjustments in the US.   He’s found that the essentially flat raw temperature data:

Has been adjusted upwards substantially to show a warming trend that is not in the raw data.  The interesting part is that most of this adjustment has been added in the last few years.  As recently as 1999, the GISS’s own numbers looked close to those above.   Goddard backs into the adjustments the GISS has made in the last few years:

So, supposedly, some phenomenon has that shape.  After all, surely the addition of this little hockey stick shaped data curve to the raw data is not arbitrary simply to get the answer they want, the additions have to represent the results of some heretofore unaccounted-for bias in the raw data.  So what is it?  What bias or changing bias has this shape?

Revising History

This is a topic we have covered here a lot – downward revisions to temperatures decades ago that increase the apparent 20th century warming.  Here is a great example of this from the GISS for Reykjavik, Iceland.  The GISS has revised downwards early 20th century temperatures by as much as 2C, despite Iceland’s Met office crying foul.  It is unclear exactly what justification is being used to adjust the raw data.  Valid reasons include adjustments for changes in the time-of-day of the reading, changes to the instrument’s location or type, and urbanization effects.  It is virtually impossible to imagine changes in the first two categories that would be on the order of magnitude of 2C, and urbanization adjustments would have the opposite sign (e.g. make older readings warmer to match current urban-warming-biased readings).

Arctic stations like these are particularly important to the global metrics because the GISS extrapolates the temperature of the entire Arctic from just a few thermometers.  Changes to one reading at a station like Reykjavik could change the GISS extrapolated temperatures for hundreds of thousands of square miles.

Return of “The Plug”

I want to discuss the recent Kaufman study, which purports to reconcile flat temperatures over the last 10-12 years with high-sensitivity warming forecasts.  First, let me set the table for this post; to save time (things are really busy this week in my real job), I will quote from a previous post on this topic:

Nearly a decade ago, when I first started looking into climate science, I began to suspect the modelers were using what I call a “plug” variable.  I have decades of experience in market and economic modeling, and so I am all too familiar with the temptation to use one variable to “tune” a model, to make it match history more precisely by plugging in whatever number is necessary to make the model arrive at the expected answer.

When I looked at historic temperature and CO2 levels, it was impossible for me to see how they could be in any way consistent with the high climate sensitivities that were coming out of the IPCC models.  Even if all past warming were attributed to CO2 (a heroic assertion in and of itself), the temperature increases we have seen in the past imply a climate sensitivity closer to 1 than to 3 or 5 or even 10 (I show this analysis in more depth in this video).

My skepticism was increased when several skeptics pointed out a problem that should have been obvious.  The ten or twelve IPCC climate models all had very different climate sensitivities — how, if they have different climate sensitivities, do they all nearly exactly model past temperatures?  If each embodies a correct model of the climate, and each has a different climate sensitivity, only one (at most) should replicate observed data.  But they all do.  It is like someone saying she has ten clocks all showing a different time but asserting that all are correct (or worse, as the IPCC does, claiming that the average must be the right time).

The answer to this paradox came in a 2007 study by climate modeler Jeffrey Kiehl.  To understand his findings, we need to understand a bit of background on aerosols.  Aerosols are man-made pollutants, mainly combustion products, that are thought to have the effect of cooling the Earth’s climate.

What Kiehl demonstrated was that these aerosols are likely the answer to my old question about how models with high sensitivities are able to accurately model historic temperatures.  When simulating history, scientists add aerosols to their high-sensitivity models in sufficient quantities to cool them to match historic temperatures.  Then, since such aerosols are much easier to eliminate as combustion products than is CO2, they assume these aerosols go away in the future, allowing their models to produce enormous amounts of future warming.

Specifically, when he looked at the climate models used by the IPCC, Kiehl found they all used very different assumptions for aerosol cooling and, most significantly, he found that each of these varying assumptions were exactly what was required to combine with that model’s unique sensitivity assumptions to reproduce historical temperatures.  In my terminology, aerosol cooling was the plug variable.
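Kiehl’s finding is easy to illustrate with a toy calculation.  The function below is not any actual climate model; it is a deliberately crude sketch, with made-up numbers, showing that for any assumed sensitivity one can always solve for an aerosol offset that reproduces the historical record exactly:

```python
import math

def modeled_warming(sensitivity, co2_ratio, aerosol_plug):
    """Toy attribution: logarithmic CO2 response plus a flat aerosol
    offset.  Purely illustrative of the 'plug' idea -- this is not an
    actual IPCC model, and all numbers below are invented."""
    return sensitivity * math.log(co2_ratio, 2) + aerosol_plug

observed = 0.7     # assumed historic warming to match, in C
co2_ratio = 1.44   # ~44% of the way to a doubling

# Whatever sensitivity a model assumes, there is always an aerosol
# "plug" that makes it reproduce history exactly:
for sens in (1.5, 3.0, 4.5):
    plug = observed - sens * math.log(co2_ratio, 2)
    assert abs(modeled_warming(sens, co2_ratio, plug) - observed) < 1e-12
    print(f"sensitivity {sens} C/doubling -> aerosol plug {plug:+.2f} C")
```

Note that the higher the assumed sensitivity, the larger the negative aerosol plug required, which is exactly the pattern Kiehl reported across the models.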

So now we can turn to Kaufman, summarized in this article and with full text here.  In the context of the Kiehl study discussed above, Kaufman is absolutely nothing new.

Kaufmann et al declare that aerosol cooling is “consistent with” warming from manmade greenhouse gases.

In other words, there is some value that can be assigned to aerosol cooling that offsets high temperature sensitivities to rising CO2 concentrations enough to mathematically spit out temperatures sort of, kind of similar to those over the last decade.  But so what?  All Kaufman did is, like every other climate modeler, find some value for aerosols that plugged temperatures to the right values.

Let’s consider an analogy.  A big fan of Juan Uribe (who plays 3B for the SF Giants) might argue that the 2010 Giants World Series run could largely be explained by Uribe’s performance.  He could build a model and find that the Giants’ 2010 win totals were entirely consistent with Uribe batting .650 for the season.

What’s the problem with this logic?  After all, if Uribe hit .650, he really would likely have been the main driver of the team’s success.  The problem is that we know what Uribe hit, and he batted under .250 last year.  When real facts exist, you can’t just plug in whatever numbers you want to make your argument work.

But in climate, we are not sure exactly what the cooling effect of aerosols is.  For related coal particulate emissions, scientists are so unsure of their effects that they don’t even know the sign (i.e., whether they are net warming or cooling).  And even if they had a good handle on the effects of aerosol concentrations, no one agrees on the actual numbers for aerosol concentrations or production.

And for all the light and noise around Kaufman, the researchers did just about nothing to advance the ball on any of these topics.  All they did was find a number that worked, that made the models spit out the answer they wanted, and then argue in retrospect that the number was reasonable, though without any evidence.

Beyond this, their conclusions make almost no sense.  First, unlike CO2, aerosols are very short lived in the atmosphere – a matter of days rather than decades.  Because of this, they are poorly mixed, and so aerosol concentrations are spotty and generally can be found to the east (downwind) of large industrial complexes (see sample map here).

Which leads to a couple of questions.  First, if significant aerosol concentrations only cover, say, 10% of the globe, doesn’t that mean that to get a 0.5 degree cooling effect for the whole Earth, there must be a 5 degree cooling effect in the affected area?  Second, if this is so (and it seems unreasonably large), why have we never observed this cooling effect in the regions with high concentrations of manmade aerosols?  I understand the effect can be complicated by changes in cloud formation and such, but that is just further reason we should be studying the natural phenomenon rather than generating computer models to spit out arbitrary results with no basis in observational data.

Judith Curry does not find the study very convincing, and points to this study by Remer et al in 2008 that showed no change in atmospheric aerosol optical depths through the heart of the period of supposed increases in aerosol cooling.

So the whole basis for the study is flawed – it’s based on the effect of increasing aerosol concentrations that are not actually increasing.  Just because China is producing more does not apparently mean there is more in the atmosphere – it may be that reductions in other areas, like the US and Europe, are offsetting Chinese emissions, or that nature has mechanisms for absorbing and eliminating the increased emissions.

By the way, here was Curry’s response, in part:

This paper points out that global coal consumption (primarily from China) has increased significantly, although the dataset referred to shows an increase only since 2004-2007 (the period 1985-2003 was pretty stable).  The authors argue that the sulfates associated with this coal consumption have been sufficient to counter the greenhouse gas warming during the period 1998-2008, which is similar to the mechanism that has been invoked  to explain the cooling during the period 1940-1970.

I don’t find this explanation to be convincing because the increase in sulfates occurs only since 2004 (the solar signal is too small to make much difference).  Further, translating regional sulfate emission into global forcing isn’t really appropriate, since atmospheric sulfate has too short of an atmospheric lifetime (owing to cloud and rain processes) to influence the global radiation balance.

Curry offers the alternative explanation of natural variability offsetting CO2 warming, which I think is partly true.  Though Occam’s Razor has to force folks at some point to finally question whether high (3+) temperature sensitivities to CO2 make any sense.  Seriously, isn’t all this work on aerosols roughly equivalent to adding yet more epicycles to make the Ptolemaic model of the universe continue to work?

Postscript: I will agree that there is one very important effect of the ramp-up of Chinese coal-burning that began around 2004 — the melting of Arctic ice.  I strongly believe that the increased summer melts of Arctic ice are in part a result of black carbon from Asian coal burning landing on the ice and reducing its albedo (and greatly accelerating melt rates).  Look at when Arctic sea ice extent really dropped off: it was after 2003.  Northern polar temperatures have been fairly stable in the 2000s (the real run-up happened in the 1990s).  The delay could be just inertia in the ocean heating system, but Arctic ice melting sure seems to correlate better with black carbon from China than it does with temperature.

I don’t think there is anything we could do with a bigger bang for the buck than to reduce particulate emissions from Asian coal.  This is FAR easier than CO2 emissions reductions — it’s something we have done in the US for nearly 40 years.

Just 20 Years

I wanted to pull out one thought from my longer video and presentation on global warming.

As a reminder, I adhere to what I call the weak anthropogenic theory of global warming — that the Earth’s sensitivity to CO2, net of all feedback effects, is 1C per doubling of CO2 concentrations or less, and that while man may therefore be contributing to global warming with his CO2 (not to mention his land use and other practices) the net effect falls far short of catastrophic.

In the media, alarmists imply that their conclusions about climate sensitivity are based on a century of observation, but this is not entirely true.  Certainly we have over a century of temperature measurements, but only a small part of this history is consistent with the strong anthropogenic theory.  In fact, as I observed in my video, the entire IPCC case for a high climate sensitivity to CO2 is based on just 20 years of history, from about 1978 to 1998.

Here are the global temperatures in the Hadley CRUT3 database, which is the primary data set from which the IPCC worked (hat tip: Junk Science Global Warming at a Glance).  click to enlarge

Everything depends on how one counts it, but during the period of man-made CO2 creation, there are really just two warming periods, if we consider the time from 1910 to 1930 just a return to the mean.

  • 1930-1952, where temperatures spiked about half a degree and ended 0.2-0.3C higher than the past trend
  • 1978-1998, where temperatures rose about a half a degree, and have remained at that level since

Given that man-made CO2 output did not really begin in earnest until after 1950 (see the blue curve of atmospheric CO2 levels on the chart), few alarmists will attribute the run-up in temperatures from 1930-1952 (a period of time including the 1930s Dust Bowl) to anthropogenic CO2.  This means that the only real upward change in temperatures that could potentially be blamed on man-made CO2 occurred from 1978-1998.

This is a very limited amount of time to make sweeping statements about climate change causation, particularly given the still infant-level knowledge of climate science.  As a result, since 1970, skeptics and alarmists have roughly equal periods of time where they can make their point about temperature causation (e.g. 20 years of rising CO2 and flat temperatures vs. 20 years of rising CO2 and rising temperatures).

This means that in the last 40 years, both skeptics and alarmists must depend on other climate drivers to make their case (e.g. skeptics must point to other natural factors for the run-up in 1978-1998, while alarmists must find natural effects that offset or delayed warming in the decades on either side of this period).  To some extent, this situation slightly favors skeptics, as skeptics have always been open to natural effects driving climate while alarmists have consistently tried to downplay natural forcing changes.

I won’t repeat all the charts, but starting around chart 48 of this PowerPoint deck (also in the video linked above) I present some alternate factors that may have contributed, along with greenhouse gases, to the 1978-1998 warming (including two of the strongest solar cycles of the century and a PDO warm period nearly exactly matching these two decades).

Postscript: Even if the entire 0.7C or so temperature increase over the whole of the 20th century is attributed to manmade CO2, this still implies a climate sensitivity FAR below what the IPCC and other alarmists use in their models.   Given that CO2 concentrations have risen by about 44% of a doubling since the industrial revolution began, this would translate into a temperature sensitivity of about 1.3C per doubling (this is not a linear extrapolation; the relationship is logarithmic).
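The arithmetic behind that 1.3C figure, using the post’s own numbers (0.7C of warming, 44% of a doubling, logarithmic CO2 response):

```python
import math

warming_observed = 0.7                   # C over the 20th century
doubling_fraction = math.log(1.44, 2)    # a 44% rise is ~0.53 doublings
sensitivity = warming_observed / doubling_fraction
print(round(sensitivity, 1))             # 1.3 C per doubling
```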

This is why alarmists must argue not only that all the warming we have seen has been due to CO2 (a heroic assumption in and of itself) but that there are additional effects masking or hiding the true magnitude of past warming.  Without these twin, largely unproven assumptions, current IPCC “consensus” numbers for climate sensitivity would be absurdly high.  Again, I address this in more depth in my video.

Does This Sound Familiar to Anyone?

Greg Mankiw on scoring the federal stimulus package:

the CEA took a conventional Keynesian-style macroeconomic model and used those set of equations to estimate the effect the stimulus should have had.  Essentially, the model offers an estimate of the policy’s effect, conditional on the model being a correct description of the world.  But notice that this exercise is not really a measurement based on what actually occurred.  Rather, the exercise is premised on the belief that the model is true, so no matter how bad the economy got, the inference is that it would have been even worse without the stimulus.  Why?  Because that is what the model says.  The validity of the model itself is never questioned.

Does this sound like climate science or what?  The same models that are used to predict future temperature increases are used to decide how much of past warming was due to CO2 and how much was due to natural effects.  Here is the retrospective IPCC chart which assigns more than 100% of post-1950 warming to CO2 (since the blue “natural forcings” line is shown going down; see more here):

Here is the stimulus version, showing flat employment, but positing that the stimulus created jobs because employment “would have gone down without it” (sound familiar?)

This kind of retrospective look at causality has the look of science but is in fact nothing of the sort, and may be little more than guesses laundered to look like facts.

But this may in fact be worse than guessing.  In both cases, these graphs are drawn by folks who think they know the answer (in the first case that CO2 caused all the warming, in the second that the stimulus created millions of jobs).  Since in both cases the lower “without” case (either without CO2 or without stimulus) is horrendously difficult, almost impossible, to derive and totally impossible to measure, there is good reason to believe it is merely a plug, fixed in value to get the answer they want.  But if I plugged it in on the back of an envelope, everyone would call me out for it; so instead it gets plugged into an arcane model where numerous inputs can be tweaked to get different results, avoiding this kind of unwanted scrutiny.

Readers of climate sites will also recognize this criticism of Obama’s self-serving stimulus analysis

Moreover, the fact that other organizations simulating similar models come to similar conclusions is no evidence about the validity of the model’s simulations.  It only tells you the CEA staff did not commit egregious programming errors when running their computer simulations.

Sounds like the logic behind the hockey stick spaghetti graphs, no?

More on Urban Biases

Roy Spencer has taken another cut at the data, and again the answer is about the same as what most thoughtful people have arrived at:  Perhaps half (or more) of past warming in the surface temperature record is likely spurious due to siting biases of surface measurement stations.

Again, there almost certainly is a warming trend since 1850, and some of that trend is probably due to manmade CO2, but sensitivities in most forecasts that get attention in the media are way too high.  A tenth of a degree C per decade over the next 100 years from manmade CO2 seems a reasonable planning number.

Spencer also looks at the global numbers here.

Urban Bias on Surface Temperature Record

A lot of folks have started to analyze the surface temperature record for urban biases.  This site has linked a number of past analyses, and I’ve done some first-hand analysis of local surface temperature stations and measurements of the Phoenix urban heat island.  My hypothesis is that as much as half of the historic warming signal of 0.7C or so in the surface temperature record is actually growing urban heat islands biasing measurement stations.

Edward Long took a selection of US measurement points from the NCDC master list and chose 48 rural and 48 urban locations (one for each of the lower-48 states).  While I would like to see a test to ensure no cherry-picking went on, his results are pretty telling:

Station Set          °C/Century, 11-Year Average
                     Raw Data      Adjusted Data
Rural (48)             0.11            0.58
Urban (48)             0.72            0.72
Rural + Urban (96)     0.47            0.65

More at Anthony Watts, who has this chart from the study:

The Reference Frame has more analysis as well.

If this data is representative of the whole data set, we see two phenomena that should not be news to readers of this site:

  • Inclusion of biased urban data points may be contributing as much as 5/6 of the warming signal in the test period
  • The homogenization and adjustment process, which is supposed to statistically correct for biases, seems to be correcting in the wrong direction, adjusting clean sites to match biased ones rather than vice versa (something I discussed years ago here)
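One way to arrive at the 5/6 figure from the table above; whether the comparison is meant against the raw or the adjusted combined trend is my assumption here, so both are shown:

```python
# Trend values from Long's table above, in C/century
raw = {"rural": 0.11, "urban": 0.72, "combined": 0.47}
adjusted = {"rural": 0.58, "urban": 0.72, "combined": 0.65}

# Share of the combined trend not present at raw rural stations:
share_raw = (raw["combined"] - raw["rural"]) / raw["combined"]

# Same comparison against the adjusted combined trend:
share_adj = (adjusted["combined"] - raw["rural"]) / adjusted["combined"]

print(round(share_raw, 2))  # 0.77
print(round(share_adj, 2))  # 0.83, i.e. roughly 5/6
```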

The homogenization process has always bothered me.  It is probably the best we can do if we don’t know which of two conflicting measurements is likely to be biased, but it makes no sense in this case, as we have a fair amount of confidence that the rural location is likely better than the urban one.

Let’s say you had two compasses to help you find north, but they give conflicting readings.  After some investigation, you find that one of the compasses is located next to a strong magnet, which you have good reason to believe is strongly biasing that compass’s readings.  In response, would you

  1. Average the results of the two compasses and use this mean to guide you, or
  2. Ignore the output of the poorly sited compass and rely solely on the other unbiased compass?

Most of us would quite rationally choose #2.

Most climate data bases go with approach #1.
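A toy illustration of why averaging loses to selection when one instrument is known to be biased; all the numbers here are hypothetical, chosen only to make the arithmetic obvious:

```python
# Hypothetical numbers for illustration only.
true_trend = 0.10   # C/decade an unbiased (rural) station would record
uhi_bias = 0.60     # spurious extra trend at the urban station

rural = true_trend
urban = true_trend + uhi_bias

# Approach 1 (average/homogenize): half of the bias survives
averaged = (rural + urban) / 2

# Approach 2 (prefer the station known to be well sited)
selected = rural

print(averaged, selected)  # 0.4 0.1
```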

Let’s remind everyone why this matters.  We are not going to eliminate past warming: through about 1800 the Earth was at one of its coldest periods in 5000 years, and it has gotten warmer since.  The reason it matters is twofold:

  • The main argument for anthropogenic causes of warming is that the rise of late (particularly 1978 – 1998)  has been so steep and swift that it couldn’t be anything else.  This was always an absurd argument, because we have at least two periods in the last 150 years prior to most of our fossil fuel combustion where temperature rises were as fast and steep as 1978-1998.  But if temperatures did not rise as much as we thought, this argument is further gutted.
  • High sensitivity climate models have always had trouble back-casting history.  Models that predict 5C of warming with a doubling have a difficult time replicating past warming of 0.6C over 40% of a doubling.  If the 0.6C is really 0.3C, then someone might actually raise their hand and observe that the emperor has no clothes – i.e., that based on history, high sensitivity models make no sense.

The Madness of Prince Charles

Charleses have not had the best of luck on the English throne.  And the current Prince of Wales does not seem to be doing much to change that tradition.  The other day he said:

“Well, if it is but a myth, and the global scientific community is involved in some sort of conspiracy, why is it then that around the globe sea levels are more than six inches higher than they were 100 years ago?

“This isn’t an opinion – it is a fact.”

He added: “And, ladies and gentlemen please be in no doubt that the evidence of long-term and potentially irreversible changes to our world is utterly overwhelming.”

Here is the deal with sea levels.  Yes, they were rising in 2009.  And they were rising in 2000.  And they were rising in 1950.  And they were rising in 1900.  And they were rising in 1850.  In fact, sea levels have been rising (due to thermal expansion of water and perhaps some melting land ice**) since the end of the little ice age (and longer; see WUWT):

slide81

In fact, I would argue that this extended sea level rise helps disprove, rather than prove, the strong anthropogenic hypothesis.   The influence of manmade CO2 had to be small from 1850 to 1900, or even to 1950.  Therefore, for the 1950-2000 sea level rise to be due to man, the natural warming had to stop at the exact moment that anthropogenic effects took over.  Occam’s Razor says a better answer is that the end of the little ice age around 1800 has led to a general recovery of temperatures ever since.  We see the exact same pattern in glacier melting:

slide79

So many people are obsessed over whether or not current temperatures are the highest in the last 1000 years that they forget that temperatures in the little ice age were in fact lower than at any time in perhaps the last 5000 years.  It was very cold.

slide50

Postscript: By the way, I love the “carbon footprint for me, but not for thee” angle of the Prince Charles story:

Charles spoke after arriving in Manchester by Royal Train pulled by a coal-fired steam locomotive, named the Tornado, which was rebuilt from a 1948 design.

** Footnote: We know glaciers around the world have retreated since 1850, as shown above, but 90% of the world’s land ice is in Antarctica and we don’t fully understand what has happened there.  Some climatologists believe that warming weather actually increases the ice pack in Antarctica, because it will never cause much melting there but does increase snowfall.

Assuming Your Conclusion

I thought this was pretty interesting, and oh-so typical of climate science, from an article by Viscount Monckton:

The paper was based on a test of a widely-used climate model on the mid-Pliocene warm period, 3 million years ago, when the Earth warmed in response to natural processes. Cores drilled from ocean sediment provide some evidence for atmospheric carbon levels and temperature at the time.

The team found that at that era, although CO2 levels were close to today’s 388 parts per million by volume, global temperature was 3 C° (5.5 F°) warmer than today. The paper assumes – without evidence – that the difference can only be fully explained by the long-term loss of ice sheets and changes in vegetation that caused the Earth’s surface to absorb more solar radiation. One of the authors said that today’s CO2 concentration of 388 ppmv might already be too high to prevent more than 2 C° (3.5 F°) of warming compared with pre-industrial times – the limit agreed as an aspiration by the recent Copenhagen accord.

The authors are concluding that there is therefore another 3C of warming we should see over time due to our current CO2 levels that has just not shown up yet because slow-response-time feedbacks like ice melting / albedo changes haven’t fully come into play.

I presume you see the problem.  This conclusion can only be drawn if either

1.  We know the value of every other climate forcing that was in play 3 million years ago, and know them to be identical to their values today, such that the only changed variable in the temperature system between then and now is CO2.  Of course, this is absurd — we can’t possibly know all the other forcings from 3 million years ago (we argue about what they are today) and there is a very low probability they were all of the same value as today to set up a nice controlled experiment.  – OR -

2.  We assume that the only major driver of climate, the one that dominates and makes all others irrelevant, is CO2.  This is not only not proven, it is not even reasonably true.

These guys, as is so often the case in climate, are assuming their conclusion.  “If we assume that CO2 is the primary driver of climate, then sensitivity of the climate to CO2 is high.”  Duh.

Defending the Tribe

This is a really interesting email string from the CRU emails, via Steve McIntyre:

June 4, 2003 Briffa to Cook 1054748574
On June 4, 2003, Briffa, apparently acting as editor (presumably for Holocene), contacted his friend Ed Cook of Lamont-Doherty in the U.S. who was acting as a reviewer telling him that “confidentially” he needed a “hard and if required extensive case for rejecting”, in the process advising Cook of the identity and recommendation of the other reviewer. There are obviously many issues involved in the following as an editor instruction:

From: Keith Briffa
To: Edward Cook
Subject: Re: Review- confidential REALLY URGENT
Date: Wed Jun 4 13:42:54 2003

I am really sorry but I have to nag about that review – Confidentially I now need a hard and if required extensive case for rejecting - to support Dave Stahle’s and really as soon as you can. Please
Keith

Cook to Briffa, June 4, 2003
In a reply the same day, Cook told Briffa about a review for Journal of Agricultural, Biological, and Environmental Sciences of a paper which, if not rejected, could “really do some damage”. Cook goes on to say that it is an “ugly” paper to review because it is “rather mathematical” and it “won’t be easy to dismiss out of hand as the math appears to be correct theoretically”. Here is the complete email:

Hi Keith,
Okay, today. Promise! Now something to ask from you. Actually somewhat important too. I got a paper to review (submitted to the Journal of Agricultural, Biological, and Environmental Sciences), written by a Korean guy and someone from Berkeley, that claims that the method of reconstruction that we use in dendroclimatology (reverse regression) is wrong, biased, lousy, horrible, etc. They use your Tornetrask recon as the main whipping boy. I have a file that you gave me in 1993 that comes from your 1992 paper. Below is part of that file. Is this the right one? Also, is it possible to resurrect the column headings? I would like to play with it in an effort to refute their claims. If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically, but it suffers from the classic problem of pointing out theoretical deficiencies, without showing that their improved inverse regression method is actually better in a practical sense. So they do lots of monte carlo stuff that shows the superiority of their method and the deficiencies of our way of doing things, but NEVER actually show how their method would change the Tornetrask reconstruction from what you produced. Your assistance here is greatly appreciated. Otherwise, I will let Tornetrask sink into the melting permafrost of northern Sweden (just kidding of course).
Cheers,
Ed

A couple of observations

  1. For guys who supposedly represent the consensus science of tens of thousands of scientists, these guys sure have a bunker mentality
  2. I would love an explanation of how math can have theoretical deficiencies but be better in a practical sense.  In the practical sense of … giving the answer one wants?
  3. The general whitewash answer to all the FOIA obstructionism is that these are scientists doing important work not to be bothered by nutcases trying to waste their time.  But here is exactly the hypocrisy:  The email author says that some third party’s study is deficient because he can’t demonstrate how his mathematical approach might change the answer the hockey team is getting.  But no third party can do this because the hockey team won’t release the data needed for replication.  This kind of data – to check the mathematical methodologies behind the hockey stick regressions – is exactly what Steve McIntyre et al have been trying to get.  Ed Cook is explaining here, effectively, why release of this data is indeed important
  4. At the very same time these guys are saying to the world not to listen to critics because they are not peer-reviewed, they are working as hard as they can back-channel to keep their critics out of peer-reviewed literature they control.
  5. For years I have said that one problem with the hockey team is not just that the team is insular, but that the reviewers of their work are the same guys doing the work.  And now we see that these same guys are asked to review critiques of their work.

A First

To my knowledge, this may be a first.  After years of folks like Steve McIntyre deconstructing numerous problems in historical temperature proxy studies, a major media outlet actually does a detailed article on some of the issues with proxy studies.  David Rose in the Mail Online.  Whoever thought we would see this chart in the MSM?

article-0-07949b82000005dc-809_634x447

Only 25 months after a similar chart was shown on this site (and here).

Powers of 10

This is a really interesting post at WUWT by J. Storrs Hall.  It reminds me of one of those powers of ten films.  He looks at data from a Greenland ice core (archived at NOAA here) going back over 50,000 years.  He begins by looking at the last few hundred years, and then pulls back the view on larger and larger time scales.  Highly recommended.

Note: Box et al. in 2009 claim to have found 1-1.5C of warming since around 1900, where this chart leaves off.  It is very, very, very dangerous to splice data sets together, but one probably has to add a degree or so to the tail of the chart to bring it up to date, putting current warming about at the Medieval level but below earlier Holocene temperatures.

A Total Bluff

Gavin Schmidt has absolutely no evidence for this:  (via Tom Nelson)

Gavin [Schmidt],

In your opinion, what percentage of global warming is due to human causes vs. natural causes?

[Response: Over the last 40 or so years, natural drivers would have caused cooling, and so the warming there has been (and some) is caused by a combination of human drivers and some degree of internal variability. I would judge the maximum amplitude of the internal variability to be roughly 0.1 deg C over that time period, and so given the warming of ~0.5 deg C, I'd say somewhere between 80 to 120% of the warming. Slightly larger range if you want a large range for the internal stuff. - gavin]

This is a complete bluff.  There is no way he or anyone else knows this.  I could reverse his numbers and say 0-20% for CO2 and have just as much justification (actually more, see below).  We have devised no good way to parse the temperature changes into any reliable division between various drivers given the complexity of climate.  The only way climate scientists claim to do it is with their highly flawed temperature models, which is a fit of hubris that is unfathomable.

But, beyond the fact that he simply can’t know the answer, his guess here is just awful.  It does not reality check at all.   Here are a few pointers:

1.  Over the last 40 years, or at least over the portion from 1975-1995 when we saw most of the temperature increase, the sun was at its most active this century, as measured by sunspot numbers.  The PDO, which has close links to temperature, was in its warm cycle.  We likely were continuing to see long-term cyclical recovery from the little ice age.  And anthropogenic land use changes were increasing both urban and rural temperatures.  But he claims that the net effect of non-CO2 factors would have been negative?  This is roughly equivalent to Obama’s jobs claims numbers, saying that he saved jobs that would otherwise have been lost.  Its appeal is that it makes a useful political point while being impossible to prove.

2.  Schmidt is basically repeating the IPCC position that there could be no possible natural explanation for the 0.2C per decade temperature increases from 1975-2000 — ie that such a pace of temperature increase has to be due to CO2 alone (80-120% in my mind equates to CO2 alone).  But world temperatures increased from 1910 to 1940 by 0.2C per decade, in a period almost certainly only minimally influenced by CO2 (see below).  So natural effects can cause warming in the 1930′s but not in the 1980′s because, why?

temperature-chart1

I often use this chart with audiences:

slide48

3.  I am positive that Hansen would argue that natural effects are currently (and temporarily) canceling out some of the warming.  He would say this as a way to deflect criticism that the world has stopped warming over the last decade (something the CRU emails admit they don’t understand, though they won’t admit this publicly**).  But Hansen et al. think we should be seeing 0.2C a decade or more in CO2 warming that is apparently being overcome by natural effects.  So natural effects have enough variability to cancel out 0.2C of warming but not enough to cause 0.2C of warming?  Huh?

This is sort of a special theme this week on this blog, as the topic keeps coming up.  In short, climate scientists need the climate to be alternately sensitive and insensitive, unstable and stable, driven by nature and not driven by nature, all depending on the period they are trying to explain.   All these wildly contradictory assumptions are required to try to keep the hypothesis of very high sensitivities to CO2 alive.

Here, by the way, was my attempt to explain the last 100 years of temperature with a cyclical wave plus a small linear trend:

slide53

Not bad, huh?  Here is a similar analysis using a linear trend plus the PDO:

slide54

My answer seems at least as plausible as Gavin’s.  Here is where I did this analysis in more depth. If I really had an official climate scientist decoder ring, I would blame the gap between measured temperatures and my simplified model in orange during the 1980′s on aerosols.  I don’t know how much if any they affect the climate, but neither do climate scientists and that does not stop them from using it as the universal model plug to improve historic correlations.

By the way, for reference, here is the sunspot cycle:

slide51

Here is the world temperature graph overlaid with the PDO:

slide52

And finally here is some evidence (from ice core analysis) that we may just still be recovering from a period that could well have been the coldest period in the last 5000 years  (notice the regular millennial trend as well).

slide50

But CO2 explains 80-120% of the warming?  The time is hopefully coming when smart people stop taking such statements on faith and demand proof.

**Postscript-  Last year I attended a fantastic series of lectures and discussions at ASU called the Origins Conference.  One thing I observed there was that the scientists, in talking about things like the origins of the universe, were quick to admit where they didn’t understand things — in fact they were almost gleeful about it, as if something they didn’t understand was a new toy under the Christmas tree.  And for real scientists, I suppose it is.  This is not at all what we see in the CRU emails.

Cognitive Dissonance

Mann’s got an interesting problem.  His various hockey sticks show incredibly low temperature variability until about 1850 or so.  But his and his counterparts’ models assume the climate temperature system is dominated by very high positive feedbacks that multiply even tiny changes in forcings into large temperature swings.  These two points of view are extraordinarily hard to reconcile.

Similarly, climate alarmists assume that some sort of natural phenomenon is hiding or masking warming for the last decade.  Given their forecasts, this has to be a pretty muscular phenomenon, but at the same time they have to argue that natural factors are not muscular enough to have caused much if any of the temperature increases in the 1980s and 1990s.

The ability to handle cognitive dissonance is important in climate science.

Today’s Double-Speak Translation

As a public service, I will translate the double-speak coming out of Phil Jones and the CRU:

SCIENTISTS at the University of East Anglia (UEA) have admitted throwing away much of the raw temperature data on which their predictions of global warming are based.

It means that other academics are not able to check basic calculations said to show a long-term rise in temperature over the past 150 years.

The UEA’s Climatic Research Unit (CRU) was forced to reveal the loss following requests for the data under Freedom of Information legislation.

The data were gathered from weather stations around the world and then adjusted to take account of variables in the way they were collected. The revised figures were kept, but the originals — stored on paper and magnetic tape — were dumped to save space when the CRU moved to a new building…

In a statement on its website, the CRU said: “We do not hold the original raw data but only the value-added (quality controlled and homogenised) data.”

By “value-added,” the CRU means raw data where arbitrary scaling factors and adjustments have been added to the data in a totally opaque and non-replicable sort of way.  From past experience in other locations (see this post on New Zealand and the US), the adjustments to the raw data tend to drive 80-100% of the global warming signal.  In other words, in areas where we have been able to check, these data adjustments account for 80+% of what the scientists call “global warming.”  Without these adjustments, warming has been more modest or non-existent.

By destroying the raw data and thereby hiding the amount of massaging and adjustment that has been made to it (“value add”), we are therefore unlikely ever to be able to scrutinize the source of 80% of the warming signal.  More from Anthony Watts here.

Update:  This does not mean that there has been no warming, just that it has been exaggerated.  Satellites have shown warming over the last 30 years and are unaffected by the same biases and issues as at the CRU.  But the whole point is the exaggeration.  Skeptics generally don’t think there is no warming from man’s CO2, just that it is greatly exaggerated.  And this matters.  Ten degrees of warming vs. a half degree of warming over the next century have very very different policy implications.  See my video here for more.

“The Trick”

Steve McIntyre explains the “trick” referred to in the CRU emails.  The trick is subtle, which allows the scientists to weasel out, saying things that are technically true but in essence false and misleading.

Most of the proxy series are smoothed in some way.  Most smoothing algorithms adjust a data point by averaging in data both forwards and backwards in the series.  A simple algorithm puts high weights on nearby data points in this averaging and relatively lower weights on data points further away.

The problem occurs when the series reaches its end.  There are no points forward of the end of the series to average in.  By the last point in the series, fully half the data necessary for smoothing does not exist.  There are various techniques for handling this, all of which have trade-offs and compromises (at the end of the day, you can’t create a signal where there is no data, no matter how clever the math).

The trick involved taking instrumental temperature records and using these records to provide data after the end point for smoothing purposes.  This tends to force the smoothed curves upwards at the end, when there is no such data in the proxy trend to substantiate this.  The perpetrators of this trick can argue with a semi-straight face that they did not “graft” the instrumental temperature record onto the data, but the instrumental record does in fact affect the data series by contributing as much as half of the data for the smoothed curve in the end years.
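The mechanics are easy to demonstrate with a toy example.  Below is a minimal sketch (invented numbers, and a simple centered moving average rather than whatever filter the papers actually used) showing how padding the end of a flat proxy series with rising instrumental values pulls the smoothed endpoint upward:

```python
def smooth(series, half=2):
    # simple centered moving average; near the ends it just averages
    # over whatever points exist in the window
    out = []
    for i in range(len(series)):
        window = series[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

proxy = [0.0, 0.1, 0.0, -0.1, 0.0, 0.1, 0.0]  # a flat-ish proxy series
instr = [0.4, 0.6]                            # rising instrumental values

plain  = smooth(proxy)                        # proxy smoothed on its own
padded = smooth(proxy + instr)[:len(proxy)]   # "the trick": pad, then truncate
```

The last point of the padded version comes out well above the unpadded one, even though nothing in the proxy itself rose.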

Another Problem

I have always considered the “we-don’t-graft” claim disingenuous for another reason.  This is driven in large part by the fact that I have spent a lot of time not just manipulating data, but thinking about the most effective ways to represent it in graphical form.

To this end, I have always thought that while folks like Mann and Briffa have not technically grafted the instrumental data, they have effectively done so in their graphical representations — which is the form in which 99.9% of the population have consumed their data.

Below is the 1000-year temperature reconstruction (from proxies like tree rings and ice cores) in the Fourth IPCC Assessment.  It shows the results of twelve different studies, one of which is the Mann study famously named “the hockey stick.”

S_1000years

All the colored lines are the proxy (tree ring, ice cores, sediments, etc) study results.  The black line is the instrumental temperature record from the Hadley CRU.  There is no splice here – they have not joined proxy to instrument.  But they have effectively done so by overlaying the lines on top of each other.  The visual impact that says hockey stick is actually driven by this overlay.

S_1000years_inflection_high

To prove it, let’s remove the black instrumental temperature line as well as the gray line, which I think is some kind of curve fitted to all of the above.  This is what we get:

S_1000years_inflection

Pretty different visual impact, huh?  The hockey stick is gone.  So in fact, the visual image of a hockey stick is driven by the overlay of the instrumental record on the proxies.  The hockey stick inflection point occurs right at the point the two lines join, raising the distinct possibility that the inflection is due to incompatibility of the two data sources rather than a natural phenomenon.

More here.

Temperature Cycles

I have always been fascinated with the chart below, and the apparent strong correlation of global temperature changes with ocean cycles — particularly considering that ocean cycles are not included in climate models, but nevertheless climate scientists act as if these models are accurate.

slide52

So, just for the fun of it, I tried to see if I could fit a linear trend plus a sine wave to historic temperature (similar to Klyashtorin and Lyubushin, 2003).  This is what we might see if temperature were a function of a constant recovery from the little ice age plus ocean cycles.  It is not the fit we would expect from an anthropogenic-driven model.  This is what I got  (temperature history a blend of Hadley CRUT3 and UAH satellite as shown here):

slide53

I didn’t spend a lot of time on it, and this is what I got — about 0.04C per decade linear trend plus a cycle.  This is one of those things that I can’t figure out if it is insightful or meaningless, but I thought I would share it with you this holiday week, since things are slow around the office here.

As a final set, I tried it again with a linear trend plus the PDO.

slide54

Update: The formula for the first chart is -0.55+0.005*(year-1861)+0.145*cos((2*pi*(year-1861)/64.1453)-1.8)

The formula for the second chart is -0.05+0.008*(year-1900)+0.2*PDO
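The first formula translates directly into a few lines of Python, for anyone who wants to reproduce the curve (a minimal sketch; the constants are simply copied from the update above, and you would need the Hadley anomaly series to reproduce the overlay itself):

```python
import math

def cycle_plus_trend(year):
    # linear recovery trend (0.005 C/yr) plus a ~64-year cosine cycle,
    # offset to match the centering of the Hadley anomalies
    t = year - 1861
    return -0.55 + 0.005 * t + 0.145 * math.cos(2 * math.pi * t / 64.1453 - 1.8)
```

For example, the model puts the year-2000 anomaly at roughly +0.25C.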

Can’t Be Explained by Natural Causes

The fact that CO2 in the atmosphere can cause warming is fairly settled.  The question is, how much?  Is CO2 the leading driver of warming over the past century, or just an also-ran?

Increasingly, scientists justify the contention that CO2 was the primary driver of warming since 1950 by saying that they have attempted to model the warming of the last 50 years and they simply cannot explain the warming without CO2.

This has always struck me as an incredibly lame argument, as it implies that the models are an accurate representation of nature, which they likely are not.  We know that significant natural effects, such as the PDO and AMO, are not well modelled or even considered at all in these models.

But for fun, let’s attack the problem in a different way.  Below are two global temperature charts.  Both have the same scale, with time on the X-axis and temperature anomaly on the Y.  One is for the period from 1957-2008, what I will call the “anthropogenic” period because scientists claim that its slope can only be explained by anthropogenic factors.  The other is from 1895-1946, when CO2 emissions were low and whose behavior must almost certainly be driven by “nature” rather than man.

Sure, I am just a crazy denier, but they look really similar to me.  Why is it that one slope is explainable by natural factors but the other is not?  Especially since the sun in the later period was more active than it was in the earlier “natural” period.  So, which is which?

slide48

Continue reading

Regression Abuse

As I write this, I realize I go a long time without getting to climate.  Stick with me, there is an important climate point.

The process goes by a number of names, but multi-variate regression is a mathematical technique (really only made practical by computer processing power) for determining a numerical relationship between one output variable and one or more input variables.

Regression is absolutely blind to the real world — it only knows numbers.  What do I mean by this?  Take the famous example of Washington Redskins football and presidential elections:

For nearly three quarters of a century, the Redskins have successfully predicted the outcome of each and every presidential election. It all began in 1933 when the Boston Braves changed their name to the Redskins, and since that time, the result of the team’s final home game before the election has always correctly picked who will lead the nation for the next four years.

And the formula is simple. If the Redskins win, the incumbent wins. If the Redskins lose, the challenger takes office.

Plug all of this into a regression and it would show a direct, predictive correlation between Redskins football and Presidential winners, with a high degree of certainty.  But we denizens of the real world would know that this is insane.  A meaningless coincidence with absolutely no predictive power.

You won’t often find me whipping out nuggets from my time at the Harvard Business School, because I have not always found a lot of that program to be relevant to my day-to-day business experience.  But one thing I do remember is my managerial economics teacher hammering us over and over with one caveat to regression analysis:

Don’t use regression analysis to go on fishing expeditions.  Include only the variables you have real-world evidence really affect the output variable to which you are regressing.

Let’s say one wanted to model the historic behavior of Exxon stock.  One approach would be to plug in a thousand or so variables that we could find in economics databases and crank the model up and just see what comes out.  This is a fishing expedition.  With that many variables, by the math, you are almost bound to get a good fit (one characteristic of regressions is that adding an additional variable, no matter how irrelevant, always improves the fit).  And the odds are high you will end up with relationships to variables that look strong but are only coincidental, like the Redskins and elections.
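That parenthetical claim is easy to check numerically.  The sketch below (invented data; the output variable is pure noise, so no regressor is genuinely relevant) adds one random regressor at a time and records the residual sum of squares, which can only go down:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
y = rng.normal(size=n)          # the "stock price": pure noise

X = np.ones((n, 1))             # start with just an intercept column
rss = []
for _ in range(10):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rss.append(float(resid @ resid))
    # go "fishing": bolt on another completely irrelevant regressor
    X = np.hstack([X, rng.normal(size=(n, 1))])
```

The recorded RSS values never increase: every added variable “improves” the fit, relevant or not.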

Instead, I was taught to be thoughtful.  Interest rates, oil prices, gold prices, and value of the dollar are all sensible inputs to Exxon stock price.  But at this point my professor would have a further caveat.  He would say that one needs to have an expectation of the sign of the relationship.  In other words, I should have a theory in advance not just that oil prices affect Exxon stock price, but whether we expect higher oil prices to increase or decrease Exxon stock price.   In this he was echoing my freshman physics professor, who used to always say in the lab — if you are uncertain about the sign of a relationship, then you don’t really understand the process at all.

So let’s say we ran the Exxon stock price model expecting higher oil prices to increase Exxon stock price, and our regression result actually showed the opposite, a strong relationship but with the opposite sign – higher oil prices seem to correlate better with lower Exxon stock price.  So do we just accept this finding?  Do we go out and bet a fortune on it tomorrow?  I sure wouldn’t.

No, what we do instead is take this as sign that we don’t know enough and need to research more.  Maybe my initial assumption was right, but my data is corrupt.  Maybe I was right about the relationship, but in the study period some other more powerful variable was dominating  (example – oil prices might have increased during the 1929 stock market crash, but all the oil company stocks were going down for other reasons).  It might be there is no relation between oil prices and Exxon stock prices.  Or it might be I was wrong, that in fact Exxon is dominated by refining and marketing rather than oil production and actually is worse off with higher oil prices.    But all of this points to needed research – I am not going to write an article immediately after my regression results pop out and say “New Study: Exxon stock prices vary inversely with oil prices” without doing more work to study what is going on.

Which brings us to climate (finally!) and temperature proxies.  We obviously did not have accurate thermometers measuring temperature in the year 1200, but we would still like to know something about temperatures.  One way to do this is to look at certain physical phenomenon, particularly natural processes that result in some sort of annual layers, and try to infer things from these layers.  Tree rings are the most common example – tree ring widths can be related to temperature and precipitation and other climate variables, so that by measuring tree ring widths (each of which can be matched to a specific year) we can infer things about climate in past years.

There are problems with tree rings for temperature measurement (not the least of which is that more things than just temperature affect ring width), so scientists search for other “proxies” of temperature.  One such proxy is the sediment in certain northern lakes, which is layered like tree rings.  Scientists had a theory that the amount of organic matter in a sediment layer was related to the amount of growth activity in that year, which in turn increases with temperature  (It is always ironic to me that climate scientists who talk about global warming catastrophe rely on increased growth and life in proxies to measure higher temperature).  Because more organic matter reduces the x-ray density of samples, an inverse relationship between x-ray density and temperature could be formulated — in this case we will look at the Tiljander study of lake sediments.  Here is one core result:

picture1

The yellow band with lower X-ray density (meaning higher temperatures by the way the proxy is understood) corresponds pretty well with the Medieval Warm Period that is fairly well documented, at least in Europe (this proxy is from Finland).  The big drop in modern times is thought by most (including the original study authors) to be corrupted data, where modern agriculture has disrupted the sediments and what flows into the lake, eliminating its usefulness as a meaningful proxy.  It doesn’t mean that temperatures have dropped lately in the area.

But now the interesting part.  Michael Mann, among others, used this proxy series (despite the well-known corruption) among a number of others in an attempt to model the last thousand years or so of global temperature history.  To simplify what is in fact more complicated, his models regress each proxy series like this against measured temperatures over the last 100 years or so.  But look at the last 100 years on this graph.  Measured temperatures are going up, so his regression locked onto this proxy and … flipped the sign.  In effect, it reversed the proxy.  As far as his models are concerned, this proxy is averaged in with values of the opposite sign, like this:

picture2

A number of folks, particularly Steve McIntyre, have called Mann on this, saying that he can’t flip the proxy upside down.  Mann’s response is that the regression doesn’t care about the sign, and that it’s all in the math.

Hopefully, after our background exposition, you see the problem.  Mann started with a theory that more organic material in lake sediments (as shown by lower x-ray densities) correlated with higher temperatures.  But his regression showed the opposite relationship — and he just accepted this, presumably because it yielded the hockey stick shape he wanted.  But there is absolutely no physical theory as to why our historic understanding of organic matter deposition in lakes should be reversed, and Mann has not even bothered to provide one.  In fact, he says he doesn’t even need to.

This mistake (fraud?) is even more egregious because it is clear that the jump in x-ray values in recent years is due to a spurious signal and corruption of the data.  Mann’s algorithm is locking into meaningless noise, and converting it into a “signal” that there is a hockey stick shape to the proxy data.
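The sign flip is easy to reproduce.  In the toy calibration below (invented numbers, not the Tiljander data), physical theory says x-ray density should fall as temperature rises, but the corrupted modern values rise together with temperature, and ordinary least squares cheerfully returns a positive coefficient:

```python
# calibration window: measured temperatures rising steadily
temp = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
# theory says x-ray density should FALL as temperature rises;
# the corrupted modern core shows it rising instead
xray = [1.0, 1.2, 1.5, 1.9, 2.4, 3.0]

def ols_slope(x, y):
    # ordinary least squares slope of y regressed against x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
            sum((a - mx) ** 2 for a in x))

slope = ols_slope(xray, temp)   # comes out positive: the proxy is "flipped"
```

The math happily accepts the wrong-signed relationship; only a physical sanity check would catch it.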

As McIntyre concludes:

In Mann et al 2008, there is a truly remarkable example of opportunistic after-the-fact sign selection, which, in addition, beautifully illustrates the concept of spurious regression, a concept that seems to baffle signal mining paleoclimatologists.

Postscript: If you want an even more absurd example of this data-mining phenomenon, look no further than Steig’s study of Antarctic temperatures.  In the case of proxies, it is possible (though unlikely) that we might really reverse our understanding of how a proxy works based on the regression results.  But in Steig, they were taking individual temperature station records and creating a relationship between them and a synthesized continental temperature number.  Steig used regression techniques to weight the various thermometers in rolling up the continental measure.  But five of the weights were negative!!

bar-plot-station-weights

As I wrote then,

Do you see the problem?  Five stations actually have negative weights!  Basically, this means that in rolling up these stations, these five thermometers were used upside down!  Increases in these temperatures in these stations cause the reconstructed continental average to decrease, and vice versa.  Of course, this makes zero sense, and is a great example of scientists wallowing in the numbers and forgetting they are supposed to have a physical reality.  Michael Mann has been quoted as saying the multi-variable regression analysis doesn’t care as to the orientation (positive or negative) of the correlation.  This is literally true, but what he forgets is that while the math may not care, Nature does.
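The negative-weight phenomenon itself is easy to manufacture.  In the sketch below (synthetic series, not Steig’s data), two “stations” both contain the target signal plus a shared local drift; least squares exploits the redundancy and assigns one station a negative weight:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.cumsum(rng.normal(size=200))   # target "continental" temperature series
d = np.cumsum(rng.normal(size=200))   # a shared local drift
A = t + d                             # station A: signal plus drift
B = t + 2 * d                         # station B: signal plus twice the drift

X = np.column_stack([A, B])
w, *_ = np.linalg.lstsq(X, t, rcond=None)   # solve for station weights
```

Because t equals exactly 2A - B here, the fitted weights come out near (2, -1): station B is used “upside down” even though it contains the very signal being reconstructed.  The math may not care, but nature does.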