Category Archives: Warming Forecasts

Revisiting (Yet Again) Hansen’s 1988 Forecast on Global Warming to Congress

I want to briefly revisit Hansen’s 1988 Congressional forecast.  Yes, I and many others have churned over this ground many times, but I think I now have a better approach.  The typical approach has been to overlay some actual temperature data set on top of Hansen’s forecast (e.g. here).  The problem is that with revisions to all of these data sets, particularly the GISS reset in 1999, none of them matches what Hansen was using at the time.  So we often get into arguments about where the forecast and actuals should be centered, etc.

This might be a better approach.  First, let’s start with Hansen’s forecast chart (click to enlarge).

[Figure: Hansen’s 1988 forecast chart]

Folks have argued for years over which CO2 scenario best matches history.  I would argue it is somewhere between A and B, but you will see in a moment that it almost does not matter.    It turns out that both A and B have nearly the same regressed slope.

The approach I took this time was not to worry about matching exact starting points or reconciling different anomaly base periods.  I merely took the slope of the A and B forecasts and compared it to the slope over the last 30 years of a couple of different temperature databases (HadCRUT4 and the UAH v6 satellite data).

The only real issue is the start year.  The analysis is not very sensitive to the choice, but I tried to find a logical start.  Hansen’s chart is frustrating because his forecast scenarios never converge exactly, even 20 years in the past.  However, they are nearly identical in 1986, a logical base year given that Hansen gave the speech in 1988, so I started there.  I didn’t do anything fancy on the trend lines, just let Excel calculate the least squares regression.  This is what we get.

[Figure: regression trends, Hansen scenarios A and B vs. HadCRUT4 and UAH v6]

I think that tells the tale  pretty clearly.   Versus the gold standard surface temperature measurement (vs. Hansen’s thumb-on-the-scale GISS) his forecast was 2x too high.  Versus the satellite measurements it was 3x too high.

The least squares regression approach probably under-estimates the Scenario A growth rate, but that is OK; it just makes the conclusion more robust.
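For anyone who wants to redo the slope comparison outside of Excel, here is a minimal sketch in Python.  The series below are placeholders standing in for the digitized Hansen scenarios and the HadCRUT4 / UAH v6 annual anomalies; only the least-squares slope calculation is the point.

```python
# Minimal sketch of the slope comparison (not the original spreadsheet).  The
# series below are placeholders; substitute the digitized Hansen scenario data
# and the HadCRUT4 / UAH v6 annual anomalies you are actually using.
import numpy as np

years = np.arange(1986, 2017)                     # hypothetical 1986-2016 window
hansen_b = 0.028 * (years - 1986) + 0.10          # placeholder forecast series
hadcrut4 = 0.017 * (years - 1986) + 0.05          # placeholder surface series
uah_v6   = 0.011 * (years - 1986) + 0.02          # placeholder satellite series

def trend_per_century(yrs, anomalies):
    """Least-squares slope in degrees C per year, scaled to degrees C per century."""
    slope, _intercept = np.polyfit(yrs, anomalies, 1)
    return slope * 100.0

for name, series in [("Hansen B", hansen_b), ("HadCRUT4", hadcrut4), ("UAH v6", uah_v6)]:
    print(f"{name:9s} trend: {trend_per_century(years, series):.2f} C/century")

# The comparison in the post is then just the ratio of slopes, e.g.
# trend_per_century(years, hansen_b) / trend_per_century(years, hadcrut4)
```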

By the way, I owe someone a thanks for the digitized numbers behind Hansen’s chart but it has been so many years since I downloaded them I honestly forgot who they came from.

Update On My Climate Model (Spoiler: It’s Doing a Lot Better than the Pros)

Cross posted from Coyoteblog

In this post, I want to discuss my just-for-fun model of global temperatures I developed 6 years ago.  But more importantly, I am going to come back to some lessons about natural climate drivers and historic temperature trends that should have great relevance to the upcoming IPCC report.

In 2007, for my first climate video, I created an admittedly simplistic model of global temperatures.  I did not try to model any details within the climate system.  Instead, I attempted to tease out a very few (it ended up being three) trends from the historic temperature data and simply projected them forward.  Each of these trends has a logic grounded in physical processes, but the values I used were pure regression rather than any bottom up calculation from physics.  Here they are:

  • A long term trend of 0.4C warming per century.  This can be thought of as a sort of base natural rate for the post-little ice age era.
  • An additional linear trend beginning in 1945 of 0.35C per century.  This represents the combined effects of CO2 (whose effects should largely appear after mid-century) and higher solar activity in the second half of the 20th century.  (Note that this is way, way below the mainstream IPCC estimates of the historic contribution of CO2, as it implies the maximum historic contribution is less than 0.2C.)
  • A cyclic trend that looks like a sine wave centered on zero (such that over time it adds nothing to the long term trend) with a period of about 63 years.  Think of this as representing the net effect of cyclical climate processes such as the PDO and AMO.  (A minimal code sketch of all three components follows below.)
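Since the three components are fully specified by a few numbers, here is a minimal sketch of the model in Python.  The cycle amplitude, phase, and 1900 base year are my assumptions for illustration, not the values actually tuned in the post, and the output still needs re-centering against the anomaly base period.

```python
# A minimal sketch of the three components above, assuming a ~0.2C amplitude and
# a 1995 phase for the cyclic term; the post tuned amplitude and phase to fit
# history, so treat these particular numbers as illustrative, not the originals.
import numpy as np

def simple_model(years, cycle_amplitude=0.2, cycle_period=63.0, phase_year=1995.0):
    years = np.asarray(years, dtype=float)
    base = 0.004 * (years - 1900)                         # 0.4C per century, post-Little Ice Age
    post_1945 = 0.0035 * np.clip(years - 1945, 0, None)   # additional 0.35C per century after 1945
    cycle = cycle_amplitude * np.sin(2 * np.pi * (years - phase_year) / cycle_period)
    return base + post_1945 + cycle                        # re-center to the anomaly base period as needed

years = np.arange(1900, 2031)
modeled = simple_model(years)
print(modeled[years == 2013])   # modeled anomaly (before re-centering) for 2013
```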

Put in graphical form, here are these three drivers (the left axis in both is degrees C, re-centered to match the centering of HadCRUT4 temperature anomalies).  The two linear trends:

[Figure: the two linear trends]

 

And the cyclic trend:

[Figure: the cyclic trend]

These two charts are simply added and then can be compared to actual temperatures.  This is the way the comparison looked in 2007 when I first created this “model”:

[Figure: model vs. actual temperatures as of 2007]

The historic match is no great feat.  The model was admittedly tuned to match history (yes, the pros all tune their models too; unlike them, I admit it).  The linear trends, as well as the sine wave period and amplitude, were adjusted to make the fit work.

However, it is instructive to note that a simple model of a linear trend plus a sine wave matches history so well, particularly since it assumes such a small contribution from CO2, and since, in prior IPCC reports, the IPCC and most modelers simply refused to include cyclic functions like the AMO and PDO in their models.  You will note that the Coyote Climate Model was projecting a flattening, even a decrease, in temperatures when everyone else in the climate community was projecting that blue temperature line heading up and to the right.

So, how are we doing?  I never really meant the model to have predictive power.  I built it just to make some points about the potential role of cyclic functions in the historic temperature trend.  But based on updated HadCRUT4 data through July 2013, this is how we are doing:

[Figure: model vs. actual temperatures through July 2013]

 

Not too shabby.  Anyway, I do not insist on the model, but I do want to come back to a few points about temperature modeling and cyclic climate processes in light of the new IPCC report coming soon.

The decisions of climate modelers do not always make sense or seem consistent.  The best framework I can find for explaining their choices is to hypothesize that every choice is driven by trying to make the forecast future temperature increase as large as possible.  In past IPCC reports, modelers refused to acknowledge any natural or cyclic effects on global temperatures, and actually made statements that a) variations in the sun’s output were too small to change temperatures in any measurable way and b) it was not necessary to include cyclic processes like the PDO and AMO in their climate models.

I do not know why these decisions were made, but they had the effect of maximizing the amount of past warming that could be attributed to CO2, thus maximizing potential climate sensitivity numbers and future warming forecasts.  The reason for this was that the IPCC based nearly the totality of its conclusions about past warming rates and CO2 on the period 1978-1998.  They may talk about “since 1950,” but you can see from the chart above that all of the warming since 1950 actually happened in that narrow 20-year window.  During that 20-year window, though, solar activity, the PDO, and the AMO were also all peaking or in their warm phases.  So if the IPCC were to acknowledge that any of those natural effects had any influence on temperatures, they would have to reduce the amount of warming attributed to CO2 between 1978 and 1998, and thus their large future warming forecasts would have become even harder to justify.

Now, fast forward to today.  Global temperatures have been flat since about 1998, or for about 15 years.  This is difficult for the IPCC to explain, since almost none of the 60+ models in their ensembles predicted this kind of pause in warming.  In fact, temperature trends over the last 15 years have fallen below the 95% confidence level of nearly every climate model used by the IPCC.  So scientists must either change their models (eek!) or else explain why they are still correct despite missing the last 15 years of flat temperatures.

The IPCC is likely to take the latter course.  Rumor has it that they will attribute the warming pause to… ocean cycles and the sun (those things the IPCC said last time were irrelevant).  As you can see from my model above, this is entirely plausible.  My model has an underlying 0.75C per century trend after 1945, but even with this trend the modeled temperatures hit a roughly 30-year flat spot after the year 2000.  So it is entirely possible for an underlying trend to be temporarily masked by cyclical factors.

BUT.  And this is a big but.  You can also see from my model that you can’t assume that these factors caused the current “pause” in warming without also acknowledging that they contributed to the warming from 1978-1998, something the IPCC seems loath to do.  I do not know how the IPCC is going to deal with this.  I hate to think the worst of people, but I do not think it is beyond them to say that these factors offset greenhouse warming for the last 15 years but did not increase warming in the 20 years before that.

We shall see.  To be continued….

Return of “The Plug”

I want to discuss the recent Kaufmann study, which purports to reconcile flat temperatures over the last 10-12 years with high-sensitivity warming forecasts.  First, let me set the table for this post; to save time (things are really busy this week in my real job) I will quote from a previous post on this topic:

Nearly a decade ago, when I first started looking into climate science, I began to suspect the modelers were using what I call a “plug” variable.  I have decades of experience in market and economic modeling, and so I am all too familiar with the temptation to use one variable to “tune” a model, to make it match history more precisely by plugging in whatever number is necessary to make the model arrive at the expected answer.

When I looked at historic temperature and CO2 levels, it was impossible for me to see how they could be in any way consistent with the high climate sensitivities that were coming out of the IPCC models.  Even if all past warming were attributed to CO2 (a heroic assertion in and of itself), the temperature increases we have seen in the past imply a climate sensitivity closer to 1 than to 3 or 5 or even 10 (I show this analysis in more depth in this video).

My skepticism was increased when several skeptics pointed out a problem that should have been obvious.  The ten or twelve IPCC climate models all had very different climate sensitivities — how, if they have different climate sensitivities, do they all nearly exactly model past temperatures?  If each embodies a correct model of the climate, and each has a different climate sensitivity, only one (at most) should replicate observed data.  But they all do.  It is like someone saying she has ten clocks all showing a different time but asserting that all are correct (or worse, as the IPCC does, claiming that the average must be the right time).

The answer to this paradox came in a 2007 study by climate modeler Jeffrey Kiehl.  To understand his findings, we need to understand a bit of background on aerosols.  Aerosols are man-made pollutants, mainly combustion products, that are thought to have the effect of cooling the Earth’s climate.

What Kiehl demonstrated was that these aerosols are likely the answer to my old question about how models with high sensitivities are able to accurately model historic temperatures.  When simulating history, scientists add aerosols to their high-sensitivity models in sufficient quantities to cool them to match historic temperatures.  Then, since such aerosols are much easier to eliminate as combustion products than is CO2, they assume these aerosols go away in the future, allowing their models to produce enormous amounts of future warming.

Specifically, when he looked at the climate models used by the IPCC, Kiehl found they all used very different assumptions for aerosol cooling and, most significantly, he found that each of these varying assumptions was exactly what was required to combine with that model’s unique sensitivity assumptions to reproduce historical temperatures.  In my terminology, aerosol cooling was the plug variable.

So now we can turn to Kaufmann, summarized in this article and with full text here.  In the context of the Kiehl study discussed above, Kaufmann is absolutely nothing new.

Kaufmann et al declare that aerosol cooling is “consistent with” warming from manmade greenhouse gases.

In other words, there is some value that can be assigned to aerosol cooling that offsets high temperature sensitivities to rising CO2 concentrations enough to mathematically spit out temperatures sort of, kind of similar to those over the last decade.  But so what?  All Kaufmann did was, like every other climate modeler, find some value for aerosols that plugged temperatures to the right values.

Let’s consider an analogy.  A big fan of Juan Uribe (who plays 3B for the SF Giants baseball team) might argue that the 2010 Giants World Series run could largely be explained by Uribe’s performance.  They could build a model and find that the Giants’ 2010 win totals were entirely consistent with Uribe batting .650 for the season.

What’s the problem with this logic?  After all, if Uribe hit .650, he really would likely have been the main driver of the team’s success.  The problem is that we know what Uribe hit, and he batted under .250 last year.  When real facts exist, you can’t just plug in whatever numbers you want to make your argument work.

But in climate, we are not sure what exactly the cooling effect of aerosols is.  For related coal particulate emissions, scientists are so unsure of their effects that they don’t even know the sign (i.e., whether they are net warming or cooling).  And even if they had a good handle on the effects of aerosol concentrations, no one agrees on the actual numbers for aerosol concentrations or production.

And for all the light and noise around Kaufmann, the researchers did just about nothing to advance the ball on any of these topics.  All they did was find a number that worked, that made the models spit out the answer they wanted, and then argue in retrospect that the number was reasonable, though without any evidence.

Beyond this, their conclusions make almost no sense.  First, unlike CO2, aerosols are very short lived in the atmosphere – a matter of days rather than decades.  Because of this, they are poorly mixed, and so aerosol concentrations are spotty and generally can be found to the east (downwind) of large industrial complexes (see sample map here).

Which leads to a couple of questions.  First, if significant aerosol concentrations only cover, say, 10% of the globe, doesn’t that mean that to get a 0.5 degree cooling effect for the whole Earth, there must be a 5 degree cooling effect in the affected area?  Second, if this is so (and it seems unreasonably large), why have we never observed this cooling effect in the regions with high concentrations of manmade aerosols?  I understand the effect can be complicated by changes in cloud formation and such, but that is just further reason we should be studying the natural phenomenon rather than generating computer models that spit out arbitrary results with no basis in observational data.

Judith Curry does not find the study very convincing, and points to this study by Remer et al in 2008 that showed no change in atmospheric aerosol depths through the heart of the period of supposed increases in aerosol cooling.

So the whole basis for the study is flawed – it is based on the effect of increasing aerosol concentrations that are not actually increasing.  Just because China is producing more does not apparently mean there is more in the atmosphere – it may be that reductions in other areas, like the US and Europe, are offsetting Chinese emissions, or that nature has mechanisms for absorbing and eliminating the increased emissions.

By the way, here was Curry’s response, in part:

This paper points out that global coal consumption (primarily from China) has increased significantly, although the dataset referred to shows an increase only since 2004-2007 (the period 1985-2003 was pretty stable).  The authors argue that the sulfates associated with this coal consumption have been sufficient to counter the greenhouse gas warming during the period 1998-2008, which is similar to the mechanism that has been invoked  to explain the cooling during the period 1940-1970.

I don’t find this explanation to be convincing because the increase in sulfates occurs only since 2004 (the solar signal is too small to make much difference).  Further, translating regional sulfate emission into global forcing isnt really appropriate, since atmospheric sulfate has too short of an atmospheric lifetime (owing to cloud and rain processes) to influence the global radiation balance.

Curry offers the alternative explanation of natural variability offsetting CO2 warming, which I think is partly true.  Though Occam’s Razor has to force folks at some point to finally question whether high (3+) temperature sensitivities to CO2 make any sense.  Seriously, isn’t all this work on aerosols roughly equivalent to plugging in yet more epicycles to make the Ptolemaic model of the universe continue to work?

Postscript: I will agree that there is one very important effect of the ramp-up of Chinese coal-burning that began around 2004 — the melting of Arctic ice.  I strongly believe that the increased summer melts of Arctic ice are in part a result of black carbon from Asian coal burning landing on the ice and reducing its albedo (and greatly accelerating melt rates).  Look at when Arctic sea ice extent really dropped off: it was after 2003.  Northern polar temperatures have been fairly stable in the 2000s (the real run-up happened in the 1990s).  The delay could be just inertia in the ocean heating system, but Arctic ice melting sure seems to correlate better with black carbon from China than it does with temperature.

I don’t think there is anything we could do with a bigger bang for the buck than to reduce particulate emissions from Asian coal.  This is FAR easier than CO2 emissions reductions — it’s something we have done in the US for nearly 40 years.

Climate Models

My article this week at Forbes.com digs into some fundamental flaws of climate models:

When I looked at historic temperature and CO2 levels, it was impossible for me to see how they could be in any way consistent with the high climate sensitivities that were coming out of the IPCC models.  Even if all past warming were attributed to CO2 (a heroic assertion in and of itself), the temperature increases we have seen in the past imply a climate sensitivity closer to 1 than to 3 or 5 or even 10 (I show this analysis in more depth in this video).

My skepticism was increased when several skeptics pointed out a problem that should have been obvious.  The ten or twelve IPCC climate models all had very different climate sensitivities — how, if they have different climate sensitivities, do they all nearly exactly model past temperatures?  If each embodies a correct model of the climate, and each has a different climate sensitivity, only one (at most) should replicate observed data.  But they all do.  It is like someone saying she has ten clocks all showing a different time but asserting that all are correct (or worse, as the IPCC does, claiming that the average must be the right time).

The answer to this paradox came in a 2007 study by climate modeler Jeffrey Kiehl.  To understand his findings, we need to understand a bit of background on aerosols.  Aerosols are man-made pollutants, mainly combustion products, that are thought to have the effect of cooling the Earth’s climate.

What Kiehl demonstrated was that these aerosols are likely the answer to my old question about how models with high sensitivities are able to accurately model historic temperatures.  When simulating history, scientists add aerosols to their high-sensitivity models in sufficient quantities to cool them to match historic temperatures.  Then, since such aerosols are much easier to eliminate as combustion products than is CO2, they assume these aerosols go away in the future, allowing their models to produce enormous amounts of future warming.

Specifically, when he looked at the climate models used by the IPCC, Kiehl found they all used very different assumptions for aerosol cooling and, most significantly, he found that each of these varying assumptions was exactly what was required to combine with that model’s unique sensitivity assumptions to reproduce historical temperatures.  In my terminology, aerosol cooling was the plug variable.

News Roundup

For a variety of reasons I have been limited in blogging, but here is a brief roundup of interesting stories related to the science of anthropogenic global warming.

  • Even by the EPA’s own alarmist numbers, a reduction in man-made warming of 0.01C in the year 2100 would cost $78 billion per year.  This is over $7 trillion a year per degree of avoided warming, again even using the EPA’s overly high climate sensitivity numbers.  For scale, this is almost half the entire US GDP.  This is why the precautionary principle was always BS – it assumed that the cost of action was virtually zero.  Sure it makes sense to avoid low-likelihood but high-cost future contingencies if the cost of doing so is low.  But half of GDP?
  • As I have written a zillion times, most of the projected warming from CO2 is not from CO2 directly but from positive feedback effects hypothesized in the climate system.  The largest of these is water vapor.  Water vapor (unlike CO2) is a strong greenhouse gas, and if small amounts of warming increase water vapor in the atmosphere, that would be a positive feedback effect that would amplify warming.  Most climate modelers assume relative humidity stays roughly flat as the world warms, meaning total water vapor content in the atmosphere will rise.  In fact, this does not appear to have been the case over the last 50 years, as relative humidity has fallen while temperatures have risen.  Further, in a peer-reviewed article, scientists suggest certain negative feedbacks that would tend to reduce atmospheric water vapor.
  • A new paper reduces the no-feedback climate sensitivity to CO2 from about 1-1.2C per doubling (which I and most other folks have been using) to something like 0.41C.  This is the direct sensitivity to CO2, before any feedbacks, if I understand the paper correctly.  In that sense, the paper seems to be wrong in comparing this sensitivity to the IPCC numbers, which include feedbacks.  A more correct comparison is of the 0.41C to a number of about 1.2C, which is what I think the IPCC is using.  Nevertheless, if correct, halving this pre-feedback sensitivity number should roughly halve the post-feedback number.

My hypothesis continues to be that the post feedback climate sensitivity to CO2 number, expressed as degrees C per doubling of atmospheric CO2 concentrations, is greater than zero and less than one.

  • It is pretty much time to stick a fork in the hide-the-decline debate.  This is yet another occasion when folks (in this case Mann, Briffa, Jones) should have said “yep, we screwed up” years ago and moved on.  Here is the whole problem in 2 charts.  Steve McIntyre recently traced the hide-the-decline trick (which can be summarized as truncating/hiding/obfuscating data that undermined their hypothesis on key charts) back to an earlier era.

My Favorite Topic, Feedback

I have posted on this a zillion times over here, and most of you are up to speed on this, but I posted this for my Coyote Blog readers and thought it would be good to repost over here.

Take all the pseudo-quasi-scientific stuff you read in the media about global warming.  Of all that mess, it turns out there is really only one scientific question that really matters on the topic of man-made global warming: feedback.

While the climate models are complex, and the actual climate even, err, complexer, we can shortcut the reaction of global temperatures to CO2 to a single figure called climate sensitivity: how many degrees of warming should the world expect for each doubling of CO2 concentrations?  (The relationship is logarithmic, which is why sensitivity is based on doublings rather than absolute increases — an increase of CO2 from 280 to 290 ppm should have a higher impact on temperatures than an increase from, say, 380 to 390 ppm.)
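Here is a tiny numeric sketch of that logarithmic relationship; the 3C-per-doubling sensitivity used below is just the IPCC headline number, plugged in for illustration rather than as an endorsement.

```python
# Tiny numeric sketch of the logarithmic relationship; the 3C-per-doubling
# sensitivity is just the IPCC's headline number, used here for illustration.
import math

def warming(c1_ppm, c2_ppm, sensitivity_per_doubling=3.0):
    # Equal *ratios* of concentration produce equal warming
    return sensitivity_per_doubling * math.log(c2_ppm / c1_ppm, 2)

print(warming(280, 290))   # ~0.15C for an early 10 ppm increase
print(warming(380, 390))   # ~0.11C for the same 10 ppm increase later on
print(warming(280, 560))   # 3.0C for one full doubling, by definition
```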

The IPCC reached a climate sensitivity to CO2 of about 3C per doubling.  More popular (at least in the media) catastrophic forecasts range from 5C on up to about any number you can imagine, way past any range one might consider reasonable.

But here is the key fact — most folks, including the IPCC, believe the warming sensitivity from CO2 alone (before feedbacks) is around 1C or a bit higher (arch-alarmist Michael Mann did the research the IPCC relied on for this figure).  All the rest of the sensitivity between this 1C and the 3C or 5C or whatever the forecast is comes from feedbacks (e.g. hotter weather melts ice, which causes less sunlight to be reflected, which warms the world more).  Feedbacks, by the way, can be negative as well, acting to reduce the warming effect.  In fact, most feedbacks in our physical world are negative, but alarmist climate scientists tend to assume very high positive feedbacks.

What this means is that 70-80% or more of the warming in catastrophic warming forecasts comes from feedback, not CO2 acting alone.   If it turns out that feedbacks are not wildly positive, or even are negative, then the climate sensitivity is 1C or less, and we likely will see little warming over the next century due to man.
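As a quick back-of-envelope check of that 70-80% figure, assuming the roughly 1C no-feedback sensitivity cited above:

```python
# Back-of-envelope check of the share of forecast warming that comes from
# feedback, assuming the ~1C no-feedback sensitivity cited above.
no_feedback = 1.0
for total in (3.0, 4.0, 5.0, 10.0):          # post-feedback forecasts, C per doubling
    share = 1.0 - no_feedback / total
    print(f"{total:4.1f}C total -> {share:.0%} of the warming is from feedback")
# prints 67%, 75%, 80%, 90%
```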

This means that the only really important question in the manmade global warming debate is the sign and magnitude of feedbacks.  And how much of this have you seen in the media?  About zero?  Nearly 100% of what you see in the media is not only so much bullshit (like whether global warming is causing the cold weather this year), but also irrelevant, entirely tangential to the core question.  It’s all so much magician hand-waving, trying to hide what is going on, or in this case not going on, with the other hand.

To this end, Dr. Roy Spencer has a nice update.  Parts are a bit dense, but the first half explains this feedback question in layman’s terms.  The second half shows some attempts to quantify feedback.  His message is basically that no one knows even the sign and much less the magnitude of feedback, but the empirical data we are starting to see (which has admitted flaws) points to negative rather than positive feedback, at least in the short term.  His analysis looks at the change in radiative heat transfer in and out of the earth as measured by satellites around transient peaks in ocean temperatures (oceans are the world’s temperature flywheel — most of the Earth’s surface heat content is in the oceans).

Read it all, but this is an interesting note:

In fact, NO ONE HAS YET FOUND A WAY WITH OBSERVATIONAL DATA TO TEST CLIMATE MODEL SENSITIVITY. This means we have no idea which of the climate models projections are more likely to come true.

This dirty little secret of the climate modeling community is seldom mentioned outside the community. Don’t tell anyone I told you.

This is why climate researchers talk about probable ranges of climate sensitivity. Whatever that means!…there is no statistical probability involved with one-of-a-kind events like global warming!

There is HUGE uncertainty on this issue. And I will continue to contend that this uncertainty is a DIRECT RESULT of researchers not distinguishing between cause and effect when analyzing data.

If you find this topic interesting, I recommend my video and/or powerpoint presentation to you.

Forecasting

One of the defenses often used by climate modelers against charges that climate is simply too complex to model accurately is that “they do it all the time in finance and economics.”  This comes today from Megan McArdle on economic forecasting:

I find this pretty underwhelming, since private forecasters also unanimously think they can make forecasts, a belief which turns out to be not very well supported.  More than one analysis of these sorts of forecasts has found them not much better than random chance, and especially prone to miss major structural changes in the economy.   Just because toggling a given variable in their model means that you produce a given outcome, does not mean you can assume that these results will be replicated in the real world.  The poor history of forecasting definitionally means that these models are missing a lot of information, and poorly understood feedback effects.

Sounds familiar, huh?  I echoed these sentiments in a comparison of economic and climate forecasting here.

Garbage In, Money Out

In my Forbes column last week, I discuss the incredible similarity between the computer models that are used to justify the Obama stimulus and the climate models that form the basis for the proposition that manmade CO2 is causing most of the world’s warming.

The climate modeling approach is so similar to that used by the CEA to score the stimulus that there is even a climate equivalent to the multiplier found in macro-economic models. In climate models, small amounts of warming from man-made CO2 are multiplied many-fold to catastrophic levels by hypothetical positive feedbacks, in the same way that the first-order effects of government spending are multiplied in Keynesian economic models. In both cases, while these multipliers are the single most important drivers of the models’ results, they also tend to be the most controversial assumptions. In an odd parallel, you can find both stimulus and climate debates arguing whether their multiplier is above or below one.

How similar does this sound to climate science:

If macroeconometrics were a viable paradigm, we would have seen major efforts to try to bring this sort of model up to date from its 1975 time warp. However, for reasons I have documented, the profession has decided that this macroeconometric project was a blind alley. Nobody bothered to bring these models up to date, because that would be like trying to bring astrology up to date.

This, from Arnold Kling about macroeconomic models, could have been written just as well to describe the process for running climate models:

Thirty-five years ago, I was Blinder’s research assistant, doing these sorts of simulations on the Fed-MIT-Penn model for the Congressional Budget Office. I think they are still done the same way. See lecture 13. Here are some of the things that Blinder had to tell his new research assistant to do.

1. Make sure that there were channels in the model for credit market conditions to affect consumption and investment.

2. Correct the model’s past forecast errors, so that it would track the actual behavior of the economy over the past two years exactly. With the appropriate “add factors” or correction factors, the model then produces a “baseline scenario” that matches history and then projects out to the future. For the future, a judgment call has to be made as to how rapidly the add factors should decay. That is mostly a matter of aesthetics.

3. Simulate the model without the fiscal stimulus. This will result in the model’s standard multiplier analysis.

4. Make up an alternative path for what you think would have happened in credit markets without TARP and other extraordinary measures. For example, you might assume that mortgage interest rates would have been one percentage point higher than they actually were.

5. Simulate the model with this alternative scenario for credit market conditions.

6. (4) and (5) together create a fictional scenario of how the economy would have performed had the government not taken steps to fight the crisis. According to the model, this fictional scenario would have been horrid, with unemployment around 15 percent.

In the case of climate, the equivalent fictional scenario would be the world without manmade CO2, but the process of tweaking input variables and assuming one’s conclusions is the same.
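To make the “add factor” step in the Kling quote above concrete, here is a hypothetical sketch; the model outputs, observations, and decay rate are all invented for illustration and have nothing to do with the actual Fed-MIT-Penn model.

```python
# Hypothetical illustration of the "add factor" step in the Kling quote above.
# All numbers are invented; the point is only the mechanics: recent model errors
# are applied as corrections so the baseline tracks history, then decayed forward.
import numpy as np

actual_history = np.array([1.0, 1.2, 0.9, 1.1])   # observed values, last 4 periods
model_history  = np.array([0.8, 1.0, 1.1, 1.0])   # raw model output, same periods
model_forecast = np.array([1.1, 1.2, 1.3])        # raw model output, next 3 periods

add_factors = actual_history - model_history       # corrections that force an exact fit to history
decay = 0.5                                         # how fast corrections fade: "a matter of aesthetics"

corrections = add_factors[-1] * decay ** np.arange(1, len(model_forecast) + 1)
baseline_forecast = model_forecast + corrections

print("add factors over history:", add_factors)
print("adjusted baseline forecast:", baseline_forecast)
```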

Computer Model Fail

From the New Scientist:

What’s special about this latest dip is that the sun is having trouble starting the next solar cycle. The sun began to calm down in late 2007, so no one expected many sunspots in 2008. But computer models predicted that when the spots did return, they would do so in force. Hathaway was reported as thinking the next solar cycle would be a “doozy”: more sunspots, more solar storms and more energy blasted into space. Others predicted that it would be the most active solar cycle on record. The trouble was, no one told the sun.

The first sign that the prediction was wrong came when 2008 turned out to be even calmer than expected. That year, the sun was spot-free 73 per cent of the time, an extreme dip even for a solar minimum. Only the minimum of 1913 was more pronounced, with 85 per cent of that year clear.

As 2009 arrived, solar physicists looked for some action. They didn’t get it. The sun continued to languish until mid-December, when the largest group of sunspots to emerge for several years appeared. Finally, a return to normal? Not really.

Even with the solar cycle finally under way again, the number of sunspots has so far been well below expectations. Something appears to have changed inside the sun, something the models did not predict. But what?

Of course, Anthony Watts has been pointing this out for over two years, even pointing to a discontinuity in the Geomagnetic Average Planetary Index as one sign.

The people who model the sun, and failed, are not bad people.  It is an exercise worth attempting.  It turns out we just don’t know enough about the sun to accurately model its behavior.  Models are only as good as our understanding of the underlying natural processes.  Something to think about with climate models.

Your Humble Scribe Quoted in WaPo Article on Computer Models

The article on climate modeling is here, and is pretty good.  My bit is below, from web page 3:

But Warren Meyer, a mechanical and aerospace engineer by training who blogs at www.climate-skeptic.com, said that climate models are highly flawed. He said the scientists who build them don’t know enough about solar cycles, ocean temperatures and other things that can nudge the earth’s temperature up or down. He said that because models produce results that sound impressively exact, they can give off an air of infallibility.

But, Meyer said — if the model isn’t built correctly — its results can be both precise-sounding and wrong.

“The hubris that can be associated with a model is amazing, because suddenly you take this sketchy understanding of a process, and you embody it in a model,” and it appears more trustworthy, Meyer said. “It’s almost like money laundering.”

I actually like my term “knowledge laundering.”

Is It Wrong to Apply a Simple Amplifier Gain Mental Model to Climate?

Today will actually be fun, because it involves criticism of some of my writing around what I find to be the most interesting issue in climate, that of feedback effects.  I have said for a while that greenhouse gas theory is nearly irrelevant to the climate debate, because most scientists believe that the climate sensitivity to CO2 acting alone, without feedbacks, is low enough (1.2C per doubling) to not really be catastrophic.  So the question of whether man-made warming will be catastrophic depends on the assumption of strong net positive feedbacks in the climate system.  B Kalafut believes I have the wrong mental model for thinking about feedback in climate, and I want to review his post in depth.

Naming positive feedbacks is easy. In paleoclimate, consider the effect of albedo changes at the beginning of an ice age or the “lagging CO2” at the end. In the modern climate, consider water vapor as a greenhouse gas, or albedo changes as ice melts. In everyday experience, consider convection’s role in sustaining a fire. Consider the nucleation of raindrops or snowflakes or bubbles in a pot of boiling water. At the cellular level, consider the voltage-gated behavior of the sodium channels in a nerve axon or the “negative damping” of hair cells in the cochlea.

I am assuming he is refuting my statement that “it is hard to find systems dominated by strong net positive feedbacks that are stable over long periods of time.”  I certainly never said individual positive feedbacks don’t exist, and even mentioned some related to climate, such as ice albedo and increases in water vapor in air.  I am not sure we are getting anywhere here, but his next paragraph is more interesting.

On to the meat of Meyer’s argument: he seizes on one word (“feedback”) and runs madly, from metaphor to mental model. Metaphor: “like in an ideal amplifier”. Model: The climate experiences linear feedback as in an amplifier–see the math in his linked post or in the Lindzen slides from which he gets the idea. And then he makes the even worse leap, to claiming that climate models (GCMs) “use” something called “feedback fractions”. They do not–they take no such parameters as inputs but rather attempt to simulate the effects of the various feedback phenomena directly. This error alone renders Meyer’s take worthless–it’s as though he enquires about what sort of oats and hay one feeds a Ford Mustang. Feedback in climate are also nonlinear and time-dependent–consider why the water vapor feedback doesn’t continue until the oceans evaporate–so the ideal amplifier model cannot even be “forced” to apply.

First, I don’t remember ever claiming that climate models used a straight feedback-amplification method.  And I am absolutely positive I never said GCMs use feedback fractions.  I would not expect them to.  This is a total straw man.  I am using a simple feedback amplification model as an abstraction to represent the net results of the models in a way a layman might understand, backing into an implied fraction f from published warming forecasts and comparing them to the 1.2C non-feedback number.  Much in the same way that scientists use the concept of climate sensitivity to shortcut a lot of messy detail and non-linearity.  I am, however, open to the possibility that mine is a poor mental model, so let’s think about it.

Let’s start with an analogy.  There are very complicated electronic circuits in my stereo amplifier.  Nowadays, when people design those circuits, they have sophisticated modeling programs that can do a time-based simulation of voltage and current at every point in the circuit.  For a simulated input, the program will predict the output, and show it over time, even if it is messy and non-linear.  These models are in some ways like climate models, except that we understand electronic components better so our parametrization is more precise and reliable.    All that being said, it does not change the fact that a simple feedback-gain model for sections of the complex amplifier circuitry is still a useful mental model for the process at some level of abstraction, as long as one understands the shortcomings that come from any such simplification.

The author is essentially challenging the use of Gain = 1/(1-f) to represent the operation of the feedbacks here.  So let’s think about whether this is appropriate.  Let’s begin by thinking about a single feedback, ice albedo.  The theory is that there is some amount of warming from CO2, call it dT.  This dT will cause more ice to melt than otherwise would have (or less ice to form in the winter).  The ice normally reflects more heat and sunlight back into space than open ocean or bare ground, so when it is reduced, the Earth gets a small incremental heat flux that will result in an increase in temperatures.  We will call this extra increase in temperature f*dT, where f is most likely a positive number less than one.  So now our total increase, call it dT’, is dT + f*dT.  But this increase of f*dT will in turn cause some more ice to melt.  By the same logic as above, this increase will be f*f*dT.  And so on in an infinite series.  The solution to this series for a constant value of f is dT’ = dT/(1-f), thus the formula above.
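A quick numeric check of that series, with an arbitrary feedback factor of 0.6, shows it converging to the closed-form gain:

```python
# Numeric check of the series just described, with an arbitrary feedback factor.
dT, f = 1.0, 0.6            # hypothetical initial warming and feedback factor

total, term = 0.0, dT
for _ in range(50):         # 50 rounds of feedback-on-feedback is plenty to converge
    total += term
    term *= f

print(total)                # ~2.5
print(dT / (1.0 - f))       # 2.5, the closed-form gain 1/(1-f)
```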

So the underlying operation of the feedback is the same:  Input –> output –> output modifies input.   There are not somehow different flavors or types of feedback that operate in radically different ways but have the same name  (as in his Mustang joke).

The author claims the climate models are building up the effects of processes like ice albedo from their pieces, i.e., rather than abstracting into the gain formula, the models are adding up all the individual pieces, on a grid, over time.  I am sure that is true.  The question is not whether they use the simplified feedback formula, but whether it is a useful abstraction.  I see nothing in my description of the ice albedo process to say it is not.

What happens if there are time delays?  Well, as long as f is less than 1, the system will reach steady state at some point and this formula should apply.  What happens if the feedback is non-linear?  Well, in most natural systems, it is almost certainly non-linear.  In our ice albedo example, f is almost certainly different at different temperature levels (for example, a change from -30C to -31C has a lot less effect on ice albedo than a change from 0C to 1C).  The factor f is probably also dependent on the amount of ice remaining, since in the limit when all the ice is melted there should be no further effect.  But I would argue that when we pull back and look at the forest instead of the trees (a critical skill for modelers, who too often get buried in their minutiae while losing the ability to reality-check their results), the 1/(1-f) form is still an interesting if imperfect abstraction for the results, particularly since we are looking at tenths of a degree, and it is hard for me to believe the response is wildly non-linear over that kind of range.  (By the way, it is not at all unusual for mainstream alarmist scientists to use this same feedback formula as a useful though imperfect abstraction, for example in Gerard H. Roe and Marcia B. Baker, “Why Is Climate Sensitivity So Unpredictable?”, Science 318 (2007): 629–632.  Not free, but summarized here.)

To determine if it is a useful abstraction, I would ask the author what conclusions I draw that fall apart.  I really only made two points with the use of feedback anyway.

  1. I used the discussion to educate people that feedback is the main source of catastrophic warming, so it should be the main focus of scientific scrutiny and replication.  We can argue all day about time delays and non-linearity, but if the IPCC says the warming from CO2 alone is going to be 1.2C per doubling and the warming with all feedbacks considered is going to be, say, 4.8C per doubling (the author says himself that the models all converge at constant CO2), then we can say feedback is amplifying the initial man-made input by a factor of 4, or alternatively, that 75% of the warming is from feedback effects, so these are probably where we need to focus.  I struggle to see how one can argue with this.
  2. I used the simple gain formula to say that if feedback were quadrupling the initial warming, this implies a feedback factor of 0.75, and that this number is pretty dang high for a long-term stable system.  Yes, the feedback is non-linear, but I don’t think this is an unreasonable reality check on the models to see what sorts of average feedbacks are being produced by their parameters.

The author’s points on non-linearity and time delays are actually more relevant to the discussion in other presentations, where I talked about whether the climate models that show high future sensitivities to CO2 are consistent with past history, particularly if warming in the surface temperature record is exaggerated by urban biases.  But even forgetting about these, it is really hard to reconcile sensitivities of, say, four degrees per doubling with history, where we have had about 0.6C of warming (assuming, irrationally, that it is all man-made) in about 42% of a doubling (the effect, I will add, is non-linear, so one should see more warming in the first half than the second half of a doubling).  Let’s leave out aerosols for today (those are the great modeler’s miracle cure that allows every model, even those with widely varying CO2 sensitivities and feedback effects, to back-cast history exactly).  These time delays and non-linearities could help reconcile the two, though my understanding is that the time delay is thought to be on the order of 12 years, which would not reconcile things at all.  I suppose one could assume non-linearity such that the feedback effects accelerate with time past some tipping point, but I will say I have yet to see any convincing physical study that points to this effect.

Well, the weather is lovely outside so I suppose I should get on with it:

Meyer draws heavily from a set of slides from a talk by Richard Lindzen before a noncritical audience. These slides are full of invective and conspiracy talk, and their scientific content is lousy. Specifically, Lindzen supposedly estimates effective linear feedbacks for various GCMs and finds some greater than one. The mathematics presented by Lindzen in his slides does not allow that, and he doesn’t provide details of how such things even could be inferred. An effective linear feedback greater than one implies a runaway process, yet GCMs are always run for finite time, so there cannot be divergence to infinity. Moreover, as far as I know, all of the GCMs are known to converge once CO2 is stabilized.

I draw on Lindzen and Lindzen is wrong about a bunch of stuff and Lindzen uses invective and conspiracy talk so, what?  Lindzen can answer all of this stuff.  I used one chart from Lindzen, and it wasn’t even about feedback  (I will reproduce it below).

I did mention that in theory, if the feedback factor is greater than one (in other words, if the first-order feedback addition to the input is greater than the original input), then the function rapidly runs away to infinity.  Which it does.  I don’t know what Lindzen has to say about this or what the author is referring to.  My only point is that when folks like Al Gore talk about runaway warming and Earth becoming Venus, they are really implying runaway positive feedback effects with feedback factors greater than one.  Since I really don’t go anywhere with this, and in reality the author is debating Lindzen over an argument or analysis I am not even familiar with, I will leave this alone.  The only thing I will say is that his last sentence seems on point, but his second-to-last is double talk.  All he is saying is that by only solving a finite number of terms in a divergent infinite series, his calculations don’t go to infinity.  Duh.

I am open to considering whether I have the correct mental model.  But I reject the notion that it is wrong to try to simplify and abstract the operation of climate models.  I have not modeled the climate, but I have modeled complex financial, economic, and mechanical systems.  And here is what I can tell you from that experience — the more people tell me that they have modeled a system in the most minute parametrization, and that the models in turn are not therefore amenable to any abstraction, the less I trust their models.  These parameters are guesses, because there just isn’t enough understanding of the complex and chaotic climate system to parse out their different values, or to even be clear about cause and effect in certain processes  (like cloud formation).

I worry about the hubris of climate modelers telling me that it is wrong, and impossible, to try to tease out one value for net feedback for the entire climate, and that instead I should be thinking in terms of teasing out hundreds or thousands of parameters related to feedback.  This is what I call knowledge laundering:

These models, whether forecasting tools or global temperature models like Hansen’s, take poorly understood descriptors of a complex system in the front end and wash them through a computer model to create apparent certainty and precision.  In the financial world, people who fool themselves with their models are called bankrupt (or bailed out, I guess).  In the climate world, they are Oscar and Nobel Prize winners.

This has incorrectly been interpreted as my saying these folks are wrong for trying to model these systems.  Far from it — I have spent a lot of my life trying to model less complex systems.  I just want to see some humility.

Postscript: Here is the only chart in my presentation that I know of that comes from Lindzen, and it’s not even in the video he links to; it is in this longer and more comprehensive video:

That seems a reasonable enough challenge to me, particularly given the data in this post and this quote from Judith Curry, certainly not a skeptic:

They don’t disprove anthropogenic global warming, but we can’t airbrush them away. We need to incorporate them into the overall story. We had two bumps—in the ’90s and also in the ’30s and ’40s—that may have had the same cause. So we may have exaggerated the trend in the later half of the 20th century by not adequately interpreting these bumps from the ocean oscillations. I don’t have all the answers. I’m just saying that’s what it looks like.

Again, as I have said before, man’s CO2 is almost certainly contributing to a warming trend.  But when we really look at history objectively and tease out measurement problems and cyclical phenomena, we are going to find that this trend is entirely consistent with a zero to negative feedback assumption for the climate as a whole, meaning that man’s CO2 is driving 1.2C or less of warming per doubling of CO2 concentrations.

The Single Most Important Point

Given all the activity of late challenging various aspects of the IPCC’s work, I wanted to remind folks of probably the most important assumption in the IPCC (and related climate models) that seldom makes the media.

Greenhouse gas theory alone does not give us a catastrophe.  By the IPCC numbers, originally I think from Michael Mann in 1998, greenhouse warming from CO2 should be about 1.2C per doubling of CO2 concentrations.  But the IPCC gets a MUCH higher final number than this.  The reason is positive feedback.  This is a second theory, that the Earth’s temperature system is dominated by very strong net positive feedback effects.  Even if greenhouse gas theory is “settled,” it does not get us a catastrophe.  The catastrophe comes from the positive feedback theory, and this is most definitely not settled.

I usually put it this way to laymen: imagine the Earth’s climate is a car.  Greenhouse gas theory says CO2 will only give the car a nudge.  In most cases, this nudge will only move the car a little bit, because a lot of forces work to resist the nudge.  Catastrophic climate theory, however, assumes that the car is actually perched precariously at the very top of a steep hill, such that a small nudge will start the car rolling downhill until it crashes.  This theory that the Earth is perched precariously at the top of the hill is positive feedback theory, and it is far from settled.  In fact, a reasonable person can immediately challenge it by asking the sensible question: “well, how has the climate managed to avoid a nudge (and resulting crash) for hundreds of millions of years?”

I got to thinking about all this because Nicola Scafetta uses a chart of mine in his SPPI report on climate change:

I am happy he chose this chart, because it is one of my favorites.   It shows that most of the forecast warming from major alarmist models comes from the positive feedback theory, and not from greenhouse gas theory.  Let me explain how it is built.

The blue line at the bottom is based on an equation right out of the Third IPCC Report (the Fourth Report seems to assume it is still valid but does not include it anywhere I can find).  The equation seems to be from Mann 1998, and is for the warming effect from CO2 without feedbacks.   The equation is:

∆T = F(C2) – F(C1)
where F(C) = ln(1 + 1.2C + 0.005C^2 + 0.0000014C^3)

So the blue line is just this equation where C1=385ppm and C2 is the concentration on the X axis.

The other lines don’t exist in the IPCC reports that I can find, though they should.**  What I did was to take various endpoint forecasts from the IPCC and from other sources and simply scale the blue line up, which implicitly assumes feedback acts uniformly across the range of concentrations.  So, for example, a forecast after feedback of 4.8C of warming around 800 ppm was assumed to scale the blue no-feedback line up by a uniform factor of 4.8/1.2 = 4x.  For those who know the feedback formula, we can back into the implied feedback fraction (again not to be found anywhere in the IPCC report), which would be 4 = 1/(1-f), so f = 0.75, which is quite a high factor.

** This seems like a totally logical way to show the warming effect from CO2, but the IPCC always insists on showing just warming over time.  But this confuses the issue because it is also dependent on expected CO2 emissions forecasts.  I know there are issues of time delays, but I think a steady-state version of this chart would be helpful.
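For anyone who wants to reproduce the chart, here is a minimal sketch using the F(C) expression quoted above and the 4.8C scaling example from the text.  The 385 ppm base and the sample concentrations are my assumptions for illustration.

```python
# Sketch of the chart construction: the no-feedback warming curve from the F(C)
# expression quoted above (relative to an assumed 385 ppm base), plus the uniform
# scaling step and implied feedback fraction from the 4.8C example in the text.
import math

def F(c_ppm):
    return math.log(1 + 1.2 * c_ppm + 0.005 * c_ppm**2 + 0.0000014 * c_ppm**3)

def no_feedback_warming(c2_ppm, c1_ppm=385.0):
    return F(c2_ppm) - F(c1_ppm)

for c in (450, 560, 770):                       # sample concentrations, ppm
    print(f"{c} ppm: {no_feedback_warming(c):.2f} C without feedback")

gain = 4.8 / 1.2                                # post-feedback forecast vs. no-feedback number
print(f"uniform gain {gain:.1f}x implies feedback fraction f = {1 - 1/gain:.2f}")
```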

Knowledge Laundering

Charlie Martin is looking through some of James Hansen’s emails and found this:

[For] example, we extrapolate station measurements as much as 1200 km. This allows us to include results for the full Arctic. In 2005 this turned out to be important, as the Arctic had a large positive temperature anomaly. We thus found 2005 to be the warmest year in the record, while the British did not and initially NOAA also did not. …

So he is trumpeting this approach as an innovation?  Does he really think he has a better answer because he has extrapolated station measurements by 1200 km (746 miles)?  This is roughly equivalent, in distance, to extrapolating the temperature in Fargo to Oklahoma City.  This just represents for me the kind of false precision, the over-estimation of knowledge about a process, that so characterizes climate research.  If we don’t have a thermometer near Oklahoma City, then we don’t know the temperature in Oklahoma City, and let’s not fool ourselves that we do.

I had a call from a WaPo reporter today about modeling and modeling errors.  We talked about a lot of things, but my main point was that whether in finance or in climate, computer models typically perform what I call knowledge laundering.   These models, whether forecasting tools or global temperature models like Hansen’s, take poorly understood descriptors of a complex system in the front end and wash them through a computer model to create apparent certainty and precision.  In the financial world, people who fool themselves with their models are called bankrupt (or bailed out, I guess).  In the climate world, they are Oscar and Nobel Prize winners.

Update: To the 1200 km issue, this is somewhat related.

Water Vapor Feedback

In almost all of the climate models, the warming effect from feedback is actually much larger than the warming effect from CO2 alone.  That is why I have said for years that it is a waste of time to debate “greenhouse gas theory”: the theory that really matters to the proposition that climate sensitivity to CO2 is high is the theory that Earth’s temperature system is dominated by strong positive feedback.  And the largest feedback in climate models tends to be water vapor feedback, despite the fact that even the IPCC admits that such feedback is poorly understood.  To this end:

In a third paper, accepted for publication by the Journal of Theoretical and Applied Climatology, three scientists – two Australians and one American, revisit data on upper-atmospheric humidity. The three are Garth Paltridge, Albert Arking and Michael Pook, and they have found that, contrary to climate model predictions, water vapour in the upper atmosphere is acting as a brake on global warming.

Established climate models assume constant humidity at all levels in the atmosphere as the temperature rises. But, using data from weather balloons accumulated over 35 years, these researchers find this is not so. At the lower levels, it is higher than expected, dropping below normal at the higher altitudes.

This, they say, implies that “long-term water vapour feedback is negative – that it would reduce rather than amplify the response of the climate system to external forcing such as that from increasing atmospheric CO2.” This, in one fell swoop, challenges the central premise of the warmists that, once CO2 reaches a certain level, we experience runaway global warming.

Shut Up, For the Children

Thought I would share a couple of bits of an email I got today.  The email showed a distinct lack of familiarity with the nuances of my climate position, so my guess is this may be a form letter.  I find it interesting a 17-year-old knows the term “NGO” but does not know to capitalize the first letter in a sentence (emphasis added).

hello.
this is a (hopefully) reasonable and (hopefully) well thought out message.
firstly i will say that i am 17 years old and not under the sway of any goverments/NGOs.
i believe that what you are doing with your climate skeptic blog is dangerous.
dangerous not only to yourself (in a minor way), but to my generation(in a much bigger way)….  [portion snipped out here basically talking about the writer’s view of what science is beyond dispute and lecturing me on the precautionary principle]

you’ll probably think it’s rich, being lectured on ‘responsibility’ by a mere 17 year old, but hear (or read ;)) me out…
by publishing your blog i believe you are infringing upon successive generations’ fundamental basic human right to life.
denying climate change is fine if you just hold these veiws and keep them to yourself and don’t overtly act upon them.
it does however become infinitely more dangerous to my generation to preach these views as fact(or even air them in a serious manner).
as far as i see it, this is an issue of life and death.
the way i see it, you’re going along the ‘more likely to be death’ route, and please, if only for the sake of your children, or your children’s children, stop updating your blog.

Hmm, I will pass.  But it is nice to know that folks like Al Gore, Michael Mann, and Steve Jones have passed down their fear and loathing of debate to the next generation.    I won’t share my response, but I asked him whether he would prefer that my generation, instead of handing his a degree or so of warming, handed his an extra billion or so people in poverty.

Feedback Assumptions Finally Being Challenged

When asked what one thing I would want to tell laymen about catastrophic man-made global warming theory, it is the following:  this is in fact a two-part theory.  Greenhouse gas theory alone gives us only incremental warming and no catastrophe.  It is a second theory, that Earth’s climate is dominated by strong positive feedbacks, that multiplies perhaps a degree of CO2-driven warming over the next century into 3, 5, or more degrees of warming.  And while it is fairly well accepted by all that CO2 alone will cause a bit of warming, this second theory is not at all settled and in fact may even have the sign of the feedback wrong.

Two stories came out this week undercutting two of the largest sources of feedback.

1.  Water Vapor Feedback

Water vapor is a highly variable gas and has long been recognized as an important player in the cocktail of greenhouse gases—carbon dioxide, methane, halocarbons, nitrous oxide, and others—that affect climate.

“Current climate models do a remarkable job on water vapor near the surface. But this is different — it’s a thin wedge of the upper atmosphere that packs a wallop from one decade to the next in a way we didn’t expect,” says Susan Solomon, NOAA senior scientist and first author of the study.

Since 2000, water vapor in the stratosphere decreased by about 10 percent. The reason for the recent decline in water vapor is unknown. The new study used calculations and models to show that the cooling from this change caused surface temperatures to increase about 25 percent more slowly than they would have otherwise, due only to the increases in carbon dioxide and other greenhouse gases.

An increase in stratospheric water vapor in the 1990s likely had the opposite effect of increasing the rate of warming observed during that time by about 30 percent, the authors found.

2.  CO2  (outgassing from oceans) Feedback

The most alarming forecasts of natural systems amplifying the human-induced greenhouse effect may be too high, according to a new report.

The study in Nature confirms that as the planet warms, oceans and forests will absorb proportionally less CO2.

It says this will increase the effects of man-made warming – but much less than recent research has suggested….

The most likely value among their estimates suggests that for every degree Celsius of warming, natural ecosystems tend to release an extra 7.7 parts per million of CO2 to the atmosphere (the full range of their estimate was between 1.7 and 21.4 parts per million).

This stands in sharp contrast to the recent estimates of positive feedback models, which suggest a release of 40 parts per million per degree; the team say with 95% certainty that value is an overestimate.

OK readers, let’s see how closely you have been paying attention.  The models have over-estimated this important feedback by a factor of about 5 (40 vs. 7.7).  As I have shown time and time again, the vast majority of the warming in climate forecasts comes from feedback: only about 1C per century comes directly from CO2, and the rest comes from feedback multipliers.  If a forecast calls for 5C of warming in the next century, then about 4C of that is probably due to feedback.
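To make that split concrete, here is the arithmetic written out as a small sketch.  The 1C-direct and 5C-total figures are the illustrative numbers from the paragraph above, and the gain relationship is the standard textbook form, not any particular model’s:

```python
# Back out the implied feedback fraction f from a forecast, using the
# standard amplification relationship: total = direct / (1 - f).
def implied_feedback_fraction(direct_warming_c, total_forecast_c):
    gain = total_forecast_c / direct_warming_c
    return 1.0 - 1.0 / gain

direct = 1.0   # degrees C per century attributed to CO2 alone (illustrative)
total = 5.0    # degrees C per century in a high-end forecast (illustrative)

f = implied_feedback_fraction(direct, total)
print(f"Implied gain: {total / direct:.1f}x, implied feedback fraction f = {f:.2f}")
print(f"Warming attributable to feedback: {total - direct:.1f} C of {total:.1f} C")
# A 5C forecast built on ~1C of direct CO2 warming implies f = 0.8,
# i.e. roughly 4 of the 5 degrees come from the assumed feedbacks.
```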

But remember this post, where I said

…there is a very strong social cost in academia to challenging global warming, so that even when findings in certain studies seem to undercut key pieces of the argument, the researchers always add something like “but of course this does not refute the basic theory of global warming” at the end of the paper.

So what do this study’s authors say?

The authors warn, though, that their research will not reduce projections of future temperature rises.

Further, they say their concern about man-made climate change remains high.

Of course, because if this factor goes down, they will just shore up their forecasts and keep them high with some other plug variable.  Because no one is funding scientists (or quoting them in newspapers) whose models call for just 1C of warming over the next century.

Lindzen & Choi

In preparing for my climate presentation in Phoenix next week, I went back and read through Lindzen & Choi, a study whose results I linked here.  The study claims to have measured feedback and to have found the feedback to temperature changes in the natural climate system to be negative, the opposite of the assumption of strong positive feedback in climate models.  I found this interesting, as we often do with studies that confirm our own hypotheses.

Re-reading the study, I was uncomfortable with the methodology, but figured I was missing something.  Specifically, I didn’t understand how an increase in temperature could result in a decrease in outgoing radiation, as Lindzen says is assumed in all the models.   As I have always understood it, the opposite has to be true in a stable system.   With an added forcing, temperature increases, which increases outgoing radiation until the radiation budget is back in balance.  Models that assumed otherwise would produce near-infinite temperatures.   I assumed that perhaps Lindzen & Choi were making measurements during the time the system came back into equilibrium.
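My intuition here comes from the simplest possible radiative picture, sketched below: differentiate the Stefan-Boltzmann law at Earth’s effective emission temperature and outgoing radiation necessarily rises with warming.  This is a generic back-of-the-envelope calculation, not a description of Lindzen & Choi’s method:

```python
# Planck (no-feedback) radiative response: differentiate F = sigma * T^4
# to get dF/dT = 4 * sigma * T^3, the increase in outgoing radiation per
# degree of warming for an ideal blackbody at the effective emission temperature.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
T_EFF = 255.0     # approximate effective emission temperature of Earth, K

dF_dT = 4 * SIGMA * T_EFF ** 3
print(f"dF/dT at {T_EFF:.0f} K: {dF_dT:.1f} W/m^2 per K")
# ~3.8 W/m^2/K: in this simple picture a warmer Earth sheds more radiation,
# which is why a stable system needs outgoing radiation to rise with temperature.
```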

Apparently, both Luboš Motl and Roy Spencer have spotted problems as well, and they explain the issue in a more sophisticated way here and here.

But the results I have been getting from the fully coupled ocean-atmosphere (CMIP) model runs that the IPCC depends upon for their global warming predictions do NOT show what Lindzen and Choi found in the AMIP model runs. While the authors found decreases in radiation loss with short-term temperature increases, I find that the CMIP models exhibit an INCREASE in radiative loss with short term warming.

In fact, a radiation increase MUST exist for the climate system to be stable, at least in the long term. Even though some of the CMIP models produce a lot of global warming, all of them are still stable in this regard, with net increases in lost radiation with warming (NOTE: If analyzing the transient CMIP runs where CO2 is increased over long periods of time, one must first remove that radiative forcing in order to see the increase in radiative loss).

So, while I tend to agree with the Lindzen and Choi position that the real climate system is much less sensitive than the IPCC climate models suggest, it is not clear to me that their results actually demonstrate this.

Spencer further makes the point he has made for a couple of years now that feedback is really, really, really hard to measure, because it is so easy to confuse cause and effect.

Spencer by the way points out this admission from the Fourth IPCC report:

A number of diagnostic tests have been proposed…but few of them have been applied to a majority of the models currently in use. Moreover, it is not yet clear which tests are critical for constraining future projections (of warming). Consequently, a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed.

This is kind of amazing, in effect saying “we have no idea what the feedbacks are or how to measure them, but lacking any knowledge, we are going to consistently and universally assume very high positive feedbacks, with feedback factors > 0.7.”
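For a sense of what a feedback factor above 0.7 buys you, here is the same simple gain relationship as in the sketch above, evaluated over a range of f values (pure arithmetic, no climate data involved):

```python
# Amplification (gain) implied by a feedback factor f: gain = 1 / (1 - f).
# Small changes in f near 1 produce very large changes in the final warming.
for f in (0.0, 0.3, 0.5, 0.7, 0.8, 0.9):
    gain = 1.0 / (1.0 - f)
    print(f"f = {f:.1f}  ->  warming multiplied by {gain:.1f}x")
# f = 0.7 already more than triples the no-feedback warming; f = 0.9 multiplies it tenfold.
```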

What A Daring Guy

Joe Romm went on the record at Climate Progress on April 13, 2009, saying that the “median” forecast was for warming in the US by 2100 of 10-15F, or 5.5-8.3C, and he made it very clear that if he had to pick a single number, it would be the high end of that range.

On average, the 8.3C implies about 0.9C per decade of warming.  This might vary slightly depending on what starting point he intended (he is not very clear in the post), and I understand the warming follows a curve, so it will be below the average rate in the early years and above it in the later ones.

Anyway, Joe Romm is ready to put his money where his mouth is, and wants to make a 50/50 bet with any comers that warming in the next decade will be… 0.15C.  Boy, it sure is daring for a guy who is constantly in the press with a number around 0.9C per decade to commit to a number six times lower once actual money is on the line.   Especially when Romm has argued that warming in the last decade has been suppressed (somehow) and will pop back up soon.  Lucia has more reasons why this is a chickensh*t bet.
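The arithmetic behind those two numbers is simple enough to write down; a minimal sketch, assuming a start year around 2009 since Romm is not explicit:

```python
# Compare Romm's published high-end forecast with the rate implied by his bet.
forecast_warming_c = 8.3            # high end of his 2100 forecast, in C
start_year, end_year = 2009, 2100   # start year assumed; Romm is not explicit
decades = (end_year - start_year) / 10.0

forecast_rate = forecast_warming_c / decades   # C per decade implied by 8.3C by 2100
bet_rate = 0.15                                # C per decade he is willing to bet on

print(f"Forecast rate: ~{forecast_rate:.2f} C/decade")
print(f"Bet rate:       {bet_rate:.2f} C/decade  ({forecast_rate / bet_rate:.0f}x lower)")
# ~0.91 C/decade versus 0.15 C/decade, i.e. the bet is roughly 6x below the forecast.
```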

I deconstructed a previous gutless bet by Nate Silver here.

Do Arguments Have to Be Symmetric?

I am looking at some back and forth in this Flowing Data post.

Apparently an Australian legislator named Stephen Fielding posted this chart and asked, “Is it the case that CO2 increased by 5% since 1998 whilst global temperature cooled over the same period (see Fig. 1)?  If so, why did the temperature not increase; and how can human emissions be to blame for dangerous levels of warming?”

the global temperature chart

Certainly this could sustain some interesting debate.  Climate is complex, so there might be countervailing effects to CO2, but it should also be noted that none of the models really predicted this flatness in temperatures, so it certainly could be described as “unexpected,” at least among the alarmist community.

Instead, the answer that came back from Stephen Few was this (as reported by Flowing Data; I cannot find it on Few’s site):

This is a case of someone who listens only to what he wants to hear (the arguments of a few fringe organizations with agendas) and either ignores or is incapable of understanding the overwhelming weight of scientific evidence. He selected a tiny piece of data (a short period of time, with only one of many measures of temperature), misinterpreted it, and ignored the vast collection of data that contradicts his position. This fellow is either incredibly stupid or a very bad man.

Every alarmist from Al Gore to James Hansen has used this same chart in every presentation: global temperatures since 1950 (or really since 1980) going up in lockstep with CO2.  This is the alarmists’ #1 chart.  All Fielding has done is show the data after 1998, something alarmists tend to be reluctant to do.  Sure it’s a short time period, but nothing in any alarmist prediction or IPCC report hinted that there was any possibility that warming might cease for even so short a time as 15 years (at least not in the last IPCC report, which I have read nearly every page of).  So, by using the alarmists’ own chart and questioning a temperature trend that went unpredicted, Fielding is “either incredibly stupid or a very bad man.”  Again, the alarmist modus operandi: it is much better to smear the person with ad hominem attacks than to deal with his argument.

Shouldn’t there be symmetry here?  If it is OK for every alarmist on the planet to show 1980-1995 temperature growing in lockstep with CO2 as “proof” of a relationship, isn’t it equally OK to show 1995-2010 temperature not growing in lockstep with CO2 to question the relationship?  Why is one OK but the other incredibly stupid and/or mean-spirited?   I mean, graphs like this were frequent five years ago, though they have dried up recently:

zfacts chart of CO2 and temperature

For extra credit, figure out how they got most of the early 2000s to be warmer than 1998 in this chart, since I can find no major temperature metric that matches this.  I suspect some endpoint smoothing games here.

I won’t get into arguing the “overwhelming weight of scientific evidence” statement, as I find arguments over counting scientific heads or papers to be  useless in the extreme.  But I will say that as a boy when I learned about the scientific method, there was a key step where one’s understanding of a natural phenomenon is converted into predicted behaviors, and then those predictions are tested against reality.  All Fielding is doing is testing the predictions, and finding them to be missing the mark.  Sure, one can argue that the testing period has not been long enough, so we will keep testing, but what Fielding is trying to do here, however imperfectly, is perfectly compatible with the scientific method.
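If anyone wants to run the symmetric comparison themselves, the calculation is nothing fancier than two least-squares trends over two windows of the same series.  A minimal sketch follows; the file name and column layout are placeholders, and any of the published monthly anomaly series would do:

```python
# Compare least-squares temperature trends over two sub-periods of the same series.
import numpy as np

def decadal_trend(years, anomalies, start, end):
    """OLS slope over [start, end], converted to degrees C per decade."""
    mask = (years >= start) & (years <= end)
    slope, _ = np.polyfit(years[mask], anomalies[mask], 1)
    return slope * 10.0

# Placeholder load: a two-column file of decimal year and anomaly (C).
data = np.loadtxt("monthly_anomalies.csv", delimiter=",")  # hypothetical file
years, anoms = data[:, 0], data[:, 1]

print(f"1980-1995 trend: {decadal_trend(years, anoms, 1980, 1995):+.2f} C/decade")
print(f"1995-2010 trend: {decadal_trend(years, anoms, 1995, 2010):+.2f} C/decade")
# If the first window counts as evidence for the CO2 relationship,
# the second window is computed the exact same way.
```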

I must say I am a bit confused about those “many other measures of temperature.”  Is Mr. Few suggesting that the chart would have different results in Fahrenheit?  OK, I am kidding of course.  What I am sure he means is that there are groups other than the Hadley Center that produce temperature records for the globe  (though in Mr. Fielding’s defense the Hadley Center is a perfectly acceptable source and the preferred source of much of the IPCC report).  To my knowledge, there are four major metrics (Hadley, GISS, UAH, RSS).  Of these four, at least three (I am not sure about the GISS) would show the same results.  I think the “overwhelming weight” of temperature metrics makes the same point as Mr. Fielding’s chart.

In the rest of his language, Few is pretty sloppy for someone who wants to criticize another person for sloppiness.  He says that Fielding “misinterpreted” the temperature data.  How?  It seems straightforward to me.  He also says that there is a “vast collection of data that contradicts his position.”  What position is that?  If his position is merely that CO2 has increased for 15 years and temperatures have not, well, there really is NOT a vast collection of data that contradicts that.  There may be a lot of people who have published reasons why this set of facts does not invalidate AGW, but the facts are still the same.

By the way, I get exhausted by the accusation that skeptics are somehow simplistic and can’t understand complex systems.    I feel like my understanding is pretty nuanced.  It is also interesting how the sides have somewhat reversed here.  When temperature was going up steadily, it was alarmists saying that things were simple and skeptics saying that climate was complex and you couldn’t necessarily make a 1:1 correlation between CO2 and temperature increases.  Now that temperature has flat-lined for a while, it is alarmists screaming that skeptics are underestimating the complexity.  I tend to agree that climate is indeed really, really complex, though I think that if one accepts this complexity it is hard to square with the whole “settled science” thing.  Really, we have settled the science in less than 20 years on perhaps the most complex system we have ever tried to understand?

The same Flowing Data post references this post from Graham Dawson.  Most of Dawson’s “answers” to Fielding’s questions are similar to Few’s, but I wanted to touch on one or two other things.

First, I like how he calls findings from the recent climate synthesis report the “government answer” as if this makes it somehow beyond dispute.  But I digress.

The surface air temperature is just one component in the climate system (ocean, atmosphere, cryosphere). There has been no material trend in surface air temperature during the last 10 years when taken in isolation, but 13 of the 14 warmest years on record have occurred since 1995. Also global heat content of  the ocean (which constitutes 85% of the total warming) has continued to rise strongly in this period, and ongoing warming of the climate system as a whole is supported by a very wide range of observations, as reported in the peer-reviewed scientific literature.

This is the kind of blithe answer, full of inaccuracies, that everyone needs to be careful about.  The first sentence is true, and the second is probably close to the mark, though with a bit more uncertainty than he implies.  He is also correct that the global heat content of the ocean is a huge part of warming, or the lack thereof, but his next statement is not entirely correct.  Ocean heat content as measured by the new ARGO system since 2003 has been flat to down.  Longer-term measures are up, but most of the warming comes at the point where the old metrics were spliced onto the ARGO data, a real red flag to any serious data analyst.  The cryosphere is important as well, but most metrics show little change in total sea ice area, with losses in the Northern Hemisphere offset by gains in the Southern Hemisphere.

While the Earth’s temperature has been warmer in the geological past than it is today, the magnitude and rate of change is unusual in a geological context. Also the current warming is unusual as past changes have been triggered by natural forcings whereas there are no known natural climate forcings, such as changes in solar irradiance, that can explain the current observed warming of the climate system. It can only be explained by the increase in greenhouse gases due to human activities.

No one on Earth has any idea whether the first sentence is true.  This is pure supposition on the author’s part, stated as a fact.  We are talking about temperature changes today over a fifty-year (or shorter) period, and we have absolutely no way to look at changes in the “geological past” at so fine a timescale.  I am reminded of the old ice core chart that was supposedly the smoking gun linking CO2 and temperature, only for us to find later, as we improved the time resolution, that temperature increases came before CO2 increases.

I won’t make too much of my usual argument on the sun, except to say that the Sun was substantially more active during the warming period of 1950-2000 than it has been at other times.  What I want to point out, though, is the core foundation of the alarmist argument, one that I have pointed out before.  It boils down to:  past warming must be due to man because we can’t think of what else it could be.   This is amazing hubris, representing a total unwillingness to admit what we do and don’t understand.  It’s almost like the ancient Greeks, attributing what they didn’t understand in the cosmos to the hijinks of various gods.

It is not the case that all GCM computer models projected a steady increase in temperature for the period 1990-2008.  Air temperatures are affected by natural variability.  Global Climate Models show this variability in the long term but are not able to predict exactly when such variations will happen. GCMs can and do simulate decade-long periods of no warming, or even slight cooling, embedded in longer-term warming trends.

But none showed zero warming, or anything even close.

Sucker Bet

Vegas casinos love the sucker bet.  Nothing makes the accountants happier than seeing someone playing the Wheel of Fortune, or betting on “12, the hard way” in craps, or taking insurance in blackjack.  While the house maintains at least a slim advantage on every bet, these particular bets really stack the deck in the house’s favor.

And just as I don’t feel guilty for leaving Caesar’s Palace without playing the Wheel of Fortune, I don’t feel a bit of guilt for not taking this bet from Nate Silver:

1. For each day that the high temperature in your hometown is at least 1 degree Fahrenheit above average, as listed by Weather Underground, you owe me $25. For each day that it is at least 1 degree Fahrenheit below average, I owe you $25.

I presume Silver is a smart guy and knows what he is doing, because in fact this is not a bet on future warming, but on past warming.  Even without a bit of future warming, he wins this bet.  Why?

I am sitting in my hotel room, and so I don’t have time to dig into Weather Underground’s data definitions, but my guess is that their average temperatures are based on historic data, probably about a hundred years’ worth on average.

Over the last 100 years the world has, on average, warmed about 1F.  This means that today, again on average, most locations sit on a temperature plateau about 0.5F higher than the long-term average used to compute those “normals” (current temperatures run roughly half a degree above the mean of a record that warmed about 1F over its length).  So by structuring the bet like this, he is basically asking people to take “red” in roulette while he takes black and zero and double zero.   He has a built-in 0.5F advantage, even with zero future warming.
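To see how much a half-degree head start is worth in a symmetric-looking $25 bet, here is a rough expected-value sketch.  The daily variability figure is an assumption I am picking for illustration, not anything from Weather Underground:

```python
# Rough expected value of the $25 bet when today's climate sits ~0.5F above
# the long-term station average. Daily-high variability is assumed, not measured.
from statistics import NormalDist

offset_f = 0.5      # assumed warm bias of current temps vs. the historic average
sigma_f = 8.0       # assumed std dev of daily highs around their average (illustrative)

dist = NormalDist(mu=offset_f, sigma=sigma_f)
p_above = 1.0 - dist.cdf(1.0)    # day ends at least 1F above average -> I pay $25
p_below = dist.cdf(-1.0)         # day ends at least 1F below average -> I collect $25

ev_per_day = 25.0 * (p_below - p_above)   # my expected take per day as the bet-taker
print(f"P(>= +1F) = {p_above:.3f}, P(<= -1F) = {p_below:.3f}")
print(f"My expected value: ${ev_per_day:+.2f} per day")
# Even with zero future warming, the 0.5F offset tilts the daily odds toward
# the 'above average' side, so the bet-taker loses money on average.
```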

Now, the whole point of this bet may be to take money from skeptics who don’t bother to educate themselves on climate and believe Rush Limbaugh or whoever that there has never been any change in world temperatures.  Fine.  I have little patience with people on either side of the debate who want to be vocal without educating themselves on the basic facts.  But to say this is a bet on future warming is BS.

The other effect that may exist here (though I am less certain of the science; commenters can help me out) is that by saying “your hometown” we put the bet into the domain of urban heat islands and temperature station siting issues.  Clearly UHI has substantially increased measured average temperatures in many cities, in part because the daily average is generally computed as the mean of the daily minimum and maximum.  My sense is that UHI has a much bigger effect on Tmin than on Tmax: my son and I found a 10 degree F UHI in Phoenix in the evening, but I am not sure we could find one, or one as large, at the daily maximum.  Nevertheless, to the extent that such an effect exists for Tmax, most cities that have grown over the last few years will be above their averages just from the increasing UHI component.
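A quick illustration of why the Tmin/Tmax distinction matters for reported averages; the numbers are made up to echo the Phoenix anecdote, not measurements:

```python
# Daily mean temperature is typically reported as (Tmin + Tmax) / 2, so a UHI
# effect concentrated in the nighttime minimum still shifts the daily average.
def daily_mean(tmin_f, tmax_f):
    return (tmin_f + tmax_f) / 2.0

rural = daily_mean(tmin_f=70.0, tmax_f=105.0)               # hypothetical rural site
urban = daily_mean(tmin_f=70.0 + 10.0, tmax_f=105.0 + 1.0)  # 10F UHI at night, ~1F by day

print(f"Rural daily mean: {rural:.1f} F, urban daily mean: {urban:.1f} F")
print(f"UHI effect on the reported daily average: {urban - rural:.1f} F")
# A 10F nighttime UHI with only ~1F at the daily high still adds ~5.5F to the mean,
# though for Silver's bet (which uses the daily HIGH) only the Tmax piece matters.
```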

I don’t have the contents of my computer hard drive here with me, but a better bet would be based on a 10-year average of some accepted metric  (I’d prefer satellites, but Hadley CRUT would be OK if we had to use the old dinosaur surface record).  Since I accept about 1-1.2C per century, I’d insist on that trend line and would pay out above it and collect below it  (all real alarmists consider a 1.2C-per-century future trend to be about zero probability, so I suspect this would be acceptable).
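For what it is worth, settling that kind of bet is mechanically simple.  Here is a sketch of how I would score it; the 1.2C-per-century line and ten-year windows are just the terms proposed above, and the anomaly inputs are placeholders:

```python
# Score a bet against a 1.2 C/century trend line using ten-year averages of
# some agreed-upon temperature metric (satellite or surface).
def trend_prediction(baseline_avg_c, years_elapsed, trend_c_per_century=1.2):
    """Ten-year average predicted by the trend line, years_elapsed after the baseline window."""
    return baseline_avg_c + trend_c_per_century * years_elapsed / 100.0

baseline_avg = 0.30   # placeholder: ten-year average anomaly at the start of the bet, C
future_avg = 0.38     # placeholder: ten-year average anomaly a decade later, C
predicted = trend_prediction(baseline_avg, years_elapsed=10)

print(f"Trend-line prediction: {predicted:.2f} C, observed: {future_avg:.2f} C")
print("I pay out" if future_avg > predicted else "I collect")
# Above the 1.2 C/century line I pay; at or below it I collect.
```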