Ducking the Point

Most skeptics have been clubbed over the head with the “settled science” refrain at one time or another.  How can you, a layman, think you are right when every scientist says the opposite?  And if it is not settled science, how do folks get away unchallenged saying so?

I am often confronted with these questions, so I thought I would print my typical answer.  I wrote this in the comments section of a post at the Thin Green Line.  Most of the post is a typical ad hominem attack on skeptics, but it includes the usual:

The contrarian theories raise interesting questions about our total understanding of climate processes, but they do not offer convincing arguments against the conventional model of greenhouse gas emission-induced climate change.

Here is what I wrote in response:

I am sure there are skeptics who have no comprehension of the science and who blindly follow the pronouncements of certain groups, just as I am sure there is probably as high a percentage of global warming activists who don’t understand the science but are following the lead of sources they trust. The only thing I will say is that there is a funny dynamic here. Those of us who run more skeptical web sites tend to focus our attention on deconstructing the arguments of Hansen and Schmidt and Romm, whom alarmist folks would consider their top spokesmen. Many climate alarmists, in turn, tend to focus on skeptical buffoons. I mean, I guess it’s fun to rip a straw man to shreds, but why not match your best against the best of those who disagree with you?

Anyway, I am off my point. There is a reason both sides can talk past each other. There is a reason you can confidently say “well established and can’t be denied” for your theory and be both wrong and right at the same time.

The argument that manmade CO2 emissions will lead to a catastrophe is based on a three step argument.

  1. CO2 has a first order effect that warms the planet
  2. The planet is dominated by net positive feedback effects that multiply this first order effect 3 or more times.
  3. These higher temperatures will lead to and already are causing catastrophic effects.

You are dead right on #1, and skeptics who fight this are truly swimming against the science. The IPCC has an equation that results in a temperature sensitivity of about 1.2C per doubling of CO2 as a first order effect, and I have found little reason to quibble with this. Most science-based skeptics accept this as well, or a number within a few tenths.
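As a quick sanity check on that number, the no-feedback relationship is often sketched as ΔT = S · log2(C/C0), with S the sensitivity of about 1.2C per doubling (a simplified form; the IPCC's actual formulation is more involved). A few lines of Python make the arithmetic concrete:

```python
import math

def no_feedback_warming(c_now, c_ref, sensitivity_c=1.2):
    """First-order (no-feedback) warming in degrees C for a CO2 change
    from c_ref to c_now, at `sensitivity_c` degrees per doubling."""
    return sensitivity_c * math.log2(c_now / c_ref)

# A full doubling yields the sensitivity itself:
print(no_feedback_warming(540, 270))               # 1.2

# Pre-industrial (~270 ppm) to a recent level (~385 ppm):
print(round(no_feedback_warming(385, 270), 2))     # 0.61
```

Note that on this no-feedback basis, the rise from 270 to 385 ppm accounts for only about 0.6C of warming, roughly the increase actually observed over the last century.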

The grand weakness of the alarmist case comes in #2. It is the rare long-term stable natural physical process that is dominated by positive feedback, and the evidence that Earth’s climate is dominated by feedbacks so high as to triple (in the IPCC report) or more (e.g. per Joe Romm) the climate sensitivity is weak or in great dispute. To say this point is “settled science” is absurd.

Thus we get to the heart of the dispute. Catastrophists posit enormous temperature increases, deflecting criticism by saying that CO2 as a greenhouse gas is settled. Though half right, they gloss over the fact that 2/3 or more of their projected temperature increase is based on a theory of Earth’s climate being dominated by strong positive feedbacks, a theory that is most certainly not settled, and in fact is probably wrong. Temperature increases over the last 100 years are consistent with neutral to negative, not positive, feedback, and the long-term history of temperatures and CO2 is utterly inconsistent with the proposition that there is positive feedback or a tipping point hidden around 350ppm CO2.

So stop repeating “settled science” like it was garlic in front of a vampire. Deal with the best arguments of skeptics, not their worst.

I see someone is arguing that skeptics have not posited an alternate theory to explain 20th century temperatures. In fact, a number have. A climate sensitivity to CO2 of 1.2C, combined with net negative feedback, a term to account for ENSO and the PDO, and an acknowledgment that the sun was in a relatively strong phase in the second half of the 20th century, models historic temperatures fairly well. In fact, these terms are a much cleaner fit than the contortions alarmists have to go through to try to reconcile a 3C+ sensitivity with a 0.6C historic temperature increase.

Finally, I want to spend a bit of time on #3.  I certainly think that skeptics often make fools of themselves.  But, because nature abhors a vacuum, alarmists tend to in turn make buffoons of themselves, particularly when predicting the effects on other climate variables of even mild temperature increases. The folks positing ridiculous catastrophes from small temperature increases are just embarrassing themselves.

Even bright people like Obama fall into the trap. Earlier this year he said that global warming was a factor in making the North Dakota floods worse.

Really? He knows this? First, anyone familiar with the prediction and analysis of complex systems would laugh at such certainty vis-à-vis one variable’s effect on a dynamic system. Further, while most anything is possible, his comment ignores the fact that North Dakota had a colder than normal winter and record snowfalls, which is what caused the flood (record snows = record melts). To say that he knows global warming contributed to record cold and snow is a pretty heroic assumption.

Yeah, I know, this is why for marketing reasons alarmists have renamed global warming as “climate change.” Look, that works for the ignorant masses, because they can probably be fooled into believing that CO2 causes climate change directly by some undefined mechanism. But we here all know that CO2 only affects climate through the intermediate step of warming. There is no other proven way CO2 can affect climate. So, no warming, no climate change.

Yeah, I know, somehow warming in Australia could have been the butterfly flapping its wings to make North Dakota snowy, but by the same unproven logic I could argue that California droughts are caused by colder than average weather in South America. At the end of the day, there is no way to know if this statement is correct and a lot of good reasons to believe Obama’s statement was wrong. So don’t tell me that only skeptics say boneheaded stuff.

The argument is not that the greenhouse gas effect of CO2 doesn’t exist. The argument is that the climate models built on the rickety foundation of substantial positive feedbacks are overestimating future warming by a factor of 3 or more. The difference matters substantially to public policy. Based on neutral to negative feedback, warming over the next century will be 1-1.5C. According to Joe Romm, it will be as much as 8C (15F). There is a pretty big difference in the magnitude of the effort justified by one degree vs. eight.

Numbers Divorced from Reality

This article on Climate Audit really gets at an issue that bothers many skeptics about the state of climate science:  the profession seems to spend so much time manipulating numbers in models and computer systems that they start to forget that those numbers are supposed to have physical meaning.

I discussed the phenomenon once before.  Scientists are trying to reconstruct past climate variables like temperature and precipitation from proxies such as tree rings.  They begin with a relationship they believe exists based on an understanding of a particular system – i.e., for tree rings, trees grow faster when it’s warm, so tree rings are wider in warm years.  But as they manipulate the data over and over in their computers, they start to lose touch with this physical reality.

In this particular example, Steve McIntyre shows how, in one temperature reconstruction, scientists have changed the relationship opportunistically between the proxy and temperature, reversing their physical understanding of the process and how similar proxies are handled in the same study, all in order to get the result they want to get.

McIntyre’s discussion may be too arcane for some, so let me give you an example.  As a graduate student, I have been tasked with proving that people are getting taller over time and estimating by how much.  As it turns out, I don’t have access to good historic height data, but by a fluke I inherited a hundred years of sales records from about 10 different shoe companies.  After talking to some medical experts, I gain some confidence that shoe size is positively correlated to height.  I therefore start collating my 10 series of shoe sales data, pursuing the original theory that the average size of the shoe sold should correlate to the average height of the target population.

It turns out that for four of my data sets, I find a nice pattern of steadily rising shoe sizes over time, reflecting my intuition that people’s height and shoe size should be increasing over time.  In three of the data sets I find the results to be equivocal — there is no long-term trend in the sizes of shoes sold and the average size jumps around a lot.  In the final three data sets, there is actually a fairly clear negative trend – shoe sizes are decreasing over time.

So what would you say if I did the following:

  • Kept the four positive data sets and used them as-is
  • Threw out the three equivocal data sets
  • Kept the three negative data sets, but inverted them
  • Built a model for historic human heights based on seven data sets – four with positive coefficients between shoe size and height and three with negative coefficients.

My correlation coefficients are going to be really good, in part because I have flipped some of the data sets and in part because I have thrown out the ones that don’t fit my initial bias as to what the answer should be.  Have I done good science?  Would you trust my output?  No?

Well what I describe is identical to how many of the historical temperature reconstruction studies have been executed  (well, not quite — I have left out a number of other mistakes like smoothing before coefficients are derived and using de-trended data).
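To see why this procedure is so corrosive, here is a toy numerical version of the shoe-size example, with entirely made-up data. Ten "proxy" series are pure random noise, with no relationship to the target trend at all, yet once each series is opportunistically oriented to agree with that trend, their average appears to track it:

```python
import numpy as np

rng = np.random.default_rng(42)
target = 0.01 * np.arange(100)            # the trend we "want" to find

# Ten proxy series that are PURE noise -- no real relationship at all.
proxies = rng.normal(0.0, 1.0, (10, 100))

kept = []
for p in proxies:
    r = np.corrcoef(target, p)[0, 1]
    # Opportunistic orientation: flip any series that runs the "wrong" way.
    kept.append(p if r >= 0 else -p)

composite = np.mean(kept, axis=0)
r_composite = np.corrcoef(target, composite)[0, 1]
print(round(r_composite, 2))   # a positive "signal" recovered from pure noise
```

By construction every flipped series correlates positively with the target, so the composite always will too, no matter how random the inputs. The correlation coefficient cannot tell you that the orientation was chosen after the fact.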

Mann once wrote that multivariate regression methods don’t care about the orientation of the proxy. This is strictly true – the math does not care. But people who recognize that there is an underlying physical reality that makes a proxy a proxy do care.

It makes no sense to physically change the sign of the relationship of our final three shoe databases.  There is no anatomical theory that would predict declining shoe sizes with increasing heights.  But this seems to happen all the time in climate research.  Financial modellers who try this go bankrupt.  Climate modellers who try this to reinforce an alarmist conclusion get more funding.  Go figure.

Sudden Acceleration

For several years, there was an absolute spate of lawsuits charging sudden acceleration of a motor vehicle — you probably saw such a story:  Some person claims they hardly touched the accelerator and the car leaped ahead at enormous speed and crashed into the house or the dog or telephone pole or whatever.  Many folks have been skeptical that cars were really subject to such positive feedback effects where small taps on the accelerator led to enormous speeds, particularly when almost all the plaintiffs in these cases turned out to be over 70 years old.  It seemed that a rational society might consider other causes than unexplained positive feedback, but there was too much money on the line to do so.

Many of you know that I consider questions around positive feedback in the climate system to be the key issue in global warming, the one that separates a nuisance from a catastrophe.  Is the Earth’s climate similar to most other complex, long-term stable natural systems, in that it is dominated by negative feedback effects that tend to damp perturbations?  Or is the Earth’s climate an exception to most other physical processes, dominated instead by positive feedback effects that, like the sudden acceleration in grandma’s car, rocket the car forward into the house with only the lightest tap of the accelerator?

I don’t really have any new data today on feedback, but I do have a new climate forecast from a leading alarmist that highlights the importance of the feedback question.

Dr. Joseph Romm of Climate Progress wrote the other day that he believes the mean temperature increase in the “consensus view” is around 15F from pre-industrial times to the year 2100.  Mr. Romm is mainly writing, if I read him right, to say that critics are misreading what the consensus forecast is.  Far be it from me to referee among the alarmists (though 15F is substantially higher than the IPCC report “consensus”).  So I will take him at his word that a 15F increase with a CO2 concentration of 860ppm is a good mean alarmist forecast for 2100.

I want to deconstruct the implications of this forecast a bit.

For simplicity, we often talk about temperature changes that result from a doubling in CO2 concentrations.  The reason we do it this way is that the relationship between CO2 concentrations and temperature increases is not linear but logarithmic.  Put simply, the temperature change from a CO2 concentration increase from 200 to 300 ppm is different (in fact, larger) than the temperature change we might expect from a concentration increase of 600 to 700 ppm.   But the temperature change from 200 to 400 ppm is about the same as the temperature change from 400 to 800 ppm, because each represents a doubling.   This is utterly uncontroversial.

If we take the pre-industrial CO2 level as about 270 ppm, the current CO2 level as 385 ppm, and the 2100 CO2 level as 860 ppm, this means that we are about 43% of the way through a first doubling of CO2 since pre-industrial times, and by 2100 we will have seen a full doubling (to 540 ppm) plus about 60% of the way to a second doubling.  For simplicity, then, we can say Romm expects 1.6 doublings of CO2 by 2100 as compared to pre-industrial times.
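The doubling arithmetic here is easy to verify. Note the percentage method used in this paragraph is a linear shortcut; the exact logarithmic count comes out slightly higher, about 1.67 doublings rather than 1.6:

```python
import math

pre_industrial, current, forecast_2100 = 270.0, 385.0, 860.0

# Exact logarithmic count of doublings:
doublings_now = math.log2(current / pre_industrial)         # ~0.51
doublings_2100 = math.log2(forecast_2100 / pre_industrial)  # ~1.67

# The text's simpler linear approximation:
linear_now = (current - pre_industrial) / pre_industrial    # ~0.43
linear_2100 = 1 + (forecast_2100 - 2 * pre_industrial) / (2 * pre_industrial)  # ~1.59

print(round(doublings_now, 2), round(doublings_2100, 2))
print(round(linear_now, 2), round(linear_2100, 2))
```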

So, how much temperature increase should we see with a doubling of CO2?  One might think this to be an incredibly controversial figure at the heart of the whole matter.  But not totally.  We can break the problem of temperature sensitivity to CO2 levels into two pieces – the expected first order impact, ahead of feedbacks, and then the result after second order effects and feedbacks.

What do we mean by first and second order effects?  Well, imagine a golf ball in the bottom of a bowl.  If we tap the ball, the first order effect is that it will head off at a constant velocity in the direction we tapped it.  The second order effects are the gravity and friction and the shape of the bowl, which will cause the ball to reverse directions, roll back through the middle, etc., causing it to oscillate around until it eventually loses speed to friction and settles to rest approximately back in the middle of the bowl where it started.

It turns out that the first order effects of CO2 on world temperatures are relatively uncontroversial.  The IPCC estimated that, before feedbacks, a doubling of CO2 would increase global temperatures by about 1.2C  (2.2F).   Alarmists and skeptics alike generally (but not universally) accept this number or one relatively close to it.

Applied to our increase from 270ppm pre-industrial to 860 ppm in 2100, which we said was about 1.6 doublings, this would imply a first order temperature increase of 3.5F from pre-industrial times to 2100  (actually, it would be a tad more than this, as I am interpolating a logarithmic function linearly, but it has no significant impact on our conclusions, and might increase the 3.5F estimate by a few tenths.)  Again, recognize that this math and this outcome are fairly uncontroversial.

So the question is, how do we get from 3.5F to 15F?  The answer, of course, is the second order effects or feedbacks.  And this, just so we are all clear, IS controversial.

A quick primer on feedback.  We speak of it as a secondary effect, but in fact it is a recursive process, such that there are secondary, tertiary, and further effects.

Let’s imagine a positive feedback that, in the second iteration, increases the initial disturbance by 50%.  This means that a force F becomes F + 50%·F.  But the feedback also operates on that additional 50%·F, such that the force becomes F + 50%·F + 50%·50%·F, and so on in an infinite series.  Fortunately, this series can be reduced such that the total gain = 1/(1-f), where f is the feedback fraction in the first iteration. Note that f can be, and often is, negative, such that the gain is actually less than 1.  This means the net feedbacks at work damp or reduce the initial input, like the bowl in our example that kept returning our ball to the center.
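The geometric series and its closed form are easy to verify numerically. This sketch sums the series directly and compares it to 1/(1-f) for a damping (negative) and two amplifying (positive) values of f:

```python
def gain_from_series(f, iterations=60):
    """Sum the feedback series 1 + f + f^2 + ... term by term."""
    return sum(f ** k for k in range(iterations))

def gain_closed_form(f):
    """Closed form of the same series; valid for |f| < 1."""
    return 1.0 / (1.0 - f)

for f in (-0.5, 0.5, 0.77):
    print(f, round(gain_from_series(f), 3), round(gain_closed_form(f), 3))
```

For f = -0.5 both methods give a gain of about 0.667 (damping), for f = 0.5 a gain of exactly 2, and for f = 0.77 a gain of about 4.35.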

Well, we don’t actually know the feedback fraction Romm is assuming, but we can derive it.  We know his gain must be about 4.3 — in other words, he is saying that an initial impact of CO2 of 3.5F is multiplied 4.3x to a final net impact of 15F.  So if the gain is 4.3, the feedback fraction f must be about 77%.
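Running the gain formula backwards (if gain G = 1/(1-f), then f = 1 - 1/G) reproduces these numbers:

```python
first_order_f = 3.5   # degrees F, no-feedback warming by 2100
forecast_f = 15.0     # Romm's mean forecast, degrees F

gain = forecast_f / first_order_f
feedback = 1.0 - 1.0 / gain      # solve G = 1/(1-f) for f

print(round(gain, 2))            # 4.29
print(round(100 * feedback, 1))  # 76.7 (percent)
```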

Does this make any sense?  My contention is that it does not.  A 77% first order feedback for a complex system is extraordinarily high  — not unprecedented, because nuclear fission is higher — but high enough that it defies nearly every intuition I have about dynamic systems.  On this assumption rests literally the whole debate.  It is simply amazing to me how little good work has been done on this question.  The government is paying people millions of dollars to find out if global warming increases acne or hurts the sex life of toads, while this key question goes unanswered.  (Here is Roy Spencer discussing why he thinks feedbacks have been overestimated to date, and a bit on feedback from Richard Lindzen).

But for those of you looking to get some sense of whether a 15F forecast makes sense, here are a couple of reality checks.

First, we have already experienced about 0.43 of a doubling of CO2 from pre-industrial times to today.  The same relationships and feedbacks and sensitivities that are forecast forward have to exist backwards as well.  A 15F forecast implies that we should have seen at least 4F of this increase by today.  In fact, we have seen, at most, just 1F  (and to attribute all of that to CO2, rather than, say, partially to the strong late 20th century solar cycle, is dangerous indeed).  But even assuming all of the last century’s 1F temperature increase is due to CO2, we are way, way short of the 4F we might expect.  Sure, there are issues with time delays and the possibility of some aerosol cooling to offset some of the warming, but none of these can come close to closing the gap between 1F and 4F.  So, for a 15F temperature increase to be a correct forecast, we have to believe that nature and climate will operate fundamentally differently than they have over the last 100 years.
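This backcast can be reproduced in a couple of lines (linearizing within a doubling, as the text does):

```python
forecast_f = 15.0            # Romm's 2100 forecast, degrees F
doublings_by_2100 = 1.6      # doublings of CO2 from pre-industrial to 2100
doubling_fraction_so_far = 0.43
observed_f = 1.0             # roughly the last century's increase, degrees F

per_doubling = forecast_f / doublings_by_2100
implied_so_far = per_doubling * doubling_fraction_so_far

print(round(per_doubling, 1))    # 9.4 F per doubling
print(round(implied_so_far, 1))  # 4.0 F implied to date, vs ~1.0 F observed
```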

Second, alarmists have been peddling a second analysis, called the Mann hockey stick, which is so contradictory to these assumptions of strong positive feedback that it is amazing to me no one has called them on the carpet for it.  In brief, Mann, in an effort to show that 20th century temperature increases are unprecedented and therefore more likely to be due to mankind, created an analysis quoted all over the place (particularly by Al Gore) that says that from the year 1000 to about 1850, the Earth’s temperature was incredibly, unbelievably stable.  He shows that the Earth’s temperature trend in this 800 year period never moves more than a few tenths of a degree C.  Even during the Maunder minimum, where we know the sun was unusually quiet, global temperatures were dead stable.

This is simply IMPOSSIBLE in a high-feedback environment.  There is no way a system dominated by the very high levels of positive feedback assumed in Romm’s and other forecasts could possibly be so rock-stable in the face of large changes in external forcings (such as the output of the sun during the Maunder minimum).  Every time Mann and others try to sell the hockey stick, they are putting a dagger in the heart of high-positive-feedback driven forecasts (which is a category of forecasts that includes probably every single forecast you have seen in the media).

For a more complete explanation of these feedback issues, see my video here.

It’s Not Zero

I have been meaning to link to this post for a while, but the Reference Frame, along with Roy Spencer, makes a valuable point I have also made for some time — the warming effect from man’s CO2 is not going to be zero.  The article cites approximately the same number I have used in my work and that was used by the IPCC:  absent feedback and other second order effects, the earth should likely warm about 1.2C from a doubling of CO2.

The bare value (neglecting rain, effects on other parts of the atmosphere etc.) can be calculated for the CO2 greenhouse effect from well-known laws of physics: it gives 1.2 °C per CO2 doubling from 280 ppm (year 1800) to 560 ppm (year 2109, see below). The feedbacks may amplify or reduce this value and they are influenced by lots of unknown complex atmospheric effects as well as by biases, prejudices, and black magic introduced by the researchers.

A warming in the next century of 0.6 degrees, or about the same warming we have seen in the last century, is a very different prospect, demanding different levels of investment, than typical forecasts of 5-10 degrees or more of warming from various alarmists.

How we get from a modest climate sensitivity of 1.2 degrees to catastrophic forecasts is explained in this video:


In study 1, a certain historic data set is presented.  The data set shows an underlying variation around a fairly strong trend line.  The trend line is removed, for a variety of reasons, and the data set is presented normalized or de-trended.

In study 2, researchers take the normalized, de-trended data and conclude … wait for it … that there is no underlying trend in the natural process being studied.  Am I really understanding this correctly?  I think so:

The briefest examination of the Scotland speleothem shows that the version used in Trouet et al had been previously adjusted through detrending from the MWP [Medieval Warm Period] to the present. In the original article (Proctor et al 2000), this is attributed to particularities of the individual stalagmite, but, since only one stalagmite is presented, I don’t see how one can place any confidence on this conclusion. And, if you need to remove the trend from the MWP to the present from your proxy, then I don’t see how you can use this proxy to draw conclusions on relative MWP-modern levels.

Hope and change, climate science version.

Postscript: It is certainly possible that the underlying data requires an adjustment, but let’s talk about why the adjustment used is not correct.  The scientists have a hypothesis that they can look at the growth of stalagmites in certain caves and correlate the annual growth rate with climate conditions.

Now, I could certainly imagine  (I don’t know if this is true, but work with me here) that there is some science that the volume of material deposited on the stalagmite is what varies in different climate conditions.  Since the stalagmite grows, a certain volume of material on a smaller stalagmite would form a thicker layer than the same volume on a larger stalagmite, since the larger body has a larger surface area.

One might therefore posit that the widths could be corrected back to the volume of the material deposited based on the width and height of the stalagmite at the time (if these assumptions are close to the mark, it would be a linear, first order correction since surface area in a cone varies linearly with height and radius).  There of course might be other complicating factors beyond this simple model — for example, one might argue that the deposition rate might itself change with surface area and contact time.

Anyway, this would argue for a correction factor based on geometry and the physics / chemistry of the process.  This does NOT appear to be what the authors did, as per their own description:

This band width signal was normalized and the trend removed by fitting an order 2 polynomial trend line to the band width data.

That can’t be right.  If we don’t understand the physics well enough to know how, all things being equal, band widths will vary by size of the stalagmite, then we don’t understand the physics well enough to use it confidently as a climate proxy.

Thinking About the Sun

A reader wrote me a while back and asked if I could explain how I thought the sun could be a major driver of climate when temperature and solar metrics appear to have “diverged” as in the following two charts:


In both charts, red is the solar metric (TSI in the first chart, sunspot number in the second).  The other line, either blue or green, is a global temperature metric.  In both cases, we see a sort of step change in solar output, with the first half of the century at one plateau and the second half on a higher plateau.  This chart of sunspot numbers may better illustrate this:

I had three answers for the reader:

  1. In any sufficiently chaotic and complicated system, no one variable is going to consistently regress perfectly with another variable.  CO2 does not line up with temperature any better.
  2. There are non-solar factors at work.  As I have said on any number of occasions, I agree that the greenhouse effect of CO2 exists and will add about 1C for each doubling of CO2.  What I disagree with is the proposition that the Earth’s climate is dominated by positive feedback that multiplies this temperature increase 3-5 or more times.  The PDO cycle is another example of a process that affects global temperatures.
  3. One should not necessarily expect a linear temperature increase to be driven by a linear increase in the sun’s output.   I will illustrate this with a simplistic example, and then invite further comment.   I believe the following is a correct illustration of one heat source -> temperature phenomenon.  If so, wouldn’t we expect something similar with step-change increases in the sun’s output, and doesn’t this chart look a lot like the charts with which I began the post?
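Point 3 can be illustrated with a minimal first-order lag model (a toy sketch, not a climate model; the equilibrium jump and time constant are assumptions chosen purely for illustration). After a single step increase in forcing, temperature keeps climbing for decades as it relaxes toward the new equilibrium, so a step-and-plateau history of solar output need not produce a step-and-plateau temperature history:

```python
def step_lag(equilibrium_jump, tau, years):
    """Temperature anomaly trajectory after a step increase in forcing,
    for a first-order lag: dT/dt = (T_eq - T) / tau (Euler, 1-year steps)."""
    temps, temp = [], 0.0
    for _ in range(years):
        temp += (equilibrium_jump - temp) / tau
        temps.append(temp)
    return temps

# Forcing steps up once (say, mid-century); temperature responds slowly.
traj = step_lag(equilibrium_jump=0.6, tau=30.0, years=100)
print(round(traj[9], 2), round(traj[49], 2), round(traj[99], 2))  # 0.17 0.49 0.58
```

The anomaly is still rising a century after the step, even though the forcing itself stopped changing after year zero.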


Missing in Action

I have been pretty remiss in posting here lately.  One reason is that this is the busy season in my business.  The other reason is that there is just so much going on in the economy and the new administration on which I feel the need to comment, that I have spent most of my time at CoyoteBlog.

Steve McIntyre on the Hockey Stick

I meant to post this a while back, and most of my readers will have already seen this, but in case you missed it, here is Steve McIntyre’s most recent presentation on a variety of temperature reconstruction issues, in particular Mann’s various new attempts at resuscitating the hockey stick.  While sometimes his web site Climate Audit is hard for laymen and non-statisticians to follow, this presentation is pretty accessible.