Tilting at Straw Men

In my Forbes article a few weeks ago, I showed how the arguments alarmists most frequently use to “prove” that skeptics are wrong are actually straw men.  Alarmists want to fight the war over whether the greenhouse gas effect of CO2 is real and whether the world has seen warming over the last century, both propositions that skeptics like myself accept.

The issue for us is whether man is causing a catastrophe (mainly due to large positive feedbacks in the climate system), and whether past warming has been consistent with catastrophic rates of man-made warming.  Both of these propositions are far from proven, and are seldom even discussed in the media.

I found a blog on energy policy issues that I had not read before, with a very sensible article on just this issue:

The most frustrating thing about being a scientist skeptical of catastrophic global warming is that the other side is continually distorting what I am skeptical of.

In his immodestly titled New York Review of Books article “Why the Global Warming Skeptics Are Wrong,” economist William Nordhaus presents six questions that the legitimacy of global warming skepticism allegedly rests on.

  1. Is the planet in fact warming?
  2. Are human influences an important contributor to warming?
  3. Is carbon dioxide a pollutant?
  4. Are we seeing a regime of fear for skeptical climate scientists?
  5. Are the views of mainstream climate scientists driven primarily by the desire for financial gain?
  6. Is it true that more carbon dioxide and additional warming will be beneficial?

Since the answers to these questions are allegedly yes, yes, yes and no, no, no, it’s case closed, says Nordhaus.

Except that he is attacking a straw man. Scientists (or non-scientists) who are “skeptics” are skeptical of catastrophic global warming—not warming or human-caused warming as such. So much for 1 and 2. We refuse to label CO2 a “pollutant” because it is essential to life and because we do not believe it has the claimed catastrophic impact. So much for 3. And since 4-6 don’t pertain to the scientific issue of catastrophic global warming…

The Alarmist Bait and Switch

This quote from Michael Mann is a great example of two common rhetorical tactics of climate alarmists:

And so I think we have to get away from this idea that in matters of science, it’s, you know, that we should treat discussions of climate change as if there are two equal sides, like we often do in the political discourse. In matters of science, there is an equal merit to those who are denying the reality of climate change who are a few marginal individuals largely affiliated with special interests versus the, you know, thousands of scientists around the world. U.S. National Academy of Sciences founded by Abraham Lincoln back in the 19th century, all the national academies of all of the major industrial nations around the world have all gone on record as stating clearly that humans are warming the planet and changing the climate through our continued burning of fossil fuels.

Here are the two tactics at play:

  1. He is attempting to marginalize skeptics so that debating their criticisms is not necessary.  He argues that skeptics are not people of goodwill; or that they say what they say because they are paid by nefarious interests to do so; or that they are vastly outnumbered by real scientists (“real” being defined as those who agree with Dr. Mann).  This is an oddly self-defeating argument, though the media never calls folks like Mann on it.  If skeptics’ arguments are indeed so threadbare, then one would imagine that throwing as much sunlight on them as possible would reveal their bankruptcy to everyone, but instead most alarmists are begging the media, as in this quote, to bury and hide skeptics’ arguments.  I LOVE to debate people when I know I am right, and have pre-debate trepidation only when I know my position to be weak.
  2. There is an enormous bait and switch going on in the last sentence.  Note the proposition is stated as “humans are warming the planet and changing the climate through our continued burning of fossil fuels.”  I, and many other skeptics, don’t doubt the first part and would quibble with the second only because so much poor science occurs in attributing specific instances of climate change to human action.  What most skeptics disagree with is an entirely different proposition, that humans are warming the planet to catastrophic levels that justify immensely expensive and coercive government actions to correct.  Skeptics generally accept a degree or so of warming from each doubling of CO2 concentrations but reject the separate theory that the climate is dominated by positive feedback effects that multiply this warming 3x or more.   Mann would never be caught dead in public trying to debate this second theory of positive feedback, despite the fact that most of the warming in IPCC forecasts is from this second theory, because it is FAR from settled.  Again, the media is either uninterested or intellectually unable to call him on this.

I explained the latter points in much more detail at Forbes.com.

Revising History

This is a topic we have covered here a lot – downward revisions to temperatures decades ago that increase the apparent 20th century warming.  Here is a great example of this from the GISS for Reykjavik, Iceland.  The GISS has revised downwards early 20th century temperatures by as much as 2C, despite Iceland’s Met Office crying foul.  It is unclear exactly what justification is being used to adjust the raw data.  Valid reasons include adjustments for changes in the time-of-day of the reading, changes to the instrument’s location or type, and urbanization effects.  It is virtually impossible to imagine changes in the first two categories that would be on the order of magnitude of 2C, and urbanization adjustments would have the opposite sign (e.g. make older readings warmer to match current urban-warming-biased readings).

Arctic stations like these are particularly important to the global metrics because the GISS extrapolates the temperature of the entire Arctic from just a few thermometers.  Changes to one reading at a station like Reykjavik could change the GISS extrapolated temperatures for hundreds of thousands of square miles.
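To make the leverage of such adjustments concrete, here is a minimal toy sketch with invented data.  This is not GISS’s actual homogenization algorithm and not the real Reykjavik record; it simply shows how a 2C downward revision of early-century readings inflates a single station’s computed warming trend, a change which the extrapolation described above then multiplies over an enormous area.

```python
# Toy illustration with synthetic data -- NOT the GISS algorithm or real
# Reykjavik readings. Shows how cooling the early record raises the trend.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)

# Hypothetical raw station series: ~0.3C/century of warming plus noise
raw = 0.003 * (years - 1900) + rng.normal(0, 0.3, years.size)

adjusted = raw.copy()
adjusted[years < 1940] -= 2.0  # the kind of early-period revision described above

def trend_per_century(t, temps):
    """Least-squares slope, expressed per 100 years."""
    return np.polyfit(t, temps, 1)[0] * 100

print(f"raw trend:      {trend_per_century(years, raw):+.2f} C/century")
print(f"adjusted trend: {trend_per_century(years, adjusted):+.2f} C/century")
```

The exact numbers are meaningless; the point is that a step adjustment of this size dwarfs the underlying trend being measured.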

Who Wrote the Fake Heartland Strategy Memo?

Certainly Peter Gleick is still in the running.

But as I wrote in Forbes last week, the memo does not have the feel of having been written by a “player” like Gleick.  It feels like someone younger, someone more likely to take the cynical political knife-fighting statements of someone like Gleick (e.g. skeptics are anti-science) and convert them literally (and blindly) to supposed Heartland agenda items like trying to discourage science teaching.  Someone like an intern or student, who might not realize how outrageous their stilted document might look to real adults in the real world, who understand that leaders of even non-profits they dislike don’t generally speak like James Bond villains.   Even Megan McArdle joked “Basically, it reads like it was written from the secret villain lair in a Batman comic.  By an intern.”

Now combine that with a second idea.  Gleick is about the only strong global warming believer mentioned by the fake strategy document.   I don’t think many folks who have observed Heartland from afar would say that Heartland has any special focus on or animus towards Gleick (more than they might have for any other strong advocate of catastrophic man-made global warming theory).   I would not have inferred any such focus by Heartland, and seriously, who would possibly think to single out Peter Gleick of all candidates (vs. Romm or Hansen or Mann et al) in a skeptic attack strategy?

The only person who might have inferred such a rivalry would have been someone close to Gleick, who heard about Heartland mainly from Gleick.  Certainly Gleick seems to have had a particular focus, almost obsession, with Heartland, and so someone who viewed Heartland only through the prism of Gleick’s rants might have inferred that Heartland had a special animus toward him.  And thus might have featured him prominently in a hypothesized attack in their strategy document.

So here is what I infer from all this:  My bet is on a fairly young Gleick sycophant — maybe a worker at the Pacific Institute, maybe an intern, maybe a student.  Which would mean in turn that Gleick very likely knows who wrote the document, but might feel some responsibility to protect that person’s identity.

Peter Gleick Admits to Stealing Heartland Documents

I have an updated article at Forbes.  A small excerpt:

In a written statement, Peter Gleick of the Pacific Institute, a vocal advocate of catastrophic man-made global warming theory, has admitted to obtaining certain Heartland Institute internal documents under false pretenses, and then forwarding these documents to bloggers who were eager to publish them.

Gleick (also a writer on these pages at Forbes) frequently styles himself a defender of scientific integrity (for example), generally equating any criticism of his work or scientific positions with lack of integrity (the logic being that since certain scientists like himself have declared the science to be settled beyond question, laymen or even other scientists who dispute them must be ethically-challenged).

In equating disagreement with lack of integrity, he offers a prime example of what is broken in the climate debate, with folks on both sides working from an assumption that their opponents have deeply flawed, even evil motives.  Gleick frequently led the charge to shift the debate away from science, which he claimed was settled and unassailable, to the funding and motives of his critics.  Note that with this action, Gleick has essentially said that the way to get a more rational debate on climate, which he often says is his number one goal, was not to simplify or better present the scientific arguments but to steal and publish details on a think tank’s donors….

Hit the link to read it all.

Heartland Documents: Whose Biases are Being Revealed Here?

I could not resist commenting on the brouhaha around the stolen Heartland Institute documents in my column at Forbes.  The key document, the supposed “smoking gun,” now appears to be fake.  I wrote in part:

One reason I am fairly certain the document is fake is this line from the supposed skeptic strategy document:

His effort will focus on providing curriculum that shows that the topic of climate change is controversial and uncertain – two key points that are effective at dissuading teachers from teaching science.

For those of us at least somewhat inside the tent of the skeptic community, particularly the science-based ones Heartland has supported in the past, the goal of “dissuading teachers from teaching science” is a total disconnect.  I have never had any skeptic in even the most private of conversations even hint at such a goal.  The skeptic view is that science education vis-à-vis climate and other environmental matters tends to be shallow, or one-sided, or politicized — in other words, broken in some way and needing repair.  In this way, most every prominent skeptic that works even a bit in the science/data end of things believes him or herself to be supporting, helping, and fixing science.  In fact, many skeptics believe that the continued positive reception of catastrophic global warming theory is a function of the general scientific illiteracy of Americans and points to a need for more and better science education (see here for an overview of the climate debate that does not once use the ad hominem words “myth”, “scam” or “lie”).

The only people who believe skeptics are anti-science per se, and therefore might believe skeptics would scheme to dissuade teachers from teaching science, are the more political alarmists (a good example was posted today right here at Forbes, which you might want to contrast with this).  For years, I presume partially in an effort to avoid debate, certain alarmists have taken the ad hominem position that skeptics are anti-science.  And many probably well-meaning alarmists believe this about skeptics (since they may have not actually met any skeptics to know differently).  The person who wrote this fake memo almost had to be an alarmist, and probably was of the middling, more junior sort, the type of person who does not craft the talking points but is a recipient of them and a true believer.

At the end I make a sort of bet:

If the strategy memo turns out to be fake as I believe it to be, I am starting the countdown now for the Dan-Rather-esque “fake but accurate” defense of the memo — i.e., “Well, sure, the actual document was faked but we all know it represents what these deniers are really thinking.”  This has become a mainstay of post-modern debate, where facts matter less than having the politically correct position.

But in the first update I note the winner may already be declared:

Is Revkin himself seeking to win my fake-but-accurate race?   When presented with the fact that he may have published a fake memo, Revkin wrote:

looking back, it could well be something that was created as a way to assemble the core points in the batch of related docs.

It sounds like he is saying that while the memo is faked, it may have been someone’s attempt to summarize real Heartland documents.  Fake but accurate!  By the way, I don’t think he has any basis for this supposition, as no other documents have come to light with stuff like “we need to stop teachers from teaching science.”

Overview of the Global Warming Debate

I know I have been dormant on this site of late (the perils of having a day job), but I have been thinking about and working for a while on a way to clearly portray the basic outlines of the global warming debate. I hope you will check it out in this article posted today at Forbes. Here is the opening:

Likely you have heard the sound bite that “97% of climate scientists” accept the global warming “consensus”.  Which is what gives global warming advocates the confidence to call climate skeptics “deniers,” hoping to evoke a parallel with “Holocaust Deniers,” a case where most of us would agree that a small group are denying a well-accepted reality.  So why do these “deniers” stand athwart the 97%?  Is it just politics?  Oil money? Perversity? Ignorance?

We are going to cover a lot of ground, but let me start with a hint.

In the early 1980s I saw Ayn Rand speak at Northeastern University.  In the Q&A period afterwards, a woman asked Ms. Rand, “Why don’t you believe in housewives?”  And Ms. Rand responded, “I did not know housewives were a matter of belief.”  In this snarky way, Ms. Rand was telling the questioner that she had not been given a valid proposition to which she could agree or disagree.  What the questioner likely should have asked was, “Do you believe that being a housewife is a morally valid pursuit for a woman?”  That would have been an interesting question (and one that Rand wrote about a number of times).

In a similar way, we need to ask ourselves what actual proposition the 97% of climate scientists agree with.  And, we need to understand what it is, exactly, that the deniers are denying.   (I personally have fun echoing Ms. Rand’s answer every time someone calls me a climate denier — is the climate really a matter of belief?)

It turns out that the propositions that are “settled” and the propositions of which some like me are skeptical are NOT the same propositions.  Understanding that mismatch will help explain a lot of the climate debate.

Insights on Climate Science, From Economics

I continue to be fascinated by parallels between climate science and economics.  In the past, I have mainly discussed how climate models have the same problems and abuses and shortcomings as macro-economic models.

I thought this post discussing Keynesian economics could easily have been written about climate:

No small part of Keynes’s (and the Keynesians’) success is due, I believe, to their dressing up in scientific jargon and garb what are, at bottom, little more than ad hoc excuses for people to follow “their first impulsive reactions.”  Keynesians’ pose as scientists – their substitution of scientism for science – masks their rejection of a genuinely scientific approach to the study of the economy.

Why Are Skeptics Piling on Irene Forecasters?

I am totally confused why a number of skeptic sites are piling on Irene forecasters who over-estimated the storm’s destructiveness.   Somehow, these sites seem to conflate alarm over Irene with alarm over global warming, and thus false Irene alarm somehow reduces the believability of global warming forecasts.

This makes no sense.  Yes, the topics are vaguely related, but the models, the prediction process, even the people involved are totally different.  Heck, I heard Joe Bastardi, who I believe is a skeptic, right in there with everyone else last week warning the storm would be very, very dangerous.

The only element even marginally similar is the fact that there are strong incentives that might influence the forecasts.  News and weather outlets get better ratings by creating storm hype, the old joke being that the local news station has predicted ten of the last two natural disasters.  And politicians would certainly rather be caught out being too careful rather than too casual about impending storms.

Did CLOUD Just Rain on the Global Warming Parade?

Today in Forbes, I have an article bringing the layman up to speed on Henrik Svensmark and his theory of cosmic ray cloud seeding.  Since his theory helped explain some 20th century warming via natural effects rather than anthropogenic ones, he and fellow researchers have faced an uphill climb even getting funding to test his hypothesis.  But today, CERN in Geneva has released study results confirming most of Svensmark’s hypothesis, though crucially, it is impossible to infer from this work how much of 20th century temperature changes can be traced to the effect (this is the same problem global warming alarmists face — CO2 greenhouse warming can be demonstrated in a lab, but it’s hard to figure out its actual effect in a complex climate system).

From the article:

Much of the debate revolves around the role of the sun, and though they hold opposing positions, both skeptics and alarmists have made good points.  Skeptics have argued that it is absurd to downplay the role of the sun, as it is the energy source driving the entire climate system.  Michael Mann notwithstanding, there is good evidence that unusually cold periods have been recorded in times of reduced solar activity, and that the warming of the second half of the 20th century has coincided with a series of unusually strong solar cycles.

Global warming advocates have responded, in turn, that while the sun has indeed been more active in the last half of the century, the actual percentage change in solar irradiance is tiny, and hardly seems large enough to explain measured increases in temperatures and ocean heat content.

And thus the debate stood, until a Danish scientist named Henrik Svensmark suggested something outrageous — that cosmic rays might seed cloud formation.  The theory, if true, had potentially enormous implications for the debate about natural causes of warming.

When the sun is very active, it can be thought of as pushing away cosmic rays from the Earth, reducing their incidence.  When the sun is less active, we see more cosmic rays.  This is fairly well understood.  But if Svensmark was correct, it would mean that periods of high solar output should coincide with reduced cloud formation (due to reduced cosmic ray incidence), which in turn would have a warming effect on the Earth, since less sunlight would be reflected back into space before hitting the Earth.

Here was a theory, then, that would increase the theoretical impact of an active sun on climate, and better explain why changes in solar irradiance alone might underestimate the effect of solar activity on climate and temperatures.

I go on to discuss the recent CERN CLOUD study and what it has apparently found.
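For readers who want the chain of causation pinned down, here is a toy sketch of the direction of each link in Svensmark’s hypothesis.  Every coefficient below is invented purely for illustration; only the signs of the relationships come from the theory as described above.

```python
# Toy model of the hypothesized chain (made-up coefficients, real signs):
# active sun -> fewer cosmic rays -> fewer low clouds -> less reflected
# sunlight -> net warming at the surface.
def cloud_forcing_wm2(solar_activity, k_rays=1.0, k_cloud=0.05,
                      reflect_per_cloud=0.8, insolation=340.0):
    cosmic_rays = 1.0 / (1.0 + k_rays * solar_activity)  # active sun deflects rays
    cloud_fraction = k_cloud * cosmic_rays               # rays seed cloud droplets
    reflected = reflect_per_cloud * cloud_fraction * insolation
    return -reflected  # clouds reflect sunlight, so more clouds = more cooling

quiet, active = cloud_forcing_wm2(0.5), cloud_forcing_wm2(2.0)
print(f"relative warming, active vs quiet sun: {active - quiet:+.2f} W/m^2")
```

The magnitude printed is arbitrary.  The CERN results discussed above bear on whether the middle link (rays seeding cloud formation) is real; how large it is in the actual climate remains the open question.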

Go Easy on the Polar Bear Fraud

The skeptic side of the blogosphere is all agog over the academic investigation into Charles Monnett, the man of drowning polar bear fame.  The speculation is that the investigation is about the original polar bear report in 2006.  A couple of thoughts:

  1. If you read between the lines in the news articles, we really have no idea what is going on.  The guy could have falsified his travel expense reports.
  2. The likelihood that an Obama Administration agency would be trying to root out academic fraud at all, or that if they did so they would start here, seems absurd to me.
  3. There is no room for fraud because the study was, on its face, facile and useless.  The authors basically extrapolated from a single data point.  As I tell folks all the time, if you have only one data point, you can draw virtually any trend line you want through it.  They had no evidence of what caused the bear deaths or if they were in any way typical or part of a trend — it was all pure speculation and crazy extrapolation.  How could there be fraud when there was not any data here in the first place?  The fraud was in the media, Al Gore, and ultimately the EPA treating this with any sort of gravitas.

Using Computer Models To Launder Certainty

(cross posted from Coyote Blog)

For a while, I have criticized the practice both in climate and economics of using computer models to increase our apparent certainty about natural phenomena.   We take shaky assumptions and guesstimates of certain constants and natural variables and plug them into computer models that produce projections with triple-decimal precision.   We then treat the output with a reverence that does not match the quality of the inputs.

I have had trouble explaining this sort of knowledge laundering and finding precisely the right words to explain it.  But this week I have been presented with an excellent example from climate science, courtesy of Roger Pielke, Sr.  This is an excerpt from a recent study trying to figure out if a high climate sensitivity to CO2 can be reconciled with the lack of ocean warming over the last 10 years (bold added).

“Observations of the sea water temperature show that the upper ocean has not warmed since 2003. This is remarkable as it is expected the ocean would store the lion’s share of the extra heat retained by the Earth due to the increased concentrations of greenhouse gases. The observation that the upper 700 meters of the world ocean have not warmed for the last eight years gives rise to two fundamental questions:

  1. What is the probability that the upper ocean does not warm for eight years as greenhouse gas concentrations continue to rise?
  2. As the heat has not been stored in the upper ocean over the last eight years, where did it go instead?

These questions cannot be answered using observations alone, as the available time series are too short and the data not accurate enough. We therefore used climate model output generated in the ESSENCE project, a collaboration of KNMI and Utrecht University that generated 17 simulations of the climate with the ECHAM5/MPI-OM model to sample the natural variability of the climate system. When compared to the available observations, the model describes the ocean temperature rise and variability well.”

Pielke goes on to deconstruct the study, but just compare the two bolded statements.  First, that there is not sufficiently extensive and accurate observational data to test a hypothesis.  BUT, then we will create a model, and this model is validated against this same observational data.  Then the model is used to draw all kinds of conclusions about the problem being studied.

This is the clearest, simplest example of certainty laundering I have ever seen.  If there is not sufficient data to draw conclusions about how a system operates, then how can there be enough data to validate a computer model which, in code, just embodies a series of hypotheses about how a system operates?

A model is no different than a hypothesis embodied in code.   If I have a hypothesis that the average width of neckties in this year’s Armani collection drives stock market prices, creating a computer program that predicts stock market prices falling as ties get thinner does nothing to increase my certainty of this hypothesis  (though it may be enough to get me media attention).  The model is merely a software implementation of my original hypothesis.  In fact, the model likely has to embody even more unproven assumptions than my hypothesis, because in addition to assuming a causal relationship, it also has to be programmed with specific values for this correlation.
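To make this concrete, the necktie “model” takes about five lines of Python.  Every number in it is an assumption; running it produces a precise-looking prediction while adding exactly zero evidence for the underlying hypothesis.

```python
# The necktie hypothesis, "embodied in code": both the causal relationship
# and the coefficient below are simply assumed, not derived from evidence.
def predicted_market_move(avg_tie_width_cm, baseline_cm=8.0, points_per_cm=150.0):
    """Predicted index change from this season's average tie width."""
    return (avg_tie_width_cm - baseline_cm) * points_per_cm

print(predicted_market_move(6.5))  # thinner ties -> a confident -225.0 points
```

The model will dutifully output stock market predictions to any precision you like, which is exactly the certainty-laundering problem: the output inherits all the shakiness of the inputs while looking authoritative.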

This is not just a climate problem.  The White House studies on the effects of the stimulus were absolutely identical.  They had a hypothesis that government deficit spending would increase total economic activity.  After they spent the money, how did they claim success?  Did they measure changes to economic activity through observational data?  No, they had a model that was programmed with the hypothesis that government spending increased job creation, ran the model, and pulled a number out that said, surprise, the stimulus created millions of jobs (despite falling employment).  And the press reported it like it was a real number.

Postscript: I did not get into this in the original article, but the other mistake the study seems to make is to validate the model on a variable that is irrelevant to its conclusions.   In this case, the study seems to validate the model by saying it correctly simulates past upper ocean heat content numbers (you remember, the ones that are too few and too inaccurate to validate a hypothesis).  But the point of the paper seems to be to understand whether the possible excess heat (if we believe the high sensitivity number for CO2) is going into the deep ocean or back into space.   But I am sure I can come up with a number of combinations of assumptions to match the historic ocean heat content numbers.  The point is finding the right one, and to do that requires validation against observations for deep ocean heat and radiation to space.

Return of “The Plug”

I want to discuss the recent Kaufmann study which purports to reconcile flat temperatures over the last 10-12 years with high-sensitivity warming forecasts.  First, let me set the table for this post, and to save time (things are really busy this week in my real job) I will quote from a previous post on this topic:

Nearly a decade ago, when I first started looking into climate science, I began to suspect the modelers were using what I call a “plug” variable.  I have decades of experience in market and economic modeling, and so I am all too familiar with the temptation to use one variable to “tune” a model, to make it match history more precisely by plugging in whatever number is necessary to make the model arrive at the expected answer.

When I looked at historic temperature and CO2 levels, it was impossible for me to see how they could be in any way consistent with the high climate sensitivities that were coming out of the IPCC models.  Even if all past warming were attributed to CO2  (a heroic assertion in and of itself) the temperature increases we have seen in the past imply a climate sensitivity closer to 1 rather than 3 or 5 or even 10  (I show this analysis in more depth in this video).

My skepticism was increased when several skeptics pointed out a problem that should have been obvious.  The ten or twelve IPCC climate models all had very different climate sensitivities — how, if they have different climate sensitivities, do they all nearly exactly model past temperatures?  If each embodies a correct model of the climate, and each has a different climate sensitivity, only one (at most) should replicate observed data.  But they all do.  It is like someone saying she has ten clocks all showing a different time but asserting that all are correct (or worse, as the IPCC does, claiming that the average must be the right time).

The answer to this paradox came in a 2007 study by climate modeler Jeffrey Kiehl.  To understand his findings, we need to understand a bit of background on aerosols.  Aerosols are man-made pollutants, mainly combustion products, that are thought to have the effect of cooling the Earth’s climate.

What Kiehl demonstrated was that these aerosols are likely the answer to my old question about how models with high sensitivities are able to accurately model historic temperatures.  When simulating history, scientists add aerosols to their high-sensitivity models in sufficient quantities to cool them to match historic temperatures.  Then, since such aerosols are much easier to eliminate as combustion products than is CO2, they assume these aerosols go away in the future, allowing their models to produce enormous amounts of future warming.

Specifically, when he looked at the climate models used by the IPCC, Kiehl found they all used very different assumptions for aerosol cooling and, most significantly, he found that each of these varying assumptions were exactly what was required to combine with that model’s unique sensitivity assumptions to reproduce historical temperatures.  In my terminology, aerosol cooling was the plug variable.

So now we can turn to Kaufmann, summarized in this article and with full text here.  In the context of the Kiehl study discussed above, Kaufmann is absolutely nothing new.

Kaufmann et al declare that aerosol cooling is “consistent with” warming from manmade greenhouse gases.

In other words, there is some value that can be assigned to aerosol cooling that offsets high temperature sensitivities to rising CO2 concentrations enough to mathematically spit out temperatures sort-of kind-of similar to those over the last decade.  But so what?  All Kaufmann did was, like every other climate modeler, find some value for aerosols that plugged temperatures to the right values.
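Here is a minimal sketch of what “finding a plug” means in practice (illustrative numbers only; this is not Kaufmann’s actual model or data): assume a high sensitivity, compute the warming CO2 “should” have produced over the flat decade, and back out whatever aerosol cooling makes the sum match observations.

```python
# Solving for the aerosol "plug" (toy numbers, not Kaufmann's model).
import math

sensitivity = 3.0                  # assumed C per doubling of CO2 (the high value)
co2_start, co2_end = 370.0, 390.0  # rough ppm magnitudes over the decade
observed_warming = 0.0             # the flat decade to be explained

co2_warming = sensitivity * math.log(co2_end / co2_start, 2)
aerosol_plug = observed_warming - co2_warming  # whatever is left over

print(f"CO2-implied warming:     {co2_warming:+.2f} C")
print(f"required aerosol 'plug': {aerosol_plug:+.2f} C")
```

Any sensitivity whatsoever can be “reconciled” with observations this way, because the plug simply absorbs the difference.  The exercise proves nothing unless the aerosol value is independently verified.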

Let’s consider an analogy.  A big Juan Uribe fan (he plays 3B for the SF Giants baseball team) might argue that the 2010 Giants World Series run could largely be explained by Uribe’s performance.  He could build a model, and find out that the Giants’ 2010 win totals were entirely consistent with Uribe batting .650 for the season.

What’s the problem with this logic?  After all, if Uribe hit .650, he really would likely have been the main driver of the team’s success.  The problem is that we know what Uribe hit, and he batted under .250 last year.  When real facts exist, you can’t just plug in whatever numbers you want to make your argument work.

But in climate, we are not sure what exactly the cooling effect of aerosols is.  For related coal particulate emissions, scientists are so unsure of their effects they don’t even know the sign (i.e., are they net warming or cooling).  And even if they had a good handle on the effects of aerosol concentrations, no one agrees on the actual numbers for aerosol concentrations or production.

And for all the light and noise around Kaufmann, the researchers did just about nothing to advance the ball on any of these topics.  All they did was find a number that worked, that made the models spit out the answer they wanted, and then argue in retrospect that the number was reasonable, though without any evidence.

Beyond this, their conclusions make almost no sense.  First, unlike CO2, aerosols are very short lived in the atmosphere – a matter of days rather than decades.  Because of this, they are poorly mixed, and so aerosol concentrations are spotty and generally can be found to the east (downwind) of large industrial complexes (see sample map here).

Which leads to a couple of questions.  First, if significant aerosol concentrations only cover, say, 10% of the globe, doesn’t that mean that to get a 0.5 degree cooling effect for the whole Earth, there must be a 5 degree cooling effect in the affected area?   Second, if this is so (and it seems unreasonably large), why have we never observed this cooling effect in the regions with high concentrations of manmade aerosols?  I understand the effect can be complicated by changes in cloud formation and such, but that is just further reason we should be studying the natural phenomenon and not generating computer models to spit out arbitrary results with no basis in observational data.
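The back-of-the-envelope arithmetic behind the first question, just to show there is nothing subtle in it:

```python
# Global average cooling = regional cooling x fraction of globe affected,
# so the regional effect must be the global effect divided by the fraction.
global_cooling = 0.5       # C, the cooling the models require from aerosols
affected_fraction = 0.10   # if aerosols meaningfully cover ~10% of the globe

regional_cooling = global_cooling / affected_fraction
print(f"implied cooling over the affected regions: {regional_cooling:.1f} C")  # 5.0
```

A regional signal of that size should be hard to miss in the station data downwind of major industrial areas.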

Judith Curry does not find the study very convincing, and points to this study by Remer et al in 2008 that showed no change in atmospheric aerosol optical depth through the heart of the period of supposed increases in aerosol cooling.

So the whole basis for the study is flawed – it’s based on the effect of increasing aerosol concentrations, when those concentrations actually are not increasing.  Just because China is producing more does not apparently mean there is more in the atmosphere – it may be that reductions in other areas like the US and Europe are offsetting Chinese emissions, or that nature has mechanisms for absorbing and eliminating the increased emissions.

By the way, here was Curry’s response, in part:

This paper points out that global coal consumption (primarily from China) has increased significantly, although the dataset referred to shows an increase only since 2004-2007 (the period 1985-2003 was pretty stable).  The authors argue that the sulfates associated with this coal consumption have been sufficient to counter the greenhouse gas warming during the period 1998-2008, which is similar to the mechanism that has been invoked  to explain the cooling during the period 1940-1970.

I don’t find this explanation to be convincing because the increase in sulfates occurs only since 2004 (the solar signal is too small to make much difference).  Further, translating regional sulfate emission into global forcing isn’t really appropriate, since atmospheric sulfate has too short of an atmospheric lifetime (owing to cloud and rain processes) to influence the global radiation balance.

Curry offers the alternative explanation of natural variability offsetting CO2 warming, which I think is partly true.  Though Occam’s Razor has to force folks at some point to finally question whether high (3+) temperature sensitivities to CO2 make any sense.  Seriously, isn’t all this work on aerosols roughly equivalent to trying to plug in yet more epicycles to make the Ptolemaic model of the universe continue to work?

Postscript: I will agree that there is one very important effect of the ramp-up of Chinese coal-burning that began around 2004 — the melting of Arctic ice.  I strongly believe that the increased summer melts of Arctic ice are in part a result of black carbon from Asia coal burning landing on the ice and reducing its albedo (and greatly accelerating melt rates).   Look here: when Arctic sea ice extent really dropped off, it was after 2003.    Northern polar temperatures have been fairly stable in the 2000’s (the real run-up happened in the 1990’s).   The delays could be just inertia in the ocean heating system, but Arctic ice melting sure seems to correlate better with black carbon from China than it does with temperature.

I don’t think there is anything we could do with a bigger bang for the buck than to reduce particulate emissions from Asian coal.  This is FAR easier than CO2 emissions reductions — it’s something we have done in the US for nearly 40 years.

Just 20 Years

I wanted to pull out one thought from my longer video and presentation on global warming.

As a reminder, I adhere to what I call the weak anthropogenic theory of global warming — that the Earth’s sensitivity to CO2, net of all feedback effects, is 1C per doubling of CO2 concentrations or less, and that while man may therefore be contributing to global warming with his CO2 (not to mention his land use and other practices) the net effect falls far short of catastrophic.

In the media, alarmists want to imply that their conclusions about climate sensitivity are based on a century of observation, but this is not entirely true.  Certainly we have over a century of temperature measurements, but only a small part of this history is consistent with the strong anthropogenic theory.  In fact, as I observed in my video, the entire IPCC case for a high climate sensitivity to CO2 is based on just 20 years of history, from about 1978 to 1998.

Here are the global temperatures in the Hadley CRUT3 database, which is the primary data set from which the IPCC worked (hat tip: Junk Science Global Warming at a Glance).

Everything depends on how one counts it, but during the period of man-made CO2 creation, there are really just two warming periods, if we consider the time from 1910 to 1930 just a return to the mean.

  • 1930-1952, where temperatures spiked about half a degree and ended 0.2-0.3C higher than the past trend
  • 1978-1998, where temperatures rose about half a degree, and have remained at that level since

Given that man-made CO2 output did not really begin in earnest until after 1950 (see the blue curve of atmospheric CO2 levels on the chart), few alarmists will attribute the run-up in temperatures from 1930-1952 (a period of time including the 1930’s Dust Bowl) to anthropogenic CO2.  This means that the only real upward change in temperatures that could potentially be blamed on man-made CO2 occurred from 1978-1998.

This is a very limited amount of time to make sweeping statements about climate change causation, particularly given the still infant-level knowledge of climate science.  As a result, since 1970, skeptics and alarmists have roughly equal periods of time where they can make their point about temperature causation (e.g. 20 years of rising CO2 and flat temperatures vs. 20 years of rising CO2 and rising temperatures).

This means that in the last 40 years, both skeptics and alarmists must depend on other climate drivers to make their case  (e.g. skeptics must point to other natural factors for the run-up in 1978-1998, while alarmists must find natural effects that offset or delayed warming in the decades on either side of this period).  To some extent, this situation slightly favors skeptics, as skeptics have always been open to natural effects driving climate while alarmists have consistently tried to downplay natural forcing changes.

I won’t repeat all the charts, but starting around chart 48 of this PowerPoint deck (also in the video linked above) I present some alternate factors that may have contributed, along with greenhouse gases, to the 1978-1998 warming (including two of the strongest solar cycles of the century and a PDO warm period nearly exactly matching these two decades).

Postscript: Even if the entire 0.7C or so temperature increase in the whole of the 20th century is attributed to manmade CO2, this still implies a climate sensitivity FAR below what the IPCC and other alarmists use in their models.   Given that CO2 concentrations have risen about 44% since the industrial revolution began, this would translate into a temperature sensitivity of 1.3C (not a linear extrapolation; the relationship is logarithmic).
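The arithmetic here is worth laying out, assuming the “44%” refers to the rise in the CO2 concentration itself: convert the concentration rise to a fraction of a doubling logarithmically, then scale the observed warming.

```python
# Implied sensitivity if ALL 20th-century warming is attributed to CO2.
# Assumes "44%" means concentrations rose 44% (a ratio of 1.44).
import math

observed_warming = 0.7      # C over the 20th century
concentration_ratio = 1.44  # CO2 up ~44% since the industrial revolution

fraction_of_doubling = math.log(concentration_ratio, 2)  # ~0.53 doublings
sensitivity = observed_warming / fraction_of_doubling    # C per doubling

print(f"implied sensitivity: {sensitivity:.1f} C per doubling")  # ~1.3
```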

This is why alarmists must argue that not only has all the warming we have seen been due to CO2 (a heroic assumption in and of itself) but that there are additional effects masking or hiding the true magnitude of past warming.  Without these twin, largely unproven assumptions, current IPCC “consensus” numbers for climate sensitivity would be absurdly high.  Again, I address this in more depth in my video.

Climate Models

My article this week at Forbes.com digs into some fundamental flaws of climate models:

When I looked at historic temperature and CO2 levels, it was impossible for me to see how they could be in any way consistent with the high climate sensitivities that were coming out of the IPCC models.  Even if all past warming were attributed to CO2  (a heroic assertion in and of itself) the temperature increases we have seen in the past imply a climate sensitivity closer to 1 rather than 3 or 5 or even 10  (I show this analysis in more depth in this video).

My skepticism was increased when several skeptics pointed out a problem that should have been obvious.  The ten or twelve IPCC climate models all had very different climate sensitivities — how, if they have different climate sensitivities, do they all nearly exactly model past temperatures?  If each embodies a correct model of the climate, and each has a different climate sensitivity, only one (at most) should replicate observed data.  But they all do.  It is like someone saying she has ten clocks all showing a different time but asserting that all are correct (or worse, as the IPCC does, claiming that the average must be the right time).

The answer to this paradox came in a 2007 study by climate modeler Jeffrey Kiehl.  To understand his findings, we need to understand a bit of background on aerosols.  Aerosols are man-made pollutants, mainly combustion products, that are thought to have the effect of cooling the Earth’s climate.

What Kiehl demonstrated was that these aerosols are likely the answer to my old question about how models with high sensitivities are able to accurately model historic temperatures.  When simulating history, scientists add aerosols to their high-sensitivity models in sufficient quantities to cool them to match historic temperatures.  Then, since such aerosols are much easier to eliminate as combustion products than is CO2, they assume these aerosols go away in the future, allowing their models to produce enormous amounts of future warming.

Specifically, when he looked at the climate models used by the IPCC, Kiehl found they all used very different assumptions for aerosol cooling and, most significantly, he found that each of these varying assumptions were exactly what was required to combine with that model’s unique sensitivity assumptions to reproduce historical temperatures.  In my terminology, aerosol cooling was the plug variable.

Global Warming Will Substantially Change All Weather — Except Wind, Which Stays the Same

This is a pretty funny point noticed by Marlo Lewis at globalwarming.org.  Global warming will apparently cause more rain, more drought, more tornadoes, more hurricanes, more extreme hot weather, more extreme cold weather, more snow, and less snow.

Fortunately, the only thing it apparently does not change is wind, and leaves winds everywhere at least as strong as they are now.

Rising global temperatures will not significantly affect wind energy production in the United States, concludes a new study published this week in the Proceedings of the National Academy of Sciences Early Edition.

But warmer temperatures could make wind energy somewhat more plentiful, say two Indiana University (IU) Bloomington scientists funded by the National Science Foundation (NSF).

. . .

They found warmer atmospheric temperatures will do little to reduce the amount of available wind or wind consistency–essentially wind speeds for each hour of the day–in major wind corridors that principally could be used to produce wind energy.

. . .

“The models tested show that current wind patterns across the US are not expected to change significantly over the next 50 years since the predicted climate variability in this time period is still within the historical envelope of climate variability,” said Antoinette WinklerPrins, a Geography and Spatial Sciences Program director at NSF.

“The impact on future wind energy production is positive as current wind patterns are expected to stay as they are. This means that wind energy production can continue to occur in places that are currently being targeted for that production.”

Even though global warming will supposedly shift wet and dry areas, it will not shift windy areas, and so we should all have a green light to continue to pour taxpayer money into possibly the single dumbest source of energy we could consider.

Using Models to Create Historical Data

Megan McArdle points to this story about trying to create infant mortality data out of thin air:

Of the 193 countries covered in the study, the researchers were able to use actual, reported data for only 33. To produce the estimates for the other 160 countries, and to project the figures backwards to 1995, the researchers created a sophisticated statistical model.

What’s wrong with a model? Well, 1) the credibility of the numbers that emerge from these models must depend on the quality of “real” (that is, actual measured or reported) data, as well as how well these data can be extrapolated to the “modeled” setting (e.g. it would be bad if the real data is primarily from rich countries, and it is “modeled” for the vastly different poor countries – oops, wait, that’s exactly the situation in this and most other “modeling” exercises) and 2) the number of people who actually understand these statistical techniques well enough to judge whether a certain model has produced a good estimate or a bunch of garbage is very, very small.

Without enough usable data on stillbirths, the researchers look for indicators with a close logical and causal relationship with stillbirths. In this case they chose neonatal mortality as the main predictive indicator. Uh oh. The numbers for neonatal mortality are also based on a model (where the main predictor is mortality of children under the age of 5) rather than actual data.

So that makes the stillbirth estimates numbers based on a model…which is in turn…based on a model.
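The compounding problem is easy to demonstrate.  Below is a toy sketch with entirely invented numbers (not the study’s data or method): fit one model predicting neonatal mortality from under-5 mortality, then fit a second model predicting stillbirths from the modeled, not observed, neonatal numbers.

```python
# Toy demonstration of a model built on a model (all data invented).
import numpy as np

rng = np.random.default_rng(1)
under5 = rng.uniform(10, 150, 200)                     # per 1,000 births, fake
neonatal_true = 0.4 * under5 + rng.normal(0, 8, 200)   # fake "truth"
stillbirth_true = 0.6 * neonatal_true + rng.normal(0, 8, 200)

# Stage 1: model neonatal mortality from under-5 mortality
b1 = np.polyfit(under5, neonatal_true, 1)
neonatal_modeled = np.polyval(b1, under5)

# Stage 2: model stillbirths from the MODELED neonatal numbers
b2 = np.polyfit(neonatal_modeled, stillbirth_true, 1)
stillbirth_modeled = np.polyval(b2, neonatal_modeled)

def r2(y, yhat):
    """Fraction of variance explained."""
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"stage-1 R^2: {r2(neonatal_true, neonatal_modeled):.2f}")
print(f"stage-2 R^2: {r2(stillbirth_true, stillbirth_modeled):.2f}")
```

The second-stage numbers can still look respectable while silently inheriting all of the first stage’s error, which is why estimates based on a model based on a model deserve extra skepticism.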

Sound familiar to anyone?   The only reason it is not a good analog to climate is that the article did not say that they used mortality data from 1200 kilometers away to estimate a country’s historic numbers.

Smart, numerically facile people who glibly say they support the science of anthropogenic global warming would be appalled if they actually looked at it in any depth.   While gender studies grads and journalism majors seem consistently impressed with the IPCC, physicists, economists, geologists, and others more used to a level of statistical rigor generally turn from believers to skeptics once they dig into the details.  I did.