Monthly Archives: February 2011

Extreme Events

My modelling background began in complex dynamics (e.g. turbulent flows), but most of my experience is in financial modelling.  And I can say with a high degree of confidence that anyone in the financial world who actually bet money based on the modelling approach employed in the recent Nature article on UK flooding could be described with one word: bankrupt.  No one in their right mind would have any confidence in this approach.  No one would trust a model that has been hand-tuned to match retrospective data to be accurate going forward, unless that model had demonstrated a high degree of accuracy when actually run forward for a while (a test every climate model so far fails).  And certainly no one would trust a model built on pure theory without even reference to historical data.

The entire emerging industry of pundits willing to ascribe individual outlier weather events to manmade CO2 simply drives me crazy.  Forget the uncertainties in catastrophic anthropogenic global warming theory.  Consider the following:

  • I can think of no extreme weather event over the last 10 years attributed to manmade CO2 (Katrina, recent flooding, snowstorms, etc.) for which there are not numerous analogs in pre-anthropogenic years.   The logic that an event is unprecedented and therefore must be manmade is particularly absurd when the events in question are not, in fact, unprecedented.  In some sense, the purveyors of these opinions are relying on really short memories or poor Google skills in their audiences.
  • Imagine weather simplified to 200 balls in a bingo hopper.  195 are green and 5 are red.  At any one point in time, the chance is 2.5% that a red ball (an extreme event) is pulled.  Now add one more red ball.  The chance of an extreme event is now about 20% higher.  At some point a red ball is pulled.  Can you blame the manual addition of a red ball for that extreme event?  How?  A red ball was going to get pulled eventually anyway, so we don’t know whether this one was among the originals or the new one.  In fact, there is only a one-in-six chance this particular extreme event came from our manual intervention.   So even with absolute proof that the probability of extreme events has gone up, it is still impossible to ascribe any particular one to that increased probability.
  • How many samples would one have to take to convince oneself, with high probability, that the proportion of red balls has gone up?  The answer is … a lot more than a single red-ball draw, which is basically what has happened with reporting on extreme events.  In fact, the number is really, really high, because in the real climate we don’t even know the starting distribution with any certainty, and at any point in time other natural effects are adding and subtracting green and red balls (not to mention a nearly infinite number of other colors).
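The arithmetic in the bingo-hopper bullets above can be checked in a few lines of Python.  This is just a sketch using the post's own numbers (195 green, 5 red, 1 added red); the sample-size formula at the end, and the 5% significance / 80% power levels it uses, are my illustrative assumptions, not anything from the text:

```python
import math

# The post's hopper: 195 green balls + 5 red, then 1 extra red added by hand.
GREEN, RED, ADDED = 195, 5, 1

p_before = RED / (GREEN + RED)                      # 5/200 = 2.5%
p_after = (RED + ADDED) / (GREEN + RED + ADDED)     # 6/201 ≈ 2.99%
rel_increase = p_after / p_before - 1               # ≈ 19.4%, the "about 20% higher"

# Given that a red ball WAS drawn, the chance it is the added one:
p_added_given_red = ADDED / (RED + ADDED)           # 1/6 ≈ 0.167

# Rough number of draws needed to *detect* the shift: standard
# normal-approximation sample-size formula for comparing two proportions.
# 1.645 and 0.842 are the normal quantiles for a one-sided 5% test with
# 80% power -- my assumed test parameters, not the post's.
z_alpha, z_beta = 1.645, 0.842
n = ((z_alpha * math.sqrt(p_before * (1 - p_before))
      + z_beta * math.sqrt(p_after * (1 - p_after)))
     / (p_after - p_before)) ** 2

print(f"P(red) before/after:  {p_before:.4f} / {p_after:.4f}")
print(f"relative increase:    {rel_increase:.1%}")
print(f"P(added | red drawn): {p_added_given_red:.3f}")
print(f"draws needed to detect the shift: ~{n:,.0f}")
```

Under these assumptions the detection question needs on the order of thousands of draws, which is the point of the third bullet: one red ball tells you essentially nothing about whether the underlying distribution changed.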

Duty to Disclose

When prosecutors put together their case at trial (at least in the US) they have a legal duty to share all evidence, including potentially exculpatory evidence, with the defense.  When you sell your house or take a company public, there is a legal requirement to reveal major known problems to potential buyers.  Of course, there are strong incentives not to share this information, but when people fail on this it is considered by all to be fraud.

I would have thought the same standard existed in scientific research, i.e. one has an ethical obligation to reveal data or experiments that do not confirm one’s underlying hypothesis or that may cast doubt on the results.  After all, we are after truth, right?

Two posts this week shed some interesting light on this issue vis-à-vis dendro-climatology.  I hesitate to pile on much more on the tree ring studies at this point, as they have about as much integrity right now as the study of alchemy.  If we are going to get some real knowledge out of this data, someone is going to have to tear the entire field down to bedrock and start over (as was eventually done when alchemy became chemistry).  But I do think both of these posts raise useful issues that go beyond just Mann, Briffa, and tree rings.

In the first, Steve McIntyre looks at one of the Climategate emails from Raymond Bradley, in which Bradley almost proudly declares that MBH98 purposely withheld data that would have made its results look far less certain.  He taunts skeptics for not yet figuring out the game, an ethical position roughly equivalent to Bernie Madoff taunting investors for being too dumb to figure out he was duping them with a Ponzi scheme.

In the second, Judith Curry takes a look at the Briffa “hide the decline” trick.  There is a lot of confusion about just what this trick was.  In short, the tree ring proxies diverged from actual measured temperatures in the late 20th century: the rings implied temperatures falling since about 1950, when measured temperatures in fact rose.   Since there is substantial disagreement over whether tree rings really act as reliable proxies for temperatures, this is an important fact, because if the rings have failed to track temperatures for the last half century, there could easily be similar failures in the past.  Briffa and the IPCC removed the post-1950 tree ring data from key charts presented to the public, and used the graphical trick of overlaying the instrumental temperature record to imply that the proxies continued to go up.

Given the heat around this topic, Curry tries to step back and look at the issue dispassionately.  Unlike many, she does not assign motivations to people when these are not known, but she does conclude:

There is no question that the diagrams and accompanying text in the IPCC TAR, AR4 and WMO 1999 are misleading.  I was misled.  Upon considering the material presented in these reports, it did not occur to me that recent paleo data was not consistent with the historical record.  The one statement in AR4 (put in after McIntyre’s insistence as a reviewer) that mentions the divergence problem is weak tea.

It is obvious that there has been deletion of adverse data in figures shown in IPCC AR3 and AR4, and the 1999 WMO document.  Not only is this misleading, but it is dishonest (I agree with Muller on this one).  The authors defend themselves by stating that there has been no attempt to hide the divergence problem in the literature, and that the relevant paper was referenced.  I infer then that there is something in the IPCC process or the authors’ interpretation of the IPCC process (i.e. don’t dilute the message) that corrupted the scientists into deleting the adverse data in these diagrams.

The best analogy I can find for this behavior is prosecutorial abuse.  When prosecutors commit abuses (e.g. failure to share exculpatory evidence), it is often because they are just sure the defendant is guilty.  They can convince themselves that even though they are breaking the law, they are serving the law in a larger sense because they are making sure guilty people go to jail.  Of course, this is exactly how innocent people rot in jail for years, because prosecutors are not supposed to be the ultimate arbiter of guilt and innocence.  In the same way, I am sure Briffa et al felt that by cutting ethical corners, they were serving a larger purpose because they were just sure they were right.  Exculpatory evidence might just confuse the jury and lead, in their mind, to a miscarriage of justice.   As Michael Mann wrote (as quoted by Curry):

Otherwise, the skeptics have a field day casting doubt on our ability to understand the factors that influence these estimates and, thus, can undermine faith in the paleoestimates. I don’t think that doubt is scientifically justified, and I’d hate to be the one to have to give it fodder!

A Good Idea

This strikes me as an excellent idea — there are a lot of things in climate that will remain really hard to figure out, but a scientifically and statistically sound approach to creating a surface temperature record should not be among them.  It is great to see folks moving beyond pointing out the oft-repeated flaws in current surface records (e.g. from NOAA, GISS, and the Hadley Center) and deciding to apply our knowledge of those flaws to creating a better record.   Bravo.

Warming in the historic record is not going away.  It may shift by a few tenths of a degree, but I am not sure that is going to change the arguments one way or another.  Even the current global temperature metrics, which skeptics consider exaggerated, fall far short of the historic warming that would be consistent with current catastrophic high-CO2-sensitivity models.  So a few tenths higher or lower will not change this: heroic assumptions about tipping points and cooling aerosols will still be needed either way to reconcile aggressive warming forecasts with history.

What can be changed, however, is the stupid amount of time we spend arguing about a topic that should be fixable.  It is great to see a group trying to honestly create such a fix so we can move on to more compelling topics.  Some of the problems, though, are hard to fix — for example, the number of stations free of urban biases has simply decreased hugely over the last 20 years, and it will be interesting to see how the team works around this.