More Hockey Stick Hyjinx

Update: Keith Briffa responds to the issues discussed below here.

Sorry I am a bit late with the latest hockey stick controversy, but I actually had some work at my real job.

At this point, spending much time on the effort to discredit variations of the hockey stick analysis is a bit like spending time debunking phlogiston as the key element of combustion.  But the media still seems to treat these analyses with respect, so I guess the effort is necessary.

Quick background:  For decades the consensus view was that the earth was very warm during the Middle Ages, got cold around the 17th century, and has been steadily warming since, to a level today probably a bit short of where we were in the Middle Ages.  This was all flipped on its head by Michael Mann, who used tree ring studies to “prove” that the Medieval Warm Period, despite anecdotal evidence in the historical record (e.g. the name of Greenland), never existed, and that temperatures over the last 1000 years have been remarkably stable, shooting up only in the last 50 years to 1998, which he said was likely the hottest year of the last 1000 years.  This is called the hockey stick analysis, for the shape of the curve.

Since he published the study, a number of folks, most prominently Steve McIntyre, have found flaws in the analysis.  McIntyre claimed Mann used statistical techniques that would create a hockey stick from even white noise.  Further, Mann’s methodology took numerous individual “proxies” for temperature, only a few of which had a hockey stick shape, and averaged them in a way that emphasized the data with the hockey stick.  Further, Mann has been accused of cherry-picking — leaving out proxy studies that don’t support his conclusion.  Another problem emerged as it became clear that recent updates to his proxies were showing declining temperatures, what is called “divergence.”  This did not mean that the world was not warming, but it did mean that trees may not be very good thermometers.  Climate scientists like Mann and Keith Briffa scrambled for ways to hide the divergence problem, and even truncated data when necessary.  More here.  Mann has even flipped the physical relationship between a proxy and temperature upside down to get the result he wanted.
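
McIntyre’s “hockey stick from white noise” claim is about the way the proxy series were centered before principal components were computed.  Below is a minimal toy sketch of that claim in Python (my own illustration, not Mann’s or McIntyre’s code); the series lengths, the red-noise model, and the simple “hockey stick index” used for comparison are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def pc1(data, center_rows):
    """Leading principal component after subtracting column means computed
    over center_rows only (slice(None) = conventional full-period centering)."""
    centered = data - data[center_rows].mean(axis=0)
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, 0] * s[0]

def hockey_stick_index(series, blade=80):
    """How far the closing-segment mean sits from the full-series mean,
    in standard-deviation units (sign ignored)."""
    return abs(series[-blade:].mean() - series.mean()) / series.std()

n_years, n_proxies, n_trials = 581, 70, 200
full_hsi, short_hsi = [], []
for _ in range(n_trials):
    # persistent ("red") noise proxies containing no climate signal at all
    noise = np.cumsum(rng.normal(size=(n_years, n_proxies)), axis=0)
    full_hsi.append(hockey_stick_index(pc1(noise, slice(None))))
    short_hsi.append(hockey_stick_index(pc1(noise, slice(-80, None))))

# Print the two averages so the effect of the centering choice can be compared.
print(f"mean index, full-period centering  : {np.mean(full_hsi):.2f}")
print(f"mean index, recent-period centering: {np.mean(short_hsi):.2f}")
```

The point of the toy is only to show where the centering choice enters the calculation; McIntyre’s actual argument used noise models fitted to the real proxy networks.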

Since then, the climate community has tried to make itself feel better about this analysis by doing it multiple times, including some new proxies and new types of proxies (e.g. sediments vs. tree rings).  But if one looks at the studies, one is struck by the fact that it’s the same 10 guys over and over, either doing new versions of these studies or reviewing their buddies’ studies.  Scrutiny from outside of this tiny hockey stick society is not welcome.  Any posts critical of their work are scrubbed from the comment sections of RealClimate.com (in contrast to the rich discussions that occur at McIntyre’s site or even this one) — a site has even been set up independently to archive comments deleted from Real Climate.  This is a constant theme in climate.  Check this policy out — when one side of the scientific debate allows open discussion by all comers, and the other side censors all dissent, which do you trust?

Anyway, all these studies share a few traits:

  • They use statistical methodologies that emphasize the hockey stick
  • They cherry pick data that will support their hypothesis
  • They refuse to archive data or make it available for replication

To some extent, the recent to-do about Briffa and the Yamal data set has all the same elements.  But this one appears to have a new element — not only are the data sets cherry-picked, but there is growing evidence that the data within a data set have been cherry-picked.

Yamal is important for the following reason – remember what I said above about just a few data sets driving the whole hockey stick.  These couple of data sets are the crack cocaine to which all these scientists are addicted.  They are the active ingredient.  The various hockey stick studies may vary in their choice of proxy sets, but they all include a core of the same two or three that they know with confidence will drive the result they want, as long as they are careful not to water them down with too many other proxies.

Here is McIntyre’s original post.   For some reason, the data set Briffa uses falls off to ridiculously few samples in recent years (exactly when you would expect more).  Not coincidentally, the hockey stick appears exactly as the number of data points falls towards 10 and then 5 (from 30-40).  If you want a longer, but more layman’s view, Bishop Hill blog has summarized the whole story.  Update:  More here, with lots of the links I didn’t have time this morning to find.

Postscript: When backed against the wall with no response, the Real Climate community’s ultimate response to issues like this is “Well, it doesn’t matter.”  Expect this soon.

Update: Here are the two key charts, as annotated by JoNova:

[Chart: rcs_chronologies1v2]

And it “matters”

[Chart: yamal-mcintyre-fig2]

What A Daring Guy

Joe Romm went on the record at Climate Progress on April 13, 2009, saying that the “median” forecast was for warming in the US by 2100 of 10-15F, or 5.5-8.3C, and he made it very clear that if he had to pick a single number, it would be the high end of that range.

On average, the 8.3C implies about 0.9C per decade of warming.  This might vary slightly depending on what starting point he intended (he is not very clear in the post), and I understand the path is a curve, so warming would be below that average in the early years and above it in the later ones.
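
For reference, the back-of-the-envelope arithmetic (a rough average rate, assuming for illustration that the 8.3C accrues from about 2009 through 2100):

```latex
% average warming rate implied by 8.3 C of warming over 2009-2100
\frac{8.3\ ^{\circ}\mathrm{C}}{(2100 - 2009)\ \mathrm{years}} \times 10\ \frac{\mathrm{years}}{\mathrm{decade}}
  \;\approx\; \frac{8.3}{9.1} \;\approx\; 0.9\ ^{\circ}\mathrm{C}\ \mathrm{per\ decade}
```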

Anyway, Joe Romm is ready to put his money where his mouth is, and wants to make a 50/50 bet with all comers that warming in the next decade will be… 0.15C.  Boy, it sure is daring for a guy who is constantly in the press with a number around 0.9C per decade to commit to a number six times lower when he puts his money where his mouth is.   Especially when Romm has argued that warming in the last decade has been suppressed (somehow) and will pop back up soon.  Lucia has more reasons why this is a chickensh*t bet.

I deconstructed a previous gutless bet by Nate Silver here.

Have You Checked the Couch Cushions?

Patrick Michaels describes some of the long history of the Hadley Center, and specifically Phil Jones’ resistance to third-party verification of their global temperature data.  First he simply refused to share the data:

We have 25 years or so invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it?

(that’s some scientist, huh) and then he said he couldn’t share the data and now he says he’s lost the data.

Michaels gives pretty good context to the issues of station siting, but there are many other issues that are perfectly valid reasons for third parties to review the Hadley Center’s methodology.  A lot of choices have to be made in patching data holes, in weighting different stations, and in attempting to correct for station biases.  Transparency is needed for all of these methodologies and decisions.  What Jones is worried about is that whenever the broader community (and particularly McIntyre and the community on his web site) has had a go at such methodologies, it has found gaping holes and biases.  Since the Hadley data is the bedrock on which almost everything done by the IPCC rests, the cost of its being found wrong is very high.

Here is an example post from the past on station siting and measurement quality.  Here is a post for this same station on correction and aggregation of station data, and problems therein.

Great Moments in Skepticism and “Settled Science”

Via Radley Balko:

The phrase shaken baby syndrome entered the pop culture lexicon in 1997, when British au pair Louise Woodward was convicted of involuntary manslaughter in the death of Massachusetts infant Matthew Eappen. At the time, the medical community almost universally agreed on the symptoms of SBS. But starting around 1999, a fringe group of SBS skeptics began growing into a powerful reform movement. The Woodward case brought additional attention to the issue, inviting new research into the legitimacy of SBS. Today, as reflected in the Edmunds case, there are significant doubts about both the diagnosis of SBS and how it’s being used in court.

In a compelling article published this month in the Washington University Law Review, DePaul University law professor Deborah Tuerkheimer argues that the medical research has now shifted to the point where U.S. courts must conduct a major review of most SBS cases from the last 20 years. The problem, Tuerkheimer explains, is that the presence of three symptoms in an infant victim—bleeding at the back of the eye, bleeding in the protective area of the brain, and brain swelling—have led doctors and child protective workers to immediately reach a conclusion of SBS. These symptoms have long been considered pathognomic, or exclusive, to SBS. As this line of thinking goes, if those three symptoms are present in the autopsy, then the child could only have been shaken to death.

Moreover, an SBS medical diagnosis has typically served as a legal diagnosis as well. Medical consensus previously held that these symptoms present immediately in the victim. Therefore, a diagnosis of SBS established cause of death (shaking), the identity of the killer (the person who was with the child when it died), and even the intent of the accused (the vigorous nature of the shaking established mens rea). Medical opinion was so uniform that the accused, like Edmunds, often didn’t bother questioning the science. Instead, they’d often try to establish the possibility that someone else shook the child.

But now the consensus has shifted. Where the near-unanimous opinion once held that the SBS triad of symptoms could only result from a shaking with the force equivalent of a fall from a three-story to four-story window, or a car moving at 25 mph to 40 mph (depending on the source), research completed in 2003 using lifelike infant dolls suggested that vigorous human shaking produces bleeding similar to that of only a 2-foot to 3-foot fall. Furthermore, the shaking experiments failed to produce symptoms with the severity of those typically seen in SBS deaths….
“When I put all of this together, I said, my God, this is a sham,” Uscinski told Discover. “Somebody made a mistake right at the very beginning, and look at what’s come out of it.”

Before I am purposefully misunderstood, I am not committing the logical fallacy of arguing that an incorrect consensus on issue A means the consensus on issue B is incorrect.  The message instead is simple:  beware scientific “consensus,” particularly when that consensus is only a decade or two old.

Good News / Bad News for Media Science

The good news:  The AZ Republic actually published a front page story (link now fixed) on the urban heat island effect in Phoenix, and has a discussion of how changes in ground cover, vegetation, and landscaping can have substantial effects on temperatures, even over short distances.  Roger Pielke would be thrilled, as he has trouble getting even the UN IPCC to acknowledge this fact.

The bad news:  It comes in three parts:

  1. The whole focus of the story is staged in the context of rich-poor class warfare, as if the urban heat island effect is something the rich impose on the poor.  It is clear that without this class warfare angle, it probably would never have made the editorial cut for the paper.
  2. In putting all the blame on “the rich,” they miss the true culprits, which are leftish urban planners whose entire life goal is to increase urban densities and eliminate suburban “sprawl” and 2-acre lots.  But it is those very densities that cause the poor to live in the hottest temperatures, and it is the 2-acre lots that shelter “the rich” from the heat island effects.
  3. Not once do the authors take the opportunity to point out that such urban heat island effects are likely exaggerating our perceptions of CO2-based warming — that in fact some or much of the warming we ascribe to CO2 is actually due to this heat island effect in areas where we have measurement stations.

My son and I quantified the Phoenix urban heat island years ago in this project.

I am still wondering why Phoenix doesn’t investigate lighter street paving options.  They use all black asphalt, and just changing this approach (can you have lighter asphalt?) would be a big help.  By the way, our house is all white with a white foam roof, so we are doing our part to fight the heat island!

Ocean Acidification

In the past, I have responded to questions at talks I have given on ocean acidification with an “I don’t know.”  I hadn’t studied the theory and didn’t want to knee-jerk respond with skepticism just because the theory came from people who propounded a number of other theories I knew to be BS.

The theory is that increased atmospheric CO2 will result in increasing amounts of CO2 being dissolved in the oceans.  That CO2, when in solution with water, forms carbonic acid.  And that acidic water can dissolve the shells of shellfish.  This has been tested by dumping acid into sea water, and doing so has had a negative effect on shellfish.

This is one of those logic chains that seems logical on its face, and is certainly scientific enough sounding to fool the typical journalist or concerned Hollywood star.  But the chemistry just doesn’t work this way.   This is the simplest explanation I have found, but I will take a shot at summarizing the key problem.

It is helpful to work backwards through this proposition.  First, what is it about acidic water — actually not acidic, but “more neutral” water, since sea water is alkaline — that causes harm to the shells of sea critters?  H+ ions in solution from the acid combine with calcium carbonate in the shells, removing mass from the shell and “dissolving” it.  When we say an acid “eats” or “etches” something, a similar reaction is occurring between H+ ions and the item being “dissolved.”
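
For reference, here is a minimal sketch of the reactions under discussion, in standard notation; the real seawater carbonate buffer system involves more species and equilibria than shown here.

```latex
% dissolved CO2 forms carbonic acid, which partially dissociates
\mathrm{CO_2(aq) + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^-}
% H+ ions attack the calcium carbonate of the shell
\mathrm{CaCO_3(s) + H^+ \;\rightleftharpoons\; Ca^{2+} + HCO_3^-}
```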

So pouring a beaker of acid into a bucket of sea water increases the free H+ ions and hurts the shells.  And if you do exactly that – put acid in seawater in an experiment – I am sure you would get exactly that result.

Now, you may be expecting me to argue that there is a lot of sea water and the net effect of trace CO2 in the atmosphere would not affect the pH much, especially since seawater starts pretty alkaline.  And I probably could argue this, but there is a better argument and I am embarrassed that I never saw it before.

Here is the key:  When CO2 dissolves in water, we are NOT adding acid to the water.  The analog of pouring acid into the water is a false one.  What we are doing is adding CO2 to the water, which combines with water molecules to form carbonic acid.  This is not the same as adding acid to the water, because the H+ ions we are worried about are already there in the water.  We are not adding any more.  In fact, one can argue that increasing the CO2 in the water “soaks up” H+ ions into carbonic acid and by doing so shifts the balance  so that in fact less calcium carbonate will be removed from shells.    As a result, as the link above cites,

As a matter of fact, calcium carbonate dissolves in alkaline seawater (pH 8.2) 15 times faster than in pure water (pH 7.0), so it is silly, meaningless nonsense to focus on pH.

Unsurprisingly for those familiar with climate, the chemistry of sea water is really complex, and it is not entirely accurate to isolate these reactions from other effects, but the net finding is that CO2-induced thinning of sea shells seems to be based on a silly view of the chemistry.

Am I missing something?  I am new to this area of the CO2 question, and would welcome feedback.

Potential Phoenix Climate Presentation

I am considering making a climate presentation in Phoenix based on my book, videos, and blogging on how catastrophic anthropogenic global warming theory tends to grossly overestimate man’s negative impact on climate.

I need an honest answer – is there enough interest out there in the Phoenix area that you might attend such a presentation in North Phoenix, followed by a Q&A?  Email me or leave notes in the comments.  If you are associated with a group that might like to attend such a presentation, please email me.

More Proxy Hijinx

Steve McIntyre digs into more proxy hijinx from the usual suspects.  This is a pretty good summary of what he tends to find, time and again in these studies:

The problem with these sorts of studies is that no class of proxy (tree ring, ice core isotopes) is unambiguously correlated to temperature and, over and over again, authors pick proxies that confirm their bias and discard proxies that do not. This problem is exacerbated by author pre-knowledge of what individual proxies look like, leading to biased selection of certain proxies over and over again into these sorts of studies.
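
To make the selection problem concrete, here is a hypothetical toy sketch (my own illustration, not anything from the studies themselves): screen a pile of pure-noise “proxies” by how well they correlate with a rising recent temperature record, and the survivors will average into a series with an uptick at the end, simply because that is what they were selected for.  All names and parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n_years, n_candidates, window = 500, 1000, 50

# candidate "proxies": persistent noise with no temperature signal whatsoever
proxies = np.cumsum(rng.normal(size=(n_years, n_candidates)), axis=0)

# the "instrumental" record the proxies are screened against: a simple rising ramp
target = np.linspace(0.0, 1.0, window)
recent = slice(-window, None)

corr = np.array([np.corrcoef(p[recent], target)[0, 1] for p in proxies.T])
kept = proxies[:, corr > 0.5]              # keep only the "temperature-sensitive" series
reconstruction = kept.mean(axis=1)

# the screened-and-averaged noise shows an upward trend in the recent window,
# an artifact of the screening rule rather than of any climate signal
recent_rise = np.polyfit(np.arange(window), reconstruction[recent], 1)[0] * window
print(f"{kept.shape[1]} of {n_candidates} noise series pass the screen")
print(f"rise of the averaged 'reconstruction' over the recent window: {recent_rise:+.2f}")
```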

The temperature proxy world seems to have developed into a monoculture, with the same 10 guys creating new studies, doing peer review, and leading IPCC sub-groups.  The most interesting issue McIntyre raises is that this new study again uses proxies “upside down.”  I explained this issue more here and here, but a summary is:

Scientists are trying to reconstruct past climate variables like temperature and precipitation from proxies such as tree rings.  They begin with a relationship they believe exists based on a physical understanding of a particular system – i.e., for tree rings, trees grow faster when it’s warm, so tree rings are wider in warm years.  But as they manipulate the data over and over in their computers, they start to lose touch with this physical reality.

…. in one temperature reconstruction, scientists have changed the relationship opportunistically between the proxy and temperature, reversing their physical understanding of the process and how similar proxies are handled in the same study, all in order to get the result they want to get.
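
Here is a hypothetical sketch of the mechanism (again my own toy example, not the study’s code): if the calibration step is a purely statistical fit of temperature against a proxy, the fitted coefficient takes whatever sign maximizes the fit, even when the assumed physical relationship points the other way.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# calibration-period temperatures: rising, with noise
temp = np.linspace(0.0, 1.0, n) + rng.normal(scale=0.2, size=n)

# a made-up proxy whose physical interpretation says it should INCREASE with
# temperature, but whose measured values happen to decline over the calibration
# window for unrelated, non-climatic reasons
proxy = -0.8 * np.linspace(0.0, 1.0, n) + rng.normal(scale=0.3, size=n)

slope = np.polyfit(proxy, temp, 1)[0]
print(f"fitted proxy-to-temperature slope: {slope:+.2f}")
# A sign-agnostic calibration reports a negative slope, so past values of this
# proxy would be read with the physical relationship effectively flipped.
```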