Matt Ridley: What the Climate Wars Did to Science

I cannot recommend Matt Ridley’s new article strongly enough. It covers a lot of ground, but here are a few highlights.

Ridley argues that science generally works (in a manner entirely parallel to how well-functioning commercial markets work) because there are generally incentives to challenge hypotheses.  I would add that if anything, the incentives tend to be balanced more towards challenging conventional wisdom.  If someone puts a stake in the ground and says that A is true, then there is a lot more money and prestige awarded to someone who can prove A is not true than for the thirteenth person to confirm that A is indeed true.

This process breaks down, however, when political pressures undermine this natural market of ideas and replace the rewards for challenging hypotheses with punishment.

Lysenkoism, a pseudo-biological theory that plants (and people) could be trained to change their heritable natures, helped starve millions and yet persisted for decades in the Soviet Union, reaching its zenith under Nikita Khrushchev. The theory that dietary fat causes obesity and heart disease, based on a couple of terrible studies in the 1950s, became unchallenged orthodoxy and is only now fading slowly.

What these two ideas have in common is that they had political support, which enabled them to monopolise debate. Scientists are just as prone as anybody else to “confirmation bias”, the tendency we all have to seek evidence that supports our favoured hypothesis and dismiss evidence that contradicts it—as if we were counsel for the defence. It’s tosh that scientists always try to disprove their own theories, as they sometimes claim, and nor should they. But they do try to disprove each other’s. Science has always been decentralised, so Professor Smith challenges Professor Jones’s claims, and that’s what keeps science honest.

What went wrong with Lysenko and dietary fat was that in each case a monopoly was established. Lysenko’s opponents were imprisoned or killed. Nina Teicholz’s book The Big Fat Surprise shows in devastating detail how opponents of Ancel Keys’s dietary fat hypothesis were starved of grants and frozen out of the debate by an intolerant consensus backed by vested interests, echoed and amplified by a docile press….

This is precisely what has happened with the climate debate and it is at risk of damaging the whole reputation of science.

Here is one example of the consequences:

Look what happened to a butterfly ecologist named Camille Parmesan when she published a paper on “Climate and Species Range” that blamed climate change for threatening the Edith checkerspot butterfly with extinction in California by driving its range northward. The paper was cited more than 500 times, she was invited to speak at the White House and she was asked to contribute to the IPCC’s third assessment report.

Unfortunately, a distinguished ecologist called Jim Steele found fault with her conclusion: there had been more local extinctions in the southern part of the butterfly’s range due to urban development than in the north, so only the statistical averages moved north, not the butterflies. There was no correlated local change in temperature anyway, and the butterflies have since recovered throughout their range.  When Steele asked Parmesan for her data, she refused. Parmesan’s paper continues to be cited as evidence of climate change. Steele meanwhile is derided as a “denier”. No wonder a highly sceptical ecologist I know is very reluctant to break cover.

Ridley also goes on to lament something that is very familiar to me: there is a strong argument for the lukewarmer position, but the media will not even acknowledge that it exists. Either you are a full-on believer or you are a denier.

The IPCC actually admits the possibility of lukewarming within its consensus, because it gives a range of possible future temperatures: it thinks the world will be between about 1.5 and four degrees warmer on average by the end of the century. That’s a huge range, from marginally beneficial to terrifyingly harmful, so it is hardly a consensus of danger, and if you look at the “probability density functions” of climate sensitivity, they always cluster towards the lower end.

What is more, in the small print describing the assumptions of the “representative concentration pathways”, it admits that the top of the range will only be reached if sensitivity to carbon dioxide is high (which is doubtful); if world population growth re-accelerates (which is unlikely); if carbon dioxide absorption by the oceans slows down (which is improbable); and if the world economy goes in a very odd direction, giving up gas but increasing coal use tenfold (which is implausible).

But the commentators ignore all these caveats and babble on about warming of “up to” four degrees (or even more), then castigate as a “denier” anybody who says, as I do, the lower end of the scale looks much more likely given the actual data. This is a deliberate tactic. Following what the psychologist Philip Tetlock called the “psychology of taboo”, there has been a systematic and thorough campaign to rule out the middle ground as heretical: not just wrong, but mistaken, immoral and beyond the pale. That’s what the word denier with its deliberate connotations of Holocaust denial is intended to do. For reasons I do not fully understand, journalists have been shamefully happy to go along with this fundamentally religious project.

The whole thing reads like a lukewarmer manifesto.  Honestly, Ridley writes about 1000% better than I do, so rather than my trying to summarize it, go read it.

Manual Adjustments in the Temperature Record

I have been getting inquiries from folks asking me what I think about stories like this one, where Paul Homewood has been looking at the manual adjustments to raw temperature data and finding that the adjustments actually reverse the trends from cooling to warming.  Here is an example of the comparisons he did:

Raw, before adjustments:

[Chart: raw temperature record]

After manual adjustments:

[Chart: temperature record after manual adjustments]

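For readers who want to see mechanically what a comparison like this involves, here is a rough sketch: fit a linear trend to a station’s raw annual means and to the adjusted version of the same series, and see whether the sign of the trend flips. The numbers below are made up purely for illustration (a mild raw cooling trend plus a hypothetical step adjustment that cools the early record); the real comparisons are of course run on the published raw and adjusted station files.

```python
import numpy as np

# Made-up station data for illustration: a mild cooling trend in the raw annual means.
years = np.arange(1950, 2011)
rng = np.random.default_rng(1)
raw = 25.0 - 0.01 * (years - 1950) + rng.normal(0, 0.4, years.size)

# A hypothetical adjustment that cools the early record in steps (the pattern that can
# turn a cooling raw trend into a warming adjusted one).
adjustment = np.where(years < 1970, -1.2, np.where(years < 1990, -0.6, 0.0))
adjusted = raw + adjustment

for name, series in [("raw", raw), ("adjusted", adjusted)]:
    slope_per_century = np.polyfit(years, series, 1)[0] * 100  # degrees C per century
    print(f"{name:>8} trend: {slope_per_century:+.2f} C per century")
```

On numbers like these the raw trend comes out negative and the adjusted trend positive; the whole argument is over whether the adjustments that produce such flips are justified.
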
I actually wrote about this topic a few months back, and rather than rewrite the post I will excerpt it below:

I believe that there is both wheat and chaff in this claim [that manual temperature adjustments are exaggerating past warming], and I would like to try to separate the two as best I can. I don’t have time to write a well-organized article, so here is just a list of thoughts:

  1. At some level it is surprising that this is suddenly news.  Skeptics have criticized the adjustments in the surface temperature database for years.
  2. There is certainly a signal to noise ratio issue here that mainstream climate scientists have always seemed insufficiently concerned about. For example, the raw data for US temperatures is mostly flat, such that the manual adjustments to the temperature data set are about equal in magnitude to the total warming signal. When the entire signal one is trying to measure is equal to the manual adjustments one is making to the measurements, it probably makes sense to put a LOT of scrutiny on the adjustments. (This is a post from 7 years ago discussing these adjustments. Note that the adjustments shown there are smaller than the current ones in the database, as they have since been increased, though I can no longer find a similar chart from the NOAA discussing the adjustments.)
  3. The NOAA HAS made adjustments to US temperature data over the last few years that have increased the apparent warming trend. These changes in adjustments have not been well-explained. In fact, they have not really been explained at all, and have only been detected by skeptics who happened to archive old NOAA charts and created comparisons like the one below. Here is the before and after animation (pre-2000 NOAA US temperature history vs. post-2000). History has been cooled and modern temperatures have been warmed from where they were being shown previously by the NOAA. This does not mean the current version is wrong, but since the entire US warming signal was effectively created by these changes, it is not unreasonable to ask for a detailed reconciliation (particularly when the folks preparing the chart all believe that temperatures are going up, and so would be predisposed to treating a flat temperature chart like the earlier version as wrong and in need of correction). [Animation: pre-2000 vs. post-2000 NOAA US temperature history]
  4. However, manual adjustments are not, as some skeptics seem to argue, wrong or biased in all cases.  There are real reasons for manual adjustments to data — for example, if GPS signal data was not adjusted for relativistic effects, the position data would quickly get out of whack.  In the case of temperature data:
    • Data is adjusted for shifts in the start/end time for a day of measurement away from local midnight (i.e., if you average 24 hours starting and stopping at noon). This is called Time of Observation or TOBS. When I first encountered this, I was just sure it had to be BS. For a month of data, you are only shifting the data set by 12 hours, or about 1/60 of the month. Fortunately for my self-respect, before I embarrassed myself I created a spreadsheet to Monte Carlo some temperature data and play around with this issue (the first sketch after this list gives the flavor of that experiment). I convinced myself the Time of Observation adjustment is valid in theory, though I have no way to validate its magnitude (one of the problems with all of these adjustments is that NOAA and other data authorities do not release the source code or raw data to show how they come up with these adjustments). I do think it is valid in science to question a finding, even without proof that it is wrong, when the authors of the finding refuse to share replication data. Steven Goddard, by the way, believes time of observation adjustments are exaggerated and do not follow NOAA’s own specification.
    • Stations move over time. A simple example: if a station sits on the roof of a building and that building is demolished, the station has to move somewhere else. In an extreme case it might move to a new altitude or a slightly different micro-climate. There are adjustments in the database for these sorts of changes. Skeptics have occasionally challenged these, but I have no reason to believe that the authors are not using best efforts to correct for these effects (though again, the authors of these adjustments bring criticism on themselves by not sharing replication data).
    • The technology the station uses for measurement changes (e.g. thermometers to electronic devices, one type of electronic device to another, etc.)   These measurement technologies sometimes have known biases.  Correcting for such biases is perfectly reasonable  (though a frustrated skeptic could argue that the government is diligent in correcting for new cooling biases but seldom corrects for warming biases, such as in the switch from bucket to water intake measurement of sea surface temperatures).
    • Even if the temperature station does not move, the location can degrade. The clearest example is a measurement point that once was in the country but has been engulfed by development (here is one example: at one time this was the USHCN measurement point with the most warming since 1900, but it was located in an open field in 1900 and ended up in an asphalt parking lot in the middle of Tucson). Since urban heat islands can add as much as 10 degrees F to nighttime temperatures, this can create a warming signal over time that is related to a particular location, and not the climate as a whole (the second sketch after this list illustrates the mechanism). The effect itself is undeniable; my son easily measured it in a science fair project. The effect it has on temperature measurement is hotly debated between warmists and skeptics. Al Gore originally argued that there was no bias because all measurement points were in parks, which led Anthony Watts to pursue the surface station project, in which every USHCN station was photographed and documented. The net result was that most of the sites were pretty poor. Whatever the case, there is almost no correction in the official measurement numbers for urban heat island effects, and in fact the last time I looked at it the adjustment went the other way, implying urban heat islands have become less of an issue since 1930. The folks who put together the indexes argue that they have smoothing algorithms that find and remove these biases. Skeptics argue that they just smear the bias around over multiple stations. The debate continues.
  5. Overall, many mainstream skeptics believe that actual surface warming in the US and the world has been about half of what is shown in the traditional indices, with the rest an artifact of poorly crafted adjustments and uncorrected heat island effects. But note that almost no skeptic I know believes that the Earth has not actually warmed over the last 100 years. Further, warming since about 1980 is hard to deny because we have a second, independent way to measure global temperatures in satellites. These devices may have their own issues, but they are not subject to urban heat biases or location biases, and they actually measure most of the Earth’s surface, rather than just individual points that are sometimes scores or hundreds of miles apart. This independent method of measurement has shown undoubted warming since 1979, though not since the late 1990s.
  6. As is usual in such debates, I find words like “fabrication”, “lies”,  and “myth” to be less than helpful.  People can be totally wrong, and refuse to confront their biases, without being evil or nefarious.
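
To give a flavor of the Time of Observation experiment mentioned in point 4, here is a rough Python sketch of the kind of Monte Carlo I ran in a spreadsheet. Every number in it (the size of the diurnal swing, the day-to-day weather drift, the 5 PM reset time) is invented for illustration; this is a toy model of why resetting a max/min thermometer in the afternoon tends to double-count hot afternoons, not a description of NOAA’s actual TOBS procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_hourly_temps(n_days, rng):
    """Toy hourly temperatures: a random day-to-day weather drift plus a diurnal cycle and noise."""
    baseline = np.cumsum(rng.normal(0, 1.5, n_days))            # slow day-to-day weather drift
    hours = np.arange(n_days * 24)
    diurnal = 8.0 * np.sin(2 * np.pi * (hours % 24 - 9) / 24)   # ~16 degree daily swing, warmest mid-afternoon
    return np.repeat(baseline, 24) + diurnal + rng.normal(0, 0.5, n_days * 24)

def monthly_mean(temps, reset_hour, n_days=30):
    """Monthly mean of daily (max + min) / 2 when the max/min thermometer is reset at reset_hour."""
    daily_means = []
    for d in range(n_days):
        window = temps[d * 24 + reset_hour : d * 24 + reset_hour + 24]
        daily_means.append((window.max() + window.min()) / 2)
    return float(np.mean(daily_means))

# Monte Carlo: score the same simulated weather two ways, a midnight reset vs. a 5 PM reset.
# A hot afternoon just before a 5 PM reset also shows up as the max of the next day's window,
# so the afternoon schedule should come out systematically warmer.
diffs = []
for _ in range(2000):
    temps = simulate_hourly_temps(32, rng)                      # two spare days so every window is full
    diffs.append(monthly_mean(temps, 17) - monthly_mean(temps, 0))

print(f"average bias of 5 PM observation vs. midnight: {np.mean(diffs):+.2f} degrees")
```

On a toy model like this the afternoon schedule reads systematically warmer than the midnight one, which is the effect the TOBS adjustment is trying to remove; whether the official magnitude of the correction is right is a separate question I cannot answer without their code.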
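
And here is a second sketch, this one of the urban heat island mechanism from the last bullet in point 4: a station sitting in a flat “true” climate, with a nighttime warm bias that grows as development encroaches, shows a healthy-looking warming trend that has nothing to do with the climate. Again, every number is made up purely to illustrate the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat "true" climate: year-to-year noise around zero anomaly, no underlying trend.
years = np.arange(1900, 2021)
true_climate = rng.normal(0, 0.3, years.size)                   # degrees F anomaly

# Hypothetical urban heat island bias: development encroaches on the site starting around
# 1950, eventually adding several degrees F to nighttime lows. The daily mean, computed as
# (max + min) / 2, picks up roughly half of a nighttime-only bias.
development = np.clip((years - 1950) / 70.0, 0.0, 1.0)          # fraction of full build-out
daily_mean_bias = (6.0 * development) / 2.0

measured = true_climate + daily_mean_bias

true_trend = np.polyfit(years, true_climate, 1)[0] * 100        # degrees F per century
measured_trend = np.polyfit(years, measured, 1)[0] * 100

print(f"true climate trend:     {true_trend:+.2f} F per century")
print(f"measured station trend: {measured_trend:+.2f} F per century")
```

Whether the homogenization algorithms catch a slow drift like this at a particular station, or merely blend it into its neighbors, is exactly the point of contention in the last bullet above.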

To these I will add a #7: The notion that satellite results are somehow pure and unadjusted is just plain wrong. The satellite data set takes a lot of mathematical effort to get right, something that Roy Spencer, who does this work (and is considered in the skeptic camp), will be the first to tell you. Satellites have to be adjusted for different things. They have advantages over ground measurement: they cover almost all of the Earth, they are not subject to urban heat biases, and they bring some technological consistency to the measurement. However, the satellites used are constantly dying off and being replaced, orbits decay and change, and thus the times of observation of different parts of the globe change [to their credit, the satellite folks release all their source code for correcting these things]. I have become convinced that the satellites, net of all the issues with both technologies, provide a better estimate, but neither approach is perfect.