Category Archives: Temperature Measurement

Manual Adjustments in the Temperature Record

I have been getting inquiries from folks asking me what I think about stories like this one, where Paul Homewood has been looking at the manual adjustments to raw temperature data and finding that the adjustments actually reverse the trends from cooling to warming.  Here is an example of the comparisons he did:

Raw, before adjustments:

[chart: raw station data]

After manual adjustments:

[chart: adjusted station data]

I actually wrote about this topic a few months back, and rather than rewrite the post I will excerpt it below:

I believe that there is both wheat and chaff in this claim [that manual temperature adjustments are exaggerating past warming], and I would like to try to separate the two as best I can. I don't have time to write a well-organized article, so here is just a list of thoughts:

  1. At some level it is surprising that this is suddenly news.  Skeptics have criticized the adjustments in the surface temperature database for years.
  2. There is certainly a signal to noise ratio issue here that mainstream climate scientists have always seemed insufficiently concerned about. For example, the raw data for US temperatures is mostly flat, such that the manual adjustments to the temperature data set are about equal in magnitude to the total warming signal. When the entire signal one is trying to measure is equal to the manual adjustments one is making to the measurements, it probably makes sense to put a LOT of scrutiny on the adjustments. (This is a post from 7 years ago discussing these adjustments. Note that the adjustments shown there are smaller than the current ones in the database, as they have since been increased, though I can no longer find a similar NOAA chart discussing the adjustments.)
  3. The NOAA HAS made adjustments to US temperature data over the last few years that have increased the apparent warming trend. These changes in adjustments have not been well-explained. In fact, they have not really been explained at all, and have only been detected by skeptics who happened to archive old NOAA charts and created comparisons like the one below. Here is the before and after animation (pre-2000 NOAA US temperature history vs. post-2000). History has been cooled and modern temperatures have been warmed from where they were previously being shown by the NOAA. This does not mean the current version is wrong, but since the entire US warming signal was effectively created by these changes, it is not unreasonable to ask for a detailed reconciliation (particularly when the folks preparing the chart all believe that temperatures are going up, and so would be predisposed to treating a flat temperature chart like the earlier version as wrong and in need of correction).
  4. However, manual adjustments are not, as some skeptics seem to argue, wrong or biased in all cases.  There are real reasons for manual adjustments to data — for example, if GPS signal data was not adjusted for relativistic effects, the position data would quickly get out of whack.  In the case of temperature data:
    • Data is adjusted for shifts in the start/end time for a day of measurement away from local midnight (i.e., if you average 24 hours starting and stopping at noon). This is called Time of Observation, or TOBS. When I first encountered this, I was just sure it had to be BS. For a month of data, you are only shifting the data set by 12 hours, or about 1/60 of the month. Fortunately for my self-respect, before I embarrassed myself I created a spreadsheet to Monte Carlo some temperature data and play around with this issue (a code sketch of that kind of simulation appears below). I convinced myself the Time of Observation adjustment is valid in theory, though I have no way to validate its magnitude (one of the problems with all of these adjustments is that NOAA and other data authorities do not release the source code or raw data to show how they come up with these adjustments). I do think it is valid in science to question a finding, even without proof that it is wrong, when the authors of the finding refuse to share replication data. Steven Goddard, by the way, believes time of observation adjustments are exaggerated and do not follow NOAA's own specification.
    • Stations move over time. A simple example is if a station is on the roof of a building and that building is demolished, it has to move somewhere else. In an extreme example the station might move to a new altitude or a slightly different micro-climate. There are adjustments in the database for these sorts of changes. Skeptics have occasionally challenged these, but I have no reason to believe that the authors are not using best efforts to correct for these effects (though again the authors of these adjustments bring criticism on themselves for not sharing replication data).
    • The technology the station uses for measurement changes (e.g. thermometers to electronic devices, one type of electronic device to another, etc.)   These measurement technologies sometimes have known biases.  Correcting for such biases is perfectly reasonable  (though a frustrated skeptic could argue that the government is diligent in correcting for new cooling biases but seldom corrects for warming biases, such as in the switch from bucket to water intake measurement of sea surface temperatures).
    • Even if the temperature station does not move, the location can degrade.  The clearest example is a measurement point that once was in the country but has been engulfed by development  (here is one example — this at one time was the USHCN measurement point with the most warming since 1900, but it was located in an open field in 1900 and ended up in an asphalt parking lot in the middle of Tucson.)   Since urban heat islands can add as much as 10 degrees F to nighttime temperatures, this can create a warming signal over time that is related to a particular location, and not the climate as a whole.  The effect is undeniable — my son easily measured it in a science fair project.  The effect it has on temperature measurement is hotly debated between warmists and skeptics.  Al Gore originally argued that there was no bias because all measurement points were in parks, which led Anthony Watts to pursue the surface station project where every USHCN station was photographed and documented.  The net result was that most of the sites were pretty poor.  Whatever the case, there is almost no correction in the official measurement numbers for urban heat island effects, and in fact last time I looked at it the adjustment went the other way, implying urban heat islands have become less of an issue since 1930.  The folks who put together the indexes argue that they have smoothing algorithms that find and remove these biases.  Skeptics argue that they just smear the bias around over multiple stations.  The debate continues.
  5. Overall, many mainstream skeptics believe that actual surface warming in the US and the world has been about half what is shown in traditional indices, an amount that is then exaggerated by poorly crafted adjustments and uncorrected heat island effects.  But note that almost no skeptic I know believes that the Earth has not actually warmed over the last 100 years.  Further, warming since about 1980 is hard to deny because we have a second, independent way to measure global temperatures in satellites.  These devices may have their own issues, but they are not subject to urban heat biases or location biases and further actually measure most of the Earth’s surface, rather than just individual points that are sometimes scores or hundreds of miles apart.  This independent method of measurement has shown undoubted warming since 1979, though not since the late 1990’s.
  6. As is usual in such debates, I find words like “fabrication”, “lies”,  and “myth” to be less than helpful.  People can be totally wrong, and refuse to confront their biases, without being evil or nefarious.

To these I will add a #7: The notion that satellite results are somehow pure and unadjusted is just plain wrong. The satellite data set takes a lot of mathematical effort to get right, something that Roy Spencer, who does this work (and is considered in the skeptic camp), will be the first to tell you. Satellites have to be adjusted for a number of effects. They have advantages over ground measurement because they cover most all of the Earth, they are not subject to urban heat biases, and they bring some technological consistency to the measurement. However, the satellites used are constantly dying off and being replaced, orbits decay and change, and thus times of observation of different parts of the globe change [to their credit, the satellite folks release all their source code for correcting these things]. I have become convinced the satellites, net of all the issues with both technologies, provide a better estimate, but neither approach is perfect.
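For readers curious about the Monte Carlo experiment mentioned in the TOBS bullet above, here is a minimal sketch of the same idea in Python rather than a spreadsheet. The diurnal cycle, the noise levels, and the once-a-day reset of a max/min thermometer are all simplifying assumptions of mine; this is not NOAA's actual TOBS procedure, just a demonstration of why the hour at which you cut the 24-hour window can bias monthly statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_month(obs_hour, n_days=31, n_trials=2000):
    """Mean of daily maximum temperatures for a month when the max/min
    thermometer is read and reset once per day at obs_hour (0-23).

    Toy climate: each day gets a random mean level, a sinusoidal diurnal
    cycle peaking mid-afternoon, and hourly noise. Illustration only."""
    results = np.empty(n_trials)
    hours = np.arange(n_days * 24)
    diurnal = 8.0 * np.sin(2 * np.pi * (hours % 24 - 9) / 24)  # peak near 3 pm
    for trial in range(n_trials):
        day_means = 20.0 + 4.0 * rng.standard_normal(n_days)   # day-to-day weather
        temps = np.repeat(day_means, 24) + diurnal + rng.standard_normal(hours.size)
        # observation "days" run from obs_hour to obs_hour the next day
        maxima = [temps[s:s + 24].max() for s in range(obs_hour, hours.size - 24, 24)]
        results[trial] = np.mean(maxima)
    return results.mean()

midnight = simulate_month(obs_hour=0)
afternoon = simulate_month(obs_hour=17)   # typical late-afternoon reading
print(f"mean daily max, midnight reset: {midnight:.2f} C")
print(f"mean daily max, 5 pm reset:     {afternoon:.2f} C")
print(f"apparent bias from observation time: {afternoon - midnight:+.2f} C")
```

With these made-up numbers the late-afternoon reset comes out warmer than the midnight reset even though the simulated climate is identical in both runs (a hot afternoon can be counted against two consecutive observation days). That is the sense in which a TOBS correction is valid in principle; whether the corrections actually applied are the right size is the part that cannot be checked without the source code and raw data.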

Those Who Follow Climate Will Definitely Recognize This

This issue will be familiar to anyone who has spent time with temperature graphs. We can ask ourselves whether 1 degree of global warming is a lot when it is small compared to the seasonal, or even intra-day, variation you would find in most locations. That is not a trick question. It might be important, but how important an audience considers it may well be related to how one chooses to graph it. Take this example from an entirely unrelated field:

Last spring, Adnan sent me a letter about … something, I can’t even remember exactly what. But it included these two graphs that he’d drawn out in pencil. With no explanation. There was just a Post-it attached to the back of one of the papers that said: “Could you please hold these 2 pages until we next speak? Thank you.”

Here’s what he sent:

[hand-drawn graph: price of tea at 7-11]

[hand-drawn graph: price of tea at C-Mart]

This was curious. It crossed my mind that Adnan might be … off his rocker in some way. Or, more excitingly, that these graphs were code for some top-secret information too dangerous for him to send in a letter.

But no. These graphs were a riddle that I would fail to solve when we next spoke, a couple of days later.

Adnan: Now, so would you prefer, as a consumer, would you rather purchase at a store where prices are consistent or items from a store where the prices fluctuate?

Sarah: I would prefer consistency.

Adnan: That makes sense. Especially in today’s economy. So if you had to choose, which store would you say has more consistent prices?

Sarah: 7-11 is definitely more consistent.

Adnan: As compared to…?

Sarah: As compared to C-Mart, which is going way up and down.

Look again, Adnan said. Right. Their prices are exactly the same. It’s just that the graph of C-Mart prices is zoomed way in — the y-axis is in much smaller cost increments — so it looks like dramatic fluctuations are happening. And he made the pencil lines much darker and more striking in the C-Mart graph, so it looks more…sinister or something.
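The same trick is easy to reproduce with any plotting library. The sketch below draws one invented price series twice, changing nothing but the y-axis range; the numbers and the store framing are made up purely to mirror the anecdote.

```python
import numpy as np
import matplotlib.pyplot as plt

# One invented price series -- the same numbers are drawn on both panels.
months = np.arange(12)
price = 1.50 + 0.03 * np.sin(months)          # fluctuates by only a few cents

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))

ax1.plot(months, price, color="gray")
ax1.set_ylim(0, 5)                             # wide axis: looks perfectly flat
ax1.set_title("Price of tea, wide y-axis")

ax2.plot(months, price, color="black", linewidth=2)
ax2.set_ylim(1.46, 1.54)                       # zoomed axis: same data looks volatile
ax2.set_title("Price of tea, zoomed y-axis")

for ax in (ax1, ax2):
    ax.set_xlabel("month")
    ax.set_ylabel("dollars")

plt.tight_layout()
plt.show()
```

The identical series reads as rock-steady on the wide axis and wildly volatile on the zoomed one, which is exactly the choice a chart-maker faces when deciding whether to plot a degree of warming against a scale of tenths of a degree or against the tens of degrees of seasonal swing.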

Reconciling Conflicting Climate Claims

Cross-posted from Coyoteblog

At Real Science, Steven Goddard claims this is the coolest summer on record in the US.

The NOAA reports that both May and June were the hottest on record.

It used to be that the media would reconcile such claims, and one might learn something interesting from that reconciliation, but now all we have are mostly-crappy fact checks with Pinocchio counts. Both these claims have truth on their side, though the NOAA report is more comprehensively correct. Still, we can learn something by putting these analyses in context and by reconciling them.

The NOAA temperature data for the globe does indeed show May and June as the hottest on record. However, one should note a couple of things:

  • The two monthly records do not change the trend over the last 10-15 years, which has basically been flat. We are hitting records because we are sitting on a plateau that is higher than the rest of the last century (at least in the NOAA data). It only takes small positive excursions to reach all-time highs; a short simulation after this list illustrates the point.
  • There are a number of different temperature databases that measure the temperature in different ways (e.g. satellite vs. ground stations) and then adjust those raw readings using different methodologies. While the NOAA database is showing all-time highs, other databases, such as satellite-based ones, are not.
  • The NOAA database has been criticized for past manual adjustments to temperatures which increase the warming trend. Without these adjustments, temperatures during certain parts of the 1930s (think: Dust Bowl) would be higher than today. This was discussed here in more depth. As is usual when looking at such things, some of these adjustments are absolutely appropriate and some can be questioned. However, blaming the whole of the warming signal on such adjustments is just wrong; satellite databases, which have no similar adjustment issues, have shown warming, at least between 1979 and 1999.
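To see the first bullet concretely, here is a toy simulation of my own (the numbers are invented and are not any agency's data): a series that warms for a century and then goes flat keeps producing "warmest years on record" from noise alone.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented anomaly series: steady warming through "1999", then a dead-flat
# plateau from 2000 on. Purely illustrative; these are not any agency's data.
years = np.arange(1900, 2015)
underlying = np.where(years <= 1999, 0.007 * (years - 1900), 0.007 * 99)

plateau_years_in_top_ten = []
for _ in range(1000):
    anomaly = underlying + 0.06 * rng.standard_normal(years.size)  # weather noise
    warmest_ten = years[np.argsort(anomaly)[-10:]]
    plateau_years_in_top_ten.append(int((warmest_ten >= 2000).sum()))

print("average count of top-ten-warmest years that fall on the flat plateau:",
      round(float(np.mean(plateau_years_in_top_ten)), 1))
# The underlying level has not risen at all since 2000, yet most of the
# "warmest years on record" land on the plateau: when a series sits at a level
# above the rest of its history, ordinary noise keeps topping the list.
```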

The Time article linked above illustrated the story of these record months with a video partially on wildfires.  This is a great example of how temperatures are indeed rising but media stories about knock-on effects, such as hurricanes and fires, can be full of it.  2014 has actually been a low fire year so far in the US.

So the world is undeniably on the warm side of average (I won't say warmer than normal, because what is "normal"?). So how does Goddard get this as the coolest summer on record for the US?

Well, the first answer, and it is an important one to remember, is that US temperatures do not have to follow global temperatures, at least not tightly.  While the world warmed 0.5-0.7 degrees C from 1979-1999, the US temperatures moved much less.  Other times, the US has warmed or cooled more than the world has.  The US is well under 5% of the world’s surface area.  It is certainly possible to have isolated effects in such an area.  Remember the same holds true the other way — heat waves in one part of the world don’t necessarily mean the world is warming.

But we can also learn something that is seldom discussed in the media by looking at Goddard’s chart:

[Goddard's chart of USHCN readings above 90°F]

First, I will say that I am skeptical of any chart that uses “all USHCN” stations because the number of stations and their locations change so much.  At some level this is an apples to oranges comparison — I would be much more comfortable to see a chart that looks at only USHCN stations with, say, at least 80 years of continuous data.  In other words, this chart may be an artifact of the mess that is the USHCN database.

However, it is possible that this is correct even with a better data set and against a backdrop of warming temperatures. Why? Because this is a metric of high temperatures. It looks at the number of times a station reads a high temperature over 90F. At some level this is a clever chart, because it takes advantage of a misconception most people, including most people in the media, have: that global warming plays out in higher daytime high temperatures.

But in fact this does not appear to be the case.  Most of the warming we have seen over the last 50 years has manifested itself as higher nighttime lows and higher winter temperatures.  Both of these raise the average, but neither will change Goddard’s metric of days above 90F.  So it is perfectly possible Goddard’s chart is right even if the US is seeing a warming trend over the same period.  Which is why we have not seen any more local all-time daily high temperature records set recently than in past decades.  But we have seen a lot of new records for high low temperature, if that term makes sense.  Also, this explains why the ratio of daily high records to daily low records has risen — not necessarily because there are a lot of new high records, but because we are setting fewer low records.  We can argue about daytime temperatures but nighttime temperatures are certainly warmer.
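Here is a minimal numerical sketch of that distinction, using invented numbers for a single summer: adding all of the warming to the nighttime lows raises the average but leaves a days-above-90°F metric untouched, while putting the same warming into the daytime highs moves both.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented daily highs and lows for one summer (degrees F).
n_days = 92
highs = 86 + 4 * rng.standard_normal(n_days)
lows = 62 + 4 * rng.standard_normal(n_days)

def summarize(label, hi, lo):
    mean_temp = (hi + lo).mean() / 2
    days_over_90 = int((hi > 90).sum())
    print(f"{label:20s} mean {mean_temp:5.1f} F   days over 90F: {days_over_90}")

summarize("baseline summer", highs, lows)

# Add 2F of warming entirely to the nighttime lows: the average goes up,
# but the metric Goddard uses -- readings above 90F -- is unchanged.
summarize("warmer nights only", highs, lows + 2.0)

# For contrast, put the same 2F into the daytime highs instead:
# now the count of 90F+ days jumps as well.
summarize("warmer days only", highs + 2.0, lows)
```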

This chart shows an example with low and high temperatures over time at Amherst, MA  (chosen at random because I was speaking there).  Note that recently, most warming has been at night, rather than in daily highs.

My Thoughts on Steven Goddard and His Fabricated Temperature Data Claim

Cross-posted from Coyote Blog.

Steven Goddard of the Real Science blog has a study that claims that US real temperature data is being replaced by fabricated data.  Christopher Booker has a sympathetic overview of the claims.

I believe that there is both wheat and chaff in this claim, and I would like to try to separate the two as best I can. I don't have time to write a well-organized article, so here is just a list of thoughts:

  1. At some level it is surprising that this is suddenly news. Skeptics have criticized the adjustments in the surface temperature database for years.
  2. There is certainly a signal to noise ratio issue here that mainstream climate scientists have always seemed insufficiently concerned about. Specifically, the raw data for US temperatures is mostly flat, such that the manual adjustments to the temperature data set are about equal in magnitude to the total warming signal. When the entire signal one is trying to measure is equal to the manual adjustments one is making to the measurements, it probably makes sense to put a LOT of scrutiny on the adjustments. (This is a post from 7 years ago discussing these adjustments. Note that the adjustments shown there are smaller than the current ones in the database, as they have since been increased, though I can no longer find a similar NOAA chart discussing the adjustments.)
  3. The NOAA HAS made adjustments to US temperature data over the last few years that have increased the apparent warming trend. These changes in adjustments have not been well-explained. In fact, they have not really been explained at all, and have only been detected by skeptics who happened to archive old NOAA charts and created comparisons like the one below. Here is the before and after animation (pre-2000 NOAA US temperature history vs. post-2000). History has been cooled and modern temperatures have been warmed from where they were previously being shown by the NOAA. This does not mean the current version is wrong, but since the entire US warming signal was effectively created by these changes, it is not unreasonable to ask for a detailed reconciliation (particularly when the folks preparing the chart all believe that temperatures are going up, and so would be predisposed to treating a flat temperature chart like the earlier version as wrong and in need of correction).
  4. However, manual adjustments are not, as some skeptics seem to argue, wrong or biased in all cases.  There are real reasons for manual adjustments to data — for example, if GPS signal data was not adjusted for relativistic effects, the position data would quickly get out of whack.  In the case of temperature data:
    • Data is adjusted for shifts in the start/end time for a day of measurement away from local midnight (i.e., if you average 24 hours starting and stopping at noon). This is called Time of Observation, or TOBS. When I first encountered this, I was just sure it had to be BS. For a month of data, you are only shifting the data set by 12 hours, or about 1/60 of the month. Fortunately for my self-respect, before I embarrassed myself I created a spreadsheet to Monte Carlo some temperature data and play around with this issue. I convinced myself the Time of Observation adjustment is valid in theory, though I have no way to validate its magnitude (one of the problems with all of these adjustments is that NOAA and other data authorities do not release the source code or raw data to show how they come up with these adjustments). I do think it is valid in science to question a finding, even without proof that it is wrong, when the authors of the finding refuse to share replication data. Steven Goddard, by the way, believes time of observation adjustments are exaggerated and do not follow NOAA's own specification.
    • Stations move over time. A simple example is if a station is on the roof of a building and that building is demolished, it has to move somewhere else. In an extreme example the station might move to a new altitude or a slightly different micro-climate. There are adjustments in the database for these sorts of changes. Skeptics have occasionally challenged these, but I have no reason to believe that the authors are not using best efforts to correct for these effects (though again the authors of these adjustments bring criticism on themselves for not sharing replication data).
    • The technology the station uses for measurement changes (e.g. thermometers to electronic devices, one type of electronic device to another, etc.)   These measurement technologies sometimes have known biases.  Correcting for such biases is perfectly reasonable  (though a frustrated skeptic could argue that the government is diligent in correcting for new cooling biases but seldom corrects for warming biases, such as in the switch from bucket to water intake measurement of sea surface temperatures).
    • Even if the temperature station does not move, the location can degrade. The clearest example is a measurement point that once was in the country but has been engulfed by development (here is one example — this at one time was the USHCN measurement point with the most warming since 1900, but it was located in an open field in 1900 and ended up in an asphalt parking lot in the middle of Tucson.) Since urban heat islands can add as much as 10 degrees F to nighttime temperatures, this can create a warming signal over time that is related to a particular location, and not the climate as a whole. The effect is undeniable — my son easily measured it in a science fair project. The effect it has on temperature measurement is hotly debated between warmists and skeptics. Al Gore originally argued that there was no bias because all measurement points were in parks, which led Anthony Watts to pursue the surface station project where every USHCN station was photographed and documented. The net result was that most of the sites were pretty poor. Whatever the case, there is almost no correction in the official measurement numbers for urban heat island effects, and in fact last time I looked at it the adjustment went the other way, implying urban heat islands have become less of an issue since 1930. The folks who put together the indexes argue that they have smoothing algorithms that find and remove these biases. Skeptics argue that they just smear the bias around over multiple stations. The debate continues.
  5. Overall, many mainstream skeptics believe that actual surface warming in the US and the world has been about half what is shown in traditional indices, an amount that is then exaggerated by poorly crafted adjustments and uncorrected heat island effects.  But note that almost no skeptic I know believes that the Earth has not actually warmed over the last 100 years.  Further, warming since about 1980 is hard to deny because we have a second, independent way to measure global temperatures in satellites.  These devices may have their own issues, but they are not subject to urban heat biases or location biases and further actually measure most of the Earth’s surface, rather than just individual points that are sometimes scores or hundreds of miles apart.  This independent method of measurement has shown undoubted warming since 1979, though not since the late 1990’s.
  6. As is usual in such debates, I find words like “fabrication”, “lies”,  and “myth” to be less than helpful.  People can be totally wrong, and refuse to confront their biases, without being evil or nefarious.

Postscript: Not exactly on topic, but one thing that is never, ever mentioned in the press yet is generally true about temperature trends: almost all of the warming we have seen is in nighttime temperatures, rather than daytime. Here is an example from Amherst, MA (because I just presented up there). This is one reason why, despite claims in the media, we are not hitting any more all-time daytime highs than we would expect from a normal distribution. If you look at temperature stations for which we have 80+ years of data, fewer than 10% of the 100-year highs were set in the last 10 years. We are setting an unusual number of records for high low temperature, if that makes sense.

[chart: high and low temperatures over time at Amherst, MA]

Computer Generated Global Warming

Way back, I had a number of posts on surface temperature adjustments that seemed to artificially add warming to the historical record, here for example. Looking at the adjustments, it seemed odd that they implied improving station location quality and reduced warming bias in the measurements, despite Anthony Watts' work calling both assumptions into question.

More recently, Steve Goddard has been on a roll, looking at GISS adjustments in the US.   He’s found that the essentially flat raw temperature data:

Has been adjusted upwards substantially to show a warming trend that is not in the raw data.  The interesting part is that most of this adjustment has been added in the last few years.  As recently as 1999, the GISS’s own numbers looked close to those above.   Goddard backs into the adjustments the GISS has made in the last few years:

So, supposedly, some phenomenon has that shape. After all, surely the addition of this little hockey-stick-shaped data curve to the raw data is not arbitrary, done simply to get the answer they want; the additions have to represent the results of some heretofore unaccounted-for bias in the raw data. So what is it? What bias or changing bias has this shape?
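For what it is worth, the "backing into" step is nothing more exotic than subtracting one published vintage of the series from another. Here is a sketch with invented numbers standing in for the real GISS series; the shapes and magnitudes are mine, not Goddard's or NASA's.

```python
import numpy as np

# Invented stand-ins for two vintages of the same US temperature index.
# In the real exercise these would be the older/raw GISS series and the
# currently published one for the same years.
years = np.arange(1895, 2010)
raw_series = 0.10 * np.sin((years - 1895) / 8.0)                     # essentially flat
adjusted = raw_series + np.clip((years - 1960) * 0.012, 0, None)     # warming added late

implied_adjustment = adjusted - raw_series   # the curve one "backs into"

for y in (1900, 1950, 1980, 2000, 2009):
    i = int(y - years[0])
    print(f"{y}: implied adjustment {implied_adjustment[i]:+.2f} C")
# Whatever shape this difference curve has is, by definition, the shape of the
# bias the adjustments are supposed to be correcting; the question in the post
# is what physical bias would have a hockey-stick shape like this.
```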

A Good Idea

This strikes me as an excellent idea — there are a lot of things in climate that will remain really hard to figure out, but a scientifically and statistically sound approach to creating a surface temperature record should not be among them.  It is great to see folks moving beyond pointing out the oft-repeated flaws in current surface records (e.g. from NOAA, GISS, and the Hadley Center) and deciding to apply our knowledge of those flaws to creating a better record.   Bravo.

Warming in the historic record is not going away. It may be different by a few tenths, but I am not sure it's going to change the arguments one way or another. Even the current global temperature metrics, which skeptics consider exaggerated, fall far short of the historic warming that would be consistent with current catastrophic high-CO2-sensitivity models. So a few tenths higher or lower will not change this: heroic assumptions of tipping points and cooling aerosols will still be needed either way to reconcile aggressive warming forecasts with history.

What can be changed, however, is the stupid amount of time we spend arguing about a topic that should be fixable. It is great to see a group trying to honestly create such a fix so we can move on to more compelling topics. Some of the problems, though, are hard to fix; for example, there has simply been a huge decrease over the last 20 years in the number of stations without urban biases, and it will be interesting to see how the team works around this.

A Great Example of How We Should Be Playing

I get irritated by the team-sport aspects of the climate debate, where we race to defend and attack certain work because it gives an answer we like or don’t like, rather than based on its methodology.  I confess to getting sucked into this from time to time, though I have also tried to call BS on skeptical work I thought was misguided (e.g. the Virginia AG witch hunt against Michael Mann) and I respect folks like Steve McIntyre who are controversial without falling too often into the team-sports trap.

For this reason I want to cite an article by Anthony Watts in which he criticizes, rightly I think, a skeptic for pushing a fraud/cover-up story that simply does not exist. Ironically, the article occurs just days after Joe Romm, whose site would never tolerate the dissenting opinions in its comments section that Watts allows, generally equates Watts' past work with the 10:10 video blowing up children. (More comments on the Romm post here.)

UHI and Arctic Warming

Ed Caryl has a good post correlating most of the measured warming in the Arctic with urban heat islands near key temperature stations. He goes on to show that 15 stations with heat island effects near the station show substantial warming, while 9 stations without such effects show little or no warming (in fact, their annual temperatures are amazingly well correlated with the Atlantic Multidecadal Oscillation, or AMO).

Here is what I do not like about his work, at least as I understand it: I would greatly prefer to see this kind of work done under some sort of double-blind system. One group, without any knowledge of the station temperature numbers, sorts the stations, while another works on the temperature trends. This way there is no danger of the sorting decisions being biased by prior knowledge of the temperature results (something that arguably happens all the time in dendro-climatology).
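Here is a sketch of the kind of protocol I have in mind, with hypothetical field names and a deliberately dumb classifier standing in for the human sorting step. The point is only the information flow: the group labeling stations never sees a temperature.

```python
from dataclasses import dataclass
from typing import Dict, List

import numpy as np

@dataclass
class Station:
    station_id: str
    metadata: Dict[str, str]   # siting photos, land-use notes, etc.
    temps: np.ndarray          # annual mean temperatures

def classify_siting(metadata: Dict[str, str]) -> str:
    """Group 1: decide 'urban'/'rural' from metadata alone.
    This function never receives temperature data, so its decisions
    cannot be influenced by whether a station shows warming."""
    return "urban" if metadata.get("land_use") == "developed" else "rural"

def trend_per_century(temps: np.ndarray) -> float:
    """Group 2: ordinary least-squares trend, in degrees per century."""
    years = np.arange(temps.size)
    return float(np.polyfit(years, temps, 1)[0] * 100)

def blinded_analysis(stations: List[Station]) -> Dict[str, float]:
    # Step 1: classification done on stripped records (metadata only).
    labels = {s.station_id: classify_siting(s.metadata) for s in stations}
    # Step 2: trends computed independently, then joined to the labels.
    groups: Dict[str, List[float]] = {"urban": [], "rural": []}
    for s in stations:
        groups[labels[s.station_id]].append(trend_per_century(s.temps))
    return {label: float(np.mean(trends)) for label, trends in groups.items() if trends}
```

The same separation could of course be enforced organizationally rather than in code; the code just makes the information barrier explicit.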

Why It Is Good to Have Two Sides of A Debate

With climate alarmists continuing to declare the climate debate to be over and asking skeptics to just go away, we are reminded again why it is useful to have two sides in a debate. Few people, on any side of any question, are typically skeptical of data that support their pet hypotheses. So, in order to have a full range of skepticism and replication applied to all findings, it is helpful to have people passionately on both sides of a proposition.

I am reminded of this seeing how skeptics finally convinced the NOAA that one of its satellites had gone wonky, producing absurd data (e.g. Great Lakes temperatures in the 400-600F range). Absolutely typically, the NOAA initially blamed skeptics for fabricating the data:

NOAA’s Chuck Pistis went into whitewash mode on first hearing the story about the worst affected location, Egg Harbor, set by his instruments onto fast boil. On Tuesday morning Pistis loftily declared, “I looked in the archives and I find no image with that time stamp. Also we don’t typically post completely cloudy images at all, let alone with temperatures. This image appears to be manufactured for someone’s entertainment.”

Later he went on to own up to the problem, but not before implying at various times that the data is a) trustworthy  b) not trustworthy  c) placed online by hand with verification and d) posted online automatically with no human intervention.

This was the final NOAA position, which is absurd to me:

“NOTICE: Due to degradation of a satellite sensor used by this mapping product, some images have exhibited extreme high and low surface temperatures. Please disregard these images as anomalies. Future images will not include data from the degraded satellite and images caused by the faulty satellite sensor will be/have been removed from the image archive.”

OK, so 600F readings will be thrown out, but how do we have any confidence the rest of the readings are OK? Just because they may read in a reasonable range, e.g., 59F, the NOAA is just going to assume those readings are OK?

Computers are Causing Global Warming

At least, that is, in Nepal. Willis Eschenbach has an interesting post looking into the claim that Nepal has seen one of the highest warming rates in the world (thus threatening Himalayan glaciers, etc., etc.). It turns out there is one (1) GISS station in Nepal, and oddly enough the raw data shows a cooling trend. Only the intervention of NASA computers heroically transforms a cooling trend into the strong warming trend we all know must really be there, because Al Gore says it's there, and he got a Nobel Prize, didn't he?

GISS has made a straight-line adjustment of 1.1°C in twenty years, or 5.5°C per century. They have changed a cooling trend to a strong warming trend … I’m sorry, but I see absolutely no scientific basis for that massive adjustment. I don’t care if it was done by a human using their best judgement, done by a computer algorithm utilizing comparison temperatures in India and China, or done by monkeys with typewriters. I don’t buy that adjustment, it is without scientific foundation or credible physical explanation.

At best that is shoddy quality control of an off-the-rails computer algorithm. At worst, the aforesaid monkeys were having a really bad hair day. Either way I say adjusting the Kathmandu temperature record in that manner has no scientific underpinnings at all. We have one stinking record for the whole country of Nepal, which shows cooling. GISS homogenizes the data and claims it wasn’t really cooling at all, it really was warming, and warming at four degrees per century at that

In updates to the post, Eschenbach and his readers track down what is likely driving this bizarre adjustment in the GISS methodology.

Might As Well Be Walking on the Sun

Steve Goddard and Anthony Watts have a series of posts on an old favorite topic on this site: how data manipulation back in the climate office is creating a lot of the "measured" warming. This particular example is right here in Arizona, and features several sites my son and I surveyed for Anthony's site. They have a followup on another Arizona station here. Check out all the asphalt:

This is a hilariously bad siting. It demonstrates how small things can sometimes have big effects. The MMTS sensor has a very limited cable length. This does not mean that it only comes with a short cable (raising the question of why they can't just buy a longer one), but that it can only have a short cable due to signal amplification issues. As a result, we get this terrible siting because the sensor needs to be close to the building, whereas even a hundred yards away there were much better locations.

Carefree is a fairly rural (at least suburban) low density town with lots of undeveloped land.  They had to work to get a siting this bad.  A monkey throwing darts at a map of the area would have gotten a better siting.

What is the Russian Word for “Minus”? And Does it Even Start with an M?

We have discussed temperature measurement on this blog a number of times, focusing particularly on signal-to-noise issues, where errors and manual corrections in surface temperature records tend to be larger than the global warming signal we are trying to measure. Anthony Watts has an interesting post on human error as related to reporting of temperature numbers over a large part of the measurement network.

With NASA GISS admitting that missing minus signs contributed to the hot anomaly over Finland in March, and with the many METAR coding error events I’ve demonstrated on opposite sides of the globe, it seems reasonable to conclude that our METAR data from cold places might very well be systemically corrupted with instances of coding errors.

Signal to Noise

The Hockey Schtick points to a study on Pennsylvania temperatures that illustrates a point I have been making for a while:

A new SPPI paper examines the raw and adjusted historical temperature records for Pennsylvania and finds the mean temperature trend from 1895 to 2009 to be minus .08°C/century, but after unexplained adjustments the official trend becomes positive .7°C/century. The difference between the raw and adjusted data exceeds the .6°C/century in global warming claimed for the 20th century.

I think people are too quick to jump onto the conspiracy bandwagon and paint these adjustments as scientists forcing the outcome they want. In fact, as I have written before, some of these adjustments (such as adjustments for changes in time of observation) are essential. Some, such as how the urbanization adjustments are done (or not done), are deeply flawed. But the essential point is that the signal-to-noise ratio here is really, really low. The signal we are trying to measure (0.6C or so of warming) is smaller than the noise, even ignoring measurement and other errors.
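For concreteness, the trends being compared in the quote are just least-squares slopes fit to annual series. The sketch below does the computation on invented series constructed to have roughly the quoted slopes; it is an illustration of the arithmetic, not a reproduction of the SPPI analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented annual means for 1895-2009, built so the raw series has roughly the
# quoted -0.08 C/century slope and the adjustments add roughly +0.78 C/century.
years = np.arange(1895, 2010)
raw = 10.0 - 0.0008 * (years - 1895) + 0.15 * rng.standard_normal(years.size)
adjustment = 0.0078 * (years - 1895)       # stand-in for the unexplained adjustments
adjusted = raw + adjustment

def trend_per_century(series):
    return float(np.polyfit(years, series, 1)[0] * 100)

print(f"raw trend:      {trend_per_century(raw):+.2f} C/century")
print(f"adjusted trend: {trend_per_century(adjusted):+.2f} C/century")
print(f"difference:     {trend_per_century(adjusted) - trend_per_century(raw):+.2f} C/century")
# The difference between the two fitted trends is larger than the ~0.6 C/century
# of 20th-century warming being claimed, which is the signal-to-noise point.
```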

Knowledge Laundering

Charlie Martin is looking through some of James Hansen’s emails and found this:

[For] example, we extrapolate station measurements as much as 1200 km. This allows us to include results for the full Arctic. In 2005 this turned out to be important, as the Arctic had a large positive temperature anomaly. We thus found 2005 to be the warmest year in the record, while the British did not and initially NOAA also did not. …

So he is trumpeting this approach as an innovation? Does he really think he has a better answer because he has extrapolated station measurements by 1200 km (746 miles)? This is roughly equivalent, in distance, to extrapolating the temperature in Fargo to Oklahoma City. This just represents for me the kind of false precision, the over-estimation of knowledge about a process, that so characterizes climate research. If we don't have a thermometer near Oklahoma City, then we don't know the temperature in Oklahoma City, and let's not fool ourselves that we do.
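The Fargo-to-Oklahoma City comparison is easy to check with a great-circle distance calculation; the coordinates below are approximate city-center values, and the standard mean Earth radius is assumed.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    earth_radius_km = 6371.0
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a))

fargo = (46.88, -96.79)             # approximate city-center coordinates
oklahoma_city = (35.47, -97.52)

d = haversine_km(*fargo, *oklahoma_city)
print(f"Fargo to Oklahoma City: {d:.0f} km ({d * 0.6214:.0f} miles)")
# Comes out around 1,270 km -- on the order of the 1,200 km radius over
# which GISS is willing to extrapolate a single station's reading.
```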

I had a call from a WaPo reporter today about modeling and modeling errors.  We talked about a lot of things, but my main point was that whether in finance or in climate, computer models typically perform what I call knowledge laundering.   These models, whether forecasting tools or global temperature models like Hansen’s, take poorly understood descriptors of a complex system in the front end and wash them through a computer model to create apparent certainty and precision.  In the financial world, people who fool themselves with their models are called bankrupt (or bailed out, I guess).  In the climate world, they are Oscar and Nobel Prize winners.

Update: To the 1200 km issue, this is somewhat related.

Problems in the Surface Temperature Record

Readers of this site won't be surprised at reports of problems in the surface temperature record. Joe D'Aleo and Anthony Watts have teamed up on a new paper published by SPPI analyzing the surface temperature record in depth. I have only skimmed it, but it looks terrific (and includes a few weather station site surveys and photos by yours truly). From the summary:

1. Instrumental temperature data for the pre-satellite era (1850-1980) have been so widely, systematically, and unidirectionally tampered with that it cannot be credibly asserted there has been any significant “global warming” in the 20th century.

2. All terrestrial surface-temperature databases exhibit very serious problems that render them useless for determining accurate long-term temperature trends.

3. All of the problems have skewed the data so as greatly to overstate observed warming both regionally and globally.

The Hockey Stick

[chart: difference between rural and urban station temperatures]

Via WUWT, Jeff Id takes a look at the GHCN temperature database, specifically comparing warming in urban vs. rural locations. As found in a number of other studies, about half of the 20th century warming in the surface temperature record may be due to uncorrected urban biases.

Some past takes on the same subject:

Station Adjustments

The American Thinker blog is running a daily series of charts showing raw and "value added" (adjusted) station data. The amount of the global warming signal that comes from manual adjustments rather than actual measurements is something we have discussed here before, but you can see it in each of their daily examples.

Unadjusted and adjusted temperatures at Kremsmuenster, Austria


Source: AppInSys (Applied Information Systems) using NOAA/GHCN database for Kremsmuenster, Austria

You can create the same charts for any station here.

Defending the Tribe

This is a really interesting email string from the CRU emails, via Steve McIntyre:

June 4, 2003 Briffa to Cook 1054748574
On June 4, 2003, Briffa, apparently acting as editor (presumably for Holocene), contacted his friend Ed Cook of Lamont-Doherty in the U.S. who was acting as a reviewer telling him that “confidentially” he needed a “hard and if required extensive case for rejecting”, in the process advising Cook of the identity and recommendation of the other reviewer. There are obviously many issues involved in the following as an editor instruction:

From: Keith Briffa
To: Edward Cook
Subject: Re: Review- confidential REALLY URGENT
Date: Wed Jun 4 13:42:54 2003

I am really sorry but I have to nag about that review – Confidentially I now need a hard and if required extensive case for rejecting – to support Dave Stahle’s and really as soon as you can. Please
Keith

Cook to Briffa, June 4, 2003
In a reply the same day, Cook told Briffa about a review for Journal of Agricultural, Biological, and Environmental Sciences of a paper which, if not rejected, could “really do some damage”. Cook goes on to say that it is an “ugly” paper to review because it is “rather mathematical” and it “won’t be easy to dismiss out of hand as the math appears to be correct theoretically”. Here is the complete email:

Hi Keith,
Okay, today. Promise! Now something to ask from you. Actually somewhat important too. I got a paper to review (submitted to the Journal of Agricultural, Biological, and Environmental Sciences), written by a Korean guy and someone from Berkeley, that claims that the method of reconstruction that we use in dendroclimatology (reverse regression) is wrong, biased, lousy, horrible, etc. They use your Tornetrask recon as the main whipping boy. I have a file that you gave me in 1993 that comes from your 1992 paper. Below is part of that file. Is this the right one? Also, is it possible to resurrect the column headings? I would like to play with it in an effort to refute their claims. If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically, but it suffers from the classic problem of pointing out theoretical deficiencies, without showing that their improved inverse regression method is actually better in a practical sense. So they do lots of monte carlo stuff that shows the superiority of their method and the deficiencies of our way of doing things, but NEVER actually show how their method would change the Tornetrask reconstruction from what you produced. Your assistance here is greatly appreciated. Otherwise, I will let Tornetrask sink into the melting permafrost of northern Sweden (just kidding of course).
Cheers,
Ed

A few observations:

  1. For guys who supposedly represent the consensus science of tens of thousands of scientists, these guys sure have a bunker mentality.
  2. I would love an explanation of how math can have theoretical deficiencies but be better in a practical sense.  In the practical sense of … giving the answer one wants?
  3. The general whitewash answer to all the FOIA obstructionism is that these are scientists doing important work, not to be bothered by nutcases trying to waste their time. But here is exactly the hypocrisy: the email author says the third party's study is deficient because its authors don't demonstrate how their mathematical approach would change the answer the hockey team is getting. But no third party can do this, because the hockey team won't release the data needed for replication. This kind of data (the kind needed to check the mathematical methodologies behind the hockey stick regressions) is exactly what Steve McIntyre et al. have been trying to get. Ed Cook is explaining here, effectively, why release of this data is indeed important.
  4. At the very same time these guys are saying to the world not to listen to critics because they are not peer-reviewed, they are working as hard as they can back-channel to keep their critics out of peer-reviewed literature they control.
  5. For years I have said that one problem with the hockey team is not just that the team is insular, but that the reviewers of their work are the same guys doing the work. And now we see that these same guys are asked to review the critics of their work.