Manual Adjustments in the Temperature Record

I have been getting inquiries from folks asking me what I think about stories like this one, where Paul Homewood has been looking at the manual adjustments to raw temperature data and finding that the adjustments actually reverse the trends from cooling to warming.  Here is an example of the comparisons he did:

Raw, before adjustments:



After manual adjustments:



I actually wrote about this topic a few months back, and rather than rewrite the post I will excerpt it below:

I believe that there is both wheat and chaff in this claim [that manual temperature adjustments are exaggerating past warming], and I would like to try to separate the two as best I can.  I don’t have time to write a well-organized article, so here is just a list of thoughts:

  1. At some level it is surprising that this is suddenly news.  Skeptics have criticized the adjustments in the surface temperature database for years.
  2. There is certainly a signal-to-noise issue here that mainstream climate scientists have always seemed insufficiently concerned about.  For example, the raw data for US temperatures is mostly flat, such that the manual adjustments to the temperature data set are about equal in magnitude to the total warming signal.  When the entire signal one is trying to measure is equal to the manual adjustments one is making to the measurements, it probably makes sense to put a LOT of scrutiny on the adjustments.  (Here is a post from 7 years ago discussing these adjustments.  Note that the adjustments shown there are smaller than the current ones in the database, as they have since been increased, though I can no longer find a similar NOAA chart discussing the adjustments.)
  3. The NOAA HAS made adjustments to US temperature data over the last few years that have increased the apparent warming trend.  These changes in adjustments have not been well explained.  In fact, they have not really been explained at all, and have only been detected by skeptics who happened to archive old NOAA charts and created comparisons like the one below.  Here is the before-and-after animation (pre-2000 NOAA US temperature history vs. post-2000).  History has been cooled and modern temperatures have been warmed from where they were previously shown by the NOAA.  This does not mean the current version is wrong, but since the entire US warming signal was effectively created by these changes, it is not unreasonable to ask for a detailed reconciliation (particularly when the folks preparing the chart all believe that temperatures are going up, and so would be predisposed to treating a flat temperature chart like the earlier version as wrong and in need of correction).
  4. However, manual adjustments are not, as some skeptics seem to argue, wrong or biased in all cases.  There are real reasons for manual adjustments to data — for example, if GPS signal data was not adjusted for relativistic effects, the position data would quickly get out of whack.  In the case of temperature data:
    • Data is adjusted for shifts in the start/end time of a day of measurement away from local midnight (i.e. if you average 24 hours starting and stopping at noon).  This is called Time of Observation, or TOBS.  When I first encountered this, I was just sure it had to be BS.  For a month of data, you are only shifting the data set by 12 hours, or about 1/60 of the month.  Fortunately for my self-respect, before I embarrassed myself I created a spreadsheet to Monte Carlo some temperature data and play around with this issue.  I convinced myself the Time of Observation adjustment is valid in theory, though I have no way to validate its magnitude (one of the problems with all of these adjustments is that NOAA and other data authorities do not release the source code or raw data to show how they come up with these adjustments).  I do think it is valid in science to question a finding, even without proof that it is wrong, when the authors of the finding refuse to share replication data.  Steven Goddard, by the way, believes time of observation adjustments are exaggerated and do not follow NOAA’s own specification.
    • Stations move over time.  A simple example: if a station is on the roof of a building and that building is demolished, it has to move somewhere else.  In an extreme example the station might move to a new altitude or a slightly different micro-climate.  There are adjustments in the database for these sorts of changes.  Skeptics have occasionally challenged these, but I have no reason to believe that the authors are not using best efforts to correct for these effects (though again the authors of these adjustments bring criticism on themselves for not sharing replication data).
    • The technology the station uses for measurement changes (e.g. thermometers to electronic devices, one type of electronic device to another, etc.)   These measurement technologies sometimes have known biases.  Correcting for such biases is perfectly reasonable  (though a frustrated skeptic could argue that the government is diligent in correcting for new cooling biases but seldom corrects for warming biases, such as in the switch from bucket to water intake measurement of sea surface temperatures).
    • Even if the temperature station does not move, the location can degrade.  The clearest example is a measurement point that once was in the country but has been engulfed by development  (here is one example — this at one time was the USHCN measurement point with the most warming since 1900, but it was located in an open field in 1900 and ended up in an asphalt parking lot in the middle of Tucson.)   Since urban heat islands can add as much as 10 degrees F to nighttime temperatures, this can create a warming signal over time that is related to a particular location, and not the climate as a whole.  The effect is undeniable — my son easily measured it in a science fair project.  The effect it has on temperature measurement is hotly debated between warmists and skeptics.  Al Gore originally argued that there was no bias because all measurement points were in parks, which led Anthony Watts to pursue the surface station project where every USHCN station was photographed and documented.  The net result was that most of the sites were pretty poor.  Whatever the case, there is almost no correction in the official measurement numbers for urban heat island effects, and in fact last time I looked at it the adjustment went the other way, implying urban heat islands have become less of an issue since 1930.  The folks who put together the indexes argue that they have smoothing algorithms that find and remove these biases.  Skeptics argue that they just smear the bias around over multiple stations.  The debate continues.
  5. Overall, many mainstream skeptics believe that actual surface warming in the US and the world has been about half what is shown in traditional indices, an amount that is then exaggerated by poorly crafted adjustments and uncorrected heat island effects.  But note that almost no skeptic I know believes that the Earth has not actually warmed over the last 100 years.  Further, warming since about 1980 is hard to deny because we have a second, independent way to measure global temperatures in satellites.  These devices may have their own issues, but they are not subject to urban heat biases or location biases, and they actually measure most of the Earth’s surface, rather than just individual points that are sometimes scores or hundreds of miles apart.  This independent method of measurement has shown undoubted warming since 1979, though not since the late 1990s.
  6. As is usual in such debates, I find words like “fabrication”, “lies”,  and “myth” to be less than helpful.  People can be totally wrong, and refuse to confront their biases, without being evil or nefarious.
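The TOBS spreadsheet experiment described in point 4 can be sketched as a small Monte Carlo.  This is only an illustration of the mechanism, not NOAA's actual adjustment; every number below (diurnal amplitude, the size of warm/cold spells, the 5 pm reset hour) is invented for the sketch.  The idea: if a max/min thermometer is reset in the late afternoon, a hot afternoon just before the reset can carry its warmth into the next observational "day," double-counting warm days:

```python
import math
import random

HOURS = 24

def hourly_temps(days, rng):
    """Synthetic hourly temperatures: a diurnal sine cycle (peak mid-afternoon)
    plus a random whole-day warm/cold spell and small hourly noise."""
    temps = []
    for _ in range(days):
        spell = rng.gauss(0, 3)  # day-to-day weather anomaly, deg C (invented)
        for h in range(HOURS):
            diurnal = 8 * math.sin(2 * math.pi * (h - 9) / HOURS)  # max ~3 pm
            temps.append(15 + spell + diurnal + rng.gauss(0, 0.5))
    return temps

def mean_daily_max(temps, reset_hour):
    """Monthly mean of daily maxima when the max/min thermometer is reset at
    reset_hour; each observational 'day' is the 24 hours after a reset."""
    maxima = []
    start = reset_hour
    while start + HOURS <= len(temps):
        maxima.append(max(temps[start:start + HOURS]))
        start += HOURS
    return sum(maxima) / len(maxima)

def average_tobs_bias(months=200, days=31, seed=0):
    """Average warm bias of a 5 pm reset relative to a midnight reset,
    averaged over many simulated months to smooth out the noise."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(months):
        temps = hourly_temps(days, rng)
        total += mean_daily_max(temps, 17) - mean_daily_max(temps, 0)
    return total / months

print(f"average TOBS bias, 5 pm vs midnight reset: {average_tobs_bias():+.2f} deg C")
```

In this toy setup the afternoon-reset series comes out measurably warmer than the midnight-reset series, which is consistent with the direction of the effect: a station that moved its observation time from afternoon to morning would need a correction.  None of this validates the magnitude NOAA actually applies, which is the part that cannot be checked without their code and raw data.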

To these I will add a #7:  The notion that satellite results are somehow pure and unadjusted is just plain wrong.  The satellite data set takes a lot of mathematical effort to get right, something that Roy Spencer, who does this work (and is considered in the skeptic camp), will be the first to tell you.  Satellites have to be adjusted for different things.  They have advantages over ground measurement because they cover almost all of the Earth, they are not subject to urban heat biases, and they bring some technological consistency to the measurement.  However, the satellites used are constantly dying off and being replaced, orbits decay and change, and thus times of observation of different parts of the globe change [to their credit, the satellite folks release all their source code for correcting these things].  I have become convinced the satellites, net of all the issues with both technologies, provide a better estimate, but neither approach is perfect.
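Returning to the smoothing debate in point 4: the algorithms the index builders use compare each station against its neighbors and look for sudden shifts in the difference.  Here is a toy sketch of that idea.  It is emphatically not NOAA's actual pairwise homogenization algorithm; the function name, the 1-degree threshold, and the simple before/after means are all invented for illustration:

```python
def detect_step(series, neighbors, threshold=1.0):
    """Flag the index where the station-minus-neighbors difference shifts most.

    series    -- one station's readings over time
    neighbors -- list of neighboring stations' readings (same length)
    Returns (break_index, shift), or (None, 0.0) if no shift exceeds threshold.
    """
    # Difference between the station and the mean of its neighbors at each step;
    # a genuine regional climate change moves all stations together and cancels
    # out here, while a station-specific break (re-siting, new parking lot) shows
    # up as a step in the difference series.
    diffs = [s - sum(n) / len(n) for s, n in zip(series, zip(*neighbors))]
    best_k, best_shift = None, 0.0
    for k in range(1, len(diffs)):
        before = sum(diffs[:k]) / k
        after = sum(diffs[k:]) / (len(diffs) - k)
        if abs(after - before) > abs(best_shift):
            best_k, best_shift = k, after - before
    return (best_k, best_shift) if abs(best_shift) >= threshold else (None, 0.0)

# A station that jumps 2 degrees at index 5 while its neighbors stay flat:
station = [10.0] * 5 + [12.0] * 5
flat = [[10.0] * 10, [10.0] * 10]
print(detect_step(station, flat))  # prints (5, 2.0)
```

The skeptic objection in point 4 maps onto this sketch directly: once a break is flagged, the usual remedy is to adjust the station toward its neighbors, so if several neighbors share the same bias (e.g. regional urbanization), the correction can smear that bias around rather than remove it.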

4 thoughts on “Manual Adjustments in the Temperature Record”

  1. You have quite a balanced summary. Like you, I come to a similar conclusion that the overall bias in average temperatures is not likely to be very large. This is especially because 70% of the globe is ocean.
    Having looked at all nine of the Paraguayan data sets that Paul Homewood cites, there is an issue that connects to your final point about motives. I found that in the raw data there was a drop in average temperatures in the late 1960s across eight of the surface temperature stations. Anyone seeing this in one temperature station would assume it was due to a re-siting, or poor data, as it contradicts what they “know” about the world. The change in temperature adjustments of about 1C in the late 1960s was the result, cooling the past.

    It is the sudden change in the raw data, with the offsetting change in adjustments, that leads me to conclude the adjustment was not a fabrication or conspiracy.

    Conversely, as we “know” that temperatures are warming, if a step increase in warming occurred, it would be less likely to be thought of as an anomaly, so it would not be adjusted. Or if the natural warming trend was augmented by the UHI, it might not be adjusted for.

    It is quite a subtle distinction, but it could make a non-trivial difference. The way to eliminate this is to have various secondary checks on the results, such as looking for clusters or patterns of adjustments, or attempting to map raw data trends.

    Kevin Marshall

  2. Perhaps you two gentlemen can come up with a valid reason for the following data. It was first posted on WUWT, then pointed to at Real Science, and followed up by Tom Nelson.

    This is not some “climate science nobody who doesn’t know what he is doing” putting stuff together, as they like to claim; this is NOAA hoist by their own published data, which shows that they have changed the 1997 global temperature (not just USA) by over 2 degrees C (4 degrees F) in 17 years of adjustments.

    How many other summarised years of NOAA analysis are out there on the internet that will continue the exposure of what has to be the worst “science” to have ever come out of NASA?

  3. Can you expand more on this comment please? “NOAA and other data authorities do not release the source code or raw data to show how they come up with these adjustments”

    Going on NOAA’s site, it appears they will make source code and raw data available, but is that something different? I am confused.

  4. GLOBAL DRILL for deadly geomagnetic super-storms!

    After the last huge geomagnetic super-storm in 1859, there have recently been SEVERAL alerts for a new, devastating one, given the [NOW AVERTABLE] impacts of these storms*:

    1. 1989: Quebec black-out

    2. 2003: Sweden black-out

    3. 2005: GPS black-out

    4. 2012: Near-miss EXTINCTION event, by a huge solar flare, 9 hours before Earth arrived at that point…**

    Even CIA fmr director J. Woolsey warns of the ElectroMagneticPulse/super-storm existential threat!

    Since CIA has gone public on this, shouldn’t we get prepared?

    Shouldn’t we start building the technology for a plasma shield against catastrophic solar flares?



    WHAT IF a Carrington event happened NOW???

Comments are closed.