A Good Idea

This strikes me as an excellent idea — there are a lot of things in climate that will remain really hard to figure out, but a scientifically and statistically sound approach to creating a surface temperature record should not be among them. It is great to see folks moving beyond pointing out the oft-repeated flaws in the current surface records (e.g. from NOAA, GISS, and the Hadley Centre) and deciding to apply our knowledge of those flaws to creating a better record. Bravo.

Warming in the historic record is not going away. It may shift by a few tenths, but I am not sure it's going to change the arguments one way or another. Even the current global temperature metrics, which skeptics consider exaggerated, fall far short of the historic warming that would be consistent with current catastrophic high-CO2-sensitivity models. So a few tenths higher or lower will not change this: heroic assumptions about tipping points and cooling aerosols will still be needed either way to reconcile aggressive warming forecasts with history.

What can be changed, however, is the stupid amount of time we spend arguing about a topic that should be fixable. It is great to see a group trying honestly to create such a fix so we can move on to more compelling topics. Some of the problems, though, are hard to fix — for example, the number of stations free of urban biases has dropped sharply over the last 20 years, and it will be interesting to see how the team works around this.

13 thoughts on “A Good Idea”

  1. “Some of the problems, though, are hard to fix — for example, the number of stations free of urban biases has dropped sharply over the last 20 years, and it will be interesting to see how the team works around this.”

    This was done on purpose …

  2. The current databases (GISTemp, HadCRUT) are estimated to have 0.3–0.4 K of UHIE or other adjustment error in them. This is not a trivial “few tenths” of a degree. This is the essence of CAGW, or even AGW. Knock that much off the 0.7 K of warming since (whenever, the date always seems to change), and we are back in the normal, pre-1940, post-LIA temperature rebound.

    The UAH and RSS records, as well as the ARGO floats, are not incorporated into Hansen’s temperature profiles for exactly this non-trivial reason. A trend below 1.5 K/century devastates the IPCC meme. RealClimate recently claimed that Hansen's 1988 Scenario B was a good, though "warm," match for the historical record, whereas in fact Scenario C (the no-emissions-after-2000 base case) was the best. Dissembling or in denial, I can't figure out which. At any rate, if the Hansen 2011 temperature record were knocked down by even 0.25 K, 1940 would look pretty much the same as 2010. The "unprecedented" warmth of today would be off the table. The aggressive warming trend would be gone. The "A" in AGW would be in serious jeopardy.

    The entire CAGW case is based on monumental statistics, data "adjustments" and complex computer modelling, not on in-your-face data. CAGW hangs by a fingernail. Trim the fingernails, and it's gone.

    The environmental movement made a mistake in hanging the future of the world's biosphere on stopping the evil monster of CO2. But that doesn't help us. As good stewardship of the earth is now connected emotionally with CO2 management, the desire for good stewardship means that CO2 management must be protected at all costs. Arguing against the illusory Grim Reaper is like arguing for the actual Grim Reaper, a very difficult position to be in.

    The data reanalysis, I say, is not a trivial exercise in shaving. It could be devastating for the warmists. On the other hand, if the reanalysis is NOT significant, it will be devastating for the skeptics. Warming at the alarming rate is in, and fighting against CO2 is out.

  3. Great news.

    I have said many times that there are unbiased professionals who handle large amounts of data and metadata. They should be the ones analyzing the data.

    Allowing those with a vested interest in the outcome to adjust the data is simply a bad idea.

    Would you trust a drug company to adjust the outcome of tests on their drugs?

    That isn’t even a good enough analogy, because in the long run the drug company would hurt itself by marketing a bad product, while global warming alarmists can be wrong for 20 years or more and still be worshiped like gods. Dr. Hansen’s failed 1988 model is the poster child of bad predictions, but it took over 20 years to “jump the shark”!

    A good part of the observed warming is because the sun resumed normal operation after the Maunder Minimum. There has only been 0.7 °C of warming, and most if not all of it is from renewed sunshine.

    The 1978 to 1998 run-up of temperatures was caused by more El Niños than La Niñas, and even that was only 1.2 °C per century. Pretty underwhelming.

    http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml

  4. Well this could spell misery for the legions of climate data “adjusters” who will now be out of work.

    Maybe Obama can hire them to adjust economic data.

  5. I am supportive of this from the standpoint that we generally ought to be trying to do things right. However, I have to agree that it might not matter much to the debate.

    The CAGW position doesn’t rest on something as straightforward as whether it has warmed 0.3, 0.7 or 1.0 over the last century. Sure, a lower number might take a bit of the wind out of the sails. However, given that the CAGW position doesn’t rest on specific numbers, but is instead an unorganized collection of anecdotal evidence, coupled with heavily-tweaked computer models, unfounded assumptions about positive feedbacks, and a healthy imagination about possible future disasters, a lower warming number for the 20th century will simply be brushed over with claims about aerosols being stronger than previously thought, more warming still waiting in the “pipeline” or similar ad hoc “explanations” that keep the overall story alive.

  6. The people associated with the effort merit some notice.

    Most are from physics and chemistry. One statistician and a grad student. The only climatologist is Judith Curry.

    And project funding appears to be mostly private, from wealthy donors like Bill Gates and one of the Koch brothers.

    There appears to be a conscious effort to avoid the conflicts of interest that beset all the standard temperature data keepers.

  7. This sounds wonderful, but it remains to be seen what it will amount to. An undertaking like this is not a trivial matter that only requires competence and independence. Given that some raw data are no longer available and others are flawed, it won’t be easy to come up with solutions that will satisfy majorities on both sides of the debate.

  8. I find it unlikely that any of the other datasets (other than GISS) are off by more than a tenth of a degree overall. GISS is doing some really weird stuff and is starting to really differ from the other datasets.

    The error is estimated with statistical methods when X stations are said to cover Y% of the Earth, for instance. It’s very similar to polling: you can poll 1,000 people for an election in which 1 million will vote and be within three or four percentage points of the right answer, despite the seemingly small number of people polled (a rough sketch of that arithmetic appears after the comments).

    The wild cards in temperature are more than likely the UHI effects, which are not compensated for enough in the final products, and that is an error that cannot be determined, since it is unknown to say the least. GISS is probably off by quite a bit on this, but the other data sets are probably within a tenth of a degree, or two tenths at the most, in my honest opinion.

    In the end, it doesn’t matter much either way. We have warmed since 1880 and it’s mostly natural. Nothing to worry about there, and so we should not cripple our economies with massive energy cost increases to finance bird killers and sunbeam farms.

  9. I say again: fractions of a degree are not trivial. Hansen announces that the world was 0.01 K warmer in 2010 than in any other year. He knows that even numbers impossible to measure or perceive are important to the CAGW meme.

    Think if 2010 were to be placed in the same temperature range as 1940 globally. About 0.2K difference. Where would AGW be if, after 71 years, we were just back to where we used to be under “normal” planetary warming?

    The fractions of a degree are critical. That is why the push is on to get old data “cooled” and to play down the impact of UHIE on global records. And why Hansen dismisses the NIWA (and now Australian?) data complaints as immaterial. Houses built on sand and all that.

  10. GISS is a huge problem, and I do not think the person in charge of the data set should be a political stooge who is into politics more than science. Even assuming the best case, Dr. Hansen is guilty of “observer bias,” and this is plausible given that the other data sets no longer agree with GISS. In attempting to capture Arctic temperatures, he relies on interpolation that does not make much sense when we are talking about sparse real data (a toy illustration of sparse-station interpolation appears after the comments), and the fact that GISS shows much stronger warming in the Arctic than the other data sets (which, I might add, have more stations in the Arctic) suggests that this is exactly what is taking place.

    The movement of the data sets is also telling. As Doug Proctor says, we have seen the 1940s go from being much warmer than today (roughly 0.2 C) to cooler through data manipulation (this is just for the US). The techniques used to achieve this might very well be adequate or otherwise sound, but in the end we have to realize that Dr. Hansen more than likely suffers from observer bias.

    The other data sets, I am sure, also suffer from this to an extent in such a politicized field as global warming. But I have a feeling that they are closer to reality, and more than likely a separate study will show this to be true. GISS, on the other hand, seems to be lost in some sort of magical statistical manipulation… I have no hope that an independent audit would ever say it was done fairly, with no expectations about the outcome.

    Observer bias is serious indeed in science, and the fact that it is never talked about in papers is very telling. A net gain or loss across the board due to some other factor such as UHI is very important to get right.

    Like I said, a 0.8 C rise in temperatures since 1880 is not very scary, and if that is off by two tenths of a degree, the models need serious re-working in order to be correct. Model parameters have all been adjusted to fit the known values (and cloud cover, of course, has never been modeled correctly), so the known values being off by that much really puts a damper on all of the work done in the last 20+ years on this topic.

  11. Doug, I understand your point, and yes the data is important, but look at the history. The average temperature hasn’t risen as much as was previously predicted by AGW theory, including Hansen’s old testimony before Congress. Did that lack of warming (which we know is a travesty) kill the AGW story? Of course not. The story just changed to accommodate the “missing heat.” You know, temporary cooling, warming waiting in the pipeline to be manifest later, etc. Lots of ways to keep the story going.

    A change in the data of a tenth of a degree here or there ultimately isn’t going to change the story, because the story is not primarily about the data…

  12. I don’t understand why people try to “adjust” faulty temperature readings rather than devise a way of obtaining correct readings. I once had another engineer complain that my computer simulation program for a turbine-generator was wrong because the exhaust steam temperature was too high. He insisted that the actual measurements, rather than the calculated temperature, were correct. Upon checking, his data showed a thermodynamic efficiency of 122%, versus the expected 70% or so typical of a steam turbine (a small sanity-check sketch appears after the comments). This merely demonstrates the difficulty of making temperature measurements accurately, particularly when reradiation and other factors are involved.

    How are the measuring stations designed? Are heat transfer calculations made? How is radiation shielding designed? After theoretical calculations arrive at a design which is supposed to obtain accurate readings, are tests made to be sure the design is OK? I can’t believe that a properly shielded measuring station cannot be designed that gets accurate readings.

    Another point: why have stations in concrete parking lots, or in the middle of a city, not been redesigned and relocated? The urge to preserve and use the huge amount of old data is understandable, but if doing so results in faulty conclusions, that data should be thrown out.

    Perhaps someone has an explanation for the current “correcting” of faulty data rather than throwing it out.
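
A quick numerical aside on the polling analogy in comment 8: for a simple random sample, the 95% margin of error is roughly 1.96 × sqrt(p(1−p)/n), which depends almost entirely on the sample size and hardly at all on the population size. A minimal sketch in Python, with illustrative numbers only:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.

    Assumes simple random sampling; p = 0.5 is the worst case.
    The population size barely matters once it is much larger than n.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 1000: +/- {margin_of_error(1000):.1%}")  # about +/- 3.1%
print(f"n = 100:  +/- {margin_of_error(100):.1%}")   # about +/- 9.8%
```

Station networks are not random samples and temperatures are spatially correlated, so the real error analysis is harder, but the basic intuition that a modest sample can pin down a much larger population holds.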
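
On the sparse-station interpolation point in comment 10: the toy below is a generic inverse-distance-weighted average, not GISTEMP's actual scheme, and the stations and anomaly values are made up for illustration. It just shows how, with only a handful of stations, the nearest one ends up dominating the interpolated value.

```python
import math

def great_circle_km(a, b):
    """Rough great-circle distance in km between (lat, lon) points; fine for a sketch."""
    lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
    cos_c = (math.sin(lat1) * math.sin(lat2)
             + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    return 6371.0 * math.acos(min(1.0, max(-1.0, cos_c)))

def idw_anomaly(target, stations, power=2):
    """Inverse-distance-weighted temperature anomaly at `target`.

    `stations` is a list of (lat, lon, anomaly_K) tuples. With few,
    far-apart stations, the closest one carries most of the weight.
    """
    weighted = [(1.0 / great_circle_km(target, (lat, lon)) ** power, anom)
                for lat, lon, anom in stations]
    return sum(w * a for w, a in weighted) / sum(w for w, _ in weighted)

# Hypothetical Arctic stations (lat, lon, anomaly in K) -- illustrative only.
stations = [(71.3, -156.8, 2.1), (78.2, 15.6, 0.4), (64.8, -147.7, 0.9)]
print(f"interpolated anomaly at the pole: {idw_anomaly((90.0, 0.0), stations):.2f} K")
```

Whether such an interpolated number means much obviously depends on how representative those few stations are, which is the commenter's complaint.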
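
On the turbine anecdote in comment 12: one simple sanity check is to compute the apparent isentropic efficiency implied by the measured conditions; anything above 100% is thermodynamically impossible, so the instrument reading, not the simulation, is the thing to question. The enthalpy values below are illustrative only, not taken from the comment.

```python
def apparent_isentropic_efficiency(h_in, h_out_measured, h_out_isentropic):
    """Apparent turbine efficiency from specific enthalpies (kJ/kg).

    efficiency = actual enthalpy drop / ideal (isentropic) enthalpy drop.
    A value above 1.0 cannot be real and flags a bad exhaust measurement.
    """
    return (h_in - h_out_measured) / (h_in - h_out_isentropic)

# Illustrative numbers: inlet steam at 3400 kJ/kg, isentropic exhaust at
# 2400 kJ/kg, and a "measured" exhaust enthalpy of 2180 kJ/kg.
eta = apparent_isentropic_efficiency(3400.0, 2180.0, 2400.0)
print(f"apparent efficiency: {eta:.0%}")  # 122%
if eta > 1.0:
    print("exhaust reading is lower than physics allows; check the instrumentation")
```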
