Charlie Martin has been looking through some of James Hansen’s emails and found this:
[For] example, we extrapolate station measurements as much as 1200 km. This allows us to include results for the full Arctic. In 2005 this turned out to be important, as the Arctic had a large positive temperature anomaly. We thus found 2005 to be the warmest year in the record, while the British did not and initially NOAA also did not. …
So he is trumpeting this approach as an innovation? Does he really think he has a better answer because he has extrapolated station measurements by 1200 km (746 miles)? This is roughly equivalent, in distance, to extrapolating the temperature in Fargo to Oklahoma City. This represents, for me, the kind of false precision, the over-estimation of knowledge about a process, that so characterizes climate research. If we don’t have a thermometer near Oklahoma City, then we don’t know the temperature in Oklahoma City, and let’s not fool ourselves that we do.
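As a quick sanity check on the analogy, here is a minimal sketch that computes the Fargo-to-Oklahoma City great-circle distance and shows what a simple distance-weighted extrapolation with a 1200 km cutoff looks like. The coordinates and the linear weighting are illustrative assumptions on my part, not GISS code.

```python
# Rough check of the Fargo-to-Oklahoma City analogy, plus a minimal
# sketch of distance-weighted extrapolation with a 1200 km cutoff.
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Approximate city coordinates (assumed, for the analogy only).
fargo = (46.88, -96.79)
okc = (35.47, -97.52)
print(f"Fargo -> Oklahoma City: {great_circle_km(*fargo, *okc):.0f} km")  # ~1270 km

def extrapolated_anomaly(target, stations, cutoff_km=1200.0):
    """Weight station anomalies linearly down to zero at the cutoff distance.

    `stations` is a list of (lat, lon, anomaly) tuples; returns None if no
    station lies within the cutoff -- i.e. "we don't know the temperature."
    """
    num = den = 0.0
    for lat, lon, anom in stations:
        d = great_circle_km(target[0], target[1], lat, lon)
        if d < cutoff_km:
            w = 1.0 - d / cutoff_km
            num += w * anom
            den += w
    return num / den if den > 0 else None

# With only a Fargo "station", Oklahoma City sits just outside the 1200 km
# cutoff, so this sketch returns None rather than an estimate.
print(extrapolated_anomaly(okc, [(fargo[0], fargo[1], +1.5)]))
```

The point of the sketch is not the particular weighting but the distance itself: Oklahoma City is on the order of 1200 km from Fargo, which is exactly the radius over which a single station is allowed to stand in for a region.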
I had a call from a WaPo reporter today about modeling and modeling errors. We talked about a lot of things, but my main point was that whether in finance or in climate, computer models typically perform what I call knowledge laundering. These models, whether forecasting tools or global temperature models like Hansen’s, take poorly understood descriptors of a complex system at the front end and wash them through the computation to create apparent certainty and precision. In the financial world, people who fool themselves with their models are called bankrupt (or bailed out, I guess). In the climate world, they are Oscar and Nobel Prize winners.
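To make the laundering mechanism concrete, here is a toy illustration: run a model once on point estimates and you get an impressively precise-looking number; propagate even modest uncertainty in the same inputs and the spread shows how little is actually known. The model and the input ranges below are entirely hypothetical, standing in for any complex model, not for Hansen’s or anyone else’s.

```python
# Toy illustration of "knowledge laundering": a single deterministic run
# hides the uncertainty that Monte Carlo propagation makes visible.
import random
import statistics

def toy_model(sensitivity, forcing, feedback):
    """A made-up nonlinear response -- a stand-in for any complex model."""
    return sensitivity * forcing / (1.0 - feedback)

# Point estimates give a single, precise-looking answer.
print(f"point estimate: {toy_model(0.8, 3.7, 0.5):.4f}")

# Propagating assumed uncertainty in the same inputs gives a wide spread.
random.seed(0)
runs = [
    toy_model(
        random.gauss(0.8, 0.2),    # poorly constrained sensitivity
        random.gauss(3.7, 0.4),    # uncertain forcing
        random.uniform(0.3, 0.7),  # feedback known only to a range
    )
    for _ in range(10_000)
]
runs.sort()
print(f"mean {statistics.mean(runs):.2f}, "
      f"stdev {statistics.stdev(runs):.2f}, "
      f"5th-95th percentile {runs[500]:.2f} to {runs[9500]:.2f}")
```

The deterministic run reports four decimal places; the propagated version reports a range several times wider than anything those decimals suggest. That gap is the laundering.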
Update: On the 1200 km issue, this is somewhat related.