This discussion, including the comments, over at Climate Audit, really is amazing. Just when you think all the procedural errors that could be mined from the Mann hockey stick have been pulled to the surface, another gem emerges.
Here is how I understand it (please correct me in the comments if I am wrong): Michael Mann uses a variety of proxies to reconstruct temperature history (he actually pre-screens them, keeping only the ones that give him the answer he wants, but that is a separate problem detailed in other posts). To tell temperature from these proxies (since their raw measurements are things like millimeters of tree ring width, not degrees), they must be scaled against periods in which the thermometer-measured surface temperature record overlaps the proxy record.
Apparently, when making these calibrations, he used the surface temperature record from 1850-1995, but also did other runs with sub-periods of this, such as 1850-1949 and 1896-1995. OK so far. But McIntyre believes he has found that, when running these calibrations, the sign of the correlation factor for a single proxy can actually change.
What does this mean? Well, let's assume proxy 1 is ring width from a particular tree, and a calibration based on 1850-1995 correlates ring width to temperature at X mm per degree. This means that an increase in ring width of X implies a temperature increase of one degree. But when calibrating on one of the other periods, the exact same proxy has a calibration of -Y: an increase in ring width of Y yields a temperature DECREASE of one degree.
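To see how this can happen, here is a minimal sketch with made-up numbers (not Mann's actual data or method): a single "proxy" whose least-squares calibration slope against "temperature" is positive over the full window but negative over an early sub-window.

```python
# Hypothetical illustration: the same proxy can calibrate with opposite
# signs depending on which overlap window is used. All data are invented.

def calib_slope(proxy, temp):
    """Least-squares slope of temp regressed on proxy (degrees per unit proxy)."""
    n = len(proxy)
    mp = sum(proxy) / n
    mt = sum(temp) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(proxy, temp))
    den = sum((p - mp) ** 2 for p in proxy)
    return num / den

temp = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # instrumental record (made up)
ring = [3.0, 2.0, 1.0, 4.0, 5.0, 6.0]   # ring widths (made up)

full_window = calib_slope(ring, temp)          # slope over the full record
early_window = calib_slope(ring[:3], temp[:3])  # slope over the early sub-window
print(full_window > 0, early_window < 0)  # True True
```

A proxy with a real, stable physical relationship to temperature should not behave this way; when the slope's sign depends on the window, the "relationship" is mostly noise.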
I had a professor of physics back in undergrad who used to drive me crazy with his insistence on good error estimation in the lab (he was right to emphasize it; it just proved I was not meant for the lab). He used to say that if your error range crossed zero, in other words, if your range of possible answers included both positive and negative numbers, then you really did not understand the process. You don't understand a relationship, he would say, if you don't even know its sign. Well, Mann has gotten over this little problem, I guess, because he is perfectly able to have the same physical process exhibit exactly opposite relationships with temperature, depending on which 50-year period he is working with.
OK, so Steve caught him with one bad proxy. Heck, he has over a thousand others. But now McIntyre reports in the comments that he has found 308 such cases, where Mann's correlations change sign like this. Wow.
Postscript: By the way, one of the most fundamental rules of regression analysis is that when you throw a variable into the regression, you should have some theoretical reason for doing so. This is because every single variable you add, no matter how spurious, will improve the fit of the regression (trust me on this, it's in the math).
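This property is easy to demonstrate: because ordinary least squares minimizes the residual sum of squares, adding any column to the design matrix can never make the fit worse. A small sketch with simulated data (all names and numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)   # the true model depends only on x

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

X1 = np.column_stack([np.ones(n), x])    # intercept + the real predictor
junk = rng.normal(size=n)                # a variable with no relation to y
X2 = np.column_stack([X1, junk])         # same model plus the junk column

print(rss(X2, y) <= rss(X1, y))  # True: the fit can only improve
```

The inequality holds for any added column, which is exactly why a better fit is no evidence that the added variable means anything.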
In the case of proxy regressions, it is simply unacceptable to rely on the regression for the sign. You rely on physics for the sign, not the regression. If you don’t even know the sign of the relationship between your proxy and temperature, then you don’t understand the proxy well enough physically to justify even calling it a proxy.
This is a big, big deal in financial modelling. I can't tell you how often it is emphasized in financial modelling to make sure you have a working theory as to how and why a variable should affect a regression, and then, when you get the result, to test it against your original theory. And if they are too far apart, you need to doubt the computer result. Because in financial modelling, if you get too much confidence in regressions against spurious data, you can go bankrupt (in climate, it instead seems to lead to fame, large grants, and hanging out with vice-presidents).
Update: Oops, I missed the first post on this at Climate Audit, which discusses the issues in my postscript in more depth. This is a good example, and it is not surprising they revert to a financial example as I did, as financial modelers have the greatest immediate incentives not to fool themselves.
We (the authors of this paper) have identified a weather station whose temperature readings predict daily changes in the value of a specific set of stocks with a correlation of r=-0.87. For $50.00, we will provide the list of stocks to any interested reader. That way, you can buy the stocks every morning when the weather station posts a drop in temperature, and sell when the temperature goes up. Obviously, your potential profits here are enormous. But you may wonder: how did we find this correlation? The figure of -.87 was arrived at by separately computing the correlation between the readings of the weather station in Adak Island, Alaska, with each of the 3,315 financial instruments available for the New York Stock Exchange (through the Mathematica function FinancialData) over the 10 days that the market was open between November 18th and December 3rd, 2008. We then averaged the correlation values of the stocks whose correlation exceeded a high threshold of our choosing, thus yielding the figure of -.87. Should you pay us for this investment strategy? Probably not: Of the 3,315 stocks assessed, some were sure to be correlated with the Adak Island temperature measurements simply by chance – and if we select just those (as our selection process would do), there was no doubt we would find a high average correlation. Thus, the final measure (the average correlation of a subset of stocks) was not independent of the selection criteria (how stocks were chosen): this, in essence, is the non-independence error. The fact that random noise in previous stock fluctuations aligned with the temperature readings is no reason to suspect that future fluctuations can be predicted by the same measure, and one would be wise to keep one's money far away from us, or any other such investment advisor.
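The selection trick described in the quote is easy to reproduce with pure noise. This sketch (invented data, seeded random numbers standing in for the weather station and the stocks) correlates one 10-day noise series against 3,315 other noise series, then keeps and averages only the strong negative correlations:

```python
import numpy as np

rng = np.random.default_rng(42)
days, n_stocks = 10, 3315

temps = rng.normal(size=days)               # pure-noise "weather station"
stocks = rng.normal(size=(n_stocks, days))  # pure-noise "stock returns"

# Correlate the temperature series with every stock.
corrs = np.array([np.corrcoef(temps, s)[0, 1] for s in stocks])

# Cherry-pick: keep only the strong negative correlations, then average them.
selected = corrs[corrs < -0.75]
print(len(selected), selected.mean())
```

With thousands of 10-point noise series, some are essentially guaranteed to clear the threshold by chance, and the average of the survivors is impressive-looking by construction, which is precisely the non-independence error the quote describes.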
Update #2: I guess I have to issue a correction. I have argued that climate scientists tend to be unique in trying to avoid criticism by labeling critics as “un-scientific”. In retrospect, it does not appear climate scientists are unique:
The iconoclastic tone has attracted coverage on many blogs, including that of Newsweek. Those attacked say they have not had the chance to argue their case in the normal academic channels. "I first heard about this when I got a call from a journalist," comments neuroscientist Tania Singer of the University of Zurich, Switzerland, whose papers on empathy are listed as examples of bad analytical practice. "I was shocked — this is not the way that scientific discourse should take place."