I don’t want to make the mistake of over-interpreting fairly balanced remarks by Michael Kelly of Cambridge, nor of taking quotes out of context as daggers to throw at climatologists. But I did find his reactions interesting as he read through some Briffa and Jones papers — they seem to match the reactions of many non-climate scientists, who tend to have the same type of reactions if they actually read through some of the work, rather than just issuing statements of moral support without much investigation.
All that being said, here are some of his admittedly offhand reactions after reading through some of the papers. The entire document, as linked by Bishop Hill, is worth reading in full.
There are however some more detailed qualifications:
(i) I take real exception to having simulation runs described as experiments (without at least the qualification of ‘computer’ experiments). It does a disservice to centuries of real experimentation and allows simulation output to be considered as real data. This last is a very serious matter, as it can lead to the idea that real data might be wrong simply because it disagrees with the models! That is turning centuries of science on its head.
(ii) The reading of the papers was made rather harder by the quality of the diagrams, and the description of the vertical axes on a number of graphs. When numbers on the vertical axis go from -2 to +2 without being explicitly labelled as percentage deviations, temperature excursions, or scaled correlation coefficients, there is potential for confusion.
(iii) I think it is easy to see how peer review within tight networks can allow new orthodoxies to appear and get established that would not happen if papers were written for and peer reviewed by a wider audience. I have seen it happen elsewhere. This finding may indeed be an important outcome of the present review….
(2) On a personal note, I chose to study the theory of condensed matter physics, as opposed to cosmology, precisely on the grounds that I could systematically control and vary the boundary conditions of my object of study as an integral part of making advances. An elegant theory which does not fit good experimental data is a bad theory. Here the starting data is patchy and noisy, and the choices made are in part aesthetic, or designed to help a conclusion, rather than neutral. This all colours my attitude to the limited value of complex simulations that cannot be exhaustively tested against ‘real’ data from independent experiments that control all but one of the variables.
(3) Up to and throughout this exercise, I have remained puzzled how the real humility of the scientists in this area, as evident in their papers, including all these here, and the talks I have heard them give, is morphed into statements of confidence at the 95% level for public consumption through the IPCC process. This does not happen in other subjects of equal importance to humanity, e.g. energy futures or environmental degradation or resource depletion. I can only think it is the ‘authority’ appropriated by the IPCC itself that is the root cause.
These questions to Briffa could have come from McIntyre:
(1) How can we be reassured about the choice of which raw data from which stations are to be selected, detrended and then included in the tree-ring data bases? Is there an algorithm that establishes the inclusion/exclusion? If I were setting out to establish that the lowest possible net temperature rise over the last century is consistent with the available data, what fraction of the tree-ring data would then be included/excluded? Could I coerce the data to support a null hypothesis on global warming?
(2) In the range of papers we have reviewed, you have used a variety of statistical techniques in what is a heroic effort to get signals from noisy and patchy data. To what extent has this variety of techniques been reviewed and commented upon by the modern statistical community for their effectiveness, right use and possible weaknesses?
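Kelly's first question — whether one could coerce noisy data toward a chosen conclusion by deciding which series to include — can be made concrete with a toy simulation. The sketch below is purely illustrative and makes no claim about how any actual tree-ring chronology was built: it generates synthetic "proxy" series that all share the same true trend plus heavy noise, then shows that screening the series by their individual fitted slopes before averaging shifts the composite trend, even though every series was drawn from the same process.

```python
import random

# Toy illustration only (not any real chronology method): 200 synthetic
# series, each carrying the same true trend of +0.8 units per century
# plus large Gaussian noise. Selecting a subset by its fitted slope
# before compositing biases the recovered trend.

random.seed(0)
YEARS = list(range(100))
TRUE_SLOPE = 0.008  # per year, i.e. 0.8 per century

def make_series():
    return [TRUE_SLOPE * t + random.gauss(0, 0.5) for t in YEARS]

def fitted_slope(series):
    # Ordinary least-squares slope of the series against the year index.
    n = len(series)
    mean_t = sum(YEARS) / n
    mean_y = sum(series) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in zip(YEARS, series))
    den = sum((t - mean_t) ** 2 for t in YEARS)
    return num / den

def composite_slope(subset):
    # Average the selected series year by year, then fit a slope.
    mean_series = [sum(s[t] for s in subset) / len(subset)
                   for t in range(len(YEARS))]
    return fitted_slope(mean_series)

series = [make_series() for _ in range(200)]

all_slope = composite_slope(series)                      # unscreened
low_half = sorted(series, key=fitted_slope)[:100]        # screened low
low_slope = composite_slope(low_half)

print(f"all 200 series:        {all_slope * 100:.2f} per century")
print(f"lowest-slope 100 only: {low_slope * 100:.2f} per century")
```

The unscreened composite recovers roughly the true trend, while the screened composite comes out noticeably lower — not because the underlying process differs, but because the selection step lets the noise masquerade as signal. This is exactly why an explicit, stated inclusion/exclusion algorithm matters.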