A Physical Scientist Looks at Dendroclimatology

I don’t want to make the mistake of over-interpreting fairly balanced remarks by Michael Kelly of Cambridge, nor of taking quotes out of context as daggers to throw at climatologists. But I did find his reactions interesting as he read through some Briffa and Jones papers; they seem to match the reactions of many non-climate scientists who actually read through some of the work, rather than just issuing statements of moral support without much investigation.

All that being said, here are some of his admittedly offhand reactions after reading through some of the papers. The entire document, as linked by Bishop Hill, is worth reading in full.

There are however some more detailed qualifications:

(i) I take real exception to having simulation runs described as experiments (without at least the qualification of ‘computer’ experiments). It does a disservice to centuries of real experimentation and allows simulation output to be considered as real data. This last is a very serious matter, as it can lead to the idea that ‘real’ data might be wrong simply because it disagrees with the models! That is turning centuries of science on its head.

(ii) The reading of the papers was made rather harder by the quality of the diagrams, and the description of the vertical axes on a number of graphs. When numbers on the vertical axis go from -2 to +2 without being explicitly labelled as percentage deviations, temperature excursions, or scaled correlation coefficients, there is potential for confusion.

(iii) I think it is easy to see how peer review within tight networks can allow new orthodoxies to appear and get established that would not happen if papers were written for and peer reviewed by a wider audience. I have seen it happen elsewhere. This finding may indeed be an important outcome of the present review….

(2) On a personal note, I chose to study the theory of condensed matter physics, as opposed to cosmology, precisely on the grounds that I could systematically control and vary the boundary conditions of my object of study as an integral part of making advances. An elegant theory which does not fit good experimental data is a bad theory. Here the starting data is patchy and noisy, and the choices made are in part aesthetic, or designed to help a conclusion, rather than neutral. This all colours my attitude to the limited value of complex simulations that cannot be exhaustively tested against ‘real’ data from independent experiments that control all but one of the variables.

(3) Up to and throughout this exercise, I have remained puzzled how the real humility of the scientists in this area, as evident in their papers, including all these here, and the talks I have heard them give, is morphed into statements of confidence at the 95% level for public consumption through the IPCC process. This does not happen in other subjects of equal importance to humanity, e.g. energy futures or environmental degradation or resource depletion. I can only think it is the ‘authority’ appropriated by the IPCC itself that is the root cause.

These questions to Briffa could have come from McIntyre:

(1) How can we be reassured about the choice of which raw data from which stations are to be selected, detrended and then included in the tree-ring databases? Is there an algorithm that establishes the inclusion/exclusion? If I were setting out to establish that the lowest possible net temperature rise over the last century is consistent with the available data, what fraction of tree-ring data would then be included/excluded? Could I coerce the data to support a null hypothesis on global warming?

(2) In the range of papers we have reviewed, you have used a variety of statistical techniques in what is a heroic effort to get signals from noisy and patchy data. To what extent has this variety of techniques been reviewed and commented upon by the modern statistical community for their effectiveness, right use and possible weaknesses?
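
Kelly’s first question is easy to illustrate. Below is a minimal sketch of the selection-effect worry: screen pure-noise series by their correlation with a rising instrumental target, then average the survivors. Every series, threshold and parameter here is invented for illustration; this is not a reconstruction of anything Briffa actually did.

```python
import numpy as np

rng = np.random.default_rng(0)

n_series, n_years = 1000, 150

# 1000 AR(1) "proxies" containing no climate signal at all.
proxies = np.zeros((n_series, n_years))
for t in range(1, n_years):
    proxies[:, t] = 0.6 * proxies[:, t - 1] + rng.normal(size=n_series)

# A rising "instrumental" target over the 50-year calibration period.
target = np.linspace(0.0, 1.0, 50)
calib = proxies[:, -50:]

# The screening step: keep only series that happen to correlate with it.
r = np.array([np.corrcoef(row, target)[0, 1] for row in calib])
selected = proxies[r > 0.3]
print(f"kept {selected.shape[0]} of {n_series} pure-noise series")

# The average of the survivors drifts upward in the calibration window,
# even though no individual series contains any signal.
recon = selected.mean(axis=0)
print(f"early-period mean {recon[:100].mean():+.3f}, "
      f"calibration-period mean {recon[-50:].mean():+.3f}")
```

With a thousand pure-noise series, a few dozen typically pass the screen, and their average rises over the calibration window. The point is not that any published reconstruction works this way, only that an unexamined inclusion/exclusion rule can manufacture exactly the kind of signal Kelly asks about.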

10 thoughts on “A Physical Scientist Looks at Dendroclimatology”

  1. “I take real exception to having simulation runs described as experiments…”
    I absolutely agree. I work in molecular modelling (which I came into via many years as a lab chemist), and I have heard the term ‘experiment’ used to describe computational molecular simulations many times. I suspect the reason is sensitivity to the charge that ‘real’ scientists do experiments. I would suggest that they are more appropriately described as ‘calculations’. NASA calculates trajectories with computer simulations; the experiment is to launch the rocket. In general, I would suggest that a calculation uses data to generate a derived result, while an experiment generates data.

  2. I mostly agree with Prof Kelly after reading the notes in full. The point that has always surprised me is indeed “how the real humility of the scientists in this area, as evident in their papers, including all these here, and the talks I have heard them give, is morphed into statements of confidence at the 95% level for public consumption through the IPCC process.”

    Jim, I agree that “calculations” is semantically better than “data”.
    However, calculations (e.g. of trajectories) may be more accurate than data (e.g. if the rocket blew up because of a bad screw). Of course the calculation of trajectories is only possible because enough previous data allowed the laws to be derived, but it is also wrong to consider all measured data as higher quality than calculations.

    Calculations (when they can be validated by experiment) can be considered as data for further studies, as long as this is transparent. As a chemist you routinely “calculate” a molecular mass and trust it as data, even though measured data is at best available for the individual atoms only. I would even bet you would prefer your calculated value over an attempt to measure the molecular mass in your lab from scratch.
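
    To make that concrete, here is a minimal sketch of the kind of “calculation trusted as data” I mean; the atomic masses are rounded standard values, and the little helper function is just for illustration.

    ```python
    # Rounded standard atomic masses, g/mol; the kind of tabulated values
    # every chemist trusts without re-measuring them.
    ATOMIC_MASS = {"H": 1.008, "C": 12.011, "O": 15.999}

    def molar_mass(formula):
        """Sum atomic masses weighted by atom counts, e.g. {"H": 2, "O": 1}."""
        return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

    print(molar_mass({"H": 2, "O": 1}))          # water, ~18.015 g/mol
    print(molar_mass({"C": 6, "H": 12, "O": 6})) # glucose, ~180.156 g/mol
    ```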

  3. Laurent, the issue isn’t about the accuracy of calculations versus data, it’s about their logical status. Like Kelly I’ve taught at Cambridge, and like him I have thrown metaphorical hand grenades at twits describing simulations as “experiments” and their output as “data”.

  4. Laurent, I think what you are saying can be expressed as ‘data is subject to measurement error’. This is of course true, but calculations are also subject to these errors, since they depend on input data; we can express y in terms of x, but to get a calculated value of y we must have a value for x.

    In addition, we must remember that calculations also depend on the underlying hypothesis. To use your molecular weight example, the calculated mass of the hydrogen molecule, H2, depends on the accuracy of the measured mass of the hydrogen atom, but also on the accuracy of the hypothesis ‘the mass of a molecule is the sum of the masses of its constituent atoms’. In fact relativity makes the latter statement an approximation (for most purposes more than adequate) that ignores the (very small) mass/energy effects of bond formation. I think the significance for climate modelling is clear! Must get back to work now!
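
    To put a number on “very small”, here is a back-of-envelope sketch using the usual textbook H-H bond energy of about 436 kJ/mol; the figures are standard approximations, not measurements of mine.

    ```python
    # Mass equivalent of the H-H bond energy, via delta_m = E / c**2.
    E_BOND = 436e3      # J/mol, approximate H-H bond dissociation energy
    C = 2.998e8         # m/s, speed of light
    M_H2 = 2.016e-3     # kg/mol, molar mass of H2 as a plain sum of atoms

    delta_m = E_BOND / C**2                      # kg/mol lost on bond formation
    print(f"mass defect: {delta_m:.2e} kg/mol")  # ~4.9e-12 kg/mol
    print(f"relative to M(H2): {delta_m / M_H2:.1e}")  # ~2.4e-9
    ```

    A relative error of a few parts per billion: real, but utterly negligible for chemistry, which is exactly jim’s point about hypotheses being approximations of known size.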

  5. dearieme and jim:

    I understand what you mean and partially agree. My point was not only about the uncertainties; I was unclear. Let me try to explain again:

    Pure data does not exist in practice, because all data is processed. A thermometer reading, for example, is actually a reading of the volume change of a medium by thermal expansion, on a scale graduated in “degrees” calculated from a model of that expansion, itself grounded in other measurements (data) made in a different place with other equipment.
    Adding a computer to the processing chain does not fundamentally change its nature. The more complex the computer calculations are (and the more difficult it is to check them against other data), the higher the risk of mistakes, of course.

    So when doing chemistry (I happen to be a chemical engineer), I take temperatures and molecular masses as DATA, both of them actually being calculated.
    When doing e.g. kinetic investigations, I take concentrations (measured indirectly in a GC) and time (measured via a physical process, such as the movement of springs and gears in my watch) and obtain (calculate) a rate. This rate is taken as DATA for the subsequent step of checking its dependence on temperature, as sketched in the code below.

    It is acceptable to consider the rates as “raw data” even when they are calculated, for instance if they have been measured by third parties. You just have to be clear about the uncertainties.

    In essence, my message is: it’s not black and white but has many layers of grey.
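
    Here is a toy version of that chain, with every number invented: rate constants are calculated from concentration-time readings, and those calculated constants are then taken as DATA for an Arrhenius fit.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    R = 8.314                        # J/(mol K)
    t = np.linspace(0.0, 100.0, 20)  # s, sampling times

    def rate_constant(times, conc):
        """First-order k from the slope of ln(C) versus t: a calculation."""
        slope, _ = np.polyfit(times, np.log(conc), 1)
        return -slope

    # Simulated noisy GC readings at four temperatures (invented A and Ea).
    temps = np.array([300.0, 310.0, 320.0, 330.0])  # K
    ks = []
    for T in temps:
        k_true = 1e6 * np.exp(-50e3 / (R * T))
        conc = np.exp(-k_true * t) * (1 + rng.normal(0.0, 0.01, t.size))
        ks.append(rate_constant(t, conc))

    # The calculated rate constants are now treated as DATA for the next
    # layer: an Arrhenius fit, ln k = ln A - Ea / (R * T).
    slope, _ = np.polyfit(1.0 / temps, np.log(ks), 1)
    print(f"fitted Ea ~ {-slope * R / 1e3:.1f} kJ/mol (true value: 50.0)")
    ```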

  6. Laurent,

    In your thermometer example, the “model” for temperature measurement comes from one of the most well-established theories in physics and all of science. Plus, while you could call something degrees F or C (from the model), you could also call it the volume of a specific mass of Hg. That data is what I might call “primary”. It’s a direct measurement of the volume, or of the height of a column, which certainly carries uncertainty. The model is really just an algebraic conversion equation (see the sketch below).

    The models we talk about for predicting future GMT require these primary data as inputs (which have uncertainties) and rely on theories that for the most part have not been adequately tested for validity (more uncertainties). Then those model predictions can’t even be tested with a controlled experiment, unlike our model for thermal expansion of liquids. So we can test and refine physical models like the thermometer hundreds of times in just one day. Climatologists are lucky to do this once every decade, and don’t have a controlled experiment to do it with. In this fashion GMT models are like models that attempt to predict the stock market.

    If you want to argue that’s shades of gray, well I’d argue we’re talking about ivory vs. charcoal.
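
    The thermometer “model” really is just a conversion. A minimal sketch, with the fixed-point column heights invented for illustration:

    ```python
    # Two-point calibration of a liquid thermometer: map a primary
    # measurement (column height, mm) to degrees C. Heights are invented.
    H_ICE, H_STEAM = 12.0, 37.0  # column heights at 0 C and 100 C

    def to_celsius(height_mm):
        """The whole 'model': linear interpolation between two fixed points."""
        return 100.0 * (height_mm - H_ICE) / (H_STEAM - H_ICE)

    print(to_celsius(12.0))   # 0.0
    print(to_celsius(24.5))   # 50.0
    print(to_celsius(37.0))   # 100.0
    ```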

  7. A general note about the report by Kelly: the remaining questions he poses to Jones are many of the key questions skeptics want answered; the first two posted above are only the tip of the iceberg. Does anyone know if Jones ever answered these questions? I’d be extremely interested to hear his answers.

  8. Wally, if we were using the thermometer readings directly I would somewhat agree. The experts homogenise the data (the largest adjustment, and probably 95% of the trend) and make other interesting adjustments even before GISS and CRU and others make THEIR adjustments. The trends we see in the major temp records are simply what has been PUT there. And since the satellite temps are not for the surface, there is simply no comparison of trend or temp.

    Another small issue, making the whole thermometer question moot, is the amount of energy in a parcel of atmosphere. Without data on the mass and composition of the air you have temp data for (humidity being the major variable), you have no idea how much energy it actually contains. AGW is all about retained energy. If we are not actually measuring the energy in the atmosphere, oceans, and surface, we are not finding out the information that really matters (see the sketch below).

    If the amount of water vapor in the atmosphere declined slightly over the last 10 years while the temps were relatively flat, we actually LOST energy while CO2 was increasing. If the amount of water vapor in the atmosphere was increasing while the temps were increasing, IT WAS WORSE THAN WE THOUGHT!!

    HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA
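
    Joking aside, the point is easy to put in numbers. A minimal sketch of moist enthalpy, h = cp*T + Lv*q, with standard approximate constants and made-up humidities:

    ```python
    # Approximate specific enthalpy of a moist air parcel: h = cp*T + Lv*q.
    CP = 1005.0   # J/(kg K), specific heat of dry air at constant pressure
    LV = 2.5e6    # J/kg, latent heat of vaporization of water

    def moist_enthalpy(T_kelvin, q):
        """q is specific humidity, kg water vapor per kg of moist air."""
        return CP * T_kelvin + LV * q

    T = 288.15                      # both parcels at 15 C
    dry = moist_enthalpy(T, 0.005)  # 5 g/kg humidity
    wet = moist_enthalpy(T, 0.010)  # 10 g/kg humidity
    print(f"same temperature, {wet - dry:.0f} J/kg more in the wetter parcel")
    ```

    Two parcels at identical temperature can differ by over 10 kJ/kg in energy content, which is why temperature alone is a poor proxy for retained energy.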

  9. Kuhnkat,

    You’re absolutely right on both counts there, regarding temp adjustments and actual heat vs. temp. When I brought up temp measurements I wasn’t really dealing with how AGW researchers have manipulated them, but just with the general sense of what thermometer readings are and how they relate to a model.

  10. Bottom line here is that there is no “data” about the future. Absent time machines, of course.
