A reader wrote me that the comments found in the Hadley CRU program code are possibly far more damning than the emails, and in fact this appears to be the case given these excerpts at Anthony Watts’ site.
In the past, I have written that, as an experienced modeler, I am extremely suspicious when anyone's model very closely matches history. It is a common modeler's trick to use various plugs, fudge factors, and special algorithms to force the model to match history (when it is run as a "back-cast"), because people will then trust the model more when it is used to forecast. For a variety of reasons, I have suspected this was the case with climate models, but I never could prove it. One example from the link above:
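To make the trick concrete, here is a toy sketch of my own (hypothetical numbers, not any actual climate code): an additive "fudge" series tuned against the historical record makes the back-cast hug history far more closely than the raw model ever did.

```python
# Toy illustration (hypothetical numbers, not real climate data):
# a crude "model" is nudged toward the historical record with an
# additive fudge series, so the back-cast looks far more precise
# than the untuned model output.

history = [0.10, 0.05, 0.20, 0.15, 0.30, 0.25, 0.40]   # "observed" anomalies
model   = [0.00, 0.20, 0.05, 0.35, 0.10, 0.45, 0.20]   # raw model output

# The "plug": per-period corrections chosen to close most of the gap.
fudge = [0.75 * (h - m) for h, m in zip(history, model)]
tuned = [m + f for m, f in zip(model, fudge)]

raw_err   = max(abs(h - m) for h, m in zip(history, model))
tuned_err = max(abs(h - t) for h, t in zip(history, tuned))
print(raw_err, tuned_err)  # the tuned back-cast error is 4x smaller
```

Nothing about the tuned series tells you the model is any better going forward; the corrections only exist where history was available to tune against.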
Looking back over history, the model is never off by more than 0.4C in any month, and never goes more than about 10 months before re-intersecting the "actual" line. Does it bother anyone else that this level of precision is several times higher than the model achieves when run forward? Almost immediately the forecast is more than 0.4C off, and it goes years without intersecting reality.
Now we are closer to proof, with comments in the various climate programs' code that say things like this (from the code that apparently processes some of the tree-ring histories):
; Plots 24 yearly maps of calibrated (PCR-infilled or not) MXD reconstructions
; of growing season temperatures. Uses "corrected" MXD - but shouldn't usually
; plot past 1960 because these will be artificially adjusted to look closer to
; the real temperatures.
; Apply a VERY ARTIFICAL correction for decline!!
2.6,2.6,2.6]*0.75 ; fudge factor
; APPLY ARTIFICIAL CORRECTION
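To make concrete what an "artificial correction" of this kind could look like in practice, here is a hypothetical sketch of my own construction (not the actual CRU code, whose details we have not yet seen): after a cutoff year, the proxy reconstruction is blended toward the instrumental record with a fixed weight, so the plotted series no longer shows the proxies' decline.

```python
# Hypothetical sketch of a post-1960 "artificial correction"
# (my construction, NOT the actual CRU code): blend the proxy
# reconstruction toward the instrumental record after a cutoff
# year, hiding any divergence between the two.

CUTOFF = 1960
WEIGHT = 0.75  # weight given to the "real" temperatures after the cutoff

def correct(years, proxy, instrumental, cutoff=CUTOFF, w=WEIGHT):
    out = []
    for yr, p, t in zip(years, proxy, instrumental):
        if yr > cutoff:
            out.append((1 - w) * p + w * t)  # artificially adjusted value
        else:
            out.append(p)                    # pre-cutoff data left untouched
    return out

years        = [1940, 1950, 1960, 1970, 1980]
proxy        = [0.10, 0.15, 0.20, 0.05, -0.10]  # proxies show a decline
instrumental = [0.12, 0.14, 0.22, 0.35,  0.50]  # instruments keep rising

corrected = correct(years, proxy, instrumental)
```

Under these assumed numbers, the corrected series tracks the proxies until 1960 and then swings up toward the instrumental line, which is exactly the behavior the quoted comments describe: adjusted "to look closer to the real temperatures."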
The link above has 30+ similar examples. The real insight will come when folks like Steve McIntyre and his readers start digging into the code and replicating it; then we will see what it actually does and what biases, plugs, or overrides are embedded. Stay tuned.