Using Computer Models To Launder Certainty

(cross posted from Coyote Blog)

For a while, I have criticized the practice, in both climate science and economics, of using computer models to increase our apparent certainty about natural phenomena. We take shaky assumptions and guesstimates of certain constants and natural variables and plug them into computer models that produce projections with triple-decimal precision. We then treat the output with a reverence that does not match the quality of the inputs.

I have had trouble explaining this sort of knowledge laundering and finding precisely the right words to explain it.  But this week I have been presented with an excellent example from climate science, courtesy of Roger Pielke, Sr.  This is an excerpt from a recent study trying to figure out if a high climate sensitivity to CO2 can be reconciled with the lack of ocean warming over the last 10 years (bold added).

“Observations of the sea water temperature show that the upper ocean has not warmed since 2003. This is remarkable as it is expected the ocean would store the lion’s share of the extra heat retained by the Earth due to the increased concentrations of greenhouse gases. The observation that the upper 700 meter of the world ocean have not warmed for the last eight years gives rise to two fundamental questions:

  1. What is the probability that the upper ocean does not warm for eight years as greenhouse gas concentrations continue to rise?
  2. As the heat has not been stored in the upper ocean over the last eight years, where did it go instead?

These questions cannot be answered using observations alone, as the available time series are too short and the data not accurate enough. We therefore used climate model output generated in the ESSENCE project, a collaboration of KNMI and Utrecht University that generated 17 simulations of the climate with the ECHAM5/MPI-OM model to sample the natural variability of the climate system. When compared to the available observations, the model describes the ocean temperature rise and variability well.”

Pielke goes on to deconstruct the study, but just compare the two bolded statements. First, there is not sufficiently extensive and accurate observational data to test a hypothesis. BUT, then we will create a model, and this model is validated against that same observational data. Then the model is used to draw all kinds of conclusions about the problem being studied.

This is the clearest, simplest example of certainty laundering I have ever seen.  If there is not sufficient data to draw conclusions about how a system operates, then how can there be enough data to validate a computer model which, in code, just embodies a series of hypotheses about how a system operates?

A model is no different than a hypothesis embodied in code. If I have a hypothesis that the average width of neckties in this year’s Armani collection drives stock market prices, creating a computer program that predicts stock market prices falling as ties get thinner does nothing to increase my certainty in this hypothesis (though it may be enough to get me media attention). The model is merely a software implementation of my original hypothesis. In fact, the model likely has to embody even more unproven assumptions than my hypothesis, because in addition to assuming a causal relationship, it also has to be programmed with a specific value for the strength of that relationship.
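To make the necktie example concrete, here is a minimal sketch of such a "model" (the function name and every number in it are invented for illustration). Note that the sensitivity constant is supplied by the modeler, not discovered by the program, so the output can only restate the input assumption:

```python
def stock_market_model(tie_width_cm, sensitivity=850.0, baseline=12000.0):
    """Predict a stock index from the average Armani tie width.

    `sensitivity` (index points per cm) is an assumed constant: both the
    causal link and its magnitude are programmed in, not discovered.
    """
    return baseline + sensitivity * (tie_width_cm - 8.0)

# The model "confirms" that prices fall as ties get thinner, because that
# is exactly what we told it to say.
print(stock_market_model(7.0))  # 11150.0 (thinner tie, lower prediction)
print(stock_market_model(9.0))  # 12850.0 (wider tie, higher prediction)
```

Running the model lends the hypothesis no new support; it only restates the assumed relationship with misleading precision.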

This is not just a climate problem.  The White House studies on the effects of the stimulus were absolutely identical.  They had a hypothesis that government deficit spending would increase total economic activity.  After they spent the money, how did they claim success?  Did they measure changes to economic activity through observational data?  No, they had a model that was programmed with the hypothesis that government spending increased job creation, ran the model, and pulled a number out that said, surprise, the stimulus created millions of jobs (despite falling employment).  And the press reported it like it was a real number.

Postscript: I did not get into this in the original article, but the other mistake the study seems to make is to validate the model on a variable that is irrelevant to its conclusions.   In this case, the study seems to validate the model by saying it correctly simulates past upper ocean heat content numbers (you remember, the ones that are too few and too inaccurate to validate a hypothesis).  But the point of the paper seems to be to understand if what might be excess heat (if we believe the high sensitivity number for CO2) is going into the deep ocean or back into space.   But I am sure I can come up with a number of combinations of assumptions to match the historic ocean heat content numbers.  The point is finding the right one, and to do that requires validation against observations for deep ocean heat and radiation to space.

  • Malcolm

    Renewable Guy:

    “Covering ocean depths to 2000 meters is a more thorough coverage than 700 meters.”

    Wait, you are talking about von Schuckmann, 2009, right? I was talking about von Schuckmann, 2011, which did not go down to 2000 meters. Given that we have von Schuckmann, 2011, why bring up von Schuckmann, 2009 at all? I am fine discussing any paper, it just seems that you got caught between them.

    Back to the main point, you are right: covering ocean depths to 2000 meters is going to give a more thorough picture than covering to 700 meters, but only provided the analysis method is the same. As I say, von Schuckmann, 2011 throws away data. It is my understanding that Loehle and similar papers do not. Throwing away data affects how thorough the coverage is, and right now it is not at all clear to me which study did a better job. If it is clear to you, provide your argument.

    “If you read the skeptical science article, there is a large amount of uncertainty of the argo data because it is so new.”

    You know, unless you provide some numbers, that’s just hot air. Some CAGW papers proclaim that the Argo data unequivocally shows warming and that the uncertainties are small. I wouldn’t be surprised if some pages at scepticalscience link to these papers. Von Schuckmann warns that systematic errors might lurk in the data. Well, I am all for this; let’s explore that area. If there are systematic errors, let’s look at them and discuss what they are. But until we find these errors, empty talk about the possibility of having them is just a waste of time.

    “The warming trend observed is slightly smaller than that seen in Von Schuckmann (2009), where the authors measure down to ocean depths of 2000 metres, and found a warming trend of 0.77 ±0.11 watts per square metre. However, it completely refutes a recent (2010) skeptic paper which suggested the oceans were cooling, based on the upper ocean down to 700 metres. Clearly much heat is finding its way down into deeper waters. And although small in comparison, the deep ocean is gaining heat too.”

    I am puzzled. I quoted exactly this excerpt from scepticalscience in my post yesterday, replying to it, and now you reply back to me by quoting it again? Really?

  • Malcolm

    Renewable Guy:

    On Hansen:

    “I dropped that method and went to one already done by Gavin Schmidt.”

    OK. I followed the link to the comparison done by Gavin Schmidt that you provided. Gavin compares trends starting with 1984. This is a problem. Hansen’s paper was written in 1988, so, of course, including the period between 1984 and 1988 in the comparison makes Hansen’s predictions look much better than they really are, because the data for that period was already available to Hansen.

    Thus we can’t use Gavin’s method verbatim because, as is, it contains a serious flaw. Any other papers?

  • TheChuckr

    Renewable, look up the sources yourself, you know how to use Google, right? I’ll give the link for your last point.

    http://www.assassinationscience.com/climategate/

  • netdr

    renewable guy:

    netdr:
    BTW:

    The skeptical science method of analyzing Dr Hansen’s model performance was an amazing example of cherry picking and twisted thinking. If I made a model and evaluated it that way my boss would fire me.

    How they thought it was fair to start at 1984 is beyond comprehension.

    A fair method is to take 5-year averages, to avoid spikes, and then evaluate predicted vs actual.

    By that method Hansen’s model was an epic fail! 220% wrong.

    ############################################

    You have given your opinion but have failed to make your point.

    There are other years which have 0% difference. Should Hansen’s model be evaluated by those years also?
    ***************
    Yes!
    Reality is a B****! But it is the only reality we have.

  • netdr

    The Gavin method is flawed beyond use.

    It stops in 2009 and starts in 1984.

    As has been shown, starting in 1984 is blatant cherry-picking.

    Since Hansen’s models contained a simulated volcano, the later the comparison is made, the worse the model looks.

    In 2011 the average so far is .48, which is much lower [by .15] than the .63 in 2010. So evaluating the model at the end of 2011 will show it has further jumped the shark.

    Evaluating a model isn’t rocket science, and attempts to make it seem like it is are obvious signs of trickery. Bamboozling the mentally lazy seems to be their MO.

    A wise man once wrote:
    “People who let others think for them because they think they aren’t capable of it ARE ABSOLUTELY RIGHT!

    They know their limitations.”

  • netdr

    Re Gavin’s phony “analysis” of Dr Hansen’s model performance.

    http://cstpr.colorado.edu/prometheus/archives/hansenscenarios.png

    By starting at 1984 instead of 1988, the model appears to have “predicted” the .3 °C of warming which occurred between 1984 and 1988 [when it was presented to congress and updated, presumably]. So by “predicting the past,” the model gets an unwarranted boost.

    By ending at 2009 it gets another unwarranted boost, because the model predicted fast warming which didn’t happen in 2010 and 2011.

    Gavin is good at fooling the fools.

    The model will look even worse after 2011 is in the records. [I can hardly wait]

  • netdr

    Renewable

    Gavin is hoping you are too mentally lazy to see that 1/2 of the actual warming took place before the model results were presented to congress. That part of the warming was right because it had been tweaked to be right. So he starts with his thumb on the scale and ends the same way.

    Do you wonder why skeptics think he is a habitual liar?

  • netdr

    Renewable

    The AR4 predictions are even more off base than Dr Hansen’s.

    http://www.cgd.ucar.edu/ccr/strandwg/CCSM3_AR4_Experiments.html

    Essentially all models predicted .20 °C of warming by 2010.
    [the committed line doesn’t count as it is the control]

    The actual results:

    2000 [5-year avg] = .45
    2010 [5-year avg] = .55 approx

    So the actual warming [.10] is 1/2 what was predicted [.20].

    I am sure the apostle of Global Warming Gavin can waterboard the data until it says what he wants it to.

  • netdr

    Renewable

    The laughable attempt by Gavin to defend Hansen’s model is one of the main reasons I don’t buy the CAGW nonsense. The “circle the wagons” defense of any pro-CAGW scientist, even when he is obviously wrong, makes me suspicious that honest climate science is dead, or only practiced by skeptics.

    Steve Schneider’s immortal words expose the moral bankruptcy of the CAGW bandwagon.

    “We have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we have. Each of us has to decide what the right balance is between being effective and being honest.”

    [There is a longer version, which doesn’t change a thing. The man is telling people to conceal the truth!]

    If contrary evidence exists to CAGW it will never be willingly divulged by those on the CAGW bandwagon.

  • netdr

    Renewable

    Pretending that predicted rate divided by actual rate is somehow more valid than predicted change over actual change is just obfuscation for the mathematically challenged.

    Since rate = change / time,

    predicted rate over actual rate is (predicted change)/time divided by (actual change)/time, and the two times obviously cancel out.
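The cancellation is trivial to check numerically (the figures below are made up purely for illustration, not taken from any dataset):

```python
# Made-up figures purely for illustration.
predicted_change = 0.5    # predicted warming over the period
actual_change = 0.25      # actual warming over the same period
years = 25.0

# Ratio of rates vs ratio of total changes: the common time
# interval divides out, so the two comparisons are identical.
ratio_of_rates = (predicted_change / years) / (actual_change / years)
ratio_of_changes = predicted_change / actual_change

print(ratio_of_rates, ratio_of_changes)  # 2.0 2.0, the times cancel
```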

    The cherry picking is what makes Hansen’s model look better than it actually is.

    He starts in 1984 [not 1988], but 1/2 of the actual warming takes place between 1984 and 1988. He ends at 2009, but a whole 1/10 of a degree is predicted for the last year because the simulated volcano is over.

    So he fattens the actual warming and reduces the predicted warming to get a brain-damaged answer.

    He only fools those who wish to be fooled.

  • Malcolm

    So, Renewable Guy, do you agree that Hansen’s predictions were off by some huge amount, like a factor of 2, and that the comparison method used by Gavin Schmidt was (and is) highly misleading?

    Also, any response on ocean heat?

  • netdr

    Renewable

    He lies low, just like Sock Puppet [aka “In science all opinions are equal”].

    Out of the mouths of babes comes profound truth, accidentally?

  • netdr

    I watched lecture 5 by David Archer about how the no feedback CO2 warming is computed.

    I had 2 comments.

    How can different people get such different answers? The British Royal Society gets .4 °C while Hansen gets 1.0 °C. If the computation is so simple, why do they get different answers?

    When comparing Earth to Venus there was no mention of the depth of the atmosphere. Venus has a much deeper atmosphere than Earth, and if you measured at the altitude where the pressure matches Earth’s surface pressure, you would find Earth-like temperatures.

    Surface temperatures on Venus are taken at the level of a super Death Valley. Of course it’s hot, but CO2 didn’t do it.

    I don’t claim that CO2 doesn’t cause any warming, but mankind has no measurement of how much, and without that CAGW is a bad joke.

  • netdr

    After thinking about it I have another comment.

    RE:
    http://www.youtube.com/watch?v=8-5PsoF7Vp0&feature=relmfu

    He represents CO2 by a sheet of glass which reflects heat out to space and down to earth, but he never explains how he derives the percentage of energy bounced out into space and the percentage reflected back to earth.

    If it were an almost perfect pane of glass and 99.9% of the radiation were bounced back, it would take a lot of warming before radiation out = radiation in. It would be Venus-like.

    If the pane of glass were poor and only 1% were bounced back, the warming would be slight. This critical number appears to be pulled out of some random number generator, or tweaked to match reality.

    The pane of glass represents water vapor and all the other GHGs in the atmosphere, so teasing out CO2’s effect isn’t easy.
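The pane-of-glass reasoning above can be sketched with the textbook single-layer energy-balance model. This is a hedged illustration, not Archer's actual derivation: 255 K is the standard effective temperature of an Earth with no absorbing layer, and the absorption fractions f below are arbitrary choices meant only to show how strongly the answer depends on that one assumed number.

```python
T_EFFECTIVE = 255.0  # K: radiative equilibrium temperature with no absorbing layer

def surface_temp(f):
    """Equilibrium surface temperature when a single 'pane of glass'
    absorbs a fraction f of outgoing heat and re-emits half of it
    back downward (standard single-layer energy balance)."""
    return T_EFFECTIVE * (2.0 / (2.0 - f)) ** 0.25

# An almost transparent pane (f = 0.01) barely warms the surface;
# an almost perfect one (f = 0.999) raises it by roughly 48 K.
for f in (0.01, 0.50, 0.999):
    print(f"f = {f:5.3f} -> surface {surface_temp(f):5.1f} K")
```

The point the comment makes survives in the sketch: everything hinges on f, and the sketch itself says nothing about where a defensible value of f comes from.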

  • netdr

    Malcolm wrote:

    So, Renewable Guy, do you agree that Hansen’s predictions were off by some huge amount, like a factor of 2, and that the comparison method used by Gavin Schmidt was (and is) highly misleading?
    *********
    Any reasonable person would conclude that Gavin’s method is highly misleading. Don’t you agree, Renewable?

    He is in denial!

  • Atmospheric models divide the entire earth into 3-dimensional grid blocks and find numerical solutions to partial differential equations created by calculating the relationships and effects of each block on the blocks around it. This is further complicated by the blocks that sit next to “phase changes,” for want of a better term: the relationship of a gaseous atmosphere block in contact with the ocean (liquid phase), a land mass (solid phase), or the edge of space (no phase?) is at best poorly understood. Since a minor error propagates through the entire calculation, if even one energy-exchange relationship is wrong, the whole model becomes suspect.

    To date, none of the models that laymen (Al Gore) often use to support their conclusions has been able to history-match actual results. When the models are taken back to 1900 and actual data is input, virtually every model predicts that current temperatures should be 4-6 degrees centigrade higher than we currently experience. The models are then manipulated, changing the equations and relationships, to force a match that may or may not have scientific reality. For instance, one model may lower the energy from the sun to get a history match, then project forward with this error while leaving the greenhouse effect of carbon dioxide unchanged, when it might be more logical to reduce the effect of CO2 instead. A myriad of relationships of this nature can be adjusted in the models, from treating cloud cover as a heat-retaining blanket to treating it as a sunlight-reflecting agent. Minor changes in the equations can have a dramatic effect on the model’s predictions.

    In short, one can make these models predict anything the modeler wants and still appear reasonable, because the models are so complicated and the relationships between cells, particularly at the phase-transition boundaries, are largely unknown. Every numerical simulation projects less heat escaping into space as earth’s temperature rises due to carbon dioxide. Yet the temperature increase since 1980 (due to increased sunspot activity) clearly shows increases in heat escaping into space, even though CO2 has increased over the same period. Every model has built-in heat retention that overstates the effect of carbon dioxide in order to predict catastrophic global warming. NASA satellite data released in 2011 indicated that the rate of escape of long-wave radiation (heat) into space has not been reduced by increases in CO2, which was also measured by Lindzen of MIT, the complete opposite of every atmospheric model.
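The grid-block description above can be reduced to a toy sketch. This is not any real climate code: it is a one-dimensional ring of cells exchanging "heat" by an explicit finite-difference step, with a single made-up exchange coefficient k standing in for the many tunable relationships the comment describes. Two equally plausible-looking choices of k yield different projections from the same starting state:

```python
def step(cells, k):
    """One explicit diffusion step on a 1-D ring of grid cells:
    each cell exchanges energy with its two neighbours."""
    n = len(cells)
    return [cells[i] + k * (cells[(i - 1) % n] + cells[(i + 1) % n] - 2 * cells[i])
            for i in range(n)]

def run(k, steps=20):
    """Project the 'temperature' at one far cell after `steps` steps."""
    cells = [10.0] + [0.0] * 9  # arbitrary initial field: all heat in cell 0
    for _ in range(steps):
        cells = step(cells, k)
    return cells[5]

# Total energy is conserved either way, yet the projection at cell 5
# depends entirely on the assumed coefficient.
print(run(0.10), run(0.25))
```

Both runs conserve the total energy exactly, so each looks internally "reasonable"; only the tunable coefficient decides what the model projects at the distant cell.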

    If this isn’t complicated enough, throw in non-linear discontinuous functions such as volcanic eruptions, sunspot-cycle variations, deforestation, reforestation, etc., and the models, even if reasonably able to predict a linear progression, now have no chance of being anything close to reality.

    Nevertheless, these model results are currently being used to scare the world into economic chaos under the presumption that they are reliable predictors of things to come. The people who believe the predictions constantly find data to support their conclusion and ignore anything that doesn’t, whether it matches the model or not. Then the model creator, as stated previously, adjusts the model to fit the data in a way that makes sure to predict catastrophe.

  • Ted Rado

    Billyjack:

    I developed models of chemical plant complexes for many years. Some financially catastrophic decisions were made based on empirical models. If a model was based on first principles, was rigorous, had NO fudge factors, and was thoroughly validated against actual plant performance (every process stream flow rate, temperature, and composition MUST match plant data), then, and ONLY then, was it dependable and usable.

    I have great confidence in rigorous models, but absolutely none in models that contain even ONE fudge factor. It is my understanding that the climate models are full of them. Consider just one variable: aerosols. These can vary in color, particle size, chemical composition, and concentration. Thus they can absorb heat, reflect heat, scatter light of different wavelengths, etc. Further, irregular inputs, such as volcanoes, cause huge variations. How in the world can anyone say, with a straight face, that they can write a program that accurately describes the behavior of aerosols and their effect on climate?

    Another problem, of course, is validation. I could write a program that says every man will become a mother 100 years from now. Although that is nonsense, you can’t prove it will not happen. All the people predicting dire things a hundred years from now will be long gone, so they cannot be proven wrong today. Neither can they be proven right, so it makes no sense to mess up the economy based on their models.

  • netdr

    I too have written computer models and am amazed by the almost mystical reverence nontechnical people have for them. A man with a few hundred notebooks and #2 pencils can get the same answers, but that is less impressive to the masses.

    Mine could be verified or proven wrong in a day or two, while climate models take 20 years or more to be shown wrong, and when they are supposedly fixed it takes the same amount of time again to check out the “fix.” As you said, the modeler can safely retire without being proven wrong.

    The rate of learning is so slow that climatology is in its infancy. Expecting a baby to predict 89 years into the future is ridiculous.

    The AR4 models have predicted .3 °C of warming since 2000, and essentially none has occurred. The error is infinite %. The response is “wait till next year,” which is what they will say for the next 89 years. Long before that, climate alarmism will be a bad memory.

  • Ted Rado

    My main point re the climate models is: how do you implement the CO2 reduction that is called for? I have asked the CAGW pushers repeatedly what energy sources we can substitute for fossil fuels. They dodge the question or (re wind and solar) say “the Spaniards are doing it,” so it must be right, and thus claim that we have a viable alternative.

    One must design a complete package, or we will merely create an economic catastrophe. In the absence of such a package, whether or not the climate models are sound is immaterial. We have nowhere to go, except to move north if they are correct.

    As a consequence of these considerations, I have lost some of my interest in the discussion of the validity of the climate models. Until there is a viable alternative energy scheme, it doesn’t matter.