Why the Historical Warming Numbers Matter

This is cross-posted from my non-climate blog. I wrote it for folks who spend less time on the science-based skeptic sites, but several readers told me it would still be useful to post here.

First, let’s settle something. The world has warmed since 1850. While there is an error bar on nearly every statement about nature, I think there is little point in questioning this past warming. Ice core data suggests that the little ice age, which ended sometime in the very early 19th century, was perhaps the coldest period, or one of the two or three coldest periods, of the last 5000 years (i.e., of nearly the entire span of human civilization). Temperatures are inevitably warming from this low point. (1)

So, if the point is not to deny warming altogether, why do discussions of Climategate spend so much time picking over and trying to audit historical temperature records like HadCRUT3 or NASA’s GISTEMP? Skeptics often argue that much of the warming is due to bogus manual adjustments in the temperature records and to biases such as urban warming. Alarmists argue that the metrics may understate warming because of masking by anthropogenic cooling agents (e.g., sulfate aerosols). Why bother? Why does it matter whether past warming is 0.6C or 0.8C or 0.3C? There are at least two reasons.

1. The slope of recent temperature increases is used as evidence for the anthropogenic theory.

We know greenhouse gasses like CO2 have a warming effect in the lab. And we know that overall they warm planets because otherwise ours would be colder. But how much does an incremental amount of CO2 (a relatively weak greenhouse gas) warm the Earth? A lot or a little? Is the sensitivity of the climate to CO2 high or low?

Every time I try to express this, it sounds so ridiculous that people think I must have it wrong. But the main argument supporting a high climate sensitivity to CO2 is that scientists claim to have looked at past warming, particularly from 1950-2000, and they can’t think of any natural cause that could be behind it, which leaves CO2 by process of elimination. Yeah, I know this seems crazy – one wants to ask whether this is really a test of CO2 sensitivity or of scientists’ understanding and imagination, but there you have it.

Now, they don’t always say it this directly. What they actually say is that they ran their climate models and their climate models could not produce the warming from 1950-2000 with natural forcings alone, but could reproduce this warming with forcings from CO2. But since the climate models are not handed down from the gods, and are instead programmed by the scientists themselves to represent their own understanding of the climate system, in effect this is just a different way of saying what I said in the previous paragraph. The climate models perform the function of scientific money laundering, taking imperfect knowledge in on the front end and somehow converting it into settled science at the output.

Now, there are a lot of ways to criticize this approach. The models tend to leave out multi-decadal ocean cycles and handle cloud formation poorly. Further, the period from 1957-2008, which supposedly can only be explained by non-natural forcings, has almost exactly the same temperature profile and increase as the period from 1895-1946, which of necessity must be mostly “natural.” I go into this more here, among other places.

But you can see that the amount of warming matters to this argument. The more the warming falls into a documented natural range of temperature variation, the harder it is to portray it as requiring man-made forcings to explain. This is also the exact same reason alarmist scientists work so hard to eliminate the Medieval Warm Period and little ice age from the temperature record. Again, the goal is to show that natural variation is in a very narrow range, and deviations from this narrow range must therefore be man-made. (2)

This is the unified field theory of everything we are seeing in the CRU emails. We see scientists using every trick they can find to lower or smooth out temperature numbers before 1950 and to adjust numbers after 1950 upwards. Every trick and programming adjustment tended to have this effect, whether in proxy studies or in the instrumental record. And all the efforts to prevent scrutiny, ignore FOIA requests, and throw out raw data have been aimed at avoiding third-party replication of the statistical methods and adjustments they used to achieve these ends.

As an aside, I think it is incorrect to picture this as a SPECTRE-like cabal scheming to do evil. These guys really, really believed they had the right answer, and the adjustments were made to tease out what they just knew the right answer to be. This is why we are only going to see confused looks from any of them; they were convinced they were doing God’s work, and they are never going to understand what they did wrong. That doesn’t make it any less bad science, but it does emphasize that we are never going to get data without spin until total sunlight is brought to this process.

2. It is already really hard to justify the huge sensitivities in alarmist forecasts based on past warming — if past warming is lower, forecasts look even more absurd.

The best way to illustrate this is with a few charts from my most recent climate presentation and video. We usually see warming forecasts expressed by year, but the underlying relationship is between warming and CO2 concentration (this relationship is called climate sensitivity). One can graph the forecasts at various CO2 levels:

[Slide 57: Warming forecasts plotted against atmospheric CO2 concentration]

The blue line corresponds to the IPCC no-feedback formula (which I think originally goes back to Michael Mann) and yields about 1-1.2C of greenhouse warming from CO2 before feedback effects. The middle two lines correspond to the IPCC mid and high forecasts, and the top line corresponds to more alarmist forecasts from folks like Joe Romm, who predict as much as 8-10C of warming by 2100 (when we will be at 650-800 ppm CO2 per the IPCC). By the way, the IPCC does not publish the lines above the blue line, so I have taken the formula they give for the blue line and scaled it to meet their end points. I think this is reasonable.
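
For anyone who wants to play with these curves, here is a minimal sketch (my own illustrative numbers, not the IPCC’s published code). It assumes roughly 1.2C of warming per doubling of CO2 with no feedback, a 280 ppm pre-industrial baseline, and about 700 ppm of CO2 by 2100, and it simply rescales the no-feedback curve to hit assumed end points of 3C, 5C, and 8C – the same scaling trick described above, with stand-in numbers.

```python
# A minimal sketch (not the IPCC's own code) of how lines like those in the
# chart above can be constructed.  Assumptions: about 1.2C per doubling of CO2
# with no feedback, a 280 ppm pre-industrial baseline, roughly 700 ppm by 2100,
# and illustrative end-point warmings of 3C, 5C, and 8C for the higher lines.
import math

C0 = 280.0                       # pre-industrial CO2 concentration, ppm (baseline)
C_2100 = 700.0                   # assumed CO2 concentration in 2100, ppm
NO_FEEDBACK_PER_DOUBLING = 1.2   # degrees C per doubling, before feedbacks

def no_feedback_warming(c_ppm):
    """Warming above the pre-industrial baseline, no feedbacks (logarithmic)."""
    return NO_FEEDBACK_PER_DOUBLING * math.log(c_ppm / C0, 2)

def scaled_line(c_ppm, warming_at_2100):
    """Rescale the no-feedback curve so it reaches `warming_at_2100` at C_2100."""
    scale = warming_at_2100 / no_feedback_warming(C_2100)
    return scale * no_feedback_warming(c_ppm)

if __name__ == "__main__":
    for c in (280, 340, 400, 500, 600, 700, 800):
        print(f"{c:4d} ppm: "
              f"no-feedback {no_feedback_warming(c):4.1f}C, "
              f"mid-style {scaled_line(c, 3.0):4.1f}C, "
              f"high-style {scaled_line(c, 5.0):4.1f}C, "
              f"alarmist-style {scaled_line(c, 8.0):4.1f}C")
```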

A couple of things: all climate models assume net positive feedback, which skeptics consider the key flaw in catastrophic global warming theory. In fact, most of the catastrophe comes not from global warming theory itself but from this second theory that the Earth’s temperature system is dominated by very high positive feedback. I illustrate this below. The blue line is from CO2 greenhouse gas warming; everything above it is from the multiplier effects of assumed feedbacks.

[Slide 21: Warming from the CO2 greenhouse effect alone (blue line) versus the additional warming from assumed positive feedbacks]

I won’t go into the feedback issue much now; search my site for “positive feedback” or watch my video for much more. Suffice it to say that skeptics consider the feedback issue the key failure point in catastrophic forecasts.
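
For reference, the arithmetic behind that multiplier, in the standard simple linear-feedback framing (a sketch, not a full derivation): if f is the fraction of the output fed back into the system, a no-feedback warming of ΔT₀ becomes

$$\Delta T \;=\; \frac{\Delta T_0}{1 - f}, \qquad 0 \le f < 1$$

so a no-feedback response of about 1.2C per doubling becomes roughly 3C per doubling if f is around 0.6, and reaching the 8-10C forecasts requires f approaching 0.85-0.9.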

Anyway, beyond arguing about feedbacks, there is another way to test these forecasts. Relationships that hold for CO2 and warming in the future must also hold in the past (same Earth). So let’s just project these lines backwards to the CO2 level of the late 19th century.

[Slide 61: The same forecast lines projected backwards to late-19th-century CO2 levels]

Can you see the issue? When projected back to pre-industrial CO2 levels, these future forecasts imply that we should already have seen 2, 3, 4, or more degrees of warming over the last century, yet even the flawed surface temperature records we are discussing, with their upward biases and questionable adjustments, show only about 0.6C.

Sure, there are some time delay issues, probably 10-15 years, as well as some potential anthropogenic cooling from aerosols, but none of this closes these tremendous gaps. Even with an exaggerated temperature history, only the no-feedback 1C-per-century case is really validated by history. And if one assumes the actual warming is less than 0.6C, and that only part of it is from anthropogenic CO2, then the forecast actually justified is one of negative feedback, showing less than 1C per century of warming from manmade CO2, which is EXACTLY the case that most skeptics make.
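
To make the back-cast concrete, here is a rough worked example with illustrative numbers (not the exact inputs to the chart above): assume CO2 rose from about 290 ppm in the late 19th century to about 385 ppm recently, credit every bit of the roughly 0.6C in the surface records to CO2, and ask what sensitivity per doubling that implies under a logarithmic relationship.

```python
# A rough worked example (illustrative numbers, not the chart's exact inputs)
# of the back-cast test described above: given the observed warming and CO2
# change since the late 19th century, what climate sensitivity (warming per
# doubling of CO2) is implied if ALL of that warming is blamed on CO2?
import math

C_START = 290.0          # assumed late-19th-century CO2, ppm
C_NOW = 385.0            # assumed recent CO2, ppm
OBSERVED_WARMING = 0.6   # degrees C shown by the surface records discussed above

def implied_sensitivity(observed_dt, c_start, c_now):
    """Sensitivity per doubling implied by a logarithmic CO2-temperature relationship."""
    doublings = math.log(c_now / c_start, 2)
    return observed_dt / doublings

if __name__ == "__main__":
    s = implied_sensitivity(OBSERVED_WARMING, C_START, C_NOW)
    print(f"Implied sensitivity: about {s:.1f}C per doubling of CO2")
    # With these inputs the answer is roughly 1.5C per doubling -- close to the
    # no-feedback case, and far below the 3-10C per doubling behind the forecasts.
```

If the true warming is lower than 0.6C, or only part of it is due to CO2, the implied number drops further, which is the point of the paragraph above.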

“Who controls the past controls the future: who controls the present controls the past.” – George Orwell, 1984

Footnotes:

(1) More than once I have wondered how much the fact that the thermometer was invented at perhaps the coldest point in human memory (the early 17th century) has contributed to the perception that current warm weather is unusual.

(2) For those who are on the ball, perhaps you can spot an amazing disconnect here. Scientists claim that the natural variation of temperatures stays in a very narrow band, never moving even 0.2C per decade by natural means. But they also say that the Earth’s temperature system is dominated by positive feedback, meaning that very small changes in forcings are magnified manyfold into large temperature changes. I won’t go into it in depth from a systems perspective, but trust me that “high stability in a narrow range” and “dominated by high positive feedback” are not very compatible descriptions of a system.
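
To put rough numbers on that incompatibility, here is a minimal sketch with made-up figures, using the same linear-feedback arithmetic sketched earlier in the post:

```python
# A minimal numerical illustration of footnote (2), with made-up numbers: the
# same gain that is claimed to amplify CO2 forcing must also amplify natural
# fluctuations, which is hard to square with a claimed narrow natural range.
GAIN = 3.0            # amplification implied by f ~ 0.67, since gain = 1 / (1 - f)
natural_wiggle = 0.3  # degrees C a modest natural fluctuation might cause on its own

print(f"Without net feedback: about {natural_wiggle:.1f}C of natural variation")
print(f"With the assumed gain: about {GAIN * natural_wiggle:.1f}C of natural variation")
```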

28 thoughts on “Why the Historical Warming Numbers Matter”

  1. `Now, there are a lot of ways to criticize this approach. The models tend to..’

    You missed something, because you are smart and it, too, is too unbelievable as science. The modelers themselves use the recent instrumental GHG records in their calculations. GHGs have increased due to likely anthropogenic effects. Therefore the temperature models are tuned to look like the slope of the CO2 instrumental readings. That simple. Of course there can be more and other tricks used too.

    Heck, they may also use millennial proxy CO2 readings. If the proxy temperature reconstruction people use the CO2 record to calibrate or otherwise tweak temperature readings, that would be something! Could “instrumental climate data” include the CO2 data? Yes.

    `Now, they don’t always say it this directly. What they actually say is that they ran their climate models and their climate models could not produce the warming from 1950-2000 with natural forcings alone, but could reproduce this warming with forcings from CO2′

    Yes, it is hard to believe. But my experience is the same as yours. Recently I came across the following post, which has the closest thing to a confession I have found; if you have better, please share!

    `[Trenberth] You can object all you like but you are not looking at the evidence and you need to have a basis, which you have not established. You seem to doubt that CO2 has increased and that it is a greenhouse gas and you are very wrong.’

    http://wattsupwiththat.com/2009/11/29/when-results-go-bad/

    It reflects the faith (“basis”) and the logical folly that disprovability is immaterial: the skeptic must provide the believer with an alternative explanation for something which he, a priori, won’t consider anyway.

    My suggestion: stick with your gut on this avenue. Does it not follow that the temperature reconstruction data would be massaged by various adjustments, steps, and tricks to look like the CO2 data? Or calibrated with the CO2 data itself? That is the core of their belief. Look at the questioned document at this site:

    http://www.climateaudit.org/?p=3384

    Take a gander at the graph. It is the only time I have seen the 1,000-year CO2 hockey stick data plotted against one of the Team’s works. Why do they not use this as their “Aha!” graphic evidence? Would it not be a slam dunk? The reason, I suppose, is that the team senses it looks too good to be true, too clean a correlation for what they believe is direct causation! To other scientists, that is; the graph was not for their consumption. And note that the blogger, a brilliant man, doesn’t have an “Aha!” moment with the CO2. Why? Because, like you said, it sounds too crazy to believe.

    If you want more on the CO2 data in the graph and from where it comes, just ask

  2. “First, let’s settle something. The world has warmed since 1850. ”

    Yes it has, BUT we have evidence that the temperature records have been adjusted in a way that overstates the warming trend from the early 1900s to 2009.
    Bogus adjustments to create warming trends:
    http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero/

    Turning station changes into warming trends:
    http://briefingroom.typepad.com/the_briefing_room/2009/11/niwas-explanation-raises-major-new-questions.html

    Alaska ‘bodged’:
    http://noconsensus.wordpress.com/2009/12/08/alaska-bodged-too/

    IMHO, this is the lurking submerged-under-the-waterline scandal that is at the heart of why CRU and others kept playing ‘hide the ball’ with the data.
    They have overstated the warming – BECAUSE THEY NEED THAT TO KEEP THE MODELS LOOKING GOOD.

    As you ably point out, the #1, main, and major way to confirm the IPCC models is to check whether they have ACTUALLY MATCHED THE TRENDS SO FAR.
    Those trends, via back-of-envelope calculation – if you assume all of the 1950 to 2009 warming is man-made – work out to about 1.2C sensitivity.
    BUT THAT ASSUMES THE 0.6C IS REAL, when in fact, in station after station, we see cases of UHI being ignored or ‘averaged out’ in a way that adjusts up the trend.

    If the warming trend from 1940 to the present is actually more like 0.3C of warming over 70 years and 80 ppm of CO2, it blows even bigger holes in the “CO2 is a crisis” theory. IT WOULD MEAN THE WHOLE WARMING TREND IS NO GREATER THAN THE NATURAL TRENDS FROM 1890-1940! The scientists would have to look back at the models and admit that the highly suspect assumptions are wrong (“Hey, let’s pretend we will have a lot more water vapor in the air to supercharge the warming, but not much of that pesky negative-feedback cloud effect that would take it all away”).

  3. Nice work. Especially the point about the positive feedback: so how come it was a stable system before?

    I haven’t seen many people working through this, so I started a series about positive feedback using albedo as my first example.
    http://scienceofdoom.com/2009/12/02/positive-feedback-albedo-and-the-end-of-all-things/

    Given that positive feedback is present in some parts of the coupled systems that make up our climate (we all agree on that), it’s quite clear why we are experiencing warming in the 20th/21st centuries: it started with the end of the last ice age 18,000 B.P. (I don’t know what caused that to end, but it wasn’t humans.)

    That’s why I find it so confusing to hear the mantra repeated, “Nothing can explain the current warming except CO2.” Nothing, except positive feedback in this immensely complex world that contains hundreds of variables and interactions that we haven’t yet started to model. (Like the aerosols, which are somewhere between having no effect and wiping out AGW according to the IPCC, now that it has significantly improved its knowledge on the subject!)

    Just a disclaimer on the above – I don’t know whether positive feedback from albedo, CO2 from oceans, humidity etc is in fact the reason for the last 18000 years of warming. There are negative feedbacks as well.

  4. Can you provide a citation for the equation used by the IPCC to make those graphs of the dependence of temperature on CO2 concentration with the page number included? I would like to take a look at their derivation. A linear dependence seems a bit odd for a system of 10^30 degrees of freedom to me. Thanks.

  5. Great point about positive feedback. I am an electrical engineer and have wondered about similar things myself. I also wondered: if the climate system is so delicate, with such strong positive feedback, why hasn’t it spun out of control yet?

  6. Thanks for the summary. I’ll be sharing it with friends.

    However, one question does spring to mind. Why do you assume feedback is linear? Couldn’t it be logarithmic? So you could have the first few (hundred) years with seemingly little net effect, and then an explosion of net effect. I imagine alarmists would use it to say: a) that’s why recent history shows little net change; and b) that’s why we’re still seeing minimal net effect – essentially because we haven’t reached the ‘knee’ of the curve yet.

    Not that I’m suggesting climate sensitivity works like that. But it’d be nice to address this point to cover all bases.

  7. Excellent.
    I have not seen AGW promoters reply to this sort of critique with anything other than circular arguments or dismissal.

  8. “Couldn’t it be logarithmic?”

    The recent millennial CO2 has a stick blade that “looks” logarithmic. That look is what they, unconsciously or not, are trying to imitate with a variety of arguable “tricks”: statistical flukes, selectivity of proxy data, steps and adjustments to temps. Another trick would be to use the CO2 data as a calibration or algorithm. Maybe the millennial readings, maybe the modern CO2 instrumental measurements.

  9. “Yeah, I know this seems crazy”

    It is not. It is a common logical fallacy, writ large. It is “begging the question”.

    Some warmist extremists may be going beyond eyeball-fudging data to make temperature match the CO2 record more closely; they may be using the CO2 instrumental or near-historic records themselves, clothed as “instrumental climate data” (one of their code phrases, like “publicly available information”). In fact, I think this is what the core believers are doing, or partly doing, or have done in the past, and without it now they have to tweak more and more elsewhere.

  10. Same old recycling of retarded bullshit. You’re tragically stupid, and so are the vast majority of your readers.

    “Temperatures are inevitably warming from this low point”

    Climate doesn’t change of its own volition. There is never any inevitability about climate change that arises simply from the contemporary state of the climate. To think so, you have to have no idea at all about how the climate works.

    “Every time I try to express this, it sounds so ridiculous that people think I must have it wrong. But the main argument supporting a high climate sensitivity to CO2 is that scientists claim to have looked at past warming, particularly from 1950-2000, and they can’t think of any natural cause that could behind it”

    It sounds ridiculous because it’s fucking stupid. You must have a horribly low IQ to go through thought processes like this. You obviously have not got the faintest idea of how scientists attribute climate change. There is a chapter on this in that rather nice summary of the science, the IPCC report. If you were at all capable of learning, I’d suggest you read it. But you are not, so I won’t.

    “The more the warming falls into a documented natural range of temperature variation, the harder it is to portray it as requiring man-made forcings to explain”

    Not even remotely. There have been forests at the poles at times in the Earth’s history; there have been ice sheets in the tropics. That has absolutely no bearing whatsoever on whether greenhouse gases cause warming.

    “This is also the exact same reason alarmist scientists work so hard to eliminate the Medieval Warm Period and little ice age from the temperature record”

    What a pathetic fantasy world you inhabit.

    “all climate models assume net positive feedback”

    They do not assume positive feedbacks. These exist and are observed. Only someone who is actually brain-damaged could fail to understand this.

    “Sure, there are some time delay issues, probably 10-15 years, as well as some potential anthropogenic cooling from aerosols, but none of this closes these tremendous gaps”

    Replace “but none” with “and all”, and “closes these tremendous gaps” with “renders completely invalid this fatuous analysis”, and your statement is accurate.

    The usual prediction – you will not respond to any of these points. You will not understand any of these points. You will regurgitate the same text within the next two weeks. You will never develop the intelligence necessary even to understand how pathetically wrong you are.

  11. “It sounds ridiculous because it’s f–king stupid.”

    I agree, but it exists anyway; I hear it all the time. Usually I hear it on the millennial scale, like “climate change in the past 1000 (or 2000) years must be anthropogenic, there is no other possibility.” That is the extreme warmist belief I have encountered.

    I agree with CS. The believers believe via question-begging science. The models “assume” CO2-increase forcing, nothing wrong there, but the thermometer records are adjusted upward on the assumption that nonconformance with the CO2 increase is human/instrumental error, or they do it just because they are believers, and a hint of guilt causes them to be so dictatorial and verbally abusive.

  12. I see that today, December 10, 2009, the climateaudit site has a link to a working paper dated December 9, 2009, broaching the subject, for the first time there, of calibration of proxy data to get a hockey stick graph purporting to represent temperature. Let me say that before my Dec. 9 post above, on or about Dec. 6 or 7, I posted here a longer text on the topic, clarifying what “proxy” and “instrumental measurements” were used, but that text failed to post, maybe because it sounded too crazy.

    Now that crazy calibration is a broached subject, I will tell you the CO2 source of the “calibration”. Or be completely disproved; no problem, that is real science. Here is my data.

    The data used to get the hockey stick is, for one, the data the modelers use: the ice core CO2 data from Law Dome (not the “isotope” or borehole data also from there). The CO2 data I mention above is not the “isotope” proxy data openly used by modelers, but the CO2 data itself.

    Your hockey stick comes from the data in Etheridge et al. (1996), “Natural and anthropogenic changes in atmospheric CO2 over the last 1000 years from air in Antarctic ice and firn,” J. Geophys. Res. 101, 4115-4128. Fig. 4, p. 4123 has the strong look of the one-thousand-year hockey stick, no? There is a follow-up non-journal publication, Etheridge et al. (1998), publicly available online with the same graphs. More to follow.

    Yours,
    cheesegrater

  13. The millennial hockey sticks look like hockey sticks because the CO2 data, plotted similarly, is a hockey stick. The blade is the increase in CO2 in modern times; I have no problem taking that as an anthropogenic signal. 2,000-year data from Law Dome can be found in Macfarling Meure et al. (2006).

    Etheridge (1988), Annals of Glaciology 10, 28-33, interestingly enough, has a CO2 hockey stick graph (fig. 4a) with instrumental data, one might say, “grafted” onto the end of the ice core CO2 data in an upswinging line. Was this the precedent for climate science grafting graphology? I do not know, but it is NOT inappropriate in this case. Indeed, it is helpful. The graft is, openly, instrumental CO2 data upon the ice core CO2 data. The line shows the continuation of an unsurprising trend.

    Which brings me to the matter of “instrumental” data. I see skeptics take it that this must mean temperature, i.e., that the instruments are heat detectors, thermometers. But then there is the climate science terminology “instrumental climate data.” One and the same?

  14. In Jones and Mann 2004, for example, “instrumental climate data” is mentioned under 3. Calibration of Proxy Data. Could this include non-temperature data? Possibly the CO2 data. The instrumental data for atmospheric CO2 from the past century or so would fit. How about the Etheridge measurements; is that not atmosphere in ice? Jones and Mann 2004 mention using isotopic ratios as a direct measure of temperature; they could mean that as “instrumental” but do not directly say so. IMO, I do not know if the graphs do so, but if they use calibration data to mold one-thousand-year hockey sticks, it has the “look” more of the CO2 measurement record, especially in the earlier hockey sticks. Of course, that does not mean other devices are also used here and elsewhere in combination. But I “sense” something in the separate use of “instrumental climate data” and “instrumental temperature data”. Then again, since the believers believe recent millennial temperature (alleged) change Must! be anthropogenically caused, and Must! be GHG-related, the temperature and CO2 records must track the same.

  15. One might take the removal of the MWP to be a promotional move, to visually scare the viewer. Could be. But if you take it as a near article of faith that recent millennial temps follow GHGs, you believe an accurate temperature plot must resemble the CO2 plot. Indeed, information otherwise would appear heretical, something to stamp out, or down. What better way to fashion that than to use the CO2 data itself? Is it cheating when you think the process purifies a thermometer record ridden with human errors, because CO2 change equals temperature change, at least in recent millennia?

    Conclusion–eyeball the Etheridge data. See what comes to mind.

  16. One more: the origin of my research is TAR (2001), sec. 3.2.3.2, figure 3.2b. Cited thereat is Etheridge et al. (1996).

  17. @Hunter,

    “It sounds ridiculous because it’s fucking stupid. You must have a horribly low IQ to go through thought processes like this. You obviously have not got the faintest idea of how scientists attribute climate change. There is a chapter on this in that rather nice summary of the science, the IPCC report. If you were at all capable of learning, I’d suggest you read it. But you are not, so I won’t.”

    I’d suggest you read the chapter. Mr. Meyer’s description is correct.

  18. @Hunter,

    I shouldn’t expect you to actually read the chapter. Here’s the part that proves you wrong and Meyer right.

    You can read it here http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter9.pdf

    Page 668:

    ‘Attribution’ of causes of climate change is the process of establishing the most likely causes for the detected change with some defined level of confidence (see Glossary). As noted in the SAR (IPCC, 1996) and the TAR (IPCC, 2001), unequivocal attribution would require controlled experimentation with the climate system. Since that is not possible, in practice attribution of anthropogenic climate change is understood to mean demonstration that a detected change is ‘consistent with the estimated responses to the given combination of anthropogenic and natural forcing’ and ‘not consistent with alternative, physically plausible explanations of recent climate change that exclude important elements of the given combination of forcings’ (IPCC, 2001).

  19. I recently discovered your site, and I love it. I’ve been looking at this stuff with a skeptical eye for years and it’s nice to see that someone has such a nice gathering of the “skeptic” arguments.

    Other than the positive feedback issues, which are complete BS even if we assume a logarithmic response, given that we’ve had more CO2 and higher temperatures in the past, my biggest problem is how they use mathematical models.

    You state: “Now, they don’t always say it this directly. What they actually say is that they ran their climate models and their climate models could not produce the warming from 1950-2000 with natural forcings alone, but could reproduce this warming with forcings from CO2.”

    (I apologize in advance if this has all been said before somewhere else on this site. )

    What mathematical models do is force us to be precise about our current understanding of the system (physical, biological, or whatever). We then have to use a test, or an experiment in sciences where you can actually experiment, to validate the model; or, to put it differently, to prove that our current understanding is correct. What has been done here is to include all the things we know about the climate and then choose a parameter to play with (the effect of CO2) until the model fits the past data. That isn’t validating the model. That’s either creating the model in the first place, or being intentionally deceptive. I mean, why play with CO2? I find it unlikely, basically to the point of impossibility, that this is the only way to fit the data; given the likely size of this mathematical model, there must be, to borrow from Aaron, 10^30 different ways to fit some 100-200 data points. Given that scientists can’t experiment on the global climate, the only way to validate the model is to wait for time to pass. The problem is that these aren’t controlled experiments that allow you to isolate the system down to a small number of factors to measure. This makes new data difficult to interpret, as it is limited by your current understanding, same as the old data. Meaning that you don’t know whether deviations in cloud formation (for example) from your model predictions are actually due to a change in [CO2], or just from cows farting, or again 10^30 other things; remember, you can’t control anything. This is in contrast to biology, for example, where you’re able to control for and measure the increase in one gene’s expression given the presence of a certain treatment. Thus, it’s effectively impossible to validate a model for the global climate, at least until we have far more knowledge of the system.

    This doesn’t mean climate models are worthless. It just means we should view them much as we view financial models. They might provide a rough guide, but it’s pretty much guaranteed that you haven’t accounted for everything in the system. Heck, that’s true for biology as well, only to a lesser degree.

    And Hunter, more people might believe you, or even just listen to you, if you stopped cursing at them. That’s a pretty clear sign that you either don’t know what you’re talking about or have some sort of agenda that you’re personally attached to.

  20. http://joannenova.com.au/2009/12/smoking-guns-across-australia-wheres-the-warming/

    Individual plots of raw Australian data. More fraud revealed.
    She deserves the Pulitzer, just like Woodward and Bernstein.
    Did you manage to find a synonym for the word HIDE that means anything other than HIDE, as in ‘HIDE the decline’?

    Norman Davies’ Five Rules of Propaganda:
    1. Endless repetition: repeating the same messages over and over with different variations and combinations.
    2. Disfiguration: discrediting the opposition with slander and crude parodies.
    3. Unanimity: presenting your point of view as if every right-headed person agrees with it, while smearing those who doubt it, using appeals to famous people, experts, and so-called consensus; hiding/excluding others from the underlying basis/information of your position.
    4. Transfusion: manipulating the prevailing values of the public to your own advantage.
    5. Simplification: reducing all facts into a comparison between ‘good and evil’ and ‘friends and enemies’.

    What the fellows at East Anglia and elsewhere were doing was not science. You see it when you actually put your own eyes on the raw data.

    Shall we have a look at the Antarctic now?
    http://icecap.us/images/uploads/antarctica_white_paper_final.pdf
    Scratch the surface – global warming is a fraud to the bone.
    It is Mann made.

    From an 11-year-old who doesn’t have a vanity blog:
    http://www.youtube.com/watch?v=F_G_-SdAN04 (home schooled, I’ll bet a dollar).

    The UN’s argument:
    http://www.youtube.com/watch?v=P_9mjBUSDng
    Good reason to supervise what your children see on TV.

  21. Wally,

    I like your argument about models and controlling observations.

    As an appendix to your argument, I think we have to prove that a real-world system is not random before we start building deterministic models. Climate history sure looks like random noise (see here for spectrum of global temperature history).

    Has anyone seen a proof that the climate is NOT random? If not, then deterministic models cannot be accurate.
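
    For what it’s worth, here is a minimal sketch (synthetic data only, not any real temperature record) of what looking at such a spectrum involves:

    ```python
    # A minimal sketch of "looking at the spectrum": red noise (an AR(1) process)
    # concentrates its power at low frequencies and shows no sharp peaks, which
    # is visually similar to many climate series.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1024
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = 0.9 * x[i - 1] + rng.normal(0.0, 0.1)   # AR(1) "red noise"

    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0)                   # cycles per time step

    low = power[(freqs > 0) & (freqs < 0.05)].mean()
    high = power[freqs > 0.25].mean()
    print(f"mean power at low frequencies : {low:10.2f}")
    print(f"mean power at high frequencies: {high:10.2f}")
    # A large low/high ratio with no isolated spikes is the signature of a
    # red-noise-like process rather than a clean deterministic signal.
    ```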

  22. Hi Warren,

    You say “Every time I try to express this, it sounds so ridiculous that people think I must have it wrong. But the main argument supporting a high climate sensitivity to CO2 is that scientists claim to have looked at past warming, particularly from 1950-2000, and they can’t think of any natural cause that could behind it, which leaves CO2 by process of elimination.”

    I think you have it exactly right. Here’s a bit of supporting evidence for the picture you portray from a University of Washington course in Global Warming. They have a web page with links to the lecture slides here: http://www.atmos.washington.edu/2009Q4/111/Lecture_Slides.htm

    This course features guest lectures from Professor Theodore L. (Tad) Anderson, a researcher in aerosols. Professor Anderson’s lecture slides are refreshingly honest about reliance upon the historical record, the uncertain role played by aerosols in climate models, the overall uncertainty surrounding climate sensitivity, and the manner in which causal attribution is claimed for CO2. In particular, see the lecture slides here: http://www.atmos.washington.edu/2009Q4/111/Lecture_Slides/111Fall09_Tad_Thurs_GW_critique_2perpage.pdf

    Warren, I enjoy your blog and videos very much. I particularly like the way you differentiate between the basic science of the greenhouse effect (upon which we all agree) and the alarmist theory of high climate sensitivity based on positive feedbacks. This is a point that needs to be emphasized in a debate that is far from over. Anyone who thinks this bit of science is ‘settled’ doesn’t know anything about it.

    Excellent work!

    John M.

  23. Kevan,

    Thanks. As someone who uses mathematical models, I’m extremely discouraged by what has been termed Climategate. It only makes it more difficult for me to persuade those unfamiliar with mathematical models of their necessity in a subject like biology, where it is so easy to experiment. After about my first sentence everyone looks at me crosswise, thinking, “you can make a model do anything, and why make a model instead of just running the experiment anyway?”

    As for the noise, that’s definitely a huge problem as well. We see all these graphs, rural temperatures, urban temperatures, adjusted temperatures, etc., and we usually see some sort of curve fit to the data. If we’re lucky we see an R^2, but what we never see is the confidence interval for the beta value. Many people, even some scientists, don’t know what that is or why it is needed. Basically, it is a measure of the confidence in the slope generated by your regression, telling you whether the slope is significantly different from zero. This is often not a big deal if you come away with a high R^2 and a large slope (either negative or positive). But when we see these slopes of 0.2 degrees per DECADE and still only an R^2 of maybe 0.4 (which is OK but not what I would consider high), I would bet that the slope is not significantly different from zero. I suppose it would be pretty easy to do this myself with various publicly available data sets, but besides you and maybe 10 other people, I’m not sure anyone would listen to me.
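
    In case a concrete recipe helps, here is a minimal sketch of that check using made-up anomalies rather than any real station data; it leans on scipy’s linregress, which reports the standard error of the slope:

    ```python
    # A minimal sketch of the check described above: fit a trend to (made-up)
    # annual temperature anomalies and report a 95% confidence interval for the
    # slope, so we can see whether it is distinguishable from zero.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    years = np.arange(1979, 2010)
    # Synthetic data: a 0.02 C/yr trend buried in 0.15 C of year-to-year noise.
    anoms = 0.02 * (years - years[0]) + rng.normal(0.0, 0.15, size=years.size)

    res = stats.linregress(years, anoms)
    # 95% CI: slope +/- t * stderr, with n - 2 degrees of freedom
    t_crit = stats.t.ppf(0.975, df=years.size - 2)
    lo, hi = res.slope - t_crit * res.stderr, res.slope + t_crit * res.stderr

    print(f"slope = {res.slope*10:.2f} C/decade, R^2 = {res.rvalue**2:.2f}")
    print(f"95% CI for slope: [{lo*10:.2f}, {hi*10:.2f}] C/decade")
    # If the interval includes zero, the trend is not statistically
    # distinguishable from no trend at the 95% level.
    ```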

  24. I think it’s safe to say that when the climate models meet their day of judgment, we will find it harder for a while to convince people that not all models are bad. But I think we’ll prevail. The astronomer’s model of the solar system is a pretty complicated model, but it’s known for its accuracy. And I think people will readily understand that no testing of climate models with repeated experiments ever took place.

    Here’s another line of reasoning, though I’m not so sure how this one would go down with the layman. The trusty Stone-Weierstrass theorem states that we can approximate any continuous function on a closed interval as closely as we like with a polynomial (forgive the gross abbreviation of the theorem). We could “model” the past two centuries of global temperatures with a ten-term polynomial, and we’d be dead wrong in our predictions (the sketch below makes this concrete with synthetic data). Thus the act of modeling alone is shown to be entirely worthless; it’s the repeated experiments that make a model useful.

    Not sure I could make that case to a layman, though.
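
    To make that concrete, here is a minimal sketch with synthetic data (not real temperature records), fitting a ten-term polynomial to a gentle noisy trend and then “predicting” fifty years past the end of the data:

    ```python
    # A minimal sketch of the polynomial point above, using synthetic data rather
    # than real temperature records: fit a ten-term polynomial (degree 9) to a
    # gentle noisy trend, then "predict" fifty years past the end of the data.
    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1900, 2001)
    x = (years - 1950) / 50.0                      # rescale years to about [-1, 1]
    series = 0.006 * (years - 1900) + rng.normal(0.0, 0.1, size=years.size)

    coeffs = np.polyfit(x, series, deg=9)          # ten coefficients, as in the comment
    poly = np.poly1d(coeffs)

    print(f"max in-sample error: {np.max(np.abs(poly(x) - series)):.2f} C")
    # Far outside the fitted range the noise-fitted high-order terms dominate,
    # so this "prediction" generally lands nowhere near the gentle underlying trend.
    print(f"'prediction' for 2050: {poly((2050 - 1950) / 50.0):.1f} C")
    ```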

  25. Hi Warren,

    You say “Every time I try to express this, it sounds so ridiculous that people think I must have it wrong. But the main argument supporting a high climate sensitivity to CO2 is that scientists claim to have looked at past warming, particularly from 1950-2000, and they can’t think of any natural cause that could behind it, which leaves CO2 by process of elimination.”

    I think you have it exactly right. Here’s a bit of supporting evidence for the picture you portray from a University of Washington course in Global Warming. They have a web page with links to the lecture slides here: http://www.atmos.washington.edu/2009Q4/111/Lecture_Slides.htm

    This course features guest lectures from Professor Theodore L. (Tad) Anderson, a researcher in aerosols. Professor Anderson’s lecture slides are refreshingly honest about reliance upon the historical record, the uncertain role played by aerosols in climate models, the overall uncertainty surrounding climate sensitivity, and the manner in which causal attribution is claimed for CO2. In particular, see the lecture slides here: http://www.atmos.washington.edu/2009Q4/111/Lecture_Slides/111Fall09_Tad_Thurs_GW_critique_2perpage.pdf

    Warren, I enjoy your blog and videos very much. I particularly like the way you differentiate between the basic science of the greenhouse effect (which we all agree upon) and the alarmist theory of high climate sensitivity based on positive feedbacks. This is a point that needs to be emphasized in a debate that is far from over. Anyone who says this particular bit of science is ‘settled’ doesn’t know anything about it.

    John M.

  26. I have another idea about the choice of 1950 as a starting point. It is the year CO2 concentration hit the low point of a mid-century downturn; since then the concentration data trend upward. So not only is 1950 a nice round number, it is also a very, very convenient starting point for CO2 explanations.

  27. Two sources: Trudinger et al. (2002), Kalman filter analysis…, J. of Geophys. Research, v. 107, no. D20, 4422, and the paper immediately after it, …4423.

  28. “I shouldn’t expect you to actually read the chapter. Here’s the part that proves you wrong and Meyer right.”

    Meyer’s claim is a straw man. Attribution is different from climate sensitivity. Estimates of CS are not derived from 20th century warming.
