This could easily be a business case: Two managers. One sits in his office, looking at spreadsheets, trying to figure out if the factory is doing OK. The other spends most of his time on the factory floor, trying to see what is going on. Both approaches have value, and both have shortcomings.
Shift the scene now to the physical sciences: Two geologists. One sits at his computer looking at measurement data sets, trying to see trends through regression, interpolation, and sometimes via manual adjustments and corrections. The other is out in the field, looking at physical evidence. Both are trying to figure out sea level changes in the Maldives. The local geologist can’t see global patterns, and may have a tendency to extrapolate too broadly from a local finding. The computer guy doesn’t know how his measurements may be lying to him, and tends to trust his computer output over physical evidence.
It strikes me that there would be incredible power in merging these two perspectives, but I sure don't see much movement in this direction in climate. Anthony Watts has been doing something similar with temperature measurement stations, trying to bring real physical evidence to bear on improving the computer modelers' correction algorithms, but there is very little demand among the computer guys for this help. We've reached an incredible level of statistical hubris: a belief that somehow we can tease tiny signals out of noisy and biased data without any knowledge of the physical realities on the ground ("bias" used here in its scientific, not its political/cultural, meaning).
The Obama administration really likes to create little brands, from the Obama "O" to the absurd symbol to be affixed to every stimulus-funded project. In this vein, a reader wrote me that the slam-dunk obvious graphic for cap-and-trade was created over 30 years ago:
Any number of folks have acknowledged that, nowadays, the surest road to academic funding is to tie your pet subject in with climate change. If, for example, you and your academic buddies want funding to study tourist resort destinations (good work if you can get it), you will have a better chance if you add climate change into the mix.
John Moore did a bit of work with the Google Scholar search engine to find out how many studies referencing, say, surfing, also referenced climate change. It is a lot. When you click through to the searches, you will find a number of the matches are spurious (i.e., matches to random unrelated links on the same page), but the details of the studies, and how climate change is sometimes force-fit, are actually more illuminating than the summary numbers.
As cities grow, as most have over the last 100 years, temperature measurement points are engulfed by increasingly hotter portions of the heat island. For example, the GISS shows the most global warming in the US centered around Tucson based on this measurement point, which 100 years ago was rural.
Apparently, Jones et al. found recently that a third to a half of the warming reported in the Hadley CRUT3 database in China may be due to urban heat island effects rather than any broader warming trend. This is particularly important since it was a Jones et al. letter to Nature years ago that gave the IPCC cover to say that there was negligible uncorrected urban warming bias in the major surface temperature records.
Interestingly, Jones et al. really has to be treated as a hostile witness on this topic. Their abstract states:
We show that all the land-based data sets for China agree exceptionally well and that their residual warming compared to the SST series since 1951 is relatively small compared to the large-scale warming. Urban-related warming over China is shown to be about 0.1°C decade−1 over the period 1951–2004, with true climatic warming accounting for 0.81°C over this period
By using the words "relatively small," and by using a per-decade number for the bias but an aggregate number for the underlying warming signal, they are doing everything possible to downplay their own finding (see how your eye catches the numbers 0.1 and 0.81 and compares them, even though they are not on a comparable basis; this is never an accident). But the exact same numbers restate as: 0.53C, or 40% of the total measured warming of 1.34C, was due to urban biases rather than any actual global warming signal.
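Putting both figures on the same basis is simple arithmetic. A quick sketch, using only the numbers quoted above (the period 1951-2004 is roughly 5.3 decades):

```python
# Restate the Jones et al. numbers on a common (aggregate) basis,
# using only the figures quoted in the abstract above.
decades = (2004 - 1951) / 10.0         # ~5.3 decades in the study period
urban_bias_per_decade = 0.1            # deg C per decade of urban-related warming
true_warming_total = 0.81              # deg C of "true climatic warming" over the period

urban_bias_total = urban_bias_per_decade * decades      # ~0.53 deg C
measured_total = urban_bias_total + true_warming_total  # ~1.34 deg C measured
bias_share = urban_bias_total / measured_total          # ~0.40, i.e. 40%

print(f"urban bias: {urban_bias_total:.2f}C of {measured_total:.2f}C "
      f"measured = {bias_share:.0%}")
```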
Since when is a 40% bias or error “relatively small?”
So why do they fight their own conclusion so hard? After all, the study still shows a reduced, but existent, historic warming signal. As do satellites, which are unaffected by this type of bias. Even skeptics like myself admit such a signal still exists if one weeds out all the biases.
The reason alarmists, including it seems even the authors themselves, resist this finding is that reduced historic warming makes their catastrophic forecasts of the future even more suspect. Already, their models do not backcast well against history (without substantial heroic tweaking or plugs), consistently overestimating past warming. If actual past warming was even less, it makes their forecasts going forward look even more absurd.
A few minutes looking at the official US temperature measurement stations here will make one a believer that biases likely exist in historic measurements, particularly since the rest of the world is likely much worse.
I have no idea what is driving this, whether it be a crass payback for campaign contributions (as implied in the full article) or a desire to stop those irritating amateur bloggers from trying to replicate “settled science,” but it is, as a reader said who sent it to me, “annoying:”
There are some things science needs to survive, and to thrive: eager, hardworking scientists; a grasp of reality and a desire to understand it; and an open and clear atmosphere to communicate and discuss results.
That last bit there seems to be having a problem. Communication is key to science; without it you are some nerd tinkering in your basement. With it, the world can learn about your work and build on it.
Recently, government-sponsored agencies like NIH have moved toward open access of scientific findings. That is, the results are published where anyone can see them, and in fact (for the NIH) after 12 months the papers must be publicly accessible. This is, in my opinion (and that of a lot of others, including a pile of Nobel laureates) a good thing. Astronomers, for example, almost always post their papers on Astro-ph, a place where journal-accepted papers can be accessed before they are published.
John Conyers (D-MI) apparently has a problem with this. He is pushing a bill through Congress that will literally ban the open access of these papers, forcing scientists to only publish in journals. This may not sound like a big deal, but journals are very expensive. They can cost a fortune: The Astrophysical Journal costs over $2000/year, and they charge scientists to publish in them! So this bill would force scientists to spend money to publish, and force you to spend money to read them.
I continue to be confused how research funded with public monies can be “proprietary,” but interestingly this seems to be a claim pioneered in the climate community, more as a way to escape criticism and scrutiny than to make money (the Real Climate guys have, from time to time, argued for example that certain NASA data and algorithms are proprietary and cannot be released for scrutiny – see comments here, for example.)
I would really like to write a bit more about such articles, but I just don't have the time right now. So I will simply recommend you read this guest post at WUWT on Steig's 2009 Antarctica temperature study. The traditional view has been that the Antarctic Peninsula (about 5% of the continent) has been warming strongly while the rest of the continent has been cooling. Steig got a lot of press by coming up with the result that almost all of Antarctica is warming.
But the article at WUWT argues that Steig gets to this conclusion only by reducing all Antarctic temperature data to just three principal components. This process smears the warming of the peninsula across a broader swath of the continent. If you can get through the post, you will learn a lot about the flaws in this kind of study.
I have sympathy for scientists who are working in a low signal to noise environment. Scientists are trying to tease 50 years of temperature history across a huge continent from only a handful of measurement points that are full of holes in the data. A charitable person would look at this article and say they just went too far, teasing out spurious results rather than real signal out of the data. A more cynical person might argue that this is a study where, at every turn, the authors made every single methodological choice coincidentally in the one possible way that would maximize their reported temperature trend.
By the way, I have seen Steig written up all over, but it is interesting that I never saw this: even using Steig's methodology, the temperature trend since 1980 has been negative. So whatever warming trend they found ended almost 30 years ago. Here is the table from the WUWT article, showing Steig's original results alongside several recalculations of the data using improved methods.
| | 1957 to 2006 trend | 1957 to 1979 trend (pre-AWS) | 1980 to 2006 trend (AWS era) |
| --- | --- | --- | --- |
| Steig 3 PC | +0.14 deg C/decade | +0.17 deg C/decade | -0.06 deg C/decade |
| New 7 PC | +0.11 deg C/decade | +0.25 deg C/decade | -0.20 deg C/decade |
| New 7 PC weighted | +0.09 deg C/decade | +0.22 deg C/decade | -0.20 deg C/decade |
| New 7 PC wgtd imputed cells | +0.08 deg C/decade | +0.22 deg C/decade | -0.21 deg C/decade |
Here, by the way, is an excerpt from Steig’s abstract in Nature:
Here we show that significant warming extends well beyond the Antarctic Peninsula to cover most of West Antarctica, an area of warming much larger than previously reported. West Antarctic warming exceeds 0.1 °C per decade over the past 50 years, and is strongest in winter and spring.
Here is the first thing I was ever taught about regression analysis — never, ever use multi-variable regression analysis to go on a fishing expedition. In other words, never throw in a bunch of random variables and see what turns out to have the strongest historical relationship. Because if you do not understand the relationship between the variables, or why you got the answer you did, the result is very likely spurious.
The purpose of a regression analysis is to confirm and quantify a relationship you have a theoretical basis for believing to exist. For example, I might think that home ownership rates drop as interest rates rise, and vice versa, because interest rate increases effectively raise the cost of a house and therefore should reduce demand. This is a perfectly valid proposition to test. What would not be valid is to throw interest rates, population growth, regulatory levels, skirt lengths, Super Bowl winners, and yogurt prices together into a regression with home ownership and see what pops up as having a correlation. Another red flag: had we run our original regression between home ownership and interest rates and found the opposite result from the one we expected, with home ownership rising alongside interest rates, we would need to be very suspicious of the correlation. If we don't have a good theory to explain it, we should treat the result as spurious, likely the result of both variables correlating with some third variable, or of time lags we have not handled correctly.
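The fishing-expedition problem is easy to demonstrate. A minimal illustration with made-up random data (no real housing or yogurt series involved): correlate a noise "target" against dozens of equally random candidate "predictors," and something will always look like a relationship by pure chance.

```python
# Fishing-expedition demo: with enough unrelated candidate variables,
# the best chance correlation with pure noise still looks impressive.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_vars = 30, 50                   # 30 observations, 50 candidate predictors
target = rng.normal(size=n_obs)          # stand-in for "home ownership": pure noise
candidates = rng.normal(size=(n_vars, n_obs))  # skirt lengths, yogurt prices, ...

# Correlation of each random candidate with the (also random) target
corrs = np.array([np.corrcoef(c, target)[0, 1] for c in candidates])
best = np.abs(corrs).max()
print(f"strongest 'relationship' found by fishing: |r| = {best:.2f}")
```

With only 30 observations and 50 candidates, the winning correlation is typically sizable, despite every variable being noise. That is exactly why a regression result needs a prior physical theory, not just a good fit.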
Makes sense? Well, then, what do we make of this: Michael Mann builds temperature reconstructions from proxies. An example is tree rings. The theory is that warmer temperatures lead to wider tree rings, so one can correlate tree ring growth to temperature. The same is true for a number of other proxies, such as sediment deposits.
In the particular case of the Tiljander sediments, Steve McIntyre observed that Mann had included the data upside down, meaning he had essentially reversed the sign of the proxy data. This would be roughly equivalent to running our interest rate / home ownership regression but plugging in the changes in home ownership with the wrong sign (i.e., decreases shown as increases and vice versa).
You can see that the data was used upside down by comparing Mann’s own graph with the orientation of the original article, as we did last year. In the case of the Tiljander proxies, Tiljander asserted that “a definite sign could be a priori reasoned on physical grounds” – the only problem is that their sign was opposite to the one used by Mann. Mann says that multivariate regression methods don’t care about the orientation of the proxy.
The world is full of statements that are strictly true and totally wrong at the same time. Mann's statement above is such a case. It is strictly true: the regression does not care if you get the sign right; it will still find a correlation. But it is totally insane, because it implies the correlation being found is exactly the opposite of what your physics told you to expect. It's like getting a positive correlation between interest rates and home ownership. Or finding that tree rings got larger when temperatures dropped.
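A toy example with synthetic numbers (not the actual Tiljander data) shows what "the regression doesn't care" really means: flip the sign of a proxy and the fit is exactly as strong, only with the coefficient reversed. The machinery cannot tell you that the physics is now backwards.

```python
# Synthetic demo of the sign issue: flipping a predictor's orientation leaves
# the fit quality identical, but reverses the physical meaning of the result.
import numpy as np

temperature = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
ring_width  = 2.0 + 1.5 * temperature       # assumed physics: warmer -> wider rings
flipped     = -ring_width                   # the same proxy used "upside down"

slope_ok,  _ = np.polyfit(ring_width, temperature, 1)
slope_bad, _ = np.polyfit(flipped,    temperature, 1)

r_ok  = np.corrcoef(ring_width, temperature)[0, 1]   # perfect positive correlation
r_bad = np.corrcoef(flipped,    temperature)[0, 1]   # perfect *negative* correlation

print(f"correct orientation: slope {slope_ok:+.2f}, r {r_ok:+.2f}")
print(f"upside down:         slope {slope_bad:+.2f}, r {r_bad:+.2f}")
```

Both fits are equally "good" numerically; only the physical theory tells you the second one means tree rings shrinking as it warms, which should disqualify the proxy, not be waved away.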
This is a mistake Mann seems to make a lot: he gets buried so far down in the numbers that he forgets they have physical meaning. They describe physical systems, and what they are saying in this case makes no sense. He is using a proxy that behaves exactly opposite to what his physics tells him it should, in fact exactly opposite to the whole theory of why it should be a temperature proxy in the first place. And this does not seem to bother him enough to toss it out.
PS- These flawed Tiljander sediments matter. It has been shown that the Tiljander series have an inordinate influence on Mann's latest proxy results. Remove them, along with a couple of other flawed proxies (and by flawed, I mean ones with manually made-up data), and much of the hockey stick shape he loves so much goes away.
I have written for quite a while that the most important issue in evaluating catastrophic global warming forecasts is feedback. Specifically, is the climate dominated by positive feedbacks, such that small CO2-induced changes in temperatures are multiplied many times, or even hit a tipping point where temperatures run away? Or is the long-term stable system of climate more likely dominated by flat to negative feedback, as are most natural physical systems? My view has always been that the earth will warm at most a degree for a doubling of CO2 over the next century, and may warm less if feedbacks turn out to be negative.
There is little argument in the scientific community that a direct effect of doubling the CO2 concentration will be a small increase of the earth’s temperature — on the order of one degree. Additional increments of CO2 will cause relatively less direct warming because we already have so much CO2 in the atmosphere that it has blocked most of the infrared radiation that it can. It is like putting an additional ski hat on your head when you already have a nice warm one below it, but you are only wearing a windbreaker. To really get warmer, you need to add a warmer jacket. The IPCC thinks that this extra jacket is water vapor and clouds.
Since most of the greenhouse effect for the earth is due to water vapor and clouds, added CO2 must substantially increase water’s contribution to lead to the frightening scenarios that are bandied about. The buzz word here is that there is “positive feedback.” With each passing year, experimental observations further undermine the claim of a large positive feedback from water. In fact, observations suggest that the feedback is close to zero and may even be negative. That is, water vapor and clouds may actually diminish the already small global warming expected from CO2, not amplify it. The evidence here comes from satellite measurements of infrared radiation escaping from the earth into outer space, from measurements of sunlight reflected from clouds, and from measurements of the temperature of the earth’s surface or of the troposphere, the roughly 10 km thick layer of the atmosphere above the earth’s surface that is filled with churning air and clouds, heated from below at the earth’s surface, and cooled at the top by radiation into space.
When the IPCC gets to a forecast of 3-5C of warming over the next century (in which CO2 concentrations are expected to roughly double), it comes in two parts. As Professor Happer relates, only about 1C of this comes directly from the first-order effects of more CO2. This assumption of 1C of warming for a doubling of CO2 is relatively stable across both scientists and time, except that the IPCC actually reduced the number a bit between its 3rd and 4th reports.
They get from 1C to 3C-5C with feedback. Here is how feedback works.
Let's say the world warms 1 degree. Let's also assume that the only feedback is melting ice and albedo, and that for every degree of warming, the lower albedo from melted ice reflecting less sunlight back into space adds another 0.1 degree of warming. But this extra 0.1 degree would in turn melt a bit more ice, resulting in 0.01 degree of third-order warming. So the warming from an initial 1 degree with such 10% feedback would be 1 + 0.1 + 0.01 + 0.001 and so on. This infinite series sums to dT * (1/(1-g)), where dT is the initial first-order temperature change (in this case 1C) and g is the fraction fed back (in this case 10%). So a 10% feedback results in a gain, or multiplier, of the initial temperature effect of about 1.11 (more here).
So how do we get a multiplier of 3-5 in order to back into the IPCC forecasts? Well, using our feedback formula backwards and solving for g, we get feedback percents of 67% for a 3 multiplier and 80% for a 5 multiplier. These are VERY high feedbacks for any natural physical system short of nuclear fission, and this issue is the main (but by no means only) reason many of us are skeptical of catastrophic forecasts.
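The two calculations above, the geometric-series gain and its inverse, fit in a few lines. A sketch using only the post's own numbers:

```python
# Simple feedback arithmetic from the discussion above:
# summing 1 + g + g^2 + ... gives a total gain of 1/(1-g),
# and inverting that recovers the feedback fraction implied by a given multiplier.

def gain(g):
    """Total temperature multiplier for feedback fraction g (0 <= g < 1)."""
    return 1.0 / (1.0 - g)

def implied_feedback(multiplier):
    """Feedback fraction implicit in a given overall gain."""
    return 1.0 - 1.0 / multiplier

print(f"10% feedback -> gain of {gain(0.10):.2f}")            # the 1.11 example
print(f"3x multiplier -> feedback of {implied_feedback(3.0):.0%}")  # low-end IPCC
print(f"5x multiplier -> feedback of {implied_feedback(5.0):.0%}")  # high-end IPCC
```

Running the formula backwards is what produces the 67% and 80% figures: very high feedback fractions to be assuming for a long-term stable natural system.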
[By the way, to answer past criticisms, I know that the models do not use this simplistic feedback methodology in their algorithms. But no matter how complex the details are modeled, the bottom line is that somewhere in the assumptions underlying these models, a feedback percent of 67-80% is implicit]
For those paying attention, there is no reason that feedback should apply in the future but not in the past. Since pre-industrial times, we are thought to have increased atmospheric CO2 by 43%. So we should already have seen 43% of the temperature rise from a doubling, or 43% of 3-5C, which is 1.3C-2.2C. In fact, this underestimates what we should have seen historically, since it is just a linear interpolation: the CO2-temperature relationship is logarithmic, a diminishing return, meaning the earlier increases should have produced faster warming than the later ones. Nevertheless, despite heroic attempts to posit some offsetting cooling effect that is masking this warming, few people believe we have seen any such historic warming; the measured warming is more like 0.6C. And some even of that is likely due to solar activity peaking in the late 20th century, rather than to CO2 alone.
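The linear-versus-logarithmic point is worth making concrete. A rough check, assuming the standard logarithmic CO2-temperature relation (warming to date = sensitivity × log2 of the CO2 ratio) and the post's own numbers:

```python
# How much warming should we already have seen, given a 43% CO2 rise
# and an assumed sensitivity of 3-5C per doubling?
import math

co2_increase = 0.43                 # fractional CO2 rise since pre-industrial
# Under a logarithmic relation, a 43% rise is log2(1.43) of a doubling:
fraction_of_doubling = math.log(1 + co2_increase) / math.log(2)   # ~0.52

for sensitivity in (3.0, 5.0):      # C of warming per doubling
    linear = sensitivity * co2_increase               # the simple interpolation
    logarithmic = sensitivity * fraction_of_doubling  # the diminishing-returns form
    print(f"{sensitivity}C/doubling: linear {linear:.1f}C, "
          f"logarithmic {logarithmic:.1f}C expected already")
```

The logarithmic form gives an even larger expected historic warming than the 1.3C-2.2C linear range, widening the gap with the roughly 0.6C actually measured.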
I have a video discussing these topics in more depth:
This is the bait and switch of climate alarmism. When pushed into the corner, they quickly yell “this is all settled science,” when in fact the only part that is fairly well agreed upon is the 1C of first order warming from a doubling. The majority of the warming, the amount that converts the forecast from nuisance to catastrophe, comes from feedback which is very poorly understood and not at all subject to any sort of consensus.