Surface Temperature Measurement Bias

Frequent readers will know that I have argued for a while that substantial biases exist in surface temperature records.  For example, I participated in a number of measurement site photo surveys, and snapped this picture of the measurement station in Tucson that has gotten so much attention:

[Photo: the Tucson measurement station, sited in an asphalt parking lot]

Global warming catastrophists do not want to admit this bias, because it would undermine their headline-grabbing forecasts.  In particular, they have spent the last year or two bragging that their climate models must be right because they do such a good job of predicting history.  So what becomes of this argument if it is demonstrated that the "history" to which their models correlate so well is wrong?  (In fact, their models correlate with history only because they are fudged and plugged to do so, as described here.)

Ross McKitrick, a Canadian economist, performs a fairly simple and compelling test on recent surface temperature records.  The chief suspected source of bias is from urbanization.  The weather station above has existed in Tucson in one form or another for 100 years.  When it was first in place, it sat in a rural setting near a small town characterized by horses and dirt roads.  Now it sits in an asphalt parking lot near cars and buildings, a block away from a power station, in the center of a town of a half million people.

McKitrick looked at the statistical correlation between economic growth and local temperature records.  What he found was that where there was growth, there was warming;  where there was less growth, there was less warming.  He has demonstrated that the surface temperature warming signal correlates strongly with urbanization and growth:

Our new paper presents a new, larger data set with a more complete set of socioeconomic indicators. We showed that the spatial pattern of warming trends is so tightly correlated with indicators of economic activity that the probability they are unrelated is less than one in 14 trillion. We applied a string of statistical tests to show that the correlation is not a fluke or the result of biased or inconsistent statistical modelling. We showed that the contamination patterns are largest in regions experiencing real economic growth. And we showed that the contamination patterns account for about half the surface warming measured over land since 1980.

The half figure is an interesting one.  For years, it has been known that satellite temperature records, which cover the whole surface of the earth, both land and sea, have shown only about half the warming of the surface temperature records.  McKitrick's work seems to show that the difference may well be urban contamination of the surface data.
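
For readers who want to see the mechanics of this kind of test, here is a minimal sketch in Python.  The data are synthetic and the variable names are my own invention; McKitrick's actual analysis uses real station records and a much richer set of socioeconomic covariates.

```python
# Toy version of a McKitrick-style test: regress local warming trends on a
# local economic-growth indicator and check the correlation.
# All data below are synthetic, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_stations = 200

# Hypothetical per-station GDP growth (%/yr) and warming trends (C/decade).
# We build in a relationship so the mechanics of the test are visible.
gdp_growth = rng.uniform(0.0, 6.0, n_stations)
warming_trend = 0.05 + 0.03 * gdp_growth + rng.normal(0, 0.05, n_stations)

slope, intercept, r, p_value, stderr = stats.linregress(gdp_growth, warming_trend)
print(f"slope = {slope:.3f} C/decade per % growth, r = {r:.2f}, p = {p_value:.2e}")
# A tiny p-value says the trend-growth association is unlikely to be chance;
# the "one in 14 trillion" figure quoted above is that kind of probability.
```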

So how has the IPCC reacted to his work?  For years, the IPCC ignored his work and his comments on their reports.  Finally, in the last IPCC report they responded:

McKitrick and Michaels (2004) and [Dutch meteorologists] de Laat and Maurellis (2006) attempted to demonstrate that geographical patterns of warming trends over land are strongly correlated with geographical patterns of industrial and socioeconomic development, implying that urbanization and related land surface changes have caused much of the observed warming. However, the locations of greatest socioeconomic development are also those that have been most warmed by atmospheric circulation changes (Sections 3.2.2.7 and 3.6.4), which exhibit large-scale coherence. Hence, the correlation of warming with industrial and socioeconomic development ceases to be statistically significant. In addition, observed warming has been, and transient greenhouse-induced warming is expected to be, greater over land than over the oceans (Chapter 10), owing to the smaller thermal capacity of the land.

So the IPCC argues that yes, areas of high industrial and socioeconomic development do show more warming, but that is not because of urban biases on measurement but because of "atmospheric circulation changes" that happen to warm these same urban areas.  Now, this is suspicious, since Occam's Razor would tell us to prefer the most obvious explanation, that urbanization puts an upward bias on temperature readings, over natural circulation patterns that just happen to coincide with urban areas.

But it is more than suspicious.  It is a complete fabrication.  The report, particularly at the cited sections, says nothing about these circulation patterns, either that they coincide with areas of economic growth or that they tend to preferentially warm those areas.  And does this answer really make any sense anyway?  A recent study in California showed warming in the cities, but not in the rural areas.  Does the IPCC really want to argue that wind patterns are warming just LA and San Francisco but not areas just 100 miles away?

A Brief Window into How the IPCC Does Science

I thought I had blogged on this topic of sea level measurement previously, but after reading this from Q&O and looking back, I see that I never posted anything.

As a brief background:

Dr. Nils-Axel Mörner is the head of the Paleogeophysics and Geodynamics department at Stockholm University in Sweden. He is past president (1999-2003) of the INQUA Commission on Sea Level Changes and Coastal Evolution, and leader of the Maldives Sea Level Project. Dr. Mörner has been studying the sea level and its effects on coastal areas for some 35 years. He was interviewed by Gregory Murphy on June 6 for EIR.

Climate scientists are notoriously touchy about non-climate folks "meddling" in their profession, but they have no such qualms when they themselves venture off into statistics or geology or even astrophysics without much knowledge of what they are doing.  This story, as told by Dr. Mörner, is telling:

Another way of looking at what is going on is the tide gauge. Tide gauging is very complicated, because it gives different answers for wherever you are in the world. But we have to rely on geology when we interpret it. So, for example, those people in the IPCC [Intergovernmental Panel on Climate Change], choose Hong Kong, which has six tide gauges, and they choose the record of one, which gives 2.3 mm per year rise of sea level. Every geologist knows that that is a subsiding area. It’s the compaction of sediment; it is the only record which you shouldn’t use. And if that figure is correct, then Holland would not be subsiding, it would be uplifting.

And that is just ridiculous. Not even ignorance could be responsible for a thing like that. So tide gauges, you have to treat very, very carefully. Now, back to satellite altimetry, which shows the water, not just the coasts, but in the whole of the ocean. And you measure it by satellite. From 1992 to 2002, [the graph of the sea level] was a straight line, variability along a straight line, but absolutely no trend whatsoever. We could see those spikes: a very rapid rise, but then in half a year, they fall back again. But absolutely no trend, and to have a sea-level rise, you need a trend.

Then, in 2003, the same data set, which in their [IPCC’s] publications, in their website, was a straight line—suddenly it changed, and showed a very strong line of uplift, 2.3 mm per year, the same as from the tide gauge. And that didn’t look so nice. It looked as though they had recorded something; but they hadn’t recorded anything. It was the original one which they had suddenly twisted up, because they entered a “correction factor,” which they took from the tide gauge. So it was not a measured thing, but a figure introduced from outside. I accused them of this at the Academy of Sciences in Moscow—I said you have introduced factors from outside; it’s not a measurement. It looks like it is measured from the satellite, but you don’t say what really happened. And they answered, that we had to do it, because otherwise we would not have gotten any trend!

That is terrible! As a matter of fact, it is a falsification of the data set. Why? Because they know the answer. And there you come to the point: They “know” the answer; the rest of us, we are searching for the answer. Because we are field geologists; they are computer scientists. So all this talk that sea level is rising, this stems from the computer modeling, not from observations. The observations don’t find it!
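
Mörner's complaint is easy to illustrate numerically.  Below is a toy demonstration, with entirely made-up numbers, of how a flat series acquires a 2.3 mm/year trend once an externally derived "correction factor" is added:

```python
# Toy illustration of Morner's complaint: a flat satellite series shows no
# trend until an external "correction" is added. All numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1992, 2003)                 # 1992-2002, as in the quote
sea_level = rng.normal(0.0, 5.0, years.size)  # flat: noise around 0 mm

raw_trend = np.polyfit(years, sea_level, 1)[0]
print(f"raw trend: {raw_trend:+.2f} mm/yr")   # statistically indistinguishable from zero

# Apply a tide-gauge-derived "correction" of 2.3 mm/yr, as Morner describes
corrected = sea_level + 2.3 * (years - years[0])
corrected_trend = np.polyfit(years, corrected, 1)[0]
print(f"corrected trend: {corrected_trend:+.2f} mm/yr")  # raw trend + 2.3, imported from outside
```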

Observer Technology Bias in Hurricane Counts

A while back, I demonstrated that the apparent increase in tornadoes in the US is entirely attributable to Doppler radar and more storm observation points rather than to any actual increase in tornadoes.  When one corrects for this measurement change, say by limiting the count to very large tornadoes that were unlikely to escape detection even with older technology, the tornado count has actually gone down.
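
Here is a minimal sketch of that correction, using a synthetic storm record of my own invention: count all events, then count only events strong enough that older technology would likely have caught them.

```python
# Sketch of the correction described above: compare raw event counts with
# counts restricted to events detectable under old technology.
# The storm records here are synthetic, for illustration only.
from collections import Counter

# (year, intensity_category, detected_only_by_modern_tech)
storms = [
    (1955, 4, False), (1955, 1, False), (1985, 2, True), (1985, 4, False),
    (2005, 1, True), (2005, 2, True), (2005, 4, False), (2005, 3, True),
]

raw = Counter(year for year, cat, modern_only in storms)
# Keep only events unlikely to have escaped detection with older technology:
# here, major storms (category >= 3) whose detection did not depend on
# modern instruments.
comparable = Counter(year for year, cat, modern_only in storms
                     if cat >= 3 and not modern_only)

print("raw counts:       ", dict(raw))         # apparent increase over time
print("comparable counts:", dict(comparable))  # flat once the bias is removed
```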

Steve McIntyre points out that the same effect exists for hurricanes.  In the early 1900s, whole storms could easily be missed if no ship crossed paths with the storm and the storm never made landfall.  Better technology (e.g. satellites) biases current hurricane numbers upwards, but by how much?  In his post, he counts the named Atlantic storms in just the last 20 years that would likely have escaped detection fifty years ago.  How many were there?

Frankly I was surprised. There are 52 storms on the list. That’s 52 out of the 252 storms in the official record, or 20% of the total. That’s 20% of the modern storms which lack a single classical (ship or shore) report of storm winds. Wow.

The obvious question is: how can one compare these satellite- and aircraft-based storms, which left no ship or shore evidence, with pre-1945 records which were based solely on ship and shore observations?

The result is a significant bias.  Below, he has removed only these 52 storms from the last 20 years.  Storms missed after WWII but before 1980 would also have to be removed.  One can observe that nearly all of the increase in storms in the last half century seems to be due to this measurement bias, and not to, say, global warming:

[Chart: annual Atlantic storm counts with the 52 satellite-era-only storms removed]

It's the Cities, Stupid

New study conducted in California (emphasis added):

We investigated air temperature patterns in California from 1950 to 2000. Statistical analyses were used to test the significance of temperature trends in California subregions in an attempt to clarify the spatial and temporal patterns of the occurrence and intensities of warming. Most regions showed a stronger increase in minimum temperatures than with mean and maximum temperatures. Areas of intensive urbanization showed the largest positive trends, while rural, non-agricultural regions showed the least warming. Strong correlations between temperatures and Pacific sea surface temperatures (SSTs) particularly Pacific Decadal Oscillation (PDO) values, also account for temperature variability throughout the state. The analysis of 331 state weather stations associated a number of factors with temperature trends, including urbanization, population, Pacific oceanic conditions and elevation. Using climatic division mean temperature trends, the state had an average warming of 0.99°C (1.79°F) over the 1950–2000 period, or 0.20°C (0.36°F) per decade.

Southern California had the highest rates of warming, while the NE Interior Basins division experienced cooling. Large urban sites showed rates over twice those for the state, for the mean maximum temperatures, and over 5 times the state’s mean rate for the minimum temperatures. In comparison, irrigated cropland sites warmed about 0.13°C [per decade] annually, but near 0.40°C for summer and fall minima. Offshore Pacific SSTs warmed 0.09°C per decade for the study period.

So, warming has occurred mainly in the urban areas, while the least developed regions have cooled.  An increase in minimum temperatures rather than daily maximums could be a result of CO2, but is more likely a signature of urban heat islands.  In particular, look at Anthony's map in the linked article.  Notice the red dots for hotter areas and the blue dots for cooler areas.  The red dots are all on... cities.  The blue dots are all in the countryside.  You make the call: urban heat or greenhouse effect.

Climate Models Match History Because They are Fudged

When catastrophist climate models were first run against history, they did not even come close to matching.  Over the last several years, after a lot of time under the hood, climate models have been tweaked and forced to match historic warming observations pretty closely.  A prominent catastrophist and climate modeller finally asks the logical question:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy. Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

One wonders how it took so long for supposedly trained climate scientists right in the middle of the modelling action to ask an obvious question that skeptics have been asking for years (though this particular fellow will probably have his climate decoder ring confiscated for bringing it up).  The answer seems to be that rather than using observational data, modellers simply make man-made forcing a plug figure, meaning that they set the man-made historic forcing number to whatever value it takes to make the output match history.
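
The arithmetic behind this suspicion is simple.  Using a toy equilibrium energy-balance relation (dT = S * F / F2x, ignoring ocean lag, with rough numbers of my own choosing), one can see that models with very different sensitivities can only match the same history by assuming very different historical forcings:

```python
# Toy energy-balance arithmetic for the "plug" argument: if equilibrium
# warming is roughly dT = S * F / F2x (S = sensitivity per CO2 doubling,
# F2x ~ 3.7 W/m^2), matching the same observed dT with different
# sensitivities forces you to assume different historical forcings.
# Ocean lag is ignored; all numbers are rough, for illustration.
F2X = 3.7       # W/m^2 of forcing for a doubling of CO2
DT_OBS = 0.6    # observed 20th-century warming, C

for sensitivity in (1.5, 2.5, 3.5, 4.5):   # C per doubling
    required_forcing = DT_OBS * F2X / sensitivity
    print(f"S = {sensitivity} C -> net historical forcing must be "
          f"{required_forcing:.2f} W/m^2 to match history")
# High-sensitivity models need a much smaller net forcing (i.e., a larger
# offsetting negative term such as aerosols) to reproduce the same history.
```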

Gee, who would have guessed?  Well, actually, I did, though I guessed the wrong plug figure.  I did, however, guess that one of the key numbers was a plug to make all the models match history so well:

I am willing to make a bet based on my long, long history of modeling (computers, not fashion).  My guess is that the blue band, representing climate without man-made effects, was not based on any real science but was instead a plug.  In other words, they took their models and actual temperatures and then said "what would the climate without man have to look like for our models to be correct."  There are at least four reasons I strongly suspect this to be true:

  1. Every computer modeler in history has tried this trick to make their models of the future seem more credible.  I don’t think the climate guys are immune.
  2. There is no way their models, with our current state of knowledge about the climate, match reality that well. 
  3. The first time they ran their models vs. history, they did not match at all.  This current close match is the result of a bunch of tweaking that has little impact on the model’s predictive ability but forces it to match history better.  For example, early runs had the forecast run right up from the 1940 peak to temperatures way above what we see today.
  4. The blue line totally ignores any of our other understandings about the changing climate, including the changing intensity of the sun.  It is conveniently exactly what is necessary to make the pink line match history.  In fact, against all evidence, note the blue band falls over the century.  This is because the models were pushing the temperature up faster than we have seen it rise historically, so the modelers needed a negative plug to make the numbers look nice.

Here is one other reason I know the models to be wrong:  The climate sensitivities quoted above of 1.5 to 4.5 degrees C are unsupportable by history.  In fact, this analysis shows pretty clearly that 1.2 is about the most one can derive for sensitivity from our past 120 years of experience, and even that makes the unreasonable assumption that all warming for the past century was due to CO2.
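
The back-of-the-envelope version of that calculation, with assumed round-number concentrations and the same generous attribution assumption, looks roughly like this:

```python
# Back-of-the-envelope sensitivity from history, assuming (generously) that
# all observed warming is from CO2 and that forcing is logarithmic in
# concentration. Concentrations are rough round numbers, for illustration.
import math

dt_obs = 0.6           # C of warming over the period
c0, c1 = 280.0, 380.0  # ppm CO2, preindustrial vs. recent (assumed)

doublings = math.log2(c1 / c0)   # fraction of a CO2 doubling so far
sensitivity = dt_obs / doublings
print(f"{doublings:.2f} doublings -> implied sensitivity ~ {sensitivity:.1f} C/doubling")
# ~1.4 C/doubling on these inputs, in the same neighborhood as the 1.2
# figure above and well below the 3+ C used in catastrophic forecasts.
```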

Cooler, but with a Worse Environment

As a follow-up to my post on the problems with a cooler but poorer world, let’s look at a likely scenario of a cooler world with a worse environment.

Al Gore is a huge supporter of biofuels, and particularly corn-based ethanol, as a "solution" to global warming.  In fact, Al Gore claims that in addition to inventing the Internet, he "saved" corn-based ethanol (from a pro-ethanol site):

Vice-President Al Gore
Third Annual Farm Journal Conference, December 1, 1998
http://clinton3.nara.gov/WH/EOP/OVP/speeches/farmj.html

"I was also proud to stand up for the ethanol tax exemption when it was under attack in the Congress — at one point, supplying a tie-breaking vote in the Senate to save it. The more we can make this home-grown fuel a successful, widely-used product, the better-off our farmers and our environment will be."

It is good to know that when the economic and environmental toll from our disastrous subsidization of corn ethanol is finally tallied, we will know where to send the bill.  HT: Tom Nelson

And in fact, Al Gore’s ethanol support is putting him in opposition to… leading environmentalists.

Environmentalists are warning against expanding the production of biofuels, noting the proposed solution to global warming is actually causing more harm than it is designed to alleviate. Experts report biodiesel production, in particular, is causing the destruction of virgin rainforests and their rich biodiversity, as well as a sharp rise in greenhouse gas emissions.

Opponents of biofuels read like a Who’s Who of environmental activist groups. The Worldwatch Institute, World Conservation Union, and the global charity Oxfam warn that by directing food staples to the production of transport fuels, biofuels policy is leading to the starvation and further impoverishment of the world’s poor.

On November 15, Greenpeace’s Rainbow Warrior unfurled a large banner reading "Palm Oil Kills Forests and Climate" and blockaded a tanker attempting to leave Indonesia with a cargo full of palm oil. Greenpeace, which warns of an imminent "climate bomb" due to the destruction of rich forests and peat bogs that currently serve as a massive carbon sink, reports groups such as the World Wildlife Fund, Conservation International, and Flora and Fauna International have joined them in calling for an end to the conversion of forests to croplands for the production of biofuels.

"The rush to address speculative global warming concerns is once again proving the law of unintended consequences," said James M. Taylor, senior fellow for environment policy at The Heartland Institute. "Biofuels mandates and subsidies are causing the destruction of forests and the development of previously pristine lands in a counterproductive attempt to improve the environment.

"Some of the world’s most effective carbon sinks are being destroyed and long-stored carbon is now being released into the atmosphere in massive quantities, merely to make wealthy Westerners feel like they are ‘doing something’ to address global warming. The reality is, they are making things worse," Taylor noted.

Why Cooler but Poorer is the Wrong Choice

A lot of folks are sitting around in Bali this week trying to figure out how they can sell the rest of us on a cooler but poorer world.  "Cooler but poorer" is the name I and others have put on a world that may be a few tenths of a degree cooler from less CO2, but that will certainly be trillions of dollars poorer through expensive government mandates and restrictions on economic growth.

The fact is that small changes in economic growth rates have a much, much greater effect on human well-being than small changes in temperatures (HT to Tom Nelson, who is trying to make himself the Glenn Reynolds of global warming skepticism):

Their report suggests that a central plank in the global warming argument – that it will result in a big increase in deaths from weather-related disasters – is undermined by the facts. It shows deaths in such disasters peaked in the 1920s and have been declining ever since.

Average annual deaths from weather-related events in the period 1990-2006 – considered by scientists to be when global warming has been most intense – were down by 87% on the 1900-89 average. The mortality rate from catastrophes, measured in deaths per million people, dropped by 93%.

The report by the Civil Society Coalition on Climate Change, a grouping of 41 mainly free-market bodies, comes on the eve of an international meeting on climate change in Bali.

Indur Goklany, a US-based expert on weather-related catastrophes, charted global deaths through the 20th century from “extreme” weather events.

Compared with the peak rate of deaths from weather-related events in the 1920s of nearly 500,000 a year, the death toll during the period 2000-06 averaged 19,900. “The United Nations has got the issues and their relative importance backward,” Goklany said.

The number of deaths had fallen sharply because of better warning systems, improved flood defences and other measures. Poor countries remained most vulnerable.
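
One note on the two percentages above: deaths fell, and deaths per million fell even faster, because world population grew over the same period.  Here is a toy check with round-number populations of my own choosing (the report's actual baselines are 1900-89 averages, so its percentages differ):

```python
# Deaths vs. death rates: the rate falls faster because population grows.
# All numbers are rough, for illustration; the report uses 1900-89 baselines.
deaths_peak, pop_peak = 500_000, 2.0e9  # ~1920s peak, assumed world population
deaths_now, pop_now = 19_900, 6.3e9     # 2000-06 average, assumed population

print(f"deaths down {1 - deaths_now / deaths_peak:.0%}")          # ~96%
rate_peak = deaths_peak / (pop_peak / 1e6)  # deaths per million people
rate_now = deaths_now / (pop_now / 1e6)
print(f"deaths per million down {1 - rate_now / rate_peak:.0%}")  # ~99%
```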

Why Historic Proxy Studies Matter

Over the last several years, there has been quite a bit of debate in climate circles over historical temperature reconstructions from various "proxies" like ice cores and tree ring widths.  The debate really heated up a few years back when Michael Mann introduced, and the climate catastrophists at the UN IPCC adopted, the hockey stick chart.  Until that time, both scientists and historians agreed that there was good evidence for a period in the Middle Ages with temperatures as warm or warmer than today (thus the name "Greenland" and not "Glacierland") and a period known as the Little Ice Age in the 17th to 19th centuries that was quite frosty.  Mann attempted to refute this view, arguing, using data mainly from bristlecone pine tree rings, that the temperature history over the last 1000 years was in fact quite stable, at least until man started producing CO2.  (I was not writing on climate at the time, but I always wondered if any editor availed himself of the "Mann blames Man" headline.)

But why do these temperature reconstructions matter?  Aren’t we more concerned with the temperature in 2050 than in 1050?  Yes and no.  To really do any kind of job at predicting future temperatures, we need more than egghead computer models tweaked in some scientist’s office.  What we really need are good empirical studies about the sensitivity of temperature to different variables.

We can see the importance of historical proxies in the recent study by Scafetta and West (pdf), which looked at historical correlations between solar activity and temperatures.  The authors performed their analysis multiple times, both using "flat" historical reconstructions like Mann’s and using other reconstructions (e.g. Moberg) which show more historical variability.  The authors concluded (emphasis added):

Climate is relatively insensitive to solar changes if a temperature reconstruction showing little preindustrial variability is adopted. In this scenario most of the global warming since 1900 has to be interpreted as anthropogenically induced. On the other hand, if a secular temperature showing large preindustrial variability is adopted, such as MOBERG05, the climate is found to be very sensitive to solar changes and a significant fraction of the global warming that occurred during last century should be solar induced. If ACRIM satellite composite is adopted the Sun might have further contributed to the recent global warming.

Some thoughts:

  • So, which results should we rely on?  The ones using Mann’s data or the ones using Moberg’s?  Well, even the catastrophists at the IPCC have abandoned Mann in favor of Moberg, so one should assume the conclusions in bold are very much in play.
  • Either way, don’t panic!  Even if all the 0.6C warming in the last century was due to CO2, simple math says that we should not expect more than about 1 degree more warming over the next century (calculation here).  If the sun caused half of that 0.6C, then you can cut future warming forecasts in half.
  • Mann’s work is full of errors, both statistical and otherwise.  Beginning with McIntyre and McKitrick, and proceeding to many major scientists, his work has been discredited.  He keeps trying to save the thin branch (probably from a bristlecone pine!) he has crawled out on, but he refuses to fix even basic scribal errors pointed out in his first study.  I discuss more of the problems with Mann and other similar proxy studies, including the divergence problem, here.
  • Both CO2 Science and Climate Audit have more on historical proxy studies and their problems than you can ever digest.
  • Though it doesn’t make the front pages, there are still good common-sense peer-reviewed studies that show the Medieval Warm Period and Little Ice Age that we would expect from narrative historical records.  One such is Loehle, via Climate Audit (temperature anomaly over the last 2000 years or so, via proxies):

[Chart: Loehle 2000-year proxy temperature reconstruction, via Climate Audit]

  • Steven Milloy, via Tom Nelson, has much more on the sun as the primary driver of climate.
  • You can view the section of my global warming film on historical proxies below.  The proxy part starts around the three-minute mark (or 5:30 from the end, if it is displayed that way).

[Video: historical proxies segment of my global warming film]

Don’t Panic!

Albert Einstein’s dream is now a reality.  We have a new unified field theory:  Global Warming causes everything bad.   Via Tom Nelson and American Thinker, comes this list by Dr. John Brignell of links to articles in the media attributing various bad things to Global Warming.  Currently, his list has over 600 items!  Some excerpts:

Agricultural land increase, Africa devastated, African aid threatened, Africa hit hardest, air pressure changes, Alaska reshaped, allergies increase, Alps melting, Amazon a desert, American dream end, amphibians breeding earlier (or not), ancient forests dramatically changed, animals head for the hills, Antarctic grass flourishes, anxiety, algal blooms, archaeological sites threatened, Arctic bogs melt, Arctic in bloom, Arctic lakes disappear, asthma, Atlantic less salty, Atlantic more salty

itchier poison ivy, jellyfish explosion, Kew Gardens taxed, kitten boom, krill decline, lake and stream productivity decline, lake shrinking and growing, landslides, landslides of ice at 140 mph, lawsuits increase, lawsuit successful, lawyers’ income increased (surprise surprise!), lightning related insurance claims, little response in the atmosphere, lush growth in rain forests, Lyme disease, Malaria, malnutrition, mammoth dung melt, Maple syrup shortage

wheat yields crushed in Australia, white Christmas dream ends, wildfires, wind shift, wind reduced, wine – harm to Australian industry, wine industry damage (California), wine industry disaster (US), wine – more English, wine – German boon, wine – no more French, winters in Britain colder, wolves eat more moose, wolves eat less, workers laid off, World bankruptcy, World in crisis, World in flames, Yellow fever.

All I can say is:

[Image: "Don't Panic"]

Cross-posted at Coyote Blog

Urban vs. Rural Warming

CO2 Science links to this study.  Climate catastrophists bend over backwards to try to argue that there is no such thing as an urban heat island.  But of course, whenever anyone gathers actual data rather than using goofy computer model approaches, the answer is always the same:

To assess the validity of this assumption, LaDochy et al. "use temperature trends in California climate records over the last 50 years [1950-2000] to measure the extent of warming in the various sub-regions of the state." Then, "by looking at human-induced changes to the landscape, [they] attempt to evaluate the importance of these changes with regard to temperature trends, and determine their significance in comparison to those caused by changes in atmospheric composition," such as atmospheric CO2 concentration….

The three researchers found that "most regions showed a stronger increase in minimum temperatures than with mean and maximum temperatures," and that "areas of intensive urbanization showed the largest positive trends, while rural, non-agricultural regions showed the least warming." In fact, they report that the Northeast Interior Basins of the state actually experienced cooling. Large urban sites, on the other hand, exhibited rates of warming "over twice those for the state, for the mean maximum temperatures, and over five times the state’s mean rate for the minimum temperature."

I would have thought the following conclusion would have been a blinding glimpse of the obvious, but I guess it still needs to be said over and over:

LaDochy et al. write that "if we assume that global warming affects all regions of the state, then the small increases seen in rural stations can be an estimate of this general warming pattern over land," which implies that "larger increases," such as those found in areas of intensive urbanization, "must then be due to local or regional surface changes."

More on Feedback

(cross-posted from Coyote Blog)

Kevin Drum links to a blog called Three-Toed Sloth in a post about why our climate future may be even worse than the absurdly cataclysmic forecasts we are getting today in the media.  Three-Toed Sloth advertises itself as "Slow Takes from the Canopy of the Reality-Based Community."  His post is an absolutely fabulous example of how one can write an article in which nearly every line is literally true, yet the conclusion is still dead wrong, because one tiny assumption at the beginning of the analysis was incorrect.  (In this case, "incorrect" may be generous, since the author seems well-versed in the analysis of chaotic systems.  A better word might be "purposely fudged to make a political point.")

He begins with this:

The climate system contains a lot of feedback loops.  This means that the ultimate response to any perturbation or forcing (say, pumping 20 million years of accumulated fossil fuels into the air) depends not just on the initial reaction, but also how much of that gets fed back into the system, which leads to more change, and so on.  Suppose, just for the sake of things being tractable, that the feedback is linear, and the fraction fed back is f.  Then the total impact of a perturbation J is

J + Jf + Jf² + Jf³ + …

The infinite series of tail-biting feedback terms is in fact a geometric series, and so can be summed up if f is less than 1:

J/(1-f)

So far, so good.  The math here is entirely correct.  He goes on to make this point, arguing that if we are uncertain about f, in other words, if there is a distribution of possible f's, then the range of the total system gain 1/(1-f) is likely higher than our intuition might first tell us:

If we knew the value of the feedback f, we could predict the response to perturbations just by multiplying them by 1/(1-f) — call this G for "gain".  What happens, Roe and Baker ask, if we do not know the feedback exactly?  Suppose, for example, that our measurements are corrupted by noise — or even, with something like the climate, that f is itself stochastically fluctuating.  The distribution of values for f might be symmetric and reasonably well-peaked around a typical value, but what about the distribution for G?  Well, it’s nothing of the kind.  Increasing f just a little increases G by a lot, so starting with a symmetric, not-too-spread distribution of f gives us a skewed distribution for G with a heavy right tail.

Again all true, with one small unstated proviso I will come back to.  He concludes:

In short: the fact that we will probably never be able to precisely predict the response of the climate system to large forcings is so far from being a reason for complacency it’s not even funny.

Actually, I can think of two unstated facts that undermine this analysis.  The first is that most catastrophic climate forecasts you see utilize gains in the 3x-5x range, or sometimes higher (but seldom lower).  This implies they are using an f of between .67 and .80.  These are already very high numbers for any natural process.  If catastrophist climate scientists are already assuming numbers at the high end of the range, then the point about uncertainties skewing the gain disproportionately higher is moot.  In fact, we might tend to draw the reverse conclusion: the saw cuts both ways, and small overstatements of f, when the forecasts are already skewed to the high side, will lead to very large overstatements of gain.
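
Both points are easy to check numerically.  The sketch below, with illustrative numbers of my own choosing, converts the common 3x-5x gains back into implied f values and then reproduces the Roe-Baker skew around a high assumed center for f:

```python
# Implied feedback fraction behind common catastrophic gains, plus the
# Roe-Baker skew: a symmetric spread in f gives a right-skewed spread in G.
# Purely illustrative numbers.
import numpy as np

for gain in (3.0, 4.0, 5.0):
    print(f"gain {gain}x -> implied f = {1 - 1 / gain:.2f}")  # 0.67, 0.75, 0.80

rng = np.random.default_rng(2)
f = rng.normal(0.65, 0.1, 100_000)  # symmetric, well-peaked around 0.65
f = f[f < 0.95]                     # drop values where the series diverges
G = 1 / (1 - f)
print(f"median G = {np.median(G):.1f}, mean G = {G.mean():.1f}")
# mean > median: the right tail of G is heavy, exactly the effect described
```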

But here is the real elephant in the room: for the vast, vast majority of natural processes, f is less than zero.  The author has blithely accepted the currently unproven assumption that the net feedback in the climate system is positive.  He never even hints at the possibility that f might be a negative feedback rather than a positive one, despite the fact that almost all natural processes are dominated by negative rather than positive feedback.  Assuming without evidence that a random natural process one encounters is dominated by positive feedback is roughly equivalent to assuming the random person you just met on the street is a billionaire.  It is not totally out of the question, but it is very, very unlikely.

When one plugs a negative f into the equation above, say -0.3, the gain actually becomes less than one, in this case about 0.77.  In a negative feedback regime, the system response is less than the initial perturbation, because forces exist in the system to damp the initial input.
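
A quick check of the gain across the plausible range of f, negative values included (a minimal sketch; the -0.3 case is the one cited above):

```python
# Gain across the range of possible feedbacks, including the negative values
# that dominate most natural systems. A minimal check of the numbers above.
for f in (-0.5, -0.3, 0.0, 0.3, 0.67, 0.8):
    gain = 1 / (1 - f)
    print(f"f = {f:+.2f} -> gain = {gain:.2f}")
# f = -0.30 gives gain ~0.77: the system damps the initial perturbation,
# while f near 0.8 amplifies it five-fold.
```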

The author is trying to argue that uncertainty about the degree of feedback in the climate system, and therefore the sensitivity of the system to CO2 changes, does not change the likelihood of the coming "catastrophe."  Except that he fails to mention that we are so uncertain about the feedback that we don't even know its sign.  Feedback, or f, could be positive or negative as far as we know.  Values could range anywhere from -1 to 1.  We don't have good evidence as to where the exact number lies, except to observe from the relative stability of past temperatures over a long time frame that the number probably is not at the high positive end of this range.  Data from climate response over the last 120 years seem to point to a number close to zero or slightly negative, in which case the author's entire post is irrelevant.  In fact, it turns out that the climate scientists who make the news are all clustered around the least likely guesses for f, i.e., values greater than 0.6.

Incredibly, while refusing to even mention the Occam’s Razor solution that f is negative, the author seriously entertains the notion that f might be one or greater.  For such values, the gain shoots to infinity and the system goes wildly unstable  (nuclear fission, for example, is an f>1 process).  In an f>1 world, lightly tapping the accelerator in our car would send us quickly racing up to the speed of light.  This is an ABSURD assumption for a system like climate that is long-term stable over tens of millions of years.  A positive feedback f>=1 would have sent us to a Venus-like heat or Mars-like frigidity eons ago.

A summary of why recent historical empirical data implies low or negative feedback is here.  You can learn more on these topics in my climate video and my climate book.  To save you the search, the section of my movie explaining feedbacks, with a nifty live demonstration from my kitchen, is in the first three and a half minutes of the clip below:

Ending the Human Race to Prevent Global Warming

The other day, in this post on our local paper's article about helping families become more green, I observed that the paper seemed to stop short of the real CO2 remedies, and that it should have offered this advice to the two families who collectively had nine kids between them:

In the next generation, no one is going to be having five and four kids.  Certainly those green Europeans would never do something as damaging as having four or five kids.  If you had aborted a few of the little darlings, just think how much CO2 you would have avoided?

Now of course I was being tongue-in-cheek, in that I would never give anyone such advice.  My point was in part to demonstrate that cutesie little pieces of advice, like getting the kids to recycle more, help reinforce the false impression that CO2 rollbacks to 1990 levels would be relatively easy.  But several readers wrote me that I was posting a straw man, that no one in the green movement was seriously talking about limiting children.  WRONG!  My father-in-law, as much as I loved the man, was a long-time greenie who believed having more than two children was close to immoral, and felt that population growth was the number one environmental problem in the world.

And check out this new green hero:

Had Toni Vernelli gone ahead with her pregnancy ten years ago, she would know at first hand what it is like to cradle her own baby, to have a pair of innocent eyes gazing up at her with unconditional love, to feel a little hand slipping into hers – and a voice calling her Mummy.

But the very thought makes her shudder with horror.

Because when Toni terminated her pregnancy, she did so in the firm belief she was helping to save the planet.

Incredibly, so determined was she that the terrible "mistake" of pregnancy should never happen again, that she begged the doctor who performed the abortion to sterilise her at the same time.

He refused, but Toni – who works for an environmental charity – relentlessly hunted down a doctor who would perform the irreversible surgery.

Finally, eight years ago, Toni got her way.

At the age of 27 this young woman at the height of her reproductive years was sterilised to "protect the planet". ….

"Having children is selfish. It’s all about maintaining your genetic line at the expense of the planet," says Toni, 35.

"Every person who is born uses more food, more water, more land, more fossil fuels, more trees and produces more rubbish, more pollution, more greenhouse gases, and adds to the problem of over-population."

Beware Media Exaggeration

The media wants you scared:

Spiegel talks about scientific teams, especially experts from GSF, that have analyzed several events that led to increased levels of radiation:

  1. Hiroshima in 1945
  2. Radioactive rivers and explosions in the Soviet Union preparing their nuclear bomb after 1949
  3. Chernobyl 1986

In all cases, it is found that the actual effects of "radiation illness", including birth defects and delayed deaths, were several orders of magnitude below the description available in the media. For example, almost all people who died as a consequence of the Little Boy did so either instantly or within a few hours, because of burned skin. Casualties who died after a long time because of radiation illnesses were very rare.

Similar conclusions hold for the contaminated river and the 1957 Chelyabinsk explosion of a tank with 80 tons of nuclear waste produced by the Soviet Union as well as for the Chernobyl tragedy. There doesn’t seem to be any reliable source that would really prove an elevated frequency of birth effects and similar complications. Among 6,293 men who worked in the chemical plant preparing the radioactive material for the Soviet bomb (without masks!), only 100 died of lung cancer related to radiation. Greenpeace’s proclamations that 50% of adults in those regions are infertile seem to be pure silliness.

Which is not to say that radiation is anything to screw around with, or that it is not dangerous, just that its dangers have been exaggerated by orders of magnitude.  Just like some other natural phenomena I can think of. 

I posted similar findings about Chernobyl over a year ago:

Over the next four years, a massive cleanup operation involving 240,000 workers ensued, and there were fears that many of these workers, called "liquidators," would suffer in subsequent years. But most emergency workers and people living in contaminated areas "received relatively low whole radiation doses, comparable to natural background levels," a report summary noted. "No evidence or likelihood of decreased fertility among the affected population has been found, nor has there been any evidence of congenital malformations."

In fact, the report said, apart from radiation-induced deaths, the "largest public health problem created by the accident" was its effect on the mental health of residents who were traumatized by their rapid relocation and the fear, still lingering, that they would almost certainly contract terminal cancer. The report said that lifestyle diseases, such as alcoholism, among affected residents posed a much greater threat than radiation exposure….

Officials said that the continued intense medical monitoring of tens of thousands of people in Ukraine, Russia and Belarus is no longer a smart use of limited resources and is, in fact, contributing to mental health problems among many residents nearly 20 years later. In Belarus and Ukraine, 5 percent to 7 percent of government spending is consumed by benefits and programs for Chernobyl victims. And in the three countries, as many as 7 million people are receiving Chernobyl-related social benefits.

Wow – exaggerated projections of catastrophe result in ill-considered government spending.  Who would have thought this could happen?

The Benefits of CO2

In the latest UN climate "warning," the UN argues that the costs of CO2 abatement are not all that high, because we have to offset these costs with the ancillary benefits of these actions.  Many, many folks have demonstrated that these cost estimates are way understated, but let's accept the premise for a moment.  If this approach is correct, then should we not also offset the expected harms from global warming with expected benefits, like a longer growing season, and this:

Carbon dioxide is not the dreaded greenhouse gas that the global warmers crack it up to be. It is in fact the most important airborne fertiliser in the world and without it there would be no green plants at all. In fact, a doubling of the levels of this gas in the atmosphere would bring about a marked rise in plant production — good news for everyone, especially those malnourished millions who can’t afford chemical fertilisers. Perhaps the time is ripe to really start worrying (again) about the fact that for the last 200 million years the concentration of carbon dioxide in our atmosphere has been falling. Indeed it dropped to dangerously low levels during recent ice ages. The Plant Kingdom responded to this potentially catastrophic (no carbon no food) situation by producing the so-called C4 plants that can survive low CO2 by using sunlight more efficiently.

Back to the 1800s

For those who do not accept my interpretation that the IPCC wants America to solve global warming by reverting our economy to look just like India's, check out this article from Reuters (HT: Reference Frame):

French towns worried about fuel prices, pollution and striking transport workers need look no further than the horse.

Horses are a possible alternative for vehicles such as school buses and refuse trucks, say groups eager to pick up on global concerns about eco-friendly transport.

"It’s all about sustainable development and bringing some humanity back to today’s monotonous, machine-driven jobs," Stephane de Veyrac, from the French National Stud Organisation, said at this week’s annual conference of French mayors.

De Veyrac’s group says it is the first in France to offer consulting on a wide range of horse-powered vehicles that could also haul bottles and aid street sweeping.

"It is a serious alternative — horses are already in use in over 70 towns as replacements for gasoline- and diesel-powered service vehicles," said de Veyrac, pointing to the ‘Hippoville’ prototype parked in the exhibition hall….

Studies about cost and overall carbon footprint are still underway but supporters say the animals beat cars and trucks on a number of criteria, especially for transport work requiring frequent stops over short distances, like emptying trash bins.

Here is a related thought from the Anti-Planner (emphasis added):

Many planning advocates take it for granted that sprawl and auto driving are inherently unsustainable. McShane shows just how far this attitude can go when he describes Halle Neustadt, which some Swedish urban planners once described as “the most sustainable city in the world.”

McShane here refers to some field work done by the Antiplanner. To make a long story short, what made Halle Neustadt “sustainable” was poverty, and as soon as its residents gained some wealth, many of them moved out and most of the rest bought automobiles, turning the city’s many greenspaces into parking lots.

And, oh by the way, the urban planning ideas don’t even work:

Owen then turns to climate change, which he describes as the last gasp of smart growth. Smart growth, he notes, “has always been a policy in search of a justification, a solution in search of a problem.” Now, in climate change, smart-growth advocates hope they have found such a problem.

One difficulty, McShane notes, is that there is no guarantee that smart growth is really more greenhouse-friendly than ordinary sprawl. Depending on load factors, Diesel trains can emit more greenhouse gases per passenger mile than autos, and concrete-and-steel high-rise condos can emit more CO2 than wood homes.

Another Example of UN Alarmism

Via the Washington Post (emphasis added):

The United Nations‘ top AIDS scientists plan to acknowledge this week that they have long overestimated both the size and the course of the epidemic, which they now believe has been slowing for nearly a decade, according to U.N. documents prepared for the announcement.

AIDS remains a devastating public health crisis in the most heavily affected areas of sub-Saharan Africa. But the far-reaching revisions amount to at least a partial acknowledgment of criticisms long leveled by outside researchers who disputed the U.N. portrayal of an ever-expanding global epidemic.

The latest estimates, due to be released publicly Tuesday, put the number of annual new HIV infections at 2.5 million, a cut of more than 40 percent from last year’s estimate, documents show. The worldwide total of people infected with HIV — estimated a year ago at nearly 40 million and rising — now will be reported as 33 million.

Having millions fewer people with a lethal contagious disease is good news. Some researchers, however, contend that persistent overestimates in the widely quoted U.N. reports have long skewed funding decisions and obscured potential lessons about how to slow the spread of HIV. Critics have also said that U.N. officials overstated the extent of the epidemic to help gather political and financial support for combating AIDS.

"There was a tendency toward alarmism, and that fit perhaps a certain fundraising agenda," said Helen Epstein, author of "The Invisible Cure: Africa, the West, and the Fight Against AIDS." "I hope these new numbers will help refocus the response in a more pragmatic way."

Does this sound like any other issue the UN is working on?  Maybe this one?

Altered for Readability

For a while now, I have known that the design I created for this site was not really working.  My intention was to draw from the color palette of the Earth in space, but what I got was a blog that was very hard to read.  I had dragged my feet for a while, casting about for a better design, when I received a class action lawsuit from Jon Edwards suing me for destroying the eyesight of my readers.  So I have modified the blog to be much more readable, at least as an interim step toward a new design.

More on IPCC Reports

It has been said many times, but it is always worth pointing out again at the time of this new IPCC report just how flawed the IPCC process is and how little the IPCC summaries have to do with, you know, science.

The IPCC involves numerous experts in the preparation of its reports. However, chapter authors are frequently asked to summarize current controversies and disputes in which they themselves are professionally involved, which invites bias. Related to this is the problem that chapter authors may tend to favor their own published work by presenting it in a prominent or flattering light. Nonetheless the resulting reports tend to be reasonably comprehensive and informative. Some research that contradicts the hypothesis of greenhouse gas-induced warming is under-represented, and some controversies are treated in a one-sided way, but the reports still merit close attention.

A more compelling problem is that the Summary for Policymakers, attached to the IPCC Report, is produced, not by the scientific writers and reviewers, but by a process of negotiation among unnamed bureaucratic delegates from sponsoring governments. Their selection of material need not and may not reflect the priorities and intentions of the scientific community itself. Consequently it is useful to have independent experts read the underlying report and produce a summary of the most pertinent elements of the report.

Finally, while the IPCC enlists many expert reviewers, no indication is given as to whether they disagreed with some or all of the material they reviewed. In previous IPCC reports many expert reviewers have lodged serious objections only to find that, while their objections are ignored, they are acknowledged in the final document, giving the impression that they endorsed the views expressed therein.

Thought for the Day

Imagine for a moment that the industrial revolution had occurred 70 years earlier, and we were having this argument about global warming in the 1930's rather than the 2000's.  How would the media have reported the great midwestern US droughts we refer to today as the Dust Bowl?  Almost certainly, these events would have been blamed on man and CO2 combustion.  Everyone from Al Gore to James Hansen would be saying that these droughts were most certainly caused by man-made global warming.

We know today that these were entirely natural cyclical events, not caused at all by man (except perhaps to the extent that poor farming practices exacerbated some of the problems).  We know that such an assumption of man's guilt would have been dead wrong.  So how is it that today we can be so sure that the unusual events we see are somehow man-made, particularly when these events are much less dire than extremes we have already seen through natural variation over the last century?  For example, despite all the news about global warming and the reporting on every single heat wave, we are actually seeing fewer all-time temperature highs today than we have in the past.