Absolutely Priceless Example of How Poor Alarmists’ Science Can Be

This is absolutely amazing.  I was checking out this article in the Ithaca Journal called "Climate Change 101: Positive Feedback Cycles" based on a pointer from Tom Nelson.

The Journal is right to focus on feedback.  As I have written on numerous occasions, the base effect of CO2, even in the IPCC projections, is minimal.  Only by assuming unbelievably high positive feedback numbers do the IPCC and other climate modelers get catastrophic warming forecasts.  Such an assumption is hard to swallow – very few (like, zero) long-term stable natural processes (like climate) are dominated by high positive feedbacks (the IPCC forecasts assume 67-80% feedback factors, leading to forecasts 3x to 5x higher than the no-feedback case).
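For readers who want the arithmetic: a feedback fraction f multiplies the no-feedback warming by 1/(1-f), so 67% feedback triples it and 80% feedback quintuples it.  A quick sketch (the 1.2 C base sensitivity is a commonly cited no-feedback figure and is my assumption here, not the article's):

```python
# System gain from a feedback fraction f: gain = 1 / (1 - f).
# base_warming is an assumed no-feedback CO2 doubling sensitivity.

def feedback_gain(f):
    """Multiplier applied to the base warming for feedback fraction f."""
    return 1.0 / (1.0 - f)

base_warming = 1.2  # deg C per CO2 doubling, before feedbacks (assumed)

for f in (0.0, 0.67, 0.80):
    print(f"f = {f:.2f}: gain = {feedback_gain(f):.2f}x, "
          f"warming = {base_warming * feedback_gain(f):.1f} C")
```

Note how fast the gain blows up as f approaches 1, which is exactly why the assumption that a stable system sits at f = 0.67-0.80 deserves scrutiny.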

So I guess I have to give kudos to an alarmist article that actually attempts to take on the feedback issue, the most critical, and shakiest, of the climate model assumptions. 

But all their credibility falls apart from the first paragraph.  They begin:

Our world is full of positive feedback cycles, and so is our society.
Popular children’s books like “If You Give a Mouse a Cookie” by Laura
Numeroff are excellent examples. In Numeroff’s tale, a mouse asks for a
cookie, leading it to ask for a glass of milk, and so on, till finally
it asks for another cookie.

Oh my God, they go to a children’s book to prove positive feedback?  If I had gone this route, I probably would have played the "sorcerer’s apprentice" card from Fantasia.  Anyway, they do soon get into real physics in the next paragraph.  Sort of.

Here’s an example everyone in Ithaca can relate to: the snowball. If
you make a small snowball and set it on the top of a hill, what
happens? 1) It begins rolling, and 2) it collects snow as it rolls.
When it collects snow, the snowball becomes heavier, which causes
gravity to pull on it with more force, making the snowball roll faster
down the hill. This causes more snow to collect on the snowball faster,
etc., etc. Get the picture? That is a positive feedback cycle.

OMG, my head is hurting.  Is there a single entry-level physics student who doesn’t know this is wrong?  The speed of a ball rolling downhill (wind resistance ignored) is absolutely unaffected by its weight.  A 10 pound ball would reach the bottom at the same moment as a 100 pound ball.  Do I really need to be lectured by someone who does not understand even the most basic Newtonian physics?  (I would have to think about what increasing diameter would do to a ball rolling downhill and its speed — but the author’s argument is about weight, not size, so this is irrelevant.)
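The Newtonian point can be demonstrated in a few lines.  For a uniform solid sphere rolling without slipping, the acceleration is (5/7)·g·sin(θ), with no mass term in it at all (the slope, distance, and masses below are arbitrary):

```python
import math

def roll_time(mass_kg, radius_m, slope_deg, distance_m, g=9.81):
    """Time for a uniform solid sphere to roll (without slipping) a given
    distance down an incline.  For a solid sphere I = (2/5) m r^2, so
    a = g*sin(theta) / (1 + I/(m r^2)) = (5/7) g sin(theta):
    both the mass and the radius cancel out of the acceleration."""
    I = 0.4 * mass_kg * radius_m**2
    a = g * math.sin(math.radians(slope_deg)) / (1 + I / (mass_kg * radius_m**2))
    return math.sqrt(2 * distance_m / a)  # from d = (1/2) a t^2

t_light = roll_time(4.5, 0.1, 10, 50)    # roughly a 10 pound ball
t_heavy = roll_time(45.0, 0.3, 10, 50)   # roughly a 100 pound ball
print(t_light, t_heavy)                  # identical arrival times
```

As it happens, for a uniform sphere the radius drops out too, so even the diameter question has a clean answer in the idealized case.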

Do you really need any more?  This guy has already disqualified himself from lecturing us about physical processes.  But let’s get a bit more:

And what happens to the snowball? Eventually the hill flattens and the
ball comes to a stop. But if the hill continued forever, the snowball
would reach some critical threshold. It would become too big to hold
itself together at the raging speed it was traveling down the hill and
it would fall apart. Before the snowball formed, it was at equilibrium
with its surroundings, and after it falls apart, it may again reach an
equilibrium, but the journey is fast-paced and unpredictable.

Two problems:  1) In nature, "hills" are never infinitely long.  And on any infinitely long hill, even with minimal starting energy, everything would have ended up at the bottom long before we came into being, some 12 billion years or so into the history of the universe.  2)  Climate is a long-term, quite stable process.  It oscillates some, but never runs away.  Temperatures in the past have been many degrees higher and lower than they are today.  If a degree or so is all it takes to start the climate snowball running down the infinite hill, then the climate should have already run down this hill in the past, but it never has.  That is because long-term stable natural processes are generally dominated by negative, not positive, feedback. [ed: fixed this, had it backwards]

The author goes on to discuss a couple of well-known possible positive feedback factors – increases in water vapor and ice albedo.  But he completely fails to mention well-understood negative feedback factors, including cloud formation.  In fact, though most climate models assume positive feedback from the net of water processes (water vapor increase plus cloud formation), the IPCC admits we don’t even know the net sign of these factors.  And most recent published work on feedback factors has demonstrated that climate does not seem to be dominated by positive feedback.

It goes without saying that an author who begins with a children’s book and a flawed physics example can’t take much credit for being scientific.  But perhaps his worst failing of all is discussing a process that has countervailing forces but failing to even mention the half of those forces that don’t support his case.  It’s not science, it’s propaganda.

CO2 Limits Most Harmful to Low-Income Minorities

The Environmental Justice and Climate Change Initiative has issued a report that rising temperatures, supposedly from CO2, will hurt American blacks the most.

Blacks are more likely to be hurt by global warming than other Americans, according to a report issued Thursday.

The report was authored by the Environmental Justice and Climate Change Initiative, a climate justice advocacy group, and Redefining Progress, a nonprofit policy institute. It detailed various aspects of climate change, such as air pollution and rising temperatures, which it said disproportionately affect blacks, minorities and low-income communities in terms of poor health and economic loss.

“Right now we have an opportunity to see climate change in a different light; to see it for what it is, a human rights issue on a dangerous collision course of race and class,” said Nia Robinson, director of the Environmental Justice and Climate Change Initiative. “While it’s an issue that affects all of us, like many other social justice issues, it is disproportionately affecting African-Americans, other people of color, low-income people and indigenous communities.”

Heat-related deaths among blacks occur at a 150 to 200 percent greater rate than for non-Hispanic whites, the report said. It also reported that asthma, which has a strong correlation to air pollution, affects blacks at a 36 percent higher rate of incidence than whites.

Existing disparities between low-income communities and wealthier ones, such as high unemployment rates, are exacerbated by such negative effects of climate change as storms and floods, the report said.

Hmm, no mention of reductions in cold-related deaths, which typically are larger in a given year than heat-related deaths.  But a more serious issue is the CO2-abatement measures this advocacy group supports.  These abatement efforts could easily increase gas prices by as much as $20 a gallon, along with similar increases in electricity and natural gas prices.  In addition, strong CO2 abatement programs are likely to knock a percent or two off economic growth rates and, if ethanol is still a preferred tactic, will likely substantially raise food prices as well.  I am no expert, but I would say that rising gas, electricity, and food prices and falling economic growth are all likely to hit low-income minorities pretty hard. 

CO2 abatement is a wealthy person’s cause.  The poor of America and the world at large will be demonstrably worse off in a world that is cooler but poorer.

No Detectable Hurricane Trend

Hurricanes offer a difficult data set to work with.  Since there are so few, even small numerical changes year over year can lead to substantial percentage changes.  Also, random variations in landfall can change at least media perceptions of hurricane frequency.  That is why I have argued for a while that metrics like total cyclonic energy are better for looking at hurricane trends.  And, as you can see below, there has been no positive trend over the last 15 years or so:

[Figure: tropical cyclone Accumulated Cyclone Energy (ACE) trend chart]
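For reference, accumulated cyclone energy is just a simple sum over storm observations; here is a sketch of the standard calculation (the storm track below is made up):

```python
def accumulated_cyclone_energy(six_hourly_winds_kt):
    """ACE: 1e-4 times the sum of squared 6-hourly maximum sustained
    winds (in knots) while a system is at tropical-storm strength
    (>= 34 kt).  Standard NOAA-style definition; the wind values used
    below are invented for illustration."""
    return 1e-4 * sum(v**2 for v in six_hourly_winds_kt if v >= 34)

# A hypothetical storm's 6-hourly maximum winds over its lifetime:
storm = [30, 35, 45, 60, 80, 95, 85, 60, 40, 30]
print(round(accumulated_cyclone_energy(storm), 2))
```

Because it squares the winds and sums over every storm's whole lifetime, ACE captures intensity and duration together, which is exactly why it is less noisy than simple storm counts.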

The Australian National Climate Center confirmed these findings:

Concern about the enhanced greenhouse effect affecting TC frequency and intensity has grown over recent decades. Recently, trends in global TC activity for the period 1970 to 2004 have been examined by Webster et al. [2005]. They concluded that "no global trend has yet emerged in the total number of tropical storms and hurricanes." …  For the 1981/82 to 2005/06 TC seasons, there are no apparent trends in the total numbers and cyclone days of TCs, nor in numbers and cyclone days of severe TCs with minimum central pressure of 970 hPa or lower.

Media Rorschach Test

This will come as no surprise to folks who attempt to follow climate science through the media, but a recent study really sheds some interesting light on how the media report science based on their pre-conceived notions, and not on the science itself.  Alex Tabarrok discusses media reporting on the relative math skills of men and women.  The politically correct view is that there are no differences, so it seems that was going to be the way the new science was reported, whether the data matched or not:

For the past week or so the newspapers have been trumpeting a new study showing no difference in average math ability between males and females.  Few people who have looked at the data thought that there were big differences in average ability but many media reports also said that the study showed no differences in high ability.

The LA Times, for example, wrote:

The study also undermined the assumption — infamously espoused by former Harvard University President Lawrence H. Summers in 2005 — that boys are more likely than girls to be math geniuses.

Scientific American said:

So the team checked out the most gifted children. Again, no difference. From any angle, girls measured up to boys. Still, there’s a lack of women in the highest levels of professional math, engineering and physics. Some have said that’s because of an innate difference in math ability. But the new research shows that that explanation just doesn’t add up.

The Chronicle of Higher Education said:

The research team also studied if there were gender discrepancies at the highest levels of mathematical ability and how well boys and girls resolved complex problems. Again they found no significant differences.

All of these reports and many more like them are false.  In fact, consistent with many earlier studies (JSTOR), what this study found was that the ratio of male to female variance in ability was positive and significant, in other words we can expect that there will be more math geniuses, and more dullards, among males than among females.  I quote from the study (VR is variance ratio):

Greater male variance is indicated by VR > 1.0. All VRs, by state and grade, are >1.0 [range 1.11 to 1.21].

Notice that the greater male variance is observable in the earliest data, grade 2.  (In addition, higher male VRs have been noted for over a century.)  Now the study authors clearly wanted to downplay this finding so they wrote things like "our analyses show greater male variability, although the discrepancy in variances is not large."  Which is true in some sense but the point is that small differences in variance can make for big differences in outcome at the top.  The authors acknowledge this with the following:

If a particular specialty required mathematical skills at the 99th percentile, and the gender ratio is 2.0, we would expect 67% men in the occupation and 33% women. Yet today, for example, Ph.D. programs in engineering average only about 15% women.
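The point that small variance differences make big differences at the top is easy to check numerically.  Assuming normal score distributions with equal means (my simplification) and the study's reported VR range of 1.11 to 1.21:

```python
import math

def normal_sf(z):
    """Survival function of the standard normal: P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def tail_ratio(vr, cutoff_z=2.326):
    """Male:female ratio above a cutoff set at the female distribution's
    percentile (cutoff_z = 2.326 is the 99th percentile), assuming equal
    means, unit female SD, and male SD = sqrt(vr).  Illustrative only."""
    male_tail = normal_sf(cutoff_z / math.sqrt(vr))
    female_tail = normal_sf(cutoff_z)
    return male_tail / female_tail

for vr in (1.11, 1.15, 1.21):
    print(f"VR = {vr}: ~{tail_ratio(vr):.2f} males per female above the 99th pct")
```

Even these "not large" variance ratios produce substantially more males than females above the 99th percentile, and the skew grows the further out in the tail you look.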

Both the WSJ and economist Mark Perry get it right.

Climate: The First Post-Modernist Science?

When I was in college, we mechanical engineers had little but disdain for practitioners of the various social sciences, who seemed more focused on advancing political ideologies than conducting quality science.  Apparently, denizens of these softer sciences have become convinced that the lack of objectivity or objective research that plagues their fields is par for the course in the hard sciences as well.  MaxedOutMamma describes this post-modernist view of science:

If some reader is not familiar with the full-bodied modern explications of post-modernism, the story of the Dartmouth professor who decided to sue her students will serve as an introduction. Here is her version of the problem with her students. Here is an article
she wrote about working as a post-doc researcher at Dartmouth Medical
School, which may give a hint as to why her students were so, ah,
unwilling to assent to her view of the world:

In graduate school, I was inculcated in the tenets of a field known as science studies, which teaches that scientific knowledge has suspect access to truth and that science is motivated by politics and human interest. This is known as social constructivism and is the reigning mantra in science studies, which considers historical and sociological understandings of science. From the vantage point of social constructivism, scientific facts are not discovered but rather created within a social framework. In other words, scientific facts do not correspond to a natural reality but conform to a social construct.

As a practicing scientist, I feel these views need to be qualified in the context of literary inquiry. My mentor, Chris Lowrey, is an extraordinary physician-scientist whose vision of science is pragmatic and positivist. My experience in his lab has shown me that the practice of science is at least partly motivated by the scientific method, though with some qualifications.

Through my experience in the laboratory, I have found that postmodernism offers a constructive critique of science in ways that social constructivism cannot, due to postmodernism’s emphasis on openly addressing the presupposed moral aims of science. In other words, I find that while an individual ethic of motivation exists, and indeed guides the conduct of laboratory routine, I have also observed that a moral framework—one in which the social implications of science and technology are addressed—is clearly absent in scientific settings. Yet I believe such a framework is necessary. Postmodernism maintains that it is within the rhetorical apparatus of science—how scientists talk about their work—that these moral aims of science may be accomplished.

For those of you who cling to scientific method, this is pretty bizarre stuff. But she, and many others, are dead serious about it. If a research finding could harm a class of persons, the theory is that scientists should change the way they talk about that finding. Since scientific method is a way of building a body of knowledge based on skeptical testing, replication, and publication, this is a problem.

The tight framework of scientific method mandates figuring out what would disprove the theory being tested and then looking for the disproof. The thought process that spawned the scientific revolution was inherently skeptical, which is why disciples of scientific method say that no theory can be definitively and absolutely proved, but only disproved (falsified). Hypotheses are elevated to the status of theories largely as a result of continued failures to disprove the theory and continued conformity of experimentation and observation with the theory, and such efforts should be conducted by diverse parties.

Needless to say postmodernist schools of thought and scientific method are almost polar opposites.

Reading this, I start to come to the conclusion that climate scientists are attempting to make Climate the first post-modernist physical science.  It certainly would explain why climate is so far short of being a "big-boy science" like physics, where replicating results is more important than casual review of publications by a cherry-picked group of peers.  It also explains this quote from National Center for Atmospheric Research (NCAR) climate researcher and global warming action promoter, Stephen Schneider:

We have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we have. Each of us has to decide what the right balance is between being effective and being honest.

Additionally, it goes a long way to explaining why Steve McIntyre gets this response when he requests the data he needs to try to replicate certain climate studies (and here):

    We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it. There is IPR to consider.

Roy Spencer Congressional Testimony

I am a bit late on this, but Roy Spencer raises a number of good issues here in his testimony to Congress.  In particular, he focuses on just how much climate alarmists’ assumption of strong positive feedback drives their catastrophic forecasts.  Put in more realistic, better-justified feedback assumptions, and the catastrophe goes away.

Testimony of Roy W. Spencer before the
Senate Environment and Public Works Committee on 22 July 2008

A printable PDF of this testimony can be found here

I would like to thank Senator Boxer and members of the Committee for allowing me to discuss my experiences as a NASA employee engaged in global warming research, as well as to provide my current views on the state of the science of global warming and climate change.

I have a PhD in Meteorology from the University of Wisconsin-Madison, and have been involved in global warming research for close to twenty years. I have numerous peer reviewed scientific articles dealing with the measurement and interpretation of climate variability and climate change. I am also the U.S. Science Team Leader for the AMSR-E instrument flying on NASA’s Aqua satellite.

1. White House Involvement in the Reporting of Agency Employees’ Work

On the subject of the Administration’s involvement in policy-relevant scientific work performed by government employees in the EPA, NASA, and other agencies, I can provide some perspective based upon my previous experiences as a NASA employee. For example, during the Clinton-Gore Administration I was told what I could and could not say during congressional testimony. Since it was well known that I am skeptical of the view that mankind’s greenhouse gas emissions are mostly responsible for global warming, I assumed that this advice was to help protect Vice President Gore’s agenda on the subject.

This did not particularly bother me, though, since I knew that as an employee of an Executive Branch agency my ultimate boss resided in the White House. To the extent that my work had policy relevance, it seemed entirely appropriate to me that the privilege of working for NASA included a responsibility to abide by direction given by my superiors.

But I eventually tired of the restrictions I had to abide by as a government employee, and in the fall of 2001 I resigned from NASA and accepted my current position as a Principal Research Scientist at the University of Alabama in Huntsville. Despite my resignation from NASA, I continue to serve as Team Leader on the AMSR-E instrument flying on the NASA Aqua satellite, and maintain a good working relationship with other government researchers.

2. Global Warming Science: The Latest Research
Regarding the currently popular theory that mankind is responsible for global warming, I am very pleased to deliver good news from the front lines of climate change research. Our latest research results, which I am about to describe, could have an enormous impact on policy decisions regarding greenhouse gas emissions.
Despite decades of persistent uncertainty over how sensitive the climate system is to increasing concentrations of carbon dioxide from the burning of fossil fuels, we now have new satellite evidence which strongly suggests that the climate system is much less sensitive than is claimed by the U.N.’s Intergovernmental Panel on Climate Change (IPCC).

Another way of saying this is that the real climate system appears to be dominated by “negative feedbacks” — instead of the “positive feedbacks” which are displayed by all twenty computerized climate models utilized by the IPCC. (Feedback parameters larger than 3.3 Watts per square meter per degree Kelvin (Wm-2K-1) indicate negative feedback, while feedback parameters smaller than 3.3 indicate positive feedback.)
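Spencer's 3.3 Wm-2K-1 dividing line translates directly into warming per CO2 doubling via dT = F / lambda.  A sketch using the commonly cited doubling forcing of about 3.7 Wm-2 (the lambda values below are illustrative, not from the testimony):

```python
# Equilibrium warming for a CO2 doubling given a net feedback parameter
# lambda (W m^-2 K^-1): dT = F_2x / lambda.  F_2x ~ 3.7 W m^-2 is the
# commonly cited doubling forcing; the lambda values are illustrative.

F_2X = 3.7  # W m^-2, radiative forcing from doubled CO2 (assumed)

def doubling_warming(lam):
    """Equilibrium temperature change (C) for a CO2 doubling."""
    return F_2X / lam

for lam in (1.2, 3.3, 6.0):
    print(f"lambda = {lam:.1f} W m^-2 K^-1 -> dT2x = {doubling_warming(lam):.2f} C")
```

Note that lambda = 3.3 gives roughly 1.1 C per doubling, the familiar no-feedback benchmark, which is why values above it count as net negative feedback and values below it as net positive.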

If true, an insensitive climate system would mean that we have little to worry about in the way of manmade global warming and associated climate change. And, as we will see, it would also mean that the warming we have experienced in the last 100 years is mostly natural. Of course, if climate change is mostly natural then it is largely out of our control, and is likely to end — if it has not ended already, since satellite-measured global temperatures have not warmed for at least seven years now.

2.1 Theoretical evidence that climate sensitivity has been overestimated
The support for my claim of low climate sensitivity (net negative feedback) for our climate system is two-fold. First, we have a new research article [1] in-press in the Journal of Climate which uses a simple climate model to show that previous estimates of the sensitivity of the climate system from satellite data were biased toward the high side by the neglect of natural cloud variability. It turns out that the failure to account for natural, chaotic cloud variability generated internal to the climate system will always lead to the illusion of a climate system which appears more sensitive than it really is.

Significantly, prior to its acceptance for publication, this paper was reviewed by two leading IPCC climate model experts – Piers Forster and Isaac Held– both of whom agreed that we have raised a legitimate issue. Piers Forster, an IPCC report lead author and a leading expert on the estimation of climate sensitivity, even admitted in his review of our paper that other climate modelers need to be made aware of this important issue.

To be fair, in a follow-up communication Piers Forster stated to me his belief that the net effect of the new understanding on climate sensitivity estimates would likely be small. But as we shall see, the latest evidence now suggests otherwise.

2.2 Observational evidence that climate sensitivity has been overestimated
The second line of evidence in support of an insensitive climate system comes from the satellite data themselves. While our work in-press established the existence of an observational bias in estimates of climate sensitivity, it did not address just how large that bias might be.

But in the last several weeks, we have stumbled upon clear and convincing observational evidence of particularly strong negative feedback (low climate sensitivity) from our latest and best satellite instruments. That evidence includes our development of two new methods for extracting the feedback signal from either observational or climate model data, a goal which has been called the “holy grail” of climate research.
The first method separates the true signature of feedback, wherein radiative flux variations are highly correlated to the temperature changes which cause them, from internally-generated radiative forcings, which are uncorrelated to the temperature variations which result from them. It is the latter signal which has been ignored in all previous studies, the neglect of which biases feedback diagnoses in the direction of positive feedback (high climate sensitivity).
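A toy simulation makes the claimed bias concrete.  This is my own illustration of the mechanism described, not Spencer's code, and every parameter value is invented:

```python
import random

def simulate_feedback_bias(lam=6.0, heat_cap=30.0, n_steps=20000,
                           rad_noise=0.3, nonrad_noise=1.0, dt=0.1, seed=0):
    """Toy energy balance: C dT/dt = N + S - lam*T, where N is internally
    generated RADIATIVE forcing (e.g. chaotic cloud changes) and S is
    non-radiative forcing.  A satellite sees the net radiative flux
    R = lam*T - N.  Regressing R against T recovers lam only when N is
    absent; with N present the slope is biased low, i.e. toward positive
    feedback.  All parameter values are invented for illustration."""
    rng = random.Random(seed)
    T, Ts, Rs = 0.0, [], []
    for _ in range(n_steps):
        N = rng.gauss(0.0, rad_noise)      # internal radiative forcing
        S = rng.gauss(0.0, nonrad_noise)   # non-radiative forcing
        T += dt * (N + S - lam * T) / heat_cap
        Ts.append(T)
        Rs.append(lam * T - N)             # what the satellite measures
    mT = sum(Ts) / n_steps
    mR = sum(Rs) / n_steps
    cov = sum((t - mT) * (r - mR) for t, r in zip(Ts, Rs))
    var = sum((t - mT) ** 2 for t in Ts)
    return cov / var  # diagnosed feedback parameter (true value: lam)

print(simulate_feedback_bias(rad_noise=0.0))  # recovers lam = 6.0
print(simulate_feedback_bias())               # biased below 6.0
```

The direction of the bias is the key point: internal radiative noise can only make the diagnosed feedback look more positive than the true value, never more negative.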
Based upon global oceanic climate variations measured by a variety of NASA and NOAA satellites during the period 2000 through 2005 we have found a signature of climate sensitivity so low that it would reduce future global warming projections to below 1 deg. C by the year 2100. As can be seen in Fig. 1, that estimate from satellite data is much less sensitive (a larger diagnosed feedback) than even the least sensitive of the 20 climate models which the IPCC summarizes in its report. It is also consistent with our previously published analysis of feedbacks associated with tropical intraseasonal oscillations [3].

Fig. 1. Frequency distributions of feedback parameters (regression slopes) computed from three-month low-pass filtered time series of temperature (from channel 5 of the AMSU instrument flying on the NOAA-15 satellite) and top-of-atmosphere radiative flux variations for 6 years of global oceanic satellite data measured by the CERES instrument flying on NASA’s Terra satellite; and from a 60 year integration of the NCAR-CCSM3.0 climate model forced by 1% per year CO2 increase. Peaks in the frequency distributions indicate the dominant feedback operating. This NCAR model is the least sensitive (greatest feedback parameter value) of all 20 IPCC models.
A second method for extracting the true feedback signal takes advantage of the fact that during natural climate variability, there are varying levels of internally-generated radiative forcings (which are uncorrelated to temperature), versus non-radiative forcings (which are highly correlated to temperature). If the feedbacks estimated for different periods of time involve different levels of correlation, then the “true” feedback can be estimated by extrapolating those results to 100% correlation. This can be seen in Fig. 2, which shows that even previously published [4] estimates of positive feedback are, in reality, supportive of negative feedback (feedback parameters greater than 3.3 Wm-2K-1).

Fig. 2. Re-analysis of the satellite-based feedback parameter estimates of Forster and Gregory (2006) showing that they are consistent with negative feedback rather than positive feedback (low climate sensitivity rather than high climate sensitivity).

2.3 Why do climate models produce so much global warming?
The results just presented raise the following question: If the satellite data indicate an insensitive climate system, why do the climate models suggest just the opposite? I believe the answer is due to a misinterpretation of cloud behavior by climate modelers.

The cloud behaviors programmed into climate models (cloud “parameterizations”) are based upon researchers’ interpretation of cause and effect in the real climate system [5]. When cloud variations in the real climate system have been measured, it has been assumed that the cloud changes were the result of certain processes, which are ultimately tied to surface temperature changes. But since other, chaotic, internally generated mechanisms can also be the cause of cloud changes, the neglect of those processes leads to cloud parameterizations which are inherently biased toward high climate sensitivity.

The reason why the bias occurs only in the direction of high climate sensitivity is this: While surface warming could conceivably cause cloud changes which lead to either positive or negative cloud feedback, causation in the opposite direction (cloud changes causing surface warming) can only work in one direction, which then “looks like” positive feedback. For example, decreasing low cloud cover can only produce warming, not cooling, and when that process is observed in the real climate system and assumed to be a feedback, it will always suggest a positive feedback.
2.4 So, what has caused global warming over the last century?
One necessary result of low climate sensitivity is that the radiative forcing from greenhouse gas emissions in the last century is not nearly enough to explain the upward trend of 0.7 deg. C in the last 100 years. This raises the question of whether there are natural processes at work which have caused most of that warming.
On this issue, it can be shown with a simple climate model that small cloud fluctuations assumed to occur with two modes of natural climate variability — the El Nino/La Nina phenomenon (Southern Oscillation), and the Pacific Decadal Oscillation — can explain 70% of the warming trend since 1900, as well as the nature of that trend: warming until the 1940s, no warming until the 1970s, and resumed warming since then. These results are shown in Fig. 3.

Fig. 3. A simple climate model forced with cloud cover variations assumed to be proportional to a linear combination of the Southern Oscillation Index (SOI) and Pacific Decadal Oscillation (PDO) index. The heat flux anomalies in (a), which then result in the modeled temperature response in (b), are assumed to be distributed over the top 27% of the global ocean (1,000 meters), and weak negative feedback has been assumed (4 W m-2 K-1).
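The model in Fig. 3 can be sketched as a one-box ocean energy balance.  The forcing series below is synthetic, standing in for the SOI/PDO combination actually used; the heat capacity and feedback values follow the caption:

```python
# Minimal one-box version of the Fig. 3 model: C dT/dt = F(t) - lam*T,
# where F(t) is a cloud-induced heat flux anomaly.  The forcing series
# below is synthetic; the real model used observed SOI and PDO indices.

SECONDS_PER_YEAR = 3.15e7
RHO_CP = 4.2e6           # volumetric heat capacity of seawater, J m^-3 K^-1
DEPTH = 1000.0           # ocean mixing depth, m (as in the caption)
C = RHO_CP * DEPTH       # areal heat capacity, J m^-2 K^-1
LAM = 4.0                # assumed weak negative feedback, W m^-2 K^-1

def run_model(forcing_wm2, dt_years=1.0):
    """Integrate C dT/dt = F - LAM*T by forward Euler; returns yearly T."""
    T, out = 0.0, []
    dt = dt_years * SECONDS_PER_YEAR
    for F in forcing_wm2:
        T += dt * (F - LAM * T) / C
        out.append(T)
    return out

# Synthetic multidecadal forcing: positive early, negative mid-century,
# positive again late, mimicking the described warm/flat/warm pattern.
forcing = [0.5] * 40 + [-0.3] * 30 + [0.6] * 40
temps = run_model(forcing)
print(f"{temps[39]:.2f} {temps[69]:.2f} {temps[-1]:.2f}")
```

With a ~30 year ocean time constant, small persistent flux anomalies of a few tenths of a W/m2 are enough to produce multidecadal swings of a few tenths of a degree, which is the heart of the argument.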

While this is not necessarily being presented as the only explanation for most of the warming in the last century, it does illustrate that there are potential explanations for recent warming other than just manmade greenhouse gas emissions. Significantly, this is an issue on which the IPCC has remained almost entirely silent. There has been virtually no published work on the possible role of internal climate variations in the warming of the last century.

3. Policy Implications
Obviously, what I am claiming today is of great importance to the global warming debate and related policy decisions, and it will surely be controversial. These results are not totally unprecedented, though, as other recently published research [6] has also led to the conclusion that the real climate system does not exhibit net positive feedback.

While it will take some time for the research community to digest this new information, it must be mentioned that new research contradicting the latest IPCC report is entirely consistent with the normal course of scientific progress. I predict that in the coming years, there will be a growing realization among the global warming research community that most of the climate change we have observed is natural, and that mankind’s role is relatively minor.

While other researchers need to further explore and validate my claims, I am heartened by the fact that my recent presentation of these results to an audience of approximately 40 weather and climate researchers at the University of Colorado in Boulder last week (on July 17, 2008) led to no substantial objections to either the data I presented or to my interpretation of those data.

And, curiously, despite its importance to climate modeling activities, no one from Dr. Kevin Trenberth’s facility, the National Center for Atmospheric Research (NCAR), bothered to drive four miles down the road to attend my seminar, even though it was advertised at NCAR.

I hope that the Committee realizes that, if true, these new results mean that humanity will be largely spared the negative consequences of human-induced climate change. This would be good news that should be celebrated — not attacked and maligned.

And given that virtually no research into possible natural explanations for global warming has been performed, it is time for scientific objectivity and integrity to be restored to the field of global warming research. This Committee could, at a minimum, make a statement that encourages that goal.

REFERENCES
1. Spencer, R.W., and W.D. Braswell, 2008: Potential biases in cloud feedback diagnosis:
A simple model demonstration. J. Climate, in press.
2. Allen, M.R., and D.J. Frame, 2007: Call off the quest. Science, 318, 582.
3. Spencer, R.W., W. D. Braswell, J. R. Christy, and J. Hnilo, 2007: Cloud and radiation
budget changes associated with tropical intraseasonal oscillations. Geophys. Res.
Lett., 34, L15707, doi:10.1029/2007GL029698.
4. Forster, P. M., and J. M. Gregory, 2006: The climate sensitivity and its components
diagnosed from Earth Radiation Budget data. J. Climate, 19, 39-52.
5. Stephens, G. L., 2005: Clouds feedbacks in the climate system: A critical review. J.
Climate, 18, 237-273.
6. Schwartz, S. E., 2007: Heat capacity, time constant, and sensitivity of the Earth’s
climate system. J. Geophys. Res., 112, D24S05, doi:10.1029/2007JD008746.

Global Warming Is Caused by Everything Our Interest Group Opposed Before It Came Along As An Issue

Many leftish groups have for years had a curious opposition to advertising.  Ralph Nader and his PIRG groups always made it a particular issue.  This always struck me as inherently insulting, as the "logic" behind their opposition to advertising is that people are all dumb, unthinking, programmable robots who launch off and buy whatever they see advertised on TV.

The global warming hysteria kind of sucks all the oxygen out of every other goofy leftish issue out there, so now it’s necessary to link your leftish cause to global warming.  So it is no surprise to find out that advertising apparently causes global warming:

AUSTRALIAN television advertising is producing as much as 57 tonnes of carbon dioxide per hour, and thirty second ad breaks are among the worst offenders, according to audit figures from pitch consultants TrinityP3.

Carbon emissions are particularly strong during high-rating programs such as the final episodes of the Ten Network’s Biggest Loser, which produced 2135kgs per 30 second ad, So You Think You Can Dance at 2061kg for every 30 seconds, closely followed by the Seven News 6pm news at 1689kg and Border Security at 1802kg.

TrinityP3 managing director Darren Woolley said emissions are calculated by measuring a broadcasters’ power consumption and that of a consumer watching an ad on television in their home, B&T Magazine reports.

“We look at the number of households and the number of TVs, and then the proportion of TVs that are plasma, LCD or traditional, and calculate energy consumption based on those factors,” Woolley said.

TrinityP3 is formalising a standard carbon footprint measurement of advertising, which it claims will be the first of its kind.

“Most companies have been obliged to think through their strategies on reducing carbon emissions and they need to remember that their marketing strategies do have an environmental impact that needs to be included. This is not something that is easily able to be measured,” Mr Woolley said.

“Reality television is interesting as the more viewers and voters that tune in, the higher the carbon footprint. The more people vote, the more it adds to the CO2 in the atmosphere.

Note that, oddly, the 54 minutes an hour of regular programming is OK, it’s only the 6 minutes of advertising that has a carbon footprint.  That’s OK, though, because I am going to start turning off the TV during advertisements and go out and sit in my idling SUV and listen to my commercial-free satellite radio instead.


Some Day Climate May Be A Big-Boy Science

In big-boy science, people who run an experiment and arrive at meaningful findings will publish not only those findings but the data and methodology they used to reach those findings.  They do that because in most sciences, a conclusion is not really considered robust until multiple independent parties have replicated the finding, and they can’t replicate the finding until they know exactly how it was reached.  Physicists don’t run around talking about peer review as the be-all-end-all of scientific validation.  Instead of relying on peers to read over an article to look for mistakes, they go out and see if they can replicate the results.  It is expected that others in the profession will try to replicate, or even tear down, a controversial new finding.  Such a process is why we aren’t all running around talking about the cold fusion "consensus" based on "peer-reviewed science."  It would simply be bizarre for someone in physics, say, to argue that their findings were beyond question simply because they had been peer reviewed by a cherry-picked review group, and then to refuse to publish their data or detailed methodology.

Some day climate science may be all grown up, but right now it’s far from it.

1990: A Year Selected Very Carefully

Most of you will know that the Kyoto Treaty adopted CO2 reduction goals referenced to a base year of 1990.  But what you might not know is exactly how that year was selected.  Why would a treaty, negotiated and signed in the latter half of the 90’s adopt 1990 as a base year, rather than say 1995 or 2000?  Or even 1980.

Closely linked to this question of base year selection for the treaty is a sort of cognitive dissonance that is occurring in reports about compliance of the signatories with the treaty.  Some seem to report substantial progress by European countries in reducing emissions, while others report that nearly everyone is going to miss the goals by a lot and that lately, the US has been doing better than signatory countries in terms of CO2 emissions.

To answer this, let’s put ourselves back in about 1997, as the Kyoto Treaty was being hammered out.  Here is what the negotiators knew at that time:

  • Both Japan and Europe had been mired in a recession since about 1990, cutting economic growth and reducing emissions growth.  The US economy had been booming.  From 1990-1995, US average real GDP growth was 2.5%, while Japan and Europe were both around 1.4% per year (source xls). 
  • The Berlin Wall fell in 1989, and Germany began unifying with East Germany in 1990.  In 1990, all that old, polluting, inefficient Soviet/Communist-era industry was still running, pumping out incredible amounts of CO2 per unit produced.  By 1995, much of that industry had been shut down, though even to this day Germany continues to reap year-over-year efficiency improvements as it restructures old Soviet-era industry, transportation infrastructure, etc.
  • The UK in the late 1980’s had embarked on a huge campaign to replace Midlands coal with natural gas from the North Sea.  From 1990-1995, for reasons having nothing to do with CO2, Britain substituted a lot of lower-CO2 gas combustion for higher-CO2 coal combustion.

Remember, negotiators knew all this stuff in 1997.  All the above experience netted to this CO2 data that was in the negotiators’ pockets at Kyoto (from here):

CO2 Emissions Changes, 1990-1995

EU -2.2%
Former Communist -26.1%
Germany -10.7%
UK -6.9%
Japan 7.2%
US 6.4%

In the above, the categories are not mutually exclusive.  Germany and UK are also in the EU numbers, and Germany is included in the former communist number as well.  Note that all numbers exclude offsets and credits.

As you can see, led by the collapse of the former communist economies and the shuttering of inefficient Soviet industries, in addition to the substitution of British gas for coal, the European negotiators knew they had tremendous CO2 reductions already in their pocket, IF 1990 was chosen as a base year.  They could begin Kyoto already looking like heroes, despite the fact that the reductions from 1990-1997 were almost all due to economic and political happenings unrelated to CO2 abatement programs.

Even signatory Japan was ticked off about the 1990 date, arguing that it benefited the European countries but was pegged years after Japan had made most of its improvements in energy efficiency:

Jun Arima, lead negotiator for Japan’s energy ministry, said the 1990 baseline for CO2 cuts agreed at Kyoto was arranged for the convenience of the UK and Germany. …

Mr Arima said: "The base year of 1990 was very advantageous to European countries. In the UK, you had already experienced the ‘dash for gas’ from coal – then in Germany they merged Eastern Germany where tremendous restructuring occurred.

"The bulk of CO2 reductions in the EU is attributable to reductions in UK and Germany."

His other complaint was that the 1990 baseline ruled inadmissible the huge gains in energy efficiency Japan had made in the 1980s in response to the 1970s oil shocks.

"Japan achieved very high level of energy efficiency in the 1980s so that means the additional reduction from 1990 will mean tremendous extra cost for Japan compared with other countries that can easily achieve more energy efficiency."

So 1990 was chosen by the European negotiators as the best possible date for their countries to look good and, as an added bonus, as a very good date to try to make the US look bad.  That is why, whenever you see a press release from the EU about carbon dioxide abatement, you will see them trumpet their results since 1990.  Any other baseline year would make them look worse.

One might arguably say that anything that occurred before the signing of the treaty in 1997 is accidental or unrelated, and that it is more interesting to see what has happened once governments had explicit programs in place to reduce CO2.  This is what you will see:

Just let me remind you of some salutary statistics. Between 1997 and 2004, carbon dioxide emissions rose as follows:

Emissions worldwide increased 18.0%;

Emissions from countries that ratified the protocol increased 21.1%;

Emissions from non-ratifiers of the protocol increased 10.0%;

Emissions from the US (a non-ratifier) increased 6.6%;

A lot more CO2 data here.

Postscript:  One would expect that absent changes in government regulations, the US has probably continued to do better than Europe on this metric the last several years.  The reason is that increases in wholesale gas prices raise US retail gas prices by a higher percentage than they do European retail prices.   This is because fixed-amount taxes make up a much higher portion of European gas prices than American.  While it does not necessarily follow from this, it is not illogical to assume that recent increases in oil and gas prices have had a greater effect on US than European demand, particularly since, with historically lower energy prices, the US has not made many of the lower-hanging efficiency investments that have already been made in Europe.
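
A quick worked example makes the tax point concrete.  The prices below are hypothetical round numbers chosen only to illustrate the mechanism, not actual US or European figures:

```python
# Hypothetical pump prices, $/gallon, for illustration only.
# Assume the tax component is a fixed amount per gallon, so a wholesale
# increase passes through dollar-for-dollar to the retail price.
def retail_pct_increase(retail_price, wholesale_increase):
    """Percentage change in retail price from a wholesale cost increase."""
    return 100.0 * wholesale_increase / retail_price

us_retail = 3.00   # assumed US pump price (small tax component)
eu_retail = 8.00   # assumed European pump price (large fixed tax component)

print(retail_pct_increase(us_retail, 1.00))  # ~33.3% jump in the US
print(retail_pct_increase(eu_retail, 1.00))  # 12.5% jump in Europe
```

The same $1 wholesale increase moves the US pump price by a much larger percentage, which is consistent with US demand responding more strongly.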

Climate Re-Education Program

A reader sent me a heads-up to an article in the Bulletin of the American Meteorological Society ($, abstract here) titled "Climate Change Education and the Ecological Footprint".  The authors express concern that non-science students don’t sufficiently understand global warming and its causes, and want to initiate a re-education program in schools to get people thinking the "right" way.

So, do climate scientists want to focus on better educating kids in details of the carbon cycle?  In the complexities in sorting out causes of warming between natural and man-made effects?  In difficulties with climate modeling?  In the huge role that feedback plays in climate forecasts?

Actually, no.  Interestingly, the curriculum advocated in the Bulletin of the American Meteorological Society has very little to do with meteorology or climate science.  What they are advocating is a social engineering course structured around the concept of "ecological footprint."  The course, as far as I can tell, has more in common with this online kids game where kids find out what age they should be allowed to live to based on their ecological footprint.

Like the Planet Slayer game above, the approach seems to be built around a quiz (kind of slow and tedious to get through).  Like Planet Slayer, most of the questions are lifestyle questions – do you eat meat, do you buy food from more than 200 miles away, how big is your house, do you fly a lot, etc.  If you answer that yes, you have a good diet and a nice house and travel a bit and own a car, then you are indeed destroying the planet.

I could go nuts on a rant about propaganda in government monopoly schools, but I want to make a different point [feel free to insert rant of choice here].  The amazing thing to me is that none of this has the first thing to do with meteorology or climate science.  If there were any science at all in this ecological footprint stuff, it would have to be economics.  What does meteorology have to say about the carrying capacity of the earth?  Zero.  What does climate science have to say about the balance between the benefits of air travel and the cost of the incremental warming that might result from that air travel?  Zero. 

Take one example – food miles.  I live in Phoenix.  The cost to grow crops around here (since most of the agricultural water has to be brought in from hundreds of miles away) is high.  The cost is also high because even irrigated, the soil is not as productive for many crops as it is in, say, Iowa, so crops require more labor, more fertilizer, and more land for the same amount of yield.  I could make a really good argument that an ear of corn trucked in from Iowa probably uses less resources than an ear of corn grown within 200 miles of where I live.  Agree or disagree, this is a tricky economics question that requires fairly sophisticated analysis to answer.  How is teaching kids that "food grown within 200 miles helps save the planet" advancing the cause of climate science?  What does meteorology have to say about this question?

I am sorry I don’t have more excerpts, but I am lazy and I have to retype them by hand.  But this is too priceless to miss:

Responding to the statement "Buying bottled water instead of drinking water from a faucet contributes to global warming" only 21% of all [San Jose State University] Meteorology 112 students answered correctly.  In the EF student group, this improved to a 53% correct response….  For the statement, "Eating a vegetarian diet can reduce global warming," the initial correct response by all Meteorology 112 students was 14%, while the EF group improved to 80%.

Oh my god, every time you drink bottled water you are adding 0.0000000000000000000000000001C to the world temperature.  How much global warming do I prevent if I paint flowers on my VW van?  We are teaching college meteorology students this kind of stuff?  The gulf between this and my freshman physics class is so wide, I can’t even get my head around it.  This is a college science class?

In fact, the authors admit that their curriculum is an explicit rejection of science education, bringing the odd plea in a scientific journal that science students should be taught less science:

Critics of conventional environmental education propose that curriculum focused solely on science without personal and social connections may not be the most effective educational model for moving toward social change.

I think it is a pretty good sign that a particular branch of science has a problem when it is focused more on "social change" than on getting the science right, and when its leading journal focuses on education studies rather than science.

If I were a global warming believer, this program would piss me off.  Think about it.  Teaching kids this kind of stuff and then sending them out to argue with knowledgeable skeptics is like teaching a bunch of soldiers only karate and judo and then sending them into a modern firefight.  They are going to get slaughtered. 

Hockey Stick: RIP

I have posted many times on the numerous problems with the historic temperature reconstructions that were used in Mann’s now-famous "hockey stick."   I don’t have any problems with scientists trying to recreate history from fragmentary evidence, but I do have a problem when they overestimate the certainty of their findings or enter the analysis trying to reach a particular outcome.   Just as an archaeologist must admit there is only so much that can be inferred from a single Roman coin found in the dirt, we must accept the limit to how good trees are as thermometers.  The problem with tree rings (the primary source for Mann’s hockey stick) is that they vary in width for any number of reasons, only one of which is temperature.

One of the issues scientists are facing with tree ring analyses is called "divergence."  Basically, when tree rings are measured, they have "data" in the form of rings and ring widths going back as much as 1000 years (if you pick the right tree!).  This data must be scaled — a ring width variation of .02mm must be scaled in some way so that it translates to a temperature variation.  What scientists do is take the last few decades of tree rings, for which we have simultaneous surface temperature recordings, and scale the two data sets against each other.  Then they can use this scale when going backwards to convert ring widths to temperatures.
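
The calibrate-then-reconstruct procedure described above can be sketched in a few lines.  Everything here is synthetic, made-up data; the point is only the mechanics of scaling ring widths to temperatures over an overlap period:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a century of "true" temperature anomalies, and ring
# widths that respond to temperature plus non-temperature noise.
years = np.arange(1900, 2001)
temp = 0.01 * (years - 1900) + rng.normal(0, 0.2, years.size)
rings = 1.0 + 0.5 * temp + rng.normal(0, 0.1, years.size)

# Calibration: regress measured temperature on ring width over the period
# where instrument records "exist" (here, 1951 onward).
cal = years >= 1951
slope, intercept = np.polyfit(rings[cal], temp[cal], 1)

# Reconstruction: apply the same scaling backwards to pre-instrument rings.
reconstructed = slope * rings[~cal] + intercept
```

The divergence problem amounts to the fitted `slope` and `intercept` failing to hold when the same trees are resampled a decade later.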

But a funny thing happened on the way to the Nobel Prize ceremony.  It turns out that if you go back to the same trees 10 years later and gather updated samples, the ring widths, based on the scaling factors derived previously, do not match well with what we know current temperatures to be. 

The initial reaction from Mann and his peers was to try to save their analysis by arguing that there was some other modern anthropogenic effect that was throwing off the scaling for current temperatures (though no one could name what such an effect might be).  Upon further reflection, though, scientists are starting to wonder whether tree rings have much predictive power at all.  Even Keith Briffa, the man brought in to save the hockey stick for the fourth IPCC report after Mann was discredited, has recently expressed concerns:

There exists very large potential for over-calibration in multiple regressions and in spatial reconstructions, due to numerous chronology predictors (lag variables or networks of chronologies – even when using PC regression techniques). Frequently, the much vaunted ‘verification’ of tree-ring regression equations is of limited rigour, and tells us virtually nothing about the validity of long-timescale climate estimates or those that represent extrapolations beyond the range of calibrated variability.

Using smoothed data from multiple source regions, it is all too easy to calibrate large scale (NH) temperature trends, perhaps by chance alone.

But this is what really got me the other day.  Steve McIntyre (who else) has a post that analyzes each of the tree ring series in the latest Mann hockey stick.  Apparently, each series has a calibration period, where the scaling is set, and a verification period, an additional period for which we have measured temperature data to verify the scaling.  A couple of points were obvious as he stepped through each series:

  1. Each series individually has terrible predictive ability.  Each was able to be scaled, but each has so much noise in it that in many cases standard t-tests can’t even be run, and when they can, confidence intervals are huge.  For example, the series NOAMER PC1 (the series McIntyre showed years ago dominates the hockey stick) predicts that the mean temperature value in the verification period should be between -1C and -16C.  For a mean temperature, this is an unbelievably wide range.  To give one a sense of scale, that is a 27F range, which is roughly equivalent to the difference in average annual temperatures between Phoenix and Minneapolis!  A temperature forecast with error bars that could encompass both Phoenix and Minneapolis is not very useful.
  2. Even with the huge confidence intervals above, the series above does not verify!  (the verification value is -.19).  In fact, only one out of numerous data series individually verifies, and even this one was manually fudged to make it work.

Steve McIntyre is a very careful and fair person, so he allows that even if none of the series individually verify or have much predictive power, they might when combined.  I am not a statistician, so I will leave that to him to think about, but I know my response — if all of the series are of low value individually, their value is not going to increase when combined.  They may accidentally, in the aggregate, hit some verification value, but we should accept that as an accident, not as some sort of true signal emerging from the data. 

Why Does NASA Oppose Satellites? A Modest Proposal For A Better Data Set

One of the ironies of climate science is that perhaps the most prominent opponent of satellite measurement of global temperature is James Hansen, head of … wait for it … the Goddard Institute for Space Studies at NASA!  As odd as it may seem, while we have updated our technology for measuring atmospheric components like CO2, and have switched from surface measurement to satellites to monitor sea ice, Hansen and his crew at the space agency are fighting a rearguard action to defend surface temperature measurement against the intrusion of space technology.

For those new to the topic, the ability to measure global temperatures by satellite has only existed since about 1979, and is admittedly still being refined and made more accurate.  However, it has a number of substantial advantages over surface temperature measurement:

  • It is immune to biases related to the positioning of surface temperature stations, particularly the temperature creep over time for stations in growing urban areas.
  • It is relatively immune to the problems of discontinuities as surface temperature locations are moved.
  • It has much better geographic coverage, lacking the immense holes that exist in the surface temperature network.

Anthony Watts has done a fabulous job of documenting the issues with the surface temperature measurement network in the US, which one must remember is the best in the world.  Here is an example of the problems in the network.  Another problem that Mr. Hansen and his crew are particularly guilty of is making a number of adjustments in the laboratory to historical temperature data that are poorly documented and have the result of increasing apparent warming.  These adjustments, which imply that surface temperature measurements are net biased on the low side, make zero sense given the surfacestations.org surveys and our intuition about urban heat biases.

What really got me thinking about this topic was this post by John Goetz the other day taking us step by step through the GISS methodology for "adjusting" historical temperature records  (By the way, this third party verification of Mr. Hansen’s methodology is only possible because pressure from folks like Steve McIntyre forced NASA to finally release their methodology for others to critique).  There is no good way to excerpt the post, except to say that when it’s done, one is left with a strong sense that the net result is not really meaningful in any way.  Sure, each step in the process might have some sort of logic behind it, but the end result is such a mess that it’s impossible to believe the resulting data have any relevance to any physical reality.  I argued the same thing here with this Tucson example.

Satellites do have disadvantages, though I think these are minor compared to their advantages  (Most skeptics believe Mr. Hansen prefers the surface temperature record because of, not in spite of, its biases, as it is believed Mr. Hansen wants to use a data set that shows the maximum possible warming signal.  This is also consistent with the fact that Mr. Hansen’s historical adjustments tend to be opposite what most would intuit, adding to rather than offsetting urban biases).  Satellite disadvantages include:

  • They take readings of individual locations fewer times in a day than a surface temperature station might, but since most surface temperature records only use two temperatures a day (the high and low, which are averaged), this is mitigated somewhat.
  • They are less robust — a single failure in a satellite can prevent measuring the entire globe, where a single point failure in the surface temperature network is nearly meaningless.
  • We have less history in using these records, so there may be problems we don’t know about yet.
  • We only have history back to 1979, so it’s not useful for very long-term trend analysis.

This last point I want to address.  As I mentioned above, almost every climate variable we measure has a technological discontinuity in it.  Even temperature measurement has one between thermometers and more modern electronic sensors.  As an example, below is a NOAA chart on CO2 that shows such a data source splice:

[NOAA chart: atmospheric carbon dioxide concentrations, showing a splice of two data sources]

I have zero influence in the climate field, but I would nevertheless propose that we begin to make the same data source splice with temperature.  It is as pointless to continue relying on surface temperature measurements as our primary metric of global warming as it is to rely on ship observations for sea ice extent. 

Here is the data set I have begun to use (Download crut3_uah_splice.xls).  It is a splice of the Hadley CRUT3 historic database with the UAH satellite database for historic temperature anomalies.  Because the two use different base periods to zero out their anomalies, I had to reset the UAH anomaly to match CRUT3.  I used the first 60 months of UAH data and set the UAH average anomaly for this period equal to the CRUT3 average for the same period.  This added exactly 0.1C to each UAH anomaly.  The result is shown below (click for larger view).

[Chart: spliced CRUT3/UAH temperature anomaly series]

Below is the detail of the 60-month period where the two data sets were normalized and the splice occurs.  The normalization turned out to be a simple addition of 0.1C to the entire UAH anomaly data set.  By visual inspection, the splice looks pretty good.

[Chart: detail of the 60-month normalization period where the splice occurs]

One always needs to be careful when splicing two data sets together.  In fact, in the climate field I have warned of the problem of finding an inflection point in the data right at a data source splice.  But in this case, I think the splice is clean and reasonable, and consistent in philosophy with, say, the splice in historic CO2 data sources.
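
For anyone who wants to reproduce this, the normalization step reduces to a single offset.  The function name and the short arrays below are my own illustrative stand-ins, not the actual CRUT3 or UAH series:

```python
import numpy as np

def splice_offset(reference, series, overlap_months=60):
    """Offset that re-bases `series` onto `reference`'s anomaly baseline,
    computed over the first `overlap_months` of overlapping data."""
    return reference[:overlap_months].mean() - series[:overlap_months].mean()

# Toy example: a "UAH" series that runs 0.1C cooler than "CRUT3" purely
# because its anomalies are zeroed against a different base period.
crut3 = np.array([0.00, 0.15, 0.05, 0.20])
uah = crut3 - 0.1

offset = splice_offset(crut3, uah, overlap_months=4)
adjusted_uah = uah + offset   # now on the CRUT3 baseline
```

After the offset is applied, the two series can be concatenated at the 1979 boundary without a step discontinuity from the differing base periods.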

A Reminder

As we know, alarmists have adopted the term "climate change" over "global warming," in large part because, since the climate is always changing for all manner of reasons, one can always find, well, climate change.   This allows alarmists in the media to point to any bit of weather in the tails of the normal distribution and blame these events on man-made climate change.

But here is a reminder for those who may be uncomfortable with their own grasp of climate science (don’t feel bad, the media goes out of its way not to explain things very well).  There is no mechanism that has been proven, or even credibly identified, for increasing levels of CO2 in the atmosphere to "change the climate" or cause extreme weather without first causing warming.  In other words, the only possible causality is CO2 –> warming –> changing weather patterns.  If we don’t see the warming, we don’t see the changing weather patterns. 

I feel the need to say this, because alarmists (including Gore) have adopted the tactic of saying that climate change is accelerating, or that they see the signs of accelerating climate change everywhere.  But for the last 10 years, we have not seen any warming.

[Chart: UAH satellite temperature anomalies, flat over the last decade]

So if climate change is in fact somehow "accelerating," then it cannot possibly be due to CO2.  I believe that they are trying to create the impression that somehow CO2 is directly causing extreme weather, which it does not, under any mechanism anyone has ever suggested.   

Antarctic Sea Ice

I have written a number of times that alarmists like Al Gore focus their cameras and attention on small portions of the Antarctic Peninsula where sea ice has been shrinking  (actually, it turns out Al Gore did not focus actual cameras but used special effects footage from the disaster movie The Day After Tomorrow).  I have argued that this is disingenuous, because the Antarctic Peninsula is not representative of climate trends in the rest of Antarctica, much less a good representative of climate trends across the whole globe.  This map reinforces my point, showing in red where sea ice has increased, and in blue where it has decreased  (this is a little counter-intuitive, as we expect anomaly maps to show red as hotter and blue as colder).

[Map: Antarctic sea ice trends, red where ice has increased, blue where it has decreased]

The Cost of the Insurance Policy Matters

Supporters of the precautionary principle argue that even if it is uncertain that we will face a global warming catastrophe from producing CO2, we should insure against it by abating CO2 just in case.  "You buy insurance on your house, don’t you," they often ask.  Sure, I answer, except when the cost of the insurance is more than the cost of the house.

In a speech yesterday here in Washington, Al Gore challenged the United States to "produce every kilowatt of electricity through wind, sun, and other Earth-friendly energy sources within 10 years. This goal is achievable, affordable, and transformative." (Well, the goal is at least one of those things.) Gore compared the zero-carbon effort to the Apollo program. And the comparison would be economically apt if, rather than putting a man on the moon—which costs about $100 billion in today’s dollars—President Kennedy’s goal had been to build a massive lunar colony, complete with a casino where the Rat Pack could perform.

Gore’s fantastic—in the truest sense of the word—proposal is almost unfathomably pricey and makes sense only if you think that not doing so almost immediately would result in an uninhabitable planet. …

This isn’t the first time Gore has made a proposal with jaw-dropping economic consequences. Environmental economist William Nordhaus ran the numbers on Gore’s idea to reduce carbon emissions by 90 percent by 2050. Nordhaus found that while such a plan would indeed reduce the maximum increase in global temperatures to between 1.3 and 1.6 degrees Celsius, it did so "at very high cost" of between $17 trillion and $22 trillion over the long term, as opposed to doing nothing. (Again, just for comparative purposes, the entire global economy is about $50 trillion.)

I think everyone’s numbers are low, because they don’t include the cost of storage (technology unknown) or alternative capacity when it is a) dark and/or b) not windy.

A while back I took on Gore's suggestion that all of America's electricity needs could be met with current solar technology on a 90-mile by 90-mile tract of solar panels.  Setting aside the fact that Al's environmental friends would never allow us to cover 8,100 square miles of desert in silicon, I got a total installation cost of $21 trillion.  And that did not include the electrical distribution systems necessary for the whole country to take power from this one spot, nor any kind of storage technology for using electricity at night  (it was hard to cost one out when no technology exists for storing America's total energy needs for 12 hours).  Suffice it to say that a full solution with storage and distribution would easily cost north of $30 trillion.
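For readers who want to check the order of magnitude themselves, here is a minimal back-of-envelope sketch.  Every input below (average demand, capacity factor, installed cost per watt) is a hypothetical round number of my choosing for illustration, not a figure from the original analysis:

```python
# Rough sanity check on the cost of powering the U.S. entirely with solar.
# All inputs are illustrative assumptions, not figures from the post.

avg_demand_gw = 470       # assumed average U.S. electric demand, GW
capacity_factor = 0.20    # assumed fraction of nameplate a desert plant delivers
cost_per_watt = 8.0       # assumed 2008-era installed cost, $ per peak watt

# Nameplate capacity needed so that average output matches average demand
nameplate_gw = avg_demand_gw / capacity_factor

# Total installed cost, in trillions of dollars
total_cost_trillions = nameplate_gw * 1e9 * cost_per_watt / 1e12

# Energy needed to ride through one 12-hour night (storage, tech unknown)
storage_gwh = avg_demand_gw * 12

print(f"nameplate: {nameplate_gw:.0f} GW")
print(f"installed cost: ${total_cost_trillions:.1f} trillion")
print(f"overnight storage: {storage_gwh:.0f} GWh")
```

With these assumed inputs the sketch lands in the high-teens of trillions of dollars before storage and transmission, which is the same ballpark as the $21 trillion figure above; the point is the order of magnitude, not any particular input.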

This Too Shall Pass (By Popular Demand)

In perhaps the largest batch of email I have ever gotten on one subject, readers are demanding more coverage of the effect of trace atmospheric gases on kidney function.  So here you go:

In early July, when a former government employee accused Dick Cheney’s office of deleting from congressional testimony key statements about the impact of climate change on public health, White House staff countered that the science just wasn’t strong enough to include. Not two weeks later, however, things already look different. University of Texas researchers have laid out some of the most compelling science to date linking climate change with adverse public-health effects: scientists predict a steady rise in the U.S. incidence of kidney stones — a medical condition largely brought on by dehydration — as the planet continues to warm.

I am certainly ready to believe that this is "the most compelling science to date" vis-à-vis the negative effects of global warming, though I thought the study about global warming increasing acne was right up there as well.

Here are 48,900 other things that "global warming will cause."  More from Lubos Motl.  And here is the big list of global warming catastrophe claims.

Update:  I am not sure I would have even bothered, but Ryan M actually dives into the "science" of the kidney stone finding.

Working on New Videos

Sorry posting has been light, but I am working on starting a new series of videos.  At some point I want to update the old ones, but right now I want to experiment with some new approaches; the old ones are pretty good, but are basically just PowerPoint slides with some narration.  If you have not seen the previous videos, you may find them as follows:

  • The 6-part, one hour version is here
  • The 10-minute version, which is probably the best balance of time vs. material covered, is here.
  • The short 3-minute version I created for a contest (I won 2nd place) is here.

Combined, they have over 40,000 views.

Another Dim Bulb Leading Global Warming Efforts

Rep. Edward Markey (D-Mass.) is chairman of the House (Select) Energy Independence and Global Warming Committee.  He sure seems to know his stuff, huh:

A top Democrat told high school students gathered at the U.S. Capitol Thursday that climate change caused Hurricane Katrina and the conflict in Darfur, which led to the “black hawk down” battle between U.S. troops and Somali rebels….

“In Somalia back in 1993, climate change, according to 11 three- and four-star generals, resulted in a drought which led to famine,” said Markey.

“That famine translated to international aid we sent in to Somalia, which then led to the U.S. having to send in forces to separate all the groups that were fighting over the aid, which led to Black Hawk Down. There was this scene where we have all of our American troops under fire because they have been put into the middle of this terrible situation,” he added.

Ugh.