Typhoons and Hurricanes

(Cross-posted from Coyoteblog)

The science that CO2 is a greenhouse gas and causes some warming is hard to dispute.  The science that Earth is dominated by net positive feedbacks that increase modest greenhouse gas warming to catastrophic levels is very debatable.  The science that man’s CO2 is already causing an increase in violent and severe weather is virtually non-existent.

Seriously, of all the different pieces of the climate debate, the one that is almost always based on pure crap is the frequent media statement linking manmade CO2 to some severe weather event.

For example, Coral Davenport in the New York Times wrote the other day:

As the torrential rains of Typhoon Hagupit flood the Philippines, driving millions of people from their homes, the Philippine government arrived at a United Nations climate change summit meeting on Monday to push hard for a new international deal requiring all nations, including developing countries, to cut their use of fossil fuels.

It is a conscious pivot for the Philippines, one of Asia’s fastest-growing economies. But scientists say the nation is also among the most vulnerable to the impacts of climate change, and the Philippine government says it is suffering too many human and economic losses from the burning of fossil fuels….

A series of scientific reports have linked the burning of fossil fuels with rising sea levels and more powerful typhoons, like those that have battered the island nation.

It is telling that Ms. Davenport did not bother to link or name any of these scientific reports.  Even the IPCC, which many skeptics believe to be exaggerating manmade climate change dangers, refused in its last report to link any current severe weather events with manmade CO2.

Roger Pielke responded today with charts from two different recent studies on typhoon activity in the Philippines.  Spot the supposed upward manmade trend.  Or not:

kubotachan2009

c2789-wpac-50-10-weinkleetal

 

I am not a huge fan of landfalling cyclonic storm counts because whether they make landfall or not can be totally random and potentially disguise trends.  A better metric is the total energy of cyclonic storms, land-falling or not, where again there is no trend.
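For readers unfamiliar with the metric: as I understand NOAA's definition, ACE is computed by summing the squares of each storm's six-hourly maximum sustained winds (in knots, counting only readings at tropical-storm strength or above) and dividing by 10,000.  A quick sketch with two invented storms shows the mechanics:

```python
# Illustrative ACE (Accumulated Cyclone Energy) calculation with made-up storms.
# ACE sums the squares of six-hourly maximum sustained winds (knots), counting
# only readings of tropical-storm strength (>= 35 kt), divided by 10^4.
def ace(six_hourly_winds_kt):
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 35) / 10_000

# Two hypothetical storms: a long-lived typhoon and a short, weak storm.
typhoon = [40, 60, 85, 110, 120, 95, 70, 45]
weak_storm = [35, 40, 35, 30]  # the final 30 kt reading is below the cutoff

season_ace = ace(typhoon) + ace(weak_storm)
print(f"season ACE: {season_ace:.1f}")
```

Note how the squaring means a season's ACE is dominated by its strongest, longest-lived storms, which is exactly why it is a better gauge of total cyclonic activity than a landfall count.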

Via the Weather Underground, here is Accumulated Cyclone Energy (ACE) for the Western Pacific (lower numbers represent fewer cyclonic storms with less total strength):

ace-west-pacific

 

And here, by the way, is the ACE for the whole globe:

ace-global

Remember this when you see the next storm inevitably blamed on manmade global warming.  If anything, we are actually in a fairly unprecedented (in the last century and a half) hurricane drought.

Those Who Follow Climate Will Definitely Recognize This

This issue will be familiar to anyone who has spent time with temperature graphs.  We can ask ourselves whether 1 degree of global warming is a lot when it is small compared to the seasonal, or even intra-day, variation one would find in most locations.  That is not a trick question.  It might be important, but how important an audience considers it may depend on how one chooses to graph it.  Take this example from an entirely unrelated field:

Last spring, Adnan sent me a letter about … something, I can’t even remember exactly what. But it included these two graphs that he’d drawn out in pencil. With no explanation. There was just a Post-it attached to the back of one of the papers that said: “Could you please hold these 2 pages until we next speak? Thank you.”

Here’s what he sent:

as_tea_graph_2_cropped
Price of tea at 7-11 

as_tea_graph1_crop_0
Price of tea at C-Mart 

This was curious. It crossed my mind that Adnan might be … off his rocker in some way. Or, more excitingly, that these graphs were code for some top-secret information too dangerous for him to send in a letter.

But no. These graphs were a riddle that I would fail to solve when we next spoke, a couple of days later.

Adnan: Now, so would you prefer, as a consumer, would you rather purchase at a store where prices are consistent or items from a store where the prices fluctuate?

Sarah: I would prefer consistency.

Adnan: That makes sense. Especially in today’s economy. So if you had to choose, which store would you say has more consistent prices?

Sarah: 7-11 is definitely more consistent.

Adnan: As compared to…?

Sarah: As compared to C-Mart, which is going way up and down.

Look again, Adnan said. Right. Their prices are exactly the same. It’s just that the graph of C-Mart prices is zoomed way in — the y-axis is in much smaller cost increments — so it looks like dramatic fluctuations are happening. And he made the pencil lines much darker and more striking in the C-Mart graph, so it looks more…sinister or something.
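You can see the trick in a few lines of arithmetic.  The prices and axis ranges below are invented, but the mechanism is general: the "drama" in a chart is the data's spread divided by the y-axis range, and the axis range is the author's choice.

```python
# Same data, two different y-axes: how much of the chart do the wiggles fill?
prices = [1.15, 1.20, 1.18, 1.22, 1.17, 1.21]   # identical series for both "stores"
spread = max(prices) - min(prices)              # $0.07 of actual fluctuation

# Hypothetical axis choices: 7-11 plotted from $0-$2, C-Mart zoomed way in.
axes = {"7-11": (0.0, 2.0), "C-Mart": (1.14, 1.23)}
for store, (lo, hi) in axes.items():
    share = 100 * spread / (hi - lo)  # fraction of chart height the data occupies
    print(f"{store}: the same $0.07 swing fills {share:.1f}% of the y-axis")
```

The identical $0.07 swing occupies a few percent of one chart's height and most of the other's, which is the whole riddle.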

When Climate Alarmism Limits Environmental Progress

One of my favorite sayings is that “years from now, environmentalists will look back on the current obsession with global warming and say that it did incredible harm to real environmental progress.”  The reason is that there are many environmental problems worse than the likely impact of man-made global warming that would cost substantially less money to solve. The focus on climate change has sucked all the oxygen out of every other environmental improvement effort.

The recent Obama climate discussions with China are a great example.  China has horrendous environmental problems that need to be solved long before they worry about CO2 production.

Take coal plants.  Coal plants produce a lot of CO2, but without the aid of modern scrubbers and such, they also produce SOx, NOx, particulate matter, and all the other crap you see in the Beijing air.  The problem is that the CO2 production from a coal plant takes as much as 10-100x more money to eliminate than it takes to eliminate all the other bad stuff.

While economically rational technology exists to get rid of all the other bad stuff from coal (technology that is currently in use at most US coal plants), there is no reasonable technology to eliminate CO2 from coal.  The only option is to substitute things like wind and solar which are much more expensive, in addition to a number of other drawbacks.

What this means is that the money needed to replace just a couple percent of the Chinese coal fleet with carbon-less technologies could probably add scrubbers to all the coal plants.  Thus the same money that would make only an incremental change in CO2 output would make an enormous change in the breathability of air in Chinese cities.

So if we care about the Chinese people, why are we pushing them to worry about CO2?

PS-  by the way, there have been a number of studies that have attributed a lot of the Arctic and Greenland ice melting to the albedo effect of coal combustion particulate matter from China deposited on the ice.  The same technology that would make Beijing air breathable might also reduce Arctic ice melts.

HydroInfra: Scam! Investment Honeypot for Climate Alarmists

Cross-posted from Coyoteblog.

I got an email today from some random Gmail account asking me to write about HydroInfra.  OK.  The email begins: “HydroInfra Technologies (HIT) is a Stockholm based clean tech company that has developed an innovative approach to neutralizing carbon fuel emissions from power plants and other polluting industries that burn fossil fuels.”

Does it eliminate CO2?  NOx?  Particulates?  SOx?  I actually was at the bottom of my inbox for once so I went to the site.  I went to this applications page.  Apparently, it eliminates the “toxic cocktail” of pollutants that include all the ones I mentioned plus mercury and heavy metals.  Wow!  That is some stuff.

Their key product is a process for making something they call “HydroAtomic Nano Gas” or HNG.  It sounds like their PR guys got Michael Crichton and JJ Abrams drunk in a brainstorming session for pseudo-scientific names.

But hold on, this is the best part.  Check out the description of HNG and how it is made:

Splitting water (H20) is a known science. But the energy costs to perform splitting outweigh the energy created from hydrogen when the Hydrogen is split from the water molecule H2O.

This is where mainstream science usually closes the book on the subject.

We took a different approach by postulating that we could split water in an energy efficient way to extract a high yield of Hydrogen at very low cost.

A specific low energy pulse is put into water. The water molecules line up in a certain structure and are split from the Hydrogen molecules.

The result is HNG.

HNG is packed with ‘Exotic Hydrogen’

Exotic Hydrogen is a recent scientific discovery.

HNG carries an abundance of Exotic Hydrogen and Oxygen.

On a Molecular level, HNG is a specific ratio mix of Hydrogen and Oxygen.

The unique qualities of HNG show that the placement of its’ charged electrons turns HNG into an abundant source of exotic Hydrogen.

HNG displays some very different properties from normal hydrogen.

Some basic facts:

  • HNG instantly neutralizes carbon fuel pollution emissions
  • HNG can be pressurized up to 2 bars.
  • HNG combusts at a rate of 9000 meters per second while normal Hydrogen combusts at a rate 600 meters per second.
  • Oxygen values actually increase when HNG is inserted into a diesel flame.
  • HNG acts like a vortex on fossil fuel emissions causing the flame to be pulled into the center thus concentrating the heat and combustion properties.
  • HNG is stored in canisters, arrayed around the emission outlet channels. HNG is injected into the outlets to safely & effectively clean up the burning of fossil fuels.
  • The pollution emissions are neutralized instantly & safely with no residual toxic cocktail or chemicals to manage after the HNG burning process is initiated.

Exotic Hydrogen!  I love it.  This is probably a component of the “red matter” in the Abrams Star Trek reboot.  Honestly, someone please tell me this is a joke, a honeypot for mindless environmental activist drones.    What are the chemical reactions going on here?  If CO2 is captured, what form does it take?  How does a mixture of Hydrogen and Oxygen molecules in whatever state they are in do anything with heavy metals?  None of this is on the website.   On their “validation” page, they have big labels like “Horiba” that look like organizations that have somehow put their imprimatur on the study.  In fact, they are just names of analytical equipment makers.  It’s like putting “IBM” in big print on your climate study because you ran your model on an IBM computer.

SCAM!  Honestly, when you see an article written to attract investment that sounds sort of impressive to laymen but makes absolutely no sense to anyone who knows the smallest amount of Chemistry or Physics, it is an investment scam.

But they seem to get a lot of positive press.  In my search of Google, everything in the first ten pages or so are just uncritical republication of their press releases in environmental and business blogs.   You actually have to go into the comments sections of these articles to find anyone willing to observe this is all total BS.   If you want to totally understand why the global warming debate gets nowhere, watch commenter Michael at this link desperately try to hold onto his faith in HydroInfra while people who actually know things try to explain why this makes no sense.

Reconciling Conflicting Climate Claims

Cross-posted from Coyoteblog

At Real Science, Steven Goddard claims this is the coolest summer on record in the US.

The NOAA reports that both May and June were the hottest on record.

It used to be that the media would reconcile such claims, and one might learn something interesting from that reconciliation, but now all we have are mostly-crappy fact checks with Pinocchio counts.  Both these claims have truth on their side, though the NOAA report is more comprehensively correct.  Still, we can learn something by putting these analyses in context and by reconciling them.

The NOAA temperature data for the globe does indeed show May and June as the hottest on record.  However, one should note a couple of things:

  • The two monthly records do not change the trend over the last 10-15 years, which has basically been flat.  We are hitting records because we are sitting on a plateau that is higher than the rest of the last century (at least in the NOAA data).  It only takes small positive excursions to reach all-time highs.
  • There are a number of different temperature data bases that measure the temperature in different ways (e.g. satellite vs. ground stations) and then adjust those raw readings using different methodologies.  While the NOAA data base is showing all time highs, other data bases, such as satellite-based ones, are not.
  • The NOAA database has been criticized for manual adjustments to temperatures in the past which increase the warming trend.  Without these adjustments, temperatures during certain parts of the 1930’s (think: Dust Bowl) would be higher than today.  This was discussed here in more depth.  As is usual when looking at such things, some of these adjustments are absolutely appropriate and some can be questioned.  However, blaming the whole of the warming signal on such adjustments is just wrong — satellite data bases which have no similar adjustment issues have shown warming, at least between 1979 and 1999.

The Time article linked above illustrated the story of these record months with a video partially on wildfires.  This is a great example of how temperatures are indeed rising but media stories about knock-on effects, such as hurricanes and fires, can be full of it.  2014 has actually been a low fire year so far in the US.

So the world is undeniably on the warm side of average (I won’t say “warmer than normal” because what is “normal”?).  So how does Goddard get this as the coolest summer on record for the US?

Well, the first answer, and it is an important one to remember, is that US temperatures do not have to follow global temperatures, at least not tightly.  While the world warmed 0.5-0.7 degrees C from 1979-1999, the US temperatures moved much less.  Other times, the US has warmed or cooled more than the world has.  The US is well under 5% of the world’s surface area.  It is certainly possible to have isolated effects in such an area.  Remember the same holds true the other way — heat waves in one part of the world don’t necessarily mean the world is warming.

But we can also learn something that is seldom discussed in the media by looking at Goddard’s chart:

click to enlarge

First, I will say that I am skeptical of any chart that uses “all USHCN” stations because the number of stations and their locations change so much.  At some level this is an apples to oranges comparison — I would be much more comfortable with a chart that looks at only USHCN stations with, say, at least 80 years of continuous data.  In other words, this chart may be an artifact of the mess that is the USHCN database.

However, it is possible that this is correct even with a better data set and against a backdrop of warming temperatures.  Why?  Because this is a metric of high temperatures.  It looks at the number of times a data station reads a high temperature over 90F.  At some level this is a clever chart, because it takes advantage of a misconception most people, including most people in the media, have — that global warming plays out in higher daytime high temperatures.

But in fact this does not appear to be the case.  Most of the warming we have seen over the last 50 years has manifested itself as higher nighttime lows and higher winter temperatures.  Both of these raise the average, but neither will change Goddard’s metric of days above 90F.  So it is perfectly possible Goddard’s chart is right even if the US is seeing a warming trend over the same period.  Which is why we have not seen any more local all-time daily high temperature records set recently than in past decades.  But we have seen a lot of new records for high low temperature, if that term makes sense.  Also, this explains why the ratio of daily high records to daily low records has risen — not necessarily because there are a lot of new high records, but because we are setting fewer low records.  We can argue about daytime temperatures but nighttime temperatures are certainly warmer.
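The claim above is easy to check with a toy simulation.  The station statistics below are invented; the point is the mechanism, not the numbers: add warming only to the nighttime lows and the average climbs while a count of 90F days does not move at all.

```python
import random

# Toy illustration: warming that shows up only in nighttime lows raises the
# average temperature without adding a single day over 90F.
random.seed(0)
DAYS = 365 * 30  # thirty hypothetical years at one station

highs = [random.gauss(75, 15) for _ in range(DAYS)]  # assumed daytime highs (F)
lows = [h - 20 for h in highs]                       # assumed nighttime lows

def mean_and_hot_days(highs, lows):
    mean = (sum(highs) + sum(lows)) / (2 * len(highs))
    return mean, sum(h > 90 for h in highs)          # Goddard-style 90F count

before = mean_and_hot_days(highs, lows)
after = mean_and_hot_days(highs, [l + 1.5 for l in lows])  # +1.5F at night only

print(f"average temperature: {before[0]:.2f}F -> {after[0]:.2f}F")
print(f"days over 90F:       {before[1]} -> {after[1]}")   # unchanged
```

The warmed scenario is 0.75F hotter on average, yet the days-over-90F metric is identical, which is exactly how a real warming trend and a flat "hot days" chart can coexist.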

This chart shows an example with low and high temperatures over time at Amherst, MA  (chosen at random because I was speaking there).  Note that recently, most warming has been at night, rather than in daily highs.

Computer Models as “Evidence”

Cross-posted from Coyoteblog

The BBC has decided not to ever talk to climate skeptics again, based in part on the “evidence” of computer modelling:

Climate change skeptics are being banned from BBC News, according to a new report, for fear of misinforming people and to create more of a “balance” when discussing man-made climate change.

The latest casualty is Nigel Lawson, former London chancellor and climate change skeptic, who has just recently been barred from appearing on BBC. Lord Lawson, who has written about climate change, said the corporation is silencing the debate on global warming since he discussed the topic on its Radio 4 Today program in February.

This skeptic accuses “Stalinist” BBC of succumbing to pressure from those with renewable energy interests, like the Green Party, in an editorial for the Daily Mail.

He appeared on February 13 debating with scientist Sir Brian Hoskins, chairman of the Grantham Institute for Climate Change at Imperial College, London, to discuss recent flooding that supposedly was linked to man-made climate change.

Despite the fact that the two intellectuals had a “thoroughly civilized discussion,” BBC was “overwhelmed by a well-organized deluge of complaints” following the program. Naysayers harped on the fact that Lawson was not a scientist and said he had no business voicing his opinion on the subject.

Among the objections, including one from Green Party politician Chit Chong, were that Lawson’s views were not supported by evidence from computer modeling.

I see this all the time.  A lot of things astound me in the climate debate, but perhaps the most astounding has been to be accused of being “anti-science” by people who have such a poor grasp of the scientific process.

Computer models and their output are not evidence of anything.  Computer models are extremely useful when we have hypotheses about complex, multi-variable systems.  It may not be immediately obvious how to test these hypotheses, so computer models can take these hypothesized formulas and generate predicted values of measurable variables that can then be used to compare to actual physical observations.

This is no different (except in speed and scale) from a person in the 18th century sitting down with Newton’s gravitational equations and grinding out five years of predicted positions for Venus (in fact, the original meaning of the word “computer” was a human being who ground out numbers in just this way).  That person and his calculations are the exact equivalent of today’s computer models.  We wouldn’t say that those lists of predictions for Venus were “evidence” that Newton was correct.  We would use these predictions and compare them to actual measurements of Venus’s position over the next five years.  If they matched, we would consider that match to be the real evidence that Newton may be correct.
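The same logic in miniature, with a made-up example (a constant-acceleration fall instead of Venus, and invented "measurements"): the model only generates predictions; it is the comparison step that produces evidence.

```python
# Toy version of "model output is not evidence; the match with observation is."
# A hypothesized law (constant acceleration, g assumed 9.8 m/s^2) generates
# predictions; only comparison against measurements tests the hypothesis.
g = 9.8  # hypothesized parameter

def predicted_drop(t):
    return 0.5 * g * t * t  # model: distance fallen (m) after t seconds

observations = {1.0: 4.9, 2.0: 19.6, 3.0: 44.2}  # invented measurements (m)

for t, measured in observations.items():
    error = abs(predicted_drop(t) - measured)
    verdict = "consistent" if error < 0.5 else "model diverges"
    print(f"t={t}s predicted={predicted_drop(t):.1f} measured={measured} -> {verdict}")
```

The table of `predicted_drop` values is the equivalent of the Venus calculations: worthless as evidence on its own, meaningful only in the last two lines where it meets the measurements.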

So it is not the existence of the models or their output that is evidence that catastrophic man-made global warming theory is correct.  The evidence would be the output of these predictive models actually matching what plays out in reality.  Which is why skeptics think the divergence between climate model temperature forecasts and actual temperatures is important, but we will leave that topic for other days.

The other problem with models

The other problem with computer models, besides the fact that they are not and cannot constitute evidence in and of themselves, is that their results are often sensitive to small changes in tuning or setting of variables, and that these decisions about tuning are often totally opaque to outsiders.

I did computer modelling for years, though of markets and economics rather than climate.  But the techniques are substantially the same.  And the pitfalls.

Confession time.  In my very early days as a consultant, I did something I am not proud of.  I was responsible for a complex market model based on a lot of market research and customer service data.  Less than a day before the big presentation, and with all the charts and conclusions made, I found a mistake that skewed the results.  In later years I would have the moral courage and confidence to cry foul and halt the process, but at the time I ended up tweaking a few key variables to make the model continue to spit out results consistent with our conclusion.  It is embarrassing enough I have trouble writing this for public consumption 25 years later.

But it was so easy.  A few tweaks to assumptions and I could get the answer I wanted.  And no one would ever know.  Someone could stare at the model for an hour and not recognize the tuning.

Robert Caprara has similar thoughts in the WSJ (probably behind a paywall).  Hat tip to a reader:

The computer model was huge—it analyzed every river, sewer treatment plant and drinking-water intake (the places in rivers where municipalities draw their water) in the country. I’ll spare you the details, but the model showed huge gains from the program as water quality improved dramatically. By the late 1980s, however, any gains from upgrading sewer treatments would be offset by the additional pollution load coming from people who moved from on-site septic tanks to public sewers, which dump the waste into rivers. Basically the model said we had hit the point of diminishing returns.

When I presented the results to the EPA official in charge, he said that I should go back and “sharpen my pencil.” I did. I reviewed assumptions, tweaked coefficients and recalibrated data. But when I reran everything the numbers didn’t change much. At our next meeting he told me to run the numbers again.

After three iterations I finally blurted out, “What number are you looking for?” He didn’t miss a beat: He told me that he needed to show $2 billion of benefits to get the program renewed. I finally turned enough knobs to get the answer he wanted, and everyone was happy…

I realized that my work for the EPA wasn’t that of a scientist, at least in the popular imagination of what a scientist does. It was more like that of a lawyer. My job, as a modeler, was to build the best case for my client’s position. The opposition will build its best case for the counter argument and ultimately the truth should prevail.

If opponents don’t like what I did with the coefficients, then they should challenge them. And during my decade as an environmental consultant, I was often hired to do just that to someone else’s model. But there is no denying that anyone who makes a living building computer models likely does so for the cause of advocacy, not the search for truth.

Another Plea to Global Warming Alarmists on the Phrase “Climate Denier”

Cross-posted from Coyoteblog

Stop calling me and other skeptics “climate deniers“.  No one denies that there is a climate.  It is a stupid phrase.

I am willing, even at the risk of the obvious parallel that is being drawn to the Holocaust deniers, to accept the “denier” label, but it has to be attached to a proposition I actually deny, or that can even be denied.

As help in doing so, here are a few reminders (these would also apply to many mainstream skeptics — I am not an outlier):

  • I don’t deny that climate changes over time — who could?  So I am not a climate change denier
  • I don’t deny that the Earth has warmed over the last century (something like 0.7C).  So I am not a global warming denier
  • I don’t deny that man’s CO2 has some incremental effect on warming, and perhaps climate change (in fact, man affects climate with many more of his activities than just CO2 — land use, with cities on the one hand and irrigated agriculture on the other, has measurable effects on the climate).  So I am not a man-made climate change or man-made global warming denier.

What I deny is the catastrophe — the proposition that man-made global warming** will cause catastrophic climate changes whose adverse effects will outweigh both the benefits of warming as well as the costs of mitigation.  I believe that warming forecasts have been substantially exaggerated (in part due to positive feedback assumptions) and that tales of current climate change trends are greatly exaggerated and based more on noting individual outlier events than on real data on trends (see hurricanes, for example).

Though it loses some of this nuance, I would probably accept “man-made climate catastrophe denier” as a title.

** Postscript — as a reminder, there is absolutely no science that CO2 can change the climate except through the intermediate step of warming.   If you believe it is possible for CO2 to change the climate without there being warming (in the air, in the oceans, somewhere), then you have no right to call anyone else anti-science and you should go review your subject before you continue to embarrass yourself and your allies.

My Thoughts on Steven Goddard and His Fabricated Temperature Data Claim

Cross-posted from Coyote Blog.

Steven Goddard of the Real Science blog has a study that claims that US real temperature data is being replaced by fabricated data.  Christopher Booker has a sympathetic overview of the claims.

I believe that there is both wheat and chaff in this claim, and I would like to try to separate the two as best I can.  I don’t have time to write a well-organized article, so here is just a list of thoughts:

  1. At some level it is surprising that this is suddenly news.  Skeptics have criticized the adjustments in the surface temperature database for years.
  2. There is certainly a signal to noise ratio issue here that mainstream climate scientists have always seemed insufficiently concerned about.  Specifically, the raw data for US temperatures is mostly flat, such that the manual adjustments to the temperature data set are about equal in magnitude to the total warming signal.  When the entire signal one is trying to measure is equal to the manual adjustments one is making to measurements, it probably makes sense to put a LOT of scrutiny on the adjustments.  (This is a post from 7 years ago discussing these adjustments.  Note that these adjustments are less than current ones in the data base as they have been increased, though I cannot find a similar chart any more from the NOAA discussing the adjustments)
  3. The NOAA HAS made adjustments to US temperature data over the last few years that have increased the apparent warming trend.  These changes in adjustments have not been well-explained.  In fact, they have not really been explained at all, and have only been detected by skeptics who happened to archive old NOAA charts and created comparisons like the one below.  Here is the before and after animation (pre-2000 NOAA US temperature history vs. post-2000).  History has been cooled and modern temperatures have been warmed from where they were being shown previously by the NOAA.  This does not mean the current version is wrong, but since the entire US warming signal was effectively created by these changes, it is not unreasonable to ask for a detailed reconciliation (particularly when the folks preparing the chart all believe that temperatures are going up, and so would be predisposed to treating a flat temperature chart like the earlier version as wrong and in need of correction).
    1998changesannotated
  4. However, manual adjustments are not, as some skeptics seem to argue, wrong or biased in all cases.  There are real reasons for manual adjustments to data — for example, if GPS signal data was not adjusted for relativistic effects, the position data would quickly get out of whack.  In the case of temperature data:
    • Data is adjusted for shifts in the start/end time for a day of measurement away from local midnight (ie if you average 24 hours starting and stopping at noon).  This is called Time of Observation or TOBS.  When I first encountered this, I was just sure it had to be BS.  For a month of data, you are only shifting the data set by 12 hours or about 1/60 of the month.  Fortunately for my self-respect, before I embarrassed myself I created a spreadsheet to monte carlo some temperature data and play around with this issue.  I convinced myself the Time of Observation adjustment is valid in theory, though I have no way to validate its magnitude  (one of the problems with all of these adjustments is that NOAA and other data authorities do not release the source code or raw data to show how they come up with these adjustments).   I do think it is valid in science to question a finding, even without proof that it is wrong, when the authors of the finding refuse to share replication data.  Steven Goddard, by the way, believes time of observation adjustments are exaggerated and do not follow NOAA’s own specification.
    • Stations move over time.  A simple example is if it is on the roof of a building and that building is demolished, it has to move somewhere else.  In an extreme example the station might move to a new altitude or a slightly different micro-climate.  There are adjustments in the data base for these sort of changes.  Skeptics have occasionally challenged these, but I have no reason to believe that the authors are not using best efforts to correct for these effects (though again the authors of these adjustments bring criticism on themselves for not sharing replication data).
    • The technology the station uses for measurement changes (e.g. thermometers to electronic devices, one type of electronic device to another, etc.)   These measurement technologies sometimes have known biases.  Correcting for such biases is perfectly reasonable  (though a frustrated skeptic could argue that the government is diligent in correcting for new cooling biases but seldom corrects for warming biases, such as in the switch from bucket to water intake measurement of sea surface temperatures).
    • Even if the temperature station does not move, the location can degrade.  The clearest example is a measurement point that once was in the country but has been engulfed by development  (here is one example — this at one time was the USHCN measurement point with the most warming since 1900, but it was located in an open field in 1900 and ended up in an asphalt parking lot in the middle of Tucson.)   Since urban heat islands can add as much as 10 degrees F to nighttime temperatures, this can create a warming signal over time that is related to a particular location, and not the climate as a whole.  The effect is undeniable — my son easily measured it in a science fair project.  The effect it has on temperature measurement is hotly debated between warmists and skeptics.  Al Gore originally argued that there was no bias because all measurement points were in parks, which led Anthony Watts to pursue the surface station project where every USHCN station was photographed and documented.  The net result was that most of the sites were pretty poor.  Whatever the case, there is almost no correction in the official measurement numbers for urban heat island effects, and in fact last time I looked at it the adjustment went the other way, implying urban heat islands have become less of an issue since 1930.  The folks who put together the indexes argue that they have smoothing algorithms that find and remove these biases.  Skeptics argue that they just smear the bias around over multiple stations.  The debate continues.
  5. Overall, many mainstream skeptics believe that actual surface warming in the US and the world has been about half what is shown in traditional indices, an amount that is then exaggerated by poorly crafted adjustments and uncorrected heat island effects.  But note that almost no skeptic I know believes that the Earth has not actually warmed over the last 100 years.  Further, warming since about 1980 is hard to deny because we have a second, independent way to measure global temperatures in satellites.  These devices may have their own issues, but they are not subject to urban heat biases or location biases and further actually measure most of the Earth’s surface, rather than just individual points that are sometimes scores or hundreds of miles apart.  This independent method of measurement has shown undoubted warming since 1979, though not since the late 1990’s.
  6. As is usual in such debates, I find words like “fabrication”, “lies”,  and “myth” to be less than helpful.  People can be totally wrong, and refuse to confront their biases, without being evil or nefarious.

Postscript:  Not exactly on topic, but one thing that is never, ever mentioned in the press but is generally true about temperature trends — almost all of the warming we have seen is in nighttime temperatures, rather than daytime.  Here is an example from Amherst, MA (because I just presented up there).  This is one reason why, despite claims in the media, we are not hitting any more all-time daytime highs than we would expect from a normal distribution.  If you look at temperature stations for which we have 80+ years of data, fewer than 10% of the 100-year highs were set in the last 10 years.  We are setting an unusual number of records for high low temperature, if that makes sense.
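The "fewer than 10%" figure is about what a no-trend world would produce: if a station's 100 annual maxima are exchangeable, the all-time record is equally likely to land in any year, so roughly 10% of stations should have set their record in the last decade purely by chance.  A quick simulation (entirely synthetic, i.i.d. data with no trend at all) illustrates the null expectation:

```python
import random

def frac_records_in_last_decade(n_stations=10000, years=100, seed=1):
    """Simulate stations whose annual high temperatures are i.i.d. (no
    trend at all) and count how often the all-time record falls in the
    final 10 years."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_stations):
        maxima = [rng.gauss(0.0, 1.0) for _ in range(years)]
        record_year = maxima.index(max(maxima))  # year of the 100-year high
        if record_year >= years - 10:
            hits += 1
    return hits / n_stations

print(frac_records_in_last_decade())  # hovers around 0.10, i.e. 10%
```

So a record rate under 10% is consistent with no trend in daytime extremes; a strong warming trend in daytime highs would push that fraction well above 10%.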

click to enlarge

 

The Thought Experiment That First Made Me A Climate Skeptic

Please check out my Forbes post today.  Here is how it begins:

Last night, the accumulated years of being called an evil-Koch-funded-anti-science-tobacco-lawyer-Holocaust-Denier finally caught up with me.  I wrote something like 3000 words of indignation about climate alarmists corrupting the very definition of science by declaring their work “settled”, answering difficult scientific questions with the equivalent of voting, and telling everyone the way to be pro-science is to listen to self-designated authorities and shut up.  I looked at the draft this morning and while I agreed with everything written, I decided not to publish a whiny ode of victimization.  There are plenty of those floating around already.

And then, out of the blue, I received an email from a stranger.  Last year I had helped to sponsor a proposal to legalize gay marriage in Arizona.  I was doing some outreach to folks in the libertarian community who had no problem with gay marriage (after all, they are libertarians) but were concerned that marriage licensing should not be a government activity at all and were therefore lukewarm about our proposition.  I suppose I could have called them bigots, or homophobic, or in the pay of Big Hetero — but instead I gathered and presented data on the number of different laws, such as inheritance, where rights and privileges were tied to marriage.  I argued that the government was already deeply involved with marriage, and fairness therefore demanded that more people have access to these rights and privileges.  Just yesterday I had a reader send me an email that said, simply, “you changed my mind on gay marriage.”  It made my day.  If only climate discussion could work this way.

So I decided the right way to drive change in the climate debate is not to rant about it but instead to continue to model what I consider good behavior — fact-based discussion and a recognition that reasonable people can disagree without that disagreement implying one or the other has evil intentions or is mean-spirited.

This analysis was originally published about 8 years ago, and there is no longer an online version.  So for fun, I thought I would reproduce my original thought experiment on climate models that led me to the climate dark side.

I have been flattered over time that folks like Matt Ridley have picked up on bits and pieces of this analysis.  See it all here.

Explaining the Flaw in Kevin Drum’s (and Apparently Science Magazine’s) Climate Chart

Cross-Posted from Coyoteblog

I won’t repeat the analysis, you need to see it here.  Here is the chart in question:

la-sci-climate-warming

My argument is that the smoothing and relatively low sampling intervals in the early data very likely mask variations similar to what we are seeing in the last 100 years — i.e. they greatly exaggerate the smoothness of history (also the grey range bands are self-evidently garbage, but that is another story).

Drum’s response was that “it was published in Science.”  Apparently, this sort of appeal to authority is what passes for data analysis in the climate world.

Well, maybe I did not explain the issue well.  So I found a political analysis that may help Kevin Drum see the problem.  This is from an actual blog post by Dave Manuel (this seems to be such a common data analysis fallacy that I found an example on the first page of my first Google search).  It is an analysis of average GDP growth by President.  I don’t know this Dave Manuel guy and can’t comment on the data quality, but let’s assume the data is correct for a moment.  Quoting from his post:

Here are the individual performances of each president since 1948:

1948-1952 (Harry S. Truman, Democrat), +4.82%
1953-1960 (Dwight D. Eisenhower, Republican), +3%
1961-1964 (John F. Kennedy / Lyndon B. Johnson, Democrat), +4.65%
1965-1968 (Lyndon B. Johnson, Democrat), +5.05%
1969-1972 (Richard Nixon, Republican), +3%
1973-1976 (Richard Nixon / Gerald Ford, Republican), +2.6%
1977-1980 (Jimmy Carter, Democrat), +3.25%
1981-1988 (Ronald Reagan, Republican), +3.4%
1989-1992 (George H. W. Bush, Republican), +2.17%
1993-2000 (Bill Clinton, Democrat), +3.88%
2001-2008 (George W. Bush, Republican), +2.09%
2009 (Barack Obama, Democrat), -2.6%

Let’s put this data in a chart:

click to enlarge

 

Look, a hockey stick, right?   Obama is the worst, right?

In fact there is a big problem with this analysis, even if the data is correct.  And I bet Kevin Drum can get it right away, even though it is the exact same problem as on his climate chart.

The problem is that a single year of Obama’s is compared to four or eight years for other presidents.  These earlier presidents may well have had individual down economic years – in fact, Reagan’s first year was almost certainly a down year for GDP.  But that kind of volatility is masked because the data points for the other presidents represent much more time, effectively smoothing variability.

Now, this chart has a difference in sampling frequency of 4-8x between the previous presidents and Obama.  This made a huge difference here, but it is a trivial difference compared to the 1 million times greater sampling frequency of modern temperature data vs. historical data obtained by looking at proxies (such as ice cores and tree rings).  And, unlike this chart, the method of sampling is very different across time with temperature – thermometers today are far more reliable and linear measurement devices than trees or ice.  In our GDP example, this problem roughly equates to trying to compare the GDP under Obama (with all the economic data we collate today) to, say, the economic growth rate under Henry VIII.  Or perhaps under Ramses II.   If I showed that GDP growth in a single month under Obama was less than the average over 66 years under Ramses II, and tried to draw some conclusion from that, I think someone might challenge my analysis.  Unless of course it appears in Science, then it must be beyond question.
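The masking effect is easy to reproduce.  Here is a toy calculation (made-up growth numbers, not real GDP figures) showing how averaging over a whole term hides a recession year, while a single unaveraged year looks anomalous by comparison:

```python
# Synthetic annual growth rates for one 8-year "term" that contains a
# -2.6% recession year, versus that recession year sampled on its own.
term = [4.1, 3.5, -2.6, 2.8, 3.9, 4.2, 3.0, 3.3]

term_avg = sum(term) / len(term)   # the whole term, smoothed into one number
single_year = term[2]              # one year, unsmoothed

print(round(term_avg, 2))   # comfortably positive: the recession vanishes
print(single_year)          # -2.6 looks "unprecedented" only because it is unaveraged
```

Compare the single down year against a series of term-length averages and it will always look like an unprecedented collapse, even when similar years are buried inside every other average.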

If You Don’t Like People Saying That Climate Science is Absurd, Stop Publishing Absurd Un-Scientific Charts

Reprinted from Coyoteblog

Kevin Drum takes exception to those who call climate science a “myth”.  As is usual for global warming supporters, he wraps himself in the mantle of science while implying that those who don’t toe the line on the declared consensus are somehow anti-science.

Readers will know that as a lukewarmer, I have as little patience with outright CO2 warming deniers as I do with those declaring a catastrophe  (for my views read this and this).  But if you are going to simply be thunderstruck that some people don’t trust climate scientists, then don’t post a chart that is a great example of why people think that a lot of global warming science is garbage.  Here is Drum’s chart:

la-sci-climate-warming

The problem is that his chart is a splice of multiple data series with very different time resolutions.  The series up to about 1850 has data points taken at best every 50 years and likely at 100-200 year or more intervals.  It is smoothed so that temperature shifts lasting less than 200 years or so simply won’t show up.

In contrast, the data series after 1850 has data sampled every day or even hour.  It has a sampling interval 6 orders of magnitude (over a million times) more frequent.  It by definition is smoothed on a time scale substantially shorter than the rest of the data.

In addition, these two data sets use entirely different measurement techniques.  The modern data comes from thermometers and satellites, measurement approaches that we understand fairly well.  The earlier data comes from some sort of proxy analysis (ice cores, tree rings, sediments, etc.).  While we know these proxies generally change with temperature, there are still a lot of questions as to their accuracy and, perhaps more importantly for us here, whether they vary linearly or have any sort of attenuation of the peaks.  For example, recent warming has not shown up as strongly in tree ring proxies, raising the question of whether they may also be missing rapid temperature changes or peaks in earlier data for which we don’t have thermometers to back-check them (this is an oft-discussed problem called proxy divergence).

The problem is not the accuracy of the data for the last 100 years, though we could quibble that it is perhaps exaggerated by a few tenths of a degree.  The problem is with the historic data and using it as a valid comparison to recent data.  Even a 100 year increase of about a degree would, in the data series before 1850, be at most a single data point.  If the sampling is on 200 year intervals, there is a 50-50 chance a 100 year spike would be missed entirely in the historic data.  And even if it were in the data as a single data point, it would be smoothed out at this data scale.

Do you really think that there was never a 100-year period in those last 10,000 years where the temperatures varied by more than 0.1F, as implied by this chart?  This chart has a data set that is smoothed to signals no finer than about 200 years and compares it to recent data with no such filter.  It is like comparing the annualized GDP increase for the last quarter to the average annual GDP increase for the entire 19th century.   It is easy to demonstrate how silly this is.  If you cut the chart off at say 1950, before much anthropogenic effect will have occurred, it would still look like this, with an anomalous spike at the right (just a bit shorter).  If you believe this analysis, you have to believe that there is an unprecedented spike at the end even without anthropogenic effects.
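To make the point concrete, here is a toy demonstration (entirely synthetic numbers, not a reconstruction): bury a 100-year, 1-degree spike in 10,000 years of otherwise flat data, sample every 200 years, and lightly smooth.  Depending on where the samples fall, the spike is either missed entirely or averaged down to a fraction of its true height:

```python
def sample_and_smooth(series, step=200, window=3):
    """Sample a yearly series every `step` years, then apply a centered
    moving average over `window` sampled points."""
    sampled = series[::step]
    half = window // 2
    smoothed = []
    for i in range(len(sampled)):
        lo, hi = max(0, i - half), min(len(sampled), i + half + 1)
        smoothed.append(sum(sampled[lo:hi]) / (hi - lo))
    return smoothed

# 10,000 flat years with a 1.0-degree spike lasting from year 5000 to 5099
series = [0.0] * 10000
for yr in range(5000, 5100):
    series[yr] = 1.0

print(max(series))                     # 1.0: the spike is plainly there
print(max(sample_and_smooth(series)))  # a small fraction of 1.0: it all but vanishes
```

Shift the spike by 100 years and the 200-year sampling misses it altogether, which is exactly the 50-50 chance described above.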

There are several other issues with this chart that make it laughably bad for someone to use in the context of arguing that he is the true defender of scientific integrity:

  • The grey range band is if anything an even bigger scientific absurdity than the main data line.  Are they really trying to argue that there were no years, or decades, or even whole centuries that deviated from a 0.7F baseline anomaly by more than 0.3F for the entire 4000 year period from 7500 years ago to 3500 years ago?  I will bet just about anything that the error bars on this analysis should be more than 0.3F, to say nothing of the range of variability around the mean.  Any natural scientist worth his or her salt would laugh this out of the room.  It is absurd.  But here it is presented as climate science in the exact same article in which the author expresses dismay that anyone would distrust climate science.
  • A more minor point, but one that disguises the sampling frequency problem a bit, is that the last dark brown shaded area on the right that is labelled “the last 100 years” is actually at least 300 years wide.  Based on the scale, a hundred years should be about one dot on the x-axis.  This means that 100 years is less than the width of the red line, and the last 60 years (the real anthropogenic period) is less than half the width of the red line.  We are talking about a temperature change whose duration is half the width of the red line, which hopefully gives you some idea why I say the data sampling and smoothing processes would disguise any past periods similar to the most recent one.

Update:  Kevin Drum posted a defense of this chart on Twitter.  Here it is:  “It was published in Science.”   Well folks, there is the climate debate in a nutshell.   A 1000-word dissection of what appears to be wrong with a particular analysis, answered with a five-word appeal to authority.

Update On My Climate Model (Spoiler: It’s Doing a Lot Better than the Pros)

Cross posted from Coyoteblog

In this post, I want to discuss my just-for-fun model of global temperatures I developed 6 years ago.  But more importantly, I am going to come back to some lessons about natural climate drivers and historic temperature trends that should have great relevance to the upcoming IPCC report.

In 2007, for my first climate video, I created an admittedly simplistic model of global temperatures.  I did not try to model any details within the climate system.  Instead, I attempted to tease out a very few (it ended up being three) trends from the historic temperature data and simply projected them forward.  Each of these trends has a logic grounded in physical processes, but the values I used were pure regression rather than any bottom up calculation from physics.  Here they are:

  • A long term trend of 0.4C warming per century.  This can be thought of as a sort of base natural rate for the post-little ice age era.
  • An additional linear trend beginning in 1945 of an additional 0.35C per century.  This represents combined effects of CO2 (whose effects should largely appear after mid-century) and higher solar activity in the second half of the 20th century  (Note that this is way, way below the mainstream estimates in the IPCC of the historic contribution of CO2, as it implies the maximum historic contribution is less than 0.2C)
  • A cyclic trend that looks like a sine wave centered on zero (such that over time it adds nothing to the long term trend) with a period of about 63 years.  Think of this as representing the net effect of cyclical climate processes such as the PDO and AMO.
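For the curious, the three components can be sketched in a few lines of code (the linear coefficients come from the bullets above; the sine amplitude and phase are illustrative guesses, since I tuned the originals by hand and do not restate them here):

```python
import math

def model_anomaly(year, amp=0.2, phase_year=1995):
    """Toy sketch of the three-component model: base trend, post-1945
    additional trend, and a 63-year sine cycle (amp/phase are guesses)."""
    base = 0.004 * (year - 1900)            # 0.4C per century, post-little-ice-age
    extra = 0.0035 * max(0, year - 1945)    # additional 0.35C per century after 1945
    cycle = amp * math.sin(2 * math.pi * (year - phase_year) / 63)
    return base + extra + cycle

# Over one full 63-year cycle the sine contributes nothing net, but it can
# flatten shorter stretches even while the underlying trends keep rising.
print(round(model_anomaly(2058) - model_anomaly(1995), 4))  # ~0.4725, the pure trend
```

Note how a structure this simple can still produce multi-decade flat spots on top of a rising trend, which is the whole point of the cyclic term.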

Put in graphical form, here are these three drivers (the left axis in both is degrees C, re-centered to match the centering of Hadley CRUT4 temperature anomalies).  The two linear trends:

click to enlarge

 

And the cyclic trend:

click to enlarge

These two charts are simply added and then can be compared to actual temperatures.  This is the way the comparison looked in 2007 when I first created this “model”:

click to enlarge

The historic match is no great feat.  The model was admittedly tuned to match history (yes, unlike the pros who all tune their models, I admit it).  The linear trends as well as the sine wave period and amplitude were adjusted to make the fit work.

However, it is instructive to note that a simple model of a linear trend plus sine wave matches history so well, particularly since it assumes such a small contribution from CO2 and since, in prior IPCC reports, the IPCC and most modelers simply refused to include cyclic functions like the AMO and PDO in their models.  You will note that the Coyote Climate Model was projecting a flattening, even a decrease in temperatures when everyone else in the climate community was projecting that blue temperature line heading up and to the right.

So, how are we doing?  I never really meant the model to have predictive power.  I built it just to make some points about the potential role of cyclic functions in the historic temperature trend.  But based on updated Hadley CRUT4 data through July, 2013, this is how we are doing:

click to enlarge

 

Not too shabby.  Anyway, I do not insist on the model, but I do want to come back to a few points about temperature modeling and cyclic climate processes in light of the new IPCC report coming soon.

The decisions of climate modelers do not always make sense or seem consistent.  The best framework I can find for explaining their choices is to hypothesize that every choice is driven by trying to make the forecast future temperature increase as large as possible.  In past IPCC reports, modelers refused to acknowledge any natural or cyclic effects on global temperatures, and actually made statements that a) variations in the sun’s output were too small to change temperatures in any measurable way and b) it was not necessary to include cyclic processes like the PDO and AMO in their climate models.

I do not know why these decisions were made, but they had the effect of maximizing the amount of past warming that could be attributed to CO2, thus maximizing potential climate sensitivity numbers and future warming forecasts.  The reason for this was that the IPCC based nearly the totality of their conclusions about past warming rates and CO2 on the period 1978-1998.  They may talk about “since 1950”, but you can see from the chart above that all of the warming since 1950 actually happened in that narrow 20-year window.  During that 20-year window, though, solar activity, the PDO and the AMO were also all peaking or in their warm phases.  So if the IPCC were to acknowledge that any of those natural effects had any influence on temperatures, they would have to reduce the amount of warming attributed to CO2 between 1978 and 1998, and thus their large future warming forecasts would have become even harder to justify.

Now, fast forward to today.  Global temperatures have been flat since about 1998, or for about 15 years or so.  This is difficult to explain for the IPCC, since almost none of the 60+ models in their ensembles predicted this kind of pause in warming.  In fact, temperature trends over the last 15 years have fallen below the 95% confidence level of nearly every climate model used by the IPCC.  So scientists must either change their models (eek!) or else they must explain why they still are correct but missed the last 15 years of flat temperatures.

The IPCC is likely to take the latter course.  Rumor has it that they will attribute the warming pause to… ocean cycles and the sun (those things the IPCC said last time were irrelevant).  As you can see from my model above, this is entirely plausible.  My model has an underlying 0.75C per century trend after 1945, but even with this trend actual temperatures hit a 30-year flat spot after the year 2000.   So it is entirely possible for an underlying trend to be temporarily masked by cyclical factors.

BUT.  And this is a big but.  You can also see from my model that you can’t assume that these factors caused the current “pause” in warming without also acknowledging that they contributed to the warming from 1978-1998, something the IPCC seems loath to do.  I do not know how the IPCC is going to deal with this.  I hate to think the worst of people, but I do not think it is beyond them to say that these factors offset greenhouse warming for the last 15 years but did not increase warming the 20 years before that.

We shall see.  To be continued….

Climate Groundhog Day

I posted something like this over at my other blog but I suppose I should post it here as well.  Folks ask me why I have not been blogging much here on climate, and the reason is that it has just gotten too repetitive.  It is like the movie Groundhog Day, with the same flawed studies being refuted in the same ways.  Or, if you want another burrowing mammal analogy, being a climate skeptic has become a giant game of Whack-a-Mole, with each day bringing a new flawed argument from alarmists that must be refuted.  But we never accumulate any score — skeptics have pretty much killed Gore’s ice core analysis, the hockey stick, the myth that CO2 is reducing snows on Kilimanjaro, Gore’s 20 feet of sea rise — the list goes on and on.  But we get no credit — we are still the ones who are supposedly anti-science.

This is a hobby, and not even my main hobby, so I have decided to focus on what I enjoy best about the climate debate, and that is making live presentations.  To this end, you will continue to see posts here with updated presentations and videos, and possibly a new analysis or two as I find better ways to present the material (by the way, if you have a large group, I am happy to come speak — I do not charge a speaker fee and can often pay for the travel myself).

However, while we are on the subject of climate Groundhog Day (where every day repeats itself over and over), let me tell you in advance what stories skeptic sites like WUWT and Bishop Hill and Climate Depot will be running in the coming months on the IPCC.  I can predict these with absolute certainty because they are the same stories run on the last IPCC report, and I don’t expect those folks at the IPCC to change their stripes.  So here are your future skeptic site headlines:

  1. Science sections of recent IPCC report were forced to change to fit the executive summary written by political appointees
  2. The recent IPCC report contains a substantial number of references to non-peer reviewed gray literature
  3. In the IPCC report, a couple of studies that fend off key skeptic attacks either have not yet even been published or were included despite being released after the cut off date set for studies to be included in the report
  4. In several sections of the recent IPCC report, the lead author ignored most other studies and evidence on the matter at hand and based their chapter mostly on their own research
  5. In its conclusions, the IPCC expresses absolute confidence in a statement about anthropogenic warming so vague that most skeptics might agree with the proposition.  Media then reported this as 97% confidence in 5 degrees of warming per century and 20 feet of sea rise
  6. The hockey stick has been reworked and is still totally flawed
  7. Non-CO2 causes of weather and weather-related effects (e.g. the sun or anthropogenic contributions like soot) are downplayed or ignored in the most recent IPCC report
  8. The words “urban heat island” appear nowhere in the IPCC report.  There is no consideration of the quality of the surface temperature record, its measurement, or the manual adjustments made to it.
  9. Most of the key studies in the IPCC report have not archived their data and refuse to release their data or software code to any skeptic for replication

Oh, I suppose it will not be all Groundhog Day.  I will predict a new one.  The old headlines were “IPCC ignores ocean cycles as partial cause for 1978-1998 warming”.  This report will be different.  Now stories will read for the new report, “IPCC blames warming hiatus on cooling from ocean cycles, but says ocean cycles have nothing to do with earlier warming”.

Amherst, MA Presentation, March 7

I will be rolling out version 3.0 of my presentation on climate that has already been around the Internet and back a couple of times.  Called “Don’t Panic:  The Science of the Climate Skeptic Position”, it will be given at 7PM in the Pruyne Lecture Hall at Amherst College on March 7, 2013.  Come by if you are in the area.

Topics include:

  • What does it mean when people say “97% of scientists agree with global warming?”   This statement turns out to be substantially less powerful when one understands the propositions actually tested.
  • The greenhouse gas effect of CO2 is a fact (did I surprise you?) but it is a second, unproven theory of strong positive feedbacks in the climate that causes the hypothesized catastrophe.
  • The world has indeed warmed over the last century, but not enough to be consistent with catastrophic forecasts, and not all due to CO2
  • While good science is being done, the science behind knock-on effects of global warming (e.g. claims that global warming caused Sandy) is often non-existent or embarrassingly bad.  Too often, the media is extrapolating from single data points
  • The “precautionary principle” ignores real negative effects of carbon rationing, particularly in lesser developed countries.

Speaker Pledge

The tone of the global warming debate is often terrible (on both sides).  The speaker will assume those who disagree are persons of goodwill.   The speaker will not resort to ad hominem attacks or discussion of funding sources and motivations.

Climate De-Bait and Switch

Dealing with facile arguments that are supposedly perfect refutations of the climate skeptics’ position is a full-time job akin to cleaning the Augean Stables.  A few weeks ago Kevin Drum argued that global warming added 3 inches to Sandy’s 14-foot storm surge, which he said was an argument that totally refuted skeptics and justified massive government restrictions on energy consumption (or whatever).

This week Slate (and Desmog blog) think they have the ultimate killer chart, one they call a “slam dunk” on skeptics.  Click through to my column this week at Forbes to see if they really do.

Lame, Desperate Climate Alarm Logic

Via Kevin Drum:

Chris Mooney reports today that there’s also a very simple reason: global warming has raised sea levels by about eight inches over the past century, and this means that when Sandy swept ashore it had eight extra inches of water to throw at us…. So that’s that. No shilly shallying. No caveats. “There is 100 percent certainty that sea level rise made this worse,” says sea level expert Ben Strauss. “Period.”

Hmm, OK.  First, to be clear, sea level rise over the last 100 years has been 17-20cm, which is 6.7-7.7 inches, which the author alarmingly rounded up to 8 inches.  But the real problem is the incredible bait and switch here.  They are talking about the dangers of anthropogenic global warming, but include the sea level rise from all warming effects, most of which occurred long before we were burning fossil fuels at anywhere near current rates.  For example, almost half this rise was before 1950, when few argue that warming and sea level rise was due to man.  In fact, sea level rise is really a story of a constant 2-3mm a year rise since about 1850 as the world warms from the little ice age.  There has been no modern acceleration.

Graph—Global mean sea level: 1870–2007(source)

It is pretty heroic to blame all of a trend on an input that really only appeared significantly about 2/3 into the period on this chart.  By this chart, over the period since 1950 (the period in which the IPCC attributes warming mostly to man’s CO2), sea level rise is only 10cm, or about 4 inches.  And to even claim four inches from CO2 since 1950 one would have to make the astonishing claim that whatever natural effect was driving sea levels higher since the mid-19th century suddenly halted at the exact same moment man began burning fossil fuels in earnest.  I’m not sure that the Sandy storm surge could even be measured to a precision of four inches or less.

Assuming three of the four inches are due to anthropogenic CO2, then the storm surge was 1.8% higher due to global warming (taking 14 feet as the storm surge maximum, a number on which there is little agreement, confirming my hypothesis above that we are arguing in the noise).  Mooney’s argument is that damage goes up exponentially with surge height.  Granting this is true, this means that Sandy was perhaps 3.5% worse due to man-made higher sea levels.
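For those checking my math, here is the arithmetic (the squared-damage assumption is just my reading of “goes up exponentially”; Mooney does not give an exact function):

```python
surge_ft = 14.0       # Sandy's reported maximum storm surge
attributed_in = 3.0   # inches of sea level rise attributed to CO2 (from above)

surge_in = surge_ft * 12
surge_increase = attributed_in / surge_in   # fractional increase in surge height
print(round(100 * surge_increase, 1))       # 1.8 (percent)

# If damage scales roughly with the square of surge height ("exponentially"):
damage_increase = (1 + surge_increase) ** 2 - 1
print(round(100 * damage_increase, 1))      # 3.6 (percent)
```

Either way, we are arguing over a few percent on a 14-foot surge, which is well inside the measurement noise.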

So there you have your stark choice — you can shut down the global economy and throw billions of people in India and China back into horrendous poverty, or your 100-year storms will be 3.5% worse.  You make the call.

I would argue that one could find a far bigger contribution to Sandy’s nastiness in New York’s almost pathological refusal to accept in advance of Sandy that their city might be targeted by an Atlantic storm.  Huge percentages of the affected areas of the city are actually fill areas, and there is absolutely no evidence of sea walls or any sort of storm preparation.  I would have thought it impossible to find a seacoast city worse prepared for a storm than was New Orleans, but New York seems to have surpassed it.

As I wrote before, it is crazy to use Sandy as “proof” of a severe storm trend when in fact we are in the midst of a relative hurricane drought.  There is no evidence that the seas in Sandy’s storm track have seen any warming over the last century.

Extrapolating From A Single Data Point: Climate and Sandy

I have a new article up at Forbes on how crazy it is to extrapolate conclusions about the speed and direction of climate change from a single data point.

Positing a trend from a single data point without any supporting historical information has become a common media practice in discussing climate.  As I wrote several months ago, the media did the same thing with the hot summer, arguing frequently that this recent hot dry summer proved a trend for extreme temperatures, drought, and forest fires.  In fact, none of these are the case — this summer was not unprecedented on any of these dimensions and no upward trend is detectable in long-term drought or fire data.   Despite a pretty clear history of warming over the last century, it is even hard to establish any trend in high temperature extremes  (in large part because much of the warming has been in warmer night-time lows rather than in daytime highs).  See here for the data.

As I said in that earlier article, when the media posits a trend, demand a trendline, not just a single data point.

To this end, I try to bring some actual trend data to the trend discussion.

A Great Example of How The Climate Debate is Broken

A climate alarmist posts a “Bet” on a site called Truthmarket that she obviously believes is a dagger to the heart of climate skeptics.  Heck, she is putting up $5,000 of her own money on it.  The amazing part is that the proposition she is betting on is entirely beside the point.  She is betting on the truth of a statement that many skeptics would agree with.

This is how the climate debate has gone wrong.  Alarmists are trying to shift the debate from the key points they can’t prove to facile points they can.  And the media lets them get away with it.

Read about it in my post this week at Forbes.com

I Was Right About Monnett

When the news first came out that Charles Monnett, observer of the famous drowned polar bear, was under investigation by the Obama Administration, I cautioned that:

  1. If you read between the lines in the news articles, we really have no idea what is going on.  The guy could have falsified his travel expense reports.
  2. The likelihood that an Obama Administration agency would be trying to root out academic fraud at all, or that if they did so they would start here, seems absurd to me.
  3. There is no room for fraud because the study was, on its face, facile and useless.  The authors basically extrapolated from a single data point.  As I tell folks all the time, if you have only one data point, you can draw virtually any trend line you want through it.  They had no evidence of what caused the bear deaths or if they were in any way typical or part of a trend — it was all pure speculation and crazy extrapolation.  How could there be fraud when there was not any data here in the first place?  The fraud was in the media, Al Gore, and ultimately the EPA treating this with any sort of gravitas.
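The single-data-point problem can even be written down in one line of algebra: for one observation (x0, y0), every slope m fits perfectly, because the intercept b = y0 - m*x0 drives the residual to zero.  A trivial illustration with made-up numbers:

```python
# One hypothetical observation: a single year and a single observed count
x0, y0 = 2006.0, 4.0

for m in (-10.0, 0.0, 0.5, 100.0):   # wildly different candidate "trends"
    b = y0 - m * x0                  # pick the intercept that hits the point
    residual = (m * x0 + b) - y0
    print(m, residual)               # residual is 0.0 for every slope
```

With one point there is literally nothing in the data to prefer a steeply rising trend over a falling one; you need at least a second point before any trend claim means anything.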

As I expected, while the investigation looked into the polar bear study, the decision seems to have nothing to do with polar bears or academic fraud.  The most-transparent-administration-ever seems to be upset that Monnett shared some emails that made the agency look bad.  These are documents that, to my eye, appear to be public records that you or I should have been able to FOIA anyway had we known they existed.  But despite all the Bush-bashing (of which I was an enthusiastic participant), Obama has been far more aggressive in punishing and prosecuting leakers.  In fact, Monnett may be able to get himself a payday under whistle-blower statutes.