Category Archives: Climate Science Process

Matt Ridley: What the Climate Wars Did to Science

I cannot recommend Matt Ridley’s new article strongly enough.  It covers a lot of ground, but here are a few highlights.

Ridley argues that science generally works (in a manner entirely parallel to how well-functioning commercial markets work) because there are generally incentives to challenge hypotheses.  I would add that if anything, the incentives tend to be balanced more towards challenging conventional wisdom.  If someone puts a stake in the ground and says that A is true, then there is a lot more money and prestige awarded to someone who can prove A is not true than for the thirteenth person to confirm that A is indeed true.

This process breaks down, however, when political pressures undermine this natural market of ideas and turn the incentives for challenging hypotheses into punishments.

Lysenkoism, a pseudo-biological theory that plants (and people) could be trained to change their heritable natures, helped starve millions and yet persisted for decades in the Soviet Union, reaching its zenith under Nikita Khrushchev. The theory that dietary fat causes obesity and heart disease, based on a couple of terrible studies in the 1950s, became unchallenged orthodoxy and is only now fading slowly.

What these two ideas have in common is that they had political support, which enabled them to monopolise debate. Scientists are just as prone as anybody else to “confirmation bias”, the tendency we all have to seek evidence that supports our favoured hypothesis and dismiss evidence that contradicts it—as if we were counsel for the defence. It’s tosh that scientists always try to disprove their own theories, as they sometimes claim, and nor should they. But they do try to disprove each other’s. Science has always been decentralised, so Professor Smith challenges Professor Jones’s claims, and that’s what keeps science honest.

What went wrong with Lysenko and dietary fat was that in each case a monopoly was established. Lysenko’s opponents were imprisoned or killed. Nina Teicholz’s book  The Big Fat Surprise shows in devastating detail how opponents of Ancel Keys’s dietary fat hypothesis were starved of grants and frozen out of the debate by an intolerant consensus backed by vested interests, echoed and amplified by a docile press….

This is precisely what has happened with the climate debate and it is at risk of damaging the whole reputation of science.

Here is one example of the consequences:

Look what happened to a butterfly ecologist named Camille Parmesan when she published a paper on “Climate and Species Range” that blamed climate change for threatening the Edith checkerspot butterfly with extinction in California by driving its range northward. The paper was cited more than 500 times, she was invited to speak at the White House and she was asked to contribute to the IPCC’s third assessment report.

Unfortunately, a distinguished ecologist called Jim Steele found fault with her conclusion: there had been more local extinctions in the southern part of the butterfly’s range due to urban development than in the north, so only the statistical averages moved north, not the butterflies. There was no correlated local change in temperature anyway, and the butterflies have since recovered throughout their range.  When Steele asked Parmesan for her data, she refused. Parmesan’s paper continues to be cited as evidence of climate change. Steele meanwhile is derided as a “denier”. No wonder a highly sceptical ecologist I know is very reluctant to break cover.

He goes on to lament something that is very familiar to me — there is a strong argument for the lukewarmer position, but the media will not even acknowledge it exists.  Either you are a full-on believer or you are a denier.

The IPCC actually admits the possibility of lukewarming within its consensus, because it gives a range of possible future temperatures: it thinks the world will be between about 1.5 and four degrees warmer on average by the end of the century. That’s a huge range, from marginally beneficial to terrifyingly harmful, so it is hardly a consensus of danger, and if you look at the “probability density functions” of climate sensitivity, they always cluster towards the lower end.

What is more, in the small print describing the assumptions of the “representative concentration pathways”, it admits that the top of the range will only be reached if sensitivity to carbon dioxide is high (which is doubtful); if world population growth re-accelerates (which is unlikely); if carbon dioxide absorption by the oceans slows down (which is improbable); and if the world economy goes in a very odd direction, giving up gas but increasing coal use tenfold (which is implausible).

But the commentators ignore all these caveats and babble on about warming of “up to” four degrees (or even more), then castigate as a “denier” anybody who says, as I do, the lower end of the scale looks much more likely given the actual data. This is a deliberate tactic. Following what the psychologist Philip Tetlock called the “psychology of taboo”, there has been a systematic and thorough campaign to rule out the middle ground as heretical: not just wrong, but mistaken, immoral and beyond the pale. That’s what the word denier with its deliberate connotations of Holocaust denial is intended to do. For reasons I do not fully understand, journalists have been shamefully happy to go along with this fundamentally religious project.

The whole thing reads like a lukewarmer manifesto.  Honestly, Ridley writes about 1000% better than I do, so rather than my trying to summarize it, go read it.

HydroInfra: Scam! Investment Honeypot for Climate Alarmists

Cross-posted from Coyoteblog.

I got an email today from some random Gmail account asking me to write about HydroInfra.  OK.  The email begins: “HydroInfra Technologies (HIT) is a Stockholm based clean tech company that has developed an innovative approach to neutralizing carbon fuel emissions from power plants and other polluting industries that burn fossil fuels.”

Does it eliminate CO2?  NOx?  Particulates?  SOx?  I actually was at the bottom of my inbox for once, so I went to the site and clicked through to this applications page.  Apparently, it eliminates the “toxic cocktail” of pollutants that includes all the ones I mentioned plus mercury and heavy metals.  Wow!  That is some stuff.

Their key product is a process for making something they call “HydroAtomic Nano Gas” or HNG.  It sounds like their PR guys got Michael Crichton and JJ Abrams drunk in a brainstorming session for pseudo-scientific names.

But hold on, this is the best part.  Check out the description of HNG and how it is made:

Splitting water (H20) is a known science. But the energy costs to perform splitting outweigh the energy created from hydrogen when the Hydrogen is split from the water molecule H2O.

This is where mainstream science usually closes the book on the subject.

We took a different approach by postulating that we could split water in an energy efficient way to extract a high yield of Hydrogen at very low cost.

A specific low energy pulse is put into water. The water molecules line up in a certain structure and are split from the Hydrogen molecules.

The result is HNG.

HNG is packed with ‘Exotic Hydrogen’

Exotic Hydrogen is a recent scientific discovery.

HNG carries an abundance of Exotic Hydrogen and Oxygen.

On a Molecular level, HNG is a specific ratio mix of Hydrogen and Oxygen.

The unique qualities of HNG show that the placement of its’ charged electrons turns HNG into an abundant source of exotic Hydrogen.

HNG displays some very different properties from normal hydrogen.

Some basic facts:

  • HNG instantly neutralizes carbon fuel pollution emissions
  • HNG can be pressurized up to 2 bars.
  • HNG combusts at a rate of 9000 meters per second while normal Hydrogen combusts at a rate 600 meters per second.
  • Oxygen values actually increase when HNG is inserted into a diesel flame.
  • HNG acts like a vortex on fossil fuel emissions causing the flame to be pulled into the center thus concentrating the heat and combustion properties.
  • HNG is stored in canisters, arrayed around the emission outlet channels. HNG is injected into the outlets to safely & effectively clean up the burning of fossil fuels.
  • The pollution emissions are neutralized instantly & safely with no residual toxic cocktail or chemicals to manage after the HNG burning process is initiated.

Exotic Hydrogen!  I love it.  This is probably a component of the “red matter” in the Abrams Star Trek reboot.  Honestly, someone please tell me this is a joke, a honeypot for mindless environmental activist drones.    What are the chemical reactions going on here?  If CO2 is captured, what form does it take?  How does a mixture of Hydrogen and Oxygen molecules in whatever state they are in do anything with heavy metals?  None of this is on the website.   On their “validation” page, they have big labels like “Horiba” that look like organizations that have somehow put their imprimatur on the study.  In fact, they are just names of analytical equipment makers.  It’s like putting “IBM” in big print on your climate study because you ran your model on an IBM computer.

SCAM!  Honestly, when you see an article written to attract investment that sounds sort of impressive to laymen but makes absolutely no sense to anyone who knows the smallest amount of Chemistry or Physics, it is an investment scam.

But they seem to get a lot of positive press.  In my Google search, everything in the first ten pages or so is just uncritical republication of their press releases on environmental and business blogs.   You actually have to go into the comments sections of these articles to find anyone willing to observe that this is all total BS.   If you want to totally understand why the global warming debate gets nowhere, watch commenter Michael at this link desperately try to hold onto his faith in HydroInfra while people who actually know things try to explain why this makes no sense.

Computer Models as “Evidence”

Cross-posted from Coyoteblog

The BBC has decided not to ever talk to climate skeptics again, in part based on the “evidence” of computer modelling:

Climate change skeptics are being banned from BBC News, according to a new report, for fear of misinforming people and to create more of a “balance” when discussing man-made climate change.

The latest casualty is Nigel Lawson, former London chancellor and climate change skeptic, who has just recently been barred from appearing on BBC. Lord Lawson, who has written about climate change, said the corporation is silencing the debate on global warming since he discussed the topic on its Radio 4 Today program in February.

This skeptic accuses “Stalinist” BBC of succumbing to pressure from those with renewable energy interests, like the Green Party, in an editorial for the Daily Mail.

He appeared on February 13 debating with scientist Sir Brian Hoskins, chairman of the Grantham Institute for Climate Change at Imperial College, London, to discuss recent flooding that supposedly was linked to man-made climate change.

Despite the fact that the two intellectuals had a “thoroughly civilized discussion,” BBC was “overwhelmed by a well-organized deluge of complaints” following the program. Naysayers harped on the fact that Lawson was not a scientist and said he had no business voicing his opinion on the subject.

Among the objections, including one from Green Party politician Chit Chong, were that Lawson’s views were not supported by evidence from computer modeling.

I see this all the time.  A lot of things astound me in the climate debate, but perhaps the most astounding has been to be accused of being “anti-science” by people who have such a poor grasp of the scientific process.

Computer models and their output are not evidence of anything.  Computer models are extremely useful when we have hypotheses about complex, multi-variable systems.  It may not be immediately obvious how to test these hypotheses, so computer models can take these hypothesized formulas and generate predicted values of measurable variables that can then be used to compare to actual physical observations.

This is no different (except in speed and scale) from a person in the 18th century sitting down with Newton’s gravitational equations and grinding out five years of predicted positions for Venus (in fact, the original meaning of the word “computer” was a human being who ground out numbers in just this way).  That person and his calculations are the exact equivalent of today’s computer models.  We wouldn’t say that those lists of predictions for Venus were “evidence” that Newton was correct.  We would use these predictions and compare them to actual measurements of Venus’s position over the next five years.  If they matched, we would consider that match to be the real evidence that Newton may be correct.

So it is not the existence of the models or their output that is evidence that catastrophic man-made global warming theory is correct.  The evidence would be the output of these predictive models actually matching what plays out in reality.  Which is why skeptics think the divergence between climate model temperature forecasts and actual temperatures is important, but we will leave that topic for other days.
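To make the Venus analogy concrete, here is a minimal sketch (all numbers are invented for illustration) of what actually constitutes evidence — not the forecast itself, but the comparison of the forecast against subsequent observations:

```python
import numpy as np

# Toy comparison of a model forecast against later observations.
# The trend values and noise level below are invented for illustration.
rng = np.random.default_rng(0)
years = np.arange(2000, 2015)

forecast = 0.030 * (years - 2000)   # model-predicted anomaly, deg C
observed = 0.012 * (years - 2000) + rng.normal(0, 0.05, years.size)

rmse = np.sqrt(np.mean((forecast - observed) ** 2))
print(f"RMSE of forecast vs. observations: {rmse:.3f} deg C")
# A small RMSE supports the hypothesis embodied in the model;
# a persistent divergence counts against it.
```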

The other problem with models

The other problem with computer models, besides the fact that they are not and cannot constitute evidence in and of themselves, is that their results are often sensitive to small changes in tuning or setting of variables, and that these decisions about tuning are often totally opaque to outsiders.

I did computer modelling for years, though of markets and economics rather than climate.  But the techniques are substantially the same.  And the pitfalls.

Confession time.  In my very early days as a consultant, I did something I am not proud of.  I was responsible for a complex market model based on a lot of market research and customer service data.  Less than a day before the big presentation, and with all the charts and conclusions made, I found a mistake that skewed the results.  In later years I would have the moral courage and confidence to cry foul and halt the process, but at the time I ended up tweaking a few key variables to make the model continue to spit out results consistent with our conclusion.  It is embarrassing enough I have trouble writing this for public consumption 25 years later.

But it was so easy.  A few tweaks to assumptions and I could get the answer I wanted.  And no one would ever know.  Someone could stare at the model for an hour and not recognize the tuning.
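Here is a minimal sketch of how this works (a toy model, not my actual old consulting model): two different “tunings” fit the same short history almost equally well, yet diverge badly out of sample, and nothing in the historical fit reveals which knobs were turned.

```python
import numpy as np

# Toy model: y = a*x + b*x**2.  Both parameter sets below are invented;
# the point is that the historical fit cannot distinguish them.
x_hist = np.arange(1, 6)          # the five periods we can observe
history = 2.0 * x_hist            # pretend the past was roughly linear

tunings = {"tuning A": (2.0, 0.00), "tuning B": (1.6, 0.08)}
for name, (a, b) in tunings.items():
    fit_error = np.abs(a * x_hist + b * x_hist**2 - history).max()
    forecast = a * 20 + b * 20**2  # extrapolate to period 20
    print(f"{name}: worst error vs. history = {fit_error:.2f}, "
          f"forecast at period 20 = {forecast:.1f}")
# Both fit the past within half a unit; the forecasts are 40 vs. 64.
```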

Robert Caprara has similar thoughts in the WSJ (probably behind a paywall).  Hat tip to a reader.

The computer model was huge—it analyzed every river, sewer treatment plant and drinking-water intake (the places in rivers where municipalities draw their water) in the country. I’ll spare you the details, but the model showed huge gains from the program as water quality improved dramatically. By the late 1980s, however, any gains from upgrading sewer treatments would be offset by the additional pollution load coming from people who moved from on-site septic tanks to public sewers, which dump the waste into rivers. Basically the model said we had hit the point of diminishing returns.

When I presented the results to the EPA official in charge, he said that I should go back and “sharpen my pencil.” I did. I reviewed assumptions, tweaked coefficients and recalibrated data. But when I reran everything the numbers didn’t change much. At our next meeting he told me to run the numbers again.

After three iterations I finally blurted out, “What number are you looking for?” He didn’t miss a beat: He told me that he needed to show $2 billion of benefits to get the program renewed. I finally turned enough knobs to get the answer he wanted, and everyone was happy…

I realized that my work for the EPA wasn’t that of a scientist, at least in the popular imagination of what a scientist does. It was more like that of a lawyer. My job, as a modeler, was to build the best case for my client’s position. The opposition will build its best case for the counter argument and ultimately the truth should prevail.

If opponents don’t like what I did with the coefficients, then they should challenge them. And during my decade as an environmental consultant, I was often hired to do just that to someone else’s model. But there is no denying that anyone who makes a living building computer models likely does so for the cause of advocacy, not the search for truth.

Explaining the Flaw in Kevin Drum’s (and Apparently Science Magazine’s) Climate Chart

Cross-Posted from Coyoteblog

I won’t repeat the analysis; you need to see it here.  Here is the chart in question:

[Chart: la-sci-climate-warming]

My argument is that the smoothing and the sparse sampling intervals in the early data very likely mask variations similar to what we are seeing in the last 100 years — i.e., they greatly exaggerate the smoothness of history (also, the grey range bands are self-evidently garbage, but that is another story).

Drum’s response was that “it was published in Science.”  Apparently, this sort of appeal to authority is what passes for data analysis in the climate world.

Well, maybe I did not explain the issue well.  So I found a political analysis that may help Kevin Drum see the problem.  This is from an actual blog post by Dave Manuel (this seems to be such a common data analysis fallacy that I found an example on the first page of my first Google search).  It is an analysis of average GDP growth by President.  I don’t know this Dave Manuel guy and can’t comment on the data quality, but let’s assume the data is correct for a moment.  Quoting from his post:

Here are the individual performances of each president since 1948:

1948-1952 (Harry S. Truman, Democrat), +4.82%
1953-1960 (Dwight D. Eisenhower, Republican), +3%
1961-1964 (John F. Kennedy / Lyndon B. Johnson, Democrat), +4.65%
1965-1968 (Lyndon B. Johnson, Democrat), +5.05%
1969-1972 (Richard Nixon, Republican), +3%
1973-1976 (Richard Nixon / Gerald Ford, Republican), +2.6%
1977-1980 (Jimmy Carter, Democrat), +3.25%
1981-1988 (Ronald Reagan, Republican), 3.4%
1989-1992 (George H. W. Bush, Republican), 2.17%
1993-2000 (Bill Clinton, Democrat), 3.88%
2001-2008 (George W. Bush, Republican), +2.09%
2009 (Barack Obama, Democrat), -2.6%

Let’s put this data in a chart:

[Chart: average GDP growth by president, per the data above]

Look, a hockey stick, right?   Obama is the worst, right?

In fact there is a big problem with this analysis, even if the data is correct.  And I bet Kevin Drum can get it right away, even though it is the exact same problem as on his climate chart.

The problem is that a single year of Obama’s is compared to four or eight years for other presidents.  These earlier presidents may well have had individual down economic years – in fact, the deep 1981-82 recession early in Reagan’s term almost certainly included a down year for GDP.  But that kind of volatility is masked because the data points for the other presidents represent much more time, effectively smoothing out the variability.

Now, this chart has a difference in sampling frequency of 4-8x between the previous presidents and Obama.  This made a huge difference here, but it is a trivial difference compared to the 1-million-times greater sampling frequency of modern temperature data vs. historical data obtained by looking at proxies (such as ice cores and tree rings).  And, unlike this chart, the method of sampling is very different across time with temperature – thermometers today are far more reliable and linear measurement devices than trees or ice.  In our GDP example, this problem roughly equates to trying to compare the GDP under Obama (with all the economic data we collate today) to, say, the economic growth rate under Henry VIII.  Or perhaps under Ramses II.   If I showed that GDP growth in a single month under Obama was less than the average over 66 years under Ramses II, and tried to draw some conclusion from that, I think someone might challenge my analysis.  Unless of course it appears in Science; then it must be beyond question.
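If you want to see the averaging problem in miniature, here is a sketch with hypothetical growth numbers (invented purely for illustration): a recession year inside an eight-year term vanishes into the term average, while a stand-alone year shows at full strength.

```python
import numpy as np

# Hypothetical annual GDP growth rates, invented for illustration only.
term = [5.0, -1.9, 4.5, 7.2, 4.1, 3.5, 3.4, 4.2]  # 8-year term, one recession
single_year = [-2.6]                               # one year of data

print(f"8-year term average: {np.mean(term):+.2f}%")           # +3.75%
print(f"single-year 'average': {np.mean(single_year):+.2f}%")  # -2.60%
# The -1.9% recession inside the term is invisible in the +3.75% average,
# while the single-year data point shows its recession at full strength.
```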

If You Don’t Like People Saying That Climate Science is Absurd, Stop Publishing Absurd Un-Scientific Charts

Reprinted from Coyoteblog

Kevin Drum is upset that some people dismiss climate science as a “myth”.  As is usual for global warming supporters, he wraps himself in the mantle of science while implying that those who don’t toe the line on the declared consensus are somehow anti-science.

Readers will know that as a lukewarmer, I have as little patience with outright CO2 warming deniers as I do with those declaring a catastrophe  (for my views read this and this).  But if you are going to simply be thunderstruck that some people don’t trust climate scientists, then don’t post a chart that is a great example of why people think that a lot of global warming science is garbage.  Here is Drum’s chart:

[Chart: la-sci-climate-warming]

The problem is that his chart is a splice of multiple data series with very different time resolutions.  The series up to about 1850 has data points taken at best every 50 years, and likely at intervals of 100-200 years or more.  It is smoothed so that temperature shifts lasting less than 200 years or so simply won’t show up.

In contrast, the data series after 1850 has data sampled every day or even hour.  Its sampling frequency is 6 orders of magnitude (over a million times) higher.  It is, by definition, smoothed on a time scale substantially shorter than the rest of the data.

In addition, these two data sets use entirely different measurement techniques.  The modern data comes from thermometers and satellites, measurement approaches that we understand fairly well.  The earlier data comes from some sort of proxy analysis (ice cores, tree rings, sediments, etc.)  While we know these proxies generally change with temperature, there are still a lot of questions as to their accuracy and, perhaps more importantly for us here, whether they vary linearly or have any sort of attenuation of the peaks.  For example, recent warming has not shown up as strongly in tree ring proxies, raising the question of whether they may also be missing rapid temperature changes or peaks in earlier data for which we don’t have thermometers to back-check them (this is an oft-discussed problem called proxy divergence).

The problem is not the accuracy of the data for the last 100 years, though we could quibble that it is perhaps exaggerated by a few tenths of a degree.  The problem is with the historic data and using it as a valid comparison to recent data.  Even a 100-year increase of about a degree would, in the data series before 1850, be at most a single data point.  If the sampling is on 200-year intervals, there is a 50-50 chance a 100-year spike would be missed entirely in the historic data.  And even if it were in the data as a single data point, it would be smoothed out at this data scale.

Do you really think that there was never a 100-year period in those last 10,000 years where the temperatures varied by more than 0.1F, as implied by this chart?  This chart has a data set that is smoothed to signals no finer than about 200 years and compares it to recent data with no such filter.  It is like comparing the annualized GDP increase for the last quarter to the average annual GDP increase for the entire 19th century.   It is easy to demonstrate how silly this is.  If you cut the chart off at, say, 1950, before much anthropogenic effect would have occurred, it would still look like this, with an anomalous spike at the right (just a bit shorter).  If you believe this analysis, you have to believe that there is an unprecedented spike at the end even without anthropogenic effects.
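The 50-50 claim above is easy to check with a small simulation (a synthetic setup, not actual proxy data): drop a 100-year spike at random into a 10,000-year record sampled once every 200 years and count how often any sample lands inside it.

```python
import random

# Monte Carlo check: with one sample every 200 years, how often does a
# randomly placed 100-year spike hit at least one sample?  Synthetic setup.
random.seed(1)
trials, hits = 100_000, 0
for _ in range(trials):
    start = random.uniform(0, 10_000 - 100)   # spike covers [start, start+100)
    if any(start <= s < start + 100 for s in range(0, 10_001, 200)):
        hits += 1
print(f"spike caught by at least one sample: {hits / trials:.1%}")  # ~50%
# And even when caught, it is a single point that smoothing then flattens.
```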

There are several other issues with this chart that make it laughably bad for someone to use in the context of arguing that he is the true defender of scientific integrity:

  • The grey range band is if anything an even bigger scientific absurdity than the main data line.  Are they really trying to argue that there were no years, or decades, or even whole centuries that deviated from a 0.7F baseline anomaly by more than 0.3F for the entire 4,000-year period from 7,500 years ago to 3,500 years ago?  I will bet just about anything that the error bars alone on this analysis should be more than 0.3F, to say nothing of the range of variability around the mean.  Any natural scientist worth his or her salt would laugh this out of the room.  It is absurd.  But here it is presented as climate science in the exact same article in which the author expresses dismay that anyone would distrust climate science.
  • A more minor point, but one that disguises the sampling frequency problem a bit: the last dark brown shaded area on the right, labelled “the last 100 years,” is actually at least 300 years wide.  Based on the scale, a hundred years should be about one dot on the x-axis.  This means that 100 years is less than the width of the red line, and the last 60 years (the real anthropogenic period) is less than half the width of the red line.  We are talking about a temperature change whose duration is less than half the width of the red line, which hopefully gives you some idea why I say the data sampling and smoothing processes would disguise any past periods similar to the most recent one.

Update:  Kevin Drum posted a defense of this chart on Twitter.  Here it is:  “It was published in Science.”   Well folks, there is the climate debate in a nutshell.   A 1,000-word dissection of what appears to be wrong with a particular analysis, retorted by a five-word appeal to authority.

Climate Groundhog Day

I posted something like this over at my other blog, but I suppose I should post it here as well.  Folks ask me why I have not been blogging much here on climate, and the reason is that it has just gotten too repetitive.  It is like the movie Groundhog Day, with the same flawed studies being refuted in the same ways.  Or, if you want another burrowing mammal analogy, being a climate skeptic has become a giant game of Whack-a-Mole, with each day bringing a new flawed argument from alarmists that must be refuted.  But we never accumulate any score — skeptics have pretty much killed Gore’s ice core analysis, the hockey stick, the myth that CO2 is reducing snows on Kilimanjaro, Gore’s 20 feet of sea rise — the list goes on and on.  But we get no credit — we are still the ones who are supposedly anti-science.

This is a hobby, and not even my main hobby, so I have decided to focus on what I enjoy best about the climate debate, and that is making live presentations.  To this end, you will continue to see posts here with updated presentations and videos, and possibly a new analysis or two as I find better ways to present the material (by the way, if you have a large group, I am happy to come speak — I do not charge a speaker fee and can often pay for the travel myself).

However, while we are on the subject of climate Groundhog Day (where every day repeats itself over and over), let me tell you in advance what stories skeptic sites like WUWT and Bishop Hill and Climate Depot will be running in the coming months on the IPCC.  I can predict these with absolute certainty because they are the same stories run on the last IPCC report, and I don’t expect those folks at the IPCC to change their stripes.  So here are your future skeptic site headlines:

  1. Science sections of recent IPCC report were forced to change to fit the executive summary written by political appointees
  2. The recent IPCC report contains a substantial number of references to non-peer reviewed gray literature
  3. In the IPCC report, a couple of studies that fend off key skeptic attacks either have not yet been published or were included despite being released after the cut-off date set for studies to be included in the report
  4. In several sections of the recent IPCC report, the lead author ignored most other studies and evidence on the matter at hand and based their chapter mostly on their own research
  5. In its conclusions, the IPCC expresses absolute confidence in a statement about anthropogenic warming so vague that most skeptics might agree with the proposition.  The media will then report this as 97% confidence in 5 degrees of warming per century and 20 feet of sea rise
  6. The hockey stick has been reworked and is still totally flawed
  7. Non-CO2 causes of weather and weather-related effects (e.g., the sun, or anthropogenic contributions like soot) are downplayed or ignored in the most recent IPCC report
  8. The words “urban heat island” appear nowhere in the IPCC report.  There is no consideration of the quality of the surface temperature record, its measurement, or the manual adjustments made to it.
  9. Most of the key studies in the IPCC report have not archived their data and refuse to release their data or software code to any skeptic for replication

Oh, I suppose it will not be all Groundhog Day.  I will predict a new one.  The old headline was “IPCC ignores ocean cycles as partial cause for 1978-1998 warming.”  This report will be different.  For the new report, stories will read, “IPCC blames warming hiatus on cooling from ocean cycles, but says ocean cycles have nothing to do with earlier warming.”

Climate De-Bait and Switch

Dealing with facile arguments that are supposedly perfect refutations of the climate skeptics’ position is a full-time job akin to cleaning the Augean Stables.  A few weeks ago Kevin Drum argued that global warming added 3 inches to Sandy’s 14-foot storm surge, which he said was an argument that totally refuted skeptics and justified massive government restrictions on energy consumption (or whatever).

This week Slate (and the Desmog blog) think they have the ultimate killer chart, one they call a “slam dunk” on skeptics.  Click through to my column this week at Forbes to see if they really do.

A Great Example of How The Climate Debate is Broken

A climate alarmist posts a “Bet” on a site called Truthmarket that she obviously believes is a dagger to the heart of climate skeptics.  Heck, she is putting up $5,000 of her own money on it.  The amazing part is that the proposition she is betting on is entirely beside the point.  She is betting on the truth of a statement that many skeptics would agree with.

This is how the climate debate has gone wrong.  Alarmists are trying to shift the debate from the key points they can’t prove to facile points they can.  And the media lets them get away with it.

Read about it in my post this week at Forbes.com

I Was Right About Monnett

When the news first came out that Charles Monnett, observer of the famous drowned polar bear, was under investigation by the Obama Administration, I cautioned that:

  1. If you read between the lines in the news articles, we really have no idea what is going on.  The guy could have falsified his travel expense reports
  2. The likelihood that an Obama Administration agency would be trying to root out academic fraud at all, or that if they did so they would start here, seems absurd to me.
  3. There is no room for fraud because the study was, on its face, facile and useless.  The authors basically extrapolated from a single data point.  As I tell folks all the time, if you have only one data point, you can draw virtually any trend line you want through it.  They had no evidence of what caused the bear deaths or if they were in any way typical or part of a trend — it was all pure speculation and crazy extrapolation.  How could there be fraud when there was not any data here in the first place?  The fraud was in the media, Al Gore, and ultimately the EPA treating this with any sort of gravitas.

As I expected, while the investigation looked into the polar bear study, the decision seems to have nothing to do with polar bears or academic fraud.  The most-transparent-administration-ever seems to be upset that Monnett shared some emails that made the agency look bad.  These are documents that, to my eye, appear to be public records that you or I should have been able to FOIA anyway had we known they existed.  But despite all the Bush-bashing (of which I was an enthusiastic participant), Obama has been far more aggressive in punishing and prosecuting leakers.  In fact, Monnett may be able to get himself a payday under whistle-blower statutes.

Lewandowsky et al. Proves Skeptics are Reasonable and Pro-Science

I am not sure it is worth beating this dead horse any further, but I will make one final observation about Lewandowsky.  As a reminder, the study purported to link skeptics with belief in odd conspiracy theories, particularly the theory that the Apollo 11 landings were faked (a conclusion highlighted in the title of the press release).

Apparently the study got this conclusion based on a trivial 10 responses out of hundreds from folks who self-identified as skeptics, but due to the horrible methodology many may not actually have been such.

But here is the interesting part.  Even if the data was good, it would mean that less than 0.2% of the “skeptics” adopted the moon landing conspiracy theory.  Compare this to the general population:

 A 1999 Gallup poll found that a scant 6 percent of Americans doubted the Apollo 11 moon landing happened, and there is anecdotal evidence that the ranks of such conspiracy theorists, fueled by innuendo-filled documentaries and the Internet, are growing.

Twenty-five percent of respondents to a survey in the British magazine Engineering & Technology said they do not believe humans landed on the moon. A handful of Web sites and blogs circulate suspicions about NASA’s “hoax.”

And a Google search this week for “Apollo moon landing hoax” yielded more than 1.5 billion results.  (more here)

By Lewandowsky’s own data, skeptics are 30-100 times less gullible than the average American or Brit.

By the way, I have spent a lot of time debunking silly 9/11 theories.  Here is one example of a science-based response to Rosie O’Donnell (a famous climate alarmist, by the way) and her claim that burning jet fuel can’t melt steel, so therefore the WTC had to have been destroyed by demolition charges set by Dick Cheney, or something like that.

Worst Study Ever?

I have to agree with JoNova; the Lewandowsky study ostensibly linking climate skeptics to moon-landing-deniers is perhaps the worst study I have seen in a really long time.   This is another sign of postmodernism run wild in the sciences, where having the “right” answer is more important than actually being able to prove it.

The whole story is simply delicious, given that the atrocious methodology is paired with a self-important mission by the authors of supposedly defending science against its detractors.  I can’t do the whole mess justice without just repeating her whole post, so go visit the article.

For the record, I have never seriously doubted that the moon landings really happened or that cigarettes cause cancer.  Also, I will add my name to the list of skeptical bloggers who were not contacted about the study — though I am a small fry, I am pretty easy to find given my URL.

By the way, the article mentions 9/11 truthers only in passing.  This is probably not an accident.  I would bet just about any amount of money that there is a good correlation between 9/11 conspiracy theorists and climate alarmists.

A Response to Popular Ad Hominem, err Science, Magazine on Global Warming Skeptics

My new column is up at Forbes.com, and addresses the most recent Popular Science hit piece on climate skeptics:

I thought I knew what “science” was about:  the crafting of hypotheses that could be tested and refined through observation, via studies that were challenged and replicated by the broader community, until the hypothesis is generally accepted or rejected.

But apparently “popular science” works differently, if the July 2012 article by Tom Clynes in the periodical of that name is any guide [I will link the article when it is online].  In an article called “the Battle,” Clynes serves up an amazing skewering of skeptics that the most extreme environmental group might have blushed at publishing.  After reading this article, it seems that “popular science” consists mainly of initiating a sufficient number of ad hominem attacks against those with whom one disagrees such that one is no longer required to even answer their scientific criticisms.

The article is a sort of hall-of-fame of every ad hominem attack made on skeptics – tobacco lawyers, Holocaust Deniers, the Flat Earth Society, oil company funding, and the Koch Brothers all make an appearance.

Just one example of the really shoddy journalism in this article:

Clynes mentions the story of Jeffrey Gleason and Charles Monnett, who published an observation of drowned polar bears.   The pair came under review by the Office of Inspector General for “integrity issues.”  The author uses this anecdote as an extreme example of harassment of climate scientists.  He is careful not to mention skeptics in the context of their story, but one is clearly meant to take this as an example of extreme harassment of scientists by skeptics.  Certainly skeptics have criticized their work, but Gleason and Monnett, as Clynes must surely know, came under review by the Obama Administration (certainly not a hotbed of sympathy for skeptics) mainly for ethical lapses around reporting and use of funds.    This is extreme journalistic malfeasance — Gleason’s and Monnett’s job problems have nothing to do with skeptics, but their story is included in a way meant to support the author’s thesis of skeptic harassment.

Read it all

Defending the “Consensus” in Other Scientific Fields

Readers will likely find some parallels here to climate science:  A number of studies dispute whether cutting back on salt consumption to government-recommended levels is really healthier.   Gary Taubes wrote a long opinion piece in the NY Times this Sunday highlighting evidence that eating too little salt can actually increase mortality from heart disease.  Now, I don’t really have a dog in this hunt and haven’t studied the evidence either way, but I thought the reaction of the anti-salt crusaders was familiar:

Proponents of the eat-less-salt campaign tend to deal with this contradictory evidence by implying that anyone raising it is a shill for the food industry and doesn’t care about saving lives. An N.I.H. administrator told me back in 1998 that to publicly question the science on salt was to play into the hands of the industry. “As long as there are things in the media that say the salt controversy continues,” he said, “they win.”

When several agencies, including the Department of Agriculture and the Food and Drug Administration, held a hearing last November to discuss how to go about getting Americans to eat less salt (as opposed to whether or not we should eat less salt), these proponents argued that the latest reports suggesting damage from lower-salt diets should simply be ignored. Lawrence Appel, an epidemiologist and a co-author of the DASH-Sodium trial, said “there is nothing really new.” According to the cardiologist Graham MacGregor, who has been promoting low-salt diets since the 1980s, the studies were no more than “a minor irritation that causes us a bit of aggravation.”

This attitude that studies that go against prevailing beliefs should be ignored on the basis that, well, they go against prevailing beliefs, has been the norm for the anti-salt campaign for decades. Maybe now the prevailing beliefs should be changed. The British scientist and educator Thomas Huxley, known as Darwin’s bulldog for his advocacy of evolution, may have put it best back in 1860. “My business,” he wrote, “is to teach my aspirations to conform themselves to fact, not to try and make facts harmonize with my aspirations.”

Burning Down the House

Steve Zwick walked back his comments about letting skeptics’ houses burn down and tried to clarify the point he was trying to make.  I have further comments in a new Forbes article here.  An excerpt:

Steve Zwick has posted an update to the post I wrote about last week and has decided the house-burning analogy was unproductive.  Fine.  I have written a lot of dumb stuff on a deadline.  In his new post, he has gone so far in the opposite direction, toward balance and fairness, that I am not even sure what his point is any more — the only one I can tease out is that people who intentionally bring bad information to a public debate should be held accountable in some way.  Uh, OK.  If he wants to lock up the entirety of Congress he won’t get any argument out of this libertarian.

Here is the problem with Mr. Zwick’s point in actual application:  Increasingly, many people on both sides of the climate debate have decided that the folks on the other side are not people of goodwill.  They are nefarious.  They lie.  They want to destroy the Earth or they want to promote UN-led world socialism.   If you believe your opponents are well-intentioned but wrong, you say “they are grossly underestimating future climate change which could have catastrophic effects on mankind.”  You don’t talk about punishments, because we don’t punish people who take the wrong scientific position — did we throw those phlogiston proponents in jail?  How about the cold fusion guys?

However, when the debate becomes politicized, we stop believing the other side is well-intentioned.  So you get people like Joe Romm describing the people on the two sides of the debate this way:

But the difference is that those who are trying to preserve a livable climate and hence the health and well-being of our children and billions of people this century quickly denounce the few offensive over-reaches of those who claim to share our goals — but those trying to destroy a livable climate [ie skeptics], well, for them lies and hate speech are the modus operandi, so such behavior is not only tolerated, but encouraged.

This is where the argument goes downhill.   When one group believes the other side is no longer just disagreeing, but “trying to destroy a livable climate” and for whom “lies and hate speech are the modus operandi,” then honest debate is no longer possible.  If I honestly thought a group of people really, truly wanted to destroy a livable climate, I might suggest letting their houses burn down too.

A Vivid Reminder of How The Climate Debate is Broken

My Forbes column is up this week.  I really did not want to write about climate, but when Forbes contributor Steve Zwick wrote this, I had to respond:

We know who the active denialists are – not the people who buy the lies, mind you, but the people who create the lies.  Let’s start keeping track of them now, and when the famines come, let’s make them pay.  Let’s let their houses burn.  Let’s swap their safe land for submerged islands.  Let’s force them to bear the cost of rising food prices.

They broke the climate.  Why should the rest of us have to pay for it?

The bizarre threats and ad hominem attacks have to stop.  Real debate is necessary based on an assumption that our opponents may be wrong, but are still people of good will.  And we need to debate what really freaking matters:

Instead of screwing around in the media trying to assign blame for the recent US heat wave to CO2 and threatening to burn down the houses of those who disagree with us, we should be arguing about what matters.  And the main scientific issue that really matters is understanding climate feedback.  I won’t repeat all of the previous posts (see here and here), but this is worth repeating:

Direct warming from the greenhouse gas effect of CO2 does not create a catastrophe, and at most, according to the IPCC, might warm the Earth another degree over the next century.  The catastrophe comes from the assumption that there are large net positive feedbacks in the climate system that multiply a small initial warming from CO2 many times.  It is this assumption that positive feedbacks dominate over negative feedbacks that creates the catastrophe.  It is telling that when prominent supporters of the catastrophic theory argue the science is settled, they always want to talk about the greenhouse gas effect (which most of us skeptics accept), NOT the positive feedback assumption.  The assumption of net positive climate feedback is not at all settled — in fact there is as much evidence the feedback is net negative as net positive — which may be why catastrophic theory supporters seldom if ever mention this aspect of the science in the media.
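To make the feedback arithmetic concrete, here is a minimal sketch using the standard feedback-gain formula; the 1.2C no-feedback sensitivity is a commonly cited round number, and the feedback fractions are illustrative assumptions, not measurements.

```python
# Standard feedback gain: equilibrium warming = direct warming / (1 - f).
# The 1.2 C no-feedback figure is a commonly cited round number; the
# feedback fractions f below are illustrative assumptions.
direct = 1.2  # deg C per doubling of CO2, before feedbacks

for f in (-0.5, 0.0, 0.3, 0.6, 0.75):
    print(f"net feedback f = {f:+.2f} -> equilibrium warming = "
          f"{direct / (1 - f):.1f} C")
# Net negative feedback gives less than 1.2 C; only strongly positive
# feedback produces the 3-5 C catastrophe numbers.
```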

I said I would offer a counter-proposal to Mr. Zwick’s that skeptics bear the costs of climate change.  I am ready to step up to the cost of any future man-made climate change if Mr. Zwick is ready to write a check for the lost economic activity and increased poverty caused by his proposals.  We are at an exciting point in history where a billion people, or more, in Asia and Africa and Latin America are at the cusp of emerging from millennia of poverty.  To do so, they need to burn every fossil fuel they can get their hands on, not be forced to use rich people’s toys like wind and solar.  I am happy to trade my home for an imaginary one that Zwick thinks will be under water.  Not only is this a great way to upgrade to some oceanfront property, but I am fully confident the crazy Al Gore sea level rise predictions are a chimera, since sea levels have been rising at a fairly constant rate since the end of the Little Ice Age.  In return, perhaps Mr. Zwick can trade his job for one in Asia that disappears when he closes the tap on fossil fuels?

I encourage you to read it all, including an appearance by the summer of the shark.

Tilting at Straw Men

In my Forbes article a few weeks ago, I showed how the arguments alarmists most frequently use to “prove” that skeptics are wrong are actually straw men.  Alarmists want to fight the war over whether the greenhouse gas effect of CO2 is true and whether the world has seen warming over the last century, both propositions that skeptics like myself accept.

The issue for us is whether man is causing a catastrophe (mainly due to large positive feedbacks in the climate system), and whether past warming has been consistent with catastrophic rates of man-made warming.  Both of these propositions are far from proven, and are seldom even discussed in the media.

I found a blog I had not read before on energy policy issues that had a very sensible article on just this issue:

The most frustrating thing about being a scientist skeptical of catastrophic global warming is that the other side is continually distorting what I am skeptical of.

In his immodestly titled New York Review of Books article “Why the Global Warming Skeptics Are Wrong,” economist William Nordhaus presents six questions that the legitimacy of global warming skepticism allegedly rests on.

  1. Is the planet in fact warming?
  2. Are human influences an important contributor to warming?
  3. Is carbon dioxide a pollutant?
  4. Are we seeing a regime of fear for skeptical climate scientists?
  5. Are the views of mainstream climate scientists driven primarily by the desire for financial gain?
  6. Is it true that more carbon dioxide and additional warming will be beneficial?

Since the answers to these questions are allegedly yes, yes, yes and no, no, no, it’s case closed, says Nordhaus.

Except that he is attacking a straw man. Scientists (or non-scientists) who are “skeptics” are skeptical of catastrophic global warming—not warming or human-caused warming as such. So much for 1 and 2. We refuse to label CO2 a “pollutant” because it is essential to life and because we do not believe it has the claimed catastrophic impact. So much for 3. And since 4-6 don’t pertain to the scientific issue of…

Who Wrote the Fake Heartland Strategy Memo?

Certainly Peter Gleick is still in the running.

But as I wrote in Forbes last week, the memo does not have the feel of having been written by a “player” like Gleick.  It feels like someone younger, someone more likely to take the cynical political knife-fighting statements of someone like Gleick (e.g. skeptics are anti-science) and convert them literally (and blindly) to supposed Heartland agenda items like trying to discourage science teaching.  Someone like an intern or student, who might not realize how outrageous their stilted document might look to real adults in the real world, who understand that leaders of even non-profits they dislike don’t generally speak like James Bond villains.   Even Megan McArdle joked “Basically, it reads like it was written from the secret villain lair in a Batman comic.  By an intern.”

Now combine that with a second idea.  Gleick is about the only strong global warming believer mentioned by the fake strategy document.   I don’t think many folks who have observed Heartland from afar would say that Heartland has any special focus on or animus towards Gleick (more than they might have for any other strong advocate of catastrophic man-made global warming theory).   I would not have inferred any such focus by Heartland, and seriously, who would possibly think to single out Peter Gleick of all candidates (vs. Romm or Hansen or Mann et al) in a skeptic attack strategy?

The only person who might have inferred such a rivalry would have been someone close to Gleick, who heard about Heartland mainly from Gleick.  Certainly Gleick seems to have had a particular focus, almost obsession, with Heartland, and so someone who viewed Heartland only through the prism of Gleick’s rants might have inferred that Heartland had something special in for him.  And thus might have featured him prominently in a hypothesized attack in their strategy document.

So this is what I infer from all this:  My bet is on a fairly young Gleick sycophant — maybe a worker at the Pacific Institute, maybe an intern, maybe a student.  Which would mean in turn that Gleick very likely knows who wrote the document, but might feel some responsibility to protect that person’s identity.

Peter Gleick Admits to Stealing Heartland Documents

I have an updated article at Forbes.  A small excerpt:

In a written statement, Peter Gleick of the Pacific Institute, a vocal advocate of catastrophic man-made global warming theory, has admitted to obtaining certain Heartland Institute internal documents under false pretenses, and then forwarding these documents to bloggers who were eager to publish them.

Gleick (also a writer on these pages at Forbes) frequently styles himself a defender of scientific integrity (for example), generally equating any criticism of his work or scientific positions with lack of integrity (the logic being that since certain scientists like himself have declared the science to be settled beyond question, laymen or even other scientists who dispute them must be ethically-challenged).

In equating disagreement with lack of integrity, he offers a prime example of what is broken in the climate debate, with folks on both sides working from an assumption that their opponents have deeply flawed, even evil motives.  Gleick frequently led the charge to shift the debate away from science, which he claimed was settled and unassailable, to the funding and motives of his critics.  Note that with this action, Gleick has essentially said that the way to get a more rational debate on climate, which he often says is his number one goal, was not to simplify or better present the scientific arguments but to steal and publish details on a think tank’s donors….

Hit the link to read it all.

Using Computer Models To Launder Certainty

(cross posted from Coyote Blog)

For a while, I have criticized the practice, both in climate and economics, of using computer models to increase our apparent certainty about natural phenomena.   We take shaky assumptions and guesstimates of certain constants and natural variables and plug them into computer models that produce projections with triple-decimal precision.   We then treat the output with a reverence that does not match the quality of the inputs.

I have had trouble finding precisely the right words to explain this sort of knowledge laundering.  But this week I have been presented with an excellent example from climate science, courtesy of Roger Pielke, Sr.  This is an excerpt from a recent study trying to figure out if a high climate sensitivity to CO2 can be reconciled with the lack of ocean warming over the last 10 years (bold added).

“Observations of the sea water temperature show that the upper ocean has not warmed since 2003. This is remarkable as it is expected the ocean would store the lion’s share of the extra heat retained by the Earth due to the increased concentrations of greenhouse gases. The observation that the upper 700 meter of the world ocean have not warmed for the last eight years gives rise to two fundamental questions:

  1. What is the probability that the upper ocean does not warm for eight years as greenhouse gas concentrations continue to rise?
  2. As the heat has not been stored in the upper ocean over the last eight years, where did it go instead?

These questions cannot be answered using observations alone, as the available time series are too short and the data not accurate enough. We therefore used climate model output generated in the ESSENCE project, a collaboration of KNMI and Utrecht University that generated 17 simulations of the climate with the ECHAM5/MPI-OM model to sample the natural variability of the climate system. When compared to the available observations, the model describes the ocean temperature rise and variability well.”

Pielke goes on to deconstruct the study, but just compare the two bolded statements.  First, that there is not sufficiently extensive and accurate observational data to test a hypothesis.  BUT, then we will create a model, and this model is validated against this same observational data.  Then the model is used to draw all kinds of conclusions about the problem being studied.

This is the clearest, simplest example of certainty laundering I have ever seen.  If there is not sufficient data to draw conclusions about how a system operates, then how can there be enough data to validate a computer model which, in code, just embodies a series of hypotheses about how a system operates?

A model is no different than a hypothesis embodied in code.   If I have a hypothesis that the average width of neckties in this year’s Armani collection drives stock market prices, creating a computer program that predicts stock market prices falling as ties get thinner does nothing to increase my certainty of this hypothesis  (though it may be enough to get me media attention).  The model is merely a software implementation of my original hypothesis.  In fact, the model likely has to embody even more unproven assumptions than my hypothesis, because in addition to assuming a causal relationship, it also has to be programmed with specific values for this correlation.
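Taken literally, the necktie hypothesis makes the point.  Here is that “model” in a few lines of code (the coefficient is invented, which is exactly the point); coding it up buys precise-looking output, not credibility.

```python
# The necktie "model" from the paragraph above, taken literally.
def predicted_market_change(avg_tie_width_cm: float) -> float:
    """Hypothesis-as-code: thinner ties -> falling stock prices."""
    SENSITIVITY = 3.7      # % market move per cm of tie width: pure assumption
    BASELINE_WIDTH = 8.0   # cm: also an assumption
    return SENSITIVITY * (avg_tie_width_cm - BASELINE_WIDTH)

print(f"Predicted market move if ties slim to 6.5 cm: "
      f"{predicted_market_change(6.5):+.3f}%")  # triple-decimal precision!
```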

This is not just a climate problem.  The White House studies on the effects of the stimulus were absolutely identical.  They had a hypothesis that government deficit spending would increase total economic activity.  After they spent the money, how did they claim success?  Did they measure changes to economic activity through observational data?  No, they had a model that was programmed with the hypothesis that government spending increased job creation, ran the model, and pulled a number out that said, surprise, the stimulus created millions of jobs (despite falling employment).  And the press reported it like it was a real number.

Postscript: I did not get into this in the original article, but the other mistake the study seems to make is to validate the model on a variable that is irrelevant to its conclusions.   In this case, the study seems to validate the model by saying it correctly simulates past upper ocean heat content numbers (you remember, the ones that are too few and too inaccurate to validate a hypothesis).  But the point of the paper seems to be to understand if what might be excess heat (if we believe the high sensitivity number for CO2) is going into the deep ocean or back into space.   But I am sure I can come up with a number of combinations of assumptions to match the historic ocean heat content numbers.  The point is finding the right one, and to do that requires validation against observations for deep ocean heat and radiation to space.

Using Models to Create Historical Data

Megan McArdle points to this story about trying to create infant mortality data out of thin air:

Of the 193 countries covered in the study, the researchers were able to use actual, reported data for only 33. To produce the estimates for the other 160 countries, and to project the figures backwards to 1995, the researchers created a sophisticated statistical model. What’s wrong with a model? Well, 1) the credibility of the numbers that emerge from these models must depend on the quality of “real” (that is, actual measured or reported) data, as well as how well these data can be extrapolated to the “modeled” setting (e.g. it would be bad if the real data is primarily from rich countries, and it is “modeled” for the vastly different poor countries – oops, wait, that’s exactly the situation in this and most other “modeling” exercises) and 2) the number of people who actually understand these statistical techniques well enough to judge whether a certain model has produced a good estimate or a bunch of garbage is very, very small.

Without enough usable data on stillbirths, the researchers look for indicators with a close logical and causal relationship with stillbirths. In this case they chose neonatal mortality as the main predictive indicator. Uh oh. The numbers for neonatal mortality are also based on a model (where the main predictor is mortality of children under the age of 5) rather than actual data.

So that makes the stillbirth estimates numbers based on a model…which is in turn…based on a model.

Sound familiar to anyone?   The only reason it is not a good analog to climate is that the article did not say that they used mortality data from 1200 kilometers away to estimate a country’s historic numbers.
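Here is a minimal sketch of why a model built on a model is worse than either alone (all coefficients and error distributions invented): each modeled stage adds its own error, so the chained estimate is noisier than any single step.

```python
import numpy as np

# Toy error propagation for a model built on a model.  All numbers invented.
rng = np.random.default_rng(42)
n = 100_000

under5 = rng.normal(50.0, 5.0, n)                      # the one measured input
neonatal = 0.6 * under5 + rng.normal(0, 5.0, n)        # model 1 adds its error
stillbirth = 0.8 * neonatal + rng.normal(0, 5.0, n)    # model 2 adds more

true_chain = 0.8 * 0.6 * under5                        # the error-free chain
print("error std after one modeled step:  5.00 (by construction)")
print(f"error std after two chained steps: {np.std(stillbirth - true_chain):.2f}")
```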

Smart, numerically facile people who glibly say they support the science of anthropogenic global warming would be appalled if they actually looked at it in any depth.   While gender studies grads and journalism majors seem consistently impressed with the IPCC, physicists, economists, geologists, and others more used to a level of statistical rigor generally turn from believers to skeptics once they dig into the details.  I did.