Monthly Archives: July 2010

Garbage In, Money Out

In my Forbes column last week, I discuss the incredible similarity between the computer models that are used to justify the Obama stimulus and the climate models that form the basis for the proposition that manmade CO2 is causing most of the world’s warming.

The climate modeling approach is so similar to that used by the CEA to score the stimulus that there is even a climate equivalent to the multiplier found in macro-economic models. In climate models, small amounts of warming from man-made CO2 are multiplied many-fold to catastrophic levels by hypothetical positive feedbacks, in the same way that the first-order effects of government spending are multiplied in Keynesian economic models. In both cases, while these multipliers are the single most important drivers of the models’ results, they also tend to be the most controversial assumptions. In an odd parallel, you can find both stimulus and climate debates arguing whether their multiplier is above or below one.
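The feedback arithmetic behind this analogy can be sketched in a few lines. This is my own toy illustration, assuming the standard linear-feedback form in which a direct warming is amplified by a factor of 1 / (1 − f) for feedback fraction f, just as Keynesian spending is amplified by 1 / (1 − mpc); all the numbers are illustrative, not from any particular model.

```python
# Toy illustration (my own numbers, not from any cited model) of the
# "multiplier" analogy: a direct warming dT0 is amplified to
# dT0 / (1 - f) by a feedback fraction f, the same algebra as a
# Keynesian spending multiplier 1 / (1 - mpc).

def feedback_multiplier(f):
    """Equilibrium gain for feedback fraction f (requires f < 1)."""
    return 1.0 / (1.0 - f)

direct_warming = 1.2  # deg C per CO2 doubling, an illustrative no-feedback value

for f in (-0.5, 0.0, 0.5, 0.75):
    total = direct_warming * feedback_multiplier(f)
    print(f"f = {f:+.2f} -> multiplier {feedback_multiplier(f):.2f}, warming {total:.2f} C")
```

Note how sensitive the answer is to f: moving the assumed feedback from 0.5 to 0.75 doubles the result, which is why the multiplier is both the most important and the most contested assumption in either kind of model.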

How similar does this sound to climate science:

If macroeconometrics were a viable paradigm, we would have seen major efforts to try to bring this sort of model up to date from its 1975 time warp. However, for reasons I have documented, the profession has decided that this macroeconometric project was a blind alley. Nobody bothered to bring these models up to date, because that would be like trying to bring astrology up to date.

This, from Arnold Kling about macroeconomic models, could just as well have been written to describe the process for running climate models:

Thirty-five years ago, I was Blinder’s research assistant, doing these sorts of simulations on the Fed-MIT-Penn model for the Congressional Budget Office. I think they are still done the same way. See lecture 13. Here are some of the things that Blinder had to tell his new research assistant to do.

1. Make sure that there were channels in the model for credit market conditions to affect consumption and investment.

2. Correct the model’s past forecast errors, so that it would track the actual behavior of the economy over the past two years exactly. With the appropriate “add factors” or correction factors, the model then produces a “baseline scenario” that matches history and then projects out to the future. For the future, a judgment call has to be made as to how rapidly the add factors should decay. That is mostly a matter of aesthetics.

3. Simulate the model without the fiscal stimulus. This will result in the model’s standard multiplier analysis.

4. Make up an alternative path for what you think would have happened in credit markets without TARP and other extraordinary measures. For example, you might assume that mortgage interest rates would have been one percentage point higher than they actually were.

5. Simulate the model with this alternative scenario for credit market conditions.

6. (4) and (5) together create a fictional scenario of how the economy would have performed had the government not taken steps to fight the crisis. According to the model, this fictional scenario would have been horrid, with unemployment around 15 percent.

In the case of climate, the equivalent fictional scenario would be the world without manmade CO2, but the process of tweaking input variables and assuming one’s conclusions is the same.
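Kling's steps 2 and 3 can be sketched concretely. This is a deliberately tiny stand-in model with entirely invented numbers (not the Fed-MIT-Penn model or any real one): "add factors" force the model to reproduce recent history exactly and then decay over the forecast, and the counterfactual is the same model re-run with the policy input switched off.

```python
# Toy sketch (invented numbers) of the "add factor" recipe: correct the
# model's past errors so the baseline tracks history, decay the correction
# going forward, then re-run with the policy removed.

def model(policy_boost):
    """Stand-in model: predicted GDP growth (%) given a policy input."""
    return 2.0 + policy_boost

actual_history = [0.5, 1.0]   # observed growth over the last two periods
with_policy = 0.8             # the policy effect assumed inside the model

# Step 2: add factors = observed minus modeled, so the baseline matches history
add_factors = [obs - model(with_policy) for obs in actual_history]

decay = 0.5                   # how fast the add factor fades: "mostly aesthetics"
forecast_add = add_factors[-1] * decay

baseline = model(with_policy) + forecast_add    # scenario with the stimulus
counterfactual = model(0.0) + forecast_add      # step 3: stimulus switched off

print(f"baseline {baseline:.1f}%, counterfactual {counterfactual:.1f}%, "
      f"claimed policy effect {baseline - counterfactual:.1f} points")
```

Notice that the gap between the two runs is exactly the policy effect that was assumed as an input: the "measured" benefit is the assumption handed back out, which is the circularity being described here.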

You Know it Has to Be A Skeptic Writing When You See This

I have followed Roy Spencer’s work for a while on trying to measure climate feedback effects from satellite data.  In general, I give him kudos for actually working on what is really THE critical problem that separates climate catastrophe from climate rounding error.  It is good someone is working on this, rather than, say, how global warming might affect toad mating, or whatever.

I have never been totally convinced by this part of Spencer’s work.  Again, I give him kudos for trying to isolate the effect of single variables in a complex system through actual observation, rather than the lazy approach of running experiments inside computer models of dubious accuracy.  I am not convinced he has achieved this, but I must admit I have not spent a ton of time working it through.

Anyway, Spencer has a long discussion of his methodology in answer to some critics.  I reserve judgment until I have studied it further.  But I was captivated by this bit:

On the positive side, though, MF10 have forced us to go back and reexamine the methodology and conclusions in SB08. As a result, we are now well on the way to new results which will better optimize the matching of satellite-observed climate variability to the simple climate model, including a range of feedback estimates consistent with the satellite data. It is now apparent to us that we did not do a good enough job of that in SB08.

Really?  You shared your data, were criticized, and are modifying your approach based on this criticism?  I thought from the study of the habits of mainstream climate scientists the correct scientific procedure was to 1) hide your data like it was Russian nuclear secrets; 2) prevent any opposing view from getting published; and 3) defend a flawed methodology by getting 10 of your friends to use the same methodology and summarize it all in an IPCC spaghetti graph.

Does This Sound Familiar to Anyone?

Greg Mankiw on scoring the federal stimulus package:

the CEA took a conventional Keynesian-style macroeconomic model and used that set of equations to estimate the effect the stimulus should have had.  Essentially, the model offers an estimate of the policy’s effect, conditional on the model being a correct description of the world.  But notice that this exercise is not really a measurement based on what actually occurred.  Rather, the exercise is premised on the belief that the model is true, so no matter how bad the economy got, the inference is that it would have been even worse without the stimulus.  Why?  Because that is what the model says.  The validity of the model itself is never questioned.

Does this sound like climate science or what?  The same models that are used to predict future temperature increases are used to decide how much of past warming was due to CO2 and how much was due to natural effects.  Here is the retrospective IPCC chart which assigns more than 100% of post-1950 warming to CO2 (since the blue “natural forcings” line is shown to go down; see more here):
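The "more than 100%" arithmetic is simple, and a two-line example makes it concrete. These numbers are illustrative only (not taken from the IPCC chart): if the model's natural-forcings-only run shows net cooling, the warming attributed to CO2 must exceed everything actually observed.

```python
# Illustrative numbers only, not from the IPCC chart: when the
# "natural forcings only" model run shows net cooling since 1950,
# the residual attributed to CO2 exceeds 100% of observed warming.

observed_warming = 0.6   # deg C since 1950, illustrative
natural_only = -0.1      # natural-forcings-only model run, illustrative

attributed_to_co2 = observed_warming - natural_only
share = attributed_to_co2 / observed_warming

print(f"CO2-attributed warming: {attributed_to_co2:.2f} C ({share:.0%} of observed)")
```

The catch, of course, is that the natural-only line comes from the same model whose validity is in question, so the attribution is only as good as the model's assumptions.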

Here is the stimulus version, showing flat employment, but positing that the stimulus created jobs because employment “would have gone down without it” (sound familiar?)

This kind of retrospective look at causality has the look of science but is nothing of the sort, and may be little more than guesses laundered to look like facts.

But this may in fact be worse than guessing.  In both cases, these graphs are drawn by folks who think they know the answer (in the first case that CO2 caused all the warming, in the second that the stimulus created millions of jobs).  Since in both cases the lower “without” case (either without CO2 or without stimulus) is extraordinarily difficult to derive and totally impossible to measure, there is good reason to believe it is merely a plug, fixed in value to get the answer they want.  If I made such a plug on the back of an envelope, everyone would call me out for it; bury it inside an arcane model where numerous inputs can be tweaked to get different results, and it escapes that kind of unwanted scrutiny.

Readers of climate sites will also recognize this criticism of Obama’s self-serving stimulus analysis:

Moreover, the fact that other organizations simulating similar models come to similar conclusions is no evidence about the validity of the model’s simulations.  It only tells you the CEA staff did not commit egregious programming errors when running their computer simulations.

Sounds like the logic behind the hockey stick spaghetti graphs, no?

Might As Well Be Walking on the Sun

Steve Goddard and Anthony Watts have a series of posts on an old favorite topic on this site — how data manipulations back in the climate office are creating a lot of the “measured” warming.  This particular example is right here in Arizona, and features several sites my son and I surveyed for Anthony’s site.  They have a followup on another Arizona station here.  Check out all the asphalt:

This is a hilariously bad siting.  It demonstrates how small things can sometimes have big effects.  The MMTS sensor has a very limited cable length.  This does not mean that it merely ships with a short cable (raising the question of why they can’t just buy a longer one), but that it can only have a short cable due to signal amplification issues.  As a result, we get this terrible siting because the sensor needs to be close to the building, even though there were much better locations only a hundred yards away.

Carefree is a fairly rural (at least suburban) low density town with lots of undeveloped land.  They had to work to get a siting this bad.  A monkey throwing darts at a map of the area would have gotten a better siting.

Absolutely Hilarious

I know I am late on this but I am trying to spool back up on this site so allow me to catch up.  It turned out that the IPCC’s Amazon claim (that 40% of the rain forest was at risk from global warming) came from the Facebook page of a 12-year-old girl.  OK, just kidding, it didn’t, but the source is not much better — apparently the claim was just thrown up on a web page of a Brazilian activist organization in 1999, and then pulled down in 2003.  Everything since has been one long game of “telephone.”  The whole story is fascinating and worth reading.

Great Academics Go Along With the Pack

It would be an understatement to say that much of the focus in vilifying skeptics has been on the skeptics’ funding.  The storyline goes that skeptics are only fighting the obvious because they are paid off by oil and coal companies.

But of course, it turns out that global warming alarmists get far more funding than skeptics, likely 100x as much or more (funding for skeptics is at most a million dollars or two a year, and that may be high — funding for alarmists by governments alone is in the billions a year).  The quick reply of leading alarmist scientists is that the money is incidental.

I am generally willing to take them at their word — I find trying to look into other people’s hearts to be a hopeless exercise.  And besides, does anyone really think the folks who, say, believe in or oppose string theory are taking those positions for the money?  If I really had to discuss incentives, I would argue that prestige and wanting to belong are actually stronger motivations for alarmist scientists, as preaching doom seems to lead to fame while being a skeptic seems to lead to academic shunning.

So I have generally avoided the topic of monetary motivation of alarmists, but what am I to think when Penn State makes the case in its report on Michael Mann?  In a rather straightforward way, they make the case that Mann is a good climate scientist because he is good at obtaining funding:

This level of success in proposing research, and obtaining funding to conduct it, clearly places Dr. Mann among the most respected scientists in his field. Such success would not have been possible had he not met or exceeded the highest standards of his profession for proposing research…

Had Dr. Mann’s conduct of his research been outside the range of accepted practices, it would have been impossible for him to receive so many awards and recognitions, which typically involve intense scrutiny from scientists who may or may not agree with his scientific conclusions…

Clearly, Dr. Mann’s reporting of his research has been successful and judged to be outstanding by his peers. This would have been impossible had his activities in reporting his work been outside of accepted practices in his field.

This argument is OK as far as it goes, but implicitly defines a great academic as “someone who goes along with the pack.”  Note that skeptics cannot claim to get a lot of research grants, because the alarmists control the funding.  Skeptics can’t get into peer-reviewed journals, because, as the East Anglia emails make clear, a small group of alarmist scientists are blocking their publication.  Mann’s research has been judged outstanding by his peers because he agrees with his peers.

In a large sense, Penn State’s only test of Mann’s ability is that he is currently a member in good standing of the small in-crowd that dominates climate science.  His science is good because it comes to the right conclusions.

Unlike many skeptics, I have no desire to “get” Professor Mann.  I don’t need him fired or even investigated by Penn State.  The way to refute him is to refute him, not haul him in front of tribunals.

That being said, Penn State did start an investigation, and as such has some responsibility to do the job right.  And boy was this a joke.  The most charitable thing I can say is that his work is fraught with more questionable decisions, practices, and approaches than anything I have ever seen that was taken this seriously.  We could talk about it for days, but here is one example to get you thinking.