A Cautionary Tale About Models Of Complex Systems

I have often written warnings about the difficulty of modeling complex systems.  My mechanical engineering degree was focused on the behavior and modeling of dynamic systems.  Since then, I have spent years doing financial, business, and economic modeling.  All that experience has taught me humility, and it has given me a pretty good sense of where modelers tend to cheat.

Al Gore has argued that we should trust long-term climate models because Wall Street has used similar models successfully for years (I am not sure he has been using this argument lately, lol).  I was immediately skeptical of this claim.  First, Wall Street almost never makes 100-year bets based on models (it may invest in 30-year securities, but the bets it is making are much shorter term).  Second, my understanding of Wall Street history is that lower Manhattan is littered with the carcasses of traders who bankrupted themselves following the hot model of the moment.  It is ever so easy to create a correlation model that seems to backcast well.  No one, however, has ever created one that holds up well going forward.

A reader sent me this article about the Gaussian copula, apparently the algorithm that underlay the correlation models Wall Street used to assess mortgage-security and derivative risk.

Wall Streeters have the exact same problem that climate modelers have.  There is a single output variable they each care about (security price for traders, global temperature for modelers).  This variable’s value changes within a staggeringly complex system of millions of variables with various levels of cross-correlation.  The modeler’s challenge is to look at the historical data and tease out correlation factors between the output variable and all the other input variables, in an environment where every one of them is changing.

The problem is compounded because some of the input variables move in really long cycles while others move in short ones.  Some of the cycles are so long that we may not even recognize them as cycles.  In the end, this is what tripped up the financial modelers: all of their models derived correlation factors from a long and relatively unbroken period of home price appreciation, so when that cycle started to turn, all the models fell apart.

Li’s copula function was used to price hundreds of billions of dollars’ worth of CDOs filled with mortgages. And because the copula function used CDS prices to calculate correlation, it was forced to confine itself to looking at the period of time when those credit default swaps had been in existence: less than a decade, a period when house prices soared. Naturally, default correlations were very low in those years. But when the mortgage boom ended abruptly and home values started falling across the country, correlations soared.
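
For readers who want to see the mechanics, below is a toy Python version of the idea.  The 5% marginal default probabilities and the 0.3 correlation are invented for illustration; as the quote notes, Li’s actual correlations were calibrated from CDS prices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

p_default = 0.05  # illustrative marginal default probability per mortgage
rho = 0.3         # illustrative correlation between the two latent variables
n_sims = 1_000_000

# The "Gaussian" part: sample correlated standard normals
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n_sims)

# The "copula" part: an asset defaults when its latent variable falls
# below the threshold matching its marginal default probability
threshold = norm.ppf(p_default)
defaults = z < threshold

both = (defaults[:, 0] & defaults[:, 1]).mean()
print(f"P(both default), correlated:  {both:.4%}")
print(f"P(both default), independent: {p_default ** 2:.4%}")
```

Hold the marginal probabilities fixed and turn up rho, and the joint-default probability climbs several-fold.  That is the essence of the story: the pricing of whole securities hinged on a correlation number, and that number was estimated from less than a decade of benign data.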

I never criticize people for trying to do an analysis with the data they have.  If they have only 10 years of data, that’s as far as they can run the analysis.  However, it is then important that they recognize that their analysis is based on data that may be way too short to measure longer term trends.

As is typical when models go wrong, early problems in the model did not cause users to revisit their assumptions:

His method was adopted by everybody from bond investors and Wall Street banks to ratings agencies and regulators. And it became so deeply entrenched—and was making people so much money—that warnings about its limitations were largely ignored.

Then the model fell apart. Cracks started appearing early on, when financial markets began behaving in ways that users of Li’s formula hadn’t expected. The cracks became full-fledged canyons in 2008—when ruptures in the financial system’s foundation swallowed up trillions of dollars and put the survival of the global banking system in serious peril.

A few lessons I draw from this for climate models:

  1. Limited data availability can limit the measurement of long-term cycles.  This is particularly true in climate, where cycles can last hundreds or even thousands of years, but where good reliable data on world temperatures is available for only about 30 years, and data of any sort for only about 150 years.  Interestingly, there is good evidence that many of the symptoms we attribute to man-made global warming are actually part of climate cycles that long predate man’s burning fossil fuels in earnest.  For example, sea levels have been rising since the last ice age, and glaciers have been retreating since the late 18th century.
  2. The fact that a model hindcasts well has absolutely no predictive power as to whether it will forecast well (the toy example after this list shows why).
  3. Trying to paper over deviations between model forecasts and actuals, as climate scientists have been doing for the last 10 years, without revisiting the basic assumptions of the model, can be fatal.
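
On lesson 2, here is a small demonstration with made-up data.  An over-flexible model is fit to thirty years of noisy observations; it matches that history closely and then falls apart the moment it is asked to extrapolate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Thirty years of synthetic "history": a small steady trend plus weather noise
years = np.arange(30)
history = 0.02 * years + rng.normal(0.0, 0.3, size=years.size)

# An over-flexible model tuned to that history (x rescaled to keep the fit stable)
x = years / 40.0
model = np.poly1d(np.polyfit(x, history, deg=12))

rmse = np.sqrt(np.mean((model(x) - history) ** 2))
print(f"hindcast RMSE over the 30-year record: {rmse:.3f}")

# Now extrapolate the next decade and compare with the true underlying trend
future = np.arange(30, 40)
print("model forecast, years 30-39:", np.round(model(future / 40.0), 1))
print("true trend,     years 30-39:", np.round(0.02 * future, 1))
```

A flexible enough model can always be tuned to match history, because it simply memorizes the noise; that tells you nothing about whether it has captured the real dynamics, which is exactly what the extrapolation test reveals.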

A Final Irony

Do you like irony?  In the last couple of months, I have been discovering I like it less than I thought.  But here is a bit of irony for you anyway.  The first paragraph of Obama’s new budget read like this:

This crisis is neither the result of a normal turn of the business cycle nor an accident of history.  We arrived at this point as a result of an era of profound irresponsibility that engulfed both private and public institutions, from some of our largest companies’ executive suites to the seats of power in Washington, D.C.

As people start to deconstruct last year’s financial crisis, most of them are coming to the conclusion that the #1 bit of “irresponsibility” was the blind investment of trillions of dollars based solely on the output of correlation-based computer models, and the continuation of that investment even after cracks appeared in the models.

The irony?  Obama’s budget includes nearly $700 billion in new taxes (via a cap-and-trade system) based solely on … correlation-based computer climate models that predict rapidly rising temperatures from CO2.  Climate models in which a number of cracks have appeared, cracks that are being ignored.

Postscript: When I used this comparison the other day, a friend of mine fired back that the Wall Street guys were just MBAs, but the climate guys were “scientists” and thus presumably less likely to err.  I responded that I didn’t know whether one group or the other was more capable (though I do know that Wall Street employs a hell of a lot of top-notch PhDs).  But I did know that the financial consequences for Wall Street traders of having the wrong model were severe, while the consequences for climate modelers of being wrong were about zero.  So, from an incentives standpoint, I know which group I would bet on to try to get it right.

17 thoughts on “A Cautionary Tale About Models Of Complex Systems”

  1. No analogy is perfect, but this is definitely a comparison worth drawing, if for no other reason than the fact that people are acutely — in many cases personally — aware of the financial crisis, so it is easy for the lay person to understand the analogy. And the situations are eerily similar. Your friend’s faith in climate “scientists’” capacity to model long-term climate unfortunately reflects more on your friend’s naivete than on the actual state of affairs. I would add one more caveat to what you told your friend: there is actually a fair amount of incentive for the models to be sensational, rather than “right.” No doubt there are plenty of dedicated modelers who try to be accurate, so I am not sure that this rises to the level of a disincentive to be “right,” but it certainly should give us pause. Perhaps this is just the other side of the coin you already mentioned to your friend, namely that there is little disincentive to getting it wrong.

    You are too kind in your assessment of the climate models: “a number of cracks” is clearly an understatement. My understanding is that the climate models have consistently failed to predict anything with a fidelity better than chance, but if someone is aware of a climate model that has performed well over the last 10 or 20 years, I’d love to hear about it.

  2. My iGoogle home page is set up to generate quotes every day. This one from Nikola Tesla seems particularly appropriate for both financial and climate models:

    Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.

  3. Wall Street bought the best math and stats brains available. The hockey team can’t even dream about having the brainpower the Wall St guys had.

  4. “But I did know that the financial consequences for Wall Street traders having the wrong model was severe, while the impact on climate modelers of being wrong was about zero.”

    Not quite right. Most of the time, Wall Street traders do not depend on their model being right, but only on it being less wrong than the other guy’s. However, this standard really doesn’t cut it for climate models either.

  5. The cost of the climate modelers being wrong is not zero. It is going to cost the stated $700 billion, plus all the billions that have already been spent on this hoax, plus trillions more in economic costs worldwide.

  6. The Li article was interesting, but it discusses only the “copula” part, not the “Gaussian” part. Thus, it misses the wider story.

    ALL the models in the financial world make the Gaussian assumption. Given the current crash, it should be self-evident that this assumption is total crap… as Mandelbrot, Taleb, and others suggest.

    Standard theory says this crash has something like a 1 in 10^10 probability. History shows it is more like 1 in 10 or 20.

    Modern Portfolio Theory, the Capital Asset Pricing Model (CAPM), and Value at Risk (VaR) models – now all computerized (GIGO alert) – are still in use because they are simple to use. However, they underestimate real-world risk by orders of magnitude, as the quick check below shows.
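
    On “orders of magnitude,” compare the tail probability of a ten-unit daily move under a Gaussian with the same-sized move under a fat-tailed Student-t. The choice of 3 degrees of freedom is just an illustrative stand-in for the fat tails Mandelbrot and Taleb describe, not a calibrated market model:

    ```python
    from scipy.stats import norm, t

    move = 10  # a "ten-sigma" style daily move, in standard-normal units

    p_gauss = norm.sf(move)     # upper-tail probability under the Gaussian
    p_fat = t.sf(move, df=3)    # same-sized move under a fat-tailed Student-t

    print(f"Gaussian:     1 day in {1 / p_gauss:.1e}")
    print(f"Student-t(3): 1 day in {1 / p_fat:.1e}")
    ```

    The Gaussian calls this a one-in-10^23-days event, i.e. never; the fat-tailed distribution calls it roughly a one-in-a-thousand-days event, i.e. every few years of trading. About twenty orders of magnitude of difference, from a single distributional assumption.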

    LOL – What’s a Connecticut Fund Manager to do?

    Oh, I am interested in the Gore quote about Wall Street models. Got a source? I could have some fun with it. 🙂

  7. gofer: The cost to the climate modelers themselves of being wrong is zero – unlike the cost of being politically incorrect, since they work for government agencies and liberal universities. Wall Street quants generally have both their jobs and their own money riding on their predictions.

  8. The comment about the MBAs is erroneous; the technical work is done by people like the individual who came up with the Gaussian copula, while the MBAs simply make the ultimate decisions.

    Similarly, the climatologists and other scientists are the ones who come up with the climate models, but it’s the politicians who ultimately decide what policies to dream up as a result.

  9. My father gave me a book recently, “Fortune’s Formula” by William Poundstone. The book covers risk and risk management, including Edward Thorp, who used the “Kelly” money management system to win at blackjack, and later to make money in a warrant hedge fund.

    The book also covered not-so-successful funds, including Long-Term Capital Management, whose partners included Robert Merton and Myron Scholes, famed for the “Black-Scholes” option pricing equation. LTCM went bust in a very short term, partly because you cannot rely on short-term measurements to determine longer-term variance, and partly because the variations in the companies LTCM invested in were not independent but shared a common risk factor. In that regard, it was sort of like the housing market. Models were built showing a small chance of default, and it was assumed that the risks would be reduced by investing in packages of large numbers of subpar loans. Unfortunately for the investors, those loans weren’t varying independently; they all went bust together. - A. McIntire
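
    P.S. Here is a toy simulation of the common-risk-factor point. All the numbers (a 5% default rate, 200 loans, a 0.7 loading on a shared “housing” factor) are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n_loans, n_sims = 200, 20_000
    threshold = -1.645  # the 5% point of a standard normal: each loan
                        # defaults when its latent variable falls below this

    # World 1: every loan's fortune is independent of the others
    z = rng.standard_normal((n_sims, n_loans))
    loss_indep = (z < threshold).mean(axis=1)

    # World 2: every loan also rides the same housing-market shock
    beta = 0.7
    common = rng.standard_normal((n_sims, 1))     # one shared shock per scenario
    own = rng.standard_normal((n_sims, n_loans))  # plus each loan's own luck
    latent = beta * common + np.sqrt(1 - beta**2) * own
    loss_common = (latent < threshold).mean(axis=1)

    for name, losses in (("independent", loss_indep), ("common factor", loss_common)):
        print(f"{name:14s} P(>20% of pool defaults) = {(losses > 0.20).mean():.3%}")
    ```

    In the independent world, losing 20% of a 200-loan pool essentially never happens; in the common-factor world it happens in several percent of scenarios, with the same 5% average default rate. Pooling diversifies away the idiosyncratic risk but not the shared one.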

  10. You put your finger on it: climate modelers really have nothing at stake when they make 50 or 100 year predictions. They’ll be dead either way. But by hyping the threat they are making sure of their job security TODAY. It is all about their short term job security. The damage they cause to the economy and to other people’s jobs by hyping a phantom threat is more than they care to think about.

  11. Another analogy I see between modeling in finance and in climatology is the groupthink mentality. Anyone who knows a little about trading knows that going with the herd is the least risky strategy. Being a contrarian trader or climatologist requires some balls.
    Whatever the outcome, as a general rule (with some notable exceptions), it’s more rewarding to be wrong collectively than to be right individually.
    This is even more true in universities than on Wall Street.

  12. For many years, I taught a water quality modeling course to engineers and the occasional limnologist. The standard models employed are quite crude, although their implementation requires large amounts of input and coding. The questions of verification and calibration are debated endlessly. Fortunately, there are several rules of thumb, derived by engineers and limnologists, that can be used to judge the output and reduce the GIGO. However, in almost every class, a significant number of my engineering (!!) students objected to those rules of thumb. They were enamored of the clean, objective math and wanted no part of the messy reality of actual lakes and rivers. I can only believe that practicing climate modelers have the same preference for the seemingly objective math, and feel the same revulsion toward such ugly nuisances as the Medieval Climatic Optimum.

  13. Bob: I assume that those rules of thumb were often confirmed by making predictions and verifying them with fresh data from the real world, NOT just by matching the existing data?

    As I understand it, the climate modelers do use rules of thumb (AKA “fudge factors” or “plugs”) extensively, because their computer models are too coarse-grained to model clouds, etc. That is, they would need several more orders of magnitude in computer power to use only “clean, objective math” and still have anything at all resembling a model of the real world, so they adjust the models empirically. The trouble is, while you have many lakes and rivers to test your empirical models against, climate modelers have only one world climate. For the number that’s most important to them, the annual average temperature for the world, they get just one new number a year! And that number is very noisy, with a dubious derivation. So there is no way to tell how good their empirical adjustments are at predicting the future, versus just being adjusted to match past measurement errors, variables and cycles not accounted for, and random variations. We do know that the model predictions presented by Hansen ten years ago were way off – but given the randomness in the data, you need far more than 10 additional data points to test a model’s long-term accuracy (the little calculation below makes that concrete).
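
    Here is the one-number-a-year problem in miniature. The trend (0.02 degrees/year) and the noise level (0.15 degrees) are made up; the point is how slowly the uncertainty of a fitted trend shrinks as the record lengthens:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    true_trend = 0.02  # degrees per year, invented for illustration
    noise_sd = 0.15    # year-to-year noise, also invented

    for n_years in (10, 30, 50):
        years = np.arange(n_years)
        # Fit a linear trend to many simulated records of this length
        slopes = np.array([
            np.polyfit(years,
                       true_trend * years + rng.normal(0, noise_sd, n_years),
                       1)[0]
            for _ in range(5000)
        ])
        print(f"{n_years:2d} years: fitted trend = {slopes.mean():+.3f} "
              f"+/- {slopes.std():.3f} deg/yr")
    ```

    With a ten-year record, the scatter in the fitted trend (about 0.017 deg/yr here) is nearly as large as the trend itself, so a decade of new data can neither validate nor falsify the model.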

    Secondly, what are the chances that any computer program of significant complexity is bug-free? In my experience, zero, unless you have tested the hell out of it, and even then… In my job, I often have to read and understand the software that controls a piece of machinery. This is not very complex software; it has been tested again and again, and it has been verified that the software will give the specified behavior for every specified condition and revert to a safe mode for anything unspecified. Yet, in reading the code and comments, I often find places where the path taken to reach the goal clearly isn’t what the programmer expected…

    But what happens if the programmer testing the program doesn’t know what the correct result is? And that’s the case with climate models – assuming the modeler is actually trying to do scientific work, rather than supporting a predetermined political position. It’s all too easy to tweak the program until the results look like what you expected, even though they are wrong. “Hockey stick” Mann is a perfect example of that.

  14. Interesting article but it strikes me as drawing a long bow. The underlying causes of the problem aren’t hard to see:

    (1) Lending huge amounts of money to people (‘no money down’ loans) who didn’t need to demonstrate responsibility and who bore little risk of their own if things turned pear-shaped.

    (2) Lack of government financial oversight – in fact the opposite – a misguided attempt to encourage home ownership.

    (3) Tax and other policies that created a real estate bubble.

    Can computer models take into account 1-3? Were they designed to? Should they have been designed to?

  15. “Al Gore has argued that we should trust long-term models, because Wall Street has used such models successfully for years”

    What a pointless lie. He’s never said anything of the sort. Making up stuff like this only exposes you for the utter charlatan that you are.

  16. I have studied different approaches to modeling complex (adaptive) systems. While my experience is only academic, I feel safe in saying that I understand the general problem and some of the pitfalls.

    Pitfalls:
    1. Choosing what level of abstraction to model. We don’t have infinite computing power (nor the math, nor the physics knowledge) to support models that work at the particle level, so some degree of bias is always introduced into the system. Maybe you could equate “rules of thumb” to bias.

    2. Don’t get hung up on trying to model macro properties. Those behaviors and properties are in all likelihood emergent. In economics, Hayek liked to talk about the local knowledge in a system (a more micro approach), whereas Keynes liked to look at the world from a macro point of view. The interactions among local agents in a system are where one should focus one’s attention. You can’t “fix” a symptom of a system with a solution at the macro level; that sort of top-down approach only indirectly affects the system. You can directly affect the system from the bottom up. Herein lies the power: the interactions at the micro level.

    3. Randomness. What type of random distribution should you assign to your different data sets? And always allow for a freak occurrence to happen, especially when people are involved.

    4. Coding errors, bugs, errors in rounding numbers… I think of an example where I was casting from double to float and losing just enough precision to throw off my agents’ Q-learning algorithm (when building a foreign policy model). Or, dare I admit, a bug where an agent’s movement was slightly off, which, when fixed, increased my predictive score on the Netflix Prize (I used a model to predict movie preferences based on spreading likes/dislikes via word of mouth). You show me a perfect program running over 10k LOC, I’ll show you a liar. I don’t care if you are CMM level 5. (A tiny demonstration of the precision point follows.)
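
    For anyone who has not been bitten by the double-to-float trap, here is a minimal demonstration; the values are arbitrary, the rounding behavior is not:

    ```python
    import numpy as np

    # The cast alone throws away about half the significant digits...
    q = 0.123456789012345               # a value computed in double precision
    print(f"as double: {q:.15f}")
    print(f"as float:  {np.float32(q):.15f}")

    # ...and the damage compounds when a long-running loop accumulates
    # (for example, a Q-value updated once per training step)
    total64, total32 = 0.0, np.float32(0.0)
    for _ in range(1_000_000):
        total64 += 0.1
        total32 += np.float32(0.1)
    print("accumulated in double:", total64)  # very close to 100000
    print("accumulated in float: ", total32)  # visibly drifted
    ```

    Each single-precision addition rounds the running total, and once the total is large relative to the increment, those roundings stop canceling and start to bias the result.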

    At the end of the day, my experience with modeling leaves me as a skeptic in the global warming debate.

  17. The problem is even more fundamental than the problems people have outlined. All a model can do, no matter how well specified, is tell you the consequences of your premises. It is, at best, a way of testing your premises against how things turn out.

    So, models do not actually tell you about the world, they tell you what you currently think about how the world works. But people persist in treating them as if they tell you about the world.

    And so yes, the Wall St experience does connect to the problem with climate models.

Comments are closed.