by **Thomas Fiddaman** » Tue Feb 21, 2012 11:16 am

I think there are two levels of causal certainty in our models.

I. There are some things we know with considerable certainty: stocks accumulate their inflows and outflows, physical stocks must have first-order negative feedback on their outflows, the laws of physics apply, units must balance, and so on. Hopefully students are not reticent about making causal statements of this kind.
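To make the first two points concrete, here's a minimal sketch (all parameter values are illustrative assumptions, not from any real system) of a stock whose outflow depends on the stock itself - the first-order negative feedback that keeps a physical stock from being drained below zero:

```python
# A stock accumulates its net flow; the outflow is stock / residence_time,
# so the outflow shuts itself off as the stock empties (first-order
# negative feedback). Parameter values here are purely illustrative.
def simulate(inflow=10.0, residence_time=4.0, stock=0.0, dt=0.25, t_end=40.0):
    trajectory = []
    t = 0.0
    while t < t_end:
        outflow = stock / residence_time   # outflow depends on the stock itself
        stock += (inflow - outflow) * dt   # stock accumulates net flow (Euler step)
        trajectory.append(stock)
        t += dt
    return trajectory

traj = simulate()
# stock approaches the equilibrium inflow * residence_time = 40,
# and can never be driven negative by its own outflow
```

Note that the structure guarantees the outcome: you don't need data to know the stock stays nonnegative and settles where inflow balances outflow.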

II. Then there are some things we don't really know - the basis for decisions, or physical processes that haven't been adequately measured. In some cases, it's possible to ask people about the basis for their actions. They might give the wrong answer, but at least the model will reveal the gap if it's important. In other cases, like large-scale social systems, it's not really possible to ask except in a statistical sense. This is where the correlation-is-not-causality problem really bites, and students' reticence might be legitimate.

However, the proper response is not to freeze up and have no model (which really means reverting to intuition that is surely fallible). It's to embrace the uncertainty and create decision rules that represent multiple possible theories about what's going on. Some will prove untenable when confronted with data. After some weeding, the survivors can be used to give contingent or robust advice - 'policy A works whether theory X or theory Y describes behavior.' This may require smaller models, with a greater share of time devoted to experimentation.
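A toy version of the "policy A works whether theory X or theory Y describes behavior" idea might look like this. Both decision rules and all parameter values below are hypothetical assumptions invented for illustration - the point is only the testing pattern, not the rules themselves:

```python
# One inventory model, two competing theories of how the manager orders,
# one candidate policy (faster inventory correction). The policy is
# "robust" if it improves the worst inventory gap under BOTH theories.
def simulate(order_rule, adjust_time, steps=400, dt=0.25):
    inventory, target = 100.0, 100.0
    demand, forecast = 10.0, 10.0
    worst_gap = 0.0
    for i in range(steps):
        if i == 80:                                  # unanticipated demand step
            demand = 15.0
        forecast += (demand - forecast) * dt / 4.0   # smoothed demand estimate
        orders = order_rule(inventory, target, forecast, adjust_time)
        inventory += (orders - demand) * dt          # stock accumulates net flow
        worst_gap = max(worst_gap, abs(target - inventory))
    return worst_gap

# Theory X: orders anchor on forecast demand and adjust the inventory gap.
def theory_x(inv, target, forecast, tau):
    return max(0.0, forecast + (target - inv) / tau)

# Theory Y: orders respond to the inventory gap alone, ignoring demand.
def theory_y(inv, target, forecast, tau):
    return max(0.0, 5.0 * (target - inv) / tau)

# Policy A = shorter adjustment time. It helps under either theory,
# so the advice survives our uncertainty about which theory is right.
for rule in (theory_x, theory_y):
    assert simulate(rule, adjust_time=2.0) < simulate(rule, adjust_time=8.0)
```

With real data you'd also weed out whichever theory fits worst; the assertions here only check that the policy conclusion doesn't hinge on the choice.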

In the absence of data with which to test causality, operational thinking, extreme conditions, and similar thought experiments actually provide a lot of information. We may not know exactly why a manager orders N cans of soup per month, but there are some basic things any decision rule would have to accomplish in order to be at least vaguely rational in some bounded sense. That, plus the physics, turns quite a few level II questions into level I questions.
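Those extreme-conditions requirements are checkable even without data. The ordering rule below is a hypothetical anchoring-and-adjustment form, chosen only to show the kind of tests any candidate rule should pass:

```python
# Whatever the manager's true reasons, a minimally rational ordering rule
# must behave sensibly at the extremes. The rule itself is an assumed
# illustrative form, not an estimated one.
def order_rate(inventory, target, expected_demand, adjust_time=4.0):
    return max(0.0, expected_demand + (target - inventory) / adjust_time)

# Empty shelves -> order more than when stocked to target.
assert order_rate(0.0, 100.0, 10.0) > order_rate(100.0, 100.0, 10.0)
# Massive glut -> stop ordering entirely (can't "un-order" soup).
assert order_rate(1e6, 100.0, 10.0) == 0.0
# Orders can never go negative, even with zero expected demand.
assert order_rate(50.0, 100.0, 0.0) >= 0.0
```

A rule that fails these checks is wrong regardless of how well it fits historical data, which is exactly how such tests promote level II questions to level I.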

It's a bit ironic that causality questions obstruct dynamic modeling, when failures to account for accumulation and feedback are two of the biggest pitfalls in intuitive and statistical reasoning.