paranoia about specifying causal links

paranoia about specifying causal links

Postby Richard Dudley » Thu Feb 09, 2012 6:06 pm

Specifying causal links between pairs of model elements is central to the creation of causal loop diagrams and system dynamics models. I was surprised today to learn that some of my students have serious reservations about specifying causal relationships without "proof" that those relationships are causal. They are all graduate students in various fields (mostly related to agriculture, food, or nutrition) with experience in international development. I believe their concern stems from their training in statistics, where it has been hammered into their heads that correlation and causality are different things - which of course we know.

Just wondering if anyone else has run into this concern on the part of graduate students.

Ultimately I told them that they could add a small disclaimer paragraph in the lower corner of their causal loop diagrams noting that they did not have the data or information to prove that the causal relationships were certain. :D

comments?

Richard
Richard Dudley
 
Posts: 60
Joined: Sun Jan 11, 2009 1:17 pm
Location: Etna, New York, USA

Re: paranoia about specifying causal links

Postby Kim Warren » Fri Feb 10, 2012 3:10 am

This is always going to be a problem as long as we ask people to put together qualitative diagrams. I always insist on at least sketching estimated time-charts for all varying items, so we can see how exactly A is supposed to be causing B. Not statistically validated, of course, but it at least helps us past the plausibility test - relying on qualitative links alone leaves us with little more than idle speculation.
Kim Warren
 
Posts: 103
Joined: Mon Feb 16, 2009 5:56 am

Re: paranoia about specifying causal links

Postby Eliot Rich » Fri Feb 10, 2012 5:07 am

This came up in lecture yesterday, with a discussion of stress effects on personal productivity.

When students (or clients) are concerned about specifying a link, it sometimes helps to note that the link (and the model) is one theory of how the world works - not the only theory, and not necessarily the right one. As with any theory, we make assumptions until evidence is presented. In the SD approach, sometimes the link does not contribute to the dynamic behavior, and we therefore don't need to worry about collecting evidence for its particular effect.

Best,

Eliot
Eliot Rich
 
Posts: 18
Joined: Mon Jan 12, 2009 3:39 pm
Location: University at Albany, SUNY

Re: paranoia about specifying causal links

Postby Robert Eberlein » Fri Feb 10, 2012 7:40 pm

First, I think it is important to point out that many, if not most, links in a good causal loop or stock and flow diagram should have a very common-sense interpretation and obvious validity. If you are up against people who want you to demonstrate that burning more oil will reduce the stock of unused nonrenewable resources, find a different audience! The strength and importance of the links may not be known, but their existence and direction can still be quite obvious.

The process Kim described is a double-edged sword. On the one hand, looking at data can be useful for understanding whether things manifest in expected ways - phase relationships, relative volatility, and so on. On the other hand, trying to use observed data relationships to drive causal understanding is precisely inferring causation from correlation.

Any theory of behavior (which is the entire set of feedback relationships and model equations) has to stand on its own. Data are extremely important in their ability to refute the theory - if the theory is not consistent with observed behavior, what is it telling us? It is only human to take the converse to be the equivalent of the contrapositive, so that a failure to refute is accepted as direct confirmation. This is, more or less, why it nowadays seems that correlation = causation: if I have some harebrained theory about why one thing would cause another, and I can get a sophisticated probit regression that does not disprove my theory, then I treat it as proved and write a paper.
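To make that last point concrete for students, here is a minimal sketch (added purely for illustration, not part of the original argument; it assumes NumPy and SciPy are available): regress two independent random walks on each other many times, and a large share of the fits come out "statistically significant" even though, by construction, neither series has any causal connection to the other.

```python
# Sketch: "significant" regressions between causally unrelated series.
# Two independent random walks are regressed on each other many times;
# most fits look statistically significant even though, by construction,
# neither series causes the other.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)
n_steps, n_trials = 200, 1000
significant = 0

for _ in range(n_trials):
    x = np.cumsum(rng.normal(size=n_steps))  # random walk 1
    y = np.cumsum(rng.normal(size=n_steps))  # independent random walk 2
    if linregress(x, y).pvalue < 0.05:
        significant += 1

print(f"p < 0.05 in {significant / n_trials:.0%} of trials, "
      "with zero causal connection by construction.")
```

A failure of such a test to refute a theory says very little by itself; it is the theory as a whole, confronted with observed behavior over time, that has to stand or fall.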

As to adding caveats - what problem are you trying to solve, and for whom? Ultimately it is the stakeholders who have to agree to the relationships you formalize and then understand the implications of those relationships. For them it does not matter how many caveats you add - either they believe them (or perhaps you) or they don't. So my inclination would be to tell the students to be bold and believe in what they are doing, but to recognize that the theory they propound needs to face challenges to demonstrate its usefulness.
Robert Eberlein
Site Admin
 
Posts: 176
Joined: Sat Dec 27, 2008 8:09 pm

Re: paranoia about specifying causal links

Postby Jay Forrester » Sat Feb 11, 2012 5:21 pm

I have never encountered the problem described by Richard Dudley of doubting cause-to-effect relationships at the micro level in developing a model. However, I have probably always worked with a different kind of participant, that is, people who have a substantial working knowledge of the system being modeled. Students should not try to model a system before they gain significant practical knowledge about it. At the very least, that means extensive discussions with people who are operating in that system. Those people in the real system have clear, and usually correct, information about cause-effect relationships. What they do not understand is what dynamic behavior will result from the micro-structure that they do understand.

Knowledge about the system includes an understanding of the problem that is to be understood and corrected. One does not model a system, one models the cause of a problem. The first requirement on a model is that it must show how the problem of interest is being caused. Only after understanding the cause does one move on to an improvement.

It appears that the students who were described believe that the inputs to a model can be proven and that there is such a thing as proof of validity, which is not possible. System dynamics models operate in the real world, as do models in physics and engineering. A model is a theory about the behavior of a system, just as Einstein's law is a theory about an aspect of the physical world. There are no proofs of the correctness of such models, only observations that they agree well enough with the real world to be useful.

Another point that such students should understand is that the world operates on mental models, and those mental models are the competition against which system dynamics models should first be compared. Can one develop a system dynamics model that is more insightful than the operational mental models, that is clearer to explain, shows why the mental models are failing, and suggests plausible alternative policies to improve the behavior of the system?
Jay Forrester
 
Posts: 4
Joined: Sun Jan 11, 2009 11:06 am

Re: paranoia about specifying causal links

Postby Jim Duggan » Tue Feb 14, 2012 2:19 pm

Another useful way to help students focus on the problem at hand is to refer to the term "operational thinking", as defined by Barry Richmond in his paper "Systems Thinking: Critical Thinking Skills for the 1990s and Beyond" (System Dynamics Review, Vol. 9, No. 2, Summer 1993: 113-133).

"Thinking operationally means thinking in terms of how things really work-not how they theoretically work, or how one might fashion a bit of algebra capable of generating realistic-looking output."

Furthermore:

"Operational thinking grounds students in reality. It also tends to be perceived as relevant because the student is thinking about it like it really is rather than dealing with abstractions that may bear little relation to what’s going on."

and also...:

"It’s easy to create exercises that develop operational thinking. Simply look around at real-world processes (like learning, becoming friends, experiencing peer pressure, pollution, drug or alcohol addiction) and ask, How do these processes really work? Let the students diagram their resulting observations. Then have them challenge each other’s depictions, asking, Is this really how it works?"

So a related question is: what kinds of practical real-world problems do people think students engage with best?

regards,
Jim.
Jim Duggan
 
Posts: 39
Joined: Sun Feb 22, 2009 8:16 am
Location: Galway, Ireland.

Re: paranoia about specifying causal links

Postby Sarah Boyar » Thu Feb 16, 2012 9:21 am

New students definitely don't get it right when they try to tackle economic problems, and nearly every new student will try to draw a CLD of supply and demand, or of price. Similarly, when they try to draw CLDs of topical social phenomena - or even of processes from their own professional lives - they don't demonstrate clear operational understanding.

I think this is because these systems are already characterized by their own specialist language, the syntax of which tacitly posits its own structural understanding.

Students do get it right when they creatively and independently draw feedback loops from their own more intimate experiences, without trying to use examples that parallel the CLDs taught in the lectures.

I believe this is because students can use more 'primitives' when they talk about their own lives. This works better than starting with words (linguistic constructs) that tacitly subsume some sort of structure. It seems to me that the more specialist words have a structural syntax already built into their meaning - a syntax that may or may not align with CLD/SD logic.

I think there is a linguistic hierarchy. When we do group model building, I find that we are constantly trying to boil things down to the essence of what people really mean by their words, continuing until we reach the absolutely irreducible... and then using these uncontaminated words in conjunction with the structural logic of SD to reconstruct the system and their worldview.

Clarity on variable definitions precedes clarity on feedback structure. Thus the self-generated examples (which also tend to be the more interesting and creative ones) are causally correct more often.

Also, interestingly, I have noticed that a few students have a kind of causal dyslexia: they get everything perfectly backwards. For example, if one variable has an effect on another, they draw the causal link from the affected variable back to its cause, rather than from the cause to the affected variable. It's amazing! The few students who exhibit this usually do so consistently and systematically.

I will ask some students if they mind me posting some of their CLDs in order to illustrate these points.

Best wishes,
Sarah
Sarah Boyar
 
Posts: 13
Joined: Sat Jul 18, 2009 5:22 pm

Re: paranoia about specifying causal links

Postby Thomas Fiddaman » Tue Feb 21, 2012 11:16 am

I think there are two levels of causal certainty in our models.

I. There are some things we know with considerable certainty: stocks accumulate their inflows and outflows, physical stocks must have first order negative feedback on their outflows, the laws of physics apply, units must balance, and so on. Hopefully students are not reticent about this kind of statement about causality.

II. Then there are some things we don't really know - the basis for decisions or physical processes that haven't been adequately measured. In some cases, it's possible to ask people about the basis for their actions. They might give the wrong answer, but at least the model will reveal the gap if it's important. In other cases, like large-scale social systems, it's not really possible to ask except in a statistical sense. This is where correlation != causality really gets into trouble. In such cases, students' reticence might be legitimate. However, the proper response is not to freeze up and have no model (which really means reverting to intuition that is surely fallible). It's to embrace the uncertainty and create decision rules that represent multiple possible theories about what's going on. Some will prove untenable when confronted with data. After some weeding, the survivors can be used to give contingent or robust advice - 'policy A works whether theory X or theory Y describes behavior.' This may require smaller models, with a greater share of time devoted to experimentation.

In the absence of data with which to test causality, operational thinking, extreme conditions, and similar thought experiments actually provide a lot of information. We may not know exactly why a manager orders N cans of soup per month, but there are some basic things that any decision rule would have to accomplish in order to be at least vaguely rational in some bounded sense. That, plus the physics, turns quite a few level II questions into level I ones.
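To make the two levels concrete, here is a minimal Python sketch (added for illustration; the variable names and parameter values are hypothetical, not from the original post). The stock accumulation and first-order outflow are the level I physics we can assert with confidence; the two ordering heuristics stand in for rival level II theories of the manager's decision rule.

```python
# Level I (near-certain): the stock integrates orders minus usage,
# and usage is first-order in the stock (you cannot use soup you do not have).
# Level II (uncertain): how the manager decides what to order - two rival
# theories are simulated so any proposed policy can be checked against both.

def simulate(order_rule, steps=60, dt=1.0):
    stock = 20.0            # cans on hand (hypothetical initial value)
    usage_fraction = 0.1    # fraction of the stock consumed per period
    for _ in range(steps):
        usage = usage_fraction * stock            # first-order outflow
        orders = max(0.0, order_rule(stock, usage))
        stock += (orders - usage) * dt            # stock accumulates net flow
    return stock

# Theory X: anchor on expected usage and adjust the stock toward a target.
def anchor_and_adjust(stock, usage, target=30.0, adjustment_time=4.0):
    return usage + (target - stock) / adjustment_time

# Theory Y: simply replace whatever was used last period.
def replace_usage(stock, usage):
    return usage

for name, rule in [("anchor-and-adjust", anchor_and_adjust),
                   ("replace-usage", replace_usage)]:
    print(f"{name:17s} -> stock after 60 periods: {simulate(rule):.1f}")
```

The physics is shared while the decision rule is swapped out; a candidate policy can then be run under both theories to see whether its recommendation is robust - the kind of contingent advice described above.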

It's a bit ironic that causality questions obstruct dynamic modeling, when failing to account for accumulation and failing to account for feedback are two of the biggest pitfalls in intuitive and statistical reasoning.
Thomas Fiddaman
 
Posts: 149
Joined: Thu Jan 15, 2009 6:55 pm
Location: Bozeman, MT

