
Models, Maps, and Levels of Evidence

Anyone who has attended a System Dynamics conference in recent years, or has read past posts in the System Dynamics Society blog, is surely aware of differences of opinion on the value of qualitative maps as opposed to quantified simulations.

What you may not know is that this debate has been going on for a long time, stretching back to the early 1980s, when Eric Wolstenholme and others asserted that one might be able to infer dynamics from qualitative maps.  In 1990, Peter Senge’s book The Fifth Discipline upped the ante by suggesting that certain problems might be categorized according to System Archetypes, which some took to mean that one could go directly to solutions without first simulating.  This became known as Systems Thinking.

In 2000, Geoff Coyle wrote a paper (SDR 2000, 16:3) that pointedly asked whether there might be situations so uncertain that quantified modeling cannot tell us more than qualitative mapping alone.  Rogelio Oliva and I wrote a rejoinder (SDR 2001, 17:4) that challenged this idea and sought to reclaim Jay Forrester’s original view of simulation as the necessary test bed for any dynamic hypothesis that might conceivably lead to policy decisions.

Even now, 20 years later, the debate continues.  At our conferences we see increasing use of Group Model Building that goes no further than a qualitative map, yet from which the authors claim to have derived dynamic insights.

Some have been dismayed by this development—which appears to be the further expansion of Systems Thinking—fearing it is diluting and dumbing down the fundamentals of our field.  The modelers accuse the mappers of lacking rigor, while the mappers say that good group process has a rigor of its own.

What can we do about this long-brewing pot of trouble?  Is there any way out of the impasse?

I’d like to suggest a possibility.  Several years ago, I published a paper (SDR 2014, 30:1-2) describing how a “levels of evidence” approach—a standard for classifying work in the biomedical sciences—might be applied in SD.  To achieve an “A” level of evidence, one would need both strong structural and behavioral evidence and the ability to reliably test one’s model.

Structural evidence comes from conversations with subject matter experts and focused studies of cause and effect.  Behavioral evidence comes from a comparison of model output with time series data and historical records.  Work with strong support for structure but not behavior, or behavior but not structure, would achieve at best a “B” rating.  Work with strong support for neither would get a “C” rating.

In the biomedical sciences, works that have “B” or “C” ratings can still be presented at conferences and even appear in prestigious journals.  A rating less than “A” does not mean poor quality but rather a lack of full, iron-clad reliability for drawing conclusions and making decisions; that is, something more exploratory or tentative.  The work’s level of evidence is designated in the conference proceedings or at the top of the paper so that the audience knows what it is dealing with—a decisive work (one from which decisions can be made with confidence) or something less than that.

It seems to me we can apply the Levels of Evidence filter objectively across both simulation models and qualitative maps.  Let’s start with the simple fact that a simulation model can be tested formally, while a qualitative map cannot.  It follows that the best possible rating for a simulation model is “A”, while the best possible for a qualitative map is “B”.
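
Read as a decision rule, this grading scheme is simple enough to express in a few lines of code.  The sketch below is only an illustration of the rule as I have described it here, not a tool from the 2014 paper; the yes/no inputs and the function name are hypothetical simplifications.

```python
# Illustrative sketch of the levels-of-evidence rule described above.
# The boolean inputs and function name are my own simplification; in
# practice, judging whether evidence is "strong" requires expert review.

def evidence_rating(strong_structure: bool,
                    strong_behavior: bool,
                    formally_testable: bool) -> str:
    """Assign an illustrative A/B/C level of evidence.

    A qualitative map cannot be tested formally, so it is capped at "B"
    no matter how strong its structural support.
    """
    if strong_structure and strong_behavior and formally_testable:
        return "A"
    if strong_structure or strong_behavior:
        return "B"
    return "C"

print(evidence_rating(True, True, True))     # "A": fully supported, testable simulation
print(evidence_rating(True, False, False))   # "B": qualitative map with strong structural support
print(evidence_rating(False, False, False))  # "C": strong support for neither structure nor behavior
```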

If we can agree on this much, then we may be able to find a way for simulation models and qualitative maps to coexist.  It would require acknowledgment from the advocates of qualitative maps that their work can never be considered decisive.  And, it would require acknowledgment from the advocates of simulation that a model lacking sufficient evidence may be no more reliable than a well-developed qualitative map.
