**Session Report: Monday Morning Parallel Session on Formulation and Parameterization**

In the Monday morning parallel session on *Formulation and Parameterization*, three papers were presented that advance the toolset available to SD practitioners for model building and formulation. The first paper focused on the importance of behavior patterns in SD and the need for pattern-based recognition and curve fitting, as opposed to point-wise methods. The approach was presented along with test results, successes, and challenges, creating a point of departure for further work. The last two papers addressed inherent randomness and noise in systems and discussed SD methods for explicitly characterizing randomness, noting that sometimes it is appropriate to look at a system from a deterministic point of view, while other times an explicit representation is necessary. The underlying theory rests on two well-known results: Bayes' rule and the Chapman-Kolmogorov equation.

First Paper:

The first presentation was given by Gönenc Yucel on “Pattern-based System Design/Optimization” (co-author Yaman Barlas). The central idea is to optimize model parameters to yield desired behavior in terms of pattern characteristics, rather than point-wise closeness. All modelers conduct parametric analysis when building a model, with some idea in mind of the feasible ranges of the parameters. The problem posed by the paper is the following: given a subset of model parameters, find optimal values that generate some desired behavior. The challenges in solving this problem include feedback loops, non-linear relationships, and delays, all of which make it difficult to predict the consequences of parameter changes.

Simulation-based optimization was proposed as a potential tool for solving this problem, keeping in mind that there does not exist a generic optimization algorithm for all classes of nonlinear systems.

Analysts who use SD methods are typically more concerned with behavioral sensitivity than with the numeric sensitivity of a given variable at a given time. The optimization approach presented is driven by a pattern recognition module that compares model output with the desired behavior pattern specified by the user. The output is a proximity index, which the objective function evaluation uses to control the optimization iterations. The paper references 21 basic behavior patterns (Kanar and Barlas, 1999).
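As a rough illustration of this idea only (the pattern features, proximity index, toy model, and grid search below are simplified stand-ins, not the paper's actual module or algorithm), a pattern-based objective can be sketched as:

```python
import numpy as np

def pattern_features(y):
    """Crude stand-in for a pattern-recognition module: summarize a
    trajectory by the average sign of its slope and of its curvature."""
    return np.array([np.sign(np.diff(y)).mean(),
                     np.sign(np.diff(y, 2)).mean()])

def proximity_index(y, desired):
    """Distance between pattern features -- not point-wise error."""
    return np.linalg.norm(pattern_features(y) - desired)

def simulate(k, n=100):
    """Toy logistic model: produces s-shaped growth for suitable k."""
    y = np.empty(n)
    y[0] = 0.01
    for t in range(n - 1):
        y[t + 1] = y[t] + k * y[t] * (1.0 - y[t])
    return y

# Desired pattern: rising (slope sign +1) with an inflection midway
# (average curvature sign near 0), i.e. s-shaped growth.
desired = np.array([1.0, 0.0])
ks = np.linspace(0.01, 1.0, 50)
best_k = min(ks, key=lambda k: proximity_index(simulate(k), desired))
```

The optimizer here is a plain grid search for brevity; the point is only that the objective scores pattern closeness rather than point-wise fit, so no reference time series is needed.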

Three experiments were described, each based on a model of different size and a different number and range of parameters. The first used a 2nd-order linear model. The second used a 3rd-order linear model with 2, 3, 5, and 7 parameters, and the third used a non-linear model with wider feasible parameter ranges.

When comparing the run-time performance of the algorithm with a brute-force method, the most significant finding is that while the brute-force method's run time grows exponentially with the number of parameters, the proposed algorithm's run time is relatively independent of problem size.

A few opportunity areas for improvement were presented. Under certain conditions the algorithm was not able to match the desired pattern, while in other cases it reported a match that was in fact incorrect. One question to consider for future work is whether the system, given the subset of parameters chosen, is even capable of producing the desired behavior pattern. Could this somehow be detected?

In summary, the proposed methodology does not require reference data; rather, the modeler specifies the desired behavior (e.g., s-shaped growth) and a subset of parameters to vary. Additional work is needed to address scale independence of the algorithm. Performance issues arise with “narrow” patterns, where the desired pattern is contained within a small interval of the time horizon.

One member from the audience asked if this approach could be extended to match multiple trajectories simultaneously, which remains a possibility for future work.

Second Paper:

The second paper, “Bayesian Analysis of Stochastic System Dynamics,” was presented by Rudolf Kulhavy. It presented a particle filter algorithm for estimating state variables and unknown parameters in an SD model, using sequential Monte Carlo approximation. The idea behind the approach is to estimate parameters using Bayesian statistics, which can be done optimally for linear systems but is more complicated and requires numerical methods for nonlinear systems. The author's background is in control theory, with experience in both academic research and commercial R&D.

The presentation began with the standard deterministic SD model, a system of ordinary differential equations (ODEs). The system is analyzed in discrete time with a sampling period, and state and measurement noise are introduced. It is also noted that there may be some states (stocks) that cannot be observed (the unknown states).
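A minimal sketch of that setup (the one-stock model, sampling period, and noise levels below are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, dt=0.25, a=0.5, q=0.02):
    """One Euler step of a toy one-stock model dx/dt = -a*x,
    with additive process (state) noise of standard deviation q."""
    return x + dt * (-a * x) + q * rng.normal()

def measure(x, r=0.05):
    """Noisy measurement of the stock (std r). In general some stocks
    may have no measurement at all -- the unobserved states."""
    return x + r * rng.normal()

x = 1.0
states, measurements = [], []
for _ in range(40):
    x = step(x)
    states.append(x)
    measurements.append(measure(x))
```

The filtering problem of the paper is then to recover the `states` sequence (and any unknown parameters) from the `measurements` alone.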

Then a conditional probability density function is added to describe the state transition probabilities, given the previous state and input. We now have a partially observed, controlled Markov chain with unknown parameters. If we augment the vector of states with the unknown parameters, we increase dimensionality and introduce non-linearity (a bilinear system). What is gained, however, is an explicit representation of the problem: how to estimate states given inputs and measurements of the system.

Two simple tools are used to conduct the measurement and time-step updates. The product rule is used for the measurement update (the posterior probability is proportional to the likelihood times the prior); if the problem is not linear/Gaussian, numerical approximation is required. The Chapman-Kolmogorov equation is used for the time update.
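In generic notation (the talk's exact symbols are not reproduced here), with state $x_t$ and measurements $y_t$, the two updates read:

```latex
% Measurement update (product rule / Bayes):
p(x_t \mid y_{1:t}) \;\propto\; p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})

% Time update (Chapman-Kolmogorov):
p(x_{t+1} \mid y_{1:t}) \;=\; \int p(x_{t+1} \mid x_t)\, p(x_t \mid y_{1:t})\, dx_t
```

The integral in the time update is exactly what becomes intractable outside the linear/Gaussian case, motivating the Monte Carlo approximation below.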

The novel aspect of this approach is using a sequential Monte Carlo approximation to work with samples of probabilities. This approach is also known as a “weighted bootstrap” or “particle filter”. Applications of this approach include Automatic Target Recognition, Bayesian networks, computational anatomy, mobile robotics, neural networks, signal processing, and tracking & guidance.
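A weighted bootstrap can be sketched in a few lines (the scalar model, noise variances, and particle count below are illustrative assumptions, not the paper's example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar model: x_{t+1} = A*x_t + w_t,  y_t = x_t + v_t
A, Q, R, N = 0.9, 0.1, 0.5, 500   # dynamics, noise variances, particle count

def particle_filter(ys):
    """Weighted bootstrap: propagate particles through the state
    transition (time update), weight them by the measurement
    likelihood (measurement update), then resample by weight."""
    parts = rng.normal(0.0, 1.0, N)            # samples from the prior
    estimates = []
    for y in ys:
        parts = A * parts + rng.normal(0.0, np.sqrt(Q), N)  # time update
        w = np.exp(-0.5 * (y - parts) ** 2 / R)             # likelihoods
        w /= w.sum()                                        # normalize
        estimates.append(np.dot(w, parts))                  # posterior mean
        parts = rng.choice(parts, size=N, p=w)              # resample
    return estimates

# Simulate a trajectory, then filter its noisy measurements.
x, xs, ys = 2.0, [], []
for _ in range(100):
    x = A * x + rng.normal(0.0, np.sqrt(Q))
    xs.append(x)
    ys.append(x + rng.normal(0.0, np.sqrt(R)))
est = particle_filter(ys)
```

Each iteration is the two updates from the previous paragraphs, applied to a cloud of samples instead of a closed-form density; the resampling step is what distinguishes the weighted bootstrap from plain importance sampling.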

The paper also posed the question: “What does Bayesian Inference have to do with System Dynamics?” SD is a method for converting prior information into a system of ODEs. Bayesian inference provides a precise, coherent framework for updating the prior state, capturing and combining all elements of uncertainty (parameters, measurements, etc). Recent progress in sequential Monte Carlo methods makes this even more attractive for System Dynamics.

Third Paper:

The third paper was presented by Erling Moxnes, titled “The unavoidable a priori, revisited, or deriving the principles of SD”. The idea builds on original work by Dana Meadows and confronts some criticism of SD. The paper presented guidelines for the analysis of complex, dynamic systems, considering three forms of quantitative analysis in particular: Optimization under Uncertainty, Bayesian Statistics, and System Dynamics. These methods can form a single coherent research program for the purpose of creating mental models, policy tuning, or policy analysis and optimization.

When the goal is to change mental models, it is best to use diagrams that do not rely on parameter values to derive meaning. The right diagram provides problem focus, is robust to parameter changes, and typically centers on concepts such as stability or equilibrium. In these cases, there is no need for exact predictions in order for the model to be useful. When fine-tuning policy, or assessing policy sensitivity, it is important to keep in mind that the “cost versus expected benefits of improvement” follows the 80/20 rule (80% of the benefits are derived from 20% of the effort).

Policy based on system feedback (like a control law) may be a function of all the states. If this is the case, it is important to be aware of the existence of those feedback links in the model, which has structural implications.
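A minimal sketch of the structural point (the two-stock model and gains below are made up for illustration): a policy that reads every stock creates a feedback link from every stock.

```python
def policy(x1, x2, k1=0.4, k2=0.6):
    """Full-state feedback: the policy reads both stocks, which adds
    a feedback link from each stock to the controlled flow."""
    return -(k1 * x1 + k2 * x2)

def step(x1, x2, dt=0.1):
    u = policy(x1, x2)               # structural links from x1 AND x2
    return x1 + dt * x2, x2 + dt * u # toy two-stock chain

x1, x2 = 1.0, 0.0
for _ in range(500):
    x1, x2 = step(x1, x2)
# With these gains the feedback drives both stocks toward zero.
```

Dropping either gain changes the model's loop structure, not just a number, which is why such policies have structural implications.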

A future area of focus for SD is how to deal with measurement error, which influences policy effectiveness. When using Bayesian Statistics, we consider both prior information and time-series data. Prior information may consist of empirical or subjective data. We do not formulate models based on time-series data alone; we also use prior information. The Bayesian perspective is important in model building and analysis, and, in defense of SD, practitioners do use prior information!
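As a minimal numeric illustration of combining a subjective prior with time-series evidence (all numbers below are made up), a conjugate normal update:

```python
# Expert's prior belief about a model parameter (e.g., a fractional rate).
prior_mean, prior_var = 0.5, 0.25
# Noisy per-period estimates extracted from a time series (illustrative).
data = [0.62, 0.58, 0.71, 0.66]
noise_var = 0.04

# Conjugate normal update: precisions (inverse variances) add, and the
# posterior mean is the precision-weighted blend of prior and data.
n = len(data)
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mean = post_var * (prior_mean / prior_var + sum(data) / noise_var)
```

The posterior mean lands between the expert's prior and the data average, weighted by how precise each source is, which is exactly the sense in which SD practitioners "use prior information."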

**Jeremy B. Sato**