MON 11:00 AM Parallel - Information and Project Management

Full Report:

Using Institutional Theory and Dynamic Simulation to Understand Complex E-Government Phenomena by Luis Luna-Reyes and J. Ramon Gil-Garcia

Luis said that governments hope to gain important benefits from e-government initiatives; however, about eighty percent of these initiatives fail. Luis discussed how organizational forms (e.g., centralization, formalization, and communication channels) and institutional arrangements (e.g., laws, regulations, and cultural constraints) affect how organizations implement technology.

To better understand the effectiveness of e-government initiatives, Luis’s team converted an existing technology enactment framework by Fountain into a system dynamics model. They then used the model to show the effectiveness of the same technology enactment carried out by four organizations with different organizational forms. Each organization used the same technological infrastructure and reported to the same Mexican Federal Ministry, but had a different technology enactment of portal implementation within the e-Mexico program.

Organizational forms were modeled as collaboration, expressed by the number of collaborating organizations, collective engagement, and collective understanding. Institutional arrangements were modeled as process formalization, expressed as processes and process legitimacy, each of which varies from proposed to formal. The model reproduced different technology enactments consistent with the content development networks present in the e-Mexico program.

When asked about his main recommendations, Luis said that trust was important in building the collaborative relationships needed for technology enactments. He also said that strong leadership and the existence of a previous network were key components in the creation of a project team. A good balance among relationships, results, and process orientation is also important in capitalizing on the efforts of the team.
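The reinforcing relationship between trust and collaboration that Luis emphasized can be sketched as a toy system dynamics model. This is an illustrative sketch only, not the authors' model; the two stocks, their rate equations, and all parameter values are assumptions made here for demonstration.

```python
# Toy sketch of a trust-collaboration reinforcing loop (illustrative only;
# not the Luna-Reyes/Gil-Garcia model). Two stocks grow logistically, each
# feeding the other's inflow. All parameters are assumed values.

def simulate_trust_collaboration(steps=100, dt=0.25):
    trust = 0.1          # stock: inter-organizational trust (0..1)
    collaboration = 0.1  # stock: intensity of collaboration (0..1)
    history = []
    for _ in range(steps):
        # Reinforcing loop: collaboration builds trust; trust enables
        # further collaboration. The (1 - stock) terms cap each at 1.
        trust_gain = 0.3 * collaboration * (1 - trust)
        collab_gain = 0.4 * trust * (1 - collaboration)
        trust += trust_gain * dt
        collaboration += collab_gain * dt
        history.append((trust, collaboration))
    return history

final_trust, final_collab = simulate_trust_collaboration()[-1]
```

Starting from low levels, both stocks rise slowly at first and then accelerate as the loop gains strength, which is consistent with the idea that a pre-existing network (some initial trust) speeds team formation.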

Dynamics of Agile Software Development by Kim van Oorschot, Kishore Sengupta, and Luk van Wassenhove

Kim began with a discussion of the problems with waterfall development (e.g., long delays, high costs, and low quality). She said the waterfall approach is not flexible enough to deal with frequent requirement changes, and discussed how agile development claims to overcome these problems. Agile development divides the total development effort into a series of short “design, code, and test” iteration cycles (sprints), each with its own start and end dates. Kim asked, “How agile should a project be to shorten development time, reduce costs, and improve quality?” The degree of agility was defined in terms of the length of one iteration cycle; for example, one project may require ten sprints, each thirty days long. Her team asked, “What is the influence of the length of the iteration cycle (sprint period) on project performance?” Performance was measured by the overall time to complete the project, the effort consumed (in person-days), and the quality (in the number of undetected errors that cause rework).

She and her team modified an existing software project system dynamics model by Abdel-Hamid and Madnick (1991) to test the effect of different iteration periods on project performance. The periods varied from 20 to 260 days, with each agile scenario having a fixed period for all its sprints. In addition, for each agile scenario, the team looked at two cases, one with no requirements changes during the entire development effort and one with changing requirements. For the “requirements change” case, each new sprint required additional effort in modifying the work of existing sprints. For each case and performance measure, there was a U-shaped trend in performance, with the best performance occurring between 43- and 55-day sprint periods (Agile 4 through Agile 6) for the “no requirements change” case and between 52- and 87-day periods (Agile 3 through Agile 5) for the “requirements change” case. Her team concluded that mid-levels of agility outperform low and high levels of agility in terms of quality, resource costs, and development time. These results are consistent with the average schedule pressure, which is lowest for mid-levels of agility, as illustrated in the figure.

Kim gave the following explanation: low levels of agility (e.g., Agile 1) are inefficient due to inactivity caused by a distant due date; high levels of agility (e.g., Agile 13) are inefficient due to over-activity (fire-fighting).
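This explanation can be illustrated with a toy cost model of the U-shape. It is not the Abdel-Hamid/Madnick model: the per-sprint overhead term, the rework term, and all numbers below are assumptions made here purely to show how a fixed cost per sprint (favoring longer sprints) and error accumulation within long sprints (favoring shorter ones) trade off against each other.

```python
# Toy illustration (not the Abdel-Hamid/Madnick model) of a U-shaped total
# effort in sprint length: each sprint carries a fixed planning/integration
# overhead, while long sprints let undetected errors accumulate, causing
# rework that grows with the square of the sprint length. Numbers are assumed.

def total_effort(sprint_days, project_days=260, overhead=15.0, rework_coef=0.006):
    n_sprints = max(1, round(project_days / sprint_days))
    overhead_cost = n_sprints * overhead                       # per-sprint fixed cost
    rework_cost = n_sprints * rework_coef * sprint_days ** 2   # grows with cycle length
    return project_days + overhead_cost + rework_cost

# Sweep sprint lengths from 20 to 260 days, as in the paper's scenarios.
efforts = {d: total_effort(d) for d in range(20, 261, 10)}
best = min(efforts, key=efforts.get)  # sprint length with lowest total effort
```

Under these assumed parameters the minimum falls at a mid-range sprint length, with both very short and very long sprints costing more, mirroring the inactivity/fire-fighting trade-off Kim described.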

Someone asked about the assumption that thirty-five percent of all tasks in need of completion are unknown at the start of the project. Kim said that Abdel-Hamid and Madnick made the same assumption, and, since it was based on a real project, her team kept it. Someone else asked if she varied the duration of the iteration period for a given agile scenario; in many agile development projects, the early sprints are longer than later ones. She said they had not done that but that it was worth trying.

Deciding on Software Pricing and Openness under Competition, by Hazhir Rahmandad and Thanujan Ratnarajah

Hazhir said that if firms want to encourage open source contributors and complementary products, they need to decide on an openness policy (i.e., whether to share all or part of their code) and a pricing policy (e.g., how much to charge for their software over time). These decisions depend on how much the market values the software, which in turn depends on price, product feature richness, complementary software developed for the product, and installed users. As the value (utility) of the product increases, so does its share of the market. The higher the openness, the less a firm can charge; however, the higher the openness, the more non-employees (i.e., open source contributors) can help develop new features, potentially saving on development costs. Also, as market share grows, unit costs go down.
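The value-to-share logic described above can be sketched with a simple logit share model. The utility form, the weights, and the example numbers below are illustrative assumptions, not the authors' formulation.

```python
import math

# Hypothetical sketch (not the Rahmandad/Ratnarajah model): utility rises
# with features, complementary software, and installed base, and falls with
# price; market shares follow a logit split over firm utilities.

def utility(price, features, complements, installed_base,
            w_f=1.0, w_c=0.5, w_n=0.3, w_p=0.8):
    # All weights are assumed values for illustration.
    return w_f * features + w_c * complements + w_n * installed_base - w_p * price

def market_shares(utilities):
    # Logit split: each firm's share is proportional to exp(utility).
    weights = [math.exp(u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Two hypothetical firms: firm 1 is feature-rich but expensive,
# firm 2 is cheaper with fewer features.
u1 = utility(price=5, features=3, complements=2, installed_base=1)
u2 = utility(price=2, features=2, complements=1, installed_base=1)
s1, s2 = market_shares([u1, u2])
```

With these numbers the cheaper firm captures the larger share, showing how the price term can outweigh extra features and complements in the utility comparison.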

The model contains three reinforcing loops (i.e., open source support, complementary software, and network effects) and one balancing loop (development). The authors conducted a series of model runs including two firms. A base run with the two firms and a fixed openness and pricing policy helped the authors intuit the dynamics involved. In the second run, they optimized a single firm’s pricing and openness policies while keeping the policies of the second firm fixed.
They then sought a strategic equilibrium between the firms by simulating competition using a game-based approach through the following rounds:

Round 1: Firm 2 predicted the optimal policy of firm 1 (as determined in run 2 above) and optimized its own policies against firm 1’s fixed policy.
Round 2: Firm 1 predicted the optimal policy of firm 2 (as determined in round 1) and optimized its own policies against firm 2’s fixed policy.

The rounds continued in this fashion until a competitive equilibrium was reached. The authors concluded that openness is the dominant strategy when reinforcing loops are strong, yet it cuts deeply into profits when firms compete. Finding the equilibrium led the authors to conclude that it is possible to find an open-loop Stackelberg equilibrium in differential games using typical system dynamics tools.
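The round-by-round procedure can be sketched as iterated best response in a toy two-firm pricing game. The payoff function and price grid below are illustrative assumptions, not the authors' model; the point is only the alternating-optimization structure.

```python
# Toy iterated best-response sketch (not the authors' model): each round,
# one firm optimizes its price against the other's fixed price; rounds
# repeat until neither firm changes its choice (a competitive equilibrium).

def profit(own_price, rival_price):
    # Assumed linear demand: share shrinks as own price exceeds the rival's.
    demand = max(0.0, 10.0 - 2.0 * own_price + 1.0 * rival_price)
    return own_price * demand

def best_response(rival_price, grid):
    # Pick the candidate price that maximizes profit against the rival.
    return max(grid, key=lambda p: profit(p, rival_price))

grid = [round(0.1 * i, 1) for i in range(0, 101)]  # candidate prices 0.0..10.0
p1, p2 = 5.0, 5.0                                  # arbitrary starting policies
for _ in range(50):                                # alternating rounds
    new_p1 = best_response(p2, grid)
    new_p2 = best_response(new_p1, grid)
    if (new_p1, new_p2) == (p1, p2):               # no firm wants to deviate
        break
    p1, p2 = new_p1, new_p2
```

In this symmetric toy game the prices converge in a few rounds to a common equilibrium value, mirroring how the authors' rounds continued until neither firm's optimization changed its policy.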

The authors also analyzed how sensitive the competitive equilibrium was to key parametric assumptions. They learned that the direct impact of an open source community (through development support) is limited.

Someone thought the optimization approach Hazhir and his co-author used was unusual since they sought a dynamic pricing policy, i.e., the best pricing policy at different intervals of time given a constant openness policy throughout a run.

Pascal Gambardella, CSC