Programs that operate in complex and dynamic systems require responsive and nimble approaches to measurement and evaluation. It is sometimes difficult to determine which measurement and evaluation tools and practices are most appropriate, or how they can best be used for adaptive management and decision making. This challenge is an important one for our foundation to grapple with, given our commitment to measurement and the fact that many of the foundation’s programmatic areas influence or interact with complex systems change. With this in mind, the Moore Foundation’s measurement, evaluation and learning team, in partnership with the Center for Evaluation Innovation, hosted a two-day event in May aimed at facilitating discussion around measurement and evaluation for systems change. This forum brought together program staff from across the foundation as well as systems change strategists and evaluation practitioners.
In addition to establishing a common understanding of how to define and think about systems change evaluation, participants discussed four practical dilemmas that systems change efforts create for evaluation:
- How do we better understand our unique contribution to change when there are so many other actors influencing the system?
- How can we select and capture meaningful interim outcomes to understand systems change progress when we are operating in short time horizons and using strategies that have indirect impact?
- How do we set realistic boundaries around the aspects of the system that our strategy and our evaluation should focus on?
- How can we embed continuous, intentional learning into our strategy and evaluation work?
Following the forum, the Center for Evaluation Innovation developed a full analysis and executive summary detailing the results of the two-day discussion.
Conversations at the event also indicated that participants were eager to see tools and resources from their colleagues that they might apply to their own work. The most relevant resources and tools that surfaced during the discussions related to the following topics:
- Assessing contribution
- Identifying meaningful interim systems change outcomes
- Increasing confidence that an initiative or portfolio is on the right track
- Setting systems boundaries
- Strategic learning
The full list of resources is now available, and we are pleased to share it with you. We hope the forum serves as a launching point for future conversations, inside and outside the foundation, around this important and emergent evaluation topic.
Mari Kenton Wright is a measurement, evaluation and learning officer at the Gordon and Betty Moore Foundation.