Evaluating Impacts in the Circular Food Innovation Lab

By Lindsay Cole, Manager of the City of Vancouver Solutions Lab

Welcome to Learning out Loud! This is where CFIL collaborators reflect on what we’ve been learning and trying in this experimental space. Thanks for joining us on our journey, and if you have any thoughts on what you’re reading we’d be happy to hear from you!

A recurring and vexing question in social innovation labs is how to measure impacts. How do we know if what we are doing is making a difference to the complex challenges we work on? This post shares some of the ways we are working with this question in the Circular Food Innovation Lab (CFIL). We draw on the work of some wise thinkers in this space and have provided references along the way in case you are interested in learning more about any of these approaches.

The dominant way that impacts and outcomes are measured in city governments is through quantitative measures, key performance indicators (KPIs), and/or SMART (specific, measurable, achievable, relevant, and time-bound) targets. The challenge with these approaches in the context of social innovation is that they are mismatched to purpose. Social innovation challenges are complex by nature: non-linear, uncertain, ambiguous, and without a clear known endpoint. Social innovations will have a big-picture North Star or vision, for example a circular system where all organic materials are treated as food or nutrients in a cycle and the idea of ‘waste’ is no longer relevant or acceptable. So when it is not clear what ‘completion’ or ‘success’ looks like, because we perhaps can’t yet imagine it or because it is likely to be in the distant future, how might we understand the impacts of our work? This post considers three aspects of our approach to evaluation: developmental evaluation; the role of the evaluator in CFIL; and principles-focused evaluation.

Developmental Evaluation (DE)

Developmental evaluation (DE) is a type of utilization-focused evaluation, which means that the intended use for an evaluation is shaped by and with its intended users, and that this user orientation is designed into every aspect and stage of the evaluation (Patton, 1978). Preskill and Beer (2012) argue that DE is best suited for evaluating social innovation, and that five characteristics make DE distinct from other evaluation approaches:

  1. A focus on social innovations where there is no accepted model for solving the problem (and there may never be one).
  2. Continuous learning is intentionally embedded into the DE process.
  3. An emergent and adaptive evaluation design ensures that the evaluation has purpose and that it can respond in nimble ways to emerging issues and questions.
  4. The developmental evaluator acts as a strategic learning partner and facilitator, which is a different role than most evaluators and their clients are used to.
  5. The developmental evaluator brings a complex systems orientation to the evaluation.

DE provides timely and appropriate information and feedback to social innovators in order to inform and learn from the adaptive development of interventions in complex and dynamic environments (Patton, 2011). DE practitioners are often part of, or working closely alongside, the teams and initiatives that they are supporting, and their evaluative practice can become part of the innovation process itself. DE is designed to work well in conditions of high innovation, exploration, uncertainty, turbulence, rapid change, and emergence. DE processes involve asking evaluative questions, applying evaluation logic, and gathering and reporting data to support project, program, initiative, product, and/or organizational development and learning.

Some of the types of evaluative questions asked and answered through DE typically include:

  • What is developing or emerging as the innovation takes shape?
  • What variations in effects are we seeing?
  • What do the initial results reveal about expected progress?
  • What seems to be working and not working?
  • What elements merit more attention or changes?
  • How is the larger system or environment responding to the innovation?
  • How should the innovation be adapted in response to changing circumstances?
  • How can the project adapt to the context in ways that are within the project’s control? (Preskill & Beer, 2012).

Some examples of how we put DE into practice in CFIL included:

DE conversations and captures (documenting the ‘what’, ‘so what’, and ‘now what’) at significant points in the process (e.g. after gatherings with business collaborators, or at the end of each project phase). As the team grew more practiced and comfortable with this approach, the reflection, insights, and learning became richer, more interesting, and more helpful to the overall lab process.

Weekly or bi-weekly evaluative captures by the design team during active prototyping. These helped us understand how and why the prototype concepts were being developed, tested, and iterated in the ways that they were, what we were learning about the specific prototype, and what the prototype (as a fractal) was telling us about the larger systemic challenge.

Phase 2 Pivot: The first phase of prototyping gave us insights into how difficult it can be to co-design, and how challenging it can be to find a low-fidelity version of a solution to test in this complex challenge space. Based on what we saw emerge through our weekly DE captures, we shifted our original plan for Phase 2 (which was to focus on higher-fidelity prototyping) towards deeper and more committed rapid prototyping. We also added new prototype concepts to test in order to provide multiple pathways for business collaborators to stay involved in the lab if they did not have time to participate consistently in a prototyping group.

Role of the Evaluator as Systems Intervenor

The role of evaluator in social innovation work requires balancing a somewhat independent and integrative view of the work with being entangled enough with the process and the team to offer valuable insights. Forss et al. (2011) and Cabaj (2018) offer some specific ways to work with this complexity as an evaluator, including being concrete, inventive, flexible, and specific, and understanding what type of strategic approach to innovation is being used and then designing an appropriate evaluative strategy from there. Lynn and Preskill (2016) suggest that the idea of rigour needs to be reframed when evaluating complexity, with a focus on evaluating: the quality of thinking; the generation of credible and legitimate claims with transferable findings; responsiveness to cultural contexts and stakeholder values; and the quality and value of the learning process.

Some examples of how we put this into practice in CFIL included:

The role of the evaluator was held by the project lead, who structured evaluation into the overall project and process management approach.

Taking a service-oriented approach to developing, testing, evaluating, and communicating about prototypes and prototyping with business collaborators.

Regular cycles of action and reflection were part of the design process from the beginning, in order to situate ourselves, as designers, within the system we are trying to transform, and to make the evaluation results visible to the whole CFIL team as a systemic intervention in and of itself.

Principles-Focused Evaluation (PFE)

PFE focuses on principles as the object of evaluation. PFE is concerned with three primary questions:

  1. “To what extent have meaningful and evaluable principles been articulated?
  2. If principles have been articulated, to what extent and in what ways are they being adhered to in practice?
  3. If adhered to, to what extent and in what ways are principles leading to desired results?” (Patton, 2017, p. ix).

Patton says that effective principles provide meaningful guidance, are useful and inspiring, are developmentally adaptive, and are evaluable (Tamarack, 2018). PFE puts the principles of the evaluators themselves in a more central position than other evaluation frameworks do, and in doing so provokes some interesting thinking about making values and principles explicit in complex adaptive system interventions. By focusing on principles as the evaluand, this framework provides space and structure to surface different and conflicting values, allows a wider and more diverse range of cultural perspectives to be present, and opens up different ways of being and knowing. This may lead to shifts in some of Meadows’s (2008) higher-leverage points, including values and mindsets, and may create impetus for reframing work on complex challenges from different principled groundings.

Some examples of how we put PFE into practice in CFIL included:

When setting up the lab, we articulated some core principles and a North Star describing what we are working towards.

The design team is regularly reflecting on the extent to which principles related to transformation of the food system are being enacted in the prototypes.

Through systems mapping, we identify the deeper values, mindsets, and principles that shape the more visible recurring patterns of behaviour and events that keep the system stuck. Different team members hold different values, purposes, approaches, and perspectives on the work we are doing together, and this process helps us to surface, explore, and work with those differences.

Disclaimer: the opinions and perspectives expressed within each of these posts are solely the author’s and do not reflect the opinions and perspectives of all CFIL participants.

References

Cabaj, M. (2018). What We Know So Far: Evaluating Efforts to Scale Social Innovation. Here to There Consulting.

Forss, K., Marra, M., & Schwartz, R. (Eds.). (2011). Evaluating the Complex: Attribution, Contribution, and Beyond. Comparative Policy Evaluation, Volume 18. New Jersey: Transaction Publishers.

Lynn, J., & Preskill, H. (2016). Rethinking rigor: Increasing credibility and use. Spark Policy Institute + FSG.

Meadows, D. H. (2008). Thinking in systems: A primer. Abingdon; London: Taylor & Francis Group.

Patton, M. Q. (1978). Utilization-focused evaluation. Beverly Hills: Sage Publications.

Patton, M. Q. (2011). Developmental Evaluation. New York: The Guilford Press.

Patton, M. Q. (2017). Principles-focused evaluation: The GUIDE. New York: The Guilford Press.

Tamarack (2018). Webinar on Principles Focused Evaluation with Michael Quinn Patton and Mark Cabaj. Retrieved December 2018 from: http://www.tamarackcommunity.ca/library/webinar-principles-focused-evaluation
