Sitemap

What should be the role of evaluation in a changing global context?

Changes ahead for the world — and evaluation needs to respond (Photo: Patricia Rogers)

This blog shares an edited version of my presentation as part of a panel session on ‘The role of evaluation in a changing global context’, organised by the Independent Evaluation Group (IEG) of the World Bank as part of its 50th anniversary celebrations. The session involved Raj Kumar, Founding President and Editor-in-Chief of Devex; Andrea Cook, Executive Director of the United Nations Sustainable Development Group System-Wide Evaluation Office; and Juha Uitto, former Director of the Global Environment Facility’s independent evaluation office; and was moderated by Marie Gaarder, Executive Director of the International Initiative for Impact Evaluation (3ie).

You can find the recording of the session here and a blog from 3ie summarising the session here.

I’ll be expanding on these points next week at the Canadian Evaluation Society conference in Fredericton, Canada in a keynote titled “Reimagining evaluation for an uncertain future”. So I’d be interested in any comments or good examples you might suggest of what evaluation needs to do more of.

If you can’t make it to Fredericton, you can register for virtual live attendance at: the Opening Ceremony and keynote by Dr. Roland Chrisjohn on “Beyond narratology: reclaiming mixed methods within Indigenous science” (Monday); my keynote (Tuesday); and the plenary panel on “Questioning and Navigating Ethics in Evaluation” and Closing Ceremony (Wednesday). Cost: CES members CAD$50; non-members CAD$100; students, new practitioners, and the Global South CAD$10. Details here.

Here’s what I used my 6 minutes to cover:

We are at a crisis point for development and hence for evaluation. The previous speakers, Andrea Cook and Juha Uitto, have talked about the crises that confront us in terms of environmental sustainability, including climate change, equity and inclusion, and consequential increases in hunger, poverty, fragility, and violence.

At such a time, we cannot afford business as usual for evaluation. The traditional way that we have been doing evaluation will become less and less useful in confronting these crises.

Today I’d like to talk briefly about four ways we need to do evaluation differently, and some global ways of working together to do this.

Firstly, we need better theories of change. These need to recognise that development is always an intervention in complex systems, and is affected by fast, disruptive change. Our theories of change need to reflect the complicated and complex nature of development, so we understand how development happens and how to make it work better.

We need to address complication — not just ‘what works’ but ‘what works for whom in what contexts’. Our theories of change need to show the interconnected chains of causal links which lead to a range of outcomes and impacts in different contexts, and show the networks of actors, contributing factors, events and actions, not only our development intervention.

These different results in different contexts are not ‘nuances’, as they are sometimes called, but fundamentally important differences. If we care about equity and about translating evidence to new contexts, we must do this.

We need to address complexity — developing ongoing answers to questions about ‘what is working and what is likely to work’ under conditions of ongoing uncertainty and emergence. These are situations that call for adaptive management, being able to act despite uncertainty, not by pretending it doesn’t exist but by supporting iterative action and review.

Secondly, we need to change evaluation from its emphasis on providing a snapshot out of the rear window to actually supporting collaboration and learning to act and go forward. We need processes for periodic reflection on what’s working, what fits with how we understand things and what doesn’t. How do we make sense of that? What do we need to do next?

Evaluation is therefore not just about producing evidence, but about supporting learning processes. We need to embed evaluation in planning, design and implementation. We need to engage non-evaluators in the process, support their use of evidence with regular cycles of reflection, review and revision, and engage the wider community to encourage support for development investments and commitments.

We also need more rapid evaluation, more real-time evaluation, and more early-stage evaluations. Better use of technology, especially GIS and AI, can help speed up data collection, coding, analysis, synthesis and reporting, always with the caveat that these need to be done under expert human oversight, not as a replacement for it, and that issues of privacy and confidentiality need to be managed.

Thirdly, we need to be able to draw on diverse types of evidence and synthesise them effectively. This means we need to consistently advocate for situationally-appropriate methods, designs and processes, not a hierarchy of evidence or research designs which leads to leaving out important evidence and drawing erroneous conclusions. And we need to synthesise them using rubrics, or global scales, which can bring together diverse evidence in a way that is transparent, replicable, empirical and based on explicit criteria.
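As a purely illustrative aside (not from the talk), the logic of a rubric-based synthesis, where diverse types of evidence are rated against explicit, transparent criteria and levels, can be sketched in a few lines of Python. All criteria, performance levels, scores and evidence items below are invented examples.

```python
# Illustrative sketch of rubric-based synthesis: every criterion, level and
# rating here is hypothetical, not drawn from any named initiative.

RUBRIC = {  # explicit, transparent performance levels per criterion
    "effectiveness":  {"excellent": 4, "good": 3, "adequate": 2, "poor": 1},
    "equity":         {"excellent": 4, "good": 3, "adequate": 2, "poor": 1},
    "sustainability": {"excellent": 4, "good": 3, "adequate": 2, "poor": 1},
}

# Diverse evidence items each rate a criterion, whatever their study design,
# so no evidence type is excluded by a hierarchy of designs.
evidence = [
    {"criterion": "effectiveness",  "rating": "good",     "source": "trial"},
    {"criterion": "effectiveness",  "rating": "adequate", "source": "case study"},
    {"criterion": "equity",         "rating": "poor",     "source": "interviews"},
    {"criterion": "sustainability", "rating": "good",     "source": "monitoring data"},
]

def synthesise(rubric, items):
    """Average the rubric scores per criterion (None if no evidence)."""
    result = {}
    for criterion, levels in rubric.items():
        scores = [levels[i["rating"]] for i in items if i["criterion"] == criterion]
        result[criterion] = sum(scores) / len(scores) if scores else None
    return result

print(synthesise(RUBRIC, evidence))
# e.g. {'effectiveness': 2.5, 'equity': 1.0, 'sustainability': 3.0}
```

Because both the criteria and the mapping from ratings to scores are written down, the synthesis is transparent and replicable: anyone can trace each summary figure back to the evidence items and levels that produced it.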

Fourthly, given the urgency of the crises facing us, we need to look at issues of environmental sustainability and equity in all evaluations, to ensure that there are not significant negative impacts from our interventions, and to learn better about how to address these issues. This is not a call to double the size and duration of all our evaluations. Nor do we want to see a tick-and-flick compliance exercise. We need a sensible screening process to identify major issues, and then to draw on diverse evidence, knowledge and perspectives to address them.

The good news is that there are a number of initiatives working on ways of addressing these issues. Let me finish by quickly naming 12 of these.

1. As Marie mentioned in her introduction, there is the methods menu developed by the Millennium Challenge Corporation and 3ie, which focuses on evaluation methods for advocacy and policy influence, where counterfactuals are not possible and other rigorous methods are needed.

2. There is the work of the Causal Pathways Initiative which focuses on methods for understanding how, why, and under what conditions change happens or has happened, and that emphasises actors’ narratives, and inclusive and participatory processes.

3. There is the UNDP M&E Sandbox on M&E for uncertain and complex contexts, especially supporting adaptive management.

4. The UNFPA adaptive evaluation resource pack which supports real-time learning and adaptation, and capturing complexity and contributions in unpredictable environments.

5. CECAN — The Centre for the Evaluation of Complexity Across the Nexus which has been transforming the practice of policy evaluation across the food, energy, water and environmental domains, to make it fit for a complex world.

6. The RAMESES projects (Realist And Meta-narrative Evidence Syntheses: Evolving Standards), about how to synthesise diverse evidence to answer complex questions.

7. The Equitable Evaluation Initiative, which focuses on how evaluation can be a conduit for and of equity.

8. The Footprint Evaluation Initiative, which aims to embed environmental sustainability in all evaluation.

9. Blue Marble Evaluation, a global initiative focused on training the next generation of evaluators to Think Globally, Act Globally and Evaluate Globally.

10. The International Academy of Evaluation, a global movement of volunteers committed to evaluation policy influence, professionalism, and transformative impact.

11. The most recent initiative: the Earth Day Evaluation Declaration for evaluators and evaluation commissioners, launched just a few days ago, which promotes Environmental, Social and Economic Sustainability and Regeneration Criteria in Evaluations.

12. And of course close to my heart, BetterEvaluation, now part of the IEG’s Global Evaluation Initiative, which aims to both bring together materials on the full rainbow of methods, processes and approaches, and also to support building the evidence base about when to choose different methods and how to use them well.

Together we can all contribute to building this evidence base to support appropriate choice and use of methods.

This is not likely to take the form of a decision-tree algorithm, but rather of decision support: issues to consider in terms of the nature of what is being evaluated, the nature of the evaluation, and resource availability.

Imagine if all evaluations reported not only their methodology but also why that combination of methods was chosen in that context.

Imagine if all evaluation conferences included some collaborative work on methods for particular situations, showcasing and analysing examples of practice.

I believe that together we can learn how to make these changes to evaluation practice and encourage everyone to engage in these and other initiatives — to share your experiences and learn about new approaches and methods to better address the challenges facing development.

Please add your comments below, especially to suggest other good resources and examples of better ways of doing evaluation, or other initiatives working on this.


Written by Patricia Rogers Better evidence use, better world

Former Professor, Public Sector Evaluation, RMIT University. Founder, BetterEvaluation, global open access knowledge platform on evaluation methods & processes
