
Young Voices for Development: An evaluation of the evaluator’s toolbox

This blog was written in September 2021 by Marieke Pijnenburg, Policy Advisor Results Based Management at the Bureau of International Cooperation of the Dutch Ministry of Foreign Affairs; Laurens Kymmell, Policy Officer SEAH Taskforce at the Department for Stabilisation and Humanitarian Aid of the Dutch Ministry of Foreign Affairs; and Luciano Rodriguez Carrington, PMEL Officer at RNW Media.

Every day it is becoming increasingly clear that traditional approaches to development are too slow to keep up with today’s global challenges. To tackle these challenges and build future solutions, we need young changemakers who dare to think outside the box. That is why Disrupt Development and the Advanced Master in International Development (AMID) of Radboud University Nijmegen have joined hands to amplify the voices of young development professionals. In this Young Voices for Development blog series, young professionals of the AMID Young Professional programme take the stage to write about groundbreaking solutions and inspiring innovations. In this blog post, Marieke Pijnenburg, Laurens Kymmell and Luciano Rodriguez Carrington dive into an evaluation of the evaluator’s toolbox. (The original blog post is available on the AMID website.)

Impact evaluation in international development has come a long way, with a myriad of methodologies and tools developed over the past decades: from randomised control trials to social network analysis. But while the evaluator’s toolbox seems exhaustive at first glance, a closer look shows that there remains considerable room for improvement. This entails critical reflection on the relevance of existing tools, the need for new, innovative tools that go beyond the well-established evaluator’s toolbox, and a call for greater emphasis on downward accountability and learning instead of an overemphasis on upward accountability. This blog guides you through these heated MEAL discussions with six concrete suggestions. Do not be afraid: jump into the MEAL Pandora’s toolbox. See you on the other side.

Do not take quantitative methods at face value

Within international development, there is an increasing focus on the use of quantitative methods such as randomised control trials (RCTs), quasi-experimental designs (QEDs) or natural experiments. These emulate scientific experimental approaches used in settings such as medical trials in order to control for factors that are not under direct experimental control. Methods such as RCTs have in many ways been ground-breaking: random assignment within large samples ensures, in principle and on average, that the differences measured between two groups are due to the intervention and nothing else.
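To make that “in principle and on average” concrete, here is a minimal simulation sketch (our own illustration with made-up numbers, not part of the original blog): people differ on an unobserved baseline, assignment is a coin flip, and the simple difference in group means recovers the true average effect.

```python
# Minimal sketch of why randomisation works, with hypothetical numbers:
# random assignment makes the two groups alike on everything except the
# intervention, so the difference in means recovers the true effect.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 10_000                    # hypothetical sample size
true_effect = 2.0             # hypothetical average effect of the intervention

baseline = rng.normal(50, 10, n)               # unobserved differences between people
treated = rng.integers(0, 2, n).astype(bool)   # coin-flip assignment
outcome = baseline + true_effect * treated + rng.normal(0, 5, n)

estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated effect: {estimate:.2f} (true effect: {true_effect})")
```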

Despite being useful in some contexts, RCTs have too often been hailed as a “silver bullet” or “gold standard” for impact evaluations (even winning a Nobel prize!). By labelling them as purely “rigorous” and “scientific”, their limitations have often been ignored, dangerously so. RCTs might be useful to determine the success or failure of a specific programme, but they leave out the ‘why’ and ‘for whom’ questions entirely. They focus on the average result across an entire population, whilst the impact of an intervention differs from person to person. Understanding this heterogeneity is crucial for international development policy and programmes.
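Extending the same hypothetical sketch shows how an average can mislead: assume the intervention helps one subgroup and harms another. The overall estimate still looks positive, which is exactly the heterogeneity a population average hides.

```python
# Sketch (made-up numbers): a positive average effect masking subgroup harm.
import numpy as np

rng = np.random.default_rng(seed=7)
n = 10_000
subgroup = rng.integers(0, 2, n).astype(bool)   # two hypothetical population groups
treated = rng.integers(0, 2, n).astype(bool)    # randomised assignment

effect = np.where(subgroup, 4.0, -1.0)          # +4 for group A, -1 for group B
outcome = rng.normal(50, 10, n) + effect * treated

for mask, name in [(subgroup, "group A"), (~subgroup, "group B")]:
    est = outcome[mask & treated].mean() - outcome[mask & ~treated].mean()
    print(f"{name} effect: {est:+.2f}")

overall = outcome[treated].mean() - outcome[~treated].mean()
print(f"overall average: {overall:+.2f}")       # looks fine, hides the harm to group B
```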

Moreover, it feeds into the technocratic idea within development that focuses only on what is numerically measurable, ignoring the often systemic, political and messy context. To give a simple example: if children keep becoming ill, you can give them medicine that might cure the symptoms, but it might be the water supply, or contamination from elsewhere, that causes them to fall ill again and again. The problem: an intervention focussed on giving medication fits the requirements of an RCT well, but the more systemic, messy and political approach that would target the water supply system does not. Interventions and research of the latter kind of course exist, but do not carry as much political or academic clout. Donors also play a crucial role in this, often demanding straightforward interventions that show mostly measurable, quantitative results. NGOs and implementing partners therefore tend to focus on upward accountability towards donors, concentrating on what is best for reporting, fitting into tight results frameworks and gaining more funding, instead of downward accountability: what is best for, and needed by, the people they are aiming to support.

Use qualitative methods to deal with context and complexity

Everyone knows the stereotypical images of failed development projects: donated technology lying around completely unused, or programmes only reaching or benefitting the more advantaged people within communities. These might be clichés, but they still occur too often, due to one common factor: a lack of understanding of local context and dynamics. This is where qualitative methods such as focus group discussions, participatory group methods or key informant interviews are crucial. These need to be used in parallel with quantitative methods to allow for formative, in-depth research that offers insights into local norms, context and preferences. Process and participatory evaluation at the outset would allow projects to ensure an appropriate design and baseline, in addition to trouble-shooting as the project develops. These methods also lend themselves to a better understanding of the theoretical pathways and assumptions of projects, often with a focus on the contribution of activities and interventions instead of the direct attribution that is easily measured and quantified.

Combine methodological strengths

In the ideal situation, you mix and match according to the objectives of the evaluation, preferably combining methods that complement and triangulate one another to come as close to ‘measuring reality’ as possible. But simply combining some quantitative and qualitative methods unfortunately does not cut it. While upward accountability systems often dictate that a baseline is set before any qualitative analysis can be carried out, the evaluation itself would benefit more from an approach where the qualitative part is conducted first, so that it can feed into the quantitative part.

An innovative mixed-methods approach is Q-methodology, which illustrates how one can achieve the best of both worlds. Q-methodology takes the participant’s viewpoint into account while allowing the qualitative responses to be quantified through the use of a framework. Yet while it takes more into account than most methods, it still forces lived experiences into a set framework. These tensions show the need to further develop these and other tools, and to critically evaluate the context- and programme-specific applicability, shortcomings and strengths of existing tools.
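For intuition, here is a minimal sketch of the quantitative step in Q-methodology, using fabricated sorts (the grid and numbers are our own simplification, not a prescribed Q design): each participant force-ranks the same statements along a fixed grid, the sorts are correlated person by person, and shared viewpoints emerge as factors.

```python
# Sketch of the quantitative step in Q-methodology, with fabricated data.
import numpy as np

rng = np.random.default_rng(seed=1)
n_statements, n_participants = 20, 8

# Each column is one participant's forced Q-sort: a shuffled grid of ranks
# (a simplified forced quasi-normal distribution from -2 to +2).
grid = np.repeat([-2, -1, 0, 1, 2], 4)
sorts = np.column_stack([rng.permutation(grid) for _ in range(n_participants)])

# Correlate participants (not statements): who sorted the statements alike?
corr = np.corrcoef(sorts.T)

# Principal components of the person-by-person correlation matrix; with real
# sorts, high loadings flag participants who share a viewpoint ("factor").
_, eigvecs = np.linalg.eigh(corr)               # eigenvalues in ascending order
loadings = eigvecs[:, ::-1][:, :2]              # loadings on the two largest factors
print(np.round(loadings, 2))
```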

Look beyond the evaluator’s toolbox

It may seem that the Monitoring, Evaluation, Accountability & Learning (MEAL) toolbox out there is enormous. In reality, however, methods that live up to the specific needs of today’s complex and wide-ranging development interventions are actually quite scarce. Beyond the well-established tools there is a growing need for context-specific tools that adapt to rapidly changing environments and available technologies. ‘Development practitioners’ should therefore crack open the evaluator’s toolbox, with a tool, to find new tools.

A concrete example is Social Listening, a news and social media monitoring approach that processes large volumes of digital data to give insights into the priorities and sentiment of specific target groups, and into successful strategies to engage them on social media channels. Another innovative ‘out of the toolbox’ example is the Social Norms Exploration Tool (SNET), a participatory guide that provides a systematic investigation into social norms for successful programme implementation. Social listening, largely borrowed from social media and market research experts, and SNET, purposefully developed by SRHR experts with support from USAID, show how we can create, as well as ‘borrow’, relevant tools from within and beyond the development sector. This underlines the importance of finding new, innovative tools that keep up with an increasingly complex reality, technological advances, and knowledge beyond that of the well-established evaluator’s toolbox.
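As an illustration of the mechanics behind social listening (a toy sketch of our own, not any specific platform or its API): tally which topics recur across public posts and attach a crude sentiment signal.

```python
# Toy sketch of social listening: count topic mentions and crude sentiment
# across a (hypothetical) stream of public posts.
from collections import Counter

posts = [  # hypothetical scraped posts
    "Clean water access is still our biggest problem",
    "Love the new youth training programme, great work!",
    "The clinic is too far away and the water is unsafe",
]
topics = {"water": ["water"], "health": ["clinic", "health"], "education": ["training", "school"]}
positive, negative = {"love", "great", "good"}, {"problem", "unsafe", "bad"}

topic_counts, sentiment = Counter(), Counter()
for post in posts:
    words = [w.strip("!,.") for w in post.lower().split()]
    for topic, keywords in topics.items():
        if any(k in words for k in keywords):
            topic_counts[topic] += 1
    sentiment["positive"] += sum(w in positive for w in words)
    sentiment["negative"] += sum(w in negative for w in words)

print(topic_counts)   # which priorities come up most often
print(sentiment)      # crude overall sentiment signal
```

Real social listening tools add scale, language processing and platform integrations, but the core logic, aggregating digital traces into priorities and sentiment, is this simple.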

Remain agile and flexible

Impact evaluation methods such as RCTs demand rigid project implementation, barely allowing change within the project. Once the baseline is set, the evaluator's preference is not to alter the programme, so that the mid- and end-term evaluations remain comparable, which lends itself to easier impact measurement. Impact measurement is important and useful, but it should not constrain the flexibility and adaptability of programmes. Project planning, no matter how thorough, is based on the limited data available. Combined with an often difficult and unpredictable implementation environment, this means that there is usually room for improvement along the way. Shahu et al. (2012) showed that the cost of building in flexibility is much lower than the cost of managing unexpected changes during project implementation, and that projects that include a flexibility approach have a higher success rate than those run in rigid systems. Built-in flexibility allows adjustments to be made in order to reach the best outcomes, and allows creative responses to opportunities that might not have been anticipated during the planning phase. This is crucial to the success of a complex development programme and therefore a bare necessity (Boakye & Liu, 2015). Methods should be adapted to, and benefit, interventions, rather than constrain impact measurement, learning and ultimately the projects themselves.

Ensure a focus on downward accountability, not just upward accountability

We call for a move away from mostly upward accountability towards a greater focus on downward accountability, by investing in learning to maximise impact (yes, we dare to say impact). These two types of accountability have the potential to go hand in hand. However, the current system, in which donors expect their (taxpayers’) money to be well spent, fosters the need for the development sector to ‘prove’ itself, minimising the space left for ‘lessons learned’, as these are often seen as money not well spent. Did they miss the memo that lessons learned are quite important for improvement? If donors are as fixated on realising maximum impact as they claim to be, they should be more open to accepting downward accountability, which is equally important for impact. A focus on upward accountability alone also often leads to over-quantifying results that might not be that quantifiable, rendering findings from evaluations meaningless.

A ‘toolful’ way forward

We hope that this blog has provided you with critical insights into the landscape of MEAL methods in international development. Based on the key arguments we have just laid out for you, we call for: 1) a careful evaluation of which (combination of) tools could be relevant, based on their strengths and limitations; 2) flexibility as a built-in requirement of programme design, to account for the complexity of projects and their contexts; 3) going beyond the existing evaluator’s toolbox to create (please let us know!), adjust or borrow new, innovative tools; and 4) a greater emphasis on downward accountability and learning, instead of an overemphasis on upward accountability.

If you made it to this side of the MEAL Pandora’s toolbox evaluation, congratulations! We are very proud of you and hope that you feel better equipped to use these calls to action to critically evaluate the methods and accountability emphasis of your organisation’s MEAL work. Whether you are a MEAL expert or a rookie, if you have any questions, comments or ideas on this topic, please feel free to reach out to us!

Sources

Boakye, L. G., & Liu, L. (2015). Governance of tomorrow’s international development projects (IDPs): Flexible or rigid? 2, 55–61.

Shahu, R., Pundir, A., & Ganapathy, L. (2012). An Empirical Study on Flexibility: A Critical Success Factor of Construction Projects. Global Journal of Flexible Systems Management, 13(3), 123–128. https://doi.org/10.1007/s40171-012-0014-5