[Contribution] Evaluations and research during crisis?
By Korea Herald | Published : May 4, 2020 - 09:40
Lenin once said, “There are decades where nothing happens and there are weeks when decades happen.”
One of the first and most overlooked casualties of a humanitarian crisis is applied research: research that can tell us what works, for whom, how much, why and under what circumstances. Indeed, a recent article in the journal Science describes scientists having to abandon projects they have invested lifetimes in because of COVID.
This is not new. Most humanitarian crises witness this phenomenon, and humanitarian and development agencies later rue the lost opportunity of knowing what projects, investments and interventions work in stressed, oppressive and stretched environments -- and how, and how much.
Like lab rats, we are condemned to repeat the experience of wringing our hands -- and when another crisis comes, we still do not know what we could have done better. COVID, like other humanitarian crises, dims our memories of times gone by and makes us second-guess our decisions. Behavioral science helps explain this “recency bias.”
But the questions that plague scientists, researchers and evaluators alike are: Should we continue with our evaluations of international development and climate investments, and if so, why?
My team at the Green Climate Fund and I have been asking the same questions, and here is the answer to the first one: yes, we should evaluate investments and programs -- even more so now. Here are a few reasons.
First, during times of crisis, the pressure on resources becomes even greater. Knowing what works and what doesn’t can help us understand where precious (and increasingly scarce) resources should go. Evaluations do just that. For example, Doocy and Tappis (2017) synthesize evidence of the comparative advantages of cash transfers vs. in-kind transfers vs. food transfers in humanitarian contexts. It is work such as this that is helping agencies understand what can work best in different types of crises. One insight: Cash transfers may have long-term beneficial consequences that had previously been ignored.
Second, evaluations can help us understand why something is working. By looking closely at pathways of change and using tools such as counterfactual analysis and implementation research, we can understand how those pathways are affected during a crisis. This can then help mitigate risks. For example, an assessment of what worked during the 2004 tsunami in India and in island countries showed that it wasn’t international assistance that helped people become resilient: rather, it was social and local networks.
Third, evaluations can help us understand how much of a difference, relatively, interventions and programs make. When combined with cost data, evaluations can support cost-effectiveness estimates so that resources go to the most effective and cost-effective programs possible. Although very little of this has been done in climate finance, there are great examples in development that we could and should imitate. For example, when policymakers want to increase school enrollment, a variety of options are available -- improving teacher quality, giving scholarships, providing school meals, etc. What should one do? Applied research has provided the answer.
Most importantly, evaluations are done with a set of values in mind -- equity, access, sustainability and innovation. These values become even more important during times of crisis. Applied research and evaluations can help assess how we can realize these values in the best way possible. In the case of an assessment of an alternate dispute resolution mechanism set up by the UNHCR in strife-torn Liberia, the study found that the mechanism worked for some categories of disputes, such as land disputes. On the other hand, it significantly increased the incidence of witch hunts and “trials by ordeal.”
Overall, we have to recognize that one size does not fit all. In low-income countries, for example, lockdowns are likely to be far less effective as a response to a pandemic than in high-income countries. As an illustration -- particularly relevant to when we do have a vaccine for COVID -- previous evidence on vaccination has found far lower coverage rates in low-income countries than in high-income ones. In the related case of childhood vaccinations, evidence shows that different strategies are required to ensure complete coverage in these countries, such as including parents and community members in regular conversations along with reminder cards, providing household incentives, doing home visits and integrating immunization with other services.
Indeed, neglecting the good applied, context- and time-specific research that can help us understand what works during times of crisis, how much and why leaves us unprepared for the next crisis. We ignore this at our own peril.
By Dr. Jyotsna Puri
Jyotsna Puri is the head of the Independent Evaluation Unit of the Green Climate Fund. The views reflected in this article are her own. -- Ed.