In this slide deck from the American Evaluation Association’s 2020 Virtual Conference, Headlight’s CEO and Director of Strategic Learning, Rebecca Herrington, shared action-oriented best practices for addressing a commonly critiqued challenge in qualitative evaluation work: rigor.
As researchers and evaluators work to monitor, evaluate, and learn from work in the field, we often collect qualitative data from beneficiaries and stakeholders in the form of focus groups and key informant interviews. Many are hesitant to trust findings and recommendations drawn from qualitative data and critique the qualitative methods used as insufficiently rigorous. We absolutely must be thoughtful about the levels of confidence and the strength of the findings and conclusions being drawn, to ensure that evidence is used appropriately and that we understand what we do and do not know. But there are also ways to make the analysis process within qualitative methods more rigorous: meeting international best-practice standards and achieving acceptable minimums for sampling and triangulation before putting forward findings and recommendations. Donors in the international development field also frequently want to see impact in terms of numbers, but those numbers don’t always tell us much about programming quality, beneficiary experiences, or why and how change is happening on the ground.
Qualitative work can provide a deeper level of nuance and more appropriate, detailed recommendations for adaptation or scale if it is rigorous, and that rigor depends on the process used by the evaluator or researcher. Currently, rigorous qualitative techniques see very low application across the evaluation field, both in guidance and understanding from donors (see the Raising the Bar evaluation report from the Alliance for Peacebuilding) and in the training of evaluators (Dewey et al., 2008). Headlight wishes to share what we have learned about improving rigor in qualitative approaches to contribute to the advancement and continued growth of the evaluation field, and to provide session attendees with practical ways to apply these lessons learned in their own work. The slides here from Rebecca Herrington’s 2020 AEA presentation provide a detailed look at processes that ensure rigor across qualitative methods at all four stages of qualitative work: 1) design, 2) implementation, 3) coding, and 4) analysis. For more detailed information on ensuring qualitative rigor, check out Designing Rigorous Qualitative Practice part 1 and part 2.
Dewey, J. D., Montrosse, B. E., Schröter, D. C., Sullins, C. D., & Mattox, J. R. (2008). Evaluator Competencies: What’s Taught Versus What’s Sought. American Journal of Evaluation, 29(3), 268–287. https://doi.org/10.1177/1098214008321152