By Dr. Yitbarek Kidane Woldetensay, SDRM-SI DE Team Lead, Headlight Consulting Services, LLC.
No single evaluation approach is best for all projects, nor is there a one-size-fits-all "good enough" approach; matching an evaluation approach to a particular project is about finding the "right fit." Choosing the right evaluation approach for a proposed project is crucial for producing proper, credible, and rigorous evidence. Developmental evaluation (DE) is not appropriate for every initiative, so assessing whether a project, program, or initiative fits DE before accepting a DE award or Developmental Evaluator position is essential. Always start by asking the following two questions:
- Is DE the right fit for the proposed project?
- Is there readiness to implement DE?
In this blog, we describe the kinds of projects for which DE is the right evaluation approach and why it is the appropriate choice. We will cover the details of readiness to implement DE in our next blog.
Is DE the right fit?
The niche for DE is relatively small and particular. DE provides a helpful approach for real-time projects in times of uncertainty, flux, and change, where the way forward is unknown and likely to shift constantly. The role of DE is to support the ongoing development, innovation, and change required to enable such projects to adapt to the emerging and complex environments in which they are situated. By contrast, DE is unsuitable when the primary objective is traditional accountability against performance indicators that are highly prescribed from the outset. Using DE requires recognizing that a particular situation is complex and that a rigid, preconceived design may not work. DE is the right fit in the following cases.
DE is the right fit if the project/program is complex and/or is implemented in a dynamic environment
Some evaluation approaches help interventions measure whether they have reached their predefined outcomes. However, complex systems change may require the redefinition of outputs and outcomes. Complex projects often have no clear way forward, with outcomes emerging non-linearly as elements of the system adapt and evolve in response to changing circumstances. This kind of intervention responds well to probe-sense-respond management approaches, in which development practitioners experiment, gather information, and act accordingly. Midterm and end-line evaluations can therefore occur too late for complex interventions or innovations to aid in programmatic fine-tuning. DE is particularly well-suited to managing complexity: it provides a potential tool for reducing high levels of uncertainty through core evaluation skills, such as asking evaluative questions and using continuous data collection and analysis to provide near real-time feedback loops to implementers. So, one of the questions to ask in assessing whether DE is the right evaluation approach for a project is "Is the situation complex and emergent?"
DE is the right fit if the initiative tests new approaches/innovations
Funders frequently operate in rapidly changing environments that require innovative and dynamic programming, which may not have tested theories of change or fully developed designs. Innovations can take the form of initiatives, programs, projects, policies, collaborations, and interventions. Innovations involve uncertain outcomes and unfold in situations where stakeholders typically disagree about the nature of the problem and what should be done to address it. These two dimensions, degree of uncertainty and disagreement, define the zone of complexity.
DE is an increasingly used approach for evaluating innovative and emergent programs and projects. It enables evaluators to provide real-time feedback so that evaluation findings can guide development and adaptation. DE emphasizes learning and adaptation and aligns well with innovation platforms that build continuous reflection, learning, and adaptation in as a specific design principle. In contrast to other evaluation purposes, DE serves a distinct purpose in supporting innovation development. Summative evaluation renders overall judgments of merit, worth, and significance. Formative evaluation supports improvements in a model; improving a model, especially to get it ready for summative evaluation, serves an essential purpose, but it is not a developmental purpose. DE enhances innovation by gathering and feeding in data as part of an ongoing, informative process: resolving the uncertainty associated with innovation and facilitating program co-creation between the clients and the Developmental Evaluator.
Of course, DE is not appropriate for all innovation work; we should assess the conditions surrounding the innovation to see whether DE is suitable.
DE is the right fit if there is buy-in from decision-makers and implementers
Buy-in is a somewhat nebulous concept that we use to describe support for, agreement with, or even enthusiasm for the process and/or results of the DE among DE stakeholders. DE stakeholders include the DE funder (the person or organization funding the DE), the program team(s) being evaluated, staff in the program team's broader operating unit or organization, other implementers or partners who may participate in data collection or use the DE findings, the Developmental Evaluator, and the technical and management team supporting the Evaluator. Buy-in for the DE process means that stakeholders believe in and are committed to the evaluation design (i.e., the questions, data collection, and iterative feedback approach), the person or people who carry out the evaluation, and the deliverables produced by the evaluation.
Lack of buy-in for the process can result in small and large consequences for the DE, e.g., delaying or complicating the data collection, inhibiting implementation of adaptive actions from DE findings, etc. Lack of buy-in for the process begets lack of buy-in for the results (i.e., use of the findings, conclusions, or recommendations for learning or improvement). At best, a lack of buy-in for results means that the DE serves no purpose and is thus a waste of resources. Worse, it can harm the program or beneficiaries if no corrective actions recommended by the DE are taken. In serious cases, there may be retaliation against the Evaluator (the primary person conducting the DE) or other stakeholders, especially if there are negative findings.
DE is the right fit if the project’s procurement mechanisms allow for flexibility in implementation
DE is a highly versatile approach and is well suited for programs under flexible procurement mechanisms in which implementation is likely to change in response to emerging conditions on the ground. DE is beneficial in programs with untested or incomplete theories of change, where objectives may shift in response to contextual changes and implementers and/or program managers are “building the plane in the air.” Generally, given the innovation and complexity orientation, DE is best suited for organizations with financial and contractual structures to allow for adaptation of the process or intervention.
In general, though interest in DE is increasing among evaluators and non-evaluators alike, it is not suitable for all contexts. As described, while DE originated to serve complex, innovative programs, it can only do so successfully when the organizational context is appropriate. We hope this blog helps you better understand the conditions under which DE is the right evaluation approach. In the next blog, we will explain the details of the DE readiness assessment. Stay tuned!
Dozois, Elizabeth, Marc Langlois, and Natasha Blanchet-Cohen. 2010. DE 201: A Practitioner's Guide to Developmental Evaluation. Montreal: The J.W. McConnell Family Foundation & The International Institute for Child Rights and Development.
Gamble, Jamie. 2008. A Developmental Evaluation Primer. Montreal: The J.W. McConnell Family Foundation.
U.S. Agency for International Development. 2019. Implementing Developmental Evaluation: A Practical Guide for Evaluators and Administrators.
Patton, Michael Quinn. 2010. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: The Guilford Press.
Patton, Michael Quinn. 2011. Essentials of Utilization-Focused Evaluation. Los Angeles: Sage Publications.
Questions? Please email us at email@example.com