Learning to Make Our Assessments and Evaluations More Effective for Our Clients through Better Contextualization

By Maxine Secskas, CLAME Specialist, Headlight Consulting Services, LLC

This blog came about in response to my interest in learning how to get better feedback and insight from a client while designing an assessment or a tool. Last year, I was disappointed to realize that a very involved assessment and analysis I was conducting for a client was unlikely to be as targeted or helpful as we commit ourselves and our services to be here at Headlight. I suspect the final report will end up gathering dust on a shelf, a box ticked because the client felt it was necessary. I felt the issue lay in our client's unfamiliarity with the framework we were using and in my own lack of experience in teasing out their critical feedback during the design phase. This blog shares how I came to that realization, how I identified the issue, the lessons I have learned from discussions with more experienced colleagues, and the new approaches I intend to take moving forward. I hope reading this can help stave off similar experiences for other junior evaluators and analysts, or serve as a helpful reminder for those with more experience.

Click here to jump to the end and view the recommendations I will carry forward in future efforts.

“Why did we do this assessment?” Not a question we want to get.

As a junior evaluator at Headlight, I recently conducted an assessment for a client, and towards the end of the process, questions began to arise about the intended uses of the findings and whether the assessment was really helpful. I had provided technical support to the client through the design phase and drafted an interview protocol based on the framework the client was interested in; local enumerators had collected and translated the data; and I was well into the analysis. At this late stage, however, the conversations at our weekly check-in with the client had started to turn to why we were doing the assessment, why we chose the stakeholder categories used for the key informant interviews (KIIs), and whether the findings would be actionable for the target audience, as well as to questions about the utility of the framework we were using.

While we had taken the opportunity during the design and inception stages to discuss our vision for the end uses and to gather the client's input on the logistics of the assessment, we received no significant input to shape the design at that time. The client was unfamiliar with the framework the assessment was based on and deferred to our technical expertise and experience. Only towards the end of the process did we realize that the protocol could have been simplified, that the scope of the questions could have been narrowed to better target the client's intended uses, and that we should have had a clearer plan for the products or deliverables that would be most impactful for the client's end users at the conclusion of the assessment.

What we did and where the gaps were – a need for deeper understanding and a systemic approach.

In reflecting on the process we had followed, I wondered: what could we have done differently to elicit more feedback from the client during the design phase? I believe that feedback could have helped us shape a more use-focused and impactful product from the start. At the time, I had drafted an interview protocol based on a few past examples of this type of assessment and on all of the elements of the framework the client was interested in. I then shared that draft protocol with the client, accompanied by prompting questions about local context and intended use to guide their review. We also held weekly check-in meetings with the client, but we did not have a dedicated session to review and gather feedback on the protocol, only a verbal query about whether they had questions or feedback for us to incorporate before finalizing the tool. In the end, the client offered feedback only on what demographic information to collect, and none on the content of the assessment itself.

As a junior evaluator who wants to ensure that my work is targeted and impactful for partners and clients, I am left with two questions: How do we solicit meaningful input and collaborative feedback using an approach that reduces burdens (time, novelty, information overload)? And how can I best guide clients through new methods or approaches and solicit feedback beyond, “It looks good; we trust your expertise”?

In conversations with colleagues at Headlight, it seems that getting collaborative input from partners and clients can be a challenge for professionals across the development field. The trouble with adapting existing tools for specific contexts and uses appears to be less a lack of buy-in or commitment from a client than a gap in understanding the purpose of the assessment and the intended uses of the findings that would guide further contextualization. It is, at its heart, a systems-thinking issue: the client and the evaluators are not adequately designing for the desired end result and are thinking too prescriptively, divorced from the implementation context. I know now that I need to start asking questions earlier to connect the process to the end use and to think more critically about how I involve local partners, clients, and end users from the beginning.

Lessons learned from colleagues – tie it to the client's frame of reference, ensure a hands-on pilot for feedback, assume a need for continued adaptation, and much more.

To move beyond my own budding insights, I decided to gather lessons learned from more experienced colleagues. I spoke with Dr. Yitbarek Kidane Woldetensay, Headlight Developmental Evaluation Lead for Disaster Risk Management in Ethiopia, who recently contextualized an Organizational Capacity Assessment (OCA) for a fellowship program at universities in Ethiopia, and Rebecca Askin, Headlight Senior CLAME Specialist, who recently built a nuanced Developmental Evaluation Readiness Assessment, adapted from the USAID CLA Maturity Assessment.

Highlights from the Discussion with Dr. Woldetensay

Dr. Woldetensay told me about his experience in this recent effort to adapt and contextualize an Organizational Capacity Assessment for universities participating in an internship program:

“It is good to use tools that have already been tested. I don’t recommend drafting tools from scratch. We need to identify if one has already been developed…

Once we identify an example to start from, a focus on objectives is important. What do the goals mean? What are the dimensions? This will give us the background to have a good tool. We need to identify the objective of the assessment… What are the questions and objectives, the intent? …

At the design stage, it is important to involve the client. For their buy-in. Read their [strategy] documents (theory of change, results framework, etc.). When you talk about their strategy, they are interested… So link what you are saying to their [strategy] documents. If they feel it is their own, they will feel responsible…

It is a challenge to get feedback from the client. Especially from program people – when we talk about data, they may feel it is not their responsibility. So, in that case, it is good to prepare an exercise that will involve them in responses. Prepare a question exercise to engage them rather than just asking about the protocol. If they participate as a group, they will feel more responsible for participating… So, after they mapped their interventions, I analyzed them, put them in Excel, populated the interventions in one column, and reviewed their suggestions to identify what was missing… We also provided an in-person Training of Trainers for the universities. Since this OCA will be facilitated by partner universities, we conducted a 3-day training. We gave them the draft questionnaire to review and updated it with their feedback. So it was a second opportunity with stakeholders to improve the tool…”

Highlights from the Discussion with Rebecca Askin

My conversation with Rebecca Askin focused on her past experience with numerous efforts that used contextualization approaches and on her recommendations based on what she had seen work well:

“I will start by saying that there is a pathway, from a need to a goal…  Have we really understood what is being asked for? We need to ask a LOT of questions. Don’t make assumptions to fill in the blanks. You have to diplomatically question and push on assumptions. If I am designing, I need to know that the people I have in my mind are really THE people who will be present and engaged when the tool is being used. Take the time to interrogate who will touch the tool and use the tool, and have them participate in the contextualization process. Get as specific as possible.

Talk to people on the client team. Have they tried to do something like this before, what worked or didn’t, what were the challenges? This gives us useful info. Get specialist and technical input, but also take the tool to the NON-experts who understand what it will feel like in the room when the tool is used. Those who know what people using the tool will resist and what concepts need further explanation. We need to test the tool with those who will administer the tool as well.  

We should assume that the tool is not totally fit-for-purpose. We have more success when we make that assumption, so build in a mechanism for feedback moments. If I am designing the tool but not administering it, I build in check-ins. Set the expectation that this is not a perfect tool or process, and there are likely to be points where the people working with it will be unclear or prefer to go about things differently. We need to be open to that and invite their feedback. That will take time and resources, which should be built into any workplanning of the effort. At some point, the tool needs to be good enough to use. So we need to know when the set feedback points are, or the hard cutoffs, to keep the process moving forward.

Often people don’t understand the kind of feedback that I want. We need to pay close attention to the client’s level of engagement (how actively they are asking questions). Are people just “yes-ing”? When I ask them for feedback or thoughts, are they not providing anything specific? We need to have heightened awareness and reflect on the level of feedback we’re getting. When the feedback is minimal, often the client is either too busy, trying to avoid a delay, or not clear on what the tool you are working on is. There might not be anything you can do about that; you might not have the relationships to probe. But the best thing I find is, whatever time I allocated to processing feedback, I will reallocate to demonstrating the tool, so if the team is not doing their own critical thinking, we need to find a creative way to encourage that thinking. So find a way to demo – a less passive way than waiting for feedback. Provide a safe space for them to engage with the tool. I will ask them questions: can you imagine if you are the people facilitating the tool? Or the people who will be answering questions – put yourself in their position, imagine you are them; would they be confused by any piece? Are there any concepts they would find hard to understand?… Also, keep track of the changes we are making and the reasons that each change was made. It does not need to be complex, but a paper trail: what was changed, when, and for what reason. If we are doing many iterations, we may lose track of the versions. This can help us stay grounded in the choices we are making…”

Carrying These Contextualization Lessons Forward

Now that I have processed the realization that my effort would have greatly benefited from more input from my client at the outset, here are some of the lessons and approaches that I intend to carry with me into future work:

Lesson #1: An understanding of the objectives is essential. When designing an assessment or a tool, we need to ensure that we really understand what is being asked for, ideally by asking a lot of questions. We need to pay close attention to our client's level of engagement in the design, and if we suspect that they are approving everything without adaptations out of expediency or a lack of understanding, we need to engage them in a different manner (see Lesson #3). We should also speak with a diverse group of people involved in the effort to capture needs that may not be fully articulated by the project manager or the MEL manager alone.

Lesson #2: Link the design of the assessment or tool and its objectives to the client's strategy. We should be familiar with the client's strategy documents, such as their Theory of Change and Results Framework, and be able to articulate how our work ties to their broader goals and portfolio. Doing this should help the client feel that the work is their own and will matter for their day-to-day efforts, and it may help them feel more engaged. A client has to understand how the assessment or tool will contribute to their end goal, and this is a useful way to convey that.

Lesson #3: Hands-on engagement with the assessment or tool can help engage the client and elicit their feedback. Facilitated, hands-on sessions are especially useful for people who are new to a particular approach or technical method. Preparing an exercise that involves them in responding can help them better understand how the assessment or tool operates. This connects with Lesson #1 above: facilitating technical and strategic sessions is key to understanding how our effort connects to the client's larger system of work, and it enables them to give more effective input.

Lesson #4: Plan for iterations within the time allowed. If we establish the understanding that the assessment or tool has room for improvement, and we build in a mechanism or timeframe for feedback and iteration, our clients will have more opportunities to give feedback and will feel encouraged to do so. Iteration may be limited by the project timeline, as there may be a cut-off date for final improvements to tools before implementation. It may also be limited on short-term projects, where we need to consider the trade-offs carefully at the outset.

Lesson #5: Keep a change log. We should keep track of the changes we make to an assessment or tool and the reasons each change was made. It does not need to be complex – a simple record of what was changed, when, and why (for example, a row noting that a set of questions was dropped because it fell outside the client's intended use) is enough. Documenting our rationale in this way is helpful for future decision-making and adaptation, and it keeps us from losing track of versions across many iterations.

Are there other suggestions that you might recommend from your experiences? Please comment below to add any additional thoughts. And stay tuned for our next post coming soon!



Comments

  • Joy Garba

    Great learning article on evaluations.

  • Amanda Satterwhite

    Thanks for this great post! As the USAID “client,” it is wonderful to see partners pushing us to do assessments that make sense – not just tick a box. Thanks for helping to move all of us in that direction!

  • Michael

    Hello Maxine. Thank you for sharing your experience. I can tell you that almost all junior and senior evaluators have gone through this experience. I have over a decade of experience in MEL and 5 years of experience in research, assessment, and evaluation, and the most important thing for me in meeting my clients' needs is to have clear and concise research objectives. Very good objectives mean I can design tools to respond to the objectives and have a thorough review session with the client to see if the tools will help us ‘answer’ those objectives.

    Another approach that helps me get my client's buy-in is the use of what I call “Back-up Planning”: I categorize each question in my tool against the research objectives to see if the questions are adequate to help me answer the research questions.

    Finally, I want to say I love your last lesson on keeping a change log. Surely I am adopting this going forward. Thank you!