Establishing Evaluation Offices Part 2

By: Cindy Clapp-Wincek, Senior Evaluation Advisor

What can help to build an agency culture that values, conducts, and uses evaluations?

Particularly for an agency that has not had evaluation before, strengthening evaluation knowledge and culture can be an uphill climb. I have recently worked with an agency that had not been doing monitoring or evaluation to any meaningful degree. Congress finally required an evaluation policy and a program to support it, so we’ve begun working together to establish what this looks like within their constraints. Field staff understandably viewed this as an unfunded new mandate: an additional burden on top of already full work schedules. The requirement is indeed a hardship until the system responds with resources, staff, and training, but evaluation must be implemented even while the system is still catching up to the mandate. To successfully build an evaluation culture, it is necessary to bring on new resources, support iterative familiarization efforts for staff at all levels, and invest in a broader organizational change effort. Only when evaluation becomes a senior management priority AND a requirement for receiving any additional budget is progress likely to be made.

This blog shares some of the lessons I’ve learned over the years that might jumpstart your efforts to build an enduring culture that values, conducts, and uses evaluation to improve and streamline programs. If you are involved in a new or “renewed” evaluation office, there is a “golden hour” of opportunity to capitalize on all the attention you can get. More about that below.

Importance of a champion in senior management. This is particularly important in the early days of an evaluation office. Most Federal budget processes are based heavily on consistency with past budgets to avoid reopening contention between winners and losers. Without a senior champion, it will be hard in any office to get sufficient resources and staffing. While I was head of evaluation at USAID, the Administrator of the Agency, Dr. Rajiv Shah, was a great champion of evaluation, which led to real and meaningful increases in budget and staffing. Champions can also provide great visibility to evaluation. At USAID, we ran a competition to identify the strongest evaluation reports of that year and the winners presented their reports to the Administrator. The fact that the Administrator took the time to meet with the teams sent a message throughout the organization about the importance he placed on evaluation.

Introducing evaluation, performance monitoring, and learning to new hires, and conveying that the use of solid information and learning is intrinsic to every job in an organization. Introducing staff to the importance of evaluation, monitoring, and learning from their first day sets an expectation that this is a priority for every job regardless of title. Staff do not all have to be experts in evaluation methods or data quality assessments, but they should all be expected to know the sources of solid evidence related to their jobs, review them on a regular basis, and modify programs and budgets accordingly. This needs to be an expectation of all team members from the very beginning, conveyed through onboarding training and upheld by their managers.

Build a sense of community with those that conduct evaluations – this includes managers on the ground who commission evaluations, direct-hires as well as contract staff who conduct the evaluations for your agency, and any technical staff within your agency who may advise program managers or who may be parts of evaluation teams. Communities share information and support, particularly when a critical evaluation creates controversy.

I have observed that contract staff are most often left out of the community and its ongoing conversations, usually on the stated grounds that their objectivity should not be compromised. But knowing the evaluation priorities, policies, and staff inside the agency makes contract evaluators better informed, which in turn helps them provide better, more use-focused, accessible evaluations to that agency. After the DoD Security Cooperation office started commissioning evaluations, it convened a series of meetings of the Federally Funded Research and Development Centers (FFRDCs), a specific group of public-private partnerships, that were conducting the initial evaluations. This gave the evaluators an opportunity to learn from each other and to establish norms and standards in an environment where implementing evaluation was very new.

Ongoing webinars and brown bags to bring in new ideas on evaluation. The evaluation field continues to evolve at a rapid rate. New methods, approaches, and practices are continually being developed, with a focus on methods better suited to tackling complexity. Support staff as they try to keep up with the new ideas that may be of greatest use in your environment. The American Evaluation Association is a great place to identify new ideas and the people who present them well.

Have an evaluation agenda that is developed with all the key stakeholders. What do the decision-makers need to know? What are the most significant or most impeding gaps in evidence? To be carried out effectively, an evaluation agenda must include decision-makers at various levels, not just those at the top of the food chain. While leadership may have the biggest decisions to make, staff at lower levels should use evaluation findings in their work closer to the project or implementation. These practical, on-the-ground changes are what make the biggest difference. All key stakeholders should be part of the process of identifying evaluation questions and prioritizing which evaluations get conducted.

Minimize jargon. Jargon is a useful shorthand if you know the lingo but can be a real barrier if you don’t. Evaluation and performance monitoring have entire dialects of their own. These have varied by agency and can lead to confusion. Years ago, a group of NGOs got together and created a “Rosetta Stone” that cross-walked jargon used by about a dozen bilateral international donors in an attempt to address this problem. It is not the words that are important but the actions, principles, and standards that actually guide the evaluation work. Evaluation policy should be built on and use the words of the agency’s implementation guidance.

Balance learning and accountability. Frequently it is said that the two fundamental purposes of evidence/evaluation in a federal system are Learning (more on this in blog 3) and accountability. Being accountable to taxpayers is an important principle that guided my actions throughout my years as a Federal evaluator. That said, too much emphasis on accountability can make agency staff very nervous, feeling that someone is looking over their shoulder to judge them. There are clearly demonstrated instances of staff “modifying” data and putting off evaluations in order to avoid the scrutiny. While they should be accountable, some failure is a natural part of learning and innovating. In trying to find the right balance, I tended to put the emphasis on what they could learn from the evidence that would help them make more of a difference with their programs – encouraging them to embrace accountability but not in so many words.

Fight the natural tendency for evaluation to always be last. Even in the “OMB Implementation of the Foundations for Evidence-Based Policymaking Act of 2018” (M-19-23), evaluation was part of Phase 4. We think of evaluation as coming after implementation has been completed, and that seems to push it to the end of all of our discussions. When evaluation is left for last, we see all too often that resources have run out, timing gets squeezed, and things could have gone better if planning for evaluation had taken place at the beginning. Indeed, implementers should be thinking about evaluation throughout implementation: what do they need to know that could make their work stronger? Could a mid-term evaluation refocus and strengthen efforts? Would a developmental evaluation that runs throughout implementation provide crucial feedback loops for a more innovative management approach? Evaluation doesn’t have to come at the end, so fight the tendency to let it be last in every meeting agenda, discussion, and budget allocation.

Building an Evaluation Culture: Sprinting and Sustaining

As I said in the first blog, building a culture is a long, slow process – a marathon not a sprint as we have been known to say. I’ve learned these ideas over four decades. Not all of these recommendations will work at any given time. You need to look at your agency and figure out what will work best at this time. As evaluators, we then review and make adjustments, perhaps adding in new ideas as new opportunities arise.

There is a small golden window of opportunity when a new evaluation office or function is created (or rebooted). The attention that comes with newness is a real opportunity to get out the important messages that are part of building evaluation culture. It is also likely to be one of your best chances to secure resources and to get staff, new and old, into training. Sprint when the opportunities arise and keep the attention for as long as you can. Conversely, slow and steady work in the lean times builds the culture and provides perhaps less (but still good quality) information for decision making. And do remember that the cycle will turn again.

Is your organization establishing or reprioritizing an evaluation office, implementing new evaluation policies, or going through other related organizational behavior change shifts? Headlight would be more than happy to support you and apply our expertise and experience to help make this change easier for you. For more about this, please contact us at info@headlightconsultingservices.com. For more on the issues and decisions for Federal evaluation offices, please register and join us for our AEA session on Wednesday, October 28, 2020 at 8 AM Eastern.
