Establishing Evaluation Offices Part 1

By: Cindy Clapp-Wincek, Senior Evaluation Advisor

Evaluation has now been around a good long time. Some Federal evaluation offices have been around for decades, some agencies do not yet have evaluation offices, and some agencies have had evaluation offices that have started and stopped. In any of these contexts, there are enduring issues. What choices face managers of evaluation offices? What criteria do they use to make those choices? In this series of blog posts, I will explore the issues outlined below and provide lessons and ideas for managers of evaluation offices to tackle these issues.

  • How is evaluation organized, funded, and staffed within your agency? (blog post #1, keep reading!)
  • How do you build a culture that values evaluation and evidence in ways that foster the creation of good evaluations as well as their use in decision making? (blog post #2, forthcoming)
  • How do you balance evaluation with performance monitoring, evidence, data, and learning? Although each of these is connected to the others, a manager of an evaluation office has to make choices on where to put resources. Some aspects of the choices are conceptual; many are practical. (blog post #3, forthcoming)

My own experience comes from the world of foreign assistance – unique, at least, in the Federal context. In this field of work, we do not use block grants. All evaluation exists in a cultural context, and ours are more diverse than most given the global nature of our work.

Twice, I co-authored meta-evaluations of USAID’s and other foreign assistance agencies’ evaluation systems. As a result of the second paper, I became the Director of USAID’s reboot of its monitoring and evaluation (M&E) functions, serving as Director of the Office of Learning, Evaluation, and Research from 2011 to 2014. Starting in 2015, I advised the Security Cooperation Assessment, Monitoring, and Evaluation Office in the Office of Policy at the Department of Defense (DoD) on creating and establishing their new office. The decisions I made at USAID and the work I did to support DoD Security Cooperation offer lessons for other evaluation offices, wherever they may be in their establishment journeys. Although these ideas and lessons are meant to take the long view, they must now be considered in the context of the Evidence Act of 2018 and the Office of Management and Budget (OMB) guidance to agencies in the Executive Branch.

Enduring Issues and the Challenges that Ensue

With the Evidence Act, we know that Federal evaluation offices will have an evaluation policy (or putting one in place will be their first task) and a Chief Evaluation Officer. That means they will start (or should start) with someone knowledgeable about evaluation and with clear expectations for how the agency will handle evaluation. For any evaluation office at any time, a number of decisions will need to be made about how evaluation is organized and staffed:

  • Evaluation Task Choices: Does the evaluation office have staff members who conduct evaluations themselves (either with staff doing the data collection or staff contracting and managing evaluations), or does it set policy, build capacity (e.g., managing training programs), foster an environment and culture for evaluation, and provide support and expertise for others in the system doing evaluation work? Each of these tasks requires different skills and probably different levels of resources, all of which come into play in making decisions.
  • Evaluations versus Other Evidence: Will evaluation be housed with requirements for evidence, performance monitoring, and/or learning, i.e., real-time information for adaptive decisions? If so, how do you balance evaluation with the other types of evidence needed and used (performance monitoring, data gathering, and learning)? Although each of these is connected to the others, there are conceptual differences (e.g., what level of rigor, objectivity, and analysis you need) and practical differences (e.g., real-time learning versus the many months needed to complete a “rigorous, independent” evaluation). A manager of an evaluation office has to make choices on where to put resources. Those choices must factor in OMB guidance, any recent Government Accountability Office (GAO) or Inspector General (IG) audits, Congressional interests and expectations, the priorities of the agency’s senior leadership, and other relevant stakeholders.
  • Centralization or Decentralization: Will evaluation resources be centralized or decentralized? Will the predominance of evaluation resources be concentrated in one centralized evaluation office, or will the different parts of the agency have their own evaluation staff and resources embedded within their teams? A centralized office can be more visible to senior decision-makers, consolidate a higher level of expertise, and provide a clear focal point for the agency. A decentralized model can bring evaluation closer to the managers who should be learning from the evaluations and provide a source of assistance that doesn’t have to compete with the rest of the agency. What is the right balance for your agency and its level of resources?
  • Staffing Resources: Can you get sufficient full-time equivalents (FTEs) for direct-hire staff, or will you have to rely on in-house contractors and unique hiring categories such as Presidential Management Fellows? With no existing personnel code for “evaluator,” which approach will be most successful in getting you the type of expertise that you need? If yours is an office that conducts evaluations in house, how will you get the right mix of methods and fieldwork skills? Over the long term, there are issues of career paths for evaluators; I myself left USAID in 1991 after 14 years because there was no career path. (See OMB’s criteria on staffing in its Phase 1 guidance, issued July 2019.)
  • Budget Competition: How can evaluation compete with other budget priorities? Implementation folks, in particular, want greater budgets for what they are trying to achieve – not necessarily recognizing that better evidence and evaluation can improve the effectiveness and efficiency of investments and catalyze outcomes. The very rhythm of the annual budget cycle tends to keep the focus on what is viewed as paying off most quickly, which can leave the comparatively large budget for a multi-year evaluation study struggling for priority in budget decisions. This pressure adds to the ongoing challenge of the immediate drowning out the important.
  • Use of Evaluations and Evidence: What kinds of steps do you need to take to ensure that the evidence you are working so hard to create is used in decision making? Do you develop relationships between your office and the implementers? Do you provide a variety of briefing pieces, podcasts, brown bags, and other products derived from evaluation and other data gathering? Can you highlight cases in which evaluations and/or good evidence were used and made a difference in outcomes? Experience has shown that it is important to understand what types of information decision-makers need, when they need it, and in what format, so that the information is actually useful and used rather than pushed aside to gather dust on a shelf. Think in terms of different segments of staff: senior decision-makers, on-the-ground staff, and agency technical staff take different kinds of actions and make different types of decisions, requiring tailored information and timing. Be aware of the organizational behavior change and further resource investments that may be required to strengthen a culture of evidence-based decision making.
  • Leadership: Who directs and manages the evaluation office? The issues outlined above should make clear the diversity of challenges the head of the evaluation office will face. Many of the choices require knowledge of evaluation, both in theory and in practice. To be successful, the head of evaluation will also need to know the agency, the other actors within it, and Federal bureaucratic procedures (such as budgeting and hiring).

Take the Long View

This is a considerable list of choices and decisions. Teasing out the considerations for each of these critical management concerns should support creating new evaluation offices or strengthening existing ones. Each agency’s situation is different, and the choices made should be well-grounded in the specifics of your agency.

A bit of advice from my experience: evaluation has cycles of ups and downs. Work hard to make a difference when evaluation’s priority is up, and know that at least some staff will have learned and will continue good evaluation practices even when it is down. And be ready for when its star rises again.

In the next post in the series, I will talk about what I learned about building a culture that values, conducts, and uses evaluations. The third and final blog in the series will explore some of the challenges of balancing different types of evidence and learning. For more engagement around this topic, I will also be co-hosting and presenting a three-agency discussion with Tom Chapel of the Centers for Disease Control and Prevention and Katherine Dawes of the US Environmental Protection Agency, held virtually at the American Evaluation Association Annual Meeting. If you are interested, please join us for session 1695 on Wednesday, October 28, 2020, from 8:00 to 8:45 AM Eastern. For other questions, please reach out to us at info@headlightconsultingservices.com.

