Summit Hosts Live Webinar on the Alchemy of Program Evaluation

October 12, 2022 Summit Consulting


On October 6, Summit hosted a live webinar with four in-house experts discussing evidence-based program evaluations. The experts drew on their real-life project experience to discuss the different types of program evaluations, how to design cost-effective evaluations, the advantages and challenges of qualitative and quantitative methods, and what’s happening right now in evidence building across the federal government, including interagency efforts to advance programs and policies.

Our experts included senior manager Bálint Pető, MPP, PMP; senior consultant Kassim Mbwana, MPP; senior consultant Teresa Kline, MA, PMP; and director Sarah Cunningham, CGFM, MPP. The panel was moderated by director of client development Kate Lynch Machado. Below are some key takeaways from the webinar.

Key Takeaways

How do I get started?

If an organization hasn’t done many program evaluations before, getting started with the right research questions and the right methodologies can feel daunting. But it doesn’t have to!

Sarah emphasized how many resources exist for getting started: reviewing existing evaluations for ideas on what's already out there, having conversations with other entities—both inside and outside your organization—working on similar programs, and not discounting the importance of small wins.

Bálint discussed some of the ways agencies can build their own capacity for conducting evaluations, highlighting Summit’s work setting up a program evaluation function for the Consumer Product Safety Commission (CPSC). By starting with a needs assessment to understand the CPSC’s main goals for its program evaluation capacity, and by training CPSC staff on program evaluation, Summit has been helping the agency build a culture of evaluation.

What if data collection is too expensive?

One audience member asked how the team approaches evaluation needs when there’s a clearly defined research question but a small budget for data collection and limited existing data. As Kassim explained, understanding the purpose of the question and the intent behind the desired answer can help you make the most of any administrative data you already have. Teresa built on that point, discussing how to get creative in picking the right mix of primary data collection methodologies: depending on your needs and the population, a survey or a handful of in-depth interviews might be the most cost-efficient choice.

How do you show stakeholders the connection between program activities and macro-level changes?

When conducting an impact evaluation, the key goal is to understand whether the program had the impacts it was designed to have, including whether it reached the intended population. One audience member noted that stakeholders may have a strong interest in the outcome and asked how to show the connection between the macro-level changes and actual program activities.

From a methodological standpoint, Bálint noted that this is one of the biggest challenges of impact evaluations. For example, when evaluating an unemployment insurance program, many confounding factors stemming from the underlying economy affect the job market, which makes it challenging to know whether the unemployment insurance program itself is responsible for the outcomes. Bálint discussed statistical methods, such as choosing the correct counterfactual, that evaluators use to disentangle the program’s effects from those of other processes.

From a project management perspective, Teresa highlighted how logic models can be a great tool for visually depicting the connections from program activities all the way through to outputs and outcomes. Often created in the discovery phase of an evaluation, logic models help key program administrators visualize the intended connections, which then serve as hypotheses to be tested during the evaluation as evaluators ask whether those connections hold up in real-life program implementation.
