CENTERING LIVED EXPERTISE: CO-DESIGNING WITH SURVIVORS
Co-design is a process that actively involves all stakeholders – survivors, children, advocates, administrators, policymakers – to ensure that...
An evaluation plan identifies the context in which your program operates; the evaluation's purpose, goals, and the questions it will examine; the evaluation approach (design, data collection strategies, and analysis plan); a communication strategy; the resources needed, both financial and personnel; and a timeline.
The following outline, drawn from the CDC's six-step framework for program evaluation in public health, summarizes the typical steps involved in conducting an evaluation of your program:
1. Engage constituents, including those involved in program operations, those served or affected by the program, and primary users of the evaluation.
2. Describe the program, including the need, expected effects, activities, resources, stage, context, and logic model.
3. Focus the evaluation design to assess the issues of greatest concern to constituents while using time and resources as efficiently as possible. Consider the purpose, users, uses, questions, methods, and agreements.
4. Gather credible evidence to strengthen evaluation judgments and the recommendations that follow. These aspects of evidence gathering typically affect perceptions of credibility: indicators, sources, quality, quantity, and logistics.
5. Justify conclusions by linking them to the evidence gathered and judging them against agreed-upon values or standards set by the stakeholders. Justify conclusions on the basis of evidence using these five elements: standards, analysis/synthesis, interpretation, judgment, and recommendations.
6. Ensure use and share lessons learned with these steps: design, preparation, feedback, follow-up, and dissemination.
Culturally specific organizations and programs, especially those serving Latinx communities, may find Esperanza United's Building Evidence Toolkit particularly helpful. It uses a recipe metaphor to guide users through each stage of an evaluation, from developing your logic model to collecting data and sharing your findings. Content and resources are offered in English and Spanish.
One of the first steps in planning your evaluation is identifying and engaging your program's constituents (those who have a vested interest in and care about the results of the program). These constituents can offer valuable feedback on how your program works, what activities it engages in, and what the program expects to achieve as a result of programming. They are often involved in guiding the goals and aims of the evaluation and can provide insight into the context in which the program operates.
Who cares about the program?
What do they bring to the table?
Whose voices are missing?
How will they be involved?
What time and resources do they have available to participate?
If they are program participants, how will they be compensated?
Below is an example of how an ongoing evaluation might engage various groups of program constituents:
A core component of any evaluation plan or program development strategy is a clear description of the program. Most program and evaluation plans will outline the program's mission, goals, objectives, activities, and resources. Your evaluation will rely heavily on the program objectives to identify indicators of success, the key metrics against which the program's quality and merit will be judged. Two common tools for describing a program are the logic model and the theory of change.
A logic model is a brief (usually one-page) document that shows how your program works: what you will use (inputs or resources), what you will do (activities or strategies), what you will create (outputs), and what you will achieve (outcomes or results). Logic models are pragmatic and show, in a linear sequence, how each step of the program leads to the end results. A logic model essentially captures and illustrates what is involved in implementing the program and what the program aims to achieve.
A theory of change (TOC) is a more comprehensive description of how and why the program will lead to the desired program goals. Like logic models, theories of change are often presented visually, but they also include rich narrative that outlines the context in which the program operates and the program's underlying assumptions (why it is expected to reach its goals). A theory of change essentially explains a developer's working hypothesis about why a program is going to work, under what conditions, and how the program's strategies and activities cause or contribute to the outcome of interest.
Similarities
Differences
Key takeaway: Both are helpful, but logic models are clearer about how programs DO the work and less clear about WHY it works.
Resources for describing your program with a logic model:
–Using Logic Models by Child Welfare Information Gateway offers links to resources about developing and using logic models.
–Guiding Program Development with Logic Models by the W.K. Kellogg Foundation, a 5-page brief from Kellogg's comprehensive Logic Model Development Guide.
More on describing your program with a Theory of Change
What’s next?
Consider selecting the goals of your evaluation, identifying what questions you have about your program or strategy, and choosing your evaluation approach. An evaluation plan can help you identify all of these components.