CENTERING LIVED EXPERTISE: CO-DESIGNING WITH SURVIVORS
Co-design is a process that actively involves all stakeholders – survivors, children, advocates, administrators, policymakers – to ensure that...
Understanding our program’s impact on the families we serve is essential to effectively supporting survivors and fulfilling our mission of ending violence against women and children. Evidence building is a learning mindset that offers a set of approaches to support change agents to identify, focus, and improve the relevance and impact of their design, programs, and strategies for the people and communities they serve. It combines elements of program development, monitoring, and evaluation to help programs become more effective. It can also be very helpful in making the case for, and acquiring, funding and other resources for our programs. When paired with organizational strategy, evidence building can accelerate the impact of our programs and strategies. Within the Promising Futures landscape, evidence building is a process of exploring, questioning, learning, innovating, iterating, and meaning-making done with impacted people and their communities.
This introductory guide to evidence building focuses on strategies for the design, development, and evaluation of change efforts, intervention strategies, and programming implemented in the domestic violence field. It intentionally and explicitly centers what we believe to be the core competencies of violence prevention and intervention work: culturally responsive, survivor-centered, trauma-informed, and developmentally appropriate design and programming for parent and child survivors of family violence.
The Promising Futures community is informed by a set of Guiding Principles that influence our approach to everything, including evidence building. In particular, the Storytelling principle elevates the use of a wide range of evidence building strategies to share the impacts of our work.
Storytelling: Capture stories and spread their impact using a wide range of interpersonal, cultural, and research and evaluation approaches
Storytelling is one of the most fundamental forms of communication. From the first cave paintings created many thousands of years ago, to the oral traditions of Indigenous cultures, to the modern narratives and numbers found in quantitative and qualitative research and evaluation, storytelling has made meaning and shaped thinking. Stories reflect values, ways of viewing the world, and culture. Formal, dominant-culture narratives have historically shaped the ways that evaluation is conceptualized and data are collected, analyzed, and interpreted. However, storytelling also provides real, rich, and important information. Storytelling requires the use of a wide range of approaches to knowledge production that:
Do not rank or value formal forms of information more highly than informal ones, so that the breadth and variation of human experiences can be captured;
Ensure different forms of data (qualitative, quantitative) and methods of inquiry (experimental, narrative, participatory action) are utilized in research and evaluation; and
Honor and protect people’s stories when they will be used or retold, by engaging them from start to finish in their use.
Evidence building, at its core, is about fostering cultures of learning and continuous improvement. It engages stakeholders (especially those most impacted by the issues the change agent is trying to address) to integrate continuous learning with organizational, program, movement-building, or policymaking strategy.
Change agents working across multiple settings (e.g., programs, coalitions, advocacy networks, courts, health, education, policy) to support parent and child survivors of family violence can use evidence building to better understand:
Who should be at the table with me?
What is the most important problem to solve at this time and why?
What strategies or program activities are most effective for the issue(s) I’m trying to solve or change?
Is this program or strategy improving the lives of parent and child survivors in a meaningful and relevant way?
What is the impact of the program or strategy, and are these impacts similar across all target populations? Why or why not?
Evidence building includes a vast toolbox of approaches (methods) that can be categorized into four primary areas:
1) Foundational fact finding
2) Process and performance monitoring
3) Evaluation and learning
4) Research and policy analysis
Each area can also inform and direct the others, and some strategies may fall into more than one area (Young, 2021).
Foundational fact finding helps us better understand the context for change. It can include gathering information with the community served about their needs, priorities, and assets; the resources available; and other contextual and cultural elements that may influence whether a program will be successful.
You may already be acquainted with different forms of foundational fact finding. For example, when agencies consider launching a new program for survivors, they may conduct a needs assessment or engage in appreciative inquiry or asset mapping to understand the needs and strengths of the community they hope to serve. Beyond direct programming, foundational fact finding can also be used to craft strategy, such as when a coalition surveys its members to identify key areas to target in policy advocacy efforts.
Foundational fact finding can help you answer questions such as:
You can use foundational fact finding to:
Resources and tools to support foundational fact finding include:
Process and performance monitoring is used to capture trends in how our programs and strategies are progressing or, in some cases, evolving. It includes the ongoing tracking of important information related to program implementation or delivery. These data often include information about the quantity and types of services being provided and the number of people being served. Many of these data are likely already being captured in your agency’s database (e.g., Salesforce or Apricot) or written records. Sometimes change agents track more comprehensive data as part of their performance monitoring, including information about outcomes or changes in survivors’ goals, priorities, needs, well-being, views of self, behaviors, and relationship dynamics. Other times, change agents may track information from other systems or programs to assess performance, such as whether supports and services were offered to survivors, whether those supports and services were culturally responsive or trauma informed, or what types of barriers keep survivors from accessing the services and supports they need for their own safety and wellbeing as well as the safety and wellbeing of their children.
Insights gleaned from regular monitoring of these data can be used to inform real-time course corrections and guide program improvements. Keeping an eye on these types of data can also help us easily communicate our progress to grant funders and other fiscal sponsors, our board members, and community members.
Questions that process and performance monitoring can help answer:
Are we capturing the correct key measures?
What progress is the implemented approach making toward the key objectives and measures?
Who is the program reaching? Who is being missed?
Are there unintended or negative consequences for users?
Are services administered and delivered in equitable ways?
Are implementers of the program being adequately trained and supported?
Evaluation and learning involve the systematic assessment of the strengths and weaknesses, and the effectiveness and efficiency, of a program, a program’s components, or an organization’s strategy, along with action steps that use the results of that assessment to make changes, improving both the experiences survivors have when they engage with a change agent and the conditions under which those experiences are offered.
Evaluations come in all shapes and sizes. For some, evaluating our own work can feel overwhelming. However, evaluations provide an opportunity to meaningfully learn about what works and doesn’t work, for whom, and under what conditions in trying to solve or improve persistent social problems like family violence. Ongoing evaluation and learning can help identify important improvements or breakthrough thinking in how we might effectively address an impacted person’s pain point or facilitate their resilience and goals. For social service organizations predominantly funded by government grants, evaluation can help show that we are good stewards of the resources entrusted to us. As change agents who serve impacted communities, we can use evaluation to be accountable to the people we serve, and specifically to improving their lives. Further, evaluations can help guide where we put our efforts and resources by identifying where our programs are having meaningful impacts and for whom. Thus, the ultimate goal of evaluation is to “provide credible (and culturally authentic) evidence that fosters greater understanding and improves (design and) decision making, all aimed at improving social conditions and promoting healthy, just, and equitable communities” (Thomas & Campbell, 2020; emphasis added).
Evaluations can be implemented at any stage of program development. Evaluations conducted in the initial stages of a program are often referred to as “formative,” while those focused on capturing changes that result from participation are referred to as “summative” evaluations.
Formative evaluations are typically conducted to refine and improve the program during its implementation. They often answer questions about the process of implementation, including what the project is doing and how its operations and service delivery are functioning.
Summative evaluations are implemented to determine the influence or effects of a program and to understand its outcomes and results. They aim to answer questions about the program’s impacts. See below for examples of the kinds of questions each evaluation type can help you answer.
Evaluation Question Examples by Type of Evaluation (Thomas & Campbell, 2020)
Formative evaluation questions:
Outcome evaluation questions:
Impact evaluation questions:
Research is undertaken to generate new knowledge about a specific area of inquiry. Research often seeks to show causal relationships (to prove), and its findings are meant to apply to a larger population (generalizable knowledge). You may find research studies that merge various evidence sources or bodies of knowledge (e.g., multiple evaluations of a specific type of program).
Policy analysis is a type of research that offers a systematic process for examining and comparing the effects of multiple policy options. Policy analysis often makes use of existing data (e.g. child welfare data, domestic violence fatality reviews) to generate and inform policy.
Research can be exploratory, such as when trying to understand something that has never been studied before. It can be used to describe the prevalence of a specific issue; for example, research can answer questions about how widespread a problem is, such as the finding that 1 in 3 women experience IPV in their lifetimes. Research can also be used to establish causal relationships between different factors or variables, such as identifying which factors increase children’s wellbeing after family reunification. Research is an important source of explanatory knowledge that can shed light on a specific phenomenon, thereby deepening everyone’s understanding of that phenomenon.
Research seeks to prove; evaluation seeks to improve.
Many people, seasoned evaluators included, use the terms research and evaluation interchangeably. Just as there are numerous definitions of evaluation, there are many different perspectives about the differences and overlap between the terms. What makes discerning between the two more difficult is that, fundamentally, both are about answering questions, and both often use the same methods in pursuit of those answers. Their differences lie in what types of questions are being asked, what type of knowledge or answers are being generated, and how that knowledge is going to be used. One defining element is that research intends to build generalizable evidence to test theories or explain how something works, while evaluation focuses on the systematic assessment of programs and intervention strategies and aims to assess how something is done, its value or worth, and whether or not it had impact or was effective.
Finally, it is also important to understand that research and evaluation are not mutually exclusive. They can be integrated, work hand in hand, or complement one another. Both can be used in service of advocacy for change. Understanding their differences helps change agents communicate their recommendations, and the basis on which those recommendations are made, in a clear and credible way. See the article “Ways of framing the difference between research and evaluation” at www.BetterEvaluation.org for more information on the differences between research and evaluation.
Also see: What is the Difference Between Research and Evaluation and Between Process and Outcome Evaluation? from the National Resource Center for Domestic Violence.
Below are some key resources to help you think about different kinds of evidence and evaluation and how these can inform our work:
Dive deeper into learning about evaluation and how it can support your work with the following resources:
A toolkit for those interested in conducting policy analysis: The CDC Policy Process
The following resources offer guidance for implementing evaluation within the fields of domestic violence, violence prevention and child welfare:
What constitutes “evidence,” and from whose perspective, has been an ongoing source of debate. The resources below offer some current thinking on how evidence is defined and can help you select from among the best available evidence for your community and program.