Evaluation
Evaluation of programming is becoming increasingly important to our funders and to our communities. Evaluation of prevention programs can also help educators improve their curricula and delivery, with the goal of increasing program impact and effectiveness and, ultimately, ending sexual violence.
Evidence-based
Why is evaluation so important? Besides learning what your program is achieving and how it could improve, evaluation helps build the practice-based evidence base. This is especially important to the field of preventing sexual and intimate partner violence because there are currently few research evidence-based programs, where research evidence means findings published in peer-reviewed journals and replicated in multiple settings with comprehensive evaluation. Most of the best sexual and intimate partner violence prevention work exists in practice within communities. We need to do evaluation in order to capture this “Evidence-of-Practice” and build our own evidence base, using both quantitative and qualitative evaluation and research in the process.
The CDC’s EvaluACTION is an interactive resource for figuring out why and how to plan and implement evaluation for your programming. It gives an overview of the kinds of questions to ask about your program and organization and includes a tool for building your program logic model and evaluation plan. It also links to several in-depth resources that can help you build a practical evaluation plan.
Technical Assistance Guide and Resource Kit for Primary Prevention and Evaluation (Stephanie M. Townsend, PhD, for PCAR, 2009)
- This manual is intended to support prevention educators in building upon what they are already doing to evaluate their programs. It is divided into five sections:
1. Introduction to Primary Prevention
2. Primary Prevention Strategies
3. Introduction to Program Evaluation
4. Basic Steps for Evaluating Your Programs
5. Evaluation Resources
The toolkit was updated and divided into four volumes in 2015:
- Volume 1: Choosing Prevention Strategies
- Volume 2: Evaluating Prevention Strategies
- Volume 3: Analyzing Evaluation Data
- Volume 4: Analyzing Qualitative Data
Innovations in Evaluation: A Report on Evaluation in the Field of Sexual Violence Prevention (Stephanie M. Townsend, PhD, for NSVRC, 2017) highlights six state- and local-level approaches to evaluation. The report explores evaluation capacity based on organizational and individual factors.
In 2011, the CDC published a guide titled “Understanding Evidence, Part 1: Best Available Research Evidence” to help preventionists determine whether a prevention program, practice, or policy is actually achieving its intended outcomes, and in the way it intends.
Some collections of evidence-based practices or best practices are listed in:
- Key Findings from “A systematic review of primary prevention strategies for sexual violence perpetration.” Research translation by the National Sexual Violence Resource Center, based on research by Sarah DeGue et al., 2014.
- Preventing Sexual Violence on College Campuses: Lessons from Research and Practice, by Sarah DeGue, 2014. Prepared for the White House Task Force to Protect Students from Sexual Assault.
- An Evidence-Based Review of Sexual Assault Preventive Intervention Programs, by Shannon Morrison, Jennifer Hardison, Anita Mathew, & Joyce O’Neil, September 2004.
- A critical review of interventions for the primary prevention of perpetration of partner violence. Whitaker DJ, Morrison S, Lindquist C, Hawkins SR, O’Neil JA, Nesius AM, Mathew A, Reese L. Aggression & Violent Behavior 2006; 11(2): 151-166.
- Blueprints for Violence Prevention
- Creating Safe Environments: Violence Prevention Strategies and Programs (June 2006) This report discusses approaches to violence prevention with a focus on youth violence and intimate partner violence prevention. The report includes an environmental scan of promising violence prevention programs from across the U.S. and provides a useful overview of the current state of violence prevention.
- “Interventions to Prevent Sexual Violence” by Paul A. Schewe, in L. S. Doll, S. Bonzo, J. Mercy, & D. Sleet (Eds.), Handbook of Injury and Violence Prevention. Secaucus, NJ: Springer, 2006.
- The Mendez Foundation, “Too Good For Violence”
- The Michigan Coalition Against Domestic and Sexual Violence has produced a guide to assist prevention educators in measuring the successes of their programming: Outcome Evaluation Strategies for Sexual Assault Service Programs - A Practical Guide.pdf (2 MB)
- Evaluation for Improvement: A Seven-Step Empowerment Evaluation Approach for Violence Prevention Organizations. This CDC report is designed to help violence prevention organizations hire an empowerment evaluator who will assist them in building their evaluation capacity through a learn-by-doing process of evaluating their own strategies (Evaluation Improvement.pdf, 3 MB).
Guides and Toolkits
The CDC has an overall Framework for Program Evaluation, as well as a detailed guide to Developing an Effective Evaluation Plan.
The following materials may be helpful for program monitoring, which does not need to have a formal evaluation design, and which should be an ongoing process throughout program implementation, evaluation, and revision. These materials give guidance on how to develop indicators that can help track program implementation and outcomes.
- PreventConnect eLearning course: Measuring the Impact of your Sexual and Domestic Violence Prevention Efforts
- Public Health Foundation’s guide
- Know How NonProfit
- Theoryofchange.org
- From the Urban Reproductive Health Initiative
- Activity-based Evaluation from the Texas Association Against Sexual Assault: “This toolkit uses the architecture of the curriculum and weaves into it an improvement loop to assess both participant knowledge and comprehension, as well as to improve facilitator practice. In this way, evaluation becomes an organic component of curriculum design and implementation. And most importantly, you will not have to do extra work to evaluate your process; it will be built into your practice.”
- Deepening Engagement for Lasting Impact: A framework for measuring media performance & results, prepared for the Bill & Melinda Gates Foundation
The Community ToolBox has created the following evaluation resources:
- Chapter 36. Introduction to Evaluation
- Chapter 37. Operations in Evaluating Community Interventions
- Chapter 38. Methods for Evaluating Comprehensive Community Initiatives
- Chapter 39. Using Evaluation to Understand and Improve the Initiative
The Center for Evaluation Innovation and Network Impact teamed up to produce two guides on evaluating social networks. The State of Network Evaluation offers the field’s current thinking on evaluation frameworks, approaches, and tools. It addresses why networks are important and why they should be evaluated, what is unique about networks, what elements of a network can be evaluated, and which evaluation designs and methods are appropriate. Evaluating Networks for Social Change: A Casebook profiles nine network evaluations and their questions, methodologies, and results. The nine networks represent a variety of network types, illustrate a range of network evaluation methodologies, and are organized to reflect three basic areas of focus for a network evaluation: network connectivity, network health, and network results.
In January 2012, the Ohio Domestic Violence Network launched an empowerment evaluation toolkit, the result of six years of working with the Centers for Disease Control and Prevention’s (CDC) DELTA Program. Following the Getting to Outcomes methodology for planning, implementing, and evaluating primary prevention activities, ODVN’s Empowerment Evaluation consultants Amy Bush Stevens and Dr. Sandra Ortega developed the toolkit as a user-friendly translation of that methodology. Ohio’s local DELTA Projects, several sexual violence prevention programs funded by the Ohio Department of Health, and state leaders provided critical feedback throughout the development of the toolkit.
The American Academy of Pediatrics’ Community Pediatrics program provides Evaluating Your Community-Based Program workbooks and recordings for those implementing community-based health initiatives. The materials walk participants step-by-step through the process of planning and implementing evaluation strategies.
The Forum for Youth Investment’s Measuring Youth Program Quality: A Guide to Assessment Tools was prepared for the after-school and youth development fields. It provides guidance to practitioners, policy makers, researchers, and evaluators on what options are available and what issues to consider in choosing a quality assessment tool. The majority of the document reviews, summarizes, and provides links to specific quality assessment tools.
Interviews with Evaluation Specialists
In this interview, consultant Patrick Lemmon talks with CALCASA about one strategy for evaluating behavioral intent. In the clip, Patrick looks at an example from a bystander intervention program.
In this interview, Wendi Siebold talks about online tools to support evaluation.
Types of evaluation:
Process evaluation: Documents whether a program can be (or is being) implemented as planned.
Outcome evaluation: Determines whether a program has the intended effect on intimate partner and sexual violence (or on its risk and/or protective factors).
Example Data Collection Methods.pdf (71 KB)
Outcome evaluation
“Outcomes – sometimes called objectives – are specific, measurable statements that let you know when you have reached your goals. Outcome statements describe the specific changes in knowledge, attitudes, skills, and behaviors you expect to occur as a result of your actions.
If you are training to increase knowledge, your training goals could be to:
- Increase knowledge about sexual violence and dating violence perpetration and victimization
- Increase knowledge of the overlapping risk factors for sexual and dating violence perpetration and youth violence
- Identify appropriate opportunities to address issues related to sexual violence and dating violence in current program efforts
If you are training to increase skills as well as knowledge, your training goals could include those shown above, as well as to:
- Increase skills to interrupt language and behaviors that objectify and demean women and to promote respectful language and dating behavior.
Good outcome statements are SMART: specific, measurable, achievable, relevant, and time-bound. Think carefully about what you can realistically accomplish in your trainings given the groups you want to reach and the scope of your resources.
Develop short, intermediate, and long-term outcomes as follows:
- Short-term outcomes should describe what you want to happen within a relatively brief period (e.g. during the course of one or several trainings, depending on how many sessions you conduct). Focus your short-term outcomes on what you want people to learn. An example of a short-term outcome would be that coaches learn about the risk and protective factors for sexual violence and/or intimate partner violence.
- Intermediate outcomes describe what you want to happen after your trainings are completed. Focus your intermediate outcomes on what you want people to do when they go back to their [classes, workplaces, etc] and apply what they have learned. An example of an intermediate outcome would be that coaches demonstrate interrupting sexual harassment and teaching respect.
- Long-term outcomes describe the impact you hope to have on the primary prevention of sexual violence and/or intimate partner violence after the trainings are completed, but farther into the future. Describe what you hope will change as a result of your trainings. An example of a long-term outcome would be that incidents of sexual harassment decrease in schools.
Well-written and complete outcome statements will usually define the following five elements (Fisher, Imm, Chinman & Wandersman, 2006) as you describe:
- Who will change – the [people] you are training
- What will change – the knowledge, attitudes, and skills you expect to change
- By how much – how much change you think you can realistically achieve
- By when – the timeframe within which you hope to see change
- How the change will be measured – the surveys, tests, interviews, or other methods you will use to measure the different changes specified
A useful way to remember these elements is the ABCDE Method of Writing Outcome Statements (Atkinson, Deaton, Travis & Wessel, 1999):
- A – Audience (who will change?)
- B – Behavior (what will change?)
- C – Condition (by when?)
- D – Degree (by how much?)
- E – Evidence (how will the change be measured?)”
Fisher D, Lang KS, Wheaton J. Training Professionals in the Primary Prevention of Sexual and Intimate Partner Violence: A Planning Guide. Atlanta (GA): Centers for Disease Control and Prevention (2010).
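For teams that log their outcome statements in a spreadsheet or database, the five ABCDE elements map naturally onto a simple record structure. The sketch below is only an illustration of that mapping, not part of the CDC guide; the field names, the rendering function, and the example statement are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class OutcomeStatement:
    """One outcome statement, organized by the ABCDE elements."""
    audience: str   # A - who will change?
    behavior: str   # B - what will change?
    condition: str  # C - by when?
    degree: str     # D - by how much?
    evidence: str   # E - how will the change be measured?

    def as_sentence(self) -> str:
        # Render the five elements as a single readable statement.
        return (f"{self.condition}, {self.degree} of {self.audience} "
                f"will {self.behavior}, as measured by {self.evidence}.")

# Hypothetical short-term outcome for a coaches' training:
example = OutcomeStatement(
    audience="coaches who attend the training",
    behavior="correctly identify risk and protective factors for sexual violence",
    condition="By the end of the final training session",
    degree="at least 80%",
    evidence="pre- and post-training knowledge surveys",
)
print(example.as_sentence())
```

Storing statements this way also makes it easy to check that none of the five elements has been left blank before a training plan is finalized.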
Strategies for ranking effectiveness
Effective: strategies which include one or more programs demonstrated to be effective; effective refers to being supported by multiple well-designed studies showing prevention of perpetration and/or experience of intimate partner violence and/or sexual violence.
Emerging evidence: strategies which include one or more programs for which evidence of effectiveness is emerging; emerging evidence refers to being supported by one well-designed study showing prevention of perpetration and/or experience of intimate partner and/or sexual violence or studies showing positive changes in knowledge, attitudes, and beliefs related to intimate partner violence and/or sexual violence.
Effectiveness unclear: strategies which include one or more programs of unclear effectiveness due to insufficient or mixed evidence.
Emerging evidence of ineffectiveness: strategies which include one or more programs for which evidence of ineffectiveness is emerging; emerging evidence refers to being supported by one well-designed study showing lack of prevention of perpetration and/or experience of intimate partner and/or sexual violence or studies showing the absence of changes in knowledge, attitudes, and beliefs related to intimate partner violence and/or sexual violence.
Ineffective: strategies which include one or more programs shown to be ineffective; ineffective refers to being supported by multiple well-designed studies showing lack of prevention of perpetration and/or experience of intimate partner and/or sexual violence.
Probably harmful: strategies which include at least one well-designed study showing an increase in perpetration and/or experience of intimate partner and/or sexual violence or negative changes in knowledge, attitudes, and beliefs related to intimate partner and/or sexual violence.
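To make the tier definitions above concrete, here is a toy decision rule that paraphrases them in code. It is a rough sketch only: the enum, the function, and the study counts are hypothetical inputs, and real rankings also weigh study design and outcome measures, not just counts of findings.

```python
from enum import Enum

class EffectivenessRank(Enum):
    EFFECTIVE = "effective"
    EMERGING_EVIDENCE = "emerging evidence"
    UNCLEAR = "effectiveness unclear"
    EMERGING_INEFFECTIVE = "emerging evidence of ineffectiveness"
    INEFFECTIVE = "ineffective"
    PROBABLY_HARMFUL = "probably harmful"

def rank_strategy(positive: int, null: int, harmful: int) -> EffectivenessRank:
    """Toy paraphrase of the tiers above.

    Counts refer to well-designed studies of prevention of perpetration
    and/or experience of intimate partner and/or sexual violence.
    """
    if harmful >= 1:
        return EffectivenessRank.PROBABLY_HARMFUL
    if positive >= 2 and null == 0:
        return EffectivenessRank.EFFECTIVE
    if positive == 1 and null == 0:
        return EffectivenessRank.EMERGING_EVIDENCE
    if null >= 2 and positive == 0:
        return EffectivenessRank.INEFFECTIVE
    if null == 1 and positive == 0:
        return EffectivenessRank.EMERGING_INEFFECTIVE
    return EffectivenessRank.UNCLEAR  # insufficient or mixed evidence

# e.g. one well-designed positive study, no null or harmful findings:
print(rank_strategy(positive=1, null=0, harmful=0))  # emerging evidence
```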
Culturally Relevant Evaluation
- Building Evidence Toolkit for Community-Based Organizations (PDF, 706 KB)
- Medicine Wheel Evaluation Framework (Medicine_Wheel_Evaluation_Framework.pdf, 307 KB)