Evaluation


Evaluation of programming is becoming increasingly important to our funders and to our communities. Additionally, evaluation of prevention programs can help educators improve their curricula and presentation delivery, with the goal of increasing program impact and effectiveness and, ultimately, ending sexual violence.

Evidence-based

Why is evaluation so important? Besides learning what your program is achieving and how it could improve, evaluation helps build the practice-based evidence base. This is especially important to the field of preventing sexual and intimate partner violence, because there are currently few research evidence-based programs, where research evidence means findings published in peer-reviewed journals and replicated in multiple settings with comprehensive evaluation. Most of the best sexual and intimate partner violence prevention work exists in practice within communities. We need to do evaluation in order to capture this “evidence of practice” and build our own evidence base, using both quantitative and qualitative evaluation and research in the process.

The CDC’s EvaluACTION is an interactive resource for starting to figure out why and how to plan and implement evaluation for your programming. It gives an overview of the kinds of questions to ask of your program and organization and includes a tool to build your program logic model and evaluation plan. It also links to several in-depth resources that can help practically build an evaluation plan.

Technical Assistance Guide and Resource Kit for Primary Prevention and Evaluation (Stephanie M. Townsend, PhD, for PCAR, 2009)

Innovations in Evaluation: A Report on Evaluation in the Field of Sexual Violence Prevention (Stephanie M. Townsend, PhD, for NSVRC, 2017) highlights six state- and local-level approaches to evaluation. The report explores evaluation capacity in terms of organizational and individual factors.

In 2011, the CDC published a guide titled “Understanding Evidence, Part 1: Best Available Research Evidence” to help preventionists determine whether a prevention program, practice, or policy is actually achieving the outcomes it aims for, and in the way it intends.

Some collections of evidence-based practices or best practices are listed in:

Guides and Toolkits

The CDC has an overall Framework for Program Evaluation, as well as a detailed guide to Developing an Effective Evaluation Plan.

The following materials may be helpful for program monitoring, which does not need to have a formal evaluation design, and which should be an ongoing process throughout program implementation, evaluation, and revision. These materials give guidance on how to develop indicators that can help track program implementation and outcomes.

The Community Tool Box has created the following evaluation resources:

The Center for Evaluation Innovation and Network Impact teamed up to produce two guides on evaluating social networks. The State of Network Evaluation offers the field’s current thinking on evaluation frameworks, approaches, and tools. It addresses why networks are important and why they should be evaluated, what is unique about networks, what elements of a network can be evaluated, and what evaluation designs and methods are appropriate. Evaluating Networks for Social Change: A Casebook profiles nine network evaluations and their questions, methodologies, and results. The nine networks represent a variety of network types, illustrate a range of network evaluation methodologies, and are organized to reflect three basic areas of focus for a network evaluation: network connectivity, network health, and network results.

In January 2012, the Ohio Domestic Violence Network launched an empowerment evaluation toolkit, the result of six years of working with the CDC’s DELTA Program. Following the Getting to Outcomes methodology for planning, implementation, and evaluation of primary prevention activities, ODVN’s Empowerment Evaluation consultants Amy Bush Stevens and Dr. Sandra Ortega developed the toolkit as a user-friendly translation. Ohio’s local DELTA Projects, several sexual violence prevention programs funded by the Ohio Department of Health, and state leaders provided critical feedback throughout the development of the toolkit.

The American Academy of Pediatrics’ Community Pediatrics program provides Evaluating Your Community-Based Program workbooks and recordings for those implementing community-based health initiatives. The materials walk participants step-by-step through the process of planning and implementing evaluation strategies.

The Forum for Youth Investment’s Measuring Youth Program Quality: A Guide to Assessment Tools was prepared for the after-school and youth development fields. It provides guidance to practitioners, policy makers, researchers, and evaluators on what options are available and what issues to consider in choosing a quality assessment tool. The majority of the document reviews, summarizes, and provides links to specific quality assessment tools.

Interviews with Evaluation Specialists

In this interview, consultant Patrick Lemmon talks with CALCASA about one strategy to evaluate behavioral intent. In this clip, Patrick looks at an example of a bystander intervention program.

In this interview, Wendi Siebold talks about online tools to support evaluation.

Types of evaluation:

Process evaluation: Documents whether a program can be (or is being) implemented as planned.
Outcome evaluation: Determines whether a program has the intended effect on intimate partner and sexual violence (or on its risk and/or protective factors).
Example Data Collection Methods.pdf

Outcome evaluation

“Outcomes – sometimes called objectives – are specific, measurable statements that let you know when you have reached your goals. Outcome statements describe specific changes in knowledge, attitudes, skills, and behaviors you expect to occur as a result of your actions.

If you are training to increase knowledge, your training goals could be to:

  • Increase knowledge about sexual violence and dating violence perpetration and victimization
  • Increase knowledge of the overlapping risk factors for sexual and dating violence perpetration and youth violence
  • Identify appropriate opportunities to address issues related to sexual violence and dating violence in current program efforts

If you are training to increase knowledge and also skills, your training goals could include those shown above as well as to:

  • Increase skills to interrupt language and behaviors that objectify and demean women and to promote respectful language and dating behavior.

Good outcome statements are SMART: specific, measurable, achievable, relevant, and time-bound. Think carefully about what you can realistically accomplish in your trainings given the groups you want to reach and the scope of your resources.

Develop short, intermediate, and long-term outcomes as follows:

  • Short-term outcomes should describe what you want to happen within a relatively brief period (e.g. during the course of one or several trainings, depending on how many sessions you conduct). Focus your short-term outcomes on what you want people to learn. An example of a short-term outcome would be that coaches learn about the risk and protective factors for sexual violence and/or intimate partner violence.
  • Intermediate outcomes describe what you want to happen after your trainings are completed. Focus your intermediate outcomes on what you want people to do when they go back to their [classes, workplaces, etc] and apply what they have learned. An example of an intermediate outcome would be that coaches demonstrate interrupting sexual harassment and teaching respect.
  • Long-term outcomes describe the impact you hope to have on the primary prevention of sexual violence and/or intimate partner violence after the trainings are completed, but farther into the future. Describe what you hope will change as a result of your trainings. An example of a long-term outcome would be that incidents of sexual harassment decrease in schools.

Well-written and complete outcome statements will usually include the following five elements (Fisher, Imm, Chinman & Wandersman, 2006):

  • Who will change – the [people] you are training
  • What will change – the knowledge, attitudes, and skills you expect to change
  • By how much – how much change you think you can realistically achieve
  • By when – the timeframe within which you hope to see change
  • How the change will be measured – the surveys, tests, interviews, or other methods you will use to measure the different changes specified

A useful way to remember these elements is the ABCDE Method of Writing Outcome Statements (Atkinson, Deaton, Travis & Wessel, 1999):

  • A – Audience (who will change?)
  • B – Behavior (what will change?)
  • C – Condition (by when?)
  • D – Degree (by how much?)
  • E – Evidence (how will the change be measured?)”

Fisher D, Lang KS, Wheaton J. Training Professionals in the Primary Prevention of Sexual and Intimate Partner Violence: A Planning Guide. Atlanta (GA): Centers for Disease Control and Prevention; 2010.
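
To make the ABCDE elements concrete, here is a minimal sketch (not part of the CDC guide) of how an outcome statement could be captured as a small data structure during planning and review. The OutcomeStatement class, its field names, and the example values are hypothetical Python illustrations, loosely based on the coaches example above.

    from dataclasses import dataclass, fields

    @dataclass
    class OutcomeStatement:
        """Hypothetical container for the five ABCDE elements of an outcome statement."""
        audience: str   # A - who will change?
        behavior: str   # B - what will change?
        condition: str  # C - by when?
        degree: str     # D - by how much?
        evidence: str   # E - how will the change be measured?

        def missing_elements(self) -> list:
            # Return the names of any ABCDE elements left blank, as a quick completeness check.
            return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

    # Example loosely based on the intermediate outcome for coaches described above.
    outcome = OutcomeStatement(
        audience="Coaches who attend the training",
        behavior="Interrupt sexual harassment and model respectful language",
        condition="Within three months after the final training session",
        degree="At least three quarters of trained coaches report intervening at least once",
        evidence="Follow-up survey and brief interviews with a sample of coaches",
    )
    print(outcome.missing_elements())  # prints [] when every element is filled in

A statement with no missing elements still needs the SMART test above; the structure only helps confirm that each element has been considered.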

Strategies for ranking effectiveness

Effective: strategies which include one or more programs demonstrated to be effective; effective refers to being supported by multiple well-designed studies showing prevention of perpetration and/or experience of intimate partner violence and/or sexual violence.

Emerging evidence: strategies which include one or more programs for which evidence of effectiveness is emerging; emerging evidence refers to being supported by one well-designed study showing prevention of perpetration and/or experience of intimate partner and/or sexual violence or studies showing positive changes in knowledge, attitudes, and beliefs related to intimate partner violence and/or sexual violence.

Effectiveness unclear: strategies which include one or more programs of unclear effectiveness due to insufficient or mixed evidence.

Emerging evidence of ineffectiveness: strategies which include one or more programs for which evidence of ineffectiveness is emerging; emerging evidence refers to being supported by one well-designed study showing lack of prevention of perpetration and/or experience of intimate partner and/or sexual violence or studies showing the absence of changes in knowledge, attitudes, and beliefs related to intimate partner violence and/or sexual violence.

Ineffective: strategies which include one or more programs shown to be ineffective; ineffective refers to being supported by multiple well-designed studies showing lack of prevention of perpetration and/or experience of intimate partner and/or sexual violence.

Probably harmful: strategies which include at least one well-designed study showing an increase in perpetration and/or experience of intimate partner and/or sexual violence or negative changes in knowledge, attitudes, and beliefs related to intimate partner and/or sexual violence.
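
The categories above amount to a decision rule based on how many well-designed studies show prevention, show no prevention, or show harm, with changes in knowledge, attitudes, and beliefs (KAB) as secondary evidence. As an illustration only, a rough sketch of that logic in Python (the function name, parameters, and the order of tie-breaking are assumptions, not part of the original ranking scheme) might look like:

    def rank_effectiveness(studies_showing_prevention: int,
                           studies_showing_no_prevention: int,
                           studies_showing_harm: int,
                           positive_kab_changes: bool = False,
                           negative_kab_changes: bool = False) -> str:
        """Illustrative mapping of study counts to the ranking categories above.

        Counts refer to well-designed studies of perpetration and/or experience of
        intimate partner and/or sexual violence; KAB = knowledge, attitudes, beliefs.
        """
        if studies_showing_harm >= 1 or negative_kab_changes:
            return "Probably harmful"
        if studies_showing_prevention >= 1 and studies_showing_no_prevention >= 1:
            return "Effectiveness unclear"  # mixed evidence
        if studies_showing_prevention >= 2:
            return "Effective"
        if studies_showing_no_prevention >= 2:
            return "Ineffective"
        if studies_showing_prevention == 1 or positive_kab_changes:
            return "Emerging evidence"
        if studies_showing_no_prevention == 1:
            return "Emerging evidence of ineffectiveness"
        return "Effectiveness unclear"  # insufficient evidence

    print(rank_effectiveness(1, 0, 0))  # "Emerging evidence"

In practice the judgment is qualitative, not a simple count; the sketch is only meant to show how the category definitions relate to one another.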

Culturally Relevant Evaluation

Building Evidence Toolkit for Community-Based Organizations

cultural_competence_guide.pdf

Medicine_Wheel_Evaluation_Framework.pdf