In practice, evaluation takes place in a wide range of settings that may constrain an evaluator’s choice of evaluation methods. For example, an evaluation may begin during implementation rather than being planned from the outset, or ideas for interventions may emerge from various sources, limiting how closely the evaluator can follow an ideal evaluation design.
In the case of the Be Safe program, we (Harry Cummings and Associates) recently conducted a final evaluation of the program a year after it had ended (Lam et al., 2017). Be Safe is a school-based personal safety program for preventing violence against children, targeting children ages 5 to 9 across all of Sri Lanka. Be Safe was implemented by the Canadian Red Cross Society in partnership with the Ministry of Education of Sri Lanka from 2009 to 2014. The goal of the program was to create and maintain environments for children that are safe from violence and abuse.
When determining the appropriate evaluation design, a range of contextual and limiting factors was considered, including 1) the large scale and long time frame of program implementation across all 25 districts of Sri Lanka; 2) the complex humanitarian environment of Sri Lanka; 3) the lack of baseline data or a control group for longitudinal analysis or pre/post-test comparisons; and 4) the implementation of the evaluation one year after program completion. Evaluating interventions in complex environments remains challenging, as is the case wherever “gold-standard” methods (e.g., randomized controlled trials) may be unethical or impractical to implement (Deering et al., 2011).
Dose-response evaluation is an approach for examining the link between program exposure (or dose) and program outcomes. This approach has been widely used in the evaluation of clinical trials and, more recently, in the evaluation of program interventions in controlled research contexts (Grieco, Jowers, Errisuriz, & Bartholomew, 2016; Jørgensen et al., 2015; Kim et al., 2016). However, dose-response evaluations have not been widely applied to large-scale interventions in complex humanitarian environments.
What did we do?
We used a cross-sectional retrospective approach combined with dose-response to assess a sample of schools at one point in time, relying on participants’ memory to recall program activities and program outcomes. We surveyed 835 parents of children who had participated in the program across seven districts in Sri Lanka. We asked them outcome-related questions such as: Did Be Safe influence your child’s sense of safety at his/her school? Is it okay for teachers to use physical punishment against children to maintain discipline? All answers used a Likert scale (e.g., strongly disagree to strongly agree). To assess the level of exposure to the program, we asked parents questions such as: How many years was your child involved in the program? Did your child share with you or talk about the elements of Be Safe? After defining the exposure and outcome variables, we used Spearman rank-order correlation to measure dose-response.
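To illustrate the analysis step, here is a minimal sketch of a Spearman rank-order correlation between an exposure variable and a Likert-scale outcome. The data below are hypothetical, not the Be Safe survey responses; the variable names and values are assumptions for illustration only.

```python
# Sketch of a dose-response correlation (hypothetical data, not the Be Safe
# survey). Exposure and outcome are both ordinal, which is why Spearman's
# rank-order correlation is appropriate here.
from scipy.stats import spearmanr

# Hypothetical responses from 10 parents
exposure = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]  # years of child's participation
outcome  = [2, 3, 2, 3, 3, 4, 4, 4, 5, 4]  # 1 = strongly disagree ... 5 = strongly agree

rho, p_value = spearmanr(exposure, outcome)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```

A positive rho would indicate that higher exposure tends to go with more favorable outcome ratings; the magnitude of rho (not just its significance) is what distinguishes a strong dose-response relationship from a weak one.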
What did we find?
We found that parents of children who had more exposure to the program perceived that the program increased child safety in the school, in the community, and in school policies targeting violence prevention. However, while correlations were found between program exposure and program outcomes, most correlations were low. Furthermore, many parents did not remember any of the programming objectives. Still, there is some evidence to suggest that increased exposure to the program contributes to outcomes related to preventing violence against children.
As with any study design, there are limitations. For example, we relied on the perspectives of parents of children who participated in the Be Safe program (i.e., indirect program participants). In terms of dose-response, the correlations would likely have been stronger had we examined self-reported data from the children themselves (although this information would be difficult to collect given the children’s young ages). Going further, the strength of the effect (i.e., odds ratio) for each category of outcome could have been measured; however, this was beyond the scope of our objectives.
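For readers unfamiliar with the odds-ratio analysis mentioned as out of scope, a minimal sketch: dichotomize exposure (high vs. low) and one outcome (agree vs. disagree), then compute the odds ratio from the resulting 2x2 table. The counts below are invented for illustration and do not come from the Be Safe data.

```python
# Hypothetical 2x2 table (NOT the Be Safe data):
#                     agree   disagree
a, b = 120, 30      # high exposure
c, d = 80, 60       # low exposure

# Odds ratio = (odds of agreeing given high exposure) / (odds given low exposure)
odds_ratio = (a * d) / (b * c)
print(f"Odds ratio = {odds_ratio:.2f}")  # → Odds ratio = 3.00
```

An odds ratio above 1 would suggest that high-exposure respondents are more likely to report the favorable outcome; in this invented example, their odds of agreeing are three times those of low-exposure respondents.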
Why is this research important?
This study evaluated the effectiveness of a program for preventing violence against children in the absence of a baseline and a control group. We used a dose-response approach and found that increased exposure to the program was associated with improvements in parents’ perceptions of child safety in the school, in school policies surrounding violence prevention, and in child safety in the community. We found that dose-response was a useful approach for providing credible, though not conclusive, evidence on program effects. Researchers and evaluators may consider designing dose-response measures into program monitoring and evaluation efforts. A tip: ensure that you have a clear understanding of program theory so that you can effectively design your exposure and outcome variables. Finally, it is important to emphasize that dose-response should not replace traditional evaluation approaches: experimental designs (with baseline and control groups) should be considered early in the program planning process to ensure that other measures of program effectiveness can be captured.
To read the full article, click here.
Photo credit: Foter.com