The Journal of Extension - www.joe.org

August 2018 // Volume 56 // Number 4 // Research In Brief // v56-4rb4

Examining Predictors of Implementation Quality in an Emerging International Extension Context

Abstract
In this article, we explore a component of evidence-based programming—implementation quality—within an emerging international Extension context. Specifically, we examine how the traits, characteristics, and perceptions of 46 program facilitators influenced their support for maintaining implementation quality in a Nicaraguan youth violence and substance abuse prevention program, Dale se REAL. The results indicated that of four potential variables, only facilitator buy-in to the Dale se REAL program was a meaningful predictor of implementation support. The implications of the study findings, relative to evidence-based Extension research in both the United States and an emerging international context, are discussed.


Ryan J. Gagnon
Assistant Professor
Clemson University
Clemson, South Carolina
rjgagno@clemson.edu

Jonathan Pettigrew
Assistant Professor
Arizona State University
Tempe, Arizona
jonathan.pettigrew@asu.edu

A common goal in Extension is to use the best possible theory and evidence when developing, delivering, and assessing youth- and family-centered programs (Dunifon, Duttweiler, Pillemer, Tobias, & Trochim, 2004; Vierregger et al., 2015). This orientation is well reflected in research relating to program outcomes, but there is a notable gap in terms of understanding program implementation quality (Duerden & Witt, 2012; Rector, Bakacs, Rowe, & Barbour, 2016). More simply, many Extension programs demonstrate what outcomes were achieved, but not necessarily how those outcomes were achieved. Several comprehensive reviews highlight how implementation quality is frequently undervalued as a component of program evaluation in the broader social sciences (Berkel, Mauricio, Schoenfelder, & Sandler, 2011; Dane & Schneider, 1998; Durlak & DuPre, 2008). This lack of focus on the "how" of implementation quality increases the risk that researchers and practitioners will credit a program design for outcomes achieved under improper implementation (akin to a type I error) and/or conclude that a program is ineffective when poor implementation, rather than the design itself, produced the null result (akin to a type II error).

The research investigating implementation quality is limited compared to outcomes-focused evaluations, yet it is essential for evaluating the internal and external validity of programs (Arnold & Cater, 2016; Berkel et al., 2011; Durlak & DuPre, 2008). Implementation research and assessment can reveal sources of programmatic success or failure, identify areas of a program design in need of modification, and provide guidelines for program replication in settings outside the original program designer's control (Durlak & DuPre, 2008). Despite ample evidence that implementation assessment is a critical component of the program delivery and improvement process (Fagan, Hanson, Hawkins, & Arthur, 2008; Little, Sussman, Sun, & Rohrbach, 2013), implementation assessments rarely occur (Sloboda, Dusenbury, & Petras, 2014) and are often not included as part of the overall program evaluation process (Berkel et al., 2011; Pawson & Tilley, 1997). There are multiple reasons for the lack of implementation investigations, including an absence of organizational awareness regarding the importance of implementation assessment (Dusenbury, Brannigan, Falco, & Hansen, 2003), a lack of requirement for program replication (Sloboda et al., 2014), and a deficiency of resources for conducting implementation assessment (Lillehoj, Griffin, & Spoth, 2004).

Prior research has indicated that when implementation quality is assessed, it is generally influenced by characteristics at one or more of four levels: (a) the organization supporting the program, (b) the community and participants the program is intended to serve (Durlak & DuPre, 2008), (c) the program itself (Little et al., 2013), and (d) the facilitator(s) providing the program (Berkel et al., 2011; Wandersman et al., 2008). For the study described here, we focused on the fourth level, the program facilitator (i.e., frontline staff member responsible for delivering the program), as ultimately it is that person's responsibility to ensure that a program is implemented as intended (Wanless, Rimm-Kaufman, Abry, Larsen, & Patton, 2015). Specifically, we examined how a facilitator's characteristics and traits relate to his or her beliefs about implementation quality.

Measurement and Prediction of Implementation Quality

Although a preponderance of evidence has demonstrated the importance of accounting for implementation quality as part of program evaluations (see Durlak & DuPre, 2008), routine use of the practice lags (Pettigrew & Hecht, 2015). One reason is that measurement can be resource intensive: live observations require highly trained personnel as well as logistical coordination, and although video records hold promise for measuring implementation, they too require extensive investment of time and personnel. A less resource-intensive avenue for predicting implementation quality is to examine the facilitators delivering the program, including their self-reported implementation quality, beliefs about the importance of implementation (i.e., profidelity beliefs), program trust and buy-in, perceived preparedness, and demographic traits (Gagnon & Bumpus, 2016). Reflecting this shift toward using the most efficient tool(s) for examining implementation quality, we explored the use of a self-report measure in an international context.

Study Context

Because an international context represents one frontier for Extension research, we collected data in Nicaragua, a lower middle-income country in Central America. One issue facing Nicaraguan youths is harmful alcohol use: a survey of residents surrounding Lake Nicaragua, conducted by the U.S. Embassy in Managua, indicated that alcohol misuse, domestic violence, and theft were the top three pressures facing the community. Given these issues, we initiated a culturally appropriate, localized program to address alcohol misuse and relational violence. Following the procedures for cultural grounding (Hecht & Krieger, 2006) outlined by Colby et al. (2013), we developed the Dale se REAL adolescent drug and violence prevention program. The curriculum was adapted from keepin' it REAL, REAL Prevention's U.S. evidence-based early adolescent program (Hecht & Miller-Day, 2009), and a violence prevention program based in Canada (Wolfe et al., 2009). Program content and structure were adapted to fit Nicaraguan youth culture on the basis of interview data from youths and separate focus group sessions with program teachers (referred to here as program facilitators). Because of legislative constraints within Nicaragua, we could not recruit youth program participants from public schools; rather, we recruited private schools and other youth service organizations (e.g., National Scouts) in three Nicaraguan cities (Managua, Masaya, and Granada) for participation in the Dale se REAL youth program.

The organizations electing to participate in Dale se REAL represented multisector support for preventing youth problems relating to substance abuse and violence, not unlike Extension programs based in the United States that target similar challenges (e.g., Kumaran, Fogarty, Fung, & Terminello, 2015; Riggs, Lee, Marshall, Serfustini, & Bunnell, 2006). As Nicaragua's formalized Extension programs and partnerships are still very much in development compared to the 100-plus-year history within the United States, programs such as Dale se REAL can serve as the foundation for future Extension work (Beaulieu & Cordes, 2014; Treadwell, Lachapelle, & Howe, 2013). Indeed, a tertiary purpose of Dale se REAL was to continue the introduction of evidence-based programs and evaluations within an international Extension context. Thus, our purpose was to examine how a facilitator's profidelity beliefs were influenced by his or her traits and behaviors within an emerging international Extension context.

Method

Participants/Procedures

In spring 2015, upon institutional review board approval, 46 facilitators from 26 schools and community groups participated in training for the Dale se REAL drug and violence prevention program. Of the 34 participants who provided gender information (12 did not), the split was nearly even between males (48%) and females (52%). The majority of facilitators identified as Latino (n = 41; 89%). Participants had an average of 10.93 years of experience facilitating programs and teaching (SD = 8.25 years, range = 0.5–30 years) and had delivered an average of 1.43 life skills programs similar to Dale se REAL (SD = 1.44, range = 0–5 programs).

Measures

As part of program training, facilitators of Dale se REAL completed a 69-item questionnaire that addressed basic demographic information, prior experience teaching and facilitating life skills programs for youths, and the Facilitator Characteristics and Program Contributions (FCPC) scale. The measure was translated from English to Spanish and back translated by a member of the Dale se REAL project team fluent in Spanish. The FCPC was designed to measure three facilitator characteristics commonly associated with program implementation: (a) perceived readiness to implement (e.g., "I feel prepared to facilitate Dale se REAL"), (b) program buy-in (e.g., "I believe in the goals of Dale se REAL"), and (c) profidelity beliefs (e.g., "It is important to facilitate Dale se REAL as designed") (Gagnon, 2014). All items were measured on a 7-point scale (1 = strongly disagree to 7 = strongly agree). Prior exploratory factor analysis (Gagnon & Bumpus, 2016) and confirmatory factor analysis (Gagnon, Stone, & Garst, 2015) indicated that the FCPC is a valid and reliable measure of the three constructs of interest in the study.

Data Preparation

In preparation for analyses, we examined the data for outliers using leverage values and scree plots and assessed the (non)normality of the data. This examination resulted in no cases being removed from the data set. Given the prior research indicating the validity and reliability of the FCPC (Gagnon & Bumpus, 2016), composite scores were created for three subdimensions of the FCPC: (a) perceived readiness to implement (5 items, M = 6.16, SD = 1.01, α = .933), (b) program buy-in (5 items, M = 6.355, SD = 1.19, α = .931), and (c) profidelity beliefs (3 items, M = 5.83, SD = 1.14, α = .641).
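
As a concrete illustration of these steps, the minimal sketch below (in Python with pandas and statsmodels; the authors' actual tooling is not specified) computes the subscale composites and Cronbach's alpha and flags high-leverage cases. The file name (fcpc_responses.csv) and item labels (e.g., ready_1) are hypothetical placeholders, not the project's actual data structure.

```python
# A minimal sketch of the data-preparation steps described above.
# Assumption: responses are stored one row per facilitator; all column
# names below (ready_1, buyin_1, fidelity_1, ...) are hypothetical.
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of scale total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

df = pd.read_csv("fcpc_responses.csv")  # hypothetical file of 7-point item responses

subscales = {
    "readiness":   [f"ready_{i}" for i in range(1, 6)],     # 5 items
    "buy_in":      [f"buyin_{i}" for i in range(1, 6)],     # 5 items
    "profidelity": [f"fidelity_{i}" for i in range(1, 4)],  # 3 items
}

for name, cols in subscales.items():
    df[name] = df[cols].mean(axis=1)  # composite = mean of the subscale items
    print(f"{name}: M = {df[name].mean():.2f}, "
          f"SD = {df[name].std(ddof=1):.2f}, alpha = {cronbach_alpha(df[cols]):.3f}")

# Outlier screening via leverage (hat) values; a common flag is leverage > 2p/n.
X = sm.add_constant(df[["buy_in", "readiness"]])
fit = sm.OLS(df["profidelity"], X).fit()
leverage = fit.get_influence().hat_matrix_diag
print("High-leverage cases:", list(df.index[leverage > 2 * X.shape[1] / len(df)]))
```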

Results

After creating the composite scores, we conducted a hierarchical multiple regression to determine the potential effect of facilitator characteristics on profidelity beliefs. The regression results indicated that within the Dale se REAL program, preparedness, years of experience teaching, and number of life skills trainings attended had no significant relationship with facilitator profidelity scores. However, as evidenced by the Model 1 data in Table 1, facilitator buy-in to the Dale se REAL program did have a significant relationship with profidelity belief scores, R² = .431, F(1, 42) = 31.822, p ≤ .001 (adjusted R² = .418).

Table 1.
Hierarchical Multiple Regression Predicting Profidelity Belief Scores from Buy-In, Preparedness, Total Teaching Experience, and Number of Trainings

Variable              Model 1          Model 2          Model 3          Model 4
                      B        β       B        β       B        β       B        β
Constant              2.067            2.226            2.226            2.320
Buy-in                .612**   .658    .731**   .784    .762**   .818    .799**   .858
Preparedness                           −.146    −.151   −.199    −.205   −.233    −.241
Teaching experience                                     .012     .096    .015     .126
Training number                                                          −.043    −.081
R²                    .431             .437             .446             .451
F                     31.822**         15.939**         10.719**         8.014**
ΔR²                   .431             .006             .008             .005
ΔF                    31.822**         .464             .594             .390
Note. N = 46. Model 1 entered buy-in only into the multiple regression; Model 2 added preparedness; Model 3 added teaching experience; and Model 4 added number of trainings attended. In all models, buy-in was the only significant predictor (p ≤ .001) of profidelity beliefs.
**p ≤ .001.
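
For readers wishing to replicate the analytic approach, the following sketch fits the four nested models from Table 1 and computes the R²-change and F-change statistics. It assumes the composite and demographic variables created during data preparation; the file and column names are again hypothetical placeholders.

```python
# A minimal sketch of the hierarchical (blockwise) regression in Table 1.
# Column names (profidelity, buy_in, preparedness, years_teaching,
# n_trainings) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("facilitator_data.csv")  # hypothetical file name

# Predictors enter in four blocks, mirroring Models 1-4.
blocks = [
    "profidelity ~ buy_in",                                               # Model 1
    "profidelity ~ buy_in + preparedness",                                # Model 2
    "profidelity ~ buy_in + preparedness + years_teaching",               # Model 3
    "profidelity ~ buy_in + preparedness + years_teaching + n_trainings", # Model 4
]
models = [smf.ols(formula, data=df).fit() for formula in blocks]

prev_r2 = 0.0
for i, m in enumerate(models, start=1):
    print(f"Model {i}: R2 = {m.rsquared:.3f}, adj. R2 = {m.rsquared_adj:.3f}, "
          f"delta R2 = {m.rsquared - prev_r2:.3f}, F = {m.fvalue:.3f}")
    prev_r2 = m.rsquared

# F-change tests comparing each model with the preceding nested model:
for smaller, larger in zip(models, models[1:]):
    print(anova_lm(smaller, larger))
```

The standardized coefficients (β) in Table 1 would correspond to refitting the same models after z-scoring the outcome and all predictors.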

Discussion

Our study, unique in assessing an international nonformal Extension education program, illustrates that program buy-in may be a useful self-report measure for examining a facilitator's profidelity beliefs prior to program implementation. The results supported prior prevention science research indicating that facilitator buy-in and program fidelity are positively related (Kam, Greenberg, & Walls, 2003; Little et al., 2013). Conversely, the results did not fully support the work of Gagnon and Bumpus (2016), who found that preparedness and buy-in were both positive predictors of profidelity beliefs. Given the preliminary nature of our study, future assessment may indicate that perceived level of preparedness and training promote quality program implementation beyond initial training. Regardless, the findings imply that working to improve facilitator buy-in is an important strategy for promoting positive program implementation, thereby increasing the likelihood that programs will fulfill their intended designs and that evidence-based practices will be maintained.

More broadly, our study contributes to implementation research in the context of Extension. First, it provides an efficient self-report assessment of issues related to implementation. Although self-report measures can introduce respondent bias (e.g., Sonderen, Sanderman, & Coyne, 2013), the resource savings may offset the limitations of this approach relative to more resource-intensive observational techniques. We also concur with others that considering implementation issues is vital in Extension settings and can provide evidence that links buy-in to profidelity beliefs (e.g., Berkel et al., 2011; Durlak & DuPre, 2008; Pettigrew et al., 2015). What remains to be seen through future research is how well profidelity beliefs correspond to observed fidelity and program outcomes. Using self-report data to circumvent barriers to measuring implementation quality can enhance outcome evaluations overall. Second, our study adds an international context to the Extension literature and demonstrates the possibility of developing a collaborative Extension network. In Nicaragua, there were no school collaboratives spanning multiple cities for the purpose of promoting adolescent health, nor were there evidence-based programs customized to the local context. Our study provides additional infrastructure for the exploration of program implementation in an emerging Extension context.

Within an international context, the study underscores the importance of enhancing buy-in among program facilitators. As part of the larger project, we observed high turnover among program facilitators, with many leaving midseason (i.e., at the semester break) for more lucrative positions (e.g., higher pay, better facilities), a circumstance especially common in resource-poor areas. Thus, given the relationship between buy-in and profidelity beliefs, future researchers should explore the use of local "program champions" to enhance and sustain program buy-in and thereby mitigate attrition of facilitators (Mihalic & Irwin, 2003). Additionally, the design of the Dale se REAL content was based on evidence established in a North American context; it is possible that adapting programs to reflect local needs contributes to unmeasured challenges to implementation quality. Lastly, there are likely unaccounted-for facilitator characteristics, traits, and beliefs unique to the international study context that may have influenced the study results. In summary, our study illustrates that developing Extension contexts offer a great deal of room for future investigation.

Disclaimer

The research reported here was supported in part by grant number S-INLEC-13-GR-1012 from the Bureau of International Narcotics and Law Enforcement Affairs to the University of Tennessee (Jonathan Pettigrew, principal investigator). The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the U.S. Department of State.

References

Arnold, M. E., & Cater, M. (2016). Program theory and quality matter: Changing the course of Extension program evaluation. Journal of Extension, 54(1), Article 1FEA1. Available at: https://joe.org/joe/2016february/a1.php

Beaulieu, L. J., & Cordes, S. (2014). Extension community development: Building strong, vibrant communities. Journal of Extension, 52(5), Article 5COM1. Available at: https://www.joe.org/joe/2014october/comm1.php

Berkel, C., Mauricio, A. M., Schoenfelder, E., & Sandler, I. N. (2011). Putting the pieces together: An integrated model of program implementation. Prevention Science, 12, 23–33.

Colby, M., Hecht, M. L., Miller-Day, M., Krieger, J. L., Syvertsen, A. K., Graham, J. W., & Pettigrew, J. (2013). Adapting school-based substance use prevention curriculum through cultural grounding: A review and exemplar of adaptation processes for rural schools. American Journal of Community Psychology, 51, 190–205.

Dane, A., & Schneider, B. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18(1), 23–45.

Duerden, M. D., & Witt, P. A. (2012). Assessing program implementation: What it is, why it's important, and how to do it. Journal of Extension, 50(1), Article 1FEA4. Available at: https://www.joe.org/joe/2012february/pdf/JOE_v50_1a4.pdf

Dunifon, R., Duttweiler, M., Pillemer, K., Tobias, D., & Trochim, W. (2004). Evidence-based Extension. Journal of Extension, 42(2), Article 2FEA2. Available at: https://www.joe.org/joe/2004april/a2.php

Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327–350.

Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18(2), 237–256.

Fagan, A. A., Hanson, K., Hawkins, J. D., & Arthur, M. W. (2008). Bridging science to practice: Achieving prevention program implementation fidelity in the community youth development study. American Journal of Community Psychology, 41, 235–249.

Gagnon, R. J. (2014). Exploring the relationship between the facilitator and fidelity. Journal of Outdoor Recreation, Education, and Leadership, 6(2), 183–186.

Gagnon, R. J., & Bumpus, M. F. (2016). Fidelity and its importance to experiential and outdoor education. Journal of Outdoor Recreation, Education, and Leadership, 8(1), 10–25.

Gagnon, R. J., Stone, G. A., & Garst, B. A. (2015). Measuring self-reported fidelity in recreation: Exploring the effectiveness of the facilitator characteristics and program contributions scale. Proceedings from the Northeastern Recreation Research Symposium 2015. Annapolis, MD: NERR.

Hecht, M. L., & Krieger, J. L. R. (2006). The principle of cultural grounding in school-based substance abuse prevention: The drug resistance strategies project. Journal of Language and Social Psychology, 25(3), 301–319.

Hecht, M. L., & Miller-Day, M. (2009). The drug resistance strategies project: Using narrative theory to enhance adolescents' communication competence. In L. Frey & K. Cissna (Eds.), Routledge Handbook of Applied Communication (pp. 535–557). New York, NY: Routledge.

Kam, C., Greenberg, M. T., & Walls, C. T. (2003). Examining the role of implementation quality in school-based prevention using the PATHS curriculum. Prevention Science, 4(1), 55–63.

Kumaran, M., Fogarty, K., Fung, W. M., & Terminello, A. (2015). Improving healthy living youth development program outreach in Extension: Lessons learned from the 4-H Health Rocks! program. Journal of Extension, 53(3), Article 3RIB5. Available at: https://www.joe.org/joe/2015june/rb5.php

Lillehoj, C. J., Griffin, K. W., & Spoth, R. (2004). Program provider and observer ratings of school-based preventive intervention implementation: Agreement and relation to youth outcomes. Health Education and Behavior, 31(2), 242–257.

Little, M. A., Sussman, S., Sun, P., & Rohrbach, L. A. (2013). The effects of implementation fidelity in the Towards No Drug Abuse dissemination trial. Health Education, 113(4), 281–296.

Mihalic, S. F., & Irwin, K. (2003). Blueprints for violence prevention: From research to real-world settings—Factors influencing the successful replication of model programs. Youth Violence and Juvenile Justice, 1(4), 307–329.

Pawson, R., & Tilley, N. (1997). Realistic evaluation. London, United Kingdom: SAGE Publications.

Pettigrew, J., Graham, J. W., Miller-Day, M., Hecht, M. L., Krieger, J. L., & Shin, Y. (2015). Adherence and delivery: Implementation quality and program outcomes for the seventh-grade keepin' it REAL program. Prevention Science, 16, 90–99.

Pettigrew, J., & Hecht, M. L. (2015). Developing school-based prevention curricula. In K. Bosworth (Ed.), Prevention Science in School Settings: Complex Relationships and Processes (pp. 151–174). New York, NY: Springer.

Rector, P., Bakacs, M., Rowe, A., & Barbour, B. (2016). Inside the black box: An implementation evaluation case study. Journal of Extension, 54(4), Article 4RIB1. Available at: https://www.joe.org/joe/2016august/pdf/JOE_v54_4rb1.pdf

Riggs, K., Lee, T., Marshall, J. P., Serfustini, E., & Bunnell, J. (2006). Mentoring: A promising approach for involving at-risk youth in 4-H. Journal of Extension, 44(3), Article 3FEA5. Available at: https://www.joe.org/joe/2006june/a5.php

Sloboda, Z., Dusenbury, L., & Petras, H. (2014). Implementation science and the effective delivery of evidence-based prevention. In Z. Sloboda & H. Petras (Eds.), Defining Prevention Science (pp. 293–314). New York, NY: Springer.

Sonderen, E., Sanderman, R., & Coyne, J. (2013). Ineffectiveness of reverse wording of questionnaire items: Let's learn from cows in the rain. PLoS ONE, 8(7), e68967.

Treadwell, P., Lachapelle, P., & Howe, R. (2013). Extension learning exchange: Lessons from Nicaragua. Journal of Extension, 51(5), Article 5IAW6. Available at: https://www.joe.org/joe/2013october/iw6.php

Vierregger, A., Hall, J., Sehi, N., Abbott, M., Wobig, K., Albrecht, J. A., . . . Koszewski, W. (2015). Growing healthy kids: A school enrichment nutrition education program to promote healthy behaviors for children. Journal of Extension, 53(5), Article 5IAW3. Available at: https://www.joe.org/joe/2015october/iw3.php

Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., . . . Saul, J. (2008). Bridging the gap between prevention research and practice: The interactive systems framework for dissemination and implementation. American Journal of Community Psychology, 41(3–4), 171–181.

Wanless, S. B., Rimm-Kaufman, S. E., Abry, T., Larsen, R. A., & Patton, C. L. (2015). Engagement in training as a mechanism to understanding fidelity of implementation of the responsive classroom approach. Prevention Science, 16, 1107–1116.

Wolfe, D. A., Crooks, C., Jaffe, P., Chiodo, D., Hughes, R., Ellis, W., . . . Donner, A. (2009). A school-based program to prevent adolescent dating violence. Archives of Pediatrics & Adolescent Medicine, 163(8), 692–699.