The Journal of Extension - www.joe.org

October 2012 // Volume 50 // Number 5 // Feature // v50-5a2

Evidence-Based Programming: What Is a Process an Extension Agent Can Use to Evaluate a Program's Effectiveness?

Abstract
Extension agents and specialists have experienced increased pressure for greater program effectiveness and accountability, and especially for evidence-based programs (EBPs). This article builds on previously published evidence-based programming articles. It offers ideas that address three problems Extension staff face with EBPs, which agents and specialists can use either to test or enhance an existing Extension program's effectiveness or to test the effectiveness of a promising new program.


Robert J. Fetsch
(Co-First Author)
Professor & Extension Specialist Emeritus
Colorado State University
Fort Collins, Colorado
Robert.Fetsch@colostate.edu

David MacPhee
(Co-First Author)
Professor
Colorado State University
Fort Collins, Colorado
david.macphee@colostate.edu

Luann K. Boyer
Extension agent—Family and Consumer Sciences
Colorado State University Extension
Morgan County Extension
Fort Morgan, Colorado
luann.boyer@colostate.edu

What Is the Problem?

In recent years Extension agents and specialists have experienced increased pressure from federal, state, and local governments; funding entities; and land-grant university administrators for greater program effectiveness and accountability (Dunifon, Duttweiler, Pillemer, Tobias, & Trochim, 2004; Mincemoyer et al., 2008). Simultaneously, new lists of evidence-based prevention programs are cropping up in refereed journals (e.g., Barth, 2009; Roth & Brooks-Gunn, 2003; Sanders & Morawska, 2010) and online. This increased demand for educational program accountability has fostered an increased need for and understanding of evidence-based programming.

What Are Evidence-Based Programs?

Evidence-Based Programs (EBPs), according to Small, Cooney, and O'Connor (2009, p. 1), "are well-defined programs that have demonstrated their efficacy through rigorous, peer-reviewed evaluations and have been endorsed by government agencies and well-respected research organizations. EBPs are not simply characterized by known effectiveness; they are also well documented so that they are more easily disseminated." From the perspective of a potential consumer of such programs, this definition emphasizes an Extension agent's ability to evaluate the quality of evidence for program effects, or skills at finding trustworthy clearinghouses of such information.

The Society for Prevention Research created standards to assist Extension agents and other practitioners, administrators, and policy makers to determine whether interventions are efficacious, effective, and ready for dissemination. Efficacious interventions are those that have evidence for program impact but that may not yet have sufficient information for widespread adoption. Specifically, an efficacious intervention "will have been tested in at least two rigorous trials" (Flay et al., 2005, p. 151)—implying either random assignment to groups or a strong enough design to rule out alternative explanations for program effects—that:

  • Specified the target population,
  • Used sound data collection procedures,
  • Used appropriate statistical analyses,
  • Showed consistent benefits and no unintended harm (e.g., due to labeling or diagnosis), and
  • Found at least one long-term positive outcome.

For an intervention to be considered effective, it must meet the following criteria in addition to those listed above for an efficacious intervention (Flay et al., 2005):

  • Have manuals, appropriate training, and technical support available to allow third parties to adopt and implement the intervention;
  • Have been evaluated under real-world conditions in studies that included sound measurement of implementation and engagement of the target audience;
  • Have indicated the practical importance of intervention outcome effects, not just whether the results were statistically significant; and
  • Have clearly demonstrated to whom intervention findings can be generalized.

Programs that are ready for broad dissemination (see Schorr, 1997) also must provide evidence of the ability to "go to scale," cost information, and monitoring and evaluation tools so that adopting agencies can assess how well the intervention works in their settings. Fundamentally, then, "evidence based" entails more than the collection of data on client satisfaction, or baseline and post-test measures from a (select) group of participants. Methodological rigor and in-depth documentation of program processes are essential features of EBPs.

What Are Problems for Extension Staff with Evidence-Based Programming?

Although Extension field agents acknowledge the need for using EBPs, at least three obstacles stand in the way.

First, field agents often can address emerging issues more nimbly with Extension programs that they identify or create to meet local needs. A request may come as a result of a local community concern for which an innovative approach is the best method. Or perhaps a theory-driven program has been developed, but there is a compelling need to sacrifice some fidelity (to an existing program design) in order to adapt to local needs (DiClemente, Crosby, & Kegler, 2009). There may not be time to field test, use comparison groups, or validate an evaluation protocol. In many cases, evaluations of these programs show important changes in participants' knowledge and behaviors. Agents find it devaluing when administrators say that their program results cannot be used for Extension professional development presentations or for award applications because they were not evidence based.

Second, in rural areas and small population centers, there may not be a large enough audience for Extension programs to replicate or to use control groups to validate good programs. Rather than an audience of 100 that can be divided into treatment and control groups, Extension programs in small communities are sometimes fortunate to get a dozen participants. In these situations, it could take many years to gather enough data for a program to be considered evidence based. By then, the program may no longer be appropriate because clientele needs and issues may have shifted to other concerns.

Finally, field agents may not have the training or expertise to develop EBPs. Some Extension programs may not have adequate Extension specialist support to assist agents in developing and testing these programs. Staff also need the schedule flexibility to spend time developing EBPs, and that time needs to be recognized during annual performance appraisals as critical to Extension programming.

Possible Solutions to Problems That Extension Staff Face with EBPs

Our task with evidence-based programming as Extension agents and specialists seems daunting, but it need not be. Developing a program from inception through dissemination may take 15-20 years; testing or enhancing an existing Extension program's efficacy and effectiveness may take fewer. Doing it well takes years of work with a team of Extension colleagues. It also helps to secure a large grant and use it to fund the development, delivery, evaluation, testing, and reporting of results in order to prepare a program for broad dissemination.

Identify and Target an Issue Within Extension's Mission

Identify a social problem or need among your constituents that might be addressed with a research-based, effective educational program that is within Extension's mission (Yang, Fetsch, McBride, & Benavente, 2009). One way to accomplish this task is to search the literature for existing research-based programs that have already been tested and found effective in resolving the identified social problem or need. It is usually more efficient not to reinvent the wheel in Extension programming when it is unnecessary to do so.

For example, Fetsch and Yang conducted a three-state needs assessment with three random samples of constituent adults and found that prevention of child abuse was the second-highest priority on their minds. They searched the literature and found the RETHINK Parenting and Anger Management program (Institute for Mental Health Initiatives, 1991a, 1991b). After reviewing it, they found that RETHINK was clearly theory and research based and that it had some evidence of effectiveness with 18 middle-class African-American parents in the Washington, DC area. Note that this would be a rather limited sample and design by the criteria reviewed above and might be considered "promising" by most program clearinghouses. So they used RETHINK to test the program's efficacy with samples of parents and found the program to be effective immediately after the conclusion of the workshops (Fetsch, Schultz, & Wahler, 1999) and even more effective 2½ months afterwards (Fetsch, Yang, & Pettit, 2008).

Find an Efficacious or Promising Program That Addresses the Targeted Issue

How can an existing program be identified for adoption? Virtually all programs that meet the Society for Prevention Research standards were at some point supported by a federal demonstration grant, which means that one can find descriptions of promising, efficacious, and effective programs through federal agencies' websites. For instance, the Substance Abuse and Mental Health Services Administration has a National Registry of Evidence-Based Programs and Practices <http://nrepp.samhsa.gov/>. The National Campaign to Prevent Teen and Unplanned Pregnancy has a similar listing related to teen pregnancy prevention <http://www.thenationalcampaign.org/resources/programs.aspx>. The Office of Juvenile Justice and Delinquency Prevention has a model programs guide <http://www2.dsgonline.com/mpg/>, and the Administration on Children, Youth, and Families published a guide to emerging practices to prevent child abuse and neglect (Thomas, Leicht, Hughes, Madigan & Dowell, 2003). The What Works Clearinghouse <http://ies.ed.gov/ncee/wwc/> lists educational programs that meet rigorous standards of effectiveness.

Other means of finding promising or efficacious programs are to conduct Web-based searches or, less often, to use academic search engines such as PsycInfo and PAIS. For instance, one of the most prominent and widely respected websites of evidence-based programs is the Cochrane Collaboration <http://www.cochrane.org/cochrane-reviews>. Although this site focuses primarily on health care, it includes a number of reviews of interventions that are relevant to Extension. By combining search terms related to "intervention" or "program," "impact" or "outcome," and the specific content area (e.g., family functioning; youth development), one may be able to find several dozen or more programs to examine.

Typically, the federal staff and independent researchers who compile such program lists adopt most if not all of the standards described earlier, which means that Extension field staff do not necessarily need an advanced degree in research methods to sort the wheat from the chaff. Even so, it can be helpful if not essential to know how to evaluate whether a given program passes muster as evidence based. To expand on the standards outlined earlier, the ideal program would have been evaluated multiple times by:

  • Using random assignment to an intervention and control group; a single-group pre/posttest design is inadequate.
  • Employing valid measures, appropriate for the population, that are linked to program objectives focusing on behavior or attitude changes; measures of client satisfaction alone are insufficient.
  • Including a large, representative sample, with minimal attrition over time; small, select samples may mean that program benefits are site specific.
  • Finding changes that are robust or meaningful, with at least moderate effect sizes; trivial changes can be statistically significant with a large enough sample (see the illustration after this list).
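
To illustrate the fourth criterion, the hypothetical calculation below (in Python; all numbers are invented for illustration) shows how a change too small to matter in practice can still be statistically significant when the sample is large, which is why effect sizes such as Cohen's d should be reported alongside p values.

    # Hypothetical illustration: statistical vs. practical significance.
    # All numbers are invented; only the arithmetic is meant to be instructive.
    import math

    n = 2000                          # participants per group (a very large trial)
    mean_tx, mean_ctrl = 3.55, 3.50   # post-test means on a 5-point parenting scale
    sd = 0.80                         # pooled standard deviation

    # Cohen's d: difference in means divided by the pooled SD
    d = (mean_tx - mean_ctrl) / sd                 # 0.06, a trivial effect

    # Approximate z test for the difference between two independent means
    se = sd * math.sqrt(2 / n)                     # standard error of the difference
    z = (mean_tx - mean_ctrl) / se                 # about 1.98, p < .05

    print(f"Cohen's d = {d:.2f} (trivial), z = {z:.2f} (statistically significant)")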

The criteria above tell us that any given intervention trial should be scientifically sound. However, "evidence based" implies more than one good study showing the program to have an impact. Replication is another key. The program should be implemented with different populations in different sites, with effect sizes reported for each trial as well as the practical significance of any changes that are observed (Cooney, Huser, Small, & O'Connor, 2007). Too often, however, programs are promulgated by curriculum publishers or program developers without convincing evidence to support their utilization (e.g., Klevens & Whittaker, 2007).

It is best practice for a program to provide information about how well it was implemented so that those replicating it have some idea of what is required to staff and deliver the program. As well, Extension agents can use rubrics to assess the adequacy and appropriateness of a curriculum, such as one used in 4-H <http://www.national4-hheadquarters.gov/library/Curric_Design_Review_Template.pdf>. It is not a simple matter to apply these considerations to what might be found in the method and results sections of a journal article, or in a program's final report, so when in doubt, ask!

Consider the Resources Needed Before Adopting an EBP

In many cases, the difficult work begins after an EBP is selected. There are several challenges to implementing an EBP with fidelity and sustaining a successful program over time. Extension agents should consider the following when making decisions about adoption of an EBP. First, is the community ready for and committed to the proposed program? (See Edwards, Jumper-Thurman, Plested, Oetting, & Swanson, 2000.) Sustainable programs are those with a good fit between the program's focus and the identified needs of the community (Greenberg, Feinberg, Meyer-Chilenski, Spoth, & Redmond, 2007). Second, field agents need the knowledge to implement such programs and the resources to purchase the curriculum and be trained in implementing it—resources that may be in short supply (Cooney et al., 2007; Ockene et al., 2007). Implementation will be more daunting if the curriculum is grounded in theory-driven processes that are more subtle or sophisticated, such as self-regulation, or more complex, such as positive youth development.

Third, agency administrative support is essential for an EBP to take root in the local community (O'Loughlin, Renaud, Richard, Gomez, & Paradis, 1998). For example, the administrative climate—support and shared vision—distinguishes successful from ineffective positive youth development programs (Catalano, Gavin, & Markham, 2010), as does previous experience with successful prevention programs (Greenberg et al., 2007). Finally, technical assistance and monitoring typically are required from the EBP program developers. Thus, the Extension agent should assess how involved the program developer(s) will be in training, troubleshooting, and monitoring, and whether assistance will be provided by the program evaluator. Overall, field staff not only need to assess whether a program passes muster as effective, they also need to examine whether they are able to implement and sustain the EBP once it is adopted.

Review the Literature, and Create an Extension Program to Address the Targeted Issue

Sometimes there is no effective program in the literature to address the identified social problem or need in an Extension agent's county or region. This is especially likely if the need focuses on positive youth development as opposed to problem behaviors (Cooney et al., 2007). In such cases, it may be necessary to create our own program. Where do we begin? A first step would be to read the published literature on the need or scope of the social problem. One should read both theoretical and empirical research articles on the topic that contribute to a theory of change (also known as program theory or impact model). Both are important, especially if grant funding is sought to support the program. Grant funders say the best programs are both theory based and empirically research based (e.g., Nation et al., 2003; Painter, Borba, Hynes, Mays, & Glanz, 2008). New programs should rest on a sound theoretical foundation in order to enhance their chances of effectively addressing the identified need or problem.

Build the Extension Program on Solid Theory and Research

A program curriculum should be based on the best theory and empirical research findings available. This is what we do best in Extension—provide research-based and effective educational programs. Agents should clearly articulate the goals, objectives, and expected outcomes in behavioral terms and then craft the curriculum to be in synchrony with those goals and objectives. Interventionists need to be SMART: objectives should be specific, measurable, achievable, realistic, and time framed. The tighter the links among the identified need or problem, goals, objectives, and treatment or curriculum, the more likely we are to see positive outcomes that solve the social problem or need. Specific, measurable program objectives are essential, so they must be articulated clearly.

Objectives are the means by which sound theory is translated into effective interventions. As such, they are the foundation of a program's logic model.

Logic models explicate one's assumptions, goals, strategies, and outcomes. As such, they can enhance one's effectiveness in achieving outcomes, help to manage staff and resources, and increase credibility with funders and collaborators (Cato, 2006).
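
As a concrete illustration, a logic model can be thought of as a chain that links assumptions, inputs, activities, outputs, and short-, medium-, and long-term outcomes. The sketch below is hypothetical (the categories follow common Extension logic-model practice, but every entry is invented) and uses Python only to make the linkages explicit while drafting; it is not a prescribed format.

    # A minimal, hypothetical logic model for a parenting-education workshop series.
    logic_model = {
        "assumptions": ["Parents who learn anger-management skills discipline less harshly"],
        "inputs":      ["2 trained facilitators", "curriculum and workbooks", "meeting space"],
        "activities":  ["Six 2-hour workshops on anger management and discipline"],
        "outputs":     ["60 parents complete at least 5 of 6 sessions"],
        "outcomes": {
            "short_term":  ["Improved knowledge of nonviolent discipline strategies"],
            "medium_term": ["Self-reported decrease in harsh discipline at 3-month follow-up"],
            "long_term":   ["Lower local rates of substantiated child maltreatment"],
        },
    }

    # Reviewing the model is as simple as walking the chain from assumptions to outcomes.
    for section, content in logic_model.items():
        print(section, "->", content)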

Locate and Use Valid and Reliable Measures to Assess Outcomes

Essential to best practice is the use of valid and reliable measures to assess outcomes. There are many assessment tools in the literature that are already tested and available. Try to find brief but sound (valid and reliable) tools that are closely linked to the outcomes specified in the objectives, seek written permission to use them, and pilot test them if necessary to see whether they are appropriate for the local population.

There are many ways to find useful measures. The Mental Measurements Yearbook provides in-depth reviews of many published measures that tap into a wide array of behaviors, not just in the cognitive arena. The American Psychological Association has an online, comprehensive database of tests and measures <http://www.apa.org/pubs/databases/psyctests>. Potential users may purchase day passes to search the site for useful measures. The same articles that are reviewed in order to build a case for the program objectives are likely to describe, in the Method section, the measures that were used in the research or evaluation. Those measures can then be tracked down through the reference citations by using PsycInfo, which often lists authors' addresses and emails. Various handbooks are available that review measures in a given area and include contact information for the test developer (e.g., Card, 1993; Touliatos, Perlmutter, & Straus, 1990). Finally, some measures are in the public domain and can be found by searching the Web.

There are several practical issues to consider when selecting measures for evaluation.

  1. First, the measures should be brief. A valid 6- or 10-item scale may not have the reliability of a 30-item scale (the sketch after this list shows how to estimate the difference), but participants are less likely to get frustrated or to hurry through and give inaccurate responses.
  2. Second, try to assess changes in behavior or performance, not just attitudes or client satisfaction (McKnight & Sechrest, 2004). This recognizes a difference between intentions and actions.
  3. Third, assess states rather than traits (Patrick & Beery, 1991). Traits are enduring characteristics that tend not to change, whereas states indicate functioning in the current time and context. The difference can be seen in how item stems often are worded: "I have trouble disciplining my child" (trait) versus "In the last week, I spanked my child" (state). Effective programs are more likely to show an impact on state measures than on trait measures, because state measures are more sensitive to change.
  4. Fourth, check on the readability and cultural appropriateness of a measure. We recommend that individuals from the local community read proposed measures and provide feedback as to whether items are understandable and appropriate. Sometimes, an item that seems clear may not be so in a particular community. Consider how the following item from a child development scale confused parents living in a rural area where there were no house numbers: "Child knows his address."
  5. Finally, before adapting an existing measure—changing the wording or abbreviating it—or developing your own, we strongly recommend that a specialist in tests and measures be called in to provide guidance.
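
The trade-off named in the first consideration can be estimated before a shortened scale is adopted. The Spearman-Brown prophecy formula predicts how reliability changes when a scale is lengthened or shortened; the sketch below (Python, with invented numbers) is a rough guide only, and the actual reliability of any shortened measure should still be verified with local pilot data.

    # Spearman-Brown prophecy formula: predicted reliability when a scale's length
    # changes by a factor k (k < 1 means a shortened scale).
    def spearman_brown(reliability: float, k: float) -> float:
        return (k * reliability) / (1 + (k - 1) * reliability)

    # Hypothetical example: a 30-item scale with reliability .90, cut to 10 items (k = 10/30).
    print(round(spearman_brown(0.90, 10 / 30), 2))   # about 0.75 -- serviceable, but lower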

Include at Least Two Groups for Outcome Comparison

Evidence-based programs must answer the question, "Compared to what?" Thus, it is important to design the workshop evaluation to include at least two groups for comparison of outcomes. The best approach is to start with a sample of 100 or so prospective workshop participants and randomly assign them to two groups: the first is the workshop participant group; the second is a no-treatment or "treatment-as-usual" control group. Sometimes in real-world settings this is not practical (see Rosen, Manor, Engelhard, & Zucker, 2006). If a no-treatment group is not feasible, then randomly assign half the prospective participants to the workshop and the other half to a wait-list control group that gets the workshop later. Both groups complete presurveys, postsurveys, and follow-up surveys. In this way, we are better positioned to know whether the program works.
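
A minimal sketch of the assignment step appears below (Python, with an invented roster; in practice the list would come from actual workshop registrations). Whether the second group is a no-treatment control or a wait-list group, the key point is that chance, not convenience, determines who is in which group.

    # Randomly assign a roster of prospective participants to workshop vs. wait-list groups.
    import random

    roster = [f"participant_{i:03d}" for i in range(1, 101)]   # 100 prospective participants

    rng = random.Random(2012)   # fixed seed so the assignment can be documented and audited
    rng.shuffle(roster)

    workshop_group = sorted(roster[:50])    # receives the workshops now
    waitlist_group = sorted(roster[50:])    # receives the same workshops after follow-up

    # Both groups complete pre-, post-, and follow-up surveys on the same schedule.
    print(len(workshop_group), len(waitlist_group))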

Teach Well, Evaluate Well, and Report Results Well

Once the workshops are scheduled, we should prepare thoroughly so that we teach well. Programs should be taught in real settings, to improve people's lives as well as their communities (MacPhee, Miller-Heyl, & Carroll, 2012). Collaborate with state specialists and other land-grant university faculty in self-reflective practice. Let the evidence tell us what worked and what did not, and then meet with the Extension team to figure out how the curriculum can be strengthened in the next wave of workshops. Repeat the process with new participants, with solid program evaluation research, until in 15-20 years we have a program that is ready for dissemination. For example, Nelson started Just in Time Parenting more than 20 years ago in Delaware. After extensive evaluation in numerous states, it is now being disseminated nationwide online <http://www.parentinginfo.org/>.
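
When the evaluation data come in, letting the evidence tell us what worked can be as straightforward as comparing the two groups' pre-to-post change and reporting an effect size alongside the p value. The sketch below is hypothetical (invented scores on a generic parenting scale) and assumes the SciPy package is available; anything more complex, such as attrition or nested data, calls for a specialist.

    # Hypothetical pre-to-post change scores for workshop and wait-list groups.
    from statistics import mean, stdev
    from scipy import stats

    workshop_change = [1.2, 0.8, 1.5, 0.9, 1.1, 0.7, 1.4, 1.0, 0.6, 1.3]   # post minus pre
    waitlist_change = [0.2, -0.1, 0.3, 0.0, 0.1, 0.4, -0.2, 0.2, 0.1, 0.0]

    t, p = stats.ttest_ind(workshop_change, waitlist_change)

    # Pooled-SD Cohen's d, so practical importance is reported alongside significance.
    sd_pooled = ((stdev(workshop_change) ** 2 + stdev(waitlist_change) ** 2) / 2) ** 0.5
    d = (mean(workshop_change) - mean(waitlist_change)) / sd_pooled

    print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")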

Conclusion

There is no stopping the trend: EBPs are here to stay. Although it may appear daunting for Extension agents and specialists to use EBPs, it need not be. We can use resources cited in this article to identify already proven programs that address our clientele's identified needs. Or we can use ideas that address the three problems Extension staff face with EBPs to test and/or enhance an existing Extension program's efficacy and effectiveness. By doing so, we continue to do what Extension does best—provide research-based, effective programs that work to improve the lives of individuals and families across the U.S.

References

Barth, R. P. (2009). Preventing child abuse and neglect with parent training: Evidence and opportunities. The Future of Children, 19(2), 95-118.

Card, J. J. (1993). Handbook of adolescent sexuality and pregnancy: Research and evaluation instruments. Thousand Oaks, CA: Sage.

Catalano, R. F., Gavin, L. E., & Markham, C. M. (2010). Future directions for positive youth development as a strategy to promote adolescent sexual and reproductive health. Journal of Adolescent Health, 46, S92-S96.

Cato, B. (2006). Enhancing prevention programs' credibility through the use of a logic model. Journal of Alcohol and Drug Education, 50, 8-20.

Cooney, S., Huser, M., Small, S., & O'Connor, C. (2007, October). Evidence-based programs: An overview. Retrieved from: http://www.uwex.edu/ces/flp/families/whatworks_06.pdf

DiClemente, R. J., Crosby, R. A., & Kegler, M. C. (Eds.) (2009). Emerging theories in health promotion and practice (2nd ed.). San Francisco: Jossey-Bass.

Dunifon, R., Duttweiler, M., Pillemer, K., Tobias, D., & Trochim, W. M. K. (2004). Evidence-based Extension. Journal of Extension [On-line], 42(2) Article 2FEA2. Available at: https://www.joe.org/joe/2004april/a2.php

Edwards, R. W., Jumper-Thurman, P., Plested, B. A., Oetting, E. R., & Swanson, L. (2000). Community readiness: Research to practice. Journal of Community Psychology, 28, 291-307.

Fetsch, R. J., Schultz, C. J., & Wahler, J. J. (1999). A preliminary evaluation of the Colorado RETHINK parenting and anger management program. Child Abuse & Neglect, 23, 353-360.

Fetsch, R. J., Yang, R. K., & Pettit, M. J. (2008). The RETHINK Parenting and Anger Management Program: A follow-up validation study. Family Relations, 57, 543-552.

Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., … & Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science, 6, 151-175.

Greenberg, M. T., Feinberg, M. E., Meyer-Chilenski, S., Spoth, R. L., & Redmond, C. (2007). Community and team member factors that influence the early phase functioning of community prevention teams: The PROSPER Project. Journal of Primary Prevention, 28, 485-504. doi: 10.1007/s10935-007-0116-6

Institute for Mental Health Initiatives. (1991a). Anger management for parents: Parent's manual—The RETHINK method. Champaign, IL: Research Press.

Institute for Mental Health Initiatives. (1991b). Anger management for parents: Program guide—The RETHINK method. Champaign, IL: Research Press.

Klevens, J., & Whittaker, D. J. (2007). Primary prevention of child physical abuse and neglect: Gaps and promising directions. Child Maltreatment, 12, 364-377.

MacPhee, D., Miller-Heyl, J., & Carroll, J. (2012). Impact of the DARE to be You family support program: Collaborative replication in rural counties. Manuscript submitted for publication.

McKnight, K. M., & Sechrest, L. (2004). Program evaluation. In S. N. Haynes & E. M. Heiby (Eds.), Comprehensive handbook of psychological assessment, Vol. 3: Behavioral assessment (pp. 246-266). Hoboken, NJ: Wiley.

Mincemoyer, C., Perkins, D., Ang, P. M., Greenberg, M. T., Spoth, R. L., Redmond, C., & Feinberg, M. (2008). Improving the reputation of Cooperative Extension as a source of prevention education for youth and families: The effects of the PROSPER model. Journal of Extension [On-line], 46(1) Article 1FEA6. Available at: https://www.joe.org/joe/2008february/a6.php

Nation, M., Crusto, C., Wandersman, A., Kumpfer, K., Seybolt, D., Morrissey-Kane, E., & Davino, K. (2003). What works in prevention: Principles of effective prevention programs. American Psychologist, 58, 449-456.

Ockene, J. K., Edgerton, E. A., Teutsch, S. M., Marion, L. N., Miller, T., Genevro, J. L. … & Briss, P. A. (2007). Integrating evidence-based clinical and community strategies to improve health. American Journal of Preventive Medicine, 32, 244-252.

O'Loughlin, J., Renaud, L., Richard, L., Gomez, L. S., & Paradis, G. (1998). Correlates of the sustainability of community-based heart health promotion interventions. Preventive Medicine, 27, 702-712.

Painter, J. E., Borba, C. P. C., Hynes, M., Mays, D., & Glanz, K. (2008). The use of theory in health behavior research from 2000 to 2005: A systematic review. Annals of Behavioral Medicine, 35, 358-362.

Patrick, D. L., & Beery, W. L. (1991). Measurement issues: Reliability and validity. American Journal of Health Promotion, 5, 305-310.

Rosen, L., Manor, O., Engelhard, D., & Zucker, D. (2006). In defense of the randomized controlled trial for health promotion research. American Journal of Public Health, 96, 1181-1186.

Roth, J. L., & Brooks-Gunn, J. (2003). Youth development programs: Risk, prevention, and policy. Journal of Adolescent Health, 32, 170-182.

Sanders, M. R., & Morawska, A. (2010). Prevention: The role of early universal and targeted interventions. In R. C. Murrihy, A. D. Kidman, & T. H. Ollendick (Eds.), Clinical handbook of assessing and treating conduct problems in youth (pp. 435-454). New York: Springer.

Schorr, L. B. (1997). Common purpose: Strengthening families and neighborhoods to rebuild America. New York: Anchor.

Small, S. A., Cooney, S. M., & O'Connor, C. (2009, February). Evidence-informed program improvement: Using principles of effectiveness to enhance the quality and impact of family-based prevention programs. Family Relations, 58, 1-13.

Thomas, D., Leicht, C., Hughes, C., Madigan, A., & Dowell, K. (2003). Emerging practices in the prevention of child abuse and neglect. Washington, DC: Children's Bureau, ACYF, DHHS.

Touliatos, J., Perlmutter, B. F., & Straus, M. A. (Eds.) (1990). Handbook of family measurement techniques. Newbury Park, CA: Sage.

Yang, R. K., Fetsch, R. J., McBride, T. M., & Benavente, J. C. (2009). Assessing public opinion directly to keep current with changing community needs. Journal of Extension [On-line], 47(3) Article FEA6. Available at: https://www.joe.org/joe/2009june/a6.php