The Journal of Extension - www.joe.org

August 2019 // Volume 57 // Number 4 // Tools of the Trade // v57-4tt2

Promoting Program Evaluation Fidelity When Data Collectors Lack Research Design and Implementation Expertise

Abstract
Within Extension, certain personnel, facilitators, and volunteers expected to conduct research in the form of program evaluation may have little or no training in effective research design and practices. This circumstance can lead to difficulties in the implementation of evaluation procedures, particularly with regard to program evaluation fidelity. In addition, a lack of familiarity with effective program evaluation and research methods may limit an individual's understanding of the importance of evaluation itself, as well as the importance of fidelity in conducting an evaluation. Effective planning of, training in, and monitoring of program evaluation procedures are essential for maintaining fidelity and ensuring accurate evaluation of program outcomes.


Robert J. Cooper
Clinical Assistant Professor, Human Development
Washington State University
Pullman, Washington
robby.cooper@wsu.edu

Scott A. VanderWey
Director of Adventure Education
Washington State University Extension
Puyallup, Washington
vanderwey@wsu.edu

Kevin C. Wright
Director of King County Extension
Washington State University Extension
Renton, Washington
wrightkc@wsu.edu

Background

More than ever, those involved with Extension programs across the country are being tasked with providing evidence that demonstrates the impacts of their programming (Rennekamp & Arnold, 2009). Extension educators, facilitators, and volunteers are expected to obtain this evidence through scholarly efforts to evaluate and report program impacts. Unfortunately, some Extension personnel and others expected to conduct research in the form of program evaluation may have little or no exposure to or training in effective research design and practices. This issue can be addressed through professional development and/or collaboration with non-Extension researchers, including research faculty at their respective universities. These collaborations can be mutually beneficial to Extension professionals who need to evaluate their programs and research faculty who need programs and participants to study. However, such a partnership requires that both sides develop an understanding of the other's priorities and challenges in order to function effectively (Shulha, Whitmore, Cousins, Gilbert, & al Hudib, 2016). For instance, non-Extension research faculty must be aware of the challenges faced by Extension professionals and the facilitators and volunteers with whom they work—challenges such as time, labor, and financial constraints. Conversely, Extension professionals, facilitators, and volunteers must recognize the importance of fidelity in implementing research methods and designs to ensure the accuracy of evaluation findings. A balance of realistic expectations for the evaluation process and fidelity to research methods is essential for both the partnership and the process (Chen, 2015). Herein, we describe strategies for promoting fidelity in program evaluation that were identified through a collaborative effort by Extension and non-Extension faculty.

The Evaluation

In 2012, Washington State University Extension 4-H Adventure Education faculty and Washington State University Human Development faculty began development of a new program evaluation process for Adventure Education programs in the state. The purpose of this partnership between Extension and non-Extension faculty was to examine participant outcomes specific to the programs' goals. Our intention was to design a process comprising effective research procedures that could be implemented by trained Extension program facilitators.

After initial success with the evaluation process, we noticed that previously effective outcome measures began to demonstrate reliability issues. In the second round of data analysis, a measure of the internal consistency of the scale variables used to measure program outcomes dropped to a problematic level (George & Mallery, 2003). In examining possible causes, we determined that the problems likely stemmed from two key factors: (a) program facilitators' lack of fidelity to the evaluation process (e.g., not providing clear instructions to program participants regarding the survey, not providing enough time for participants to complete the survey) and (b) administration of the survey to program participants who should not have been included in the survey sample (e.g., participants who were outside the target age range or who were unable to fully understand the survey due to limited English proficiency). The solution to these fidelity issues was careful retraining of the program facilitators; subsequently, only facilitators who had been present at the training were allowed to administer the survey. We offer lessons learned from this experience to other Extension and non-Extension research faculty who involve persons unfamiliar with research design and practices, whether Extension educators, facilitators, or volunteers, in data collection.
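For collaborators who want to run the same kind of reliability check on their own survey data, the following is a minimal sketch in Python. It assumes the internal consistency measure in question was Cronbach's alpha, the statistic to which George and Mallery's (2003) benchmarks are commonly applied; the response matrix shown is hypothetical.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items matrix of scale responses."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: six participants answering a four-item scale (1-5 Likert).
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")  # values below roughly .70 are often treated as questionable
```

Recomputing this statistic after each round of data collection is one way to notice a reliability drop early, as happened in the second round described above.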

Implementation Concerns

Our collaboration presented the following implementation concerns:

  • lack of research training and experience for program facilitators,
  • time constraints of programs, and
  • program participant demographics.

Fidelity Concerns

Our collaboration presented the following fidelity concerns:

  • lack of buy-in to the research process from program facilitators,
  • lack of research training and experience for program facilitators, and
  • lack of continued fidelity to the evaluation procedures over time.

Strategies for Success

Our experiences revealed the following strategies for successful implementation:

  • Foster buy-in from all collaborators, including those involved in implementation and on-site supervision of the evaluation process.
    • Explaining the "how," not just the "what," of the evaluation process can help nonresearch faculty and professionals understand the importance of fidelity to the evaluation process.
  • Design an evaluation process that is realistic in terms of the time and resources available at the evaluation site.
  • Carefully design and conduct thorough trainings with those involved in the implementation and on-site supervision of the evaluation process. Ensure that only trained individuals are involved in the evaluation process.
    • Train data collectors to document group characteristics and program details (size of group, length of program, etc.) in addition to collecting surveys.
  • Monitor, and retrain as needed, all individuals involved in the implementation and on-site supervision of the evaluation process.
    • Ensure that the time required for data collection is scheduled into programs.
    • Choose or create an appropriate setting for data collection. Participants should be comfortable and free of unreasonable distraction when completing surveys.
  • Ensure that data are collected only from the appropriate participants (participants who are in the desired age range, can comprehend the survey items, have participated in the appropriate types and number of programs, etc.), as illustrated in the sketch following this list.
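As one illustration of the last two points, the sketch below shows how documented group characteristics can be used to screen survey records before analysis. The column names and eligibility criteria are hypothetical and would need to be adapted to a given program's evaluation plan.

```python
import pandas as pd

# Hypothetical administration log: data collectors record each participant's age,
# program type, and whether the survey was administered with full instructions.
records = pd.DataFrame({
    "participant_id": [101, 102, 103, 104, 105],
    "age": [12, 9, 15, 13, 14],
    "program": ["ropes_course", "ropes_course", "day_camp",
                "ropes_course", "ropes_course"],
    "instructions_given": [True, True, True, False, True],
})

# Keep only the records that meet the sampling criteria before analysis.
eligible = records[
    records["age"].between(10, 18)              # target age range (assumed)
    & (records["program"] == "ropes_course")    # target program type (assumed)
    & records["instructions_given"]             # administered with fidelity
]

print(eligible[["participant_id", "age", "program"]])
```

Documenting these details at the point of collection, rather than reconstructing them later, is what makes this kind of screening possible.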

Collaboration between Extension and non-Extension research faculty can be a mutually beneficial partnership but requires careful attention to the needs and challenges of all parties involved. The lessons we learned from our collaboration point to valuable strategies for avoiding likely problems and for strengthening the program evaluation process.

References

Chen, H. T. (2015). Practical program evaluation: Theory-driven evaluation and the integrated evaluation perspective. Los Angeles, CA: Sage Publishing.

George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference 11.0 update (4th ed.). Boston, MA: Allyn & Bacon.

Rennekamp, R. A., & Arnold, M. E. (2009). What progress, program evaluation? Reflections on a quarter century of Extension evaluation practice. Journal of Extension, 47(3), Article 3COM1. Available at: http://www.joe.org/joe/2009june/comm1.php

Shulha, L. M., Whitmore, E., Cousins, J. B., Gilbert, N., & al Hudib, H. (2016). Introducing evidence-based principles to guide collaborative approaches to evaluation: Results of an empirical process. American Journal of Evaluation, 37(2), 193–215.