June 2005 // Volume 43 // Number 3 // Feature Articles // 3FEA4


Increasing Educational Impact: A Multi-Method Model for Evaluating Extension Workshops

Abstract
Extension professionals are increasingly being asked to account for their activities through formal program evaluation. Many models of evaluation have been developed to accomplish the goals of evaluation (judging the merit and worth of a program, improving the program, ensuring oversight and compliance, or developing theory). This article presents a model that combines formative and summative techniques with Stufflebeam's Context, Input, Process, and Product (CIPP) model to evaluate a series of Integrated Pest Management workshops presented to horticultural professionals. The evaluation process resulted in increased learning among the program providers and more educationally effective workshops for stakeholders.


Kathleen D. Kelsey
Associate Professor
Agricultural Education
kathleen.kelsey@okstate.edu

Mike Schnelle
Professor and Extension Specialist
Horticulture and Landscape Architecture
mas@okstate.edu

Patricia Bolin
Integrated Pest Management Coordinator
Entomology and Plant Pathology
bolinp@okstate.edu

Oklahoma State University
College of Agricultural Sciences and Natural Resources
Stillwater, Oklahoma


Introduction and Background

Extension professionals are increasingly being asked by stakeholders for more accountability in their work (Altschuld & Zheng, 1995). In response, they have turned to the processes and products of evaluation for methods of documenting the impacts of their programs. Many evaluation models have been applied with varying degrees of success to Extension programs. Some models have followed a singular structured format (Bailey & Deen, 2002; Garst & Bruce, 2003), while others have used a variety of activities to demonstrate program outcomes (Brown & Kiernan, 1998; Chapman-Novakofski et al., 2004).

The purposes of evaluation have evolved over time and are currently described by Mark, Henry, and Julnes (2000) as a) assessing the merit and worth of a program, b) improving the program or organization, c) providing oversight and compliance, and d) developing knowledge or testing theory.

Evaluation can occur before and during program implementation (formative) or after it (summative) (Scriven, 1991). Formative evaluation is designed to facilitate program improvement, whereas summative evaluation is designed to judge the merit and worth of a program or to address oversight and compliance issues.

The model presented in this article was developed by the IPM coordinator, the Extension specialist, and the evaluator to document the impacts and outcomes of a series of Integrated Pest Management (IPM) workshops delivered by Oklahoma State University. The model focused on unobtrusive measures that would capture the processes and products of the workshop effort. It incorporated formative and summative concepts (Scriven, 1991) as well as the Context, Input, Process, and Product (CIPP) model introduced by Stufflebeam (1973).

The CIPP model includes four phases of evaluation. Phase one is Context centered and addresses where the program is now and what it needs to do to achieve its goals. Phase two is Input centered and asks how the program will get to where it needs to be and what resources are required to drive it. Phase three is Process centered and asks how the program is going about achieving its goals. Phase four is Product focused and asks whether the program has achieved its goals and what the measurable outcomes are.
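
To make the four phases concrete for planning purposes, the brief sketch below (written in Python; all names are illustrative and not part of the original study) simply encodes each CIPP phase with the guiding questions described above so they can be printed as a planning checklist.

    # Minimal sketch: the CIPP phases and their guiding questions as data.
    # Names and structure are illustrative only.

    CIPP_PHASES = {
        "Context": [
            "Where is the program now?",
            "What does the program need to do to achieve its goals?",
        ],
        "Input": [
            "How will the program get to where it needs to be?",
            "What resources are required to drive the program?",
        ],
        "Process": [
            "How is the program going about achieving its goals?",
        ],
        "Product": [
            "Has the program achieved its goals?",
            "What are the measurable outcomes?",
        ],
    }

    def print_evaluation_checklist(phases=CIPP_PHASES):
        """Print the phases and their questions as a planning checklist."""
        for phase, questions in phases.items():
            print(phase)
            for question in questions:
                print(f"  - {question}")

    if __name__ == "__main__":
        print_evaluation_checklist()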

A Description of the IPM Workshop

The workshops were funded by the Southern Regional Integrated Pest Management (IPM) Program. The project goal was to educate opinion leaders such as ornamental horticultural specialists, Extension educators, nurserymen, and advanced hobby gardeners in IPM principles for environmentally sound pesticide use around the home. The project adopted a train-the-trainer approach by targeting opinion leaders so that participants would diffuse the knowledge throughout their communities. The objective of the project was to maximize participants' adoption of IPM principles in ornamental pest management.

Implementation of the program involved delivery of three IPM-centered workshops. The workshops were structured as follows:

8:00 a.m.: Welcome and introduction of IPM presenters and the evaluation process.

8:15 a.m.: Participants network and fill out knowledge pretest.

8:30 a.m.: Participants split into two groups and move outside for a walking tour.

11:30 a.m.: Participants rejoin for lunch and a formal IPM presentation back in meeting room.

1:00 p.m.: Participants resume walking tour outside.

3:30 p.m.: Participants gather in the meeting room to pick up brochures and to complete the knowledge posttest and customer satisfaction survey.

The walking tour was scouted in advance by the IPM Coordinator for common pest problems seen in this state. Thirteen presenters led two groups of participants around campus and explained the problems and the most effective treatments.

The IPM Program Evaluation Model

Program evaluation was an integral component of the funding proposal; thus, formative evaluation began as the proposal was written. Once funding was secured, the team worked to develop a model for program implementation in which evaluation was integrated into the workshop. The team decided to use participant observation (Patton, 1990) to capture the context, input, and processes involved in delivering the workshops and to use a pretest-posttest to document the products of the program (changes in attitude, knowledge, and behavior) (Creswell, 2003).

The evaluator served as the participant observer by attending and fully participating in every workshop. The evaluator documented activities and informally interviewed other participants during the walking tours. This method was unobtrusive and yielded high quality data on the workshop processes. The evaluator was able to discuss subtleties of the workshop with program planners that a written instrument could not capture.

A written pretest-posttest instrument was developed by the team based on the information to be presented in the workshop. Unlike many workshop evaluations, the instrument focused on capturing knowledge gained during the event in addition to customer satisfaction data. Sample questions included: Bacillus thuringiensis (Bt) is used to control all but which group of insects? What is the recommended treatment for leaf or petiole galls? The 2002 instrument had 27 items, the 2003 instrument had 17 items, and the 2004 instrument had 32 items. The instrument was modified from year to year to reflect new content added to the workshop on water ecology and termite control. The response set included matching, multiple choice, fill-in-the-blank, and Likert-type items to encourage participant responses (Chapman-Novakofski et al., 1997; Shepard, 2002).

The instrument was administered at the beginning of each workshop. As participants entered the meeting room, workshop leaders greeted them, handed them the instrument, and asked them to complete it, lending legitimacy to the process. At the end of the workshop, participants completed the same instrument, allowing a matched pairs t-test to be used for the analysis.
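
The analysis script is not part of the article; the fragment below is a minimal sketch, in Python, of how matched pretest-posttest totals can be compared with a matched pairs (paired samples) t-test. The file name, column names, and alpha level are assumptions for illustration only.

    # Minimal sketch of a matched pairs t-test on pretest/posttest totals.
    # Assumes a CSV (hypothetical name and columns) with one row per
    # participant who completed both forms.

    import pandas as pd
    from scipy import stats

    scores = pd.read_csv("ipm_workshop_scores.csv")  # hypothetical file
    paired = scores.dropna(subset=["pretest_score", "posttest_score"])

    gain = paired["posttest_score"] - paired["pretest_score"]
    t_stat, p_value = stats.ttest_rel(paired["posttest_score"],
                                      paired["pretest_score"])

    print(f"n = {len(paired)}")
    print(f"mean gain = {gain.mean():.2f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    if p_value < 0.05:  # assumed alpha level
        print("Knowledge gain is statistically significant.")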

The results of the observations, the pretest-posttest, and the customer satisfaction surveys from each workshop were used to improve successive workshops. The team met before and after each workshop to discuss desired outcomes, lessons learned from previous evaluation efforts, and whether the model needed to be refined. Findings from the pretest-posttest informed the team about strengths and weaknesses in the content and presentation. Customer satisfaction data on each speaker's effectiveness were fed back to the 13 speakers as an opportunity for self-reflection and improvement. The IPM Program Evaluation Model can be summarized in the eight steps below (an illustrative sketch of the steps as a checklist follows the list).

  1. Assess need from community for educational program.

  2. Determine educational goals and objectives for the program.

  3. Invite evaluation expert to join team to assess educational context, inputs, processes, and products.

  4. Develop program with Extension content experts and stakeholders.

  5. Develop measures and instruments for documenting outcomes (observations, survey, pretest-posttest, customer satisfaction data, etc.).

  6. Incorporate the administration of measures into the workshop program.

  7. Analyze data and implement program improvement based on data.

  8. Share lessons learned with interested audiences.
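
The eight steps above lend themselves to a simple tracking aid. The sketch below (Python; all names hypothetical and not part of the original study) represents the model as an ordered checklist so a team can record which steps have been completed for a given workshop cycle.

    # Illustrative sketch only: the IPM Program Evaluation Model steps
    # tracked as an ordered checklist for one workshop cycle.

    from dataclasses import dataclass, field

    MODEL_STEPS = [
        "Assess need from community for educational program",
        "Determine educational goals and objectives for the program",
        "Invite evaluation expert to join team",
        "Develop program with Extension content experts and stakeholders",
        "Develop measures and instruments for documenting outcomes",
        "Incorporate administration of measures into the workshop program",
        "Analyze data and implement program improvement based on data",
        "Share lessons learned with interested audiences",
    ]

    @dataclass
    class WorkshopCycle:
        year: int
        completed: set = field(default_factory=set)

        def complete(self, step_number: int) -> None:
            """Mark a step (1-8) as done for this cycle."""
            self.completed.add(step_number)

        def remaining(self) -> list:
            """Return the descriptions of steps not yet completed."""
            return [step for i, step in enumerate(MODEL_STEPS, start=1)
                    if i not in self.completed]

    cycle = WorkshopCycle(year=2003)
    cycle.complete(1)
    cycle.complete(2)
    print(cycle.remaining())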

Evaluation Outcomes

In all three workshops (2002, 2003, 2004), participants were asked to put their names on the surveys so that a matched pairs t-test analysis could be run. A research assistant graded the tests, and an entomologist confirmed questionable responses. Significant gains in knowledge were documented using this procedure.

Workshop Results

In 2002, 38 participants took both the pre- and the posttest. Table 1 reports the descriptive statistics for the population.

Table 1.
Pretest-Posttest Descriptive Statistics for the 2002 Workshop

                         Pretest Score    Posttest Score
  Valid (n)                   49                38
  Mean                        9.88             21.95
  Standard Deviation          9.28              8.61

According to the results of the pre- and posttest, participants made significant knowledge gains on 22 of the 27 tested variables (81%), indicating that the presenters were effective in communicating the majority of the intended content. Although participants significantly increased their knowledge, observational findings indicated considerable room for improvement in teaching adult learners about IPM practices. The Extension team set about improving the instructional design to deliver the content more effectively and coached presenters in effective speaking techniques. The team also worked to improve the comfort of participants during the long walking tour.
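
The article reports significance concept by concept (22 of 27 items in 2002) but does not detail the per-item procedure. One plausible way to run such per-concept comparisons, assuming each item is scored 0/1 and stored in hypothetically named columns, is sketched below using the same paired-test logic; it is an illustration, not the authors' analysis.

    # Sketch of per-item (per-concept) pre/post comparisons, assuming items
    # are scored 0 or 1 and columns are named pre_item1..pre_itemK and
    # post_item1..post_itemK (hypothetical layout and file name).

    import pandas as pd
    from scipy import stats

    data = pd.read_csv("ipm_2002_item_scores.csv")
    n_items = 27  # the 2002 instrument had 27 items

    significant_gains = []
    for i in range(1, n_items + 1):
        pre = data[f"pre_item{i}"]
        post = data[f"post_item{i}"]
        t_stat, p_value = stats.ttest_rel(post, pre)
        if p_value < 0.05 and post.mean() > pre.mean():  # gain, not loss
            significant_gains.append(i)

    print(f"Items with significant gains: {len(significant_gains)} of {n_items}")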

In 2003, 39 participants took both the pre- and the posttest. Table 2 details the descriptive statistics for the population. Significant gains in knowledge were documented for 7 of the 17 (41%) concepts. The remaining 10 (59%) concepts were already known by the participants.

Table 2.
Pretest-Posttest Descriptive Statistics for the 2003 Workshop

                         Pretest Score    Posttest Score
  Valid (n)                   39                39
  Mean                       12.1              15.7
  Standard Deviation          4.5               3.4

In 2004, 51 participants took both the pre- and the posttest. Table 3 details the descriptive statistics for the total population. The workshop was effective in communicating 19 of 32 (60%) new IPM principles to participants. The remaining 13 (40%) concepts were already known by the participants.

Table 3.
Pretest-Posttest Descriptive Statistics for the 2004 Workshop

                         Pretest Score    Posttest Score
  Valid (n)                   51                51
  Mean                       18.20             23.33
  Standard Deviation          5.95              5.13

One possible explanation for the decrease in knowledge gained, from 81% in 2002 to 41% in 2003 and 59% in 2004, is that many experienced professionals attended the 2003 and 2004 workshops in part to earn continuing education units; many of them may have attended the 2002 workshop as well.

Observational Findings for All Workshops

Observational findings noted that the workshop presentation team had adjusted and improved their performance as a result of previous formative evaluations. Specifically:

  1. Bottled water was passed out to participants at the start of the walking tour.

  2. Water ecology was added to the workshop in 2004, where significant learning occurred as documented by the pre- and posttest.

  3. A termite station was added to the workshop in 2004, where significant learning occurred as documented by the pre- and posttest.

  4. In 2003 and 2004 the group was divided into two, reducing the number of people per group by one-half.

  5. In 2003 and 2004, speakers used cue phrases during oral presentations to reinforce learning, such as telling participants which plants to buy, restating important points, and following up on points made at each stop.

  6. Speakers were clear and positive with the audience.

  7. An improved sound system was used.

Discussion

An "overwhelming lack of attention to project evaluation" (Shepard, 2002) can be avoided with the effective use of stakeholder-centered evaluation practice. Shepard reported that project directors had "no plans to address evaluation" and that project evaluation seemed to be "reactive, using neither basic evaluation planning nor formative research techniques" (2002). Chapman-Novakofski et al. (1997) reported that Extension staff "found few rewards for conducting evaluations." Implementing and improving evaluation requires awareness among Extension professionals that evaluation practice can be used to learn within organizations (Preskill & Torres, 1999) and that, subsequently, they can offer more effective programs to create more satisfied clients.

The IPM Program Evaluation Model was highly effective in directing program improvement because the Extension professionals were engaged in the evaluation process from start to finish. They valued evaluation findings and incorporated them into subsequent activities. They were engaged stakeholders, a critical component of successful evaluation (Bryk, 1983). Inviting an external evaluator to join the project early in the process was essential for building trust within the team. Team members trusted the evaluation process because they were co-creators of the processes and products of evaluation (Kelsey & Pense, 2001; Patton, 1997).

Having solid evidence from a mixed-method approach allowed the team to improve the instructional design of the workshops and to add educationally valuable content. As Brown and Kiernan (1998) reported, "combining quantitative and qualitative measures within the model framework led to a more rigorous examination of acceptance and impact of a pilot educational program." Over time, the workshop evolved to better meet the needs of clients.

Finally, the Extension specialists gained confidence in their own evaluation skills by working with an external evaluator. That experience will translate into a lifetime practice of incorporating evaluation methods into future activities.

References

Altschuld, J. W., & Zheng, H. Y. (1995). Assessing the effectiveness of research organizations: An examination of multiple approaches. Evaluation Review, 19(2), 197-216.

Bailey, S. J., & Deen, M. Y. (2002). A framework for introducing program evaluation to Extension faculty and staff. Journal of Extension, [On-line], 40(2), Available at: http://www.joe.org/joe/2002april/iw1.html

Brown, J. L., & Kiernan, N. E. (1998). A model for integrating program development and evaluation. Journal of Extension, [On-line], 36(3), Available at: http://www.joe.org/joe/1998june/rb5.html

Bryk, A. S. (Ed.). (1983). Stakeholder-based evaluation. San Francisco, CA: Jossey-Bass, Inc.

Chapman-Novakofski, K., Boeckner, L. S., Canton, R., Clark, C. D., Keim, K., Britten, P., & McClelland, J. (1997). Evaluating evaluation: What we've learned. Journal of Extension, [On-line], 35(1), Available at: http://www.joe.org/joe/1997february/rb2.html

Chapman-Novakofski, K., DeBruine, V., Derrick, B., Karduck, J., Todd, J., & Todd, S. (2004). Using evaluation to guide program content: Diabetes education. Journal of Extension, [On-line], 42(3), Available at: http://www.joe.org/joe/2004june/iw1.shtml

Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches (2nd ed.). Thousand Oaks, CA: Sage Publications.

Garst, B. A., & Bruce, F. A. (2003). Identifying 4-H camping outcomes using a standardized evaluation process across multiple 4-H educational centers. Journal of Extension, [On-line], 41(3), Available at: http://www.joe.org/joe/2003june/rb2.shtml

Kelsey, K. D., & Pense, S. L. (2001). A model for gathering stakeholder input for setting research priorities at the land-grant university. Journal of Agricultural Education, 42(2), 18-27.

Mark, M. M., Henry, G. T., & Julnes, G. (2000). Evaluation: An integrated framework for understanding, guiding, and improving policies and programs. San Francisco: Jossey-Bass.

Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). London: Sage Publications.

Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). London: Sage.

Preskill, H., & Torres, R. T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks: Sage Publications.

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Thousand Oaks, CA: Sage Publications.

Shepard, R. (2002). Evaluating Extension-based water resource outreach programs: Are we meeting the challenge? Journal of Extension, [On-line], 40(1), Available at: http://www.joe.org/joe/2002february/a3.html

Stufflebeam, D. L. (1973). Toward a science of educational evaluation. Englewood Cliffs, NJ: Educational Technology Publications.