December 2005 // Volume 43 // Number 6 // Feature Articles // 6FEA4


A Multipurpose Evaluation Strategy for Master Gardener Training Programs

Abstract
A multipurpose evaluation, developed to measure the impact of Master Gardener training in Pennsylvania, quantitatively measured both learning and gains in confidence using data collected before and after the training. The authors demonstrate how the same data can be summarized in different ways to better achieve program improvement or demonstrate accountability. The evaluation compiled feedback from a 16-county area, including a majority of trainees in the state. This uniform evaluation strategy eliminated duplication of effort by county educators and provided a high-quality tool for the state, and it serves as a model for evaluating multi-topic programs taught by many instructors.


Emelie Swackhamer
Horticulture Extension Educator
Lehigh and Northampton Counties Cooperative Extension
Allentown, Pennsylvania
exs33@psu.edu

Nancy Ellen Kiernan
Program Evaluation Specialist
Penn State Cooperative Extension
University Park, Pennsylvania
nekiernan@psu.edu


Introduction and Problem

Master Gardener volunteers greatly extend the educational reach of Extension staff. Interest in developing and maintaining effective Master Gardener programs continues across the country (Bobbitt, 1997; Finch, 1997; Mechling & Schumacher, 2001; Ruppert, Bradshaw, & Stewart, 1997).

Training volunteers requires time and money (Meyer & Hanchek, 1997; Ruppert, Bradshaw, & Stewart, 1997). To get a good return on this investment, Extension should engage trained Master Gardeners in many volunteer activities for many years. Retention of Master Gardener volunteers seems to increase as their level of experience and familiarity with Cooperative Extension programs increases (Ruppert, Bradshaw, & Stewart, 1997). A Master Gardener who is comfortable and confident doing the volunteer work that is needed is more likely to remain active in the program and gain experience and familiarity. Thus, training for Master Gardeners should increase their knowledge on a variety of topics. The training must also impart the confidence needed to field home gardening questions from the public.

State-level reports of Master Gardener programs tend to include anecdotal impressions of the benefits of the volunteer experience, the number of hours of volunteer time contributed, and the contacts made (Boyer, Waliczek, & Zajicek, 2002; Kirsch & VanDerZanden, 2002; Mechling & Schumacher, 2001; Schrock, Meyer, Ascher, & Snyder, 2000). These data are not useful for program improvement because they do not identify the sources of the program's strengths and weaknesses.

Some Master Gardener training programs have used opinion surveys to provide qualitative data on how the process is perceived and pre- and post-tests to measure knowledge change (Jeannette & Meyer, 2002; Ruppert, Bradshaw, & Stewart, 1997; Stack, 1997; VanDerZanden, 2001; VanDerZanden & Hilgert, 2002; VanDerZanden, Rost, & Eckel, 2002). These measures, however, do not provide quantitative data on the important issue of how confident the volunteers are in applying what they have learned through their training.

Another difficulty in evaluating Master Gardener training arises because, in most states, Master Gardener programs are county based, resulting in a great deal of variation in how the training is conducted and how the program is administered. This mosaic of programs often leads to local approaches to evaluation, in which one or a few counties develop their own evaluation and gather data unique to their situation (Finch, 1997; Ruppert, Bradshaw, & Stewart, 1997; Warmund & Schrock, 1999).

The information gathered from these evaluations is useful for program improvement and accountability on a local level, but due to variations in evaluations among counties, the information is not useful for showing success of the training across a wider region or at the state level. There has been little emphasis on trying to capture an expanded view of the impact that Extension has on Master Gardener trainees, but there is great potential to show success.

Developing local evaluations also results in duplication of effort by county program coordinators across the state. Often, the staff who design these evaluations have limited experience in evaluation, which can result in low-quality designs.

Penn State requires each new Master Gardener to attend training on eight core topics. The overall success of the core training could be documented by compiling uniform evaluation data.

Program Description

The training classes for new Master Gardeners in the Southeast and Capital Regions of Pennsylvania are organized as a circuit. This includes 16 counties and more than half of the Master Gardeners in the state. Generally, one instructor teaches the same topic at many training sites. This circuit approach is efficient because it reduces duplication of effort on curriculum development and allows instructors to concentrate on improving the one topic they teach. It also allows for consistency of content of classes across counties and for evaluation data to be summarized from all sites.

In 2002, there were eight sites on the training circuit. The 12 topics taught were: soils, botany, turf, plant propagation, plant disease, entomology, plant identification, vegetables, herbaceous plants, integrated pest management and pesticides (IPM), ornamentals, and diagnosing plant problems.

Recognizing the need for quantitative impact data that could be used for multiple purposes, the authors developed a uniform evaluation strategy. To achieve this, they considered how to evaluate, what to evaluate, and how to summarize the data for different uses. This article describes how to use this evaluation to quantify increase in learning and confidence. The evaluation takes into consideration the fact that the volunteers come to the program with different levels of confidence. The article demonstrates how the same data can be summarized in different ways to use for program improvement or accountability reports.

The evaluation for the training classes that is described in this article is the first component of a three-part evaluation plan for the Master Gardener Program in Pennsylvania (Swackhamer & Kiernan, 2002 <http://www.extension.psu.edu/evaluation/pdf-others/MGTrainingINTRO.pdf>). The second component of this plan is a post-training class evaluation, which measures intentions to use new gardening practices and knowledge about the Master Gardener program. The third component is an evaluation for Master Gardeners who have been active in the program for one or more years, and it measures the long-term impact of the training and their involvement in the program combined.

Evaluation of the Training Classes

HOW to Evaluate the Program

The training evaluation survey has several important features.

First, the survey is formatted as a tri-fold brochure, giving it an aesthetically pleasing layout and a professional appearance. The professionalism of the evaluation survey is important to encourage participants to take it seriously and to fill it out faithfully after each class (Dillman, 2000). It also imparts a general image of professionalism and organization to Extension.

Second, because the survey is handed out at the beginning of the 12-class training period, the evaluation can be completed over time. This allows participants to evaluate each class when they attend, whenever it is scheduled in their county. Participants respond immediately after each class takes place, which gives a more valid assessment than would be obtained if participants were asked to recall their impressions of individual classes at the end of the 12-week session.

Third, the questions in this evaluation collect data that are useful for many purposes. The questions provide insight into the effectiveness of the instructor for each topic, allowing for program improvement. The questions also provide data to examine the success of the training in one or more counties, or even the whole state.

WHAT to Evaluate

In the first question the trainees indicate how much they learned in each class, i.e., "nothing new," "some new knowledge," "a lot," or "a great deal" (Figure 1). The structure of this question allows instructors to see how effectively they taught their topic at each site, at all sites, and in comparison to other instructors.

Figure 1.
Measure of Learning

Evaluation of training topics presented including soils, botany, turf, propagation and plant disease.

In the second question the trainees indicate, on a five-point scale, how confident they are in fielding questions on each topic before attending the class and again after attending it. The answer categories are "not too confident," "somewhat confident," "moderately confident," "very confident," and "extremely confident" (Figure 2). The structure of this question allows the instructor to calculate a quantitative change in confidence imparted by their teaching. Demonstrating change using before and after measures contributes to the validity of the data and to the educators' documentation of their effectiveness (Rossi & Freeman, 1982).

Figure 2.
Measure of Confidence in Ability to Field Questions

Measure of confidence in ability to field questions both before attending class and after attending class.

The structure of this question allows a second benefit, comparison of trainees who come into the program with different levels of prior experience and confidence.
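To make the before/after calculation concrete, the five answer categories can be coded 1 through 5 and the change computed as a simple difference. The following is a minimal Python sketch of that coding; the function name and data handling are illustrative assumptions, not part of the published instrument.

    # Code the five answer categories from Question Two as levels 1-5.
    CONFIDENCE_SCALE = {
        "not too confident": 1,
        "somewhat confident": 2,
        "moderately confident": 3,
        "very confident": 4,
        "extremely confident": 5,
    }

    def confidence_change(before, after):
        """Number of levels a trainee's confidence changed for one topic."""
        return CONFIDENCE_SCALE[after] - CONFIDENCE_SCALE[before]

    # A trainee who was "not too confident" before a class and
    # "moderately confident" after it increased by two levels.
    print(confidence_change("not too confident", "moderately confident"))  # prints 2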

In the third question (not shown) trainees provide comments about the class in each topic. The question provides a forum for qualitative feedback, useful for program improvement.

HOW to Summarize the Data

The data from each question can be summarized from two perspectives, and each type of summary serves a different evaluation purpose for Extension. Summarizing topic by topic gives information useful for program improvement. Summarizing across all topics gives information more useful for accountability, such as reporting to stakeholders and funding sources. Once the data are entered, both summaries can be produced with Excel or similar spreadsheet software available on most computers.
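As an illustration, both summaries can also be reproduced with a few lines of scripting in place of Excel. The sketch below uses Python with the pandas library; the file name, the column layout (one row per trainee, one column per topic), and the response strings for Question One are assumptions made for the example.

    import pandas as pd

    # Assumed layout: one row per trainee, one column per topic holding the
    # Question One response ("nothing new", "some new knowledge",
    # "a lot", or "a great deal").
    TOPICS = ["soils", "botany", "turf", "propagation", "plant disease",
              "entomology", "plant identification", "vegetables",
              "herbaceous plants", "IPM", "ornamentals",
              "diagnosing plant problems"]
    HIGH_LEARNING = ["a lot", "a great deal"]

    df = pd.read_csv("mg_training_2002.csv")  # hypothetical file name

    # Topic-by-topic summary (program improvement): the percent of trainees
    # reporting high learning in each topic.
    topic_summary = {t: df[t].isin(HIGH_LEARNING).mean() * 100 for t in TOPICS}

    # Across-topics summary (accountability): for each trainee, the number of
    # topics in which he or she reported learning "a lot" or "a great deal".
    topics_per_trainee = df[TOPICS].isin(HIGH_LEARNING).sum(axis=1)
    pct_seven_or_more = (topics_per_trainee >= 7).mean() * 100

    print(topic_summary)
    print(f"{pct_seven_or_more:.0f}% reported high learning in seven or more topics")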

Results

In the fall of 2002, 178 new Master Gardeners from 12 counties participated in 12 classes of training and contributed to the evaluation of each class they attended. The counties included metropolitan, suburban, and rural areas.

Question One: Measure of Learning

Topic-by-Topic Results

The percentage of Master Gardeners who learned "a lot" or "a great deal" about 10 of the 12 topics was substantial (Figure 3). Those topics were: soils, botany, turf, propagation, plant disease, entomology, plant identification, IPM, ornamentals, and diagnosing plant problems. The percentage who learned "a lot" or "a great deal" in the two remaining topics, vegetables and herbaceous plants, was somewhat lower.

Figure 3.
Learning in 12 Topics (N = 123-177)

Figure of percent that learned 'a lot' or 'a great deal' in different subject matters.

Across Topics Results

Summarizing the data not for each topic as above but for individuals across topics, the results indicate that the majority of the Master Gardeners (77%) learned "a lot" or "a great deal" about seven or more of the topics (Figure 4). The remainder of the Master Gardeners (23%) had a similar gain of knowledge in up to six topics.

Figure 4.
Number of Topics in Which Individuals Increased Learning (N = 128-177)

Number of topics in which individuals increased learning.

Question Two: Measure of Confidence in Ability to Field Questions

Topic-by-Topic Results

The results indicate that in all 12 topics, the Master Gardeners increased their confidence in their ability to field questions, moving to "moderately," "very," or "extremely" confident from before the training to after it (Figure 5). A greater percentage of Master Gardeners increased their confidence in fielding questions on soils, botany, turf, propagation, plant disease, entomology, plant identification, IPM, ornamentals, and diagnosing plant problems; a smaller percentage increased their confidence in vegetables and herbaceous plants.

Figure 5.
Confidence in Ability to Field Questions in 12 Topics Before & After Class (N = 128-177)

Comparison of confidence in ability to field questions before and after taking the class.

Tabulating the number of levels by which trainees increased their confidence on the five-point scale shows, for example, that in plant disease 41% increased by two or more levels and 52% increased by one level (Figure 6). A similar analysis is possible for each topic.

Figure 6.
Increase in Confidence to Field Questions on Plant Disease (N = 172)

Pie chart showing 3 levels of those with increased confidence to field questions on plant disease.
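A scripted version of this tabulation is sketched below, continuing the hypothetical data layout used earlier; the column names for the plant disease before and after responses are assumptions for illustration.

    import pandas as pd

    # The same 1-5 coding of the confidence categories as in the earlier sketch.
    scale = {"not too confident": 1, "somewhat confident": 2,
             "moderately confident": 3, "very confident": 4,
             "extremely confident": 5}

    df = pd.read_csv("mg_training_2002.csv")  # hypothetical file name
    # Assumed column names for the plant disease before/after responses.
    change = (df["plant_disease_after"].map(scale)
              - df["plant_disease_before"].map(scale))

    # Share of trainees who increased by one level vs. two or more levels
    # (Figure 6 reports 52% and 41%, respectively, for plant disease).
    pct_one_level = (change == 1).mean() * 100
    pct_two_plus = (change >= 2).mean() * 100
    print(f"one level: {pct_one_level:.0f}%; two or more: {pct_two_plus:.0f}%")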

Across Topics Results

Summarizing the data not for each topic as above but for individuals across topics, the results indicate that 89% of the Master Gardeners increased their confidence in their ability to field questions in seven or more topics (Figure 7). The rest increased their confidence in up to six topics.

Figure 7.
Number of Topics in Which Individuals Increased Confidence in Ability to Field Questions (N = 128-177)

Pie chart to show number of topics in which confidence to field questions increased.

Implications of WHAT Is Asked

Question One: Measure of Learning

  • Shows if participants felt they had acquired substantial knowledge through the training period.

  • Allows individual instructors the opportunity to see how effective their class was in comparison to other instructors, contributing to the overall high standards of the training program.

  • Identifies topics that may not be critical to include in future training programs.

Question Two: Confidence in Ability to Field Questions

  • Documents a valid level of impact by measuring participants' confidence before and after the training program.

  • Allows instructors to compile impact data on participants who come into the program with different levels of confidence. For example, consider two participants: one was "not too confident" before the class and "moderately confident" after it; another was "moderately confident" before and "extremely confident" after. Both increased their confidence by two levels, so the instructor had a similar impact on each.

Implications of HOW Results Are Summarized

Topic-by-Topic Results

  • Are suited for instructors and coordinators of the county-based program.

  • Provide data about the impact of the class on each topic.

  • Are most useful for future program improvement.

  • Allow for comparison of the classes, helping coordinators to identify topics which may need improvement.

  • May take longer because each topic must be summarized and presented.

Across Topics Results

  • Give a more succinct picture of the whole program.

  • Are suited for reporting to stakeholders and funding sources.

  • Lend themselves to simple pie charts for stakeholders or funders who may not be interested in all the details of the first type of summary.

  • Draw on the same data already entered into the computer for the topic-by-topic results.

Discussion

Evaluation of training programs for Master Gardeners should take into account the county-based nature of these programs and the different experience levels of the volunteers. Evaluation efforts should also lend themselves to multiple uses because, while instructors need information to improve programs, stakeholders require information to see the value of their investment. Information obtained by this evaluation also gives coordinators a unique opportunity to consider the trainees' perception of their confidence, which can influence the coordinators' goal of long-term retention of volunteers.

The results of this study suggest that the training in Pennsylvania was very successful. Trainees learned a substantial amount about most topics, and by attending the classes they gained confidence in applying what they learned in all topics.

Program coordinators can use information from both questions together to decide whether classes with less impact are worth the resources and staff time required to continue offering them. The value of each piece of information to a program coordinator therefore increases when the two are used together.

In this study, the trainees gained the least amount of knowledge in the vegetables and herbaceous plants classes, yet they indicated the highest levels of confidence in fielding questions on those topics before the classes. This suggests most trainees came to the program with a high level of prior knowledge and confidence on these topics.

The results also suggest that further training or experiences may be needed to bring trainees' confidence in their ability to field questions on certain topics (e.g., diagnosing plant problems) up to par with their knowledge.

Conclusion

The description of this evaluation for training Master Gardeners fills a gap in the literature. It provides a model for evaluating a multi-topic program taught by many instructors in various regions of a state, and it outlines the benefits of such an approach. The evaluation strategy considers how to evaluate, what to evaluate, and how to summarize the data so that the results are useful both for improving the program and for reporting succinctly to stakeholders. Designing a user-friendly evaluation increases response rates. Using multiple questions, gathering before-and-after data, and quantifying confidence in applying knowledge allow impact to be captured at a higher level. Summarizing the data in multiple ways expands their use.

In a university Extension system, evaluation data compiled from several uniform training programs are more useful for reporting the successes of statewide programs to stakeholders than many disparate local evaluation results. Using a uniform evaluation strategy also reduces staff time and allows evaluation specialists to help in the development, resulting in a more scientific evaluation design.

This evaluation strategy is currently being used in many counties in Pennsylvania. The evaluation results presented from the two-region area captured data from the majority of new volunteers trained in the state in 2002. Results have been used to assess the effectiveness of Master Gardener training and have resulted in changes in instructors, addition and elimination of topics, and changes in how the topics were taught. Results were also used to report to County Commissioners, administrators, and other stakeholders across both regions.

Acknowledgements

The authors would like to thank all the Extension educators and Master Gardener Coordinators in the Southeast and Capital Regions of Pennsylvania who participated in this project.

References

Bobbitt, V. (1997). The Washington State University Master Gardener program: Cultivating plants, people and communities for 25 years. HortTechnology, 7(4), 345-347.

Boyer, R., Waliczek, T. M., & Zajicek, J. M. (2002). The Master Gardener program: Do benefits of the program go beyond improving the horticultural knowledge of the participants? HortTechnology, 12(3), 432-436.

Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York: John Wiley & Sons.

Finch, C. R. (1997). Profile of an active Master Gardener chapter. HortTechnology, 7(4), 371-376.

Jeannette, K. J., & Meyer, M. H. (2002). Online learning equals traditional classroom training for Master Gardeners. HortTechnology, 12(1), 148-156.

Kirsch, E., & VanDerZanden, A. M. (2002). Demographics and volunteer experiences of Oregon Master Gardeners. HortTechnology, 12(3), 505-508.

Mechling, M., & Schumacher, S. (2001). Multi-county approach to Master Gardener program in rural areas yields results. Journal of Extension [On-line], 39(4). Available at: http://www.joe.org/joe/2001august/iw3.html

Meyer, M. H., & Hanchek, A. M. (1997). Master Gardener training costs and payback in volunteer hours. HortTechnology, 7(4), 368-370.

Rossi, P. H., & Freeman, H. E. (1982). Evaluation: A systematic approach (2nd ed.). Beverly Hills, CA: Sage Publications.

Rost, B. (2000). Interaction analyzed in traditional and satellite-delivered Extension educational presentations. Journal of Extension [On-line], 38(1). Available at: http://www.joe.org/joe/2000february/rb3.html

Ruppert, K. C., Bradshaw, J., & Stewart, A. Z. (1997). The Florida Master Gardener program: History, use and trends. HortTechnology, 7(4), 348-353.

Schrock, D. S., Meyer, M., Ascher, P., & Snyder, M. (2000). Benefits and values of the Master Gardener program. Journal of Extension [On-line], 38(1). Available at: http://www.joe.org/joe/2000february/rb2.html

Schrock, D. S., Meyer, M., Ascher, P., & Snyder, M. (2000). Reasons for becoming involved as a Master Gardener. HortTechnology, 10(3), 626-630.

Stack, L. B. (1997). Interactive television delivers Master Gardener training effectively. HortTechnology, 7(4), 357-359.

Swackhamer, E., & Kiernan, N. E. (2002, July). Measuring the impact of Master Gardener programs--A strategy of three evaluation tools. Poster session presented at the annual meeting for the National Association of County Agricultural Agents, Savannah, GA.

VanDerZanden, A. M., Rost, B., & Eckel, R. (2002). Basic botany on line: A training tool for the Master Gardener program. Journal of Extension [On-line], 40(5). Available at: http://www.joe.org/joe/2002october/rb3.shtml

VanDerZanden, A. M., & Hilgert, C. (2002). Evaluating online training modules in the Oregon Master Gardener program. HortTechnology, 12(2), 297-299.

VanDerZanden, A. M. (2001). Ripple effect training: Multiplying Extension's resources with veteran Master Gardeners as Master Gardener trainers. Journal of Extension [On-line], 39(3). Available at: http://www.joe.org/joe/2001june/rb1.html

Warmund, M. R., & Schrock, D. (1999). Clientele perceptions of Master Gardener training delivered via interactive television versus face to face. HortTechnology, 9(1), 116-121.